Original photo by Anton Iakovenko/ Shutterstock

Women have contributed to almost every facet of life, from sports and science to art and politics. While some female role models are starting to get more recognition, we still tend to gloss over history’s supporting female characters — women who broke their own glass ceilings while serving others, smashing records, and pursuing personal passions. Here are a few stories you may have missed in history class.

Close-up of a stack of opened mailing letters.
Credit: sue hughes/ Unsplash

First Known Female Postmaster in Colonial America

Mary Katharine Goddard was among the first female publishers in the U.S., a socially precarious venture for a colonial woman during the country’s fight for independence. Working with her mother, Sarah, and brother, William, Mary Katharine founded multiple publications starting in the 1760s. William frequently traveled between cities to establish new papers, leaving the bulk of news collecting and printing to his sister. In 1774, he appointed Mary Katharine to run The Maryland Journal while he focused on other pursuits (such as lobbying for a national postal service) and served time in debtor’s prison. During the height of the Revolutionary War, Mary Katharine made a name for herself with fiery anti-British editorials. In 1775, she was appointed Baltimore’s first postmaster — likely the first woman to hold such a position in colonial America — and in 1777, Congress commissioned her to print copies of the Declaration of Independence. (Surviving copies feature her printer’s mark at the bottom.) Despite her success, however, Mary Katharine was pushed out of both roles at the war’s end. In 1784, William rescinded her title as publisher, creating a lifelong rift between the siblings. Not long after, she was also removed from her postmaster job on the basis of sex. She wrote to George Washington asking to be reinstated, but the President passed her complaint to the postmaster general, who left her plea unanswered.

A close up view of the Congressional Medal of Honor.
Credit: The Washington Post via Getty Images

First Female Surgeon and Only Female Congressional Medal of Honor Recipient

Dr. Mary Edwards Walker was the second U.S. woman to receive a medical degree (following Dr. Elizabeth Blackwell), but she became known as the country’s first female surgeon. Following her medical school graduation in 1855, Walker went into practice with her husband and fellow doctor Albert Miller, though the Civil War would change the course of her career. Despite having a medical degree, Walker was denied work as a military surgeon in the Union Army because she was a woman. Instead, she volunteered at field hospitals in Washington, D.C., and Virginia until 1863, when Tennessee accepted her medical credentials and designated her as the Army’s first female surgeon. Walker’s proximity to battlefields put her at risk — in 1864, she was captured by Confederate troops and spent four months at the notoriously brutal Castle Thunder prison, where she suffered injuries that plagued her for the rest of her life. At the war’s end, Walker was awarded the Congressional Medal of Honor for Meritorious Service, an honor that Congress revoked in 1917 on the grounds that her medical work was not directly on the front lines. She refused to return the award for the remaining two years of her life and was posthumously re-awarded the medal in 1977. More than 100 years later, Walker remains the only woman to receive the Congressional Medal of Honor.

A look at a camera being used for television broadcasting.
Credit: Skreidzeleu/ Shutterstock

First Female TV Game Show Host

Actress Arlene Francis found her footing in entertainment as a radio host, but it was a TV first that catapulted her career to new heights. In 1949, Francis became the first woman to host a television game show in the United States. On Blind Date, a show Francis originally hosted over radio airwaves, male contestants competed for an all-expenses-paid outing with women hidden behind a wall; the catch was that they couldn’t see their prospective dates and had to answer a litany of questions in hopes of being picked. Francis hosted the show for three years before moving on to films and Broadway stages, but her best-known role was a 25-year stint as a panelist on What’s My Line?, another TV game show.

A woman walking up the stairs of the Supreme Court.
Credit: Anton Iakovenko/ Shutterstock

First Native American Woman to Argue a Supreme Court Case

Lyda Conley devoted herself to preserving her ancestors’ legacy — specifically, their final resting place. Conley acted as a staunch (and armed) defender of the Wyandot National Burying Ground, a Kansas cemetery at risk of sale and destruction some 60 years after its creation. The cemetery was established in 1843 following typhoid and measles outbreaks that took hundreds of Wyandot lives; the loss was a particular blow to an Indigenous community that had been forcibly relocated under broken treaties with the U.S. government and the cruel Indian Removal Act of 1830. In 1890, Kansas senators introduced legislation to sell the burial ground; although it failed, the effort encouraged Lyda Conley to attend law school to defend the very cemetery in which her own parents, siblings, and grandparents were interred. Conley was admitted to the Missouri Bar in 1902, and within four years put her legal skills to work as the federal government moved to sell the cemetery. Conley and her sister Lena began a legal and physical siege for its protection, building an armed watch station called Fort Conley on the grounds and warning, “woe be to the man that first attempts to steal a body.” In 1910, her legal fight made its way to the U.S. Supreme Court, where she became the first Native American woman (and third woman ever) to argue a case before the justices. While the court ruled against her, years of media coverage about the cemetery worked in her favor. In 1913, the Kansas Senate passed legislation protecting the cemetery, which was designated a National Historic Landmark in 2017.

View of a plane flying under the clouds in the sky.
Credit: Kevin Woblick/ Unsplash

First Woman to Break the Sound Barrier

Aviator Jacqueline Cochran set more than 73 flight records during her lifetime, most for altitude and speed. In 1953, she also snagged the title of first woman to break the sound barrier. Her success was a product of her determination, which may have been honed during a difficult childhood; raised in Florida by a family of modest means, Cochran began working in a cotton mill at just six years old. At 10, she struck out on her own, working in salons for several years before launching her own cosmetics company in the mid-1930s. Around this time, in 1932, Cochran pursued her pilot’s license with the goal of more easily reaching cross-country clients and business partners. Instead, she found a new passion that led her to compete in air racing, where she began setting speed records. When World War II started a few years later, she shifted her focus again to find ways to put her talents to practical use. In 1941, Cochran recruited two dozen female pilots for the Air Transport Auxiliary, a British World War II program that utilized civilian pilots to transport military planes. That same year, she became the first woman to fly a bomber across the Atlantic Ocean. And by late 1943, she was commander of the Women Airforce Service Pilots. Cochran continued flying after the war, with a renewed focus on speed; her reputation gained her access to military jets that helped her break records — including the sound barrier feat.

Red running track in The National Stadium of Thailand.
Credit: Pipop_Boosarakumwadi/ iStock

First American Woman to Win Three Track-and-Field Olympic Gold Medals in a Single Year

No one would have guessed that Wilma Rudolph would be known as the fastest woman in the world by age 20 — most doctors believed she’d never even walk as an adult. After contracting scarlet fever, pneumonia, and polio when she was young, Rudolph lost much of her mobility, then slowly recovered with the help of leg braces she wore for several years. By the time she was nine years old, the determined future athlete had regained her ability to walk and began playing basketball; in high school, she was scouted by coaches for her speed on the court. One of those coaches invited Rudolph to train at Tennessee State University, where she refined her high-speed sprinting skills. She and her track teammates made two trips to the Olympics — first in 1956, when she was still in high school, and again in 1960. It was at the 1960 Games in Rome that Rudolph claimed three gold medals in track-and-field: one each in the 100-meter and 200-meter races, and another in the 4×100-meter relay. She became the first U.S. woman to do so at a single Olympics, simultaneously breaking three world records for speed. Rudolph retired from sports two years later but took up coaching and became a goodwill ambassador to French West Africa. Her Olympic achievements helped pave the way for the Black female athletes who would eventually break her records.

Nicole Garner Meeker
Writer

Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.

Original photo by Bychykhin_Olexandr/ iStock via Getty Images Plus

The struggle for workers’ rights in the U.S. is a fight that’s existed since the nation’s founding. The first Monday in September is a celebration of and memorial to the many workers, unions, and activists past and present who’ve secured hard-earned wins to help make America a prosperous nation for all its citizens. At the same time, the holiday is also often a nationwide end-of-summer bash. These six facts explore the history, meaning, and myths behind one of the most-loved holidays on the calendar.

New York City, Union Square workingman's demonstration 1882.
Credit: clu/ DigitalVision Vectors via Getty Images

Labor Day Started as a New York City Parade

On the morning of September 5, 1882, some 20,000 union workers marched through lower Manhattan. According to one newspaper report, the crowd was filled with “men on horseback, men wearing regalia, men with society aprons, and men with flags, musical instruments, badges, and all the other paraphernalia of a procession.” The parade celebrated the labor of the city’s union workers, who actually had to sacrifice a day’s pay in order to attend the celebration — but what a celebration it was. After the parade, 25,000 union members and their families filled Wendel’s Elm Park at 92nd Street and Ninth Avenue for a post-parade party, where beer kegs were “mounted in every conceivable place.” On September 5 the next year, New York’s Central Labor Union celebrated its second Labor Day parade, and the late summer holiday became a tradition.

Credit: ilbusca/ DigitalVision Vectors via Getty Images

The Holiday Was First Recognized by Oregon in 1887

Although New York held the first parades and even introduced the first legislation recognizing Labor Day, Oregon was actually the first to officially recognize the holiday, on February 21, 1887 (though the state reserved the first Saturday in June for Labor Day, rather than early September). Within the same year, Colorado, Massachusetts, New Jersey, and New York followed suit. It wasn’t until 1894 that Congress solidified Labor Day as a national holiday — the legislation was signed into law by then-President Grover Cleveland.

International Workers' Day In New York City.
Credit: Visual Studies Workshop/ Archive Photos via Getty Images

Most Countries Don’t Celebrate Workers in September

While reserving the first Monday in September for Labor Day hearkens back to the holiday’s New York origins, the decision was also designed to distract from a more unsavory moment in the history of U.S. labor relations. Most countries around the world actually celebrate unions and workers on May 1, otherwise known as International Workers’ Day. This international holiday has its origins in the U.S., when a clash between Chicago police and workers in 1886 left several dead and dozens injured. Known as the Haymarket Riot, the event went on to inspire International Workers’ Day in 1889. Uneasy about honoring such a bloody moment in U.S. history — especially one that inspired widespread vitriol against labor unions — Congress and President Cleveland opted for a different date entirely.

The US Department of Labor Building.
Credit: ALEX EDELMAN/ AFP via Getty Images

There Was a Labor Day Before There Was a U.S. Department of Labor

Although the 19th century gave birth to what eventually became Labor Day — along with many other important historical moments that defined the worker’s struggle in both America and the world — the U.S. Labor Department wasn’t established until more than three decades after that first parade down New York City’s streets. Although the U.S. did establish the Bureau of Labor Statistics in 1884 and the Department of Commerce and Labor in 1903, the modern U.S. Department of Labor wasn’t created until 1913, when lame duck President William Howard Taft reluctantly signed it into law. Taft had such strong concerns about the bill (he thought it would hinder efficient administration) that he only signed it into law mere hours before his successor, Woodrow Wilson, took office. Today, the Department of Labor oversees labor laws, guarantees workers’ rights, and ensures safe working conditions. In 1933 — two decades after its creation — the department also became the first to be led by a woman, Secretary Frances Perkins (who was later recognized as a saint in the Episcopal Church).

Cars travel outbound for Labor Day weekend.
Credit: Mark Wilson/ Getty Images News via Getty Images

137 Million Americans Traveled for Labor Day in 2022

According to a survey conducted by The Vacationer, 53% of Americans traveled on Labor Day weekend in 2022, which equates to some 137 million people. This figure narrowly surpassed both Memorial Day and the Fourth of July, making it the busiest travel weekend in the U.S. Although the largest share of that travel (about 36%) occurred via car, airports also tend to see a serious uptick in Americans traveling to popular domestic locations, such as New York, Los Angeles, or San Francisco, or even catching international flights to London, Rome, or Tokyo. In 2022, the Transportation Security Administration screened 8.76 million travelers — exceeding pre-pandemic numbers — with Friday marking the busiest day of the weekend.

White blouses on hangers.
Credit: sirtravelalot/ Shutterstock

Yes, You Can Definitely Wear White After Labor Day

The color white has its advantages during the summer, as it’s the best hue (or combination of all hues) to reflect the sun’s rays, but why is there an informal rule forbidding the color after summer is over? The rule has its roots in the 19th century, when upper-class women used it as a way to distinguish themselves. The idea was that white clothes were only appropriate for weddings and resort wear, and because Labor Day stood in as the unofficial end of summer (though the astronomical end occurs weeks later), white shouldn’t be worn after Labor Day. If you’re looking for logic, there isn’t any — it was just an arbitrary rule meant to exclude those who didn’t have well-established fortunes and were less in the know. Somehow, this arbitrary rule survived more than a century, though it isn’t really recognized today — even if that doesn’t stop some people from still mentioning this pernicious piece of class warfare.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by Jack_the_sparow/ Shutterstock

The facts that you take for granted are not always actual facts. Everyday superstitions, often known as old wives’ tales (despite plenty of traditional wise teachings from old wives), shape the way we view the world, and it doesn’t always occur to us to question them. Have you ever considered that gum doesn’t sit in your stomach for seven years, or that coffee doesn’t stunt your growth? Here are the real facts about seven old wives’ tales you may have heard.

Pregnant woman expecting baby girl, holding pink red shoes.
Credit: Maridav/ Shutterstock

You Can’t Tell a Baby’s Sex From the Bump

One of the most pervasive genres of old wives’ tales is sex determination during pregnancy — but you can’t tell a baby’s sex from the outward appearance of a pregnant person’s body.

According to one myth, if the belly is sitting high it’s a girl, and if it’s lower, it’s a boy. In another, extra weight out front means a girl, and extra weight around the hips and bottom means a boy. Both are false, and have more to do with the anatomy of the pregnant person, whether it’s a first pregnancy, and the position of the baby.

A floating breakfast tray in a swimming pool.
Credit: TukangPhotoStock/ Shutterstock

You Can Swim Right After Eating

As a child, you may have been warned to wait a full 30 minutes after eating to jump in a pool or a lake, because not doing so could cause debilitating cramps that would cause you to drown. This is false. The reasoning behind the myth is that your body is using more blood to digest your snacks and doesn’t have enough left over to keep your arms and legs in swimming shape — and while the body does take a little extra blood to aid digestion, it’s not enough to give you more than a very minor cramp.

Aerial view of two kids popping gum bubbles.
Credit: Getty Images via Unsplash+

Swallowed Gum Doesn’t Stay in Your System for Years

Another common childhood myth is that when you swallow gum, it hangs out in your stomach for seven years. In reality, it makes its way through your body pretty quickly, except in extreme circumstances. But it is true that your body can’t digest it, so it comes out in the same form as when you first swallowed it. Too much swallowed gum can lead to intestinal blockages, but you’re not going to want to wait seven years to get that fixed.

Young woman holding a pink cup of coffee.
Credit: Svitlana Hulko/ Shutterstock

Coffee Doesn’t Stunt Your Growth

A long time ago, studies suggested that coffee could cause osteoporosis, a condition that causes bones to lose density, so a myth developed that coffee could stunt your growth. Later studies showed no clear link between coffee drinking and osteoporosis — it’s just that people who drank more coffee tended to not drink as many calcium-rich beverages such as milk, and lack of calcium can contribute to osteoporosis. However, osteoporosis itself doesn’t tend to make people shorter (although it’s associated with bone fractures that can). Nevertheless, the old wives’ tale persisted. If you have growing left to do, just make sure to eat a balanced diet and get all your vitamins and minerals.

Holding tweezers and plucking gray hair from head.
Credit: RJ22/ Shutterstock

Pulling Gray Hairs Doesn’t Cause Two More To Pop Up

There’s a myth that if you pluck out a gray hair, two more will grow back, but it’s not based in fact. Every strand of hair on our bodies grows from its own single hair follicle, and one by one, those follicles eventually stop producing pigment. When that happens, the follicle grows gray or white hair, but it still only grows one single hair. It has no effect on the surrounding follicles — so plucking a gray hair affects only that hair.

You do, however, run the risk of zero hairs growing back from that follicle when you pluck one, so if you want to avoid bald patches, it’s best to leave them on your head.

Close-up of a person cracking their knuckles.
Credit: Andrey_Popov/ Shutterstock

Cracking Knuckles Doesn’t Cause Arthritis

When you “crack” your knuckles, the telltale popping that comes from your joints sounds a little distressing, and maybe irritating, but it’s not as dramatic as it sounds. The noise is caused by nitrogen bubbles popping in your synovial fluid, a substance that lubricates the joints. The bubbles come back in about 20 minutes.

Contrary to popular belief, cracking your knuckles doesn’t cause arthritis or any other long-term damage, but it doesn’t provide much short-term relief either. You might feel a little looser in the joints, but the effect is mostly psychological.

Fake spiders marching across bedding.
Credit: Margaret M Stewart/ Shutterstock

You (Probably) Don’t Swallow Spiders in Your Sleep

You may have heard the urban legend that people swallow around eight spiders in their sleep every year, but you’ll be relieved to know that the number is closer to zero. We’re not especially appealing to crawl on, because even in our sleep we make a lot of noise and movement that spiders would be sensitive to. If a spider did crawl on you, it could wake you up before it became an accidental snack. It’s theoretically possible that a spider could crawl in your mouth while you’re sleeping, but it’s highly unlikely.

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.

Original photo by Eva-Katalin/ iStock

The Eiffel Tower is an iconic symbol of Paris, but over the years its presence has not always been so valued. It may be hard to believe now, but the Eiffel Tower was once met with contempt. Here are six things you might not know about the famed “Iron Lady.”

French engineer Alexandre Gustave Eiffel (1832 - 1923).
Credit: Hulton Archive via Getty Images

The Eiffel Tower Was Not Designed by Gustave Eiffel

Gustave Eiffel did not actually design his namesake tower. While his company was responsible for its construction, it was his two senior engineers, Maurice Koechlin and Emile Nouguier, who dreamed up the famous structure. It was said that Eiffel was not overly supportive of the initial design (drawn by Koechlin). However, after some tweaks and additions made by the French architect Stephen Sauvestre, Eiffel changed his tune, supported the project, and eventually bought the rights to the patented design.

Construction of the Eiffel Tower.
Credit: Roger Viollet via Getty Images

The Parisian Elite Originally Hated the Eiffel Tower

A group of 300 Parisian artists and intellectuals protested the Eiffel Tower during its construction, even going as far as to call it “monstrous.” In 1887, they made their complaints public by publishing their feelings in Le Temps newspaper on Valentine’s Day in a piece titled “The Protest Against the Tower of Monsieur Eiffel.” The text declared that the obscene design of the Tower was not consistent with “French taste… art and history.” However, by the time the tower was completed in 1889, these complaints had largely petered out in the face of the magnificent end result.

Workman shown atop the lofty Eiffel Tower.
Credit: Bettmann via Getty Images

The Eiffel Tower Reached New Heights

At the time of its construction in 1889, the Eiffel Tower reigned supreme as the tallest man-made structure in the world at a staggering 984 feet (300 meters); antennas added over the years have since pushed it past 1,000 feet. It was finally outdone in 1930 by New York City’s Chrysler Building.

World Fair of 1889 in Paris. A view of the Eiffel Tower.
Credit: ND/ Roger Viollet via Getty Images

The Eiffel Tower Was Supposed to Be Temporary

Erected in 1889, the famous Tower was built to commemorate the centennial of the French Revolution. It was originally given a lifespan of 20 years and was set to be demolished in 1909. However, the French decided to hang on to their precious tower once they realized the value of its radiotelegraph station (which proved very useful during World War I). It has since become a symbol of Parisian pride and a world-recognizable structure.

The Eiffel tower and the business district of La Defense are seen by night.
Credit: Chesnot/ Getty Images News via Getty Images

It Was Used to Take Down Criminals and Protect Paris

During World War I, the Eiffel Tower played a critical role in protecting the city and proved useful in gathering evidence on spies. The French used the tower’s transmitter to disrupt German communications. The Eiffel Tower’s wireless station also intercepted messages from the enemy, which yielded valuable intelligence about the German army. Lastly, the same wireless station allowed telecommunication officers to get their hands on a coded message that led to the capture of Mata Hari, the famous dancer accused of spying for Germany.

The sun sets on the Eiffel Tower and the Champ de Mars in Paris.
Credit: JULIEN DE ROSA/ AFP via Getty Images

The Eiffel Tower Moves

The wrought iron used in the Tower’s construction expands — and therefore grows — in the summer under the blazing sun. When the temperatures turn colder, the Eiffel Tower naturally shrinks again. The change in its height can be as much as 15 centimeters (nearly 6 inches). The top of the Eiffel Tower also sways in the wind, moving roughly 7 centimeters (nearly 3 inches) from side to side.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Pavel_Klimenko/ Shutterstock

When Earth was about 200 million years old, it passed through a field of rocks suspended in space. The rocks smashed into our planet and embedded millions of tons of new elements in Earth’s crust — including gold. Over time, the particles coalesced into veins, forming the bulk of the gold later mined for use in jewelry, currency, artworks, electronics, and more. Here are seven facts about this marvelous metal.

A worker displays a one kilogram gold bar at the ABC Refinery in Sydney, Australia.
Credit: DAVID GRAY/ AFP via Getty Images

Gold Has Unique Chemical Properties

Pure gold is sun-yellow, shiny, and soft, and has about the same hardness as a penny. It’s the most malleable metal: One gram of gold, equivalent in size to a grain of rice, can be hammered into a sheet of gold leaf measuring one square meter. Gold doesn’t rust or break down from friction or high temperatures. It conducts heat well and can be melted or reshaped infinitely without losing its elemental qualities. Gold can also be alloyed with other metals to increase hardness or create different colors. White gold, for example, is a mix of gold, nickel, copper, and zinc, while rose gold comprises gold, silver, and copper.

Gold bengals hanging from a jewelry rack.
Credit: Saj Shafique/ Unsplash

People Fashioned Gold Into Jewelry as Far Back as 4000 BCE

Cultures in the Middle East and the Mediterranean began using gold in decorative objects and personal ornaments thousands of years ago. The Sumerian civilization of southern Iraq made sophisticated gold jewelry around 3000 BCE, and Egyptian dynasties valued gold for funerary art and royal regalia. By the time of the ancient Greek and Roman civilizations, gold was the standard for international commerce, and even played a role in mythology and literature. The story of Jason’s quest for the Golden Fleece may have emerged from an old method of filtering gold particles from streams with sheepskins.

Close-up of gold coins and bars of different currencies.
Credit: Zlaťáky.cz/ Unsplash

Governments Have Used Gold as Currency for Millennia

Traders in the Mediterranean region used gold rings, bars, or ingots as currency for centuries, and Chinese merchants bought and sold goods with gold tokens as far back as 1091 BCE. In the sixth century BCE, the civilization of Lydia (in present-day Turkey) minted the first gold coins. Cities across the Greek world followed suit, establishing gold coins as the standard currency for trade with Persia, India, and farther afield.

Gold nuggets and vintage brass telescope on an antique map.
Credit: Andrey Burmakin/ Shutterstock

The Search for Gold Fueled the European Invasion of the Americas

European nations’ lust for gold prompted numerous expeditions of discovery to the Americas, beginning in 1492 with Columbus’ voyage to Hispaniola. Spanish conquistadors found the Aztec and Inca cultures awash in gold, which the Native peoples viewed as sacred. The Indigenous leaders gave the conquistadors gifts of gold earrings, necklaces, armbands, figurines, ornaments, and other objects. Seeing the potential riches for the taking, the Spanish government quickly authorized the conquest of the Indigenous cities and requisition of their gold, spelling disaster for the Aztec and Inca peoples.

Antique black and white illustration of the Klondike gold rush.
Credit: ilbusca/ iStock

America’s First Gold Rush Took Place in 1803

Gold is spread across Earth’s crust in varying concentrations. Over the past two centuries, the discoveries of particularly large deposits have often sparked gold rushes. In 1799, 12-year-old Conrad Reed found a 17-pound nugget in a stream on his grandfather’s North Carolina farm, the first time gold was found in the United States. Four years later, the Reed Gold Mine opened and attracted other prospectors hoping to strike it rich. Gold rushes also occurred in California in 1848, Nevada in the 1860s, and the Klondike region in the 1890s. Major gold rushes took place in Australia in the 1840s and 1850s and in South Africa in the 1880s as well.

Close-up of a person taking a photo on their phone.

Today, Gold Is Everywhere From Your Smartphone to the ISS

Thanks to gold’s physical properties, it can be used for a huge range of applications in addition to currency, jewelry, and decorative objects. Dentists repair teeth with gold crowns and bridges, and some cancer therapies use gold nanoparticles to kill malignant cells. Gold also protects sensitive circuitry and parts from corrosion in consumer electronics, communication satellites, and jet engines. And gold sheets reflect solar radiation from spacecraft and astronauts’ helmets.

Gold Lingot from the gold mine of Kinross.
Credit: Brooks Kraft/ Sygma via Getty Images

The U.S. Still Maintains a Stockpile of Gold

During the Great Depression, when the U.S. monetary system was based on the Gold Standard — in which the value of all paper and coin currency was convertible to actual gold — the federal government established the Fort Knox Bullion Depository in Kentucky to store the gold needed to back the currency. The U.S. eliminated the Gold Standard in 1971, but still maintains a gold stockpile at Fort Knox. Today, it holds about 147 million ounces of gold in bars roughly the size of a standard brick. That’s about half of all of the gold owned by the United States.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Freder/ iStock

Woven into tapestries, glittering from stained-glass windows, standing guard as statues, or starring in our favorite stories, films, and TV shows, mythological beasts such as unicorns and dragons have been a part of many cultures for centuries. But where did they come from, and how did they capture our collective imagination? Read on for some fascinating details about the fantastic creatures that populate our mythical cultural zoo.

A mermaid tale seen from over the water.
Credit: Annette Batista Day/ Unsplash

Mermaids

Legends of part-human, part-fish beings can be found in many places around the world, including India, China, Scotland, Brazil, Greece, and beyond. In some European folklore, mermaids are said to live in fantastic underwater palaces decorated with gems from sunken ships, though they have also been known to perch on rocks above the surface, where they sing beautiful songs that lure sailors to their doom. They’re often depicted as pale or silvery, with long golden or reddish hair, and it’s said that they can transform their tails into legs and go ashore to mix with people if they wish. They lack souls, however, unless they marry a human and receive a baptism. In many stories, they can peer into the future or grant wishes.

Some scholars trace all mer-stories to Oannes, Lord of the Waters, a Babylonian deity adopted from the Akkadians, who worshipped him thousands of years ago. Though depictions varied, Oannes was often shown with the head and torso of a man and the lower body of a fish. He was said to dwell beneath the sea at night, but during the day, Oannes went on land to teach humans wisdom.

The first female mermaid-type creature arrived on the mythological scene a little later. She is usually identified as the Semitic moon goddess Atargatis, or Derceto, who threw herself into a lake after a dalliance with a mortal and acquired the body of a fish.

By the 16th century, the image of a mermaid perched on a rock, combing her long tresses with one hand and holding a mirror with the other, was well-established in the popular imagination. (The word “mermaid,” by the way, comes to us from the Old English mere, which once meant “sea.”) Sailors reported mermaid sightings for centuries, although whether they were really seeing seals or manatees is anyone’s guess. Some of these sightings continued even into the 19th century, when mermaid folklore inspired Hans Christian Andersen’s famous 1837 fairy tale “The Little Mermaid.” More than 150 years later, Disney (loosely) adapted Andersen’s story into a beloved 1989 animated film of the same name, putting mermaids squarely in the mainstream.

The silhouette of a statue of a Centaur.
Credit: Giannis Papanikos/ Shutterstock

Centaurs

Centaurs come to us specifically from Greek mythology. The word “centaur” derives from the Greek kentauros, the name of a Thessalian tribe who were renowned as expert horsemen. (No one knows where the word for the tribe itself came from.) For the ancient Greeks, centaurs were a race of creatures that were half-human and half-horse. They were said to have sprung from the mating of the hero Centaurus with a field of mares, or (in other versions) from King Ixion of Thessaly and a cloud he believed to be the goddess Hera. Centaurs were often described as wild and lascivious, although they could also be peaceful and wise, as in the case of the Centaur king Chiron, mentor to the hero Heracles.

The most famous story of the centaurs involves the wedding of the Lapith king Pirithous, at which the centaurs got drunk and tried to carry off the women. Scenes from this wedding and the resulting fight are depicted on the relief panels above the columns of the Parthenon.

Beautiful arabian mare horse unicorn running free on meadow during sunset.
Credit: Anna Orsulakova/ iStock

Unicorns

The rare, magical unicorn was once thought of as native to India, although it also appears in Chinese myths and Mesopotamian artwork. The first Western account of the unicorn comes from the Greek writer Ctesias, who wrote a book on India based on stories he had heard from traders and other visitors to the Persian court. His book described a creature with a white body, purple head, and blue eyes, plus a long horn of red, white, and black. In later accounts, the unicorn is described as the size of a goat, with a beard, spiraled horn, and lion’s tail. Although no fossils of any unicorn-like creatures have been found, they were apparently real animals to ancients like Pliny the Elder, who wrote in detail about their supposed behavior and characteristics.

By the Middle Ages, unicorns were the subject of an elaborate body of folklore. They were said to be pure white and to dwell in forests, where flowers sprang up wherever they grazed. Because of their purity, they were associated with both the Virgin Mary and Jesus Christ. A unicorn’s horn — called an alicorn — was powerful medicine, able to purify water and detect poison. Royals drank from cups supposedly made from unicorn horns, but in fact often made from narwhal tusks sold by enterprising Viking traders. (At one point, the King of Denmark believed he had a unicorn-horn throne, but later scholars think it, too, was made from narwhal tusks.) Powdered unicorn horn was also a popular item in apothecary shops.

Because they were symbols of strength and nobility as well as purity, unicorns also frequently appeared on heraldic crests. In fact, the unicorn is the national animal of Scotland, where it has been part of the royal coat of arms since the 1500s. Another famous unicorn depiction is in the unicorn tapestries of France, which were produced in the late Middle Ages and still fascinate scholars today.

Close-up of 3 dragon statues.
Credit: Vlad Zaytsev/ Unsplash

Dragons

Like some other creatures on this list, dragons are found in ancient mythology from around the world — in Greek, Vedic, Teutonic, Anglo-Saxon, Chinese, and Christian cultures, among others. They have heads like crocodiles; scales of gold, silver, or other rich colors; large wings; and long, fearsome tails they use to beat and suffocate their opponents. Often said to be descended from giant water snakes, they are sometimes immune to fire, which they can swallow and breathe at will to incinerate their enemies.

In some ancient stories, dragons were thought to originally hail from Ethiopia or India. (Elephants were supposed to be their favorite food.) And in Western myths, they’re often depicted guarding treasure or trying to eat maidens. Christians associated them with sin and the devil.

In Chinese myths, they are far more benevolent, a symbol of divinity, royalty, and prosperity. Chinese dragons were first mentioned as early as the third millennium B.C., when a pair were supposedly seen by the Yellow Emperor (a mythological figure also known as Huangdi). According to legend, four dragon kings ruled over the four seas, and brought storms and rain. Dragon figures are still popular in Chinese culture today, as they are in Western fantasy art, literature, and role-playing games. (See: The Lord of the Rings, Game of Thrones, and Dungeons and Dragons.)

close-up of the tentacles of an octopus underwater.
Credit: Freder/ iStock

Kraken

The kraken has been recorded in Scandinavian writings for hundreds of years. This giant sea monster was said to haunt the icy waters near Norway, Iceland, and Sweden, where it would engulf ships in its massive tentacles and pull them to the bottom of the sea. It was usually described as having a giant bulbous head and eyes bigger than a person.

By some accounts, the kraken would anchor itself to the bottom of the ocean and feast on small fish that larger sea creatures sent their way to avoid being eaten themselves. (Scandinavian fishermen thus often said that if an area was teeming with fish, a kraken was probably nearby.) Once the kraken grew too fat to remain tethered to the sea floor, it would rise to the surface and attack ships. In other accounts, the creature rose to the surface when the waters were warmed by the fires of hell.

The kraken also reportedly had skin like gravel and was sometimes mistaken for an island; one account says that in 1700, a Danish priest celebrated mass on the back of a kraken. Some think that kraken accounts may have involved real-life giant squids, an elusive deep-sea creature that can weigh up to a ton and has eyes as big as a dinner plate, if not quite as big as a person.

A medieval castle with a statue of a griffin.
Credit: Pchelintseva Natalya/ Shutterstock

Griffins

In the lore of ancient Egypt and Greece, griffins were small, ferocious beings with the body of a lion and the head, wings, and talons of an eagle. The folklorist Adrienne Mayor has argued that stories of the griffin may have been inspired by ancient discoveries of fossils from Protoceratops dinosaurs, a relative of the Triceratops that had four legs, a sharp beak, and long shoulder blades that may have been interpreted as wings.

In any case, the earliest known depictions come from Egypt in the third millennium B.C. Back then, griffins were said to attack humans and horses, and were useful for protecting palaces, treasure, and tombs. The ancient Greeks thought they lived in Scythia — an empire centered on what is now Crimea — where they guarded the gold for which that land was famous. Like unicorns and dragons, they were popular on coats of arms and crests during the Middle Ages and beyond.

A look at a phoenix marble carving .
Credit: carekung/ Shutterstock

Phoenixes

The phoenix is a sacred bird associated with fire, the sun, and rebirth. About the size of an eagle, it’s said to have red-gold plumage, a long tail, and a harmonious song that sounds like a flute. Versions of the creature are found in Egyptian, Greek, and Chinese folklore, among other places.

In one ancient legend, after 500 years of life, the phoenix would make a nest of dry twigs, strike rocks with its beak until it lit a spark, and then set itself ablaze. Once the fire cooled, a new phoenix would rise from the ashes. Early Christian writers saw it as an image of the Resurrection. The bird was also associated with immortality, and only one was said to exist at any given time. (And in case you’re wondering, the town in Arizona is named for the mythological creature.)

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Richard Levine/ Alamy Stock Photo

Nostalgia is a powerful feeling. Reminiscing about the past can be a bonding experience, whether it’s sharing memories of eating Jiffy peanut butter as a kid or hearing Darth Vader say, “Luke, I am your father,” for the first time. But sometimes reality isn’t quite how we remember it. Jiffy peanut butter never actually existed, for one, and Darth Vader never said those exact words. These are both examples of what has come to be known as the Mandela Effect, in which collective groups share a highly specific — yet completely false — memory. This phenomenon can pop up in the most unexpected of places, so prepare your brain for the unbelievable examples that lie ahead.

Nelson Mandela raises clenched fist.
Credit: TREVOR SAMSON/ AFP via Getty Images

Nelson Mandela Did Not Die in the 1980s

The term “Mandela Effect” was coined in 2009 by paranormal researcher Fiona Broome, who recounted her vivid memories of the coverage of Nelson Mandela’s death in the 1980s. From news clips to an emotional speech from Mandela’s widow, Broome was convinced that she accurately remembered the tragedy of Mandela dying in prison. In reality, Mandela was released from prison in 1990, went on to become South Africa’s first Black president, and died in 2013. Despite being completely off the mark, Broome wasn’t alone in her conviction. On her website, she went on to share the stories of over 500 other people who mysteriously and inexplicably held this same belief.

Close-up of a jar of Jiffy Peanut Butter.
Credit: Richard Levine/ Alamy Stock Photo

Jif vs. Jiffy Peanut Butter

As confirmed by a representative from the J.M. Smucker Company, Jiffy brand peanut butter has never existed. That doesn’t stop people from claiming that they loved eating Jiffy as a kid. These peanut butter aficionados are likely confusing this fictitious brand with the similar-sounding Jif or Skippy. And it’s not just peanut butter — the Mandela Effect is widely prevalent among the foods we know (or think we know) and love. “Fruit Loops” are actually named “Froot Loops,” there’s no hyphen in KitKat, and it’s “Cup Noodles,” not “Cup O’ Noodles.”

View of the Berenstain Bears' family.
Credit: debra millet/ Alamy Stock Photo

Berenstain Bears or Berenstein Bears?

One visit to the Berenstain Bears’ official website and you can see that it’s clearly spelled “Berenstain.” The beloved children’s books about a family of bears were named after authors Stan and Jan Berenstain, who — like their creations — had an “a” in their last name. Yet many people who’ve read the books continue to insist (erroneously) that the name was once somehow spelled differently. In their possible defense, some early merchandise mistakenly featured both spellings, which may have led to some of the confusion. On top of that, audio tapes pronounced the name as “-steen,” which could have had a lasting influence on our collective psyche. Despite these arguments, the title is and always has been written as “The Berenstain Bears.”

Darth Vader from Star Wars.
Credit: United Archives GmbH/ Alamy Stock Photo

Darth Vader Never Said “Luke, I Am Your Father”

“Luke, I am your father” may be one of the most misquoted movie phrases of all time. Every Star Wars fan can remember the pivotal scene from Star Wars: Episode V – The Empire Strikes Back, in which Darth Vader reveals that he’s Luke Skywalker’s, well, father. But the phrasing most people know is incorrect — watch it back and you’ll find that Vader actually says, “No, I am your father.” This is just one of many examples of the Mandela Effect in film. The queen in Disney’s 1937 animated film Snow White never says, “Mirror, mirror, on the wall,” referring to it instead as “Magic mirror.” And at no point in The Silence of the Lambs does Hannibal Lecter ever say, “Hello, Clarice.” However, after years of fans misquoting the movie, the line “Hello, Clarice” was finally written into the film’s 2001 sequel.

Aerial view of two people playing Monopoly.
Credit: Maria Lin Kim/ Unsplash

The Monopoly Man Never Wore a Monocle

The Monopoly Man is known for his top hat, mustache, and monocle, right? Well, that popular image is at least partly wrong. While the top hat and mustache have been part of Rich Uncle Pennybags’ appearance since he was first introduced in 1936, he’s never worn a monocle. Some psychologists believe that our collective subconscious could have been influenced by the advertising mascot Mr. Peanut (the mascot for Planters Peanuts), who’s just as well known and wears both a top hat and monocle. Gene Brewer, an associate professor in cognitive psychology at Arizona State University, explains that our brains can combine subjects with similar traits — “In studies, when you show participants word pairs and ask them to remember ‘blackmail’ and ‘jailbird,’ half of them will later say they remember learning the word ‘blackbird.’”

Close-up of a Fruit of the Loom clothing tag.
Credit: Lenscap/ Alamy Stock Photo

Fruit of the Loom’s “Vanishing” Cornucopia

Take a look at the tag on a piece of Fruit of the Loom apparel. Now take a look again, just to be sure. Even though every fiber of your being may have thought otherwise, there’s no cornucopia to be found in the logo. As far back as 1893, when the logo was introduced — long before anyone on the internet claimed differently — it’s just been a simple combination of an apple and different varieties of grapes, with leaves on the side. It’s not clear why so many people remember a cornucopia being present.

Smokey the Bear and "Little Smokey".
Credit: Bettmann via Getty Images

It’s Just “Smokey Bear”

For over 75 years, the U.S. Forest Service has featured an ursine mascot warning about forest fires. After all this time, you’d think we’d know his name. Commonly and mistakenly referred to as “Smokey the Bear,” this long-tenured advertising icon is actually just Smokey Bear. Some attribute this mistake to a 1952 song about Smokey, in which songwriters Steve Nelson and Jack Rollins added a “the” to his name in order to retain the song’s rhythm. While some may continue to argue over Smokey’s name, there’s much less ambiguity when it comes to who can prevent forest fires. That’s just “you.”

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by BOOCYS/ Shutterstock

From the comfiest of sneakers to the highest of stilettos, shoes are a key component of any wardrobe. But while loafers and clogs may seem like just another accessory to some, footwear has a rich and fascinating history dating back millennia. So lace up your boots and take a stroll through this list of six incredible facts about shoes.

Converse shoes are seen in a store.
Credit: Joe Raedle/ Getty Images News via Getty Images

Chuck Taylor All Stars Were the First Signature Athletic Shoes

Long before Jordans and Kobes hit the market, the first athlete to lend his name to a signature shoe was Chuck Taylor, a semiprofessional basketball player from Indiana. Converse created its All Star sneaker in 1917 with the sport of basketball in mind, and by 1921, Taylor had signed on to help sell the shoes out of the company’s office in Chicago. Taylor wasn’t a celebrity in the same way that today’s NBA players are, but as part of his job, he organized promotional basketball clinics for Converse and worked with coaches and athletes all over the country. He became so closely associated with the brand that people started referring to All Stars as “Chuck Taylor’s shoes,” even before his name was physically affixed to the sneakers in the early 1930s.

Within a few decades, other signature shoes followed. In 1958, Celtics star Bob Cousy worked with a company called PF Flyers to design a shoe that sold 14 million pairs in its first year. And in 1973, Puma released the Puma Clyde, named for New York Knicks star Clyde Frazier. Of course, the biggest names in the signature shoe game are Nike and Michael Jordan, who teamed up on the Air Jordan I (the first of many releases) in 1985. Jordan is undoubtedly Nike’s most successful signature athlete, but he wasn’t the company’s first. That title belongs to Wayne Wells, a freestyle wrestler who won gold at the 1972 Olympics. Wells signed a contract with Nike that same year and helped design a wrestling shoe to which he lent his name, paving the way for future athletes to sign on with the brand.

Neil Armstrong lunar surface training.
Credit: Heritage Images/ Hulton Archive via Getty Images

Neil Armstrong Left His Shoes on the Moon

After Neil Armstrong took one of the most consequential steps in human history, the boots he used to do so were discarded on the moon. In fact, both of the Apollo 11 astronauts who walked on the lunar surface — Armstrong and Buzz Aldrin — left behind their overshoes, along with their portable life-support systems. Leaving the gear wasn’t a symbolic gesture; it helped to offset the added weight of collected moon rocks that the lunar module would be taking back. And the astronauts didn’t return to Earth barefoot, either. The treaded overshoes they abandoned were worn atop flat-soled pressure boots (which they kept) for added traction while traversing the moon’s rocky terrain.

Leather Chelsea boot detail on wood.
Credit: Thomas Faull/ iStock via Getty Images Plus

The First Slip-On Elastic Boots Were Made for Queen Victoria

In the early 1800s, boots were a popular style among both men and women, though fastening them with rudimentary laces and buttons made putting them on difficult. English inventor Joseph Sparkes Hall realized there had to be a better way, and in 1837, he designed the first pair of elastic-sided boots, which he presented to Queen Victoria that same year (the year she ascended to the throne).

This new slip-on boot provided the comfort of slippers with the stability of laced shoes, and became well known thanks to Victoria’s blessing. As Sparkes Hall explained in The Book of the Feet, written in 1846, “Her Majesty has been pleased to honor the invention with the most marked and continued patronage; it has been my privilege for some years to make boots of this kind for Her Majesty, and no one who reads the court circular, or is acquainted with Her Majesty’s habits of walking and exercise in the open air, can doubt the superior claims of the elastic over every other kind of boot.” Sparkes Hall’s patented design would go on to inspire the modern-day Chelsea boot, which has been worn by everyone from the Beatles to the Stormtrooper characters in Star Wars.

Female horseback rider in heels.
Credit: Anne Ackermann/ Photodisc via Getty Images

High Heels Were Originally Worn by Horseback Riders

Though they’ve since become a symbol of high fashion, high-heeled shoes originally had more of a practical use. They were commonly worn throughout horseback-riding cultures around the 10th century, and were particularly popular in Persia, where the cavalry found that 1-inch heels added extra stability in stirrups when they stood up to fire their bows. Persia later sent a delegation of soldiers to Europe in the 17th century, which in turn inspired European aristocrats to add high heels to their personal wardrobes. Heeled boots became all the rage among members of the upper class throughout Europe, and in 1670, France’s Louis XIV passed a law mandating that only members of the nobility could wear heels. In the 18th century, the style became increasingly gendered as heels grew in popularity among women. By the start of the French Revolution in 1789, men of the French nobility had largely given up on the trend in favor of broader, sturdier shoes.

The presidential shoe collection.
Credit: Haydn West – PA Images via Getty Images

Johnston & Murphy Has Made Shoes for U.S. Presidents Since 1850

Though there’s no exclusive contract, Johnston & Murphy serves as the unofficial footwear provider of U.S. Presidents, having designed shoes for America’s commanders in chief since the company was established in 1850 by William J. Dudley, who offered to make shoes for President Millard Fillmore. (Dudley called his business the William J. Dudley Shoe Company, but his partner James Johnston renamed it after Dudley died and he brought on William Murphy as a new partner.)

In the decades since, Johnston & Murphy has been tasked with crafting a wide variety of presidential kicks, with the smallest being a size 7 for Rutherford B. Hayes and the largest a size 14 for Abraham Lincoln. Some of the more famous styles have included black lace-up boots for Lincoln, black wingtips for President Kennedy, black cap-toe shoes beloved by Ronald Reagan, and black oxfords for Barack Obama, which came in a handcrafted box of Hawaiian-sourced wood.

King Charles I's buskin boots.
Credit: Heritage Images/ Hulton Archive via Getty Images

Ancient Greek Actors Wore Different Footwear for Dramatic and Comedic Roles

In addition to their narratives, ancient Greek tragedies and comedies could often be distinguished by the type of footwear the actors wore. Dramatic actors wore a style known as a buskin, a boot with a thick sole believed to be anywhere between 4 and 10 inches high. This set them apart from comedic actors, who wore just thin socks on their feet. It was thought that buskins gave serious performers a more prominent stage presence compared to their humorous counterparts.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by B Isnor/ Shutterstock

For millennia, lighthouses have guided wayward ships away from hazardous waters, providing safety during powerful storms. Lighthouses still help seafarers today, though modern sailors have many more navigational tools at their disposal, from GPS to detailed nautical charts, buoys, and radar beacons. These days, many lighthouses have become romantic relics of another era, one in which people set sail with only the power of the wind and looked toward lighthouses to guide them back home. These seven illuminating facts about lighthouses include just a few reasons why these structures continue to fascinate us and remain popular tourist destinations today.

Lighthouse of Alexandria.
Credit: MR1805/ iStock

Antiquity’s Most Famous Lighthouse Is One of the Seven Wonders of the Ancient World

The Lighthouse of Alexandria, also known as the Pharos of Alexandria, was built during the reign of Ptolemy II of Egypt, around 280 BCE. For centuries, it was one of the tallest structures in the world, with reports estimating that it reached about 350 feet high. The lighthouse stood on the island of Pharos in the harbor of Alexandria, the city named after Alexander the Great that served as the capital of the Ptolemaic Kingdom (which lasted from 305 BCE to 30 BCE).

Sadly, frequent earthquakes in the Mediterranean region badly damaged the lighthouse, and it was completely destroyed by the 14th century. However, the lighthouse served as an archetype for those that followed, and its name is embedded in many Romance languages (the word for “lighthouse” is phare in French and faro in Spanish and Italian); “pharos” is sometimes used in English as well. In 1994, French archaeologists discovered remains of the famous lighthouse on the seabed, and UNESCO is working to declare the area a submerged World Heritage Site.

Aerial view of Big Sable Point Lighthouse near Ludington, Michigan.
Credit: Frederick Millett/ Shutterstock

The U.S. Has More Lighthouses Than Any Other Country

The United States’ first lighthouse was built in 1716 on Little Brewster Island near Boston, Massachusetts. Lighthouses were so important to early America that in 1789 the first U.S. Congress passed the Lighthouse Act, which created the United States Lighthouse Establishment under the Department of the Treasury. Today, the U.S. is home to over 700 lighthouses — more than any other country in the world. However, the state with the most lighthouses isn’t on an ocean coast at all: Michigan — bordered by four of the five Great Lakes — is home to some 130 lighthouses, including the remote lighthouse on Stannard Rock, nicknamed “the loneliest place in North America.”

Hercules tower in Spain.
Credit: Migel/ Shutterstock

The Romans Built the Oldest Surviving Lighthouse

In the first century CE, the ancient Romans built the Farum Brigantium, known today as the Tower of Hercules — the world’s oldest lighthouse that is still functional. The lighthouse continues to guide and signal sailors from La Coruña harbor in northwestern Spain. An 18th-century restoration of the tower thankfully preserved the original core of the structure while improving its functionality. Now a UNESCO World Heritage Site, the Tower of Hercules is the only Greco-Roman lighthouse from antiquity that has retained such a high level of structural integrity, and it continues to shine its light across the Atlantic to this day.

View of Nantucket Lightship.
Credit: Cathy Kovarik/ Shutterstock

“Lightships” Once Sailed the Seas

Although lighthouses were originally designed as immovable land structures, in 1731 English inventor Robert Hamblin designed the first modern lightship and moored it at the Nore sandbank at the mouth of the Thames River. As its name suggests, the ship had a lighted beacon and was used to provide safe navigation in areas where building a land-based lighthouse was impractical. The U.S. had its own lightship service, which began in 1820 and lasted 165 years. The country’s last lightship, the Nantucket, retired in 1985 after being replaced by more modern technology such as automated buoys. Today, the United States lightship Nantucket (LV-112) is registered as a National Historic Landmark.

View of a lens used for lighthouses.
Credit: Science & Society Picture Library via Getty Images

An 1819 Invention Gave Lighthouses a Major Upgrade That Still Exists Today

In the early 19th century, lighthouses weren’t particularly good at steering ships away from land, as the most common lenses used at the time, known as Lewis lamps, were not nearly powerful enough. Enter French inventor Augustin-Jean Fresnel, who in 1821 introduced his eponymous lens. The Fresnel lens used a series of prisms to focus all the light from a lamp in one direction and magnify it into a much more powerful beam. Soon, Fresnel lenses were installed in lighthouses all over the world. Not only did they offer vastly improved functionality, but they were also stunningly beautiful. The Fresnel lens was so revolutionary that the technique is still used today in floodlights and professional lighting equipment.

Baltimore Harbor Lighthouse in Chesapeake Bay, Maryland.
Credit: Michael Ventura/ Alamy Stock Photo

The U.S. and Soviet Union Experimented With Nuclear-Powered Lighthouses

In 1964, the Baltimore Harbor Light, which sits at the mouth of the Magothy River, became the first — and last — nuclear-powered lighthouse ever operated by the United States. Originally constructed in 1908, the Baltimore Harbor Light functioned as a far more typical lighthouse for 56 years, until it became the subject of a Coast Guard experiment. The U.S. government installed a 4,600-pound nuclear generator, and the lighthouse ran on nuclear power for a year before the project was dismantled (thankfully with no signs of nuclear contamination).

Although the U.S.’s experiment with nuclear lighthouses was short-lived, the Soviet Union embraced them more enthusiastically, building 132 nuclear-powered lighthouses along the notoriously inhospitable Northeast Passage, a shipping route between the Atlantic and Pacific oceans along Russia’s Arctic coast. After the fall of the Soviet Union in the early 1990s, Russia abandoned the upkeep of these lighthouses. But, being nuclear-powered, they kept shining their light for years afterward.

Flannan Islands Lighthouse (21 miles west of Lewis).
Credit: Ian Cowe/ Alamy Stock Photo

A Remote Scottish Lighthouse Was the Site of an Enduring Mystery

The Flannan Isles Lighthouse is located on Eilean Mòr, a remote, uninhabited island off Scotland’s northwest coast. From the outside, the lighthouse is remarkably similar to many other lighthouses built around the turn of the 20th century — so you might not guess that it was the setting of a notorious unsolved disappearance, one that inspired the 2018 film The Vanishing starring Gerard Butler.

On December 15, 1900, the crew of the transatlantic steamer Archtor, en route to the port town of Leith, noticed that the lighthouse wasn’t lit. A team from the lighthouse board visited the island later that month and found no sign of the three lighthouse keepers who were supposed to be on duty. The table was set for dinner, and an oilskin (a type of raincoat) was still on its hook. A preliminary investigation concluded that two of the keepers likely traveled to the west platform to secure a supply box during a storm and accidentally tumbled into the sea; when the last keeper went to investigate (without his oilskin), he likely met a similar fate. Rumors on the mainland posited more fanciful explanations, including mythical sea serpents or even murder. While those explanations have been largely dismissed, it’s unlikely we’ll ever know for sure what happened at the Flannan Isles Lighthouse.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by TORWAISTUDIO/ Shutterstock

Movement does our bodies good. But you know what’s easier than running a marathon? Learning a few quick facts about exercise, no pain or gain required.

We aren’t doctors, so we can’t advise you on the best ways to exercise — but we can rattle off some trivia about it. Where did the 10,000 steps benchmark come from? What’s the deal with a “runner’s high”? These six interesting facts may not help you get fit, but at least you’ll learn something.

Group of runners during sunset.
Credit: YanLev/ iStock

Exercise Can Get Some People High

You may have heard of a “runner’s high,” or a rush of euphoria after exercise that’s not actually limited to runners. It’s a real biological phenomenon, although it’s relatively rare. The commonly held belief is that it’s caused by hormones called endorphins, but they don’t cross the blood-brain barrier. The more likely culprit is the endocannabinoid system, the same system that cannabis interacts with to create its psychoactive effects.

Exercise increases the amount of endocannabinoids in the bloodstream, which can cross the blood-brain barrier. For some people, this can cause a rush of euphoria, reduced anxiety, and improved mood. This isn’t especially common, though, and there’s much about the phenomenon scientists are still trying to figure out.

3D rendered medically accurate illustration of the hippocampus.
Credit: SciePro/ Shutterstock

Exercise Can Help You Think More Clearly

Ever take a walk to clear your head? It might not just be a change of scenery that gives you a much-needed reset. A growing body of research shows that exercise, including walking, increases cognitive ability.

Exercising increases blood flow, including to the brain. The increase in energy and oxygen could boost performance. But it gets more complex than that. When we exercise, the hippocampus, a part of our brain necessary for learning and memory, becomes more active — and when there’s increased energy in the hippocampus, we think more effectively. Regular exercise could even help reverse some age-related decline in the brain.

Baby boy in a diaper crawling next to a window.
Credit: Onjira Leibe/ Shutterstock

Even Babies Need Exercise

Babyhood offers an unparalleled opportunity to mostly just eat and sleep, but in between, infants need at least some exercise. Giving infants several opportunities to move around each day could improve motor skills, bone health, and social development. Tummy time — supervised time with a baby lying face-down — strengthens babies’ neck, shoulder, and arm muscles, too. The World Health Organization (WHO) recommends that babies be active several times a day, including at least 30 minutes spent on their stomachs. Babies still get plenty of dozing time, though; the WHO recommends 12 to 16 hours of sleep for infants 4 months through 11 months of age.

A woman walking with a pedometer reaching her goal of 10,000 steps.
Credit: Angela Schmidt/ Shutterstock

The 10,000-Step Goal Was Invented for Pedometer Marketing

If you have a smartwatch or other fitness tracker, you might get a little celebratory notification when you hit 10,000 steps — or maybe you’ve just heard someone refer to “getting their 10,000 steps in.” That benchmark persists because it’s a nice, round number that’s easier to use in marketing materials, not because there’s any scientific basis for it.

Way back in the 1960s, a Japanese company invented a pedometer called the Manpo-kei, or “10,000 steps meter,” building off momentum from the 1964 Tokyo Olympics. Nearly 60 years later, 10,000 steps is still the default goal on many step counters, including Fitbit devices.

While getting 10,000 steps a day is a healthy habit, you don’t have to take that many to see benefits from walking, according to experts. One study found that just 4,400 steps a day lowered the risk of early death by 41% compared with the least active participants. Benefits increased with additional steps but topped out at around 7,500 (at least in that study, which looked at mortality in older women). Of course, your mileage may vary depending on your goals, exercise pace, and general health, but there’s no reason to feel discouraged if you’re not getting a full 10,000 in every day.

Greek Gymnasium at the Time of the First Olympic Games.
Credit: Universal History Archive/ Universal Images Group via Getty Images

“Gymnasium” Comes From the Greek for “School for Naked Exercise”

Today, “gymnasium” or “gym” can refer to a lot of things having to do with physical activity, like a school gymnasium, a health club, or a playground jungle gym. It comes from the ancient Greek word gymnasion, or “school for naked exercise.” Gymnos meant “naked,” and the people using the gym didn’t wear clothes — they simply oiled their skin or dusted it with powder. In ancient Greece, physical education was just as important as the arts, and these facilities eventually grew more elaborate, with surrounding changing rooms, baths, and practice rooms.

Close-up image of a woman gardening.
Credit: Juice Flair/ Shutterstock

Gardening Counts as Exercise

Getting your hands dirty in your garden isn’t just a mood-boosting pastime — it’s great exercise, too. All that digging, hauling, and moving works all your major muscle groups, improves mobility, and boosts endurance. It also burns serious energy: Even light gardening or yard work can burn more than 300 calories per hour for a 154-pound person, according to the Centers for Disease Control and Prevention. That’s comparable to going dancing or taking a hike. For heavy yard work, like chopping wood, the number jumps to 440 calories per hour, although the exact figure will vary depending on the nature of the work and each individual body.

It’s easy to build a more strenuous workout from your existing gardening routine with simple adjustments, like carrying heavier watering cans, switching to a push mower, or walking extra laps around your yard. And there’s an additional healthy bonus to garden exercise: Fresh veggies!

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.