Original photo by Pavel_Klimenko/ Shutterstock

When Earth was about 200 million years old, it passed through a field of rocks suspended in space. The rocks smashed into our planet and embedded millions of tons of new elements in Earth’s crust — including gold. Over time, the particles coalesced into veins, forming the bulk of the gold later mined for use in jewelry, currency, artworks, electronics, and more. Here are seven facts about this marvelous metal.

A worker displays a one kilogram gold bar at the ABC Refinery in Sydney, Australia.
Credit: DAVID GRAY/ AFP via Getty Images

Gold Has Unique Chemical Properties

Pure gold is sun-yellow, shiny, and soft, and has about the same hardness as a penny. It’s the most malleable metal: One gram of gold, equivalent in size to a grain of rice, can be hammered into a sheet of gold leaf measuring one square meter. Gold doesn’t rust or break down from friction or high temperatures. It conducts heat well and can be melted or reshaped infinitely without losing its elemental qualities. Gold can also be alloyed with other metals to increase hardness or create different colors. White gold, for example, is a mix of gold, nickel, copper, and zinc, while rose gold comprises gold, silver, and copper.

Gold bangles hanging from a jewelry rack.
Credit: Saj Shafique/ Unsplash

People Fashioned Gold Into Jewelry as Far Back as 4000 BCE

Cultures in the Middle East and the Mediterranean began using gold in decorative objects and personal ornaments thousands of years ago. The Sumer civilization of southern Iraq made sophisticated gold jewelry around 3000 BCE, and Egyptian dynasties valued gold for funerary art and royal regalia. By the time of the ancient Greek and Roman civilizations, gold was the standard for international commerce, and even played a role in mythology and literature. The story of Jason’s quest for the Golden Fleece may have emerged from an old method of filtering gold particles from streams with sheepskins.

Close-up of gold coins and bars of different currencies.
Credit: Zlaťáky.cz/ Unsplash

Governments Have Used Gold as Currency for Millennia

Traders in the Mediterranean region used gold rings, bars, or ingots as currency for centuries, and Chinese merchants bought and sold goods with gold tokens as far back as 1091 BCE. In the sixth century BCE, the civilization of Lydia (in present-day Turkey) minted the first gold coins. Cities across the Greek world followed suit, establishing gold coins as the standard currency for trade with Persia, India, and farther afield.

Gold nuggets and vintage brass telescope on an antique map.
Credit: Andrey Burmakin/ Shutterstock

The Search for Gold Fueled the European Invasion of the Americas

European nations’ lust for gold prompted numerous expeditions of discovery to the Americas, beginning in 1492 with Columbus’ voyage to Hispaniola. Spanish conquistadors found the Aztec and Inca cultures awash in gold, which the Native peoples viewed as sacred. The Indigenous leaders gave the conquistadors gifts of gold earrings, necklaces, armbands, figurines, ornaments, and other objects. Seeing the potential riches for the taking, the Spanish government quickly authorized the conquest of the Indigenous cities and requisition of their gold, spelling disaster for the Aztec and Inca peoples.

Antique black and white illustration of the Klondike gold rush.
Credit: ilbusca/ iStock

America’s First Gold Rush Took Place in 1803

Gold is spread across Earth’s crust in varying concentrations. Over the past two centuries, the discoveries of particularly large deposits have often sparked gold rushes. In 1799, 12-year-old Conrad Reed found a 17-pound nugget in a stream on his father’s North Carolina farm, the first documented gold discovery in the United States. Four years later, the Reed Gold Mine opened and attracted other prospectors hoping to strike it rich. Gold rushes also occurred in California in 1848, Nevada in the 1860s, and the Klondike region in the 1890s. Major gold rushes took place in Australia in the 1850s and in South Africa in the 1880s as well.

Close-up of a person taking a photo on their phone.

Today, Gold Is Everywhere From Your Smartphone to the ISS

Thanks to gold’s physical properties, it can be used for a huge range of applications in addition to currency, jewelry, and decorative objects. Dentists repair teeth with gold crowns and bridges, and some cancer therapies use gold nanoparticles to kill malignant cells. Gold also protects sensitive circuitry and parts from corrosion in consumer electronics, communication satellites, and jet engines. And gold sheets reflect solar radiation from spacecraft and astronauts’ helmets.

Gold ingot from the gold mine of Kinross.
Credit: Brooks Kraft/ Sygma via Getty Images

The U.S. Still Maintains a Stockpile of Gold

During the Great Depression, when the U.S. monetary system was based on the Gold Standard — in which the value of all paper and coin currency was convertible to actual gold — the federal government established the Fort Knox Bullion Depository in Kentucky to store the gold needed to back the currency. The U.S. eliminated the Gold Standard in 1971, but still maintains a gold stockpile at Fort Knox. Today, it holds about 147 million ounces of gold in bars roughly the size of a standard brick. That’s about half of all of the gold owned by the United States.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Freder/ iStock

Woven into tapestries, glittering from stained-glass windows, standing guard as statues, or starring in our favorite stories, films, and TV shows, mythological beasts such as unicorns and dragons have been a part of many cultures for centuries. But where did they come from, and how did they capture our collective imagination? Read on for some fascinating details about the fantastic creatures that populate our mythical cultural zoo.

A mermaid tail seen from over the water.
Credit: Annette Batista Day/ Unsplash

Mermaids

Legends of part-human, part-fish beings can be found in many places around the world, including India, China, Scotland, Brazil, Greece, and beyond. In some European folklore, mermaids are said to live in fantastic underwater palaces decorated with gems from sunken ships, though they have also been known to perch on rocks above the surface, where they sing beautiful songs that lure sailors to their doom. They’re often depicted as pale or silvery, with long golden or reddish hair, and it’s said that they can transform their tails into legs and go ashore to mix with people if they wish. They lack souls, however, unless they marry a human and receive a baptism. In many stories, they can peer into the future or grant wishes.

Some scholars trace all mer-stories to Oannes, Lord of the Waters, a Babylonian deity adopted from the Akkadians, who worshipped him thousands of years ago. Though depictions varied, Oannes was often shown with the head and torso of a man and the lower body of a fish. He was said to dwell beneath the sea at night, but during the day, Oannes went on land to teach humans wisdom.

The first female mermaid-type creature arrived on the mythological scene a little later. She is usually identified as the Semitic moon goddess Atargatis, or Derceto, who threw herself into a lake after a dalliance with a mortal and acquired the body of a fish.

By the 16th century, the image of a mermaid perched on a rock, combing her long tresses with one hand and holding a mirror with the other, was well-established in the popular imagination. (The word “mermaid,” by the way, comes to us from the Old English mere, which once meant “sea.”) Sailors reported mermaid sightings for centuries, although whether they were really seeing seals or manatees is anyone’s guess. Some of these sightings continued even into the 19th century, when mermaid folklore inspired Hans Christian Andersen’s famous 1837 fairy tale “The Little Mermaid.” More than 150 years later, Disney (loosely) adapted Andersen’s story into a beloved 1989 animated film of the same name, putting mermaids squarely in the mainstream.

The silhouette of a statue of a Centaur.
Credit: Giannis Papanikos/ Shutterstock

Centaurs

Centaurs come to us specifically from Greek mythology. The word “centaur” derives from the Greek kentauros, the name of a Thessalonian tribe who were renowned as expert horsemen. (No one knows where the word for the tribe itself came from.) For the ancient Greeks, centaurs were a race of creatures that were half-human and half-horse. They were said to have sprung from the mating of the hero Centaurus with a herd of mares, or (in other versions) from King Ixion of Thessaly and a cloud he believed to be the goddess Hera. Centaurs were often described as wild and lascivious, although they could also be peaceful and wise, as in the case of the Centaur king Chiron, mentor to the hero Heracles.

The most famous story of the centaurs involves a wedding of the Lapith king Pirithous at which the centaurs got drunk and tried to carry off the women. Scenes from this wedding and a resulting fight are depicted on the relief panels above the columns of the Parthenon.

A beautiful Arabian mare depicted as a unicorn running free in a meadow at sunset.
Credit: Anna Orsulakova/ iStock

Unicorns

The rare, magical unicorn was once thought of as native to India, although it also appears in Chinese myths and Mesopotamian artwork. The first Western account of the unicorn comes from the Greek writer Ctesias, who wrote a book on India based on stories he had heard from traders and other visitors to the Persian court. His book described a creature with a white body, purple head, and blue eyes, plus a long horn of red, white, and black. In later accounts, the unicorn is described as the size of a goat, with a beard, spiraled horn, and lion’s tail. Although no fossils of any unicorn-like creatures have been found, they were apparently real animals to ancients like Pliny the Elder, who wrote in detail about their supposed behavior and characteristics.

By the Middle Ages, unicorns were the subject of an elaborate body of folklore. They were said to be pure white and to dwell in forests, where flowers sprung up wherever they grazed. Because of their purity, they were associated with both the Virgin Mary and Jesus Christ. A unicorn’s horn — called an alicorn — was powerful medicine, able to purify water and detect poison. Royals drank from cups supposedly made from unicorn horns, but in fact often made from narwhal tusks sold by enterprising Viking traders. (At one point, the King of Denmark believed he had a unicorn-horn throne, but later scholars think it, too, was made from narwhal tusks.) Powdered unicorn horn was also a popular item in apothecary shops.

Because they were symbols of strength and nobility as well as purity, unicorns also frequently appeared on heraldic crests. In fact, the unicorn is the national animal of Scotland, where it has been part of the royal coat of arms since the 1500s. Another famous unicorn depiction is in the unicorn tapestries of France, which were produced in the late Middle Ages and still fascinate scholars today.

Close-up of 3 dragon statues.
Credit: Vlad Zaytsev/ Unsplash

Dragons

Like some other creatures on this list, dragons are found in ancient mythology from around the world — in Greek, Vedic, Teutonic, Anglo-Saxon, Chinese, and Christian cultures, among others. They have heads like crocodiles; scales of gold, silver, or other rich colors; large wings; and long, fearsome tails they use to beat and suffocate their opponents. Often said to be descended from giant water snakes, they are sometimes immune to fire, which they can swallow and breathe at will to incinerate their enemies.

In some ancient stories, dragons were thought to originally hail from Ethiopia or India. (Elephants were supposed to be their favorite food.) And in Western myths, they’re often depicted guarding treasure or trying to eat maidens. Christians associated them with sin and the devil.

In Chinese myths, they are far more benevolent, a symbol of divinity, royalty, and prosperity. Chinese dragons were first mentioned as early as the third millennium B.C., when a pair were supposedly seen by the Yellow Emperor (a mythological figure also known as Huangdi). According to legend, four dragon kings ruled over the four seas, and brought storms and rain. Dragon figures are still popular in Chinese culture today, as they are in Western fantasy art, literature, and role-playing games. (See: The Lord of the Rings, Game of Thrones, and Dungeons and Dragons.)

Close-up of the tentacles of an octopus underwater.
Credit: Freder/ iStock

Kraken

The kraken has been recorded in Scandinavian writings for hundreds of years. This giant sea monster was said to haunt the icy waters near Norway, Iceland, and Sweden, where it would engulf ships in its massive tentacles and pull them to the bottom of the sea. It was usually described as having a giant bulbous head and eyes bigger than a person.

By some accounts, the kraken would anchor itself to the bottom of the ocean and feast on small fish that larger sea creatures sent their way to avoid being eaten themselves. (Scandinavian fishermen thus often said that if an area was teeming with fish, a kraken was probably nearby.) Once the kraken grew too fat to remain tethered to the sea floor, it would rise to the surface and attack ships. In other accounts, the creature rose to the surface when the waters were warmed by the fires of hell.

The kraken also reportedly had skin like gravel and was sometimes mistaken for an island; one account says that in 1700, a Danish priest celebrated mass on the back of a kraken. Some think that kraken accounts may have been inspired by the real-life giant squid, an elusive deep-sea creature that can weigh hundreds of pounds and has eyes as big as a dinner plate, if not quite as big as a person.

A medieval castle with a statue of a griffin.
Credit: Pchelintseva Natalya/ Shutterstock

Griffins

In the lore of ancient Egypt and Greece, griffins were small, ferocious beings with the body of a lion and the head, wings, and talons of an eagle. The folklorist Adrienne Mayor has argued that stories of the griffin may have been inspired by ancient discoveries of fossils from Protoceratops dinosaurs, a relative of the Triceratops that had four legs, a sharp beak, and long shoulder blades that may have been interpreted as wings.

In any case, the earliest known depictions come from Egypt in the third millennium B.C. Back then, griffins were said to attack humans and horses, and were useful for protecting palaces, treasure, and tombs. The ancient Greeks thought they lived in Scythia — an empire centered on what is now Crimea — where they guarded the gold for which that land was famous. Like unicorns and dragons, they were popular on coats of arms and crests during the Middle Ages and beyond.

A look at a phoenix marble carving.
Credit: carekung/ Shutterstock

Phoenixes

The phoenix is a sacred bird associated with fire, the sun, and rebirth. About the size of an eagle, it’s said to have red-gold plumage, a long tail, and a harmonious song that sounds like a flute. Versions of the creature are found in Egyptian, Greek, and Chinese folklore, among other places.

In one ancient legend, after 500 years of life, the phoenix would make a nest of dry twigs, strike rocks with its beak until it lit a spark, and then set itself ablaze. Once the fire cooled, a new phoenix would rise from the ashes. Early Christian writers saw it as an image of the Resurrection. The bird was also associated with immortality, and only one was said to exist at any given time. (And in case you’re wondering, the city in Arizona is named for the mythological creature.)

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Richard Levine/ Alamy Stock Photo

Nostalgia is a powerful feeling. Reminiscing about the past can be a bonding experience, whether it’s sharing memories of eating Jiffy peanut butter as a kid or hearing Darth Vader say, “Luke, I am your father,” for the first time. But sometimes reality isn’t quite how we remember it. Jiffy peanut butter never actually existed, for one, and Darth Vader never said those exact words. These are both examples of what has come to be known as the Mandela Effect, in which collective groups share a highly specific — yet completely false — memory. This phenomenon can pop up in the most unexpected of places, so prepare your brain for the unbelievable examples that lie ahead.

Nelson Mandela raises clenched fist.
Credit: TREVOR SAMSON/ AFP via Getty Images

Nelson Mandela Did Not Die in the 1980s

The term “Mandela Effect” was coined in 2009 by paranormal researcher Fiona Broome, who recounted her vivid memories of the coverage of Nelson Mandela’s death in the 1980s. From news clips to an emotional speech from Mandela’s widow, Broome was convinced that she accurately remembered the tragedy of Mandela dying in prison. In reality, Mandela was released from prison in 1990, went on to become South Africa’s first Black president, and died in 2013. Despite being completely off the mark, Broome wasn’t alone in her conviction. On her website, she went on to share the stories of over 500 other people who mysteriously and inexplicably held this same belief.

Close-up of a jar of Jiffy Peanut Butter.
Credit: Richard Levine/ Alamy Stock Photo

Jif vs. Jiffy Peanut Butter

As confirmed by a representative from the J.M. Smucker Company, Jiffy brand peanut butter has never existed. That doesn’t stop people from claiming that they loved eating Jiffy as a kid. These peanut butter aficionados are likely confusing this fictitious brand with the similar-sounding Jif or Skippy. And it’s not just peanut butter — the Mandela Effect is prevalent among the foods we know (or think we know) and love. “Fruit Loops” are actually named “Froot Loops,” there’s no hyphen in KitKat, and it’s “Cup Noodles,” not “Cup O’ Noodles.”

View of the Berenstain Bears' family.
Credit: debra millet/ Alamy Stock Photo

Berenstain Bears or Berenstein Bears?

One visit to the Berenstain Bears’ official website and you can see that it’s clearly spelled “Berenstain.” The beloved children’s books about a family of bears were named after authors Stan and Jan Berenstain, who — like their creations — had an “a” in their last name. Yet many people who’ve read the books continue to insist (erroneously) that the name was once somehow spelled differently. In their possible defense, some early merchandise mistakenly featured both spellings, which may have led to some of the confusion. On top of that, audio tapes pronounced the name as “-steen,” which could have had a lasting influence on our collective psyche. Despite these arguments, the title is and always has been written as “The Berenstain Bears.”

Darth Vader from Star Wars.
Credit: United Archives GmbH/ Alamy Stock Photo

Darth Vader Never Said “Luke, I Am Your Father”

“Luke, I am your father” may be one of the most misquoted movie phrases of all time. Every Star Wars fan can remember the pivotal scene from Star Wars: Episode V – The Empire Strikes Back, in which Darth Vader reveals that he’s Luke Skywalker’s, well, father. But the phrasing most people know is incorrect — watch it back and you’ll find that Vader actually says, “No, I am your father.” This is just one of many examples of the Mandela Effect in film. The queen in Disney’s 1937 animated film Snow White never says, “Mirror, mirror, on the wall,” referring to it instead as “Magic mirror.” And at no point in The Silence of the Lambs does Hannibal Lecter ever say, “Hello, Clarice.” However, after years of fans misquoting the movie, the line “Hello, Clarice” was finally written into the film’s 2001 sequel.

Aerial view of two people playing Monopoly.
Credit: Maria Lin Kim/ Unsplash

The Monopoly Man Never Wore a Monocle

The Monopoly Man is known for his top hat, mustache, and monocle, right? Well, that popular image is at least partly wrong. While the top hat and mustache have been part of Rich Uncle Pennybags’ appearance since he was first introduced in 1936, he’s never worn a monocle. Some psychologists believe that our collective subconscious could have been influenced by the advertising mascot Mr. Peanut (the mascot for Planters Peanuts), who’s just as well known and wears both a top hat and monocle. Gene Brewer, an associate professor in cognitive psychology at Arizona State University, explains that our brains can combine subjects with similar traits — “In studies, when you show participants word pairs and ask them to remember ‘blackmail’ and ‘jailbird,’ half of them will later say they remember learning the word ‘blackbird.’”

Close-up of a Fruit of the Loom clothing tag.
Credit: Lenscap/ Alamy Stock Photo

Fruit of the Loom’s “Vanishing” Cornucopia

Take a look at the tag on a piece of Fruit of the Loom apparel. Now take a look again, just to be sure. Even though every fiber of your being may have thought otherwise, there’s no cornucopia to be found in the logo. As far back as 1893, when the logo was introduced — long before anyone on the internet claimed differently — it’s just been a simple combination of an apple and different varieties of grapes, with leaves on the side. It’s not clear why so many people remember a cornucopia being present.

Smokey the Bear and "Little Smokey".
Credit: Bettmann via Getty Images

It’s Just “Smokey Bear”

For over 75 years, the U.S. Forest Service has featured an ursine mascot warning about forest fires. After all this time, you’d think we’d know his name. Commonly and mistakenly referred to as “Smokey the Bear,” this long-tenured advertising icon is actually just Smokey Bear. Some attribute this mistake to a 1952 song about Smokey, in which songwriters Steve Nelson and Jack Rollins added a “the” to his name in order to retain the song’s rhythm. While some may continue to argue over Smokey’s name, there’s much less ambiguity when it comes to who can prevent forest fires. That’s just “you.”

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by BOOCYS/ Shutterstock

From the comfiest of sneakers to the highest of stilettos, shoes are a key component of any wardrobe. But while loafers and clogs may seem like just another accessory to some, footwear has a rich and fascinating history dating back millennia. So lace up your boots and take a stroll through this list of six incredible facts about shoes.

Converse shoes are seen in a store.
Credit: Joe Raedle/ Getty Images News via Getty Images

Chuck Taylor All Stars Were the First Signature Athletic Shoes

Long before Jordans and Kobes hit the market, the first athlete to lend his name to a signature shoe was Chuck Taylor, a semiprofessional basketball player from Indiana. Converse created its All Star sneaker in 1917 with the sport of basketball in mind, and by 1921, Taylor had signed on to help sell the shoes out of the company’s office in Chicago. Taylor wasn’t a celebrity in the same way that today’s NBA players are, but as part of his job, he organized promotional basketball clinics for Converse and worked with coaches and athletes all over the country. He became so closely associated with the brand that people started referring to All Stars as “Chuck Taylor’s shoes,” even before his name was physically affixed to the sneakers in the early 1930s.

Within a few decades, other signature shoes followed. In 1958, Celtics star Bob Cousy worked with a company called PF Flyers to design a shoe that sold 14 million pairs in its first year. And in 1973, Puma released the Puma Clyde, named for New York Knicks star Clyde Frazier. Of course, the biggest names in the signature shoe game are Nike and Michael Jordan, who teamed up on the Air Jordan I (the first of many releases) in 1985. Jordan is undoubtedly Nike’s most successful signature athlete, but he wasn’t the company’s first. That title belongs to Wayne Wells, a freestyle wrestler who won gold at the 1972 Olympics. Wells signed a contract with Nike that same year and helped design a wrestling shoe to which he lent his name, paving the way for future athletes to sign on with the brand.

Neil Armstrong lunar surface training.
Credit: Heritage Images/ Hulton Archive via Getty Images

Neil Armstrong Left His Shoes on the Moon

After Neil Armstrong took one of the most consequential steps in human history, the boots he used to do so were discarded on the moon. In fact, both of the Apollo 11 astronauts who walked on the lunar surface — Armstrong and Buzz Aldrin — left behind their overshoes, along with their portable life-support systems. Leaving the gear wasn’t a symbolic gesture; it helped to offset the added weight of collected moon rocks that the spacecraft would be taking back. And the astronauts didn’t return to Earth barefoot, either. The treaded overshoes they abandoned were worn atop flat-soled pressure boots (which they kept) for added traction while traversing the moon’s rocky terrain.

Leather Chelsea boot detail on wood.
Credit: Thomas Faull/ iStock via Getty Images Plus

The First Slip-On Elastic Boots Were Made for Queen Victoria

In the early 1800s, boots were a popular style among both men and women, though fastening them with rudimentary laces and buttons made putting them on difficult. English inventor Joseph Sparkes Hall realized there had to be a better way, and in 1837, he designed the first pair of elastic-sided boots, which he presented to Queen Victoria that same year (the year she ascended to the throne).

This new slip-on boot provided the comfort of slippers with the stability of laced shoes, and became well known thanks to Victoria’s blessing. As Sparkes Hall explained in The Book of the Feet, written in 1846, “Her Majesty has been pleased to honor the invention with the most marked and continued patronage; it has been my privilege for some years to make boots of this kind for Her Majesty, and no one who reads the court circular, or is acquainted with Her Majesty’s habits of walking and exercise in the open air, can doubt the superior claims of the elastic over every other kind of boot.” Hall’s patented design would go on to inspire the modern-day Chelsea boot, which has been worn by everyone from the Beatles to the Stormtrooper characters in Star Wars.

Female horseback rider in heels.
Credit: Anne Ackermann/ Photodisc via Getty Images

High Heels Were Originally Worn by Horseback Riders

Though they’ve since become a symbol of high fashion, high-heeled shoes originally had more of a practical use. They were commonly worn throughout horseback-riding cultures around the 10th century, and were particularly popular in Persia, where the cavalry found that 1-inch heels added extra stability in stirrups when they stood up to fire their bows. Persia later sent a delegation of soldiers to Europe in the 17th century, which in turn inspired European aristocrats to add high heels to their personal wardrobes. Heeled boots became all the rage among members of the upper class throughout Europe, and in 1670, France’s Louis XIV passed a law mandating that only members of the nobility could wear heels. In the 18th century, the style became increasingly gendered as heels grew in popularity among women. By the start of the French Revolution in 1789, men of the French nobility had largely given up on the trend in favor of broader, sturdier shoes.

The presidential shoe collection.
Credit: Haydn West – PA Images via Getty Images

One Company Has Made Shoes for U.S. Presidents Since 1850

Though there’s no exclusive contract, Johnston & Murphy serves as the unofficial footwear provider of U.S. Presidents, having designed shoes for America’s commanders in chief since the company was established in 1850 by William J. Dudley, who offered to make shoes for President Millard Fillmore. (Dudley called his business the William J. Dudley Shoe Company, but after Dudley died, his partner James Johnston brought on William Murphy as a new partner and renamed the business.)

In the decades since, Johnston & Murphy has been tasked with crafting a wide variety of presidential kicks, with the smallest being a size 7 for Rutherford B. Hayes and the largest a size 14 for Abraham Lincoln. Some of the more famous styles have included black lace-up boots for Lincoln, black wingtips for President Kennedy, black cap-toe shoes beloved by Ronald Reagan, and black oxfords for Barack Obama, which came in a handcrafted box of Hawaiian-sourced wood.

King Charles I's buskin boots.
Credit: Heritage Images/ Hulton Archive via Getty Images

Ancient Greek Actors Wore Different Footwear for Dramatic and Comedic Roles

In addition to their narratives, ancient Greek tragedies and comedies could often be distinguished by the type of footwear the actors wore. Dramatic actors wore a style known as a buskin, a boot with a thick sole believed to be anywhere between 4 and 10 inches high. This set them apart from comedic actors, who wore just thin socks on their feet. It was thought that buskins gave serious performers a more prominent stage presence compared to their humorous counterparts.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by B Isnor/ Shutterstock

For millennia, lighthouses have guided wayward ships away from hazardous waters, providing safety during powerful storms. Lighthouses still help seafarers today, though modern sailors have many more navigational tools at their disposal, from GPS to detailed nautical charts, buoys, and radar beacons. These days, many lighthouses have become romantic relics of another era, one in which people set sail with only the power of the wind and looked toward lighthouses to guide them back home. These seven illuminating facts about lighthouses include just a few reasons why these structures continue to fascinate us and remain popular tourist destinations today.

Lighthouse of Alexandria.
Credit: MR1805/ iStock

Antiquity’s Most Famous Lighthouse Is One of the Seven Wonders of the Ancient World

The Lighthouse of Alexandria, also known as the Pharos of Alexandria, was built during the reign of Ptolemy II of Egypt, around 280 BCE. For centuries, it was one of the tallest structures in the world, with reports estimating that it reached about 350 feet high. The lighthouse stood on the island of Pharos in the harbor of Alexandria, the city named after Alexander the Great and the capital of the Ptolemaic Kingdom (which lasted from 305 BCE to 30 BCE).

Sadly, frequent earthquakes in the Mediterranean region badly damaged the lighthouse, and it was completely destroyed by the 14th century. However, the lighthouse served as an archetype from which later lighthouses derived, and its name lives on in the words for “lighthouse” in many Romance languages, such as the Spanish and Italian faro and the French phare; even in English, “pharos” is sometimes used to mean “lighthouse.” In 1994, French archaeologists discovered remains of the famous lighthouse on the seabed, and UNESCO is working to declare the area a submerged World Heritage Site.

Aerial view of Big Sable Point Lighthouse near Ludington, Michigan.
Credit: Frederick Millett/ Shutterstock

The U.S. Has More Lighthouses Than Any Other Country

The United States’ first lighthouse was built in 1716 on Little Brewster Island near Boston, Massachusetts. Lighthouses were so important to early America that in 1789 the first U.S. Congress passed the Lighthouse Act, which created the United States Lighthouse Establishment under the Department of the Treasury. Today, the U.S. is home to over 700 lighthouses — more than any other country in the world. However, the state with the most lighthouses doesn’t border an ocean at all. Michigan — surrounded by four of the five Great Lakes — is home to 130 lighthouses, including the remote lighthouse on Stannard Rock, nicknamed “the loneliest place in North America.”

Hercules tower in Spain.
Credit: Migel/ Shutterstock

The Romans Built the Oldest Surviving Lighthouse

In the first century CE, the ancient Romans built the Farum Brigantium, known today as the Tower of Hercules — the world’s oldest lighthouse that is still functional. The lighthouse continues to guide and signal sailors from La Coruña harbor in northwestern Spain. An 18th-century restoration of the tower thankfully preserved the original core of the structure while improving its functionality. Now a UNESCO World Heritage Site, the Tower of Hercules is the only Greco-Roman lighthouse from antiquity that has retained such a high level of structural integrity, and it continues to shine its light across the Atlantic to this day.

View of Nantucket Lightship.
Credit: Cathy Kovarik/ Shutterstock

“Lightships” Once Sailed the Seas

Although lighthouses were originally designed as immovable land structures, in 1731 English inventor Robert Hamblin designed the first modern lightship and moored it at the Nore sandbank at the mouth of the Thames River. As its name suggests, the ship had a lighted beacon and was used to provide safe navigation in areas where building a land-based lighthouse was impractical. The U.S. had its own lightship service, which began in 1820 and lasted 165 years. The country’s last lightship, the Nantucket, retired in 1985 after being replaced by more modern technology such as automated buoys. Today, the United States lightship Nantucket (LV-112) is registered as a National Historic Landmark.

View of a lens used for lighthouses.
Credit: Science & Society Picture Library via Getty Images

An 1819 Invention Gave Lighthouses a Major Upgrade That Still Exists Today

In the early 19th century, lighthouses weren’t particularly good at steering ships away from land, as the most common lenses used in lighthouses at the time, known as Lewis lamps, were not nearly powerful enough. Enter French inventor Augustin-Jean Fresnel, who in 1821 introduced his eponymous lens. The Fresnel lens used a series of prisms to focus all the light from a lamp in one direction and magnify it into a much more powerful beam. Soon, Fresnel lenses were installed in lighthouses all over the world. Not only did they offer vastly improved functionality, they were also stunningly beautiful. The Fresnel lens was so revolutionary that the technique is still used today in flood lights and professional lighting equipment.

Baltimore Harbor Lighthouse in Chesapeake Bay, Maryland.
Credit: Michael Ventura/ Alamy Stock Photo

The U.S. and Soviet Union Experimented With Nuclear-Powered Lighthouses

In 1964, the Baltimore Harbor Light, which sits at the mouth of the Magothy River, became the first — and last — nuclear-powered lighthouse ever operated by the United States. Originally constructed in 1908, the Baltimore Harbor Light operated as a far more typical lighthouse for 56 years, until it became the subject of a Coast Guard experiment. The U.S. government installed a 4,600-pound atomic fuel cell generator, and the lighthouse ran on nuclear power for a year before the project was dismantled (thankfully with no signs of nuclear contamination).

Although the U.S.’s experiment with nuclear lighthouses was short-lived, the Soviet Union embraced them more enthusiastically, building 132 nuclear-powered lighthouses along the notoriously inhospitable Northeast Passage, a shipping route between the Atlantic and Pacific oceans along Russia’s Arctic coast. After the fall of the Soviet Union in the early 1990s, Russia abandoned the upkeep of these lighthouses. But, being nuclear-powered, they kept shining their light for years afterward.

Flannan Islands Lighthouse (21 miles west of Lewis).
Credit: Ian Cowe/ Alamy Stock Photo

A Remote Scottish Lighthouse Was the Site of an Enduring Mystery

The Flannan Isles Lighthouse is located on the remote, uninhabited island of Eilean Mòr, about 20 miles west of the Isle of Lewis off Scotland’s northwest coast. From the outside, the lighthouse is remarkably similar to many other lighthouse structures built around the turn of the 20th century — so you might not guess that it was the setting of a notorious unsolved disappearance that inspired the 2018 film The Vanishing starring Gerard Butler.

On December 15, 1900, the crew of the transatlantic steamer Archtor noticed the lighthouse wasn’t lit as the ship traveled to the port town of Leith. A team from the local lighthouse board visited the island a few days later and discovered no sign of the three lighthouse keepers who were supposed to be on duty. The table was set for dinner, and an oilskin (a type of raincoat) was still on its hook. A preliminary investigation concluded that two of the lighthouse keepers likely traveled to the west platform to secure a supply box during a storm and accidentally tumbled into the sea. When the last keeper went to investigate (without his oilskin), he likely met a similar fate. Rumors on the mainland posited more fanciful explanations, including mythical sea serpents or even murder. While those explanations have been largely dismissed, it’s unlikely we’ll ever know for sure what happened at Flannan Isles Lighthouse.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by TORWAISTUDIO/ Shutterstock

Movement does our bodies good. But you know what’s easier than running a marathon? Learning a few quick facts about exercise, no pain or gain required.

We aren’t doctors, so we can’t advise you on the best ways for you to exercise — but we can rattle off some trivia about it. Where did the 10,000 steps benchmark come from? What’s the deal with a “runner’s high”? These six interesting facts may not help you get fit, but at least you’ll learn something.

Group of runners during sunset.
Credit: YanLev/ iStock

Exercise Can Get Some People High

You may have heard of a “runner’s high,” or a rush of euphoria after exercise that’s not actually limited to runners. It’s a real biological phenomenon, although it’s relatively rare. The commonly held belief is that it’s caused by hormones called endorphins, but they don’t cross the blood-brain barrier. The more likely culprit is the endocannabinoid system, the same system that cannabis interacts with to create its psychoactive effects.

Exercise increases the amount of endocannabinoids in the bloodstream, which can cross the blood-brain barrier. For some people, this can cause a rush of euphoria, reduced anxiety, and improved mood. This isn’t especially common, though, and there’s much about the phenomenon scientists are still trying to figure out.

3D rendered medically accurate illustration of the hippocampus.
Credit: SciePro/ Shutterstock

Exercise Can Help You Think More Clearly

Ever take a walk to clear your head? It might not just be a change of scenery that gives you a much-needed reset. A growing body of research shows that exercise, including walking, increases cognitive ability.

Exercising increases blood flow, including to the brain. The increase in energy and oxygen could boost performance. But it gets more complex than that. When we exercise, the hippocampus, a part of our brain necessary for learning and memory, becomes more active — and when there’s increased energy in the hippocampus, we think more effectively. Regular exercise could even help counteract age-related decline in brain health.

A baby boy in a diaper crawling next to a window.
Credit: Onjira Leibe/ Shutterstock

Even Babies Need Exercise

Babyhood offers an unparalleled opportunity to mostly just eat and sleep, but in between, infants need at least some exercise. Giving infants several opportunities to move around each day could improve motor skills, bone health, and social development. Tummy time — supervised time with a baby lying face-down — strengthens babies’ neck, shoulder, and arm muscles, too. The World Health Organization (WHO) recommends that babies be active several times a day, including at least 30 minutes on the stomach. Babies still get plenty of dozing time, though; the WHO recommends 12 to 16 hours of sleep for infants 4 months through 11 months of age.

A woman walking with a pedometer reaching her goal of 10,000 steps.
Credit: Angela Schmidt/ Shutterstock

10,000 Steps Was Invented for Pedometer Marketing

If you have a smartwatch or other fitness tracker, you might get a little celebratory notification when you hit 10,000 steps — or maybe you’ve just heard someone refer to “getting their 10,000 steps in.” That benchmark persists because it’s a nice, round number that’s easier to use in marketing materials, not because there’s any scientific basis for it.

Way back in the 1960s, a Japanese company invented a pedometer called Manpo-kei, or “10,000 steps meter,” building off momentum from the 1964 Tokyo Olympics. Nearly 60 years later, 10,000 steps is still the default goal in many step counters, including Fitbit devices.

While getting 10,000 steps a day is a healthy habit, you don’t have to take that many to see benefits from walking, according to experts. One study found that just 4,400 steps a day can lower the risk of early death by 41%. Benefits increased with additional steps, but topped out at around 7,500 (at least in one study looking at mortality in older women). Of course, your mileage may vary depending on your goals, exercise pace, and general health, but there’s no reason to feel discouraged if you’re not getting a full 10,000 in every day.

Greek Gymnasium at the Time of the First Olympic Games.
Credit: Universal History Archive/ Universal Images Group via Getty Images

“Gymnasium” Comes From the Greek for “School for Naked Exercise”

Today, “gymnasium” or “gym” can refer to a lot of things having to do with physical activity, like a school gymnasium, a health club, or a playground jungle gym. It comes from the ancient Greek word gymnasion, or “school for naked exercise.” Gymnos meant “naked,” and the people using the gym didn’t wear clothes — they simply coated themselves in oil or dust. In ancient Greece, physical education was just as important as the arts, and these facilities eventually grew more elaborate, with surrounding changing rooms, baths, and practice rooms.

Close-up image of a woman gardening.
Credit: Juice Flair/ Shutterstock

Gardening Counts as Exercise

Getting your hands dirty in your garden isn’t just a mood-boosting pastime — it’s great exercise, too. All that digging, hauling, and moving works all your major muscle groups, improves mobility, and boosts endurance. It burns some serious energy, too: Even light gardening or yard work can burn more than 300 calories per hour for a 154-pound person, according to the Centers for Disease Control and Prevention. That’s comparable to going dancing or taking a hike. For heavy yard work, like chopping wood, the number jumps up to 440 calories per hour, although the exact number will vary depending on the nature of the work and each individual body.

It’s easy to build a more strenuous workout from your existing gardening routine with simple adjustments like carrying heavier cans of water, switching to a push mower, or walking more laps around your yard. And there’s an additional healthy bonus to gardening for exercise: fresh veggies!

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.

Original photo by georgeclerk/ iStock

We often rely on the medicine cabinet when headaches and stomach pains creep up, but how often have you paused in a moment of discomfort to think about the origins of the medications that aid your maladies? Chances are, not very often. Yet many of the over-the-counter drugs kept in a home first-aid kit have their own interesting stories — like these eight common items.

Close-up of a pack of Benadryl pills.
Credit: Smith Collection/Gado/ Archive Photos via Getty Images

Benadryl

The tiny pink tablets that can clear up allergic reactions may never have been invented if chemist George Rieveschl had succeeded in his first career. The Ohio-born scientist initially planned to become a commercial artist, but couldn’t line up much work thanks to the Great Depression. Instead of pursuing his art dream, Rieveschl studied chemistry at the University of Cincinnati, where his experiments years later on muscle-relaxing drugs uncovered a histamine-blocking medication. “It seemed like bad luck at the time,” Rieveschl once told the Cincinnati Post about his unexpected career shift, “but it ended up working pretty well.”

Tums in a store aisle.
Credit: Jeff Greenberg/ Universal Images Group via Getty Images

Tums

Creating a new medication is sometimes a labor of love — at least, that’s the case for Tums. In 1928, Missouri pharmacist Jim Howe developed the chalky tablets to treat his wife’s indigestion. However, it wasn’t until Nellie Howe gave out samples of her husband’s concoction to seasick travelers on a cruise ship that Jim was inspired to sell his acid relievers. Tums hit pharmacy shelves two years later, sold for 10 cents per roll, with a name chosen from a radio contest in St. Louis, the same city where 99% of Tums have been made for nearly 100 years.

Eye drop entering an eye.
Credit: Marina Demeshko/ Shutterstock

Eye Drops

Soothing irritated eyes isn’t just a modern problem — researchers believe humans have been using some form of eye drops for at least 3,500 years. Medicinal recipes from ancient Egyptians included heavy metals like copper and manganese. Modern eye drops are far removed from these early origins, but most contain saline, which was first used for medical treatment in 1832.

Pepto Bismol in the grocery store.
Credit: Justin Sullivan/ Getty Images Entertainment via Getty Images

Pepto-Bismol

No one knows why the original maker of bismuth subsalicylate, aka Pepto-Bismol, chose to dye it a bright pink, considering the solution is actually beige before coloring is added. However, the unnamed physician from the early 1900s who first mixed up the stuff is credited with trying to cure cholera, a deadly foodborne and waterborne illness that causes severe stomach distress. Initially called “Mixture Cholera Infantum” and meant for small children, the stomach-soothing blend contained zinc salts, oil of wintergreen, and a now-iconic pink coloring, among other ingredients. While the early version of Pepto couldn’t cure cholera (which requires rehydration and sometimes antibiotics), it did help treat symptoms, which is why it became popular with doctors. In the early 20th century, New York’s Norwich Pharmacal Company sold its version, called Bismosal, in 20-gallon tubs; today, name-brand Pepto-Bismol is manufactured by Procter & Gamble, which continues to dye it that recognizable rosy hue.

Aspirin pills in a hand to be taken orally.
Credit: dszc/ iStock

Aspirin

The headache-melting ingredient in aspirin, acetylsalicylic acid, is a human-made substance, though it’s a cousin to salicylic acid, a naturally occurring substance found in the bark of willow and myrtle trees. Humans have gathered those ingredients for medicinal remedies for millennia; ancient Egyptians and Greeks used them to tamp down fevers and pain. Synthetic versions were first made in 1874, and by the turn of the century, German chemist Felix Hoffmann created the first aspirin — initially sold in powder form — as a remedy for his father’s rheumatism.

Acetaminophen pills in a prescription medication bottle.
Credit: luchschenF/ Shutterstock

Acetaminophen

American chemist Harmon N. Morse first developed acetaminophen, which would eventually become the world’s most widely used pain reliever, in 1878. However, it would take decades for the medication — called paracetamol outside the U.S. — to become an over-the-counter medication, thanks to fears that it could cause methemoglobinemia, a condition in which blood cells can’t properly carry oxygen throughout the body, often signaled by a bluish skin discoloration. (The fears turned out to be unfounded.) In 1955, McNeil Laboratories began manufacturing its version of the drug, called Tylenol, which would eventually be purchased by Johnson & Johnson and marketed as safer and more effective than aspirin. While many scientists didn’t agree with this claim, Tylenol became an over-the-counter medication within five years and soared in popularity in the following decades.

A woman holds a box of 400mg ibuprofen tablets in her hand and a glass of water.
Credit: Cristian Storto/ Shutterstock

Ibuprofen

The inventors of ibuprofen created the inflammation-relieving drug with one health condition in mind: rheumatoid arthritis. Pharmacologist Stewart Adams and chemist John Nicholson began their hunt for an aspirin alternative in the 1950s, with the goal of creating a safe, long-term option for patients with the autoimmune condition. Testing one experimental compound on his own headache following a night of drinking, Adams found the prizewinning formula, which was patented in 1962 and rolled out to U.K. pharmacies under the name Brufen. By 1984, ibuprofen had become an over-the-counter medication in the U.K. and U.S., and by the time its original patent expired in 1985, more than 100 million people in 120 countries had taken the medication.

An opening capsule filled with fruits and nutrients.
Credit: Eoneren/ iStock

Multivitamins

Vitamins are more preventative than reactive; they can’t cure indigestion or discomfort. But in their earliest forms, vitamins were meant to ward off health conditions caused by nutritional deficiencies. The first commercial vitamin tablets emerged in 1920 following decades of research into illnesses such as beriberi (a vitamin B1 deficiency), and gained traction during World War II due to fears of nutritional constraints caused by rationing. Vitamin consumption held on after the war ended, and today nearly 60% of Americans take a daily vitamin or dietary supplement.

Nicole Garner Meeker
Writer

Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.

Original photo by Philip Reeve/ Shutterstock

At only about 10% of the global population, left-handed people are definitely in the minority. But even though left-handers are few and far between, some of society’s most notable figures have written, thrown a ball, or played an instrument with their left hand. There are even some famous fictional characters who are avowed southpaws, including Ned Flanders from The Simpsons, who runs a store, the Leftorium, catering to left-handed people. As we celebrate International Left Handers Day — a holiday that falls each August 13 — let’s get to know a little more about some of the most prominent lefties throughout history.

Leonardo Da Vinci statue in Firenze, Italia.
Credit: IPGG/ Shutterstock

Leonardo da Vinci

Though there’s some argument over whether Leonardo da Vinci was exclusively a lefty or actually ambidextrous, his peers referred to him by the term “mancino,” which is Italian slang for a left-hander. Leonardo was known for a unique style of taking notes, referred to as “mirror writing,” in which he wrote from right to left. (One theory is that the method was meant to avoid ink smudges with his left hand.) His left-handedness also now plays a key role in authenticating his drawings, as experts often look for signs of left-handed strokes and slants in order to confirm whether a piece is a genuine Leonardo da Vinci work. While the Renaissance polymath embraced being a lefty, one of his contemporaries defied it — Michelangelo actually retrained himself to write and draw with his right hand instead of accepting his natural left-handedness.

Babe Ruth smacks out a few homers in practice before a game.
Credit: Bettmann via Getty Images

Babe Ruth

Known for being arguably the greatest baseball slugger of all time, the left-handed-hitting Babe Ruth began his career as one of the most dominant southpaw pitchers of the 1910s. Ruth switched to the outfield after being sold from the Boston Red Sox to the New York Yankees, where his lefty power stroke earned him nicknames like the “Great Bambino” and “Sultan of Swat.” All told, Ruth socked 714 homers during his illustrious career, good enough for third place behind the scandal-plagued Barry Bonds and legendary Hank Aaron. On rare occasions, Ruth would experiment with batting right-handed, though his success from that side was limited.

Jimi Hendrix (1942 - 1970) caught mid guitar-break during his performance.
Credit: Evening Standard/ Hulton Archive via Getty Images

Jimi Hendrix

Revered as one of the greatest guitar virtuosos in rock history, Hendrix made the unique choice to play a right-handed guitar upside down in order to accommodate his left-handed proclivities (although he also performed some tasks with his right hand). His father, Al, forced Jimi to play guitar right-handed, because he believed that left-handedness had sinister connotations (a belief that was once common — the word “sinister” comes from Latin meaning “on the left side”). While Jimi did his best to oblige his father when Al was present, he would flip the guitar as soon as his dad left the room, and he also had it restrung to more easily be played left-handed. Hendrix isn’t the only legendary lefty rocker: Paul McCartney of the Beatles and Nirvana’s Kurt Cobain also strummed their guitars with their left hands.

Marie Curie (1867-1934), Polish-French physicist who won two Nobel Prizes.
Credit: Everett Collection/ Shutterstock

Marie Curie

Given the fact that men are more likely to be left-handed than women, this list has been sorely lacking thus far in terms of famous females. One of history’s greatest left-handers, however, was none other than the groundbreaking scientist Marie Curie. A Nobel Prize winner, Curie helped to discover the principles of radioactivity and was the matriarch of a family full of lefty scientists; her husband Pierre and daughter Irene also possessed the trait. Left-handedness is surprisingly common among well-known scientists even outside of the Curie family — Sir Isaac Newton and computer scientist Alan Turing were southpaws too.

Astronaut Neil Armstrong smiles inside the Lunar Module.
Credit: NASA/ Hulton Archive via Getty Images

Neil Armstrong

According to NASA, more than 20% of Apollo astronauts were lefties, which makes them more than twice as likely to be left-handed compared to the average person. Neil Armstrong was no exception to this statistical oddity — the first man to walk on the moon was indeed left-handed. Needless to say, Armstrong’s left-handedness was truly out of this world.

U.S. President Barack Obama waves after he spoke to the American people.
Credit: Alex Wong/ Getty Images News via Getty Images

Barack Obama

The 44th President of the United States was the eighth left-handed individual to hold said office, though prior to the 20th century, only President James Garfield is known to have been a lefty. Lefties were elected to the presidency more frequently beginning in 1929 with Herbert Hoover: Six Presidents since have been left-handed, including a run of three straight with Ronald Reagan, George H. W. Bush, and Bill Clinton. While signing his first executive order in 2009, Obama quipped: “That’s right. I’m a lefty. Get used to it.” Presidential left-handedness may not be a coincidence — some experts believe that lefties have a stronger penchant for language skills, which could help their rhetoric on the campaign trail.

Statue of Queen Victoria.
Credit: Philip Reeve/ Shutterstock

Queen Victoria (And Other Members of the Royal Family)

England’s great monarch Queen Victoria (who ruled 1837-1901) was known for her left-handedness. Though she was trained to write with her right hand, she would often paint with her natural left. She’s just one of a few members of the royal family with the trait. Victoria’s great-grandson King George VI and George’s wife, Elizabeth, were also regal lefties, and George’s left-handedness was often prominently on display while playing tennis, one of his favorite hobbies. Two current heirs to the throne and presumed future kings are also proud lefties: Prince William has joked that “left-handers have better brains,” and his young son George has shown a penchant for using his left hand while doing everything from clutching toys to waving at adoring fans.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by Jeremy Bezanger/ Unsplash

For some, cooking is an art. For others, a hobby. But for a considerable number of people, cooking is a daily obstacle that must be overcome in order to satisfy their hunger. Whether you’re new to the kitchen or know a stovetop like the back of your hand, there are some food questions that just have to be asked. From understanding chemical reactions to testing food myths, here are a few things that every cook in the kitchen should know.

Close-up of a woman picking up a dropped cupcake from the floor.
Credit: New Africa/ Shutterstock

Is the “Five-Second Rule” Real?

Most people know the “five-second rule”: the idea that if food that’s fallen on the floor has been there for less than five seconds, it’s still acceptable to eat. No one knows the origins of this questionable rule — and plenty of people think it’s kind of gross — but that hasn’t stopped countless people from picking up a dropped Oreo and shouting “five-second rule!”

Though its origins may be murky, actual scientists have devoted time and resources to testing the five-second rule. And surprisingly, it’s not an entirely bogus theory — depending on the cleanliness of the floor.

To be clear, no scientist has gone on record recommending that you eat dropped food. However, an experiment conducted at the University of Illinois Urbana-Champaign found that as long as the food was picked up within the five-second time limit, the amount of microorganisms transferred to it was minimal. That said, the experiment was conducted after first sanitizing the flooring, and it only applied to hard flooring like tile and wood, which are less likely to serve as incubators for pathogens. No testing was conducted on carpeting and other soft surfaces, which can hold moisture and become breeding grounds for bacteria.

Let’s cut to the chase: It’s definitely not recommended to blindly follow the five-second rule. You have no way of knowing which pathogens are on your floor, so unless you regularly disinfect, it’s best to play it safe. According to the experts, dry foods are slightly safer than wet ones, because moisture makes it easier for pathogens to attach themselves to food. So a potato chip or cracker might pick up only a minimal number of pathogens, whereas an apple or slice of banana might test positive for a higher pathogen count. But we recommend a new rule: When in doubt, throw it out.

Aerial view of a person cutting a raw onion.
Credit: Alina Kholopova/ Shutterstock

Why Do You Cry When You Cut Onions?

There’s no need to cry over spilled milk, but what about chopped onions? You can thank a chemical combination of enzymes and sulfur for the tears that well up while you make dinner.

Onions use the sulfur they absorb while growing to make sulfur-containing amino acids, and they also produce enzymes that act on those acids. The acids and enzymes are stored in separate compartments within the onion’s cells (the enzymes sit in structures called vacuoles), so while the onion remains whole, the two stay apart. Once you cut into the onion, however, everything mixes together. When the two substances combine, they form a chemical known as syn-propanethial-S-oxide, or lachrymatory factor (LF). LF is a volatile irritant that easily vaporizes into the air.

LF isn’t strong enough to affect tougher parts of your body such as your skin, but it can irritate more sensitive regions. As the vapors waft up toward your face, your eyes will begin to sting. Your body — sensing the irritant — will release a torrent of tears in an attempt to wash the chemicals from your eyes. Luckily, LF can’t do any serious damage, even in high quantities.

Producing LF is the onion’s way of defending itself against anything that might want to eat it. As soon as an animal bites into the bulb, its eyes start to burn, and it’s reminded to stay away from onions.

Unfortunately for onions, humans are persistent.

A little boy licking the snow from outside.
Credit: Nicole Elliott/ Unsplash

How Do Taste Buds Work?

Taste buds are the reason we pucker our lips when we suck on a lemon wedge or smile when we savor a piece of chocolate. They’re how we can identify our favorite foods. In fact, without taste buds we wouldn’t be able to sense the five basic tastes: salty, sweet, sour, bitter, and umami. But what exactly are taste buds, and how do they work?

Every tongue is covered in visible bumps known as papillae, which fall into four categories: filiform, fungiform, circumvallate, and foliate. Every type except filiform carries a number of taste buds, which are continuously being replaced. In total, the average tongue has about 10,000 taste buds, each replaced roughly every two weeks.

Despite what some may believe, there are no specific areas of the tongue responsible for a particular taste. Instead, it’s the taste receptors scattered across your tongue that pinpoint the proper flavor.

The taste buds in your papillae are simply a combination of basal cells, columnar (structural) cells, and receptor cells. Different types of receptor cells are coated with proteins that attract specific chemicals linked to one of the five basic tastes. Those chemicals reach the receptor cells through microvilli, the microscopic hairs on every taste bud; when a receptor cell binds with its target chemical, it sends a signal along nerve fibers to the brain.

There is more to taste than just the tongue, however. Lining the uppermost part of the human nose are olfactory receptors that are responsible for smell, and they send messages that help the brain further home in on specific tastes. When you chew food, aroma compounds are released that travel to the upper part of your nose and activate those olfactory receptors. These receptors work in conjunction with the receptors on taste buds to help the brain recognize the taste. This helps explain why a cold or allergies can hinder one’s sense of taste, making everything seem bland.

Young woman goes to sleep under a white comforter.
Credit: Troyan/ Shutterstock

Does Tryptophan Really Make You Tired?

Anyone who’s passed out after indulging in a Thanksgiving feast knows the theory: tryptophan, an amino acid found in turkey, makes you sleepy. But is this conventional wisdom actually true?

The short answer is … not exactly. L-tryptophan, as it’s officially known, can also be found in everything from chicken and yogurt to fish and cheese, none of which are typically associated with sleepiness. Once ingested, tryptophan is used by the body to make the B vitamin niacin and the neurotransmitter serotonin. Serotonin plays a key role in regulating melatonin levels and sleep itself, hence the apparent causal link between turkey and fatigue.

Plenty of other amino acids are present in turkey, however, and most of them are found in greater abundance — meaning that, when all those chemicals are rushing to your brain after your second helping, tryptophan rarely wins the race.

If, however, the tryptophan gets a little assistance in the form of carbohydrates, it gets a better shot at dominating your system. Eating carbs — which abound in Thanksgiving dishes like mashed potatoes and stuffing — produces insulin, which flushes every amino acid except tryptophan from your bloodstream. Thus, your post-Thanksgiving sleepiness is actually the result of a perfect storm composed of tryptophan, carbs, and the large portions typically associated with the holiday.

Raspberries and blueberries organized on a pink background.
Credit: Jeremy Bezanger/ Unsplash

What Are Superfoods?

While many nutritionists and physicians recommend healthy eating over fad diets, some foods offer more nutritional benefits than others. That’s why, in recent years, you might have heard about “superfoods” and why you should incorporate them into your diet.

The term “superfood” doesn’t come from medical science. Instead, it was coined by marketers at food companies to help boost sales. But in general, the term applies to particular foods that are nutrient-rich and provide significant health benefits when consumed regularly.

One example of a superfood is the egg, which features two powerful antioxidants: lutein and zeaxanthin. Eggs are also low in calories, averaging 77 calories apiece. And most importantly, they’re full of nutrients such as iron, phosphorus, and selenium, along with a myriad of vitamins, including A, B2, B5, and B12.

There are also a variety of fruits and vegetables that qualify as superfoods, including berries. Berries are rich in antioxidants, high in fiber, and contain a wide array of vitamins and minerals, particularly vitamins C and K1, manganese, copper, and folate. Nutrient levels can vary widely between berries, though; strawberries, for example, have the highest vitamin C levels of the superfood berries. These heart-healthy fruits can also help lower inflammation and improve blood sugar and insulin response.

While “superfood” might not have a hard definition, there’s plenty of evidence to show that certain foods can improve your health and reduce your risk of serious conditions such as cancer, high blood pressure, and heart disease.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Subbotina Anna/ Shutterstock

Oysters contain multitudes: They’re protein-rich treats, highly efficient water filters, reef builders, and pretty rock-makers. They exist in coastal regions all over the world, from the Aleutian Islands of Alaska to the warm waters around New Zealand, and we’ve been eating them for thousands of years. Yet as Jonathan Swift wrote in Polite Conversation, “He was a bold man that first ate an oyster.” After all, the shellfish can be a little treacherous to open, and what’s inside isn’t everybody’s cup of tea. But how much do you really know about oysters? Do they all make pearls? Have they always been a delicacy? What can you do with the shells? Pry open these six interesting facts about some of the world’s most divisive shellfish.

A man opening oysters and extracting pearls.
Credit: ebonyeg/ Shutterstock

Not All Pearls Are Shiny

Pearls, the semiprecious gems popular for jewelry and other adornment, are created when some kind of unwelcome object, such as a grain of sand, enters an oyster’s shell. The oyster shields itself by wrapping the irritating object in a substance called nacre, a tough material that develops inside the shells of oysters from the Aviculidae family, also known as pearl oysters. The nacre builds up into a pearl, which can be of several different colors.

Oysters cultivated for food are from the Ostreidae family, or true oysters. They create pearls when things sneak into their shells, too, but their pearls don’t have the same lustrous coating that those of pearl oysters do — so they end up just small and bland.

European flat oyster (Ostrea edulis) underwater.
Credit: aquapix/ Shutterstock

Oysters Can Change Sex

Many of the oysters commonly used for food, including European flat oysters, Pacific oysters, and Atlantic oysters, change sex during their lifetimes — sometimes a few times. European flat oysters alternate based on the seasons and water temperature. In other species, most oysters are born male, and the balance gradually shifts as many become female with age; most older oysters are female, though some change back at some point. The exact mechanism that makes this happen is still something of a mystery.

Living oyster under the sea water.
Credit: Pix Box/ Shutterstock

One Oyster Can Filter Up to 50 Gallons of Water a Day

Oysters are a critical part of marine ecosystems because they eat by filtering water, removing sediment and nitrogen in the process. One adult oyster can filter up to 50 gallons of water a day, although the actual rate depends on water conditions. Stressors such as high or low temperatures, predators, and especially dirty water can slow down an oyster’s feeding, so under more typical conditions an oyster filters 3 to 12.5 gallons of water a day, which is still extraordinarily helpful.

All this water filtration does have a couple of drawbacks: Too many oysters can reduce the nutrients in the water available to other animals, and because oysters take in a lot of junk, they can pass toxins on to us when we eat them.

Copious Oysters shells.
Credit: Sun_Shine/ Shutterstock

Oyster Shells Are Recyclable

Don’t throw away your oyster shells when you’re done shucking — they’re the best material for rebuilding oyster beds, which sometimes create giant reefs that can be home to all kinds of marine life. When oysters reproduce, they release larvae into the ocean, which float around looking for somewhere to attach themselves. With the loss of reef habitats, those spots can be harder to find. Those larvae love to cling to old oyster shells, which makes discarded shells one of the best tools for sustainable oyster farming and rebuilding marine ecosystems — something they certainly can’t do in a landfill. Many municipalities and conservation groups in oyster-rich areas offer some kind of recycling program.

Man placing metal bag with oysters on oyster farm.
Credit: Bartosz Luczak/ Shutterstock

Humans Have Been Cultivating Oysters for Thousands of Years

Oyster farms, particularly sustainable oyster farms, are nothing new. A 2022 archaeological study in the United States and Australia found that Indigenous groups cultivated oyster reefs as far back as 6,000 years ago, and managed to maintain healthy oyster populations for as long as 5,000 years, even with intense harvesting. The oldest oyster middens — hills of oyster shells — were in California and Massachusetts. One midden in Florida contained more than 18 billion oyster shells.

Overharvesting has damaged modern-day oyster populations; the study also found that 85% of oyster habitats from the 19th century were gone by the 21st century.

Fresh oysters platter with sauce and lemon.
Credit: Artur Begel/ Shutterstock

Oysters Were Once a Cheap Everyday Food

Oysters certainly have their fans in the 21st century, but not like they did in the 19th century. Back then, they were a staple protein because they were both abundant and extremely cheap, treasured in fine dining establishments and on city streets alike. Oyster houses were incredibly common, and inspired the kind of camaraderie and revelry that bars do today — some of them even sold beer and offered oysters as a free snack.

Their popularity wasn’t limited to coastlines; middle America couldn’t get enough of them, either. Oysters were shipped via rail even before beef was. Households would buy them by the barrel and put them in soups, sauces, and even stuffing.

So what happened to the oyster craze of yesteryear? Several things. With overharvesting, the supply wasn’t as great as it once was. Growing cities started dumping sewage into the water, and oyster beds became disease vectors. New food safety regulations — and an end to child labor — meant businessmen couldn’t get away with shady practices that made oysters cheap.

The final nail in the oyster coffin was Prohibition. Oyster bars had already mostly disappeared by then, and the ones that remained lost their drinking clientele to speakeasies, while their nondrinking regulars thought they still felt too much like saloons.

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.