Original photo by Kris Hoobaer/ iStock

Do black cats fill you with dread? Do you avoid making plans when Friday falls on the 13th? Are you careful around ladders, mirrors, and salt? If so, you’re following some centuries-old ideas about objects and activities that supposedly bring bad luck. But have you ever stopped to wonder why?

In many cases, the origins of these superstitions have multiple layers, meaning they might go back to pagan, Christian, medieval, or Victorian beliefs all at once. In other cases, the story is far more modern than you might think. Read on for some of the strange and surprising stories behind our most common folk beliefs.

Hand knocking on a door.
Credit: magico110/ Shutterstock

Knocking on Wood

In the United States, we say “knock on wood” (in the U.K., it’s “touch wood”) in a variety of situations, like after mentioning something we hope will happen, or while discussing good fortune we’d like to continue. It’s a means of averting misfortune, making sure we don’t “tempt fate.” Some explanations for the practice mention a Celtic or otherwise pagan association with tree spirits, the idea being that knocking on wood (particularly once-sacred trees like oak and ash) might awaken these spirits and confer their protection. Others note a Christian association with the wood of the cross.

But the origins of this practice are probably much more modern, and banal. In A Dictionary of English Folklore, scholars Jacqueline Simpson and Steve Roud note that the earliest known reference to the practice only dates to 1805. It seems linked to 19th-century children’s games like “Tiggy Touchwood” — types of tag in which children were safe from capture if they touched something wooden, like a door or tree.

In his book The Lore of the Playground, Roud writes: “Given that the game was concerned with ‘protection,’ and was well known to adults as well as children, it is almost certainly the origin of our modern superstitious practice of saying, ‘Touch wood.’ The claim that the latter goes back to when we believed in tree spirits is complete nonsense.”

Close-up of a sleepy black cat.
Credit: Antonino Visalli/ Unsplash

Black Cats

In some parts of the world, black cats are considered lucky, but in the U.S. they’ve often been associated with evil. The link goes back to a medieval conception of cats, the devil, and witches as one big happy family. Some sources claimed that Satan’s favorite form to take was a black cat, while witches supposedly either kept cats as familiars or changed into cats themselves. In an age when witches were blamed for just about everything that went wrong, cats — particularly shadowy black cats — were routinely killed.

Sadly, these awful associations were strengthened during the plague outbreaks of the 14th to 17th centuries. The bacterium that causes the plague wasn’t identified until 1894, and without understanding why people were getting sick, villagers doubled down on the idea of cats (and again, especially black cats) as a source of misfortune.

Unfortunately for them, killing cats of any color just helped rats — which carried the type of fleas that spread the plague — proliferate. It would have been far better for their health if European peasants had taken a page from the ancient Egyptians and worshiped their cats instead.

Woman pouring salt out of her hands.
Credit: Goldfaery/ iStock

Spilling Salt

Salt is essential to human life and was once an extremely valuable commodity, so much so that the word “salary” derives from it. The crystalline mineral was used in ancient Jewish, Greek, and Roman sacrifices, and it was the primary means of preserving food before refrigeration came along. Over the years, salt became associated with purity, incorruptibility, and sanctity — good for staving off both rot and evil spirits. It stood to reason, then, that spilling salt was bad for both the budget and the soul.

During the Renaissance, Leonardo da Vinci strengthened the association between spilled salt and misfortune by depicting Judas with a saltcellar knocked over next to him in his painting “The Last Supper.”

At some point, a belief arose that taking a pinch of salt with the right hand and throwing it over the left shoulder would counteract any bad luck caused by spilling the stuff. The idea comes from an imagined link between the left side and the devil — as well as the idea that Satan just can’t stand salt.

A pretty young woman holding a broken mirror.
Credit: Photographee.eu/ Shutterstock

Breaking Mirrors

If you grew up toward the end of the 20th century, you’re almost certainly familiar with the idea that breaking a mirror brings seven years of bad luck. Part of this notion is very old: A variety of ancient cultures believed that one’s reflection could steal bits of one’s soul, and so damaging a reflection could damage a person’s spirit. But folklorists have only traced the idea of bad luck from breaking a mirror to 1777, perhaps because of an association between mirrors, magicians, and “diabolical” divination.

So why seven years of bad luck, specifically? That part only dates from the mid-19th century. It’s not clear exactly where the link came from, but it may stem from a Roman idea that the body replenishes itself every seven years — meaning that was enough time for any curse to lift.

Friday the 13th being highlighted on a calendar.
Credit: Gustavo Frazao/ Shutterstock

Friday the 13th

This superstition marries ideas about both Friday and the number 13 to create what is supposedly the unluckiest day of the calendar. The aura of doom around the number 13 may go back to early civilizations that based their numerical systems on the number 12. (That’s how we got 12-month calendars and days divided into 12-hour segments, for one thing.) Because it came right after 12, 13 was seen as a problematic or strange leftover.

Odd as it may seem, the association is reinforced by two stories of ancient dinner parties. In Norse mythology, evil was introduced into the world when the trickster god Loki showed up as the 13th guest at a dinner in Valhalla. Judas Iscariot, who betrayed Jesus, was also the 13th guest to arrive at the Last Supper. That led to a belief, starting around the 17th century, that it was unlucky to have 13 guests at a table. Incidentally (or not), it was also imagined that witches’ covens usually numbered 13.

Friday, meanwhile, was the day Jesus was crucified. By tradition, it was also thought to be the day Eve gave Adam the apple and they were cast out of the Garden of Eden. In Britain, Friday was also Hangman’s Day, when those condemned to die met their fate. Somehow, over the centuries, these ideas combined to give Friday a bad rap — at least until TGIF came along.

Yet it was only the Victorians who combined the ideas around Friday and the number 13 to create the notion of Friday the 13th as uniquely unlucky. These days, of course, the American horror film franchise of the same name has only reinforced the idea.

Woman wearing suit and hat carrying handbag walking under ladder.
Credit: Debrocke/ClassicStock/ Archive Photos via Getty Images

Walking Under Ladders

Like spilling salt, the superstition against walking under ladders may be partly practical. If you see a ladder, there’s a good chance someone is standing on it, and it would be dangerous for both parties if the ladder were bumped or fell.

But most explanations add a religious dimension. These stem from the shape a ladder makes as it leans against a wall — a triangle, which suggests a trinity. In ancient Egypt, triangles were a sacred shape (think of the pyramids), and Egyptians believed that to walk through one was to “break” something sacred to the gods. In Christianity, of course, the Trinity is also sacred, and the same idea supposedly applied. Furthermore, a ladder was also said to have rested against Jesus’ cross, becoming a symbol of misfortune. There’s also an association with the gallows, where a ladder was often placed so the condemned could climb up to the rope.

However, in A Dictionary of English Folklore, Simpson and Roud once again throw cold water on an ancient basis for this belief. They note that the earliest reference to ladders as unlucky is only about 200 years old, and that most of these older explanations are theories that lack any documented evidence.

A person with their fingers crossed behind their back.
Credit: file404/ Shutterstock

Crossing Your Fingers

Crossing the middle finger over the index finger “for luck” is one of the most widely understood gestures in the U.K. and the U.S., even if these days we usually say something like “fingers crossed” rather than perform the action.

It’s said — unsurprisingly — that the gesture is a reference to the cross, and anything associated with the cross is supposed to bring good luck (or offer protection, as when saying a prayer while making the sign of the cross). But it may not be as old as is often reported: Folklorists have only found references to it starting in the early 20th century.

White playing cards showing 666 in the lower right corner, on a black cast-iron surface.
Credit: Kris Hoobaer/ iStock

666

This one really is old. In the Book of Revelation, there’s a prophecy about a “Great Beast” who will rule the planet and mark his followers with the “number of his name” — 666. Commentators have identified that “beast” as Satan or the Antichrist. (Coincidentally, these lines come from the 13th chapter of Revelation, for anyone wanting to stack superstitions.)

But the Book of Revelation was written in a code that often referred to the Roman Empire. Some scholars say the three sixes are a reference to the Roman Emperor Nero: His name, spelled out in Hebrew letters, carries numerical values that add up to 666, although the math requires a bit of forcing. The supposedly Satanic associations have had surprising staying power, however: Even today, phone numbers with 666 are often rejected or considered a joke.


Original photo by PictureLux / The Hollywood Archive/ Alamy Stock Photo

When The Flintstones premiered on ABC in 1960, New York Times critic Jack Gould derided the show as “an inked disaster” and Jackie Gleason considered suing, contending the primetime cartoon experiment was a Honeymooners copycat set in 10,000 BCE. Still, fans grew attached to Fred, Wilma, Barney, and Betty — at least until the introduction of The Great Gazoo, a green alien who joined the cast in the show’s sixth and final season. Thanks to iconic friendships, an earworm of a theme song, and countless ancient-meets-modern mash-ups, the show chiseled its way into our collective consciousness. Here’s how it happened.

William Hanna & Joseph Barbera.
Credit: Jeff Kravitz/ FilmMagic via Getty Images

The Partnership Between William Hanna and Joseph Barbera

William Hanna and Joseph Barbera met when they were in their late 20s, as new hires in MGM’s fledgling animation department. Discovering that they shared similar comic sensibilities, they teamed up on 15 years of Tom and Jerry antics, a run that collected seven Academy Awards for Best Short Subject (Cartoons). When MGM shuttered its animation department in 1957, the duo — intent on segueing into television — formed Hanna-Barbera Productions and created one of TV’s first half-hour cartoon series, The Huckleberry Hound Show. To save time and money, the pair pioneered “limited animation,” which basically presented a series of storyboard drawings linked by small movements like bobbing heads and talking lips. The president of distributor Screen Gems asked Hanna and Barbera if they wanted to collaborate on a primetime television cartoon — even though standalone cartoons had only been successful thus far as morning or afternoon kids’ programming. They accepted the challenge.

Barney Rubble, Fred Flintstone, Betty Rubble, and Wilma Flintstone.
Credit: LMPC via Getty Images

Masterminding the Series

To engineer a hit with the viewership potential of Father Knows Best or Leave It to Beaver, Hanna and Barbera decided to focus their show on a suburban family — with some sort of unique twist. They brainstormed central characters who were Romans, Indigenous people, pilgrims, Appalachian people, and nomads. Then, animator Dan Gordon doodled two cavemen dressed in animal skins. His figures flanked a record player that had a live bird’s beak as its needle. Character designer Ed Benedict tried to add more features present in early humans, but at Barbera’s urging, he made the physiques more refined, even giving Wilma a stone necklace that resembled oversized pearls. The series was named after its central caveman couple, who at that point were called the Flagstones.

Kids watching The Flintstones on TV, 1972.
Credit: Boris Spremo/ Toronto Star via Getty Images

Finding a Network

A 90-second pilot for The Flagstones was filmed in 1959. Toting the footage and storyboards, Barbera traveled to New York City for two months of dismal pitch meetings with networks and sponsors. Finally, on the last day of his trip, ABC greenlit the show for a 28-episode first season. However, the daily comic strip Hi and Lois already had a family called the Flagstons; The Gladstones served as a placeholder title until the parties arrived at The Flintstones. Decades later, in 1994, Cartoon Network aired The Flagstones pilot after it was recovered from a New York storage facility. Father Knows Best veteran Jean Vander Pyl (Wilma) was the only actor to lend her voice to both the pilot and the eventual series.

The voice actors for the US animated sitcom 'The Flintstones'.
Credit: Silver Screen Collection/ Moviepix via Getty Images

Casting the Ultimate Period Piece

Character actor Alan Reed won the role of Fred. A year after The Flintstones debuted, Reed played Sally Tomato — the mob boss who welcomes Holly Golightly for weekly prison visits — in Breakfast at Tiffany’s. Once, when asked to say “Yahoo!” in Fred’s voice, Reed ad-libbed a replacement that became the character’s signature. “Yabba dabba doo!” was inspired by the 1950s jingle for the men’s hair product Brylcreem (“A little dab’ll do ya”). Meanwhile, Mel Blanc, the original voice of Bugs Bunny and Daffy Duck and the longtime voice of Porky Pig and scores of other Looney Tunes characters, was hired to play Fred’s best friend and next-door neighbor, Barney Rubble. The animation legend picked up a second recurring part on the Stone Age series, supplying the barks for the Flintstones’ pet dinosaur, Dino.

In 1961, Blanc survived a head-on car crash but spent two weeks in a coma and 70 days in the hospital. During this period, Barney was voiced by Daws Butler, the performer who voiced Fred in The Flagstones pilot, as well as Huckleberry Hound and Yogi Bear on The Huckleberry Hound Show. Upon Blanc’s release, he was temporarily confined to a body cast, and series recording sessions relocated to his home for about 40 episodes. Rounding out the core cast was Bea Benaderet, who had been Lucille Ball’s first choice to play Ethel on I Love Lucy. For four seasons, Benaderet took on The Flintstones’ second female lead, Betty Rubble, until she exited to star in Petticoat Junction. Geraldine “Gerry” Johnson portrayed Betty for the remaining seasons.      

Betty Rubble, Barney Rubble, Fred Flintstone, and Wilma Flintstone, "The Flintstones" (circa 1960).
Credit: PictureLux / The Hollywood Archive/ Alamy Stock Photo

A Lasting Cultural Impact

Over the course of six seasons and 166 episodes, The Flintstones carved out a formidable TV legacy. The show was television’s first half-hour animated sitcom, as well as the first cartoon ever nominated for Outstanding Comedy Series at the Primetime Emmys — an honor The Simpsons has never even achieved.

Despite its laugh track, The Flintstones explored nuanced storylines about routes to parenthood in its middle seasons. After Fred and Wilma became U.S. television’s first animated couple to sleep in the same bed, nine episodes were devoted to Wilma’s pregnancy with their daughter, Pebbles. The following season, through Barney and Betty, the series acknowledged the plight of infertility, a rarely addressed topic on screen or in society at the time. The Rubbles eventually adopted a son, Bamm-Bamm. The Flintstones proved that there was a grown-up audience for animation, emboldening future TV creators to tackle mature themes such as parental abandonment (The Simpsons), politics (South Park), mortality (Archer), and mental illness (BoJack Horseman) — to great critical acclaim.

Additionally, The Flintstones was an early satirist of TV tropes and celebrity culture, helping establish the practice of famous guest stars doing cameos as themselves. Ann-Margret, Ed Sullivan, Tony Curtis, Rock Hudson, and Cary Grant were among the prominent personalities who entered Bedrock. The show also gave rise to numerous TV spin-offs, two live-action films, and millions of brontosaurus cranes’ worth of merchandise sales, ranging from Fruity Pebbles cereal to Flintstones Vitamins. After a robust second life in syndication, The Flintstones recently found a new home on HBO Max.


Original photo by omersukrugoksu/ iStock

Every U.S. state has a nickname, whether or not the state legislature has made it official. Many need no explanation at all, with names inspired by a state’s abundant wildlife or chief exports. Louisiana’s Pelican State moniker, for example, refers to the big birds that catch fish along the state’s meandering coastline, while Georgia is called the Peach State for its famous bounty of sweet summer fruit. As for Massachusetts and Rhode Island, no one wonders why they’re referred to as the Bay State and the Ocean State, respectively. Other state nicknames are a little more curious, however. Here are the explanations and inspirations behind some of the country’s more unusual ones.

Vintage Welcome to Colorado sign.
Credit: PTZ Pictures/ Shutterstock

Colorado, The Centennial State

Colorado joined the union as a state in 1876 — exactly 100 years after the signing of the Declaration of Independence. Thus, Colorado became known as the Centennial State. Unofficially, the state is often referred to as “Colorful Colorado” for its unspoiled mountain backdrops and colorful vistas. In fact, the state’s department of transportation famously erects “Welcome to Colorful Colorado” signs along many of the state’s highways.

Close-up of Iowa on a map.
Credit: belterz/ iStock

Iowa, The Hawkeye State

Iowa’s nickname honors a Native American leader and warrior of the Sauk tribe. A veteran of the War of 1812 and the Black Hawk War, Chief Black Hawk wrote the first Native American autobiography published in America. He died in 1838 in Davis County, Iowa, and a local newspaper publisher later renamed his paper The Hawk-Eye and Iowa Patriot in the chief’s honor. The Hawkeye State nickname was made official in 1838, before Iowa even became a state.

The Connecticut state flag waving along with the national flag of the United States.
Credit: rarrarorro/ Shutterstock

Connecticut, The Constitution State

In 1959, Connecticut’s general assembly declared a state nickname — the Constitution State. The reason behind the moniker: the Fundamental Orders of 1638–39, a series of governing documents adopted by the Connecticut Colony council that served as the first written rules of government in what would become the United States. The orders may very well be the first written constitution in American history, and it’s safe to say they laid the groundwork for the United States Constitution. So even though the nickname only became official in the 1950s, Connecticut earned it!

Seeds and fallen leaves of a red buckeye tree.
Credit: Gerry Bishop/ Shutterstock

Ohio, The Buckeye State

Ohio’s Buckeye State nickname stems from the buckeye trees that proliferate within the state’s natural spaces, specifically broad grasslands and low hills. These trees famously bear nuts that Native Americans and early settlers likened to the eyes of male deer — or bucks. The buckeye is even the official state tree, designated by the Ohio legislature in 1953.

That said, the moniker is about more than this native tree — Ohioans have been referring to themselves as Buckeyes at least since the presidential election of 1840, when Ohio resident William Henry Harrison won the presidency. The politician’s supporters used buckeye wood to fashion carved campaign souvenirs in support of Harrison (who served only 31 days in office before succumbing to pneumonia).

Vintage illustration of Greetings from Indiana, the Hoosier State.
Credit: Found Image Holdings Inc/ Corbis Historical via Getty Images

Indiana, The Hoosier State

Per the Indiana State Library, the Hoosier State nickname comes from a poem called “The Hoosier’s Nest.” Published in The Indianapolis Journal in 1833, the poem inspired Indianians to adopt the nickname, possibly starting at a Jackson Day dinner in Indianapolis that same year, and the term was widely used to describe state residents by the 1830s. Why Hoosier? The state’s historical bureau points to one Samuel Hoosier as a possible source. A contractor, Hoosier preferred Indiana laborers for his various projects. Of course, the moniker grew even more popular after the film Hoosiers was released in 1986. Set in the 1950s, the Oscar-nominated movie tells the story of a high school basketball team competing in an Indiana state championship.

George Washington crossing the Delaware River in 1776.
Credit: GraphicaArtis/ Archive Photos via Getty Images

Maryland, The Old Line State

According to state government officials in Maryland, historians believe the Old Line State nickname came directly from General George Washington in tribute to the colony’s Line Regiment troops, who bravely served under him in the Revolutionary War. The Old Line term — which Marylanders adopted and still use widely — was common in Washington’s writings. But Maryland is sometimes referred to as the Free State as well. That unofficial moniker refers to the state’s abolition of slavery in its constitution back in 1864.

"Welcome to Missouri, the Show-Me State" handwritten on a square sheet over a map.
Credit: marekuliasz/ Shutterstock

Missouri, The Show-Me State

Missouri’s nickname dates back to 1899. In a speech at a Philadelphia naval banquet, Missouri Congressman Willard Duncan Vandiver famously stated, “Frothy eloquence neither convinces nor satisfies me. I am from Missouri. You have got to show me.” He was speaking of his own conservative, sometimes skeptical stance — one he believed reflected the spirit of Missourians, who (sometimes stubbornly) subscribe to common-sense values. The nickname is widely used across Missouri today — but only unofficially.

Close-up of gold and silver bars.
Credit: Inok/ iStock

Montana, The Treasure State

Montana’s nickname — the Treasure State — refers to its rich mineral reserves, including its gold and silver mines. The state motto refers to these treasures as well: It’s “oro y plata,” Spanish for “gold and silver.” Such an abundance of riches has fed a thriving mining industry since the late 19th century (the nickname was coined in 1895). One of Montana’s unofficial nicknames is a bit less glamorous, however: First published in 1922, the “Stubbed-Toe State” moniker refers to the many injuries an amateur hiker trekking through Montana is likely to face.

New Mexico sunset panorama of White Sands Desert dunes.
Credit: Mlenny/ iStock

New Mexico, The Land of Enchantment

The Land of Enchantment nickname refers to New Mexico’s magical and often otherworldly beauty, which ranges from desert dunes and red rock formations to evergreen forests, and includes sites such as the White Sands National Monument, the Rio Grande Gorge, and the Capulin Volcano. The name only became official in 1999, and it derives from a Lilian Whiting book of the same name that espouses the unique beauty of America’s Southwest. Before 1999, New Mexico test-drove other nicknames including the Land of the Heart’s Desire, the Land Without Law, the Science State, and many others.



To the modern mind, the ancient world is a fascinating place. Many of us have spent hours pondering the pyramids of Egypt or learning about daily life in ancient Rome. But the world of many centuries ago also contains plenty of surprises. What sprawling pre-Columbian civilization had its own postal system? Where can you find the world’s biggest pyramid, or the oldest mummies? (If you answered “Egypt” for either question, try again.) Read on for the stories behind these and many other astounding aspects of the distant human past.

Credit: Hercules Milas/ Alamy Stock Photo

The Antikythera Mechanism Is a 2,000-Year-Old “Computer” From Ancient Greece

The Antikythera Mechanism is one of the most astounding archaeological finds in history. Discovered within the ruins of an ancient Greco-Roman shipwreck in 1900, it was brought to the surface the following year as part of the world’s first major underwater archaeological excavation. Initially, the mechanism — in dozens of corroded, greenish pieces of bronze — was overlooked in favor of the many bronze and marble statues, coins, amphorae, and other intriguing items the shipwreck contained. But in the 1950s, science historian Derek J. de Solla Price took particular interest in the machine, convinced that it was in fact an ancient computer. In the early 21st century, advanced imaging techniques proved Price correct.

Of course, this is an analog computer we’re talking about, not a digital one. About the size of a mantel clock, the Antikythera Mechanism was a box full of dozens of gears with a handle on the side. When the handle turned, the device calculated eclipses, moon phases, the movements of the five visible planets — Mercury, Venus, Mars, Jupiter, and Saturn — and more. It even included a dial for the timing of the ancient Olympics and religious festivals. Nothing else like it is known from antiquity (the machine has been dated to around the first century BCE), and nothing like it shows up in the archaeological record for another 1,000 years. Scientists aren’t sure exactly who made the device, although the ancient Greek astronomer and mathematician Hipparchus has been suggested as the creator, and the famed mathematician and inventor Archimedes may also have been involved. While its origin will likely remain a mystery, the mechanism’s purpose has grown clearer with time — and its existence has completely altered our understanding of the history of technology.

Credit: Ridofranz/ iStock

The Ancient Egyptians Invented Toothpaste

The ancient Egyptians are known for many firsts. Hieroglyphics, papyrus, the calendar, and even bowling all come from the minds of the ancient people along the Nile. Egyptians were also some of the first to pay particular attention to oral care. They invented the first breath mint, toothpicks have been found alongside mummies, and they created the oldest known formula for toothpaste.

One of the earliest medicinal texts, the Ebers Papyrus, contains an astoundingly accurate understanding of the human circulatory system as well as an assortment of medicinal remedies. Written around 1550 BCE, this ancient text also describes a very old form of toothpaste. This early dentifrice was likely made from ingredients such as ox hooves, ashes, burnt eggshells, and pumice (a type of volcanic rock), but by the fourth century CE, when Egypt was under Roman rule, the recipe evolved to include salt, pepper, mint, and dried iris flower, based on descriptions in another papyrus. Egyptians may have applied the paste with toothbrushes made from frayed twigs.

Credit: PositiveTravelArt/ Shutterstock

The World’s Oldest Mummies Are in Chile

Egypt may be home to the world’s most famous mummies, but not the world’s oldest. That distinction belongs to Chile, where mummified remains predate their Egyptian counterparts by more than 2,000 years. Known as the Chinchorro mummies, these artificially preserved hunter-gatherers were first discovered just over a century ago in the Atacama Desert, the driest nonpolar desert in the world. Their relatively recent discovery is perhaps explained by the fact that they weren’t buried in ostentatious pyramids but rather — after being skinned and refurbished with natural materials — wrapped in reeds and placed in shallow, modest graves. It’s estimated that the oldest Chinchorro mummies date back 7,000 years. Some are now in museums, while others remain underground in land currently threatened by climate change, as rising humidity levels alter the famously dry conditions of the desert. UNESCO added the Chinchorro mummies and the settlement where they were found to the World Heritage list in July 2021, and a museum devoted to them has been developed in the northern port city of Arica, with plans to expand it.

Credit: Bryan Busovicki/ Shutterstock

The Easter Island Heads Have Bodies

Few historical artifacts are as mesmerizing — or as mysterious — as the Easter Island statues. Known as moai (pronounced “mo-eye”), meaning “statue” in Rapa Nui (the Native name for the island, its Indigenous people, and their language), the statues are believed to represent ancestral chiefs who protected the inhabitants of this 63-square-mile island in the Pacific centuries ago. Possibly built between 1400 and 1650 CE, the statues were transported to massive stone platforms known as ahu, and usually arranged so their backs faced the sea. Although their average height is only around 13 feet (bodies and heads included), many weigh more than 10 metric tons.

Most of the 900 or so moai aren’t buried, and when Europeans first arrived in the early 18th century, they could clearly see their bodies standing tall. But the 150 or so that are buried have become the most popular and photogenic. Resting on the slopes of the Rano Raraku volcanic crater (which is also the stone quarry for the statues), these moai were slowly entombed by continuous erosion and landslides over hundreds of years until only their heads remained. Luckily, this unintended burial preserved their tattoo-like markings, a strong tradition among the Rapa Nui people.

Credit: Capuski/ iStock

The Aztecs Considered Cacao Beans More Valuable Than Gold

You may love chocolate, but probably not as much as the Aztecs did. This Mesoamerican culture, which flourished in the 15th and early 16th centuries, believed cacao beans were a gift from the gods and used them as a currency that was more precious than gold. The biggest chocoholic of them all may have been the ninth Aztec emperor, Montezuma II (1466–1520 CE), who called cacao “the divine drink, which builds up resistance and fights fatigue. A cup of this precious drink permits a man to walk for a whole day without food.” To say he practiced what he preached would be an understatement: Montezuma II was known to drink 50 cups of hot chocolate a day (from a golden goblet, no less). His preferred concoction is said to have been bitter and infused with chiles.

Needless to say, that was an expensive habit. Aztec commoners could only afford to enjoy chocolate on special occasions, whereas their upper-class counterparts indulged their sweet tooth more often. That’s in contrast to the similarly chocolate-obsessed Maya, many of whom had it with every meal and often threw chile peppers or honey into the mix for good measure.

Credit: adventtr/ iStock

Vikings Didn’t Really Wear Horned Helmets

Aside from long blond hair, horned helmets are probably the most famous Viking accessory — but that would have been a surprise to the real Scandinavian warriors who plundered Europe between the ninth and 11th centuries. The Viking horned helmet convention dates only to the 19th century: In 1876, costume designer Carl Emil Doepler introduced it in Richard Wagner’s famous opera “Der Ring des Nibelungen” (“The Ring of the Nibelung,” often called the “Ring Cycle”). At the time, Germans were fascinated with the story of the Vikings, so Doepler plopped the ancient headdress of the Germans — the horned helmet — on Wagner’s Viking protagonists. The opera proved so popular that by 1900 the horned helmet was inextricably entwined with Vikings themselves, appearing in art, ads, and literature.

Yet during the Viking era, Norse warriors never actually wore horned helmets — and especially not during battle, where they’d probably have gotten in the way. Some artifacts, such as a tapestry discovered with the famous Oseberg ship burial in 1904, do depict horned figures, but these “horned” occurrences only happened — if they happened at all — during rituals. To date, archaeologists have uncovered only two preserved Viking helmets: Both are made of iron, both have guards around the eyes and nose, and both are entirely without horns.

Credit: valiantsin suprunovich/ iStock

Cleopatra Lived Closer to the iPhone’s Debut Than to the Building of the Pyramids at Giza

When we think about nations and empires, we’re usually thinking in terms of centuries, but ancient Egypt stretched on for three millennia. The empire’s first pharaoh, Menes, united the country and formed the first dynasty on the Nile around 3100 BCE. More than 500 years later (a span more than double the entire history of the United States), the first of the Great Pyramid’s 2.3 million stone blocks was put into place. These blocks were the beginnings of an illustrious tomb for the Fourth Dynasty Pharaoh Khufu. Within the next century, two other pyramids (along with an equally impressive sphinx) were completed nearby. Today, the three Pyramids of Giza are regarded as the oldest — and the only surviving — of the Seven Wonders of the Ancient World.

It wasn’t until about 2,500 years after that first block was wedged into place that Cleopatra VII was born, around 69 BCE. Although the world of Cleopatra feels more comparable to the ancient reign of Khufu than the technological reign of the iPhone, first introduced in 2007, she’s about 400 years closer to our own age than to the creation of Egypt’s most famous wonders — which have now been standing for an incredible 4,500 years.

Credit: mediamasmedia/ iStock

The World’s First Vending Machine Dispensed Holy Water

Democracy, theater, olive oil, and other bedrocks of Western civilization all got their start with the Greeks. But even some things that might seem like squarely modern inventions have Hellenistic roots, including the humble vending machine. In the first century CE, Greek engineer and mathematician Heron of Alexandria published a two-volume treatise on mechanics called Pneumatica. Within its pages was an assortment of mechanical devices capable of all types of wonders: a never-ending wine cup, rudimentary automatic doors, singing mechanical birds, various automata, the world’s first steam engine, and a coin-operated vending machine.

Heron’s invention wasn’t made with Funyuns and Coca-Cola in mind, however: It dispensed holy water. In Heron’s time, Alexandria was under Roman rule and home to a cornucopia of religions, with Roman, Greek, and Egyptian influences. To stand out, many temples hired Heron to supply mechanical miracles meant to encourage faith in believers. Some of these temples also offered holy water, and experts believe Heron’s vending machine was invented to ration it among acolytes who took too much.

The mechanism itself was simple enough: When a coin was inserted in the machine, it weighed down a balancing arm, which in turn pulled a string opening a plug on a container of liquid. Once the coin dropped off the arm, the liquid stopped flowing. It would be another 1,800 years before modern vending machines began to take shape — many of them using the same principles as Heron’s miraculous holy water dispenser.

Credit: posztos/ Shutterstock

Jericho Is Likely the Oldest Continuously Inhabited City in the World

Although scenic metropolises like Toronto and Vienna may top the rankings of greatest cities to call home, these urban centers have nothing on Jericho when it comes to historical charm. After all, this Middle Eastern oasis has hosted human residents for at least 11,000 years, making it likely the oldest continuously inhabited city in the world.

Thanks in large part to the nearby water supply now known as Elisha’s Spring (or Ein es-Sultan), nomadic hunter-gatherers began settling these fertile grounds as the Mesolithic Period drew to an end, perhaps around 9000 BCE. By around 8300 BCE, inhabitants had already constructed a bordering wall of stone, along with a 28-foot tower that may have served as a cosmological marker. Although the initial colony of 2,000 to 3,000 inhabitants had dissipated by 7000 BCE, subsequent communities sprang to life as residents continued to hone agricultural techniques, with each settlement building on top of the previous one. Altogether, some 23 layers of civilizations have been uncovered in the area.

Credit: Pav-Pro Photography Ltd/ Shutterstock

The Incas Developed Their Own Postal System

The Inca Empire was a powerful pre-Columbian civilization that once covered almost the entire west coast of South America. By 1471, the empire stretched for more than 3,400 miles and was one of the largest empires in the world at the time, bigger than any European nation. The Incas built a vast network of roads that stretched for more than 25,000 miles, and used llamas as their beasts of burden.

Because their empire was so large and intricate, the Incas had to come up with a way to quickly and efficiently spread messages throughout the land. Chaski were the postal carriers of the Inca Empire. They were trained runners who could collectively cover up to 150 miles per day. The runners worked using a relay system — the first chaski ran 6 to 9 miles until he reached a small house, called a chaskiwasi, where another runner waited to complete the next leg of the journey. They used the extensive road network and specialized rope bridges to quickly move through the empire.

Credit: aphotostory/ Shutterstock

The Great Wall of China Was Built With Porridge in the Mortar

Traversing thousands of miles across eastern Asia, the Great Wall of China has stood as a symbol of the country’s military and technological know-how for more than 2,000 years. And thanks to a team of scientists at Zhejiang University, we now know that the secret to its legendary endurance is … sticky rice soup?

As explained in Accounts of Chemical Research in 2010, the scientists stumbled upon this discovery while examining mortar samples from the Great Wall and other long-standing Chinese buildings. They realized the mortar was an unusual composite of slaked lime (limestone that has been heated and then mixed with water) and congee (a pudding-like rice porridge commonly eaten throughout Asia). As the lime cured into calcium carbonate, a complex carbohydrate in the congee known as amylopectin restricted the growth of calcium carbonate crystals, resulting in a compact microstructure that gave the ancient barrier the strength to withstand earthquakes and even bulldozers. While not invented until around the fifth century CE, well after the initial parts of the Great Wall were raised, the sticky rice-lime mortar was used for the well-preserved sections that remain from the Ming dynasty (the 14th through 17th centuries).

Credit: GMVozd/ iStock

Beer Dates Back at Least 5,000 Years

Beer is as old as history — and by some counts, even older. Many experts assert that the emergence of Sumerian cuneiform in the fourth millennium BCE marks the beginning of recorded history. The first hard evidence of beer brewing also comes from the Sumerians of Mesopotamia, at a site called Godin Tepe (in present-day Iran). In 1992, archaeologists there discovered traces of beer in jar fragments dated to around 3500 BCE. However, some scholars suggest that beer is as old as grain agriculture itself — which would put the boozy beverage’s invention at around 10,000 BCE, somewhere in the Fertile Crescent.

Strangely (or not), thousands of Sumerian tablets make mention of beer. In fact, it even makes an appearance in the Epic of Gilgamesh, often regarded as the oldest surviving piece of literature. But among all these references, no recipes for this ancient brew were ever recorded. The closest thing to step-by-step instructions is a text known as the Hymn to Ninkasi (aka the goddess of beer). Written around 1800 BCE, this hymn describes the malts, cooked mash, and vats used in the beer-making process. It seems that Sumerian beer had two main ingredients: malted barley and beer bread, or bappir, which introduced yeast for fermentation. The finished brew was drunk from communal jars through reed straws, which let drinkers avoid most of the sediment.

Credit: JDStone/ Shutterstock

Some of the Stones at Stonehenge Came From 150 Miles Away

Questions abound when it comes to Stonehenge, but not everything about the monument is shrouded in mystery. We know, for instance, that around 100 stones make up the site — and that some of them came from nearly 150 miles away. Given that Stonehenge is 5,000 years old, that’s quite the feat. This raises two crucial questions: Who transported said stones, and how? That’s where the mystery begins. For one thing, no one’s sure who built England’s world-famous monument, with everyone from Merlin to aliens receiving credit from various factions; other popular candidates include Danes, Celts, and Druids, though the monument actually predates all three.

The stones at Stonehenge are grouped into two types: larger blocks known as “sarsen stones,” and smaller stones in the central area known as “bluestones.” Over the last decade or so, researchers have confirmed that the bluestones came from the Preseli Hills of western Wales, about 150 miles from Stonehenge. (The sarsen stones, meanwhile, were likely found 20 or 30 miles away from the monument.) As for how the bluestones made that long journey, we only have theories: Some scholars believe they were dragged on wooden rafts, although others have suggested that a glacier carried the stones at least part of the way. Most archaeologists scoff at the glacier theory, however, and research in 2019 at outcroppings in the Preseli Hills both conclusively linked them to Stonehenge and confirmed evidence of quarrying work around 3000 BCE — the same era when Neolithic builders were first constructing the mysterious stone circles. That means human hands took the rock from the locations in Wales, but as for exactly how, we simply don’t know — and possibly never will.

Credit: Mitriakova Valeriia/ Shutterstock

The Oldest Example of Knitted Socks Comes From Ancient Egypt

Around 300 CE, in the Roman Egyptian city of Antinoöpolis, a child stuffed their feet into a pair of striped wool socks. The child’s name has long been lost to history, but one of the socks is now likely the oldest piece of knitted footwear ever discovered. Pulled from a 1,700-year-old refuse heap during a British excavation in Egypt in 1913–14, the socks now live at the British Museum in London. In 2018, they underwent multispectral imaging that revealed they were once as colorful as some of the cotton creations that adorn our feet today. Scientists found tiny traces of three plant-based dyes (red, blue, and yellow) that ancient Egyptians used on the wool to create seven beautiful shades of stripes. The Egyptians also used a single-needle looping technique, now called “nålbinding,” to create the socks. The technique predates both modern knitting and crocheting, and is named for the many ancient examples that have been found in modern Norway.

Today, some consider pairing socks with sandals a fashion faux pas, but ancient cultures around the Mediterranean felt differently. These particular Egyptian socks had two compartments for toes, and were specifically designed to fit sandals by separating the big toe from its companions. Centuries earlier, ancient Greeks wore socks made from animal pelts along with their sandals. Turns out, pairing socks and Birkenstocks may be one of humanity’s oldest footwear traditions.

Credit: A-Babe/ Shutterstock

Ancient Greek and Roman Sculptures Were Originally Painted

Sculpture from classical antiquity is often presented in museums, textbooks, and more as a world of white marble. But these representations aren’t an accurate portrayal of the past: Ancient Athens and Rome were full of eye-popping color, with statues sporting vibrant togas and subtle skin tones. In fact, no sculpture was considered complete without a dazzling coat of paint.

Over time, these impermanent paints — left unprotected from the elements — wore away, leaving behind unblemished stone and a false legacy of monotone marble. This perception of the “whiteness” of antiquity was cemented in the 18th century, tied to racist ideals that equated the paleness of the body with beauty. When German scholar Johann Winckelmann (sometimes called the “father of art history”) glimpsed flecks of color on artifacts found near the ancient Roman cities of Pompeii and Herculaneum, he brushed off the work as Etruscan — a civilization he considered beneath the grandeur of ancient Rome. Besides bits of color still clinging to some statues, other evidence of the Mediterranean’s colorful past survives in frescoes from Pompeii (which even depict a Roman in the act of painting a statue). In recent decades, the art world has been busy recreating this colorful past: Archaeologists use UV light to illuminate lingering pigments, and traveling exhibitions now showcase the vivid palettes of these ancient civilizations.

Credit: Photoprofi30/ Shutterstock

France Has One of the World’s Largest Collections of Standing Stones

The Carnac Stones are a group of more than 3,000 megalithic standing stones in the French village of Carnac, Brittany. These stones date back to the Neolithic period and were probably erected between 4500 and 3300 BCE. They are one of the world’s largest collections of menhirs — upright stones arranged by humans. There is no real evidence to confirm their purpose, but that hasn’t stopped researchers from hazarding guesses. Some theorize they were used as calendars and observatories by farmers and priests. According to Christian mythology, the stones are pagan soldiers who were petrified by Pope Cornelius. Local folklore, meanwhile, says that the stones stand in straight lines because they were once a Roman army, turned to stone by the Arthurian wizard Merlin.

Credit: yegorovnick/ Shutterstock

Not All Vikings Came From Scandinavia

Sweden, Norway, and Denmark receive most of the attention regarding Viking history, but a group of warriors known as the Oeselians lived on a large island called Ösel. Known as Saaremaa today, the island is located off Estonia’s coast in the Baltic Sea. According to 13th-century Estonian documents, Oeselians built merchant ships and warships that could carry about 30 men each.

In 2008, workers inadvertently discovered a burial ground in the town of Salme that included human remains, along with swords, spears, knives, axes, and other weapons. Archaeologists excavated the site (and later a second site nearby) and found the remains of two Swedish ships dating to about 750 CE. One ship contained neatly ordered remains, while those in the other were arranged haphazardly, an indication that a battle had taken place. Archaeologists believe the two ships likely carried Swedish Vikings who met their end while attacking the Oeselians.

Credit: Dmitry Rukhlenko/ Shutterstock

The Americas Contain More Pyramids Than the Rest of the World Combined

In ancient Mesoamerica, a region spanning much of modern-day Mexico through most of Central America, peoples such as the Aztec, Maya, and Olmec had their own styles of pyramids dating back to about 1000 BCE — and, along with South American cultures such as the Inca, they built a lot of them. Unlike Egypt’s pyramids, they weren’t used exclusively as tombs.

The most well-known Mesoamerican pyramids are the ones in Teotihuacan, an ancient city near present-day Mexico City that predates the Aztecs (who later gave the site its name). The Pyramid of the Sun, the largest of the structures, and the nearby Pyramid of the Moon were both constructed by putting rubble inside a set of retaining walls, building adobe brick around it, then casing it in limestone. The Pyramid of the Sun hides an extra secret: another pyramid, accessible through a cave underneath. These pyramids were built between 1 and 200 CE, although the pyramid inside the cave is even older.

The Great Pyramid in La Venta, a city of the ancient Olmec civilization in the present-day Mexican state of Tabasco, is much different: It’s essentially a mountain made of clay. Later Olmec pyramids were also earth mounds, only faced with stone in a stepped structure.

The largest pyramid on the planet by volume, not height, is the Great Pyramid of Cholula, or Tlachihualtepetl, in Mexico. It dates back to around 200 BCE and is essentially six pyramids on top of one another: Later civilizations expanded on previous construction, taking care to preserve the original work. It’s made of adobe bricks and, whether accidentally or through a deliberate effort by the locals, eventually became covered in foliage and was later abandoned. When Spanish invaders led by Hernán Cortés came through, murdered 3,000 people, and destroyed more visible structures, they thought Tlachihualtepetl was part of the natural topography and let it be.

Credit: ecstk22/ Shutterstock

Rome Was the First City in the World to Reach 1 Million Inhabitants

Today, Tokyo is the world’s largest city by population, with more than 37 million residents, but long before the Japanese metropolis took that honor, there was another record-holder: Rome. The ancient city was the world’s largest back in 133 BCE, when it became the first city to reach 1 million inhabitants.

Everyday life in ancient Rome was largely dictated by wealth: Affluent residents lived in finely decorated townhouses (and often had countryside estates for trips out of the city), while lower-income citizens resided in apartment-like buildings called insulae. But all social classes enjoyed the perks of living in a major city, including fresh water piped in from aqueducts, and the availability of markets, entertainment, and even food stalls that served quick meals. Rome’s population eventually declined as the Roman Empire fell, yet no city would surpass its record population for nearly two millennia — that is, until London became the world’s largest city, reaching 1 million people around 1800 and more than 6 million by 1900.

Credit: tony french/ Alamy Stock Photo

Egyptian Hieroglyphics Were Undecipherable Until the Rosetta Stone

While digging the foundation for a new fort in July 1799, soldiers in Napoleon’s army found a fragment of stone near the Nile Delta town of Rosetta (modern-day Rashid) that bore the same message in three scripts: Egyptian hieroglyphics, Demotic, and ancient Greek. By comparing the Greek text to the other two passages, scholars could finally decode the meaning of the hieroglyphics. Before the Rosetta Stone’s discovery, ancient Egyptian writing had been an undecipherable mystery. Later, scholars such as Thomas Young and Jean-François Champollion showed that the hieroglyphics on the stone revealed names of important figures and other details of ancient Egyptian history. Reportedly, Champollion was so excited to have deciphered the mystery that he fainted.

Credit: Boris Stroujko/ Shutterstock

The Incas Used String and Knots to Record Information

Although the Incas had no known written language, they weren’t without a means of recording important information. Quipu were Andean textiles that used a system of colored strings and knots to record data. These textiles were created and read by officials known as “quipucamayocs.” Evidence suggests that quipu were first developed by the Wari civilization, which lived in Peru between about 450 and 1000 CE. Scholars believe the Incas used quipu both to record hard data — such as census figures, inventory, and other administrative information — and as a way to encode Incan myths and histories. Because of the Andes’ arid climate, the quipu were well preserved for centuries. Today, hundreds of quipu are displayed in museums around the world, with the biggest collection residing at the Ethnological Museum of Berlin in Germany.

Credit: AlexAnton/ Shutterstock

The Great Pyramids of Giza Created Whole Cities Around Them

Building pyramids as large as the Great Pyramids of Giza was a major undertaking, and required a lot of labor — especially the Great Pyramid of Khufu, which, at 481 feet high, was the tallest building in the world for thousands of years. (The date of its construction is debated, but may have begun around 2550 BCE.)

Archaeologists have uncovered two “towns” around the Great Pyramids that housed not only pyramid-builders, but bakers, carpenters, weavers, stoneworkers, and others who supported day-to-day life. Some lived in family dwellings with their own courtyards and kitchens, while others, likely itinerant workers, slept in something more like a barracks. There is so much we don’t know about these areas, but one thing’s for sure: Based on animal bones and pottery found around the site, everyone there was very well fed… and had plenty of beer to drink.

Credit: Album/ Alamy Stock Photo

The Tale of Genji — Often Considered the World’s First Novel — Ends Inconclusively

Spread across 54 chapters and some 1,300 pages (in English translation), Murasaki Shikibu’s Genji Monogatari, or “The Tale of Genji,” explores the tumultuous love life of its aristocratic titular hero in Heian-period Japan (794–1185 CE). Written around the beginning of the 11th century, Genji is an incredibly ambitious work featuring more than 400 named characters and a 70-year narrative that spans generations. Because of its realistic setting, psychological depth, and the detailed development of its heroes — Prince Genji and his son Kaoru — many consider Genji to be the world’s first novel, and thus Murasaki, who served as a lady-in-waiting at Japan’s imperial court, the world’s first novelist. An instant success, the book is still hugely influential in Japan today.

But one detail about the story has perplexed readers and scholars for a millennium: The ending isn’t much of an ending. One of Kaoru’s love interests becomes a Buddhist nun, and Kaoru is foiled in an attempt to make contact with her — hardly a satisfying conclusion after 1,000+ pages of Heian-era court romance. Translators have debated whether this abrupt and unsatisfying ending was the author’s intention or if the story remains incomplete, perhaps because Murasaki died before she could finish it. Others argue that she might not have had a concept of a traditional narrative ending, and anyway was not writing for publication. Instead, Genji’s many chapters were originally passed among the women of court in handwritten notebooks. Scholars will likely never know the definitive answers behind the ending, but the abruptness gives Genji a modern feel and reinforces the novel’s pervading Buddhistic sense of “mono no aware,” a phrase associated with the “beautiful yet tragic fleetingness of life.”

Credit: Nick N A/ Shutterstock

Latvian Vikings Were Known as the “Last Pagans”

A tribe of fierce Viking warriors known as the Curonians lived along the Baltic coastline of modern-day Latvia starting around the fifth century CE. The Curonians were referred to as Europe’s last pagans, since they resisted conversion to Christianity long after neighboring nations had adopted it — by some accounts, they practiced ancient rituals into the 19th century. They frequently raided Swedish settlements and attacked merchant ships, often forming alliances with other groups.

The Curonians were also among the region’s wealthiest groups, primarily due to the trade of amber (precious fossilized tree resin). The Baltic region contains vast amounts of amber, nicknamed “the gold of the North,” and Baltic amber was once traded all over Europe and northern Africa. One of the Curonians’ primary settlements, Seeburg, was along the Baltic coast in modern-day Grobina. There, you can visit the Curonian Viking Settlement, an attraction that immerses visitors in folklore and activities such as archery, boat trips, and excursions to visit historical sites.

Credit: Em Campos/ Alamy Stock Photo

Rome Still Uses an Aqueduct Built During the Roman Empire

While the Romans didn't invent the aqueduct — primitive irrigation systems can be found in Egyptian, Assyrian, and Babylonian history — Roman architects perfected the idea. In 312 BCE, the famed Roman censor Appius Claudius Caecus commissioned the first aqueduct, the Aqua Appia, which brought water to the growing population of the Roman Republic. Today, the Acqua Vergine — first built during the reign of Emperor Augustus in 19 BCE as the Aqua Virgo — still supplies Rome with water more than 2,000 years after its construction (though it's been through several restorations).

The main reason for the aqueduct's longevity, along with that of many of Rome's ancient buildings, is its near-miraculous recipe for concrete. An analysis by researchers at the Massachusetts Institute of Technology found that Roman concrete can essentially self-heal thanks to its lime clasts (small mineral chunks) and a process known as "hot mixing" (mixing in the lime at extremely high temperatures). Researchers are now studying how the material works in hopes of applying the Eternal City's secrets to modern building materials.

Featured image credit: Original photo by David Madison via Getty Images

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by iam_os/ Unsplash

You’re no doubt familiar with nicknames like the Big Apple and City of Light, but do you know how cities like New York and Paris earned their famous sobriquets? The backstories behind some of these monikers are straightforward — as in, literal-translation-of-Greek straightforward — while others involve everything from horse racing to molasses. Here’s how seven famous cities got their nicknames.

The New York City cityscape.
Credit: Emiliano Bar/ Unsplash

The Big Apple (New York)

Before it was used to refer to The City That Never Sleeps, a "big apple" was an idiom for a very big deal — an object of great desire or big dreams. The first time New York City was referred to as a "big apple" in print may have been in 1909, when American journalist Edward S. Martin wrote in his book The Wayfarer in New York that those in the Midwest are "apt to see in New York a greedy city … it inclines to think that the big apple gets a disproportionate share of the national sap." The phrase doesn't seem to have been intended as a nickname, however — especially since the name in question wasn't capitalized.

It was actually a horse-racing column published by the New York Morning Telegraph that popularized the term. "The dream of every lad that ever threw a leg over a thoroughbred and the goal of all horsemen. There's only one Big Apple. That's New York," racing journalist John J. Fitz Gerald wrote in a 1924 column eventually called "Around the Big Apple." However, Fitz Gerald apparently first heard the term from two Black stable hands in New Orleans. As etymologist Michael Quinion explains, "the Big Apple was the New York racetracks … the goal of every aspiring jockey and trainer … for those New Orleans stable hands the New York racing scene was a supreme opportunity, like an attractive big red apple."

The expression was later popularized by jazz musicians in the 1920s and 1930s, then picked up in the 1970s by Charles Gillett, president of the New York Convention and Visitors Bureau, who built a tourism campaign around the slogan to counter New York's rising crime rates and bad reputation, among other issues.

General view of Paris at dusk with the Eiffel Tower and the Hôtel des Invalides.
Credit: Mike Hewitt/ Getty Images News via Getty Images

City of Light (Paris)

This one isn’t just literal. Paris was indeed among the first major cities in Europe to use street lighting, in an effort to deter crime, but its evocative moniker is more metaphorical than you might expect. France’s capital was also an unofficial capital of the Age of Enlightenment, which in France is considered to have begun in 1715 and ended in 1789 — a period bookended by the death of Louis XIV and the French Revolution, respectively.

Also known as the Age of Reason, this movement was hardly limited to Paris. Still, it was there that philosophers like Voltaire and Jean-Jacques Rousseau espoused many of their most influential ideas; where the Montgolfier brothers launched the first crewed hot-air balloon flight; and where the Treaty of Paris was signed in 1783 (ending the American Revolution), among myriad other historic achievements. Next time you swoon over an image of the city lit up at night, keep in mind that light bulbs also represent ideas.

Night view of a Chicago theater.
Credit: Benjamin Rascoe/ Unsplash

The Windy City (Chicago)

Believe it or not, this one probably has nothing to do with weather; although situated on Lake Michigan, Chicago isn’t even especially windy when compared to other major cities, at least not in the literal sense. Rather, the moniker likely comes from the once-common perception that residents of Chicago in general and its politicians in particular were “windbags” who were “full of hot air” — which is to say, given to making grandiose claims that weren’t altogether truthful.

The most common origin story for the nickname is a column written by New York Sun editor Charles Dana in 1890, when the two cities were competing to host the World’s Fair three years later. In it, he advised anyone reading to pay little mind to the “nonsensical claims of that windy city.” There’s just one problem with this theory, widely accepted though it is: There’s no evidence that such a column was ever written. Even if it did exist, Chicago had already been known as the Windy City since at least the 1870s.

Steamboat River Boat Natchez docked on the Mississippi River in New Orleans French quarter.
Credit: Edwin Remsberg/ The Image Bank via Getty Images

The Big Easy (New Orleans)

If you were lucky enough to spend time in New Orleans circa the early 1900s, you may have found yourself at a dance hall called the Big Easy. And while that nickname for everyone’s favorite Creole-inflected city didn’t take off until the 1960s — when Betty Guillaud, a gossip columnist at the Times-Picayune, used it to highlight the differences between laid-back NOLA and bustling New York — the connection to music is indisputable.

Jazz was born in the city's Black communities, and the rich musical legacy of New Orleans goes far beyond that genre. Some attribute the Big Easy nickname to the comparative ease with which aspiring musicians could make both a living and a name for themselves in N'Awlins; whatever the case, the sobriquet was well known enough by 1970 for James Conway to name his crime novel set in New Orleans after it.

As with many nicknames, it's likely that each of these origin stories holds some responsibility. That's especially fitting for a melting pot of a city like New Orleans, where variety isn't just the spice of life — it's a way of life.

A view of the LOVE public artwork in Philadelphia.
Credit: Zack Frank/ Shutterstock

City of Brotherly Love (Philadelphia)

Anyone who’s studied Ancient Greek will know this one: Philadelphia literally means “brotherly love” (phílos adelphós) in that language of yore. It was named by William Penn, who founded the Pennsylvania Colony and was an early member of the Society of Friends, better known as Quakers; his experience with religious persecution inclined him to make both the colony and the city a place where people were free to worship as they chose.

In addition to being noble in and of itself, this tolerance also had a highly positive effect on Philadelphia. Penn's goodwill improved relations with Tammany, a chief of the Lenape people, and helped transform Philadelphia into what became the largest and most important city in colonial America.

A cityscape view of the city of Rome.
Credit: iam_os/ Unsplash

The Eternal City (Rome)

The Roman Empire didn't actually last forever, but that didn't stop poets and other scholars from predicting that it would. Tibullus is widely credited with being the first to refer to his fair city as "Urbs Aeterna," Latin for "Eternal City"; another poet, Virgil, later went on to use the line imperium sine fine — meaning "empire without end."

The exact line Tibullus used in his first-century BCE poem "Elegies" was "Romulus aeternae nondum formaverat urbis moenia, consorti non habitanda Remo," which translates to "not yet had Romulus drawn up the Eternal City's walls, where Remus as co-ruler was fated not to live." The empire itself might not have been eternal, but Rome is as magnificent as ever some 2,000 years later.

George Washington Statue in the Boston Public Garden.
Credit: joe daniel price/ Moment via Getty Images

Beantown (Boston)

Though it’s also known as the Cradle of Liberty and even the Hub of the Universe by some, no nickname has stuck to Boston quite like Beantown. On one level, the origin is as simple as you’d expect: Boston baked beans are a regional dish differentiated by the use of molasses for a bit of sweetness alongside the salt pork and/or bacon flavoring. There’s also more history to it, as the tradition dates back to the 17th century and involves both Native Americans (who made baked beans without the molasses) and the slave trade (which is how the area got much of its molasses in the first place).

For quite a long time — even through today, to an extent — Beantown was used more by sailors and other visitors than it was by actual locals. Some nicknames aren’t quite as beloved by natives as they are by the rest of us.


Original photo by Mike Flippo/ Shutterstock

What’s in a name? It depends a lot on who you are and where you come from. Maybe you’re named after a religious figure, or an ancestor or two. If you were born in the mid-’90s, you might be named Brooke, but that name is all but extinct for the babies of the 2020s. Maybe, like a certain famous surrealist painter, you were given more than a dozen names at birth. Some cultures keep it simple and have just one name per person. These eight facts about names across time and space show us how much our monikers can reveal about history and culture.

Middle Name on a printed form close up.
Credit: Zoltan Fabian/ iStock

Modern Middle Names Started With 15th-Century Aristocrats

Having three or more names dates back to at least ancient Rome, but middle names as we know them today started in Italy just as it was entering the Renaissance. The earliest middle names were Catholic saint names given to children in elite families in the hopes that saints would protect them. The practice became common by the late 15th century, then spread to lower classes and nearby countries. In the early 1800s, 45% of French boys were given at least one middle name; by the end of that century, the number had jumped to 69%. The practice gained traction in Great Britain and the United States during the 19th century too, although it was exceedingly rare to have a middle name in those countries before 1800. By this point, middle names weren't necessarily religious — they just gave families more room to honor a second relative or a maternal lineage when naming their children.

John Quincy Adams, the 6th President of the United States.
Credit: Library of Congress/ Archive Photos via Getty Images

The First U.S. President With a Middle Name Was John Quincy Adams

Middle names weren’t common in Great Britain or the United States until the 19th century, so it’s unsurprising that the first five Presidents, born between 1732 and 1758, didn’t have them. It was still unusual when the first President with a middle name, John Quincy Adams, was born in 1767 — but it ended up being pretty convenient as a way to distinguish him from the other John Adams (the second U.S. President). The next President with a middle name was number nine, William Henry Harrison.

At least three later Presidents went by their middle names: Ulysses S. Grant (first name Hiram; the S doesn’t stand for anything), Grover Cleveland (first name Stephen), and Woodrow Wilson (first name Thomas).

Spanish artist Pablo Picasso.
Credit: George Stroud/ Hulton Archive via Getty Images

Pablo Picasso Had Around 15 Names

Most people know a certain influential surrealist/cubist painter as Pablo Picasso or even just Picasso. But the artist, who was born in Andalusia, actually had around 15 names, inspired by saints and members of his family. His full name, not necessarily in this order, was Pablo Diego José Francisco de Paula Juan Nepomuceno Crispín Crispiniano María de los Remedios de la Santísima Trinidad Ruiz Picasso. At first, he incorporated his second-to-last name, Ruiz, into his signature — his earliest paintings were signed P. Ruiz, then P. Ruiz Picasso, then P. R. Picasso. He signed just “Ruiz” for some cartoons. Eventually, he settled on plain old “Picasso.”

A young woman dancing with her dog in the living room.
Credit: Moyo Studio/ iStock

The Most Popular Dog Names Are Milo and Luna

Some people get really creative in naming their dogs, but for every Mutt Damon or Babaganoush, there are plenty of old standards. According to the American Kennel Club, the two most popular names for dogs are Milo (for male dogs) and Luna (for female dogs). Other incredibly common dog names include Max, Bella, Teddy, and Daisy.

Circa 1550, Pope Marcellus II.
Credit: Hulton Archive via Getty Images

The Last Pope to Use His Birth Name Was Elected in 1555

The pope rarely goes by the name he was born and baptized with. For example, Pope Francis, who ascended to the papacy in 2013, was born Jorge Mario Bergoglio. The tradition may have started because John II, who ascended in 533, was named Mercurius for the god Mercury, and pagan gods aren't exactly popular with the Catholic Church. Now, popes choose names that honor saints and previous popes. The last time someone bucked tradition was in 1555, when Marcellus II retained his baptismal name.

A man holding Icelandic flag near scenic waterfall.
Credit: anyaberkut/ iStock

Iceland Has a Naming Committee That Approves or Denies New Names

Iceland takes its culture very seriously, and has a preapproved list of traditional names that citizens can use — which applies whether they're newborn babies or older newcomers. If someone wants to use a name outside of that list, they need to apply to the Personal Names Committee for approval. New names need to fit Icelandic grammar and the Icelandic alphabet, be able to take the language's grammatical word endings, and not "cause the bearer embarrassment." The committee receives around 100 applications each year and rejects about half of them. (And Iceland isn't the only country with a naming committee.)

This, predictably, sometimes causes conflicts — as in the case of the British expat who couldn’t renew his children’s passports under Icelandic law because their names were Harriet and Duncan.

Young traditional Myanmar girls using digital tablet pc together.
Credit: szefei/ Shutterstock

Single Names Are Common in Indonesia and Myanmar

Many countries don't have the traditional surnames we're used to in most of the Western world. Iceland, in addition to its strict first-naming conventions, doesn't use family names — an Icelander's last name simply means "son of" or "daughter of" a parent's first name (traditionally the father's). But in some cultures, one single name (no last name) is normal.

Myanmar, formerly Burma, has a naming structure of single names and honorifics. Former United Nations Secretary-General U Thant had a single name, “Thant”; “U” translates roughly to “Mr.” Single names are also common among Javanese people in Indonesia, sometimes because of tradition, other times because of forced assimilation policies that required them to drop their last name. Other places where single names are common include India, where last names only became common after British colonization, and Tibet.

Bride and groom's hands with wedding rings.
Credit: Sergiy Zavgorodny/ Shutterstock

Most Women in Heterosexual Marriages Still Take Their Husband’s Name

When a woman marries a man in the United States, it's more common than it used to be for her to keep her own last name — but according to a Pew Research Center study, around 79% still adopt their husband's last name. In the study, 14% kept their last name, and 5% hyphenated both names. More than 90% of men kept their last names, with 1% hyphenating and 1% taking their spouse's last name.

It’s more likely for younger women (ages 18 to 49) than older women (50 and over) to hang onto their name, although 73% of them still opt to change it. Women with advanced college degrees and with left-leaning political views are also less likely to change their names.


Original photo by Photo 12/ Alamy Stock Photo

John Wayne, the shy son of a struggling pharmacist, wasn't all that much like the characters he played — he wasn't really a swaggering marshal, a brooding brawler, or prone to shooting up troublemakers in frontier towns. He didn't even respond to being called "John." Yet the commanding aura he used to mesmerize audiences eventually made his legend indistinguishable from the individual beneath the cowboy hat and drawl. Here are eight real-life facts about the larger-than-life actor who set the gold standard for the lawmen, the justice-dispensers, and the men of action he portrayed to unparalleled effect on the big screen.

A young portrait of John Wayne.
Credit: ARCHIVIO GBB/ Alamy Stock Photo

He Wasn’t Born John Wayne

Born Marion Robert Morrison in 1907 in Winterset, Iowa, the future movie star earned his longtime nickname, Duke (or “The Duke”), well before he adopted his famed stage name. According to Scott Eyman’s John Wayne: The Life and Legend, after Wayne’s family moved to California, they adopted an Airedale terrier named Big Duke, prompting local firemen to dub the skinny boy who chased after the dog “Little Duke.”

More than a decade later, with Duke Morrison set for his first starring role in The Big Trail (1930), Fox Studios head Winfield Sheehan decided to rename the young actor after maverick Revolutionary War General “Mad Anthony” Wayne, with the “John” something of an afterthought.

John Wayne playing college football at University of Southern California.
Credit: PictureLux / The Hollywood Archive/ Alamy Stock Photo

He Was a Talented Football Player Before Becoming an Actor

Given his 6-foot-4-inch, 200-plus-pound frame, it’s perhaps unsurprising that Wayne was a standout football player in his younger years. Per The Life and Legend, Wayne starred on a championship-winning Glendale High School football team in the early 1920s, before earning a scholarship to play at the University of Southern California. Although he lost his scholarship (allegedly after getting injured in a bodysurfing accident) during his junior year, Wayne had already spent time working in the Fox props department via his head coach’s connections, and as such was prepared to continue in the motion picture industry after his football prospects disintegrated.

John Wayne on horseback as a cowboy.
Credit: Screen Archives/ Moviepix via Getty Images

John Wayne Was One of the First “Singing Cowboys”

Between his first headlining role in The Big Trail and his leap to stardom with Stagecoach (1939), Wayne toiled away in dozens of forgettable feature films through the 1930s. That included a stint in talkies — such as Riders of Destiny (1933) and Lawless Range (1935) — as a singing cowboy, an archetype soon made famous by Gene Autry. But while Autry was a legitimate musician, Wayne relied on the "movie magic" of a dubbed voice and guitar strumming to look the part. Embarrassed by his inability to perform his characters' songs during public appearances, Wayne informed his bosses that he was retiring from the lip-syncing business.

John Wayne acting in a helmet and a military uniform.
Credit: Bettmann via Getty Images

He Was Criticized for Being a Draft Dodger

Perhaps surprisingly for someone who represented American ruggedness in the flesh, Wayne never signed on for military service during World War II. Even as peers such as Clark Gable, Henry Fonda, and Jimmy Stewart enlisted, Wayne was initially given a pass as the sole provider for his family, and later obtained additional deferment as a movie star who best served "national interest." Although Wayne did entertain American troops overseas on behalf of the United Service Organizations (USO), he occasionally experienced a rude welcome from servicemen who didn't appreciate the "fake machismo" he demonstrated on screen. Later biographers have suggested that Wayne remained in Hollywood to further a career that was just taking off, with his guilt over not serving fueling public displays of patriotism.

John Wayne on set enjoying a game of chess.
Credit: Mirrorpix via Getty Images

He Frequently Enjoyed Games of Chess

His reputation as a man’s man notwithstanding, Wayne also enjoyed headier activities such as chess. His affinity for the game of kings stretched all the way back to at least high school, with one teacher recalling the teenager’s “aggressive” style in matches. Often seen hunched over a chessboard between takes on set, Wayne was said to have rung up an undefeated record against industry buddies Ed Faulkner and Jimmy Grant. However, he may not have been a particularly gracious loser; he reportedly once sent a board and pieces flying after getting badly beaten by fellow actor William Windom.

The 136-foot Wild Goose yacht, formerly owned by John Wayne.
Credit: Allen J. Schaben/ Los Angeles Times via Getty Images

He Turned a Former Navy Warship Into a Pleasure Boat

In his later years, Wayne enjoyed spending increasing amounts of time aboard his 136-foot yacht, the Wild Goose. Wayne bought the vessel, originally built as a U.S. Navy minesweeper during World War II, in the early 1960s, and had it renovated to include such luxuries as a saloon, a fireplace, and a bridal suite. Although Wayne most treasured the family getaways aboard his yacht, he also used it to host parties for Hollywood luminaries, and lent it out to friends such as Tom Jones and Dennis Wilson. Like its owner, the Wild Goose even managed to work its way into the movie business, with appearances in The President's Analyst (1967) and Skidoo (1968).

John Wayne in a tux, circa 1970.
Credit: Art Zelin/ Archive Photos via Getty Images

He Earned a Grammy Nomination for a Poetry Album

A few years after winning his first and only Oscar for his performance in True Grit (1969), Wayne nearly added to his trophy collection with the well-received release of his 1973 spoken-word poetry album, America, Why I Love Her. Written by John Mitchum, brother of Wayne’s sometime co-star Robert Mitchum, the album’s 10 tracks included such entries as the service-oriented “An American Boy Grows Up” and the anti-demonstration “Why Are You Marching, Son?” America, Why I Love Her spent 16 weeks on the Billboard 200 chart and earned a 1973 Grammy nomination for Best Spoken Word Album, although Richard Harris ultimately claimed the award for his rendition of Jonathan Livingston Seagull.

American actor John Wayne in 1974.
Credit: Evening Standard/ Hulton Archive via Getty Images

After Having a Lung Removed, He Performed for Another Decade

Possibly due to his work in the vicinity of a nuclear test site on the set of The Conqueror (1956), Wayne wound up having a lung (and multiple ribs) removed to treat cancer in 1964. Amazingly, he returned to the sort of action-heavy roles that had come to define his career, in films such as Hellfighters (1968) and Chisum (1970). Despite his willingness to soldier on, the veteran actor was clearly suffering from an array of health problems by the mid-1970s. After word of his health issues reached insurance companies, he had to pay a hefty share of the insurance costs for what became his final film, The Shootist (1976). He succumbed to stomach cancer in 1979. (His family later created the John Wayne Cancer Foundation to help others with the disease.)

Tim Ott
Writer

Tim Ott has written for sites including Biography.com, History.com, and MLB.com, and is known to delude himself into thinking he can craft a marketable screenplay.

Original photo by Sun_Shine/ Shutterstock

Few things intrigue avid travelers like the unknown. Some of the world’s least-traversed destinations require a difficult journey — whether that’s a 30-mile hike through dense forest or two weeks at sea — while others are completely off-limits to visitors or have never been reached by humans. Wondering what lies beneath the most remote part of the Pacific Ocean? Or in what tiny locale you can find the world’s smallest flightless bird? Here are eight interesting facts about some of the most isolated, inaccessible, and hard-to-reach places on the planet.

The Lost City, also know as Ciudad Perdida.
Credit: Lukasz Malusecki/ Shutterstock

A Colombian City Was Completely Forgotten by Outsiders for 400 Years

For centuries, Ciudad Perdida ("Lost City" in Spanish) — located in the Sierra Nevada de Santa Marta mountains of Colombia — was a thriving urban center for the Tairona people. But the site was mysteriously abandoned after the 16th century, along with any knowledge of its existence as far as the outside world was concerned. Despite a detailed archaeological survey of the area, it was a group of treasure hunters who ultimately rediscovered the city in the 1970s.

Archaeologists soon uncovered a vast network of stone structures and tiered terraces, some dating back to the seventh century. Around 80 of the site’s 400 acres are now open to the public, but visiting is no easy feat: It requires a four- to six-day round-trip hike through 30 miles of steep terrain and humid, mosquito-laden tropical forests. Hiring a local guide is required.

While there's no vehicle access, the good news is that visitors will find campsites, water stations, and even snack stands (at least one with Wi-Fi) along the way, many run by the Indigenous Kogi people who live in the area. Another upside to the grueling trek? The Sierra Nevada de Santa Marta region is a UNESCO-designated Biosphere Reserve, with a wide variety of flora and fauna, including nearly 630 bird species — many of which you won't find anywhere else on Earth.

Stormy weather and waves in the Pacific Ocean.
Credit: IVAN KUZKIN/ Shutterstock

There’s a Space Junk Graveyard in the Remote Pacific Ocean

In 1992, a survey engineer named Hrvoje Lukatela calculated the location of the Oceanic Pole of Inaccessibility, the spot in the ocean farthest from any land. More familiarly known as Point Nemo, the pole lies nearly 1,700 miles from three roughly equidistant islands: Ducie Island in the Pitcairn Islands to the north, Easter Island to the northeast, and Maher Island in Antarctica to the south. To put its remoteness into perspective, the closest humans to this stretch of the Pacific Ocean are, at times, those aboard the International Space Station, which orbits about 250 miles above the Earth.

The discovery of Point Nemo didn't have much practical use, at least for most people on Earth. Not only is the area extremely difficult to get to, but it's also within the South Pacific Gyre, a rotating current system that keeps out nutrient-rich waters, leaving the region with little marine life. Point Nemo is, however, widely used for one purpose: disposing of space junk. Since the 1970s, the South Pacific Gyre has been the preferred spot for the United States, Japan, Russia, and several countries in Europe to drop their decommissioned equipment, since falling debris there is unlikely to hit anyone. When the International Space Station is retired in 2030, it will join more than 200 abandoned pieces of space equipment surrounding Point Nemo.

Map of the location of the inaccessibility pole.
Credit: WindVector/ Shutterstock

There’s Another North Pole — And It’s Even Harder to Visit

There's the geographic North Pole — the point where Earth's axis of rotation meets the surface in the Northern Hemisphere — and then there's the Northern Pole of Inaccessibility, a spot at the center of vast ice fields about 400 miles away from the geographic North Pole. Similar to the Oceanic Pole of Inaccessibility, this is the point in the Arctic Ocean that is farthest away from the nearest landmass.

Though its location was first calculated in 1909, nobody has actually managed to reach the Northern Pole of Inaccessibility in the past 100-plus years — although the most intrepid adventurers keep trying. Making matters more complicated: The pole is also a moving target, shifting around as new islands are discovered, and researchers anticipate more movement due to rising sea levels.

A British team led by explorer Jim McNeill has made several attempts to reach the pole over the last two decades. However, they have faced numerous challenges: A 2003 mission never left basecamp after McNeill fell ill with a flesh-eating bacterial infection, and in 2006, McNeill made it 168 miles away from land before falling through disintegrating ice, forcing the team to retreat. The team’s most recent attempt was in 2019.

The Southern Pole of Inaccessibility — the corresponding point in Antarctica, farthest in every direction from the Southern Ocean — has proved to be much more accessible. In the late 1950s, the Soviet Union built a meteorological research station there, along with a bust of Vladimir Lenin to mark the exact spot.

Giant turtle in the park of governors residence on St.Helena Island.
Credit: Umomos/ Shutterstock

St. Helena Island Is Home to the World’s Oldest Tortoise

St. Helena — a remote British Overseas Territory in the South Atlantic best known as the location of Napoleon's final exile in 1815 — has another (more current) claim to fame: Jonathan, a Seychelles giant tortoise who is Earth's oldest known living land animal, according to Guinness World Records. Now believed to be 190 years old, Jonathan was at least 50 years old when he was gifted to Sir William Grey-Wilson, a future governor of the island, in 1882. He still lives at the governor's residence (31 governors later), along with fellow giant tortoises Emma, David, and Fred.

"While wars, famines, plagues, kings and queens, and even nations have come and gone, he has pottered on, totally oblivious to the passage of time," Joe Hollis, Jonathan's caregiver, told the Washington Post in early 2022. Tortoises were a popular diplomatic gift at the time because they were easy to transport, since they were stackable and could go without food and water for an extended period. They were also considered a delicacy — a fate that Jonathan fortunately avoided.

It's much easier to get to St. Helena than it was in Napoleon's time, but it's still fairly difficult. One of the most remote inhabited islands in the world, St. Helena is located about 1,200 miles west of Angola and 2,500 miles east of Brazil. Until 2017, visitors had to take a five-day boat trip from South Africa, but with the opening of the island's first airport, you can now catch a six-hour flight from Johannesburg every other Saturday.

The rare Inaccessible Island Flightless Rail.
Credit: Chris Howarth/South Atlantic/ Alamy Stock Photo

The Planet’s Smallest Flightless Bird Is Endemic to a Tiny, Remote Island

About 1,300 miles south of St. Helena, the island of Tristan da Cunha is also a British Overseas Territory, but much more difficult to reach. While St. Helena now has an airport, visiting Tristan da Cunha still requires a weeklong ocean journey from Cape Town (which can sometimes take even longer, depending on the weather). Tourists also need permission from the Island Council to visit.

"Tristan da Cunha" also refers to a group of islands. About 25 miles off the southwest coast of the island of Tristan da Cunha is the aptly named Inaccessible Island. Totaling just 5.4 square miles, the tiny island is surrounded by steep cliffs, making it difficult to even land a small boat there. That has resulted in a uniquely pristine ecosystem, which has allowed the world's smallest extant flightless bird, the Inaccessible Island rail, to evolve and thrive. (At only about 5 or 6 inches long, it's a little smaller than a dollar bill.) Current estimates place the island's population at 9,000 birds or more.

The Inaccessible Island rail's closest relatives are two South American bird species that are able to fly, but when their common ancestor landed on the island, it evolved in an entirely different direction. The birds developed longer bills and sturdier legs, and their wings turned stubby, with much smaller feathers. Since the birds could find most of their food — such as moths, seeds, berries, and worms — on the ground, and predators were scarce, flying became less of a priority. The island doesn't even have any mice or rats that could pose a risk to chicks.

Close-up of Heard Island and McDonald Islands flag.
Credit: MP_Foto/ Shutterstock

Australia’s Tallest Mountain Is a Remote Volcano Named Big Ben

Located in the Southern Ocean about 2,500 miles southwest of Perth, Heard Island is home to one of Australia’s two active volcanoes and the country’s only glaciers. If you thought the sea journey to Inaccessible Island was arduous, expect a journey to this remote spot to take two weeks, depending on weather, through notoriously rough waters.

While 7,310-foot Mount Kosciuszko is the tallest mountain on the Australian mainland, Big Ben, covering much of Heard Island, is the country’s tallest mountain overall — over 9,000 feet above sea level. Because of the remote location and harsh conditions, very few people have attempted to summit Big Ben. Only three parties have ever completed the ascent: two expedition groups in 1965 and 1983, and one mountaineering club associated with the Australian Army in 1999/2000.

Not only is the journey to reach Heard Island lengthy, but actually landing a vessel there is also quite challenging. For this reason, few tourist groups visit the island. Most visitors are researchers in fields such as volcanology, ecology, and oceanography, along with environmental management organizations.

Satellite Communication Dish on top of TV Station.
Credit: Lee Yiu Tung/ Shutterstock

The World’s Northernmost Settlement Requires Radio Silence

With a latitude of 78.55 degrees north, Ny-Ålesund, Norway, is the world's northernmost year-round settlement, located just north of Longyearbyen, in the Svalbard archipelago. The town is home to approximately 40 permanent residents, who can send mail from the world's northernmost post office.

While Ny-Ålesund does welcome visitors, there are some unique rules tourists must follow. The most important of these is to turn off Wi-Fi and Bluetooth on all devices. The former mining town, established in 1916 and still owned by the King’s Bay mining company, has doubled as a research station since the 1960s, and surrounding it are finely tuned instruments that measure the Earth’s slightest movements. As smartphones can interfere with their measurements, visitors must observe radio silence while visiting. They should also avoid approaching the town’s dog yard; the dogs here are trained to alert the town at the first whiff of a polar bear (although you can observe them from a safe distance).

Ny-Ålesund doesn’t have overnight accommodations for tourists, but it does offer a free museum and the world’s northernmost gift shop, along with many cultural artifacts, including remnants of the mining town, and stunning glacier views.

The town is part of the icy Svalbard archipelago, known for its five-month-long polar night and excellent opportunities to view the northern lights. Most visitors stay in Longyearbyen, which has a significantly higher population of about 2,400, along with schools, churches, a grocery store, and a few breweries. Longyearbyen also offers various lodging options, from luxury hotels to remote cabins. However, visiting can be logistically challenging — flights typically run to Longyearbyen from Oslo only three days per week, and flights from there to Ny-Ålesund run twice weekly.

Salt lake with turquoise water and white salt on the shore near Siwa oasis.
Credit: Sun_Shine/ Shutterstock

A Remote Egyptian Oasis Has Its Own Language

To reach the Siwa Oasis and its 200 springs and thousands of palm and olive trees, you'll have to travel 350 miles through the desert southwest of Cairo, Egypt. Despite Siwa being a well-established — albeit somewhat hard-to-reach — tourism destination, the culture and language that evolved in this isolated location remain dominant among the local Berber peoples. Around 20,000 people speak the Siwi language, a dialect of the Tamazight language spoken across North Africa, and it is much more common in homes than Arabic. However, Siwi is not taught in schools, to the concern of Siwa residents and language preservationists. (The U.N. has also classified the language as "endangered.")

It can take up to 12 hours to reach the Siwa Oasis by car or bus from Cairo, or three hours from the nearest Egyptian airport, Mersa Matruh. There’s plenty to see once you arrive: Cleopatra’s Spring, a large stone pool with surrounding cafes, is the most famous of the many springs in the oasis. The remains of the Temple of the Oracle, built in the sixth century BCE, are also a must-see. Visitors can rent bikes or even go sandboarding on the surrounding dunes.

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.

Original photo by fcafotodigital/ iStock

Celebrating the winter holiday season often includes good company and good drinks, many of which — like eggnog — are in high demand only during the last month of the year. Not every Yuletide drink started as a seasonal beverage, though many of these traditional libations can’t shake the association. Here are the backstories on eight popular holiday drinks and how they came to be.

Traditional winter eggnog in glass mug with milk.
Credit: ARVD73/ Shutterstock

Eggnog

Eggnog may be the most popular Yuletide beverage, with a history dating back to the Middle Ages. Many food historians believe modern eggnog is a descendant of posset, a milky, ale-like drink served warmed in medieval Britain. By the 13th century, posset had become popular among monks and was used in celebrations and toasts as a nod to good health and prosperity, since it contained sherry, milk, and eggs (all foods eaten by the wealthy). Sherry was eventually swapped for rum in the American colonies, though some early versions, like George Washington’s personal recipe, included bourbon whiskey instead.

Close-up of the ingredients of homemade Wassail.
Credit: Brent Hofacker/ Shutterstock

Wassail

Wishing someone "waes hael" — roughly, "be in good health" — is how the spiced and spiked wassail drink got its name. Celebrated on Twelfth Night (January 5), the Anglo-Saxon tradition of wassailing was meant either to ensure a good harvest in the new year or to share goodwill and blessings. In the case of the latter, members of the ruling class would serve wassail — a hot spiced drink made from cider, ale, or wine — and wish their guests good health and well-being in the year ahead. Over time, wassailing celebrations went mobile, with groups going house to house with a bowl of the beverage, singing songs, and spreading holiday cheer.

A bartender preparing classic hot toddy cocktail.
Credit: Mateusz Gzik/ Shutterstock

Hot Toddy

National Hot Toddy Day is marked on January 11, though it’s unclear where the warmed drink originated. Some historians believe the beverage was first served in India, where written descriptions from 1786 describe the “taddy” as an alcoholic drink made from liquor, hot water, sugar, and spices. Other lore suggests hot toddies were the invention of a doctor, who blended brandy, sugar, and cinnamon together to help patients battle their colds.

Hot buttered rum cocktail with cinnamon.
Credit: Oksana Mizina/ Shutterstock

Hot Buttered Rum

Hot buttered rum — a warmed drink made with rum, spices, hot water, and butter — is a cousin to the hot toddy, created in colonial America at a time when alcohol was widely believed to have medicinal benefits. Rum was the preferred drink of the colonists — who, by some estimates, consumed 3.7 gallons per person each year — in part because it was made from molasses, a sweetener that was easier and cheaper to obtain than refined sugar. Colonists believed rum was nutritious and that it could fortify the body, particularly during the coldest months of the year, though historians are unsure how butter became an ingredient in the drink.

Friends drinking delicious mulled wine.
Credit: Dasha Petrenko/ Shutterstock

Mulled Wine

Many historians believe the ancient Greeks were the inventors of mulled wine, creating the warmed and spiced drink they called ypocras as a way to use leftover wine. The ancient Romans followed suit, calling their version conditum paradoxum, and blending wine with honey and spices as a way to preserve the alcohol for long journeys. By the Middle Ages, mulled wine had become popular throughout Europe, because people believed the spices made the drink healthier. The libation's association with the winter holidays was cemented in the 1800s, thanks to Charles Dickens' inclusion of a type of mulled wine in A Christmas Carol.

Puerto Rico coconut liquor drink called Coquito.
Credit: Mike Herna/ Shutterstock

Coquito

Coquito, a holiday drink served in Puerto Rican communities, translates to "little coconut," an apt description since it contains coconut milk and cream, along with spices, sweetened condensed milk, and rum. It's possible that the drink was introduced to the Caribbean island by Spanish colonists, though many people believe the chilled, creamy beverage is an invention that first appeared around the early 1900s. By the 1950s, coquito recipes made their way into cookbooks, and today the drink is so synonymous with the Christmas season that it has its own day of celebration — December 21.

Copper cup with Moscow Mule cocktail.
Credit: Emmanuel Lozano/ Shutterstock

Moscow Mule

Compared to other classic holiday drinks, Moscow mules are relatively new; the cocktails containing vodka, lime juice, and ginger beer are less than 100 years old. The recipe has been attributed to multiple people over the years, but the most widely circulated story claims it originated with John Martin, president of alcohol producer Heublein, which acquired the rights to Smirnoff Vodka in the late 1930s. Martin is said to have co-created the Moscow mule in an attempt to drum up vodka sales, with the help of Jack Morgan, a California bar owner who was trying to market his unpopular ginger beer concoction. The duo released their drink recipe in 1941, not only popularizing the beverage but also serving it in its now-signature copper mugs — which, cocktail mythology suggests, may have come from a third person looking to popularize their copper cups.

People clinking glasses with champagne.
Credit: New Africa/ Shutterstock

Champagne

Celebrating New Year’s Eve with a Champagne toast didn’t become a tradition until the late 19th or early 20th century, and it was mostly thanks to restaurant owners. Prior to the 1800s, it wasn’t common to mark the start of a new year by staying up past midnight, and in France, Champagne was typically reserved for state affairs, used to toast diplomatic meetings, coronations, and treaties. Following the French Revolution, however, Champagne became accessible to all classes, and the wealthy who fled the country helped create a market for the bubbly outside of France. By the mid-1800s, Champagne and sparkling wine sales surged in the U.S., and by the early 1900s, many restaurateurs provided it as the only refreshment option for New Year’s Eve meals, linking the drink to the holiday for a century to come.

Nicole Garner Meeker
Writer

Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.

Original photo by sarayut_sy/ Shutterstock

Humans have explored the deepest points of the oceans and the highest peaks on Earth, not to mention the surface of the moon. But our planet remains a source of surprising discoveries. In the first half of 2023, researchers identified dozens of species previously unknown to science, from tiny insects to amphibians, fish, and mammals. Here are just a handful of the amazing new fauna we’ve recently met.

Close-up of DiCaprio's snake.
Credit: © Alejandro Arteaga/ CC BY 4.0 (cropped)

DiCaprio’s Snail-Eating Snake (Sibon irmelindicaprioae)

This red-eyed, strikingly patterned species of snail-eating snake makes its home in the treetops of rainforests in Colombia and Panama — but it also has a connection to Hollywood. Noted environmentalist Leonardo DiCaprio named the rare reptile after his mom, Irmelin, in an effort to highlight the risks it faces. (The actor partnered with the Nature and Culture International nonprofit to draw awareness to the new species.) DiCaprio’s snail-eating snake and four other new snake species, described in February 2023 in the journal ZooKeys, are threatened by rampant gold mining in the rainforest, which destroys the leafy cover the snakes need to survive.

Dorsal view of a male Mindanao gymnure skin specimen.
Credit: The Natural History Museum/ Alamy Stock Photo

Eastern Mindanao Gymnure (Podogymnura intermedia)

A group of zoologists from Chicago’s Field Museum, the Philippines, and elsewhere found a new species of gymnure, a hedgehog-like mammal with soft fur and a long probing nose, that lives only on two mountains on the Philippines’ second-largest island. P. intermedia’s habitat encompasses high-elevation forests on Mount Hamiguitan and Mount Kampalili, two peaks in the little-explored Eastern Mindanao Biodiversity Corridor, which supports numerous endemic plant and animal species. The researchers described the new gymnure — also known as a moonrat — in the journal Zootaxa in January 2023.

Tinker reed frog in Kwazulu, Natal.
Credit: Heinnie Prinsloo/ Shutterstock

A New Spiny-Throated Reed Frog (Hyperolius ukaguruensis)

According to a February 2023 report in the journal PLOS ONE, DNA analysis confirmed this diminutive amphibian, native to Tanzania’s Ukaguru Mountains, is a new species of spiny-throated reed frog. Unlike most of the world’s frogs, H. ukaguruensis doesn’t croak — and scientists are not sure how the creatures communicate with one another. Described as “golden greenish-brown” and found in dense swamps, the frog may already be at risk from human exploitation of its forested home.

Sketch view of a Psychropotes longicauda.
Credit: PF-(usna1)/ Alamy Stock Photo

A “Gummy Squirrel” (Psychropotes longicauda)

In a huge swath of the Pacific between Hawaii and Mexico, scientists surveyed areas of the deep ocean and discovered more than 5,000 species of sponges, arthropods, worms, sea urchins, and other invertebrates new to science. One of them was Psychropotes longicauda, a sea cucumber that lives at a depth of 16,000 feet and is nicknamed a "gummy squirrel" due to its long curved tail. The findings, reported in the journal Current Biology in May 2023, shed light on a little-known part of the seabed that's also being eyed for deep-sea mining.

Sketch of a small-spotted catshark.
Credit: ilbusca/ iStock

A New Demon Catshark (Apristurus ovicorrugatus)

A new type of shark from a genus with a dramatic name was identified in May 2023. Demon catsharks (Apristurus) scuttle along the seabed, gobbling up benthic prey; the new species also has spooky, catlike eyes with glowing white irises. Fortunately, the scientists who discovered the fish didn't meet it in a dark alley — they happened upon an egg case with unusual ridges in a museum collection. They suspected it belonged to a novel catshark species, but couldn't test the theory without a DNA sample from a live specimen. By chance, a research vessel picked up a catshark carrying an identical egg case, allowing the scientists to confirm their hunch. The evidence was reported in the Journal of Fish Biology.

View of a European crayfish (Astacus astacus).
Credit: Eva Foreman/ Shutterstock

Stony Fork Crayfish (Cambarus lapidosus) and Falls Crayfish (Cambarus burchfielae)

Not all newly discovered species are as charismatic as a catshark. Two new species of crayfish — freshwater crustaceans resembling tiny brown lobsters, nicknamed “mudbugs” — were unearthed in the scenic Appalachian Mountains of western North Carolina in April 2023. Each is found in only one small stream system, according to the study in Zootaxa. The Stony Fork crayfish is named after its home waterway, while the Falls crayfish is endemic to the Lewis Fork; both streams represent the entire ranges of the two species. They bring the total number of crayfish species in the state to 51, yet there may be many more lurking under rocks.

Credit: Herps Unchained/ YouTube

Río Negro Stream Frog (Hyloscirtus tolkieni)

This novel amphibian’s colors caught the eyes of researchers as they bushwhacked through the Río Negro-Sopladora National Park in the Andes of central Ecuador. Larger than other members of the genus Hyloscirtus, the new species has a mottled grayish-green back, a yellow belly covered in black splotches, long speckled yellow toes, and rosy pink eyes. The discoverers named the multihued frog, which depends on fresh, clear-flowing mountain streams for its survival, after The Hobbit and The Lord of the Rings author J.R.R. Tolkien. “The amazing colors of the new species evoke the magnificent creatures that seem to only exist in fantasy worlds,” the authors wrote in ZooKeys.

Giant grasshopper in selective focus.
Credit: Ummi Hassian/ Shutterstock

Nelson’s Pouncer Grasshopper (Melanoplus nelsoni)

The Edwards Plateau of central Texas was once a vast grassland with hills formed by eroding limestone. When settlers established permanent farms and towns, the grassland changed into a scrubby landscape punctuated by groves of small trees. Though humans and their livestock have altered the land, scientists are still discovering new species in this biodiverse region. In June 2023, scientists announced they'd found seven new flightless grasshopper species, one of which they named for Texas icon Willie Nelson. The insect, dubbed "Nelson's Pouncer," measures less than an inch long and makes its home among the Ashe juniper forests of Texas Hill Country. The findings appeared in ZooKeys.
