From Eve’s metaphorical apple to the deadly flowers in folktales, poisonous plants have shaped human history and belief for a very long time. Somewhat alarmingly, many of the world’s most lethal plants are also widely grown in gardens and flower beds, made into jewelry, or cultivated for medicine. Here are a few plants you should think twice about getting near.
The manchineel is officially the world’s deadliest tree. Found amid mangrove forests in the Caribbean, Central America, and the Florida Keys, the manchineel produces sap that can cause severe blisters and blindness if it comes into contact with one’s skin or eyes. Rainwater dripping from its leaves, or smoke wafting from its burning wood, can induce sores and pain in mucous membranes. Its applelike fruit is also extremely toxic.
An account published in the medical journal BMJ described what happened after its author and a companion accidentally ate manchineel fruit: “We noticed a strange peppery feeling in our mouths, which gradually progressed to a burning, tearing sensation and tightness of the throat. The symptoms worsened over a couple of hours until we could barely swallow solid food because of the excruciating pain … Recounting our experience to the locals elicited frank horror and incredulity, such was the fruit’s poisonous reputation.”
The rosary pea gets its name from the frequent use of its dried berries as rosary beads, as well as in jewelry and even children’s toys. Though native to tropical Asia, the woody vine has been cultivated as an ornamental plant in Florida and Hawaii and is now considered invasive. Its shiny red seeds, uniform in size and each crowned with a black dot, produce a deadly toxin called abrin; ingesting one seed is enough to kill you. Symptoms of Abrus precatorius poisoning include bloody diarrhea, internal bleeding, nausea, severe vomiting, and abdominal pain.
Another highly toxic plant grown as an ornamental, castor bean features star-shaped leaves in greenish-bronze hues and small scarlet flowers on upright stalks, making it an exotic addition to gardens — despite being the source of ricin, one of the world’s deadliest poisons. All parts of the plant are toxic: Touching its leaves can cause painful skin rashes, while chewing the seeds or inhaling powdered seeds releases the poison. Numerous countries have used (or attempted to use) ricin in espionage or warfare. The U.S. military investigated its efficacy as a chemical weapon during both World Wars, and in 1978, an assassin — possibly a Soviet operative — injected a Bulgarian dissident writer with a ricin pellet on London’s Waterloo Bridge. The victim died four days later.
White snakeroot, a native wildflower with crowns of small white blooms on leafy stems, is found in meadows, backyards, pastures, and wooded areas across the eastern United States. It can grow almost anywhere, which makes it a particularly dangerous plant to humans. Livestock such as cows and goats can eat the poisonous leaves and root systems of the plant and pass a toxin called tremetol to the humans who drink their milk. Among early settlers, tremetol poisoning was known as milk sickness, the trembles, the staggers, or puking fever, among other colorful colloquialisms. Its symptoms included muscle tremors, weakness, cardiomyopathy, and difficulty breathing that often led to death. Nancy Hanks Lincoln, Abraham Lincoln’s mother, died of milk sickness in 1818.
Aptly named poison hemlock — which is actually part of the carrot family — closely resembles wild parsnip and parsley, so it may not be surprising that most cases of poisoning occur when foragers mistake the toxic plant for one of the edible ones. Its purple-spotted stems, leaves, seeds, and clusters of small white flowers contain a toxin called coniine that, when eaten, affects the nervous system and causes respiratory paralysis, eventually leading to death. It was used to execute prisoners in ancient Greece, the most famous victim being the philosopher Socrates in 399 BCE. Though it’s native to Europe (and elsewhere), poison hemlock was imported to the U.S. in the 19th century as an ornamental “fern” and is now found growing wild across the U.S. Poison hemlock is often confused with its even deadlier cousins, western water hemlock (Cicuta douglasii) and spotted water hemlock (Cicuta maculata), both of which are native to North America.
The name of this dangerous plant’s genus comes from Atropos, one of the three Fates in Greek mythology, who presided over death. Deadly nightshade contains tropane alkaloids that, if consumed, disrupt the body’s ability to regulate heart rate, blood pressure, digestion, and other involuntary processes, resulting in convulsions and death. This quality made it a handy tool for offing Roman emperors and Macbeth’s enemies. The effects of deadly nightshade poisoning are said to include sensations of flying, suggesting that the plant was the source of alleged witches’ “flying ointment” in early modern European folklore. Ironically, deadly nightshade is also the source of atropine, a drug used to treat low heart rate and even counteract the effects of eating toxic mushrooms.
Papaver somniferum produces opium, the source of morphine, heroin, oxycodone, codeine, and other narcotics — in other words, some of the most addictive substances on Earth. Opium poppies have been cultivated for medicinal purposes since at least 2100 BCE, according to a Sumerian clay tablet that is believed to contain the world’s oldest prescriptions, including one for opium. The Greek physicians Dioscorides and Galen, as well as the Muslim physician Avicenna, also wrote about opium’s therapeutic qualities. Opium-derived laudanum was the Victorian choice for calming crying babies, soothing headaches, or overcoming insomnia; in the U.K., its widespread use led to addiction crises dubbed “morphinomania” until laws restricted the sale of opium in the early 20th century. Today, it’s not a good idea to grow opium poppies even for ornamental purposes: The Drug Enforcement Administration even pressured Monticello’s gardeners to cease cultivation of Thomas Jefferson’s historical poppy plots in the early 1990s.
The meteorological conditions we refer to as the weather can be the source of some pretty serious myths and misconceptions. Some are simply funny superstitions (like using onions to predict the severity of the coming winter). Others thought to be hoaxes or hallucinations (like ball lightning) are now proven to be actual phenomena. Here are eight common myths about the weather — including some that actually have a grain of truth to them.
While everyone wishes it were true, this weather “fact” is false. Unfortunately, lightning can strike in the same location repeatedly — even during the same thunderstorm. This is especially true when it comes to tall objects, like TV antennas. For example, the Empire State Building is struck by lightning about 25 times per year.
Other common lightning myths include the idea that trees can provide safe shelter (your best bet is always to go indoors) and that touching a lightning victim might get you electrocuted. Fortunately, the human body does not store electricity — which means you can perform first aid on someone struck by lightning without that particular fear.
This one is both true and false. That’s because there are actually two types of waterspouts — those thin, rapidly swirling columns of air above water, sometimes seen in the Gulf of Mexico, Gulf Stream, and elsewhere.
The first is a “fair weather waterspout.” These form from the water up, move very little, and are typically almost fully developed by the time they’re visible. If they do move to land, they generally dissipate very quickly. “Tornadic waterspouts,” on the other hand, are exactly what their name suggests: tornadoes that form over water, or move from land to water. Associated with severe thunderstorms, tornadic waterspouts can produce large hail and dangerous lightning. If they move to dry land, the funnel will pick up dirt and debris, just as a land-formed tornado would.
It’s Not Safe to Use Your Cellphone During a Thunderstorm
It’s not safe to use a landline when thunder and lightning are making the skies dramatic, just like it’s not safe to use any other appliances that are plugged in. But an (unplugged) cellphone should be fine, so long as you’re safely indoors. This myth may have arisen from situations in which people were struck by lightning and their cellphones melted, but it’s not because their cellphone “attracted” the lightning in any way. Of course, plugging in your cellphone (or laptop) to charge may present a danger.
The Groundhog Day tradition continues every February 2, when the members of the Punxsutawney Groundhog Club trek to Gobbler’s Knob, seeking weather wisdom from a series of woodchucks, all named “Punxsutawney Phil.” If Phil emerges from his burrow and sees his shadow (in bright sunshine), supposedly winter will hang around for six more weeks. If the day is overcast: Yay, early spring! The whole event is based on old Celtic superstitions, though, and Phil’s “predictions” are only correct about 40% of the time — but at least he’s no longer eaten after making the call.
It’s a pretty rare event, but deep storm clouds filled with raindrops later in the day may scatter light in a way that makes the sky look green. Such storm clouds likely mean severe weather — thunder, lightning, hail, or even a tornado — is on its way, but it’s no guarantee of a twister per se. One thing’s for sure: It’s definitely not a sign that frogs or grasshoppers have been sucked into the sky by the storm, as people used to think.
It isn’t the rubber tires that can keep a person inside a car safe from a direct lightning strike; it’s the metal cage of the vehicle that conducts 300 million volts of electricity into the ground. If you can’t get to shelter during a thunderstorm and must be in your (hard-topped) car, keep the windows rolled up and your hands off the car’s exterior frame.
This saying has some truth to it. Spider webs are sensitive to humidity, absorbing moisture that can eventually cause their delicate strands to break. For this reason, most spiders will remain in place when rain is imminent. So it stands to reason (at least according to folklore) that if spiders are busily spinning their webs, they may know something that we don’t. In other words: Prepare for a beautiful day! (It’s also true that most spiders seek out damp places, so if you don’t want them taking up residence in your house, a dry home is less hospitable.)
It’s not “weather” in the sense of atmospheric conditions, but earthquakes can be a pretty dramatic show of the Earth’s forces. Many of us learned this “tip” in school. However, the reality is that it was more true of older, unreinforced structures. Today, doorways generally aren’t stronger than other parts of the house, and the door itself may hit you in an earthquake. You’re far safer underneath a table or desk, particularly if it’s away from a window. (The CDC offers additional earthquake safety tips.)
Whether created through purposeful experimentation or as the result of a happy accident, inventions have transformed our world. And they’re a rich source of fascinating facts, too — for example, did you know that the inventor of the stop sign couldn’t drive, or that a dentist helped create the cotton candy machine? Here are some of our greatest invention stories from around the site.
Mark Twain Invented Bra Clasps
The long-term uses for a product do not always materialize during the inventor’s lifetime. Such was the case with Mark Twain — the celebrated writer born Samuel Clemens — who filed a patent for a clothing accessory when he was 35 years old. Twain found wearing suspenders uncomfortable, so he came up with a device he called an “Improvement in Adjustable and Detachable Straps for Garments.” What he envisioned was a versatile two-piece strap — preferably elastic — that fastened with hooks. The hooks were inserted into a series of rows of small holes, chosen depending on how snug (or loose) the wearer wanted their garment. Twain thought this simple, gender-neutral tool could customize the fit of a wearer’s vests, shirts, pantaloons, or stays, a corset-like object that women wore under dresses. However, thanks to changing fashions, his garment straps were not produced for several decades. In 1914, four years after Twain’s death and long after his hard-won patent expired, Mary Phelps Jacob patented the first bra from handkerchiefs and ribbon. When she sold her patent to the Warner Brothers Corset Company, they added Twain’s straps to the back to keep the garment in place.
Chinese Takeout Containers Were Invented in America
In the U.S., plenty of Chinese restaurant fare features produce that doesn’t grow in China, such as broccoli. Thus it shouldn’t be terribly surprising that Americans also took liberties with how Chinese food is packaged. While plastic containers are utilized to hold delivery and takeout dishes in China, diners in the U.S. prefer a folded, six-sided box with a slim wire handle. Chicago inventor Frederick Weeks Wilcox patented this “paper pail” on November 13, 1894. Borrowing from Japanese origami, Wilcox elected to make each pail from a single piece of paper. This decision eventually proved critical in the transportation of Chinese cuisine, lessening the likelihood of leaks and allowing steam from hot foods to escape through the top folds. During the 1970s, a graphic designer at Bloomer Brothers’ successor, the Riegel Paper Corporation, embellished the boxes to include a pagoda and the words “Thank You” and “Enjoy” — all in red, a color that represents luck in China. The Riegel Paper Corporation evolved into Fold-Pak, the world’s top producer of takeout containers, which assembles 300 million cartons per year. Composed of solid-bleached-sulfate paperboard and boasting an interior polycoating, each food carrier expands into a handy plate if you remove the wire handle.
Bubble Wrap is one of the 20th century’s most versatile — and dare we say most beloved — inventions. The pliable, air-pocketed sheets have been used for decades to insulate pipes, protect fragile items, and even make dresses. And that’s not to mention the fascination some people have with popping the bubbles. But when it was first created in 1957 in New Jersey, inventors Al Fielding and Marc Chavannes had a different vision in mind for their ingenious padding: home decor. The pioneering duo hoped their creation — which trapped air between two shower curtains run through a heat-sealing machine — would serve as a textured wallpaper marketed to a younger generation with “modern” taste. The initial idea was a flop, however, and it took another invention of the time — IBM’s 1401 model computer — to seal Bubble Wrap’s fate as a packing material.
Under the company name Sealed Air, Fielding and Chavannes approached IBM about using the air-filled plastic in shipping containers, replacing traditional box-fillers like newspaper, straw, and horsehair. After passing the test of transporting delicate electronics, Sealed Air became a shipping industry standard. Over time, Fielding and Chavannes were granted six patents related to Bubble Wrap manufacturing, and Sealed Air continues to create new versions of the remarkable wrap — including a cheaper, unpoppable version that’s popular with cost-minded shippers (but not so much with bubble-popping enthusiasts).
The Inventor of the Stop Sign Never Learned How To Drive
Few people have had a larger or more positive impact on the way we drive than William Phelps Eno, sometimes called the “father of traffic safety.” The New York City-born Eno — who invented the stop sign around the dawn of the 20th century — once traced the inspiration for his career to a horse-drawn-carriage traffic jam he experienced as a child in Manhattan in 1867. “There were only about a dozen horses and carriages involved, and all that was needed was a little order to keep the traffic moving,” he later wrote. “Yet nobody knew exactly what to do; neither the drivers nor the police knew anything about the control of traffic.”
After his father’s death in 1898 left him with a multimillion-dollar inheritance, Eno devoted himself to creating a field that didn’t otherwise exist: traffic management. He developed the first traffic plans for New York, Paris, and London. In 1921, he founded the Washington, D.C.-based Eno Center for Transportation, a research foundation on multimodal transportation issues that still exists. One thing Eno never did, however, was learn how to drive. Perhaps because he had such extensive knowledge of them, Eno distrusted automobiles and preferred riding horses. He died in Connecticut at the age of 86 in 1945, having never driven a car.
Love Seats Were Originally Designed to Fit Women’s Dresses, Not Couples
The two-seater upholstered benches we associate with cozy couples were initially crafted with another duo in mind: a woman and her dress. Fashionable attire in 18th-century Europe had reached voluminous proportions — panniers (a type of hooped undergarment) were all the rage, creating a wide-hipped silhouette that occasionally required wearers to pass through doors sideways. Upper-class women with funds to spare on trending styles adopted billowing silhouettes that often caused an exhausting situation: the inability to sit down comfortably (or at all). Ever astute, furniture makers of the period caught on to the need for upsized seats that would allow women with such large gowns a moment of respite during social calls.
As the 1800s rolled around, so did new dress trends. Women began shedding heavy layers of hoops and skirts for a slimmed-down silhouette that suddenly made small settees spacious. The midsize seats could now fit a conversation companion. When sweethearts began sitting side by side, the bench seats were renamed “love seats,” indicative of how courting couples could sit together for a (relatively) private conversation in public. The seat’s new use rocketed it to popularity, with some featuring frames that physically divided young paramours. While the small sofas no longer act as upholstered chaperones, love seats are just as popular today — but mostly because they fit well in small homes and apartments.
On January 5, 1858, Ezra J. Warner of Connecticut invented the can opener. The device was a long time coming: Frenchman Nicolas Appert had developed the canning process in the early 1800s in response to a 12,000-franc prize the French government offered to anyone who could come up with a practical method of preserving food for Napoleon’s army. Appert devised a process for sterilizing food by half-cooking it, storing it in glass bottles, and immersing the bottles in boiling water, and he claimed the award in 1810. Later the same year, Englishman Peter Durand received the first patent for preserving food in actual tin cans — which is to say, canned food predates the can opener by nearly half a century.
Before Warner’s invention, cans were opened with a hammer and chisel — a far more time-consuming approach than the gadgets we’re used to. Warner’s tool (employed by soldiers during the Civil War) wasn’t a perfect replacement, however: It used a series of blades to puncture and then saw off the top of a can, leaving a dangerously jagged edge. As for the hand-crank can opener most commonly used today, that wasn’t invented until 1925.
In Benjamin Franklin’s time — and for centuries before — lightning was a fear-inspiring phenomenon, known for starting fires, destroying buildings, and injuring people and livestock. Because little was known about how lightning worked, some people undertook unusual preventative measures against it, like ringing church bells to avert lightning strikes (even though that sent bell ringers dangerously high into steeples during storms). Perhaps that was why Franklin, the prolific inventor and founding father, was so captivated by lightning and devoted much of his scientific studies to experimenting with electricity. In 1752, Franklin undertook his now-storied kite exercise during a storm, correctly surmising that lightning must be electricity and that the mysterious energy was attracted to metal (though some historians have questioned whether the experiment actually ever happened).
With this concept in mind, Franklin designed the Franklin Rod, crafted from a pointed iron stake. Heralded as a new, lifesaving invention that could guide the electrical currents from lightning into the ground, lightning rods sprang up atop roofs and church steeples throughout the American colonies and Britain, and some were even anchored to ship masts to prevent lightning strikes at sea. Initially, some clergy were unwelcoming of the protective devices, believing lightning rods interfered with the will of the heavens; Franklin brushed off the criticism and continued his exploration of electricity, even developing some of the language — like the word “battery” — we use to talk about the force today.
When folks learn that one of cotton candy’s creators cleaned teeth for a living, jaws inevitably drop. Born in 1860, dentist William J. Morrison became president of the Tennessee State Dental Association in 1894. But Morrison was something of a polymath and a dabbler, and his varied interests also included writing children’s books and designing scientific processes: He patented methods for both turning cottonseed oil into a lard substitute and purifying Nashville’s public drinking water. In 1897, Morrison and his fellow Nashvillian — confectioner John C. Wharton — collaborated on an “electric candy machine,” which received a patent within two years. Their device melted sugar into a whirling central chamber and then used air to push the sugar through a screen into a metal bowl, where wisps of the treat accumulated. Morrison and Wharton debuted their snack, “fairy floss,” at the Louisiana Purchase Exposition of 1904 (better known as the St. Louis World’s Fair). Over the seven-month event, at least 65,000 people purchased a wooden box of the stuff, netting Morrison and Wharton the modern equivalent of more than $500,000.
Cool Whip, Pop Rocks, and Tang Were Invented by the Same Person
Growing up in Minnesota, William A. Mitchell spent his teenage years as a farmhand and carpenter, working to fund his college tuition. It took a few years for the future inventor to venture into food production after graduation, chemistry degree in hand; he first worked at Eastman Kodak creating chemical developers for color film, as well as at an agricultural lab. He then went to work at General Foods in 1941, contributing to the war effort by creating a tapioca substitute for soldier rations. In 1956, his quest to create a self-carbonating soda led to the accidental invention of Pop Rocks. A year later, he developed Tang Flavor Crystals, which skyrocketed to popularity after NASA used the powder in space to remedy astronauts’ metallic-tasting water. And by the time he’d retired from General Foods in 1976, Mitchell had also developed a quick-set gelatin, powdered egg whites, and a whipped cream alternative — the beloved Cool Whip that now dominates grocery store freezers.
In the mid-1960s, chemist Stephanie Kwolek was working in a Wilmington, Delaware, research lab for the textile division of the chemical company DuPont, which had invented another “miracle” fiber called nylon 30 years earlier. Fearing a looming gas shortage — one that arrived in earnest in 1973 — DuPont was searching for a synthetic material that could make tires lighter and stronger, replacing some of their steel and improving overall fuel efficiency. One day, Kwolek noticed that a particular batch of dissolved polyamides (a type of synthetic polymer) had formed a cloudy, runny consistency rather than the usual clear, syrup-like concoction. Although colleagues told Kwolek to toss it out, she persisted in investigating this strange mixture closely, discovering that it could be spun to create fibers of an unusual stiffness. Kevlar was born. DuPont introduced the “wonder fiber” in 1971, and the material began undergoing tests in ballistic vests almost immediately. By one estimate, it has saved at least 3,000 police officers from bullet wounds in the years since. Despite its myriad applications, Kevlar still delivers on its original purpose as an automotive component, whether baked into engine belts, brake pads, or yes, even tires.
Humans Invented Alcohol Before We Invented the Wheel
The wheel is credited as one of humankind’s most important inventions: It allowed people to travel farther on land than ever before, irrigate crops, and spin fibers, among other key benefits. Today, we often consider the wheel to be the ultimate civilization game-changer, but it turns out, creating the multipurpose apparatus wasn’t really on humanity’s immediate to-do list. Our ancient ancestors worked on other ideas first: boats, musical instruments, glue, and even alcohol. The oldest evidence of booze comes from China, where archaeologists have unearthed 9,000-year-old pottery coated with beer residue; in contrast, early wheels didn’t appear until around 3500 BCE, in what is now Iraq. But even when humans began using wheels, they had a different application — rudimentary versions were commonly used as potter’s wheels, a necessity for mass-producing vessels that could store batches of brew (among other things).
Writing Systems Were Independently Invented at Least Four Times
Much human innovation is a collective effort — scientists, innovators, and artisans building off the work of predecessors to develop some groundbreaking technology over the course of many years. But in the case of writing systems, scholars believe humans may have independently invented them four separate times. That’s because none of these writing systems show significant influence from previously existing systems, or similarities among one another. Experts generally agree that the first writing system appeared in the Mesopotamian society of Sumer in what is now Iraq. Early pictorial signs appeared some 5,500 years ago, and slowly evolved into complex characters representing the sounds of the Sumerian language. Today, this ancient writing system is known as cuneiform.
However, cuneiform wasn’t a one-off innovation. Writing systems then evolved in ancient Egypt, in the form of hieroglyphs, around 3200 BCE — only an estimated 250 years after the first examples of cuneiform. The next place that writing developed was China, where the Shang dynasty set up shop along the Yellow River and wrote early Chinese characters on animal bones during divination rituals around 1300 BCE. Finally, in Mesoamerica, writing began to take shape around 900 BCE, and influenced ancient civilizations like the Zapotecs, Olmecs, Aztecs, and Maya. Sadly, little is known about the history of many Mesoamerican languages, as Catholic priests and Spanish conquistadors destroyed a lot of the surviving documentation.
While most grade school students can tell you that the first airplane was flown by Orville and Wilbur Wright in 1903, the origins of the parachute go back further — significantly further, depending on your criteria. The Shiji, composed by Chinese historian Sima Qian, describes how as a young man the legendary Emperor Shun jumped from the roof of a burning building, using bamboo hats as a makeshift parachute. Leonardo da Vinci famously sketched a design for a pyramid-shaped parachute made from linen around 1485. Approximately 130 years later, Venetian Bishop Fausto Veranzio unveiled his own design in his Machinae Novae, and allegedly even tested the contraption himself.
But the first modern parachutist is generally considered to be France’s Louis-Sebastien Lenormand. Along with actually coining the term “parachute,” Lenormand initially tested gravity by leaping from a tree with two umbrellas, before flinging himself from the Montpellier Observatory with a 14-foot parachute in December 1783. Fourteen years later, another Frenchman, André-Jacques Garnerin, delivered the first truly death-defying parachute exhibition when he plunged from a hydrogen balloon some 3,200 feet above Paris, and rode out the bumpy descent under his 23-foot silk canopy to a safe landing.
Bagpipes Were Invented in the Middle East, Not Scotland
The reedy hum of bagpipes calls to mind tartan attire and the loch-filled lands of Scotland, which is why it might be surprising to learn that the wind-powered instruments weren’t created there. Music historians believe bagpipes likely originated in the Middle East, where they were first played by pipers thousands of years ago. The earliest bagpipe-like instruments have been linked to the Egyptians around 400 BCE, though a sculpture from the ancient Hittites — an empire centered in present-day Turkey — from around 1000 BCE may also resemble bagpipes.
Bagpipes slowly made their way throughout Europe, occasionally played by notable names in history like Roman Emperor Nero, and becoming widespread enough to be depicted in medieval art and literature. By the 15th century they had made their way to Scotland, where Highland musicians added their own influence. By some accounts, they modified the pipes to their modern appearance by adding more drones, which emit harmonized sounds. Highland musicians also began the practice of hereditary pipers, passing the knowledge and skill of bagpiping down through families, along with the duty of playing for Scottish clan leaders. All pipers of the time learned music by ear and memorization, a necessity considering the first written music for the pipes may not have appeared until the 18th century. One family — the MacCrimmons of the Scottish island of Skye — was particularly known for its influence in bagpiping, with six generations continuing the art, composing music, and teaching through their own piping college in the 17th and 18th centuries.
Chocolate Chips Were Invented After Chocolate Chip Cookies
Ruth Wakefield was no cookie-cutter baker. In fact, she is widely credited with developing the world’s first recipe for chocolate chip cookies. In 1937, Wakefield and her husband, Kenneth, owned the popular Toll House Inn in Whitman, Massachusetts. While mulling new desserts to serve at the inn’s restaurant, she decided to make a batch of Butter Drop Do pecan cookies (a thin butterscotch treat) with an alteration, using semisweet chocolate instead of baker’s chocolate. Rather than melting in the baker’s chocolate, she used an ice pick to cut the semisweet chocolate into tiny pieces. Upon removing the cookies from the oven, Wakefield found that the semisweet chocolate had held its shape much better than baker’s chocolate, which tended to spread throughout the dough during baking to create a chocolate-flavored cookie. These cookies, instead, had sweet little nuggets of chocolate studded throughout. The recipe for the treats — known as Toll House Chocolate Crunch Cookies — was included in a late 1930s edition of her cookbook, Ruth Wakefield’s Tried and True Recipes.
The cookies were a huge success, and Nestlé hired Wakefield as a recipe consultant in 1939, the same year they bought the rights to print her recipe on packages of their semisweet chocolate bars. To help customers create their own bits of chocolate, the bars came pre-scored in 160 segments, with an enclosed cutting tool. Around 1940 — three years after that first batch of chocolate chip cookies appeared fresh out of the oven — Nestlé began selling bags of Toll House Real Semi-Sweet Chocolate Morsels, which some dubbed “chocolate chips.” By 1941, “chocolate chip cookies” was the universally recognized name for the delicious treat. An updated version of Wakefield’s recipe, called Original Nestlé Toll House Chocolate Chip Cookies, still appears on every bag of morsels. For her contributions to Nestlé, Wakefield reportedly received a lifetime supply of chocolate.
A dessert accidentally created by a California kid has managed to stick around for over a century. One frigid night in the San Francisco Bay Area, young Frank Epperson took a glass of water and mixed in a sweet powdered flavoring using a wooden stirrer. He left the concoction on his family’s back porch overnight, and by morning, the contents had frozen solid. Epperson ran hot water over the glass and used the stirrer as a handle to free his new creation. He immediately knew he’d stumbled on something special, and called his treat an Epsicle, a portmanteau of his last name and “icicle.” Throughout his life, Epperson claimed that this experiment occurred in 1905, when he was 11 years old. While most publications agree, the San Francisco Chronicle’s website counters that local temperatures never reached freezing in 1905; they did, however, in nearby Oakland, where the Epperson family moved around 1907, meaning the fateful event may have happened a few years later.
In 1922, Epperson brought his frozen treat — which had since become beloved by friends and neighbors — to the Fireman’s Ball at Neptune Beach amusement park. It was a hit. Within two years, he had patented his ice pop on a wooden stick. Around the same time he began referring to his desserts as “popsicles” (a play on his children’s term for their father’s creation, “pop’s sicle”), but the word was absent from his patent, and a Popsicle Corporation quickly established itself elsewhere. “I should have protected the name,” Epperson later lamented. Although he briefly set up a royalty arrangement with the Popsicle Corporation, by 1925 he sold his patent rights to the Joe Lowe Company, which became the exclusive sales agent for the Popsicle Corporation. Over the decades, Epperson’s naming oversight cost him considerable profits.
Decades before doctors began to publicize the harmful effects of cigarettes, a 30-year-old Austrian executive decided to invent a refreshing alternative. In 1927, Eduard Haas III was managing his family’s baking goods business — the Ed. Haas Company — when he expanded the product line to include round, peppermint-flavored treats known as PEZ Drops. The German word for peppermint is pfefferminz, and Haas found the name for his new candies by combining the first, middle, and last letters of the German term. Clever advertising built national demand for the candy, which adopted its iconic brick shape in the 1930s and eventually nixed the “Drops.” PEZ were packaged in foil paper or metal tins until Haas hired engineer Oscar Uxa to devise a convenient way of extracting a tablet single-handedly. Uxa’s innovation — a plastic dispenser with a cap that tilted backward as springs pushed the candy forward — debuted at the 1949 Vienna Trade Fair.
A U.S. patent for the dispenser was obtained in 1952, but Americans of the day showed little interest in giving up smoking. So PEZ replaced the mint pellets with fruity ones and targeted a new demographic: children. In 1957, after experimenting with pricey dispensers shaped like robots, Santa Claus, and space guns, PEZ released a Halloween dispenser that featured a three-dimensional witch’s head atop a rectangular case. A Popeye version was licensed in 1958, and since then PEZ has gone on to produce some 1,500 different novelty-topped dispensers. An Austrian original that was revolutionized in America, PEZ is now enjoyed in more than 80 countries — and it’s still owned by the Ed. Haas Company.
Jennifer Lopez Inspired the Creation of Google Images
Jennifer Lopez has worn a lot of memorable dresses on a lot of red carpets over the years, but only one broke the internet to such an extent that it inspired the creation of Google Images. The multi-hyphenate entertainer first wore the plunging leaf-print silk chiffon Versace gown to the 2000 Grammy Awards in L.A., which former Google CEO Eric Schmidt later revealed led to “the most popular search query we had ever seen.” The problem was that the then-two-year-old search engine “had no surefire way of getting users exactly what they wanted: J.Lo wearing that dress.” Thus, in July 2001, “Google Image Search was born.”
Two decades later, to the delight of everyone in attendance, Lopez also closed out Versace’s Spring 2020 show in Milan by wearing a reimagined version of the dress, after other models walked the catwalk to the tune of her hit 2000 single “Love Don’t Cost a Thing.” After a projected montage of Google Images searches for the original dress and a voice saying, “OK, Google. Now show me the real jungle dress,” J.Lo herself appeared in an even more provocative and bedazzled rendition of the gown.
Silly Putty Was Developed During World War II as a Potential Rubber Substitute
World War II ran on rubber. From tanks to jeeps to combat boots, the Allied Forces needed an uninterrupted flow of rubber to supply fresh troops and vehicles to the front lines. Then, in late 1941, Japan invaded Southeast Asia — a key supplier of America’s rubber — and what was once a plentiful resource quickly became scarce. Americans pitched in, donating household rubber (think old raincoats and hoses) to help the war effort, but it wasn’t enough. So scientists set to work finding an alternative. A pair working separately at Dow Corning and General Electric independently developed a silicone oil/boric acid mixture that appeared promising. It was easily manipulated and could even bounce on walls, but in the end its properties weren’t similar enough to rubber to be useful in the war.
U.S. government labs eventually found a workable rubber substitute using petroleum, but the previously developed “nutty putty” stuck around until it fell into the hands of advertising consultant Peter Hodgson. Sensing an opportunity, Hodgson bought manufacturing rights, renamed it “Silly Putty,” and stuck some of it inside plastic eggs just in time for Easter 1950. But it wasn’t until Silly Putty’s mention in an issue of The New Yorker later that year that sales exploded, with Hodgson eventually selling millions of units of the strange, non-Newtonian fluid (that is, a fluid whose viscosity changes under stress; ketchup and toothpaste are other examples). Since then, Silly Putty has found various serious uses, from teaching geology to physical therapy, and even took a ride on Apollo 8 in 1968, when it was used to keep the astronauts’ tools secure. A pretty impressive résumé for a substance that was initially considered a failure.
Marshmallows Were Invented as a Divine Food in Ancient Egypt
Today marshmallows are largely reserved for campfires and hot chocolate, but in ancient Egypt they were a treat for the gods. The ancients took sap from the mallow plant (which grows in marshes) and mixed it with nuts and honey. Scholars aren’t sure what the treat looked like, but they know it was thought suitable only for pharaohs and the divine. It wasn’t until 19th-century France that confectioners began whipping the sap into the fluffy little pillows we know and love today.
Amazing inventions come to curious minds, and that’s certainly the case for Swiss engineer George de Mestral. While on a walk in the woods with his dog, de Mestral noticed how burrs from a burdock plant stuck to his pants as well as his dog’s fur. Examining the burrs under a microscope, de Mestral discovered that their tips weren’t straight (as they appeared to the naked eye) but instead ended in tiny hooks that could grab hold of the fibers in his clothing. It took nearly 15 years for de Mestral to recreate what he witnessed under that microscope, but he eventually created a product that both stuck together securely and could be easily pulled apart. In 1954, he patented a creation he dubbed “Velcro,” a portmanteau of the French words velours (“velvet”) and crochet (“hook”).
The First Home Security System Was Developed by a Nurse
If you’ve ever checked in on your home from vacation or caught a porch pirate making off with a recent delivery, you have Marie Van Brittan Brown to thank. As a nurse in New York City in the 1960s, Brown worked irregular shifts that often had her coming home at odd hours while her husband, an electronic technician, was away. Concerned about crime in their neighborhood and a lack of help from law enforcement, the Browns worked together to create the first home security system.
Marie’s design was extensive: It featured a motorized camera that could be repositioned among a set of peepholes, a TV screen for viewing outside in real-time (one of the earliest examples of closed-circuit TV or CCTV), and a two-way microphone for speaking to anyone outside her apartment. The security system also included a remote-controlled door lock and an alarm that could reach a security guard. (One newspaper account of the Browns’ invention suggested the alarm could be used by doctors and businesses to prevent or stop robberies.) Brown was awarded a patent for her thoroughly designed security system in 1966 but never pursued large-scale manufacturing of her product. Regardless, she still receives credit for her ingenuity, with a significant number of security system manufacturers recognizing her device as the grandmother of their own security tools.
An Extended Vacation Led to the Accidental Discovery of Penicillin
If you ever need to stress to your boss the importance of vacation, share the tale of penicillin. On September 3, 1928, Scottish physician Alexander Fleming returned to his laboratory at St. Mary’s Hospital in London after a vacation of more than a month. Sitting next to a window was a Petri dish filled with the infectious bacteria known as staphylococcus — but it’s what Fleming found in the dish alongside the bacteria that astounded him.
Inside the Petri dish was a fungus known as penicillium, or what Fleming at the time called “mould juice.” This particular fungus appeared to stop staphylococcus from spreading, and Fleming pondered whether its bacteria-killing powers could be harnessed into a new kind of medicine. Spoiler: They could. In the years that followed, penicillin was developed into the world’s first mass-produced antibiotic, and Fleming shared the 1945 Nobel Prize in Physiology or Medicine for his accidental yet world-changing discovery. “I did not invent penicillin. Nature did that,” Fleming once said. “I only discovered it by accident.”
Prosthetic Appendages Have Been in Use Since at Least 950 BCE
Scientists knew that the ancient Egyptian civilization was advanced, but they didn’t know just how advanced until they tested a prosthetic toe that came from the foot of a female mummy from about 950 to 710 BCE. While false body parts were often attached to mummies for burial purposes, experts agree that this toe was in fact used while the person was still alive. The wear and tear on the three-part leather and wood appendage (which was thought to be tied onto the foot or a sandal with string) proved that it was used to help the person walk, and tests using a replica of the toe fitted to a volunteer missing the same part of their foot showed that it significantly improved their gait in Egyptian-style sandals.
The Microwave Was Invented by Accident, Thanks to a Melted Chocolate Bar
The history of technology is filled with happy accidents. Penicillin, Popsicles, and Velcro? All accidents. But perhaps the scientific stroke of luck that most influences our day-to-day domestic life is the invention of the microwave oven. Today, 90% of American homes have a microwave, according to the U.S. Bureau of Labor Statistics, but before World War II, no such device — or even an inkling of one — existed.
During the war, Allied forces gained a significant tactical advantage by deploying the world’s first true radar system. The success of this system increased research into microwaves and the magnetrons (a type of electron tube) that generate them. One day circa 1946, Percy Spencer, an engineer and all-around magnetron expert, was working at the aerospace and defense company Raytheon when he stepped in front of an active radar set. To his surprise, microwaves produced from the radar melted a chocolate bar (or by some accounts, a peanut cluster bar) in his pocket. After getting over his shock — and presumably cleaning up — and conducting a few more experiments using eggs and popcorn kernels, Spencer realized that microwaves could be used to cook a variety of foods. Raytheon patented the invention a short time later, and by 1947, the company had released its first microwave. It took decades for the technology to improve, and prices to drop, before microwaves were affordable for the average consumer, but soon enough they grew into one of the most ubiquitous appliances in today’s kitchens.
Science has been able to shed light on many of life’s mysteries over the centuries, offering explanations for diseases, animal behavior, the cosmos, and more. We’ve come a long way from the days when life forms were thought to appear through spontaneous generation and bloodletting was used to cure almost any illness. But there still remain many scientific mysteries embedded in our daily lives. Here are five common occurrences that continue to defy explanations from the top scientific minds.
How Acetaminophen Works
You’d think that the accessibility of acetaminophen (Tylenol) as an over-the-counter painkiller would indicate a full understanding of its medicinal properties, but Big Pharma is still trying to figure this one out. Certainly scientists know the dangers of excessive doses, but exactly how the medication works to ease pain is still a mystery. It was once thought that acetaminophen functioned in the same manner as nonsteroidal anti-inflammatory drugs (NSAIDs) such as aspirin and ibuprofen, which suppress the enzymes that form pain-producing compounds in the central nervous system. However, further testing indicated that acetaminophen suppresses those enzymes only under certain chemical conditions in the body. Other researchers have examined the effects of acetaminophen on neurotransmission in the spinal cord, but a definitive mechanism remains elusive.
This one’s easy – cats purr because they’re happy you’re petting them, right? Except they also purr when they’re hungry, nervous, or in pain, so there are more complex matters to consider. One theory put forth by bioacoustician Elizabeth von Muggenthaler suggests that purring functions as an “internal healing mechanism,” as its low-frequency vibrations correspond to those used to treat fractures, edema, and other wounds. Additionally, since humans generally respond favorably to these soothing sounds, it’s possible that purring has evolved, in part, as a way for domesticated kitties to interact with their owners. And researchers at least believe they now know how purring happens – a “neural oscillator” in the cat brain is thought to trigger the constriction and relaxing of muscles around the larynx – so it may not be long before they home in on more precise reasons for this common, but still mysterious, form of feline communication.
It’s one of the great ironies of life that we supposedly never forget how to ride a bicycle yet lack a firm understanding of the mechanics that enable us to pull it off in the first place. Early attempts at rooting out answers gave rise to the “gyroscopic theory,” which credits the force created by spinning wheels with keeping bikes upright. This theory, however, was disproven in 1970 by chemist David Jones, who created a functional bike with a counter-rotating front wheel. Jones then floated his “caster theory,” which suggests that a bike’s steering axis, pointing ahead of where the front wheel meets the ground, produces a stabilizing “trail” similar to a shopping cart caster. However, this theory also has holes, as researchers demonstrated in a 2011 Science article showing that a bike with a negative trail – a steering axis pointing behind the wheel – could maintain balance with proper weight distribution. All of which goes to show that while biking is largely a safe activity, there remains a glaring question mark at the heart of a $54 billion global industry.
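For readers curious about what “trail” actually measures: it is the distance by which the front tire’s contact point sits behind the spot where the steering axis meets the ground. One common way to express it, assuming a wheel radius R_w, a head angle A_h measured from the horizontal, and a fork offset O_f (the symbols here are illustrative, not from the article), is:

\[
\text{trail} = \frac{R_w \cos A_h - O_f}{\sin A_h}
\]

A positive trail places the contact point behind the steering axis (the self-correcting shopping-cart geometry), while the 2011 experiment showed a bike could remain stable even when that value was negative.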
Maybe you’ve seen flocks of birds flying overhead to mark the changing of seasons or read about salmon fighting upstream to return to their birthplaces, but exactly how do these animals navigate across such long distances and ever-shifting conditions? In some cases, there are strong olfactory senses in play; a salmon can detect a drop of water from its natal source in 250 gallons of seawater, helping to guide the way “home.” But the possibilities get even stranger, as scientists are exploring the concept that light-sensitive proteins in the retinas of birds and other animals create chemical reactions that allow them to “read” the Earth’s magnetic field. It may seem far-fetched to think that birds rely on principles of quantum mechanics, but there may be no better explanation for how, say, the Arctic tern stays on target while annually migrating more than 40,000 miles from pole to pole.
Given that we can pinpoint the health benefits and problems associated with proper and insufficient amounts of sleep, it’s baffling that we still don’t fully understand what this all-important restorative state does for the body. Older theories followed the notion that sleep helps people conserve energy while keeping them away from the dangers of the night, while more recent research explores how sleep contributes to the elimination of toxic neural buildups and promotes plasticity, the brain’s ability to adjust and reorganize from its experiences. Other experts hope to find answers by studying glial cells, which are abundant in the central nervous system and possibly involved with regulating when we nod off and awaken. And if these diligent researchers ever do crack the code of what sleep does for us, maybe it will shed light on related nighttime mysteries — like why we dream.
Humans have long sought to control time. While it’s generally considered impossible to bend time to our will, there are two days of the year when we have a little sway over the clock. Daylight saving time — officially beginning on the second Sunday of March and ending on the first Sunday in November — is loathed as much as it is loved, but these six facts just might help you see the time warp in a whole new light.
Ben Franklin is often credited as the inventor of daylight saving time — after all, the concept seems on-brand for the founding father who once championed early waking and bedtimes as the key to success. It’s a myth that Franklin invented daylight saving time, though he did once suggest a similar idea. In 1784, Franklin (then living in France) wrote a letter to the Journal de Paris, suggesting that French citizens could conserve candles and money by syncing their schedules with the sun. Franklin’s proposal — wittily written and considered a joke by many historians — didn’t recommend adjusting clocks; the idea was to start and end the day with the sun’s rising and setting, regardless of the actual time.
Franklin’s proposal didn’t get far, but nearly 100 years later, another science-minded thinker devised the daylight saving time strategy we’re familiar with today. George Vernon Hudson, a postal worker and entomologist living in New Zealand, presented the basics of the idea in 1895. Hudson’s version moved clocks ahead two hours in the spring in an effort to extend daylight hours; for him, the biggest benefit of a seasonal time shift would be longer days in which he could hunt for bugs after his post office duties were finished. Hudson’s proposal was initially ridiculed, but three decades later, in 1927, New Zealand’s Parliament gave daylight saving time a trial run, and the Royal Society of New Zealand even awarded Hudson a medal for his ingenuity.
Only 35% of Countries Adjust Their Clocks Seasonally
Germany paved the way for daylight saving time in 1916, becoming the first country to enact Hudson’s idea as an energy-saving move in the midst of World War I. While many countries followed suit — mostly in North America, Europe, parts of the Middle East, and New Zealand — some of the world’s 195 countries didn’t. In fact, around the globe, it’s now more common to not make clock adjustments, especially in countries close to the equator, which don’t experience major seasonal changes in day length. In total, around 70 countries observe the time shift, though even in the U.S., where daylight saving time has been a standard practice mandated by federal law since 1966, two states don’t participate: Arizona and Hawaii.
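As a rough check of the headline figure, taking 195 as the total number of countries:

\[
\frac{70}{195} \approx 0.36
\]

or a little over a third, in line with the roughly 35% cited in the heading above.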
The First U.S. Daylight Saving Time Was a Disaster
Marching into World War I, the U.S. adopted the European strategy of rationing energy by adjusting civilian schedules. With more daylight hours, homes and businesses could somewhat reduce their reliance on electricity and other fuels, redirecting them instead to the war effort. But in the early part of the 20th century, timekeeping across the country was far from consistent, so in March 1918, President Woodrow Wilson signed legislation that created the country’s five time zones. That same month, on Easter Sunday, daylight saving time went into effect for the first time ever — though the government’s efforts to create consistent clocks were initially a mess. Holiday celebrations were thrown off by the time changes, and Americans lashed out with a variety of complaints, believing the time change diminished attendance at religious services, reduced early morning recreation, and provided too much daylight, which supposedly destroyed landscaping.
The time shift was temporary, repealed in 1919 at the war’s end, and wouldn’t be seen again on the federal level until World War II. However, some cities and states picked up the idea, adjusting their clocks in spring and fall as they saw fit.
In the U.S., Daylight Saving Time Once Had a Different Nickname
Because of its association with energy rationing during World War I, daylight saving time originally had a different nickname: war time. When the U.S. became involved in World War II nearly two decades later, war time returned, and was in place year-round from February 1942 until September 1945, when it was ditched at the war’s end.
The time change earned its modern title in 1966, when Congress passed the Uniform Time Act, which further standardized time zones and set consistent start and end dates for daylight saving time, among other things. Many countries that follow daylight saving time use the same terminology, though the seasonal time change goes by different labels in some regions: In the U.K., Brits have British Summer Time (BST).
Daylight saving lore has it that the spring and fall clock changes provide the biggest benefit for farms, though if cows could speak, they might say otherwise. Farmers — who supposedly benefit from the extra hour of light in the afternoon — have heavily lobbied against the time change since it was first enacted in 1918. That’s partially because it’s confusing for livestock such as dairy cows and goats, throwing off their feeding and milking schedules. Some farmers say the loss of morning light also makes it more difficult to complete necessary chores early in the day, and impacts how they harvest and move crops to market.
While farmers have pushed to drop daylight saving time, some industries — like the golf industry — have campaigned to keep it for their benefit. The extra daylight is known for bringing more putters to the courses, generating millions in golf gear sales and game fees. Other big business supporters include the barbecue industry (which sells more grills and charcoal in months with longer daylight hours) and candy companies (benefiting from longer trick-or-treating hours on Halloween).
President Woodrow Wilson knew that rolling clocks forward and backward twice a year would be somewhat disruptive, so his 1918 wartime plan tried to be minimally bothersome. Instead of shifting clocks at midnight in the spring and fall, Wilson chose 2 a.m., a time when no passenger trains were running in New York City. While the shift did affect freight trains, there weren’t as many as there are today, so daylight saving time was considered a relatively easy workaround for the railroads. The 2 a.m. adjustment is still considered the least troublesome time today, since most bars and restaurants are closed and the vast majority of people are at home, asleep — either relishing or begrudgingly accepting their adjusted bedtime schedules.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
We all have them, yet our bodies are often a source of mystery. Some biological functions truly are perplexing — we still don’t know exactly why we dream or why we cry when we’re emotional, for example. But others have given rise to persistent myths, despite decades of debunking by scientists. Here are just a few common beliefs about the human body that you might be surprised aren’t true.
In the fourth century BCE, Aristotle identified five — and only five — senses: sight, smell, taste, hearing, and touch. Over the past 2,000 years, though, we’ve learned much more about our ability to perceive and process our surroundings. Some researchers have suggested we have 21 or more senses based on the types of receptors contained in our sensory organs. For example, we have mechanoreception, which includes the sense of balance; perception of hot and cold temperatures; receptors to taste bitter, sweet, salty, sour, and umami flavors; and even interoceptors for sensing blood pressure and lung inflation.
Humans are born with about 100 billion brain cells. Until the 1990s, most scientists believed 100 billion was all we’d ever have. Growing new neurons would interrupt communication among our existing brain cells and short-circuit the whole system — or so the theory went. Then, a 1998 study found evidence that humans could generate new cells in the brain’s hippocampus, an area associated with learning and memory. More recent studies have largely supported the idea, and suggest that we might be able to make up to 1,500 neurons a day. Though research continues, neurogenesis is good news: Growing fresh neurons may make our brains more resilient against Alzheimer’s, depression, anxiety, and other disorders.
Myth: Speaking of Brains, We Use Only 10% of Them at a Time
Maybe you’ve heard peppy TED Talk speakers say that our brains have limitless potential… if only we could employ them to their fullest extent. They might have been referring to the myth that we use just 10% of our brain power at a given time. This old chestnut probably grew from a 1907 article for the journal Science by William James, one of the founders of modern psychology. “Compared with what we ought to be, we are only half awake… we are making use of only a small part of our possible mental and physical resources,” he wrote. Dale Carnegie cited James in How to Win Friends and Influence People, his 1936 self-help bestseller. Eventually, someone — it isn’t clear who — claimed we were ignoring 90% of our mental powers. But there’s no scientific basis for the belief.
Myth: You Need to Drink Eight Glasses of Water a Day to Stay Hydrated
Mild dehydration can make you sluggish and dizzy, while a serious case can lead to organ damage. But there’s little evidence to suggest that eight 8-ounce glasses of water per day is the ideal amount to maintain health. This well-known myth likely arose from a 1945 government health bulletin, which claimed that “a suitable allowance of water for adults is usually 2.5 liters daily … Most of this quantity is contained in prepared foods.” Over time, people overlooked that second sentence and assumed the guidelines called for 2.5 liters per day in addition to water contained in foods. Today, dietitians say that the optimum amount of water intake per day varies from person to person, but that most of it should come from food sources like fresh fruits and vegetables.
You’ve probably heard adults admonish kids to wait a half-hour (or longer) after a meal before jumping back in the pool. Generations of people have believed that, soon after eating, blood is diverted from the limbs toward the gut to aid digestion. Swimming too soon would supposedly cause your stomach or extremities to cramp up, which could lead to drowning. In 2011, the Red Cross published a thorough investigation of the scientific literature and found no link between eating and drowning, concluding that “food intake restrictions prior to swimming are unnecessary.” On the other hand, you may want to wait for your lunch to settle before trying that backflip off the diving board.
Low temperatures alone can’t make you sick — you’re going to need a virus for that. The pervasive sense that cold weather causes colds stems from a few coinciding factors. One, viruses that cause the common cold are more active and resilient in winter in many parts of the world. Two, cold temperatures outside keep more people inside, where you might pick up someone else’s virus (and indoor heating can dry out your sinuses, making it easier for viruses to invade). Three, cold weather may also slow down your normal immune responses. All of these scenarios add up to a greater risk of being exposed to a virus that has a better chance of making you sick.
And while we’re on the topic, the myth that you should feed a cold (and starve a fever) probably arose in the medieval era, when people “treated” colds by raising a patient’s body temperature with hot meals. People also believed that fevers could be “cooled down” by depriving the body of food. In fact, bodies fighting both kinds of illnesses need proper nourishment, as well as rest and fluids.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
It’s easy to take our bodies for granted, but they’re home to an array of wonders. Consider, for example, that our eyes can see distant galaxies, our hair contains traces of gold, and our blood vessels could stretch for 60,000 miles if laid end to end — that’s enough to go around the world twice. Below, we’ve gathered some of our most fascinating facts about the human body from around the website. They just might help you see your anatomy in a whole new, and more wondrous, light.
Humans are born with about 100 billion brain cells. Until the 1990s, most scientists believed 100 billion was all we’d ever have. Growing new neurons would interrupt communication among our existing brain cells and short-circuit the whole system — or so the theory went. Then, a 1998 study found evidence that humans could generate new cells in the brain’s hippocampus, an area associated with learning and memory. More recent studies have largely supported the idea, and suggest that we might be able to make up to 1,500 neurons a day. Though research continues, neurogenesis is good news: Growing fresh neurons may make our brains more resilient against Alzheimer’s, depression, anxiety, and other disorders.
While most people dream in full color, around 12% of the population is tuned to Turner Classic Movies (so to speak), and often experiences dreams in only black and white.
The analogy to television is an apt one, as researchers discovered in 2008 that people under the age of 25 almost never dreamed in monochrome, while members of the boomer generation and older had dreams devoid of color roughly a quarter of the time. Although it is difficult to prove definitively that TV is to blame, the number of people who reportedly dream in grayscale has slowly fallen over the decades.
Gravity is an essential force on Earth: It keeps the planet in orbit at a safe and comfortable distance from the sun, and even holds our atmosphere in place. It does have a downside, however: It weighs down the human body, making us a tiny bit shorter by the end of the day. From the moment we climb out of bed in the morning, gravitational forces push down on us, applying downward pressure on our joints, compressing our spines, and causing our organs to settle. All that strain adds up, enough to shrink a body by 1 centimeter. Gravity is at work whether we’re sitting or standing, but at bedtime, our bodies get a slight reprieve as lying down redirects the force. Sleeping horizontally gives our spines and joints time to decompress and gain back the height lost during the day, making us once again slightly taller by morning.
Much of the work of perceiving the world around us actually takes place in the brain. In a way, our eyes act as a camera, and our brains as a kind of “darkroom” that develops that information into what we call our vision. One of the most perplexing aspects of this relationship is that the images projected onto our retinas are actually upside-down. Because the cornea — the transparent part of the eye covering the iris and pupil — is a convex lens, light entering the eye is flipped upside down. It’s the brain’s job to translate this inverted information, and to merge the two 2D images, one from each eye, into a single cohesive 3D image.
Gold is present in low levels throughout the Earth; it’s been found on every continent except Antarctica, as well as in the planet’s core, the oceans, plants, and in humans, too. The average human body of about 150 pounds is said to contain about 0.2 milligrams of gold, which we excrete through our skin and hair. Babies less than 3 months old tend to have more gold in their manes than older people, thanks to the precious metal being passed along in human breast milk. And while no one’s suggesting we should mine the gold in hair or breast milk (as far as we know), researchers are studying whether gold — and other metals — might be recovered from human waste.
By the time most of us reach age 20 or so, the bones in our body are pretty much done growing. The growth plates that caused us to put on inches in our youth are now hardened bone, and in fact, adults tend to drop an inch or two in height as worn-out cartilage causes our spines to shrink over time. However, there are a few bones that buck this biological trend. Skulls, for example, never fully stop growing, and the bones also shift as we age. A 2008 study from Duke University determined that as we grow older, the forehead moves forward, while cheek bones tend to move backward. As the skull tilts forward, overlying skin droops and sags.
The skull isn’t the only bone that has a positive correlation with age. Human hips also continue to widen as the decades pass, meaning those extra inches aren’t only due to a loss of youthful metabolism. In 2011, researchers from the University of North Carolina School of Medicine discovered that hips continue to grow well into our 70s, and found that an average 79-year-old’s pelvic width was 1 inch wider than an average 20-year-old’s.
Human eyes are entirely unique; just like fingerprints, no two sets are alike. But some genetic anomalies create especially uncommon “windows” to the world — like gray eyes. Eye experts once believed that human eyes could appear in only three colors: brown, blue, and green, sometimes with hazel or amber added. More recently, the ashy hue that was once lumped into the blue category has been regrouped as its own, albeit rarely seen, color. Brown-eyed folks are in good company, with up to 80% of the global population sporting the shade, while blue eyes are the second most common hue. Traditionally, green was considered the least common eye color, though researchers now say gray is the rarest, with less than 1% of the population seeing through steel-colored eyes.
You Breathe Primarily Out of One Nostril at a Time
The human nose can smell up to 1 trillion odors, trap harmful debris in the air before it enters your lungs, and affect your sex life. But arguably its most important job is to condition the air you breathe before that air enters your respiratory tract. This means warming and humidifying the air before it passes to your throat and beyond. To do this, the nose undergoes a nasal cycle in which one nostril sucks in the majority of the air while the other nostril takes in the remaining portion. A few hours later (on average), the nostrils switch roles. This cycle is regulated by the body’s autonomic nervous system, which swells or deflates erectile tissue found in the nose.
Although we don’t notice this switch throughout the day, if you cover your nostrils with your thumb one at a time, you’ll likely observe that air flow through one is significantly higher than the other. This is also why one nostril tends to be more congested than the other when you have a cold.
The Human Eye Can See Objects Millions of Miles Away
While the majority of us wouldn’t consider our vision to be extraordinary, the human eye can see much farther than most of us realize. That’s because our ability to perceive an object is based not only on its size and proximity, but also on the brightness of the source. Practically speaking, our sight is hindered by factors such as atmospheric conditions and the Earth’s curvature, which creates the dropoff point of the horizon just 3 miles away. However, a trip outside on a clear night reveals the true power of our vision, since most of us are able to make out the faint haze of the Andromeda galaxy some 2.6 million light-years into space.
Bioluminescence, the strange biology that causes certain creatures to glow, is usually found at the darkest depths of the ocean where the sun’s light doesn’t reach. While these light-emitting animals seem otherworldly, the trait is actually pretty common — in fact, you’re probably glowing right now. According to researchers at the Tohoku Institute of Technology in Japan, humans have their own bioluminescence, but at levels 1,000 times too faint for our eyes to detect. This subtle human light show, viewable thanks to ultra-sensitive cameras, is tied to our metabolism. Free radicals produced as part of our cells’ respiration interact with lipids and proteins in our bodies, and if they come in contact with fluorescent compounds known as fluorophores, they can produce photons of light. This glow is mostly concentrated around our cheeks, forehead, and neck, and is most pronounced during the early afternoon hours, when our metabolism is at its busiest.
The Space Between Your Eyebrows Is Called the “Glabella”
You know hands, shoulders, knees, and toes (knees and toes), but has anyone ever introduced you to the glabella? This isn’t some hidden-away body part like the back of the elbow or something spleen-adjacent — it’s smack dab in the middle of your face. Latin for “smooth, hairless, bald,” the glabella is the small patch of skull nestled between your two superciliary arches (also known as your eyebrow ridges). Many people know of the glabella because of the wrinkles, or “glabellar lines,” that can appear in the area.
We Have Wisdom Teeth Because Our Ancestors Had to Chew More
Early humans were hunter-gatherers who survived on leaves, roots, meat, and nuts — things that required a lot of crushing ability. The more grinding teeth you have, the easier it is to eat tough foods. As humans evolved, they began to cook their food, making it softer and easier to chew. Having three full sets of molars became unnecessary.
Early humans also had larger jaws than we do today, which were able to support more teeth. Over time, as the need for super-powerful jaws decreased, human jaws got smaller. But the number of teeth stayed the same. That’s why today, many people need to get their wisdom teeth removed in order to create more space. And because wisdom teeth aren’t necessary for modern humans, they may someday disappear from our species altogether.
Researchers estimate that some 300 million people around the world are colorblind, most of them male. On the opposite end of the spectrum are those with an exceedingly rare genetic condition that allows them to see nearly 100 million colors — or 100 times as many as the rest of us. It’s called tetrachromacy, or “super vision,” and it’s the result of having four types of cone cells in the retina rather than the usual three. (Cones help our eyes detect light and are key to color vision.) Because of the way the condition is passed down via the X chromosome, the mutation occurs exclusively in women.
Maybe you’ve heard peppy TED Talk speakers say that our brains have limitless potential… if only we could employ them to their fullest extent. They might have been referring to the myth that we use just 10% of our brain power at a given time. This old chestnut probably grew from a 1907 article for the journal Science by William James, one of the founders of modern psychology. “Compared with what we ought to be, we are only half awake… we are making use of only a small part of our possible mental and physical resources,” he wrote. Dale Carnegie cited James in How to Win Friends and Influence People, his 1936 self-help bestseller. Eventually, someone — it isn’t clear who — claimed we were ignoring 90% of our mental powers. But there’s no scientific basis for the belief.
Babies Are Born With Almost 100 More Bones Than Adults
Babies pack a lot of bones in their tiny bodies — around 300, in fact, which is nearly 100 more than adult humans have. The reason for this is biologically genius: These extra bones, many of which are made entirely or partly of cartilage, help babies remain flexible in the womb and (most crucially) at birth, making it easier for them to pass through the birth canal. As a baby grows into childhood and eventually early adulthood, the cartilage ossifies while other bones fuse together. This explains the “soft spots” in a baby’s skull, where the bones have yet to fuse completely.
If you’ve ever seen someone track their pulse (in real life or on a crime drama), you’ll notice that the index and middle fingers are always pressed on the neck’s carotid artery, which is responsible for transporting blood to the brain. There’s a reason why doctors (and actors who play doctors on TV) use these fingers and not, say, their thumbs. While your thumb is good for many things, taking your pulse isn’t one of them. Unlike the other four digits, the thumb has its own exclusive artery, the princeps pollicis, which makes it biologically unreliable as a pulse reader — because you’ll feel it pulse instead of the artery in your neck.
We’ve all heard that we have five senses: sight, smell, hearing, touch, and taste. That idea goes back to the Greek philosopher Aristotle — but it’s wrong. Modern science has identified as many as 32 senses, by looking at receptors in our bodies with the job of receiving and conveying specific information. Senses you might not know you have include your vestibular sense, which controls balance and orientation; proprioception, which governs how our bodies occupy space; thermoception, which monitors temperature; nociception, our sense of pain; and many more.
Cilantro Tastes Like Soap to Some People Because of Their Genes
Many gourmands enjoy topping their fish, salads, and soups with a smattering of cilantro, while the herb makes others feel like they’re biting into a bar of Irish Spring. The reason appears to be a matter of genetics. One 2012 study showed that people equipped with certain olfactory receptor genes are more prone to detecting cilantro’s aldehydes, compounds also commonly found in household cleaning agents and perfumes. While the share of the population that suffers this fate tops out at about 20%, the resulting taste is apparently awful enough to spark passionate responses of the sort found on Facebook’s I Hate Cilantro page, which has more than 26,000 likes at the time of writing.
All humans demonstrate the same expressions for emotions the world over, according to body language expert David Matsumoto. That’s because we all generally have the same facial muscles and structure, regardless of age, sex, ethnicity, or culture. However, culture helps determine what emotions are expressed when — and how those expressions are perceived.
There are some surprising lengths packed inside the human body. The small intestine, for example, could stretch 22 feet end to end (though hopefully it never has to). Not to be outdone, our nerves could stretch 37 miles if laid end to end. However, none of this compares to our circulatory system. According to the British Heart Foundation, the blood vessels in an adult human could stretch an astonishing 60,000 miles — farther than the distance needed to circumnavigate the globe twice. Capillaries, which carry blood between arteries and veins, make up about 80% of that length.
There are 206 bones in the average adult human body, and our hands take up the lion’s share. Each hand is home to 27 bones, along with 34 muscles and 123 ligaments. Some experts estimate that a quarter of the motor cortex, the part of the brain responsible for voluntary movement, is devoted to the manipulation of our hands alone.
Although hands are impressive structures, they only just beat our feet by one bone. Because Homo sapiens’ primate ancestors walked on all fours, human hands and feet developed in similar ways. In fact, almost every bone in the palm is arranged in a pattern similar to the metatarsals in the foot. The only exception is a bone located at the edge of the wrist called the pisiform, which attaches various ligaments and tendons.
After birth, a baby mostly sees in black and white — and that’s only the beginning of its problems. A newborn’s vision is also incredibly fuzzy, and limited to around 8 to 12 inches from its face during the first few weeks of life. Whereas average human sight is considered 20/20, it’s estimated that a newborn’s vision lies somewhere between 20/200 and 20/400. Because red has the longest wavelength (at about 700 nanometers), the color doesn’t scatter easily, and it’s the first hue a baby’s limited visual range can detect. Fortunately, most babies attain most normal human visual faculties within a year of birth.
Your “funny bone,” named as such for its location near the humerus bone — “humorous,” get it? — is not really a bone at all. Rather, it’s part of the ulnar nerve, which runs from your neck all the way to your hand. Nerves are usually protected by bone, muscle, and fat, so they can perform their bioelectrical functions undisturbed, but a small part of the ulnar nerve in the back of the elbow is a little more exposed. There, the nerve is protected only by a tunnel of tissue, known as the cubital tunnel, so when you hit your “funny bone,” the ulnar nerve presses against the medial epicondyle (one of the knobby ends of the humerus bone), which in turn sends a painful sensation throughout your lower arm and hand. And because the nerve gives feeling to the pinky and ring fingers, those two digits may feel particularly sensitive compared to your other three fingers.
Our tongues, like our fingerprints, are specific to each individual. That’s right — people have tongue prints, which vary from one person to another due to both shape and texture. And perhaps surprisingly, the organ has been gaining some popularity as a method for biometric authentication. Where fingerprints can be altered, eyes affected by astigmatisms or cataracts, and voices changed just by the all-too-common cold, the human tongue is relatively protected from external factors. Sticking out one’s tongue for a print also involves a layer of conscious control and consent that goes beyond what’s required for retinal scans or even fingerprinting, which could make it a more appealing biometric tool. In fact, these “lingual impressions” may be so advantageous over other forms of authentication that some researchers have started investigating the idea of a tongue print database, using high-resolution digital cameras to record every ridge, line, and contour of that muscular organ in our mouths.
The involuntary spasmodic interruptions known as hiccups usually last only a few minutes. Then there’s the strange case of Charles Osborne, who was afflicted with a continuous case of hiccups for 68 years — recognized by Guinness World Records as the longest case of hiccups in history.
Osborne’s story began on June 13, 1922, when he slipped and fell. His doctor later said he had popped a pin-sized blood vessel in his brain, and theorized that Osborne must have damaged the tiny area of the brain that controls and inhibits hiccups. Osborne’s diaphragm spasmed 20 to 40 times a minute, on average, during his waking hours — meaning he hiccuped roughly 430 million times throughout his life. To cope with this never-before-seen disorder, Osborne learned breathing techniques that effectively masked his constant hiccuping. Although he traveled the world in search of a cure — even offering $10,000 to anyone who could find one — the best he could do was cope with the affliction. Finally, in 1990, his diaphragm suddenly ended its 68-year-long spasmodic episode on its own. Osborne died less than a year later, but he was at least able to experience the final days of his life sans hiccups.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
Advertisement
top picks from the optimism network
Interesting Facts is part of Optimism, which publishes content that uplifts, informs, and inspires.
The color green is intimately tied to the human experience. The hue fills our world as the color of nature, and its particular wavelength has a fascinating relationship with our visual sense. The color can represent positive notions (peace and fertility) as well as negative ones (greed or envy). Although it’s considered a secondary color, a mix of the primary colors yellow and blue, green is perhaps the most important hue in the visual spectrum — and these five mind-blowing facts explain why.
Human Eyes Are Most Sensitive to the Green Wavelength of Light
Electromagnetic radiation comes in a variety of types, including radio waves, gamma rays, and visible light. The human eye can perceive wavelengths of roughly 380 to 740 nanometers (nm), also known as the visible light range. The wavelength determines the color we see: For example, at 400 nm our eyes perceive violet (hence the name “ultraviolet” for wavelengths just below 400 nm), whereas at 700 nm our eyes glimpse red (but can’t see the “infrared” wavelengths just beyond it). In the middle of this spectrum of visible light is the color green, which occupies the range between 520 and 565 nm; the eye’s sensitivity peaks at 555 nm. Because this is right in the middle of our visual range, our eyes are particularly sensitive to the color under normal lighting conditions, which means we can more readily differentiate among different shades of green. Scientists have also found that the color green positively affects our mood in part because our visual system doesn’t strain to perceive the color — which allows our nervous system to relax.
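To make those numbers concrete, here is a minimal Python sketch, an illustration rather than a colorimetry tool, that checks whether a given wavelength falls inside the 380 to 740 nm visible range and the 520 to 565 nm green band quoted above, and reports how far it sits from the 555 nm sensitivity peak. The constant and function names are our own, used only for this example.

```python
# Figures taken from the paragraph above: visible light spans roughly 380-740 nm,
# the green band runs about 520-565 nm, and the eye's sensitivity peaks near 555 nm.
VISIBLE_RANGE_NM = (380, 740)
GREEN_BAND_NM = (520, 565)
PEAK_SENSITIVITY_NM = 555

def describe_wavelength(nm: float) -> str:
    """Report where a wavelength falls relative to the visible and green ranges."""
    lo, hi = VISIBLE_RANGE_NM
    if not lo <= nm <= hi:
        return f"{nm} nm is outside the visible range ({lo}-{hi} nm)."
    g_lo, g_hi = GREEN_BAND_NM
    offset = abs(nm - PEAK_SENSITIVITY_NM)
    if g_lo <= nm <= g_hi:
        return f"{nm} nm is green, {offset} nm from the {PEAK_SENSITIVITY_NM} nm sensitivity peak."
    return f"{nm} nm is visible but outside the green band, {offset} nm from the peak."

if __name__ == "__main__":
    for wavelength in (400, 555, 700, 900):
        print(describe_wavelength(wavelength))
```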
The Color Green Has Meant Many Things Throughout History
Today, the color green is associated with a variety of feelings and social movements. Turning a shade of green can indicate nausea, but you can also become “green” with envy. Green is closely associated with money and capitalism, while also embodying aspects of nature and the environmentalist, or “Green,” movement.
However, these cultural definitions have changed over millennia, and have different associations in different parts of the world. For example, in ancient Egypt, green was often linked with both vegetation and death, and Osiris (god of fertility and death) was often depicted as having green skin. These days, green is prevalent throughout the Muslim world — adorning the flags of Muslim-majority nations such as Iran and Saudi Arabia — because it was supposedly the prophet Mohammad’s favorite color. Many African nations also include the color green in their flags to represent the natural wealth of their continent, and Confucius believed green (more specifically jade) represented 11 separate virtues, including benevolence, music, and intelligence.
Hollywood Uses Green Screens Because of Human Skin Tones
If you’ve seen any big-budget Hollywood film, it probably used some variety of green screen-enabled special effects. In fact, some version of green screen technology, also known as “chroma keying,” has been around since the early days of film. The reason why screens are green is actually pretty simple — human skin is not green. When a camera analyzes chrominance, or color information, it can easily separate green (or blue) from the rest of the shot so that a video backdrop can be inserted.
The Color Green May Have Killed Napoleon Bonaparte
In 1775, German-Swedish chemist Carl Wilhelm Scheele made a green-hued pigment that eventually bore his name — Scheele’s green. Unfortunately, the pigment was extremely dangerous, since it was made with arsenic. However, its rich hue ignited a craze for green, and the pigment was used in wallpaper, clothing, and even children’s toys. In fact, some historians believe that Napoleon Bonaparte died from the Scheele’s green pigment embedded in the wallpaper of his bedroom on the island of St. Helena.
However, that wasn’t the end of green’s deadly reputation. Decades later, impressionist painters — such as Paul Cézanne and Claude Monet — used a green pigment called Paris green that was highly toxic, if less dangerous than Scheele’s green. Experts suggest that the chemical could have contributed to Cézanne’s diabetes and Monet’s blindness.
No One Is Sure Why the Backstage Room Is Called a “Green Room”
One early reference to a “green room” in the sense of a waiting room appears in The Diary of Samuel Pepys, the famed journal kept by a civil servant in 1660s London. Pepys mentions a “green room” when going to meet the royal family — likely a reference to the color of the walls. A “green room” was then tied to the theater in English playwright Thomas Shadwell’s 1678 comedy A True Widow, which includes the line: “Selfish, this Evening, in a green Room, behind the Scenes.” However, Shadwell doesn’t mention why it was called a green room. One notable London theater did have a dressing room covered in green fabric, but other theories behind the term reference actors going “green” because of nervousness, amateur or young (aka “green”) actors, or a place where early actors literally waited “on the green” lawns of outdoor theaters — among many other ideas. It’s possible we’ll never know the origin of the phrase for sure.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
The human body contains around 600 muscles, more than 200 bones, and all sorts of tendons, fascia, and organs — but some of them are pretty much obsolete, even if they make for decent party tricks. A few body parts have even started to disappear already, and are only present in certain segments of the population. In extreme cases, as people who have had appendectomies or wisdom tooth extractions can attest, it seems like some of these body parts exist only to hurt us.
Are you missing a mostly useless arm muscle? What muscles are key for dogs, but not particularly handy for us? These seven body parts are pretty much just along for the ride.
The appendix, a small pouch attached to the large intestine, is perhaps the best-known useless organ, doing little except occasionally getting infected. However, it turns out that it might not be entirely useless. Scientific theories have been floating around since 2007 that the appendix might actually serve as a “safe house” for beneficial gut bacteria, storing them so they can repopulate the rest of the gut if its microbes get wiped out by illness (or, in modern times, antibiotics).
If this turns out to be accurate, it’s still not a particularly important organ, and if it gets severely infected, you still need to get it removed. Don’t worry: Hundreds of thousands of people get them taken out every year and are doing just fine.
Humans don’t need tails, but our ancestors sure did — and tailbones, also known as coccyxes, are the last remaining part of them, consisting of three to five small vertebrae at the very base of the spine. The coccyx is not a functional tail, but it is woven in with the ligaments, tendons, and muscles in the area. Occasionally, it effectively disappears on its own by fusing with the sacrum, another lower back bone. In cases of extreme pain that don’t resolve with any other treatment, people can have the coccyx surgically removed, but that’s unnecessary in the vast majority of cases. Occasionally a baby is born with an actual tail — and human embryos generally form with a tail that later regresses as it develops into the tailbone — but it’s extremely rare.
Wisdom teeth, a third set of molars, have made dental surgery a rite of passage. For those who get them — many people don’t — they usually start emerging between the ages of 17 and 21. Often, there’s no room in the jaw, and the teeth end up trapped. When that happens, they need to be surgically extracted. Occasionally they grow in without incident and just become extra teeth.
It’s a lot of trouble for a set of teeth that we don’t even need. One theory is that our ancestors, who ate harder-to-chew things and didn’t have dentists, needed them as backup teeth. Modern science has gotten pretty good at just replacing teeth as they fall out, but wisdom teeth could still replace damaged molars in a pinch.
If you have a pet dog or cat, you’ve probably noticed their ears snapping to attention at an interesting or startling noise. Humans still have the muscles for this and, likely, the brain circuits associated with them. In one study, researchers observed tiny, involuntary ear-muscle movements in the direction of interesting sounds. For one part of the study, they had participants read a boring text while attention-grabbing sounds like crying babies and footsteps played nearby. Next, they had participants try to listen to a podcast while a second podcast played in the background. Those ear muscles fired up in both cases — they’re just obsolete for modern human beings. Some humans can still wiggle their ears, which does serve one purpose: It’s a cool party trick.
The arrectores pilorum, the tiny muscles that pull our hair follicles upright and give us goosebumps, are another holdover. Some emerging research suggests they may have a role in combating hair loss, and without them we wouldn’t have a name for the iconic children’s horror series Goosebumps. But as far as basic survival goes, these muscles are pretty much useless.
Most animals have a third eyelid, also called the nictitating membrane, which serves as a kind of windshield wiper that distributes tears and clears debris from the eye. Humans and some apes lost this trait over the course of evolution, but we still have a tiny vestigial remnant in the inner corners of our eyes: a bit of eye tissue just inside that fleshy pink bump in the corner. In exceedingly rare cases — only two have ever been reported — humans can have a more developed nictitating membrane that covers a larger portion of the eye.
So why did we lose ours? One theory is that, unlike animals that still have them, we’re not typically sticking our faces directly into bushes or other animals to forage for food, so we have less debris to push out of our eyes.
The palmaris longus is a muscle stretching the length of the forearm that’s evolving away before our very eyes — literally, because it’s visible when you hold your hand and wrist a certain way, you can tell on sight whether you still have yours. It’s already missing in a significant portion of the population, and studies around the world have found it absent in anywhere from 1.5% to 63.9% of participants.
The muscle helps with wrist flexion in those who still have it, but it’s getting progressively weaker as other muscles take over its duties. If you don’t have one, you can still do all the same things as someone who does have it. While it’s unnecessary as is, the palmaris longus is pretty useful as a donor tendon for plastic surgery.
Sarah Anne Lloyd
Writer
Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.
Whether bright orange, auburn, or more of a strawberry blond, red hair is a real eye-catcher. Celtic countries, like Scotland and Ireland, are most commonly associated with red hair, but fiery locks can pop up in people of multiple ethnicities around the world. Still, natural redheads are relatively rare — only one or two out of every 100 people can claim this distinction.
What makes red hair red? In what other ways are redheads unique? Which famous people are secret gingers… and which famous redheads are secret brunettes? Read on for 12 fabulous facts about carrot tops.
The difference between red hair and hair of other hues extends beyond just its color. On average, redheads have fewer hair strands overall compared to blonds and brunettes. Blonds, for example, have an average of around 150,000 strands of hair on their heads, whereas redheads have only 90,000 or so. Luckily, red hair tends to be coarser and thicker, so the discrepancy is not easily noticeable. Redheads also remain distinct from others as they age because red hair doesn’t go gray — instead, it turns silver or white.
Our genes make up a lot of who we are, including our looks. When people have red hair, it’s typically the MC1R gene that’s responsible.
The color of your hair comes from two possible pigments. Eumelanin makes your hair light or dark; people with black hair have a lot of it, while blonds don’t. The second pigment is pheomelanin, which is a redder pigment. Usually people don’t have a lot of the latter, because the MC1R gene converts pheomelanin into eumelanin. Redheads have a mutation in their MC1R gene that allows the pheomelanin — and the bright red color that comes with it — to flow free.
In order for someone to inherit red hair, both of their parents need a mutated MC1R gene — and even then, there’s only about a 1-in-4 chance of redheadedness. Non-redheads can be redheaded gene carriers and not know it, although there are some ways to guess.
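To see where that 1-in-4 figure comes from, here is a minimal Python sketch of the carrier-meets-carrier scenario described above. It simply enumerates the four equally likely allele combinations two carrier parents can pass on; the allele labels and variable names are illustrative, not formal genetic notation.

```python
# Each carrier parent has one standard MC1R copy ("R") and one mutated copy ("r").
# A child gets one allele from each parent; only the "rr" combination is
# associated with red hair.
from itertools import product

parent_1 = ("R", "r")  # carrier parent
parent_2 = ("R", "r")  # carrier parent

outcomes = list(product(parent_1, parent_2))  # four equally likely combinations
red_haired = [pair for pair in outcomes if pair == ("r", "r")]

print(f"{len(red_haired)} of {len(outcomes)} combinations carry two mutated copies "
      f"-> {len(red_haired) / len(outcomes):.0%} chance of red hair")
# Prints: 1 of 4 combinations carry two mutated copies -> 25% chance of red hair
```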
Auld Reekie, otherwise known as Edinburgh, likely has the highest concentration of redheads in the world. A DNA analysis conducted in 2014 discovered that 40% of people in the southeast region of Scotland, which contains the Scottish capital of Edinburgh, had variants of the red-haired gene. (Notably, they didn’t necessarily have red hair themselves, since the gene is recessive.) That percentage is higher than in any other region of Scotland, or the world. Of course, the area known as Scotland today has long been associated with red hair (though it’s believed the mutation first took place in central Asia).
The Two Most Powerful Tudor Monarchs Were Redheads
The Tudor dynasty, which ruled England from 1485 until the death of Queen Elizabeth I in 1603, sported a whole family of redheads, chief among them Henry VIII and the “Virgin Queen” herself.
King Henry was described as strong, broad-shouldered, and possessing golden-red hair. His daughter with Anne Boleyn (also sometimes described as having auburn hair), who became Queen Elizabeth I in 1558, similarly sported a fiery mane. However, Queen Elizabeth I took the fashion for red hair to a whole new level. Although red hair was often associated with barbarians — and the Irish and Scots, with whom England constantly quarreled, were seen as descended from such barbarians — the queen made the hue so popular that English courtiers allegedly dyed their hair and beards red to show support for her (and also the Protestant cause).
There’s an Annual Festival in the Netherlands That Celebrates Redheads
Every year, thousands of ruddy-haired people descend on Tilburg, Netherlands, for the Redhead Days Festival. Spread across three days, the event offers workshops on make-up and skin care tips as well as photo shoots and meet-and-greets. These events can be particularly impactful for people with red hair, as research in 2014 found that 90% of redheaded males experienced bullying simply because of their hair color. The event began by accident in 2005 when a local amateur painter placed an ad in a Dutch newspaper for 15 redheaded models — and 10 times that number showed up. The event was so popular that the redhead meet-up became an annual tradition and then a full-fledged festival. At the 2013 meet-up, 1,672 redheads set the world record for the largest gathering of people with natural red hair.
Portraits of George Washington often feature an elder statesman with powdered white hair (as was custom in the 18th century). However, as a young man, Washington actually had reddish-brown hair, as captured by portraits painted of him when he was younger. One example can be seen in the painting The Courtship of Washington, depicting Washington with his wife, Martha, and her two children (Washington never had biological children of his own). Even though the painting was completed some 60 years after Washington’s death, biographer Ron Chernow confirms that this was likely Washington’s true hair color. Mount Vernon, George Washington’s estate, even has a lock of his hair that displays the amber hue.
Despite a Persistent Myth, Redheads Are Not Going Extinct
In 2007, an article by an unnamed geneticist posited that redheads are going extinct. Despite many experts’ assertions to the contrary, the myth has persisted — but thankfully, no such extinction is on the horizon. This myth comes from the idea that recessive genes can essentially die out. If people with this rare genetic mutation (about 1% to 2% of the population) don’t have children, the color red will slowly fade away, or so the idea goes.
However, that’s not quite how genetics works. While only a few people (around 70 million to 140 million) sport red hair, many more are carriers of the gene, and it’s not uncommon for the red hair gene to skip a generation. Despite being carried through recessive genes, red hair color is genetically stable, meaning that evolution would need to select red hair as disadvantageous for some reason in order for it to become extinct. So no need to worry — red hair is here to stay.
Those With Freckles May Be Redheaded Gene Carriers
Carrying two MC1R gene mutations can cause redheadedness, but even people with only one mutation are three times as likely to develop ephelides, the medical term for what most of us think of as freckles, compared to those without any MC1R mutations. If you’re carrying two mutated MC1R genes, you’re 11 times more likely to get them. So if you have a light dusting of freckles across your nose (and they’re not just tattoos), your future child might be redheaded — even if your hair is blond or black. (Unfortunately, you also have a higher risk of developing skin cancer.)
The most familiar photos of Samuel Clemens, better known under his pen name Mark Twain, are not only in black and white but also taken when he was older. So it’s easy to miss that as a young man, Twain had vibrant red hair, which went gray in his 50s.
Before color photography, hair looked either light or dark — so many historical redheads are hiding in plain sight. Other redheads we’re not used to seeing in color include former U.S. President Calvin Coolidge and Dracula author Bram Stoker.
Lucille Ball was one of Hollywood’s most iconic redheads, but she was actually a natural brunette. She went blond early in her career, when she was appearing only in black-and-white movies — but when she was cast as the lead in her first Technicolor role, 1943’s DuBarry Was a Lady, she made the switch.
Still, that red wasn’t really the same red that she became known for. Hairstylist Sydney Guilaroff developed her famous shade a few years later to go better with her apricot-colored costume in Ziegfeld Follies (1946), using a few different dyes and a henna rinse.
While it’s easy to remember Ball in color, her best-known role as Lucy Ricardo on I Love Lucy was, of course, shot in black and white — although colorized versions exist.
Redheads May Produce More Vitamin D Than Other People
Our bodies generate vitamin D when the sun’s ultraviolet rays interact with our skin. This essential vitamin helps us absorb calcium and ward off a host of other health problems. Unfortunately, redheaded people tend to have fair, sensitive skin that doesn’t pair well with too much sunlight; it burns easily and is susceptible to skin cancer. Fortunately, redheads may produce more vitamin D with less sun than other people, according to a 2020 study from the Czech Republic, making up for at least some of that lost sunlight. No wonder the dark and rainy climes of Scotland and Ireland are full of them.
While results vary, multiple studies have found that people with redhead genes respond to pain differently than other people. In one 2005 study, they were able to withstand higher levels of electrical current and were more sensitive to opioid medications. A separate 2005 study found they were more sensitive to pain from heat or cold, and also tested sensitivity to the anesthetic lidocaine. That one found that when applied under the skin, lidocaine was less effective on redheads than on other participants, which may mean redheads sometimes need more anesthesia.
The evidence is far from conclusive, however. A more recent study, this one from 2020, concluded that MC1R genes do affect pain, but not the same variants that affect hair color. All in all, it’s a fascinating look at what one gene can do.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.