Cracker Jack’s early marketing warned prospective customers about the effects of the product. “Do not taste it,” one 1896 article cautioned. “If you do, you will part with your money easy.” Some historians believe that the caramel-coated popcorn and peanut treat jump-started the American snack food industry around the turn of the 20th century. It may even hold the title of the country’s first junk food, though the types of junk food popular today didn’t make their appearances until the 1950s. It all started with Chicago candy and popcorn peddlers Frederick and Louis Rueckheim, German immigrants who crafted a nonsticky caramelized popcorn as a way to stand out from other popcorn vendors. Their version — with a sweet, crunchy coating that was different from the salted popcorn and kettle corn available at the time — became a hit after it was mass-produced in 1896.
The “Take Me Out to the Ball Game” songwriter had never seen a baseball game.
Jack Norworth’s most famous tune was scrawled on an envelope during a subway ride in 1908, though the writer wasn’t on his way to a baseball game — he’d never even been to a professional one. Music publisher Albert von Tilzer, who penned the melody, also hadn’t seen a game.
It was a song, however, that helped cement Cracker Jack’s snack status. In 1908, songwriter Jack Norworth — who had no connection to the Rueckheims — composed “Take Me Out to the Ball Game” after seeing an advertisement for an upcoming game. The song, which mentions the snack by name, led to a surge in sales that forever linked Cracker Jack with sports. Four years later, the Rueckheims sweetened their popcorn business with a marketing gimmick that would be replicated by cereal brands, fast-food restaurants, and candymakers for decades to come: a toy in every box. By 1916, Cracker Jack was the bestselling snack worldwide.
Before Sailor Jack, Cracker Jack’s original mascots were teddy bears.
Popcorn was once banned in movie theaters.
It may feel like popcorn and movies have always gone hand in hand, but at one time they were a contested combo. Americans experienced a 19th-century popcorn boom; by 1848, it was common fare at circuses and street fairs. But when movie theaters emerged in the early 1900s, they tried to align themselves with upscale playhouses, creating lavish interiors with fine carpets and furniture that patrons’ snacks could soil. Theater owners also believed popcorn munching would be too distracting at a time when filmgoers had to read the title cards of silent films. (Neither argument seemed to deter theatergoers, who snuck in snacks anyway.) Everything changed when the Great Depression hit: By the 1930s, theaters were seeing as many as 90 million visitors per week, and selling inexpensive popcorn was one way to generate extra money in a rough economy. The change worked in favor of big-screen owners (and popcorn producers): By 1945, more than half of the popcorn eaten in the U.S. was sold at movie theaters.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
The jaws of a crocodile are a marvel of evolution. With a second jaw joint unlike anything found in mammals, a crocodile can spread the force of its tremendous bite throughout its mouth. In fact, crocodiles have the most powerful chomp in the animal kingdom — 3,700 pounds per square inch for a saltwater crocodile, about 30 times the force of a human bite. But that’s not the only interesting thing about a crocodile’s mouth: The tongue is permanently rooted to the floor of the mouth, making it incapable of getting between those devastating jaws. It’s also held in place by a membrane attached to the back of the roof of the mouth, which seals off the throat when the animal is submerged.
Crocodiles are found on every habitable continent except Europe. As the planet started cooling some 50 million years ago, cold-blooded crocodiles retreated to warmer climates, leaving Europe (and also Antarctica) without any native crocodilians.
A crocodile’s immobile tongue isn’t a new trait — its most famous ancient relative, the Tyrannosaurus rex, couldn’t move its tongue either (a detail Jurassic Park got very wrong). In 2018, researchers compared the T. rex’s hyoid bones — the bones that support the tongue — to those of modern birds and alligators, and found that the dinosaur’s tongue was similarly fixed in place, like that of modern crocodilians. The king of dinosaurs likely had an immovable tongue for similar reasons: With a bite estimated at 12,800 pounds of force — nearly four times a saltwater crocodile’s — T. rex biology made sure to keep crucial body parts (i.e., the tongue) clear of the most powerful bite of any land animal that ever lived.
Giraffe tongues stand out in the animal kingdom because they’re prehensile.
Crocodiles actually do cry “crocodile tears.”
When someone is feigning sadness, they’re sometimes said to be “crying crocodile tears.” The phrase, which links crocodiles to their often teary-eyed display, has appeared in literature for centuries. One of its earliest mentions appears in The Voyage and Travels of Sir John Mandeville, published in the 14th century, which says, “these serpents slay men, and they eat them weeping.” Even William Shakespeare makes note of crocodile tears in Othello. Crocodiles do “cry,” but it’s mainly to keep their eyes lubricated when they’ve been out of the water for long periods. In 2007, a University of Florida zoologist also confirmed that crocodiles weep while eating, theorizing that the tears come from forced airflow (from a croc’s copious hissing and huffing), which in turn affects the reptile’s tear glands.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
The history of technology is filled with happy accidents. Penicillin, Popsicles, and Velcro? All accidents. But perhaps the scientific stroke of luck that most influences our day-to-day domestic life is the invention of the microwave oven. Today, 90% of American homes have a microwave, according to the U.S. Bureau of Labor Statistics, but before World War II, no such device — or even an inkling of one — existed.
Chocolate originated in what is now modern-day Belgium.
Although Belgium is now known for chocolate, the story of the sweet stuff begins in Mexico. Around 1900 BCE, the Mokaya, a pre-Olmec civilization, was likely the first to turn the cacao plant into chocolate. However, they consumed it as a drink — chocolate bars arrived much later.
During the war, Allied forces gained a significant tactical advantage by deploying the world’s first true radar system. That success spurred further research into microwaves and the magnetrons (a type of electron tube) that generate them. One day circa 1945, Percy Spencer, an engineer and all-around magnetron expert at the aerospace and defense company Raytheon, stepped in front of an active radar set. To his surprise, the microwaves it produced melted a chocolate bar (or by some accounts, a peanut cluster bar) in his pocket. After getting over his shock — and presumably cleaning up — Spencer conducted a few more experiments using eggs and popcorn kernels and realized that microwaves could cook a variety of foods. Raytheon patented the invention a short time later, and by 1947 the company had released its first microwave oven. It took decades for the technology to improve and prices to drop before microwaves were affordable for the average consumer, but they eventually became one of the most ubiquitous appliances in the modern kitchen.
The first microwave oven was called the Radarange and weighed 750 pounds.
The discovery of evidence for the Big Bang was also an accident.
In 1964, at Bell Labs in Holmdel, New Jersey, radio astronomers Arno Penzias and Robert Wilson were frustrated with their antenna. The sensitive equipment was picking up a persistent buzzing noise that the pair first thought might be coming from the machine itself, from nearby New York City, or even from pigeons nesting in the antenna. But once every mundane explanation had been ruled out, the two astronomers still detected the hum no matter where they pointed the antenna in the sky. After speaking with astronomers at Princeton University, the duo realized they had actually detected the cosmic microwave background — leftover radiation from the Big Bang and evidence of the very beginning of the universe some 13.8 billion years ago. Fourteen years later, Penzias and Wilson were awarded the Nobel Prize in Physics for their groundbreaking — and serendipitous — discovery.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
Run your fingers along the edge of a dime and you’ll notice it’s crimped. If you were to count those tiny grooves and valleys, you’d find 118 ridges — one fewer than on a quarter. But not all American coins have ridges, and here’s why. Rippled edges on larger-denomination coins have been part of American currency since the U.S. Mint’s early days, and they were a clever solution to a massive currency conundrum: counterfeiting and fraud. In the 1700s, coins were an easy target for money-making schemes, including coin clipping: People would clip or shave off slim portions of a coin’s outer edge and cash in the scraps of precious silver and gold. In Great Britain, coin clipping was so common that the Crown deemed it a form of treason. In early North American settlements, “coining” — the actual production of fake coins — was equally problematic.
Quarters have always been the largest American coin.
Pennies began circulating in March 1793, though many Americans were disgruntled with their initial design. The supersized coins were larger than today’s quarters, and some believed the Lady Liberty depicted on them looked inelegant. The design was soon discontinued.
Knowing this, American coin makers added grooved bands — called reeded edges — to the thin sides of dimes, quarters, and larger coins then made from silver, in order to prevent shaving and make fraud more difficult. (Pennies and nickels remained smooth-sided, since they were pressed from less-valuable copper and nickel.) But reeded edges lost some of their utility when the Coinage Act of 1965 changed the composition of dimes and quarters from silver to a copper-nickel blend. Reeded edges remain today partly as a design choice and partly because they help people with visual impairments differentiate among coins.
Collecting and studying coins and other forms of money is called numismatics.
Coins were once commonly carved into “love tokens.”
Not all coins were saved or spent — in centuries past, some became tiny testaments to love. Love tokens, an idea that likely originated in Great Britain around the 13th century, were crafted by sanding away the faces of coins and using the precious metal as a blank canvas, which was then often engraved with memorable dates, a loved one’s initials, or romantic sentiments. After spreading to North America, love tokens reached peak popularity in the late 1800s, in part because they were used to memorialize family members lost during the Civil War. While they were initially etched by hand, professional carvers were sought out for more intricate designs, and the trend became so popular that crafters even set up booths at the 1893 World’s Fair in Chicago. Many surviving love tokens are difficult to appraise or date because of their lack of detail and highly personal meaning, yet their greatest value may be the reminder that love can withstand the test of time.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
You know your head, shoulders, knees, and toes (knees and toes), but has anyone ever introduced you to the glabella? This isn’t some hidden-away body part like the back of the elbow or something spleen-adjacent — it’s smack-dab in the middle of your face. From the Latin for “smooth, hairless, bald,” the glabella is the small patch of skull nestled between your two superciliary arches (also known as your eyebrow ridges). Many people know of the glabella because of the wrinkles, or “glabellar lines,” that can appear in the area.
Eyebrows are a uniquely human feature. As the forehead ridge of Homo sapiens receded to make room for our brain, eyebrows developed to keep moisture out of our eyes while also blocking sunlight.
Although smooth and hairless today, the glabella wasn’t always so. Our ancient relatives, including Neanderthals, sported formidable brow ridges that likely evolved to display social dominance. As the brain of Homo sapiens grew, this brow receded until only the smallest of ridges survived — along with the smooth bit of bone in between. But the fortunes of this little piece of anatomical real estate weren’t tied only to evolution. Women in ancient Greece saw the unibrow as a beautiful feature, so much so that they’d paint soot on their glabellas to form a faux unibrow. Throughout the following centuries, fashion’s notion of the ideal eyebrow changed, but the glabella remained more or less true to its smooth, hairless name. Unless Frida Kahlo’s famous unibrow becomes a modern fashion trend, it’ll likely stay that way.
In Japan’s Heian era (794 to 1185 CE), some women removed their eyebrows and repainted them with ink.
Eyebrows played a crucial role in the rise of Homo sapiens.
Eyebrows are an often-overlooked asset of human beauty. Folks write poetry about gorgeous eyes and ballads about beautiful smiles, but eyebrows — while certainly an obsessed-over feature in modern beauty trends — rarely receive as much adoration. Yet according to anthropologists, the fuzzy caterpillars on our foreheads were vital to the survival of our species. A 2018 study found that eyebrows figure prominently in human social interaction and helped early humans form large, complex social groups. One of these interactions occurs when people see each other at a distance: In that situation, people unconsciously raise their eyebrows in a way that apparently signals they’re not a threat. Eyebrows are similarly raised toward the middle to signal sympathy, and their micro-movements can also play a key role in expressing trustworthiness or deception. With this ability to convey subtle emotions in a mere “eyebrow flash,” humans formed the larger and more diverse social groups that set our species on its way to becoming the dominant animal on the planet.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
The wheel is credited as one of humankind’s most important inventions: It allowed people to travel farther on land than ever before, irrigate crops, and spin fibers, among other key benefits. Today, we often consider the wheel the ultimate civilization game-changer, but it turns out that creating the multipurpose apparatus wasn’t exactly on humanity’s immediate to-do list. Our ancient ancestors worked on other ideas first: boats, musical instruments, glue, and even alcohol. The oldest evidence of booze comes from China, where archaeologists have unearthed 9,000-year-old pottery coated with beer residue; in contrast, early wheels didn’t appear until around 3500 BCE (some 3,500 years later), in what is now Iraq. But even when humans began using wheels, they had a different application — rudimentary versions were commonly used as potter’s wheels, a necessity for mass-producing vessels that could store batches of brew (among other things).
Potato-based vodka may be popular, but baijiu, a traditional Chinese liquor, accounts for nearly 25% of all spirits sold globally. Distilled from sorghum or rice, baijiu (meaning “clear liquor”) has an alcohol content of 50% to 65% and is meant to be imbibed as a shot.
Some researchers believe our long-standing relationship with alcohol began 10 million years ago thanks to a genetic mutation that allowed our bipedal ancestors to consume overly ripe fruit. Alcohol consumption eventually transitioned from a snack-time byproduct to a purposefully crafted, fermented beverage, and different cultures began to create their own brews independently. After China’s beer and wine appeared around 7000 BCE, early vintners in the Caucasus Mountains followed 1,000 years later. Sumerian brewers crafted beer around 3000 BCE, while Indigenous communities in the Americas, such as the Aztecs and Incas, later made their own alcoholic drinks from agave and corn. It may seem surprising that ancient humans were so fermentation-focused, but early alcohols played a major role in prehistoric communities: Booze was often the center of religious and social celebrations, and could serve as a go-to cure for illness and pain. In some cases, it even acted as a nutritious, probiotic boost during times of food scarcity. With their many uses, both lifesaving and life-enhancing, brewed beverages have withstood the test of time.
The world’s oldest wooden wheel was uncovered from a marsh in Slovenia.
It takes eight years to grow agave plants for tequila.
When European colonists first encountered Mexico’s native agave plants, they were intrigued by the succulents the Aztecs had been using to make clothing, rope, and intoxicating drinks. The spike-tipped plants, which grow as tall as 20 feet, were dug up and transplanted to greenhouses and botanical gardens throughout Spain, Portugal, and other parts of Europe starting in the 16th century. But most agave plants struggled to flourish in areas lacking their natural arid climate; in cooler countries, they were dubbed “century plants,” because those that survived the overseas journey didn’t bloom for nearly 100 years. Agave plants mature much faster when left in their natural habitats, but growing the crop for today’s tequila production is still a time investment. It traditionally takes about eight years before the plants are ready to harvest, though some agave crops are left to grow even longer.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
Straddling the border of Zambia and Zimbabwe in Africa is one of the greatest natural wonders of the world (literally). Nearly twice as tall as Niagara Falls, Victoria Falls plunges some 350 feet into the vast gorge below. The steady stream of water is so powerful that it has created a rainforest-like microclimate, its voluminous spray continuously blanketing the surrounding area. Although the world largely knows this wonder as Victoria Falls (after Queen Victoria, naturally), locals have traditionally called it Mosi-oa-Tunya, meaning “The Smoke that Thunders.” The name is arguably a better fit, as the “thunder” of this massive waterfall can be heard from 25 miles away, and its “smoke” (aka water plume) can be seen even farther.
Victoria Falls is the widest waterfall in the world.
Victoria Falls isn’t the highest waterfall (that’s Venezuela's Angel Falls) or the widest, an accolade that belongs to the Khone Phapheng waterfall in Laos. This waterfall stretches more than 6 miles wide, whereas Victoria Falls is only a little more than a mile wide.
Victoria Falls is actually several waterfalls in one. On the Zimbabwe side are Devil's Cataract, Main Falls, Rainbow Falls, and Horseshoe Falls, while the Eastern Cataract lies on the Zambia side. If you want to see the waterfall at its most dramatic, visit between February and May, when the summer rains — and by extension Victoria Falls itself — are at their highest volume. However, other times of year have plenty to offer, too. Between mid-August and mid-December, daring adventurers can take a dip in Devil’s Pool, a swimming hole that brings those unburdened by acrophobia — a fear of heights — to the very edge of the falls’ dizzying plunge.
Victoria Falls is part of the Zambezi River, the fourth-longest river in Africa.
Another amazing water feature lies on the border of Zimbabwe and Zambia.
Some 150 miles northeast of Victoria Falls lies yet another impressive body of water on the border of Zimbabwe and Zambia — an especially notable feat for two landlocked countries. Filled between 1958 and 1963, Lake Kariba is the largest reservoir by volume in the world, stretching a staggering 139 miles long and 25 miles wide. The reservoir was created by the Kariba Dam, the largest dam in Africa, and power plants on the dam provide hydroelectric energy for both Zimbabwe and Zambia; Zimbabwe, for example, receives 57% of its annual electricity from the dam. In addition to providing much-needed power, the lake is also a tourist destination for both countries.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
French cuisine is often considered the epitome of fine dining, and that could be because French cooks are said to have launched the modern restaurant — and even invented the word “restaurant” itself. Many etymologists and historians attribute the origins of both to A. Boulanger, a Parisian soup vendor who set up shop in 1765. Boulanger peddled bouillons restaurants — so-called restorative meat and vegetable broths, from the French restaurer, meaning “to restore or refresh” — an act that wasn’t entirely revolutionary, since other cooks were selling healing soups from “health houses” around the same time. But Boulanger’s approach was different because he also offered a menu of other meals at a time when most taverns and vendors served just one option, dictated by the chef. Boulanger’s concept of seating guests and allowing them to choose their desired meal exploded in popularity after the French Revolution at the end of the 18th century, as kitchen workers who formerly served aristocratic households set up their own dining rooms or joined new eateries. By 1804, French diners could choose from more than 500 restaurants across the country.
Waffle House doesn’t just sling breakfast; the 24-hour diner has also pressed its own jukebox records since the mid-1980s. Restaurant-themed songs across genres (such as gospel, bluegrass, and R&B) are released under the Waffle Records label and exclusively played at the chain’s diners.
Some historians disagree with this oft-told tale of the restaurant’s origin, noting there’s little documentary evidence that Boulanger was a real person. Others believe attributing the public dining room to French ingenuity isn’t wholly accurate, since humans have been offering their cooking talents to the hungry masses for millennia. Take, for example, how Chinese chefs in major cities such as Kaifeng and Hangzhou customized menus for traveling businessmen looking for familiar meals nearly 700 years before France’s iteration of the restaurant. Or the excavated ruins at Pompeii, dating to 79 CE, which include ornately decorated food stalls called thermopolia, where hungry Romans could choose from a variety of ready-to-eat dishes. Though the names have differed, enterprising humans have been selling meals to one another for a long, long time.
Founded in 1921, White Castle is America’s oldest fast-food burger restaurant.
The first American diners were mobile.
Most of the diners Americans patronize today are stationary spots, but the country’s earliest greasy spoons were more like modern food trucks. First called “night lunch wagons” by Rhode Island inventor Walter Scott in 1872, these horse-drawn diners served hot meals to patrons — often late-shift workers or partiers — looking for food long after other restaurants had closed. Soon after, ingenious restaurateurs developed rolling eateries complete with seats, some providing both a meal and transportation to hungry diners looking to travel across town. By the 1890s, trains began incorporating the concept (ticket holders were previously responsible for supplying their own meals), debuting dining cars that fed patrons on long journeys across the growing West. The original rolling wagons, however, quickly fell out of style; maintenance costs, city bans, and competition from brick-and-mortar restaurants pushed many proprietors out of business by the early 1900s. Those that survived swapped their carts for permanent locations that often resembled the original wagons or were built from modified railroad dining cars — an iconic look that remains today.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
Whales are some of the most majestic creatures on the planet. The blue whale is the largest animal ever to exist, the bowhead whale can live for more than 200 years, and a few humpback whales saved the future of humanity in Star Trek IV: The Voyage Home. In fact, these creatures are so amazing that even their earwax is a vital tool — at least for helping scientists understand the mysterious mammals themselves. Take, for instance, the 10-inch-long earplug of an adult blue whale (Balaenoptera musculus). Cetologists — scientists who study whales — can cut into a plug of earwax and learn the whale’s age, much as dendrochronologists do with tree rings. Earwax from blue whales (and other large whales, such as humpbacks) forms a new layer, known as a “lamina,” every six months through alternating cycles of summer feeding and winter migration — so counting the layers and dividing by two reveals the whale’s age, giving scientists a snapshot of the creature’s entire life.
Fifty million years ago, the early ancestor of all cetaceans walked on four legs. This goat-like mammal, dubbed Pakicetus, lived on riverbanks in India and Pakistan. Slowly, its descendants became more comfortable in water until they eventually evolved into today’s whales.
And these waxy earplugs can tell scientists more than just a whale’s age. They also capture a chronological “chemical biography” showing which chemicals and pollutants were in the animal’s body throughout its life, including levels of the stress hormone cortisol. Scientists have compared whale cortisol levels against whaling records from 1870 to 2016 and found an unmistakable positive correlation. The only discrepancy was during World War II, when whale stress levels increased despite a decrease in whaling overall (scientists suspect increased military activity was the culprit). And despite the international moratorium on commercial whaling adopted in the 1980s, whales still exhibit high cortisol levels thanks to increased ship noise, climate change, and other factors. But with the help of whale earwax, scientists can at least continue to monitor the health of these majestic beasts and the oceans they inhabit.
The scientific name for earwax is actually cerumen.
Using Q-tips to clean your ears is a bad idea.
If you see or feel excess wax in your ear, you should grab a Q-tip, right? Not so fast. Earwax actually plays an important role in auditory health. Produced by the skin of the ear canal, earwax prevents dust and other debris from damaging deeper structures such as the eardrum. An excess of earwax, however, can cause “impaction,” which produces symptoms including irritation, hearing loss, and even dizziness. But removing earwax buildup with a cotton swab is not recommended: Otolaryngologists (doctors who specialize in the ears, nose, and throat) warn that cotton swabs can actually exacerbate impaction by pushing wax toward the eardrum, where it can harden. If your ears do become impacted, see your local ENT or primary care physician — but don’t toss those Q-tips. You can still use them for cleaning your outer ear or other hard-to-reach spots like faucets, computer keyboards, or car interiors.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
A few years into his reign, Russian Czar Peter I (aka “Peter the Great”) decided to study abroad. Worried that Russia was lagging behind in key technological areas — especially shipbuilding — Peter traveled incognito in 1697 and 1698 to various European countries, including Prussia, Holland, and England, in an effort to modernize his own nation. Afterward, armed with his newly acquired shipbuilding know-how, he created Russia’s first navy.
In ancient Rome, urine was prized for its ammonia, which was used in cleaning products and toothpaste. The urine trade was so lucrative that Emperor Vespasian placed a tax on it in 70 CE. When confronted about the new tax, he famously stated “pecunia non olet,” or “money doesn’t stink.”
But it wasn’t just maritime skills Peter picked up on his “Grand Embassy.” He also returned with a few fashion and grooming ideas — including a particular interest in the freshly shaven chins of most Western European men. Determined to integrate Russia into the increasingly powerful club of European countries, Peter established a beard tax around 1705, fiscally punishing anyone sporting facial hair. The tax was progressive, with the well-to-do shelling out more for their facial adornments than the peasantry: Nobility and merchants could pay as much as 100 rubles a year, while peasants might pay one kopek (1/100 of a ruble). Yet the tax was almost universally reviled — it even helped spark a few riots. Its biggest opponent was the Russian Orthodox Church, which regarded clean-shaven faces as sinful. Despite this stiff opposition, Peter stuck with the tax and was even known to shave off the beards of his guests at parties, much to their horror.
When Peter I visited Western Europe, he traveled incognito under the name Sergeant Pyotr Mikhaylov.
Sideburns are named after a Union general in the Civil War.
Sideburns have adorned the faces of famous figures from Alexander the Great to Charles Darwin, but it wasn’t until the U.S. Civil War (1861–1865) that the style got its modern name, thanks to a particularly hirsute Union general. Ambrose Burnside wasn’t much of a general: At the Battle of Antietam, his ineffective command meant his soldiers struggled to take a stone bridge (now called Burnside Bridge), turning what could’ve been a Union victory into a draw. At Fredericksburg, things went from bad to worse, as Burnside led several failed assaults against Robert E. Lee’s forces. But what Burnside lacked in military acumen, he made up for with his luxurious facial hair, which connected his side-whiskers to his mustache (his chin remained clean-shaven). After the war, many men copied the general’s look, and these facial facsimiles were called “burnsides.” Over the years, the two halves of the term swapped places, giving us the modern “sideburns.”
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.