The Heinz ketchup bottle is an iconic piece of packaging and design. Whether large or small, made of glass or plastic, the bottle is instantly recognizable on tables and shelves throughout much of the world. And if you’ve ever looked closely at one of those bottles, you’ll likely have noticed a certain number prominently embossed on the glass or printed on the label: 57.
You may have sat there, waiting for your burger to arrive, wondering about the significance of that particular number. Does Heinz make exactly 57 products and is ketchup the 57th? Does the ketchup contain 57 secret ingredients? Or do the digits refer to something else entirely?
The truth behind this peculiar number is an intriguing combination of creative inspiration, savvy marketing, and numerical superstition. Let’s uncap this mystery together.
The history of Heinz began in earnest in 1869, when 25-year-old Henry J. Heinz created his first product, a high-quality grated horseradish based on his mother’s recipe. The business expanded, eventually leading to the birth of what became the company’s flagship product: Heinz tomato ketchup.
The now-legendary sauce first appeared on U.S. shelves in 1876 — but those early bottles didn’t feature the number 57. According to the Heinz History Center (an affiliate of the Smithsonian Institution), the origins of the Heinz 57 trademark go back to 1896, when H.J. Heinz saw a sign advertising “21 styles of shoes” while riding an elevated train in New York City. The intriguing effect of the advertisement struck Heinz immediately.
He realized advertising a specific number of product varieties gave weight to a brand; it sounded substantial and impressive, giving customers a sense of abundance and expertise. At that moment, Heinz decided his company needed its own magic number — and “57 varieties” was born.
At the time Heinz had his numerical eureka moment, his company was producing more than 60 products, ranging from plum pudding to olive oil and peanut butter. He could have tallied up the total number of Heinz varieties and chosen the actual, literal number of products, but he went with 57 instead — partly because he liked the way it looked and sounded. There was also a certain amount of superstition involved: He later revealed that five was his lucky number and seven was his wife’s.
Firmly set on 57, Heinz didn’t hesitate to fully incorporate the number into the brand. He put “57” and “57 varieties” everywhere: on bottles, delivery wagons, buildings, and billboards across the country. The number even appeared in large numerals etched into hillsides.
Having originally started as an idea plucked from almost nowhere during a routine train ride, 57 went on to dominate the company’s labels and advertising strategy for decades. Even as Heinz expanded to hundreds of products, the company never abandoned its signature number and the mythical yet sweet-sounding “57 varieties” tagline that can still be found on Heinz bottles today.
Tony Dunnell
Writer
Tony is an English writer of nonfiction and fiction living on the edge of the Amazon jungle.
Standing proudly on one end of Washington, D.C.’s National Mall, the Lincoln Memorial honors its namesake Abraham Lincoln, the 16th president of the United States, who’s known for leading the nation through the Civil War and for helping put an end to slavery with the Emancipation Proclamation.
The memorial is among the most historically significant and recognizable sites in the United States, serving as the backdrop for major moments such as Martin Luther King Jr.’s “I Have a Dream” speech and appearing in popular films including Forrest Gump and National Treasure. Yet despite its prominence, the monument has plenty of secrets to share. Here are five fascinating facts about this famous landmark.
The architectural style of the Lincoln Memorial, which was completed in 1922, is notably different from the typical style of the early 20th century. New York architect Henry Bacon took a unique yet meaningful approach to the monument’s design, modeling it after the Parthenon in Athens, Greece.
Bacon believed the design should reflect Lincoln’s values. According to the National Park Service (NPS), Lincoln’s lifelong passion for and defense of democracy inspired Bacon to draw from the architecture of ancient Greece, the birthplace of democratic ideals.
All told, the memorial measures 190 feet long, 120 feet wide, and 99 feet tall. Bacon chose to use various types of stone for his design. The exterior and upper stairs are constructed from Colorado marble, while the accents were sourced from other states; the terrace is made from Massachusetts granite and the chamber floor uses Tennessee pink marble.
The inside of the Lincoln Memorial features several important inscriptions, but the one on the north interior wall is particularly interesting. Lincoln’s Second Inaugural Address from 1865 is carved into the chamber’s limestone — and if you look closely, you can see that one of the words was originally misspelled.
While speaking of national reconciliation after the Civil War, Lincoln said, “With high hope for the future no prediction in regard to it is ventured.” But the engraver made an error when translating this sentiment to the wall. When engraving the word “FUTURE,” artist Ernest C. Bairstow — who expertly completed all the other lettering and small details on the memorial — inadvertently carved a capital “E” at the beginning of the word, resulting in “EUTURE.”
According to the NPS, the mistake was likely the result of using an “E” stencil rather than an “F.” Though the error has since been corrected by filling in the bottom line of the letter to revert it to an “F,” the shadow of the “E” is still visible if you look for it.
The most prominent exterior feature of the Lincoln Memorial is the colonnade of 36 towering Doric columns, each standing 44 feet tall with a base diameter of more than 7 feet. And the number of columns is no coincidence — it’s symbolic. Architect Henry Bacon chose to surround the memorial with 36 columns to represent the 36 states in the U.S. at the time of Lincoln’s death on April 14, 1865.
By the time the memorial was completed in 1922, there were 48 states in the nation. (Alaska and Hawaii weren’t granted statehood until 1959.) And the newer states aren’t overlooked by the design. Inscribed at the top of the memorial, above the exterior columns, are the names of the 48 contiguous U.S. states, and in 1976, a plaque honoring Alaska and Hawaii was added to the plaza as well.
The Current Site Was Once in the Potomac River
The site of the Lincoln Memorial and much of the surrounding area were once buried beneath the Potomac River and its marshy shores. The site was called Kidwell Flats — an especially muddy portion of the river.
The original plans for the National Mall were drawn up in 1791 and ended at the Washington Monument, at the edge of the Potomac’s original shoreline. To the west was swampy marshland, known for being buggy, musty, and unpleasant. However, city planners wanted to add more land to the National Mall’s total area, so a plan was devised to make that marshland more hospitable.
The U.S. Army Corps of Engineers dredged the Potomac during the 1880s and ’90s, dumping soil west and south of the Washington Monument. Today, that area is not only home to the Lincoln Memorial but also to the World War II Memorial, the Martin Luther King Jr. Memorial, and the Jefferson Memorial, among other landmarks.
The Lincoln Memorial Reflecting Pool was also built atop this dredged soil. Constructed during the 1920s, the original pool lacked pilings to support it, so it sank about a foot into the marshy land below. That slow sinking caused cracks and leakage, requiring about 30 million gallons of water each year to keep the pool filled.
Since its construction, the landmark has undergone several renovations. In 2012, it received a complete overhaul, featuring a restored bottom and a new sustainable water system.
There’s Going To Be a Museum Underneath the Memorial
It’s a question NPS guides get all the time: “What’s underneath the Lincoln Memorial?” For the last century, the answer has been quite boring: an empty basement.
Despite a widely held myth that the late president is buried beneath his memorial (he was actually laid to rest in Springfield, Illinois), the monument’s so-called “undercroft” is relatively empty. The cavernous, bunker-like structure features massive concrete columns and graffiti left behind by its original builders.
But that’s all about to change: An immersive museum featuring 15,000 square feet of public exhibit space is set to open in July 2026. The exhibits will illuminate the monument’s history, its construction, and its role in civil rights demonstrations.
The undercroft spans 43,800 square feet in total, meaning only part of it will become museum space. Floor-to-ceiling glass walls will allow visitors to view the undeveloped section of the monument’s concrete foundations, offering a rare behind-the-scenes look at one of America’s most enduring symbols.
Rachel Gresh
Writer
Rachel is a writer and period drama devotee who's probably hanging out at a local coffee shop somewhere in Washington, D.C.
Though some people view rodents as unwelcome pests, to others they’re beloved members of the family. Critters such as guinea pigs, hamsters, and even rats are popular pets thanks to their small size, playful demeanor, and intelligent nature.
But as common as these pint-sized pets are, there’s still a lot you may not know about them. Here are five facts about rodent pets for you to nibble on.
Rather than soaking in water, chinchillas take dust baths to keep their fur looking lush and their skin free from irritation. Chinchillas have some of the densest fur of any land mammal, with up to 80 hairs growing out of a single follicle. That density makes it tough for chinchilla fur to dry efficiently after getting wet, and the trapped moisture can lead to matted fur and skin problems.
As a solution, chinchillas rely on the alternative bathing method of rolling around in fine dust, which clears away debris and distributes their body’s natural oils. It’s typically best for the animals to use dust made from absorbent compounds such as volcanic ash or pumice that help draw out excess moisture.
While wild chinchillas take dust baths as they please, it’s recommended that pet owners offer two to four dust baths a week, ranging from three to five minutes each time. That routine keeps a chinchilla’s fur soft, helps prevent irritation and fungal growth, and also ensures the animal’s mental wellness.
Teddy Roosevelt and his family owned dozens of guinea pigs, including during their time in the White House. At one point in 1900, the Roosevelts cared for a whopping 22 guinea pigs simultaneously. The Theodore Roosevelt Center adds that there’s evidence of an additional eight guinea pigs owned by the Roosevelts at some point, bringing the known total of those family pets over the years to 30.
While not all those pet guinea pigs were named, we know of at least five that lived in the White House: Admiral Dewey, Dr. Johnson, Bishop Doane, Fighting Bob Evans, and Father O’Grady. The Roosevelts also owned a particularly large guinea pig they named “The Prodigal Son,” as well as guinea pigs named Harvard, Princeton, and Mr. and Mrs. Longworth.
Gerbils Are Social Creatures — But Hamsters Are Not
Some rodents are highly social creatures best kept in pairs, while others thrive when left alone in solo enclosures. Gerbils are part of the first category, as they live in packs of two to 15 in the wild. It’s recommended to keep domesticated gerbils in same-sex pairs to avoid unwanted breeding and to introduce the pairs at a young age so they can forge bonds early in life. That social interaction keeps pet gerbils happy and prevents them from feeling lonely or stressed.
Hamsters, on the other hand, are solitary animals that prefer to live alone, though many pet owners make the mistake of keeping them in pairs or near other animals. Typical hamsters are highly territorial and may become aggressive toward other hamsters, which they’re likely to view as threats rather than companions.
Whether you’re trying to keep a pet mouse in its enclosure or stop wild mice from entering your home, it’s important to be aware that mice can fit through the tiniest of spaces. In fact, mice can squeeze through holes as narrow as a quarter inch — roughly equivalent to the diameter of a standard No. 2 pencil.
Though a mouse’s skull is its largest bony feature, the shape is long and narrow, which allows the animal to fit its head through teeny-tiny holes. What’s more, mice have sloping clavicles that are angled in a way that doesn’t impede their movement, and their ribcages can compress inward. Rats, meanwhile, have a similarly flexible bone structure and can squeeze into holes as small as a quarter.
Rats have two types of teeth: molars that stop growing when fully formed and incisors that grow endlessly. Those four fang-like incisors are located in the front of the mouth, two on top and two on the bottom. If they get too long, they can make it difficult for a rat to eat, potentially causing a slew of health complications, which is why it’s vital for rats to keep their incisors ground down to a manageable length.
It’s recommended that rat owners feed the creatures a proper diet and provide them chew toys to grind their teeth on. You may also notice your pet rat performing an action called “bruxing,” in which the critter softly grinds its incisors against each other to wear them down.
During particularly intense bruxing sessions, it’s common for a rat’s eyes to bulge and rapidly vibrate as its jaw muscle presses against its eyeballs and pushes them outward. This is known as “eye-boggling,” and it’s normal behavior indicative of a happy rat.
Bennett Kleinman
Staff Writer
Bennett Kleinman is a New York City-based staff writer for Inbox Studio, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
Think about the last time you stepped outside on a cold winter morning. You may have noticed the air smelled different: crisp, clean, even invigorating — though you couldn’t put your finger on why. It’s not your imagination; across climates and continents, people experience winter air as feeling and smelling fresher than at any other time of year.
But what makes cold air smell good? Is it the snow, the pine trees, or something intangible in the air itself? Scientists say it’s a combination of factors, relating to both the world’s natural rhythms and the inner workings of the human brain.
One of the primary reasons winter air smells so good is that there are simply fewer aromas competing for your attention. Warm air holds more moisture, and that moisture helps carry smells. In summer, heat and humidity intensify odors from soil, plants, pavement, garbage, and pollution, creating a thick mix of scents — some pleasant and many less so.
Cold air, however, behaves differently. As temperatures drop, the tiny airborne molecules responsible for smell — called volatile organic compounds — move more slowly and evaporate less easily. Fewer of those odor molecules are released into the air from sources such as plants, soil, or decaying organic matter.
Wintry air is also usually drier, especially after a freeze. Without humidity to help transport odors, many everyday smells fade into the background. Pollen disappears, plant growth slows, and bacteria that cause decay become less active.
The result isn’t that winter air smells like anything especially good — more accurately, it smells like less. And our brains tend to interpret that absence of competing smells as clean and fresh.
Cold weather changes not only the air but also how your body experiences it. When you inhale cold air, nerve endings inside your nose react to the temperature, which you perceive as a sharp, tingling sensation. That response comes partly from a nerve system that detects cold and irritation — the same system that makes mint or menthol feel refreshing.
At the same time, cold, dry air can slightly reduce the sensitivity of your smell receptors, the specialized cells that detect odors, meaning fewer smells register as strongly. But rather than dulling the sensation, this often has the opposite effect: With fewer odors coming in, each inhale feels clearer and more distinct.
The contrast also matters. Stepping from a warm indoor space into cold outdoor air creates an immediate sensory shift, prompting your brain to pay closer attention. Even if there’s less to smell, the physical sensation of cold air makes the experience feel sharper and more vivid.
Many people, including beloved TV character Lorelai Gilmore, swear they can smell snow before it falls. But while winter may have a unique fragrance, snow is just frozen freshwater and therefore has no odor.
Those people are simply sensing the atmospheric changes that often precede snowfall. Humidity tends to rise, air pressure drops, and existing scents — trees, soil, distant wood smoke — can become more noticeable.
Cold air is also denser, allowing smells to linger longer and travel farther without dispersing as quickly. Over time, some people learn to associate that specific mix of cold, moisture, and stillness with approaching snow. So if you think you can smell an impending snowstorm, it’s not the snowflakes you’re smelling — it’s your brain recognizing a familiar winter pattern.
Even in winter, plants continue to influence the way the air smells. Evergreen trees including pine, fir, and spruce produce aromatic compounds known as terpenes, which give the trees their characteristic scents and can be noticeable even in cold weather. In winter, with many deciduous plants dormant, those evergreen aromas stand out more clearly against a backdrop of muted seasonal smells.
Lower temperatures also suppress biological activity such as microbial decomposition, which otherwise releases musty, earthy odors, leaving relatively fewer unpleasant natural scents in the air. In some cases, subtle chemical reactions in the snow, soil, or frozen plants can even generate new, faint aromas unique to winter landscapes.
Winter air may smell fresh, but that doesn’t always mean it’s objectively cleaner. In cities, for example, cold weather can actually trap pollutants close to the ground. A layer of cold, dense air can act like a lid, preventing exhaust and other pollutants from rising and dispersing. As a result, air quality can worsen even in winter.
At the same time, cold temperatures slow the evaporation of odor-causing chemicals, so fewer strong or unpleasant smells reach your nose — which can make the air seem fresher than it really is.
Whether or not it’s cleaner, the crispness of winter air — which tends to be dry rather than humid — can make breathing feel more refreshing. And the simpler mix of scents and slower outdoor chemical activity can create a sense of clarity that smells great and feels restorative.
Kristina Wright
Writer
Kristina is a coffee-fueled writer living happily ever after with her family in the suburbs of Richmond, Virginia.
Dreams are a universal human experience, yet they remain one of the most mysterious aspects of sleep. Researchers continue to explore why we dream, how long dreams last, and what we dream about — work that has uncovered some key insights, from the frequency of nightmares to the science behind lucid dreaming.
While some dreams feel empowering and exciting, others can be stressful or scary, but all of these experiences are typically side effects of healthy brain activity during sleep. Below are five facts that explore the fascinating world of dreams.
Most People Have Recurring Dreams
An estimated 60-75% of adults have experienced at least one recurring dream in their lifetime. According to psychology professor Antonio Zadra of the University of Montreal, most recurring dreams are unpleasant, often appearing during periods of real-life stress and dissipating once the stressor is resolved. Many researchers agree that dreaming helps us process emotions and work through unresolved stress, which is why negative recurring dreams may fade once the underlying issue is addressed.
If you’ve ever dreamt about your teeth falling out or being late to a class, you’re not alone. Certain kinds of recurring dreams are common among both adults and children, including falling, being chased, flying, losing teeth, being naked in public, being late, and taking a test. Psychologists speculate that these themes may represent core emotions that emerge at certain points in our lives. For instance, dreams about falling may indicate feelings of anxiety or instability and may arise during times of transition or high stress.
These themes occur across age groups, though the content often differs. Children’s dreams about being chased, for example, often feature the dreamer being pursued by monsters, wild animals, witches, or ghoulish creatures. By contrast, adults may experience being chased by more grown-up concerns such as burglars, strangers, mobs, or shadowy figures.
We Spend an Average of Two Hours Dreaming Each Night
Have you ever had a dream that felt like it lasted an entire day? Chances are it was only minutes long in reality, but those short scenes add up to around two hours of total dreaming per night.
Most dreams occur during rapid eye movement (REM) sleep, a stage that lasts between 10 minutes and an hour. Because our bodies cycle through sleep stages multiple times per night, we experience an average of four to six REM periods nightly. The first REM period after falling asleep lasts for only a few minutes, so those early dreams are exceptionally brief.
Toward the early hours of the morning, REM periods lengthen, lasting around half an hour, with a maximum length of one hour. Still, it’s rare for a single dream to span that long. Instead, REM periods usually consist of multiple shorter dreams.
Scientists use several methods to determine the lengths of dreams. One of the most common is electroencephalography (EEG), which measures brainwaves, allowing researchers to determine when participants are in REM sleep and dreaming. Similarly, fMRI (a type of brain imaging) measures blood flow, showing which areas of the brain are active during dreaming.
Another valuable tool is dream reporting, wherein scientists wake participants after a timed REM period and ask for dream reports, linking REM duration with perceived dream length. Combining those methods helps scientists determine roughly how long the average sleeper dreams.
Dreams can also occur during the other stages of sleep. There are four sleep stages in total: REM and three phases of non-rapid eye movement (NREM) sleep, N1, N2, and N3. While research indicates NREM dreams do occur, they’re less frequent and much shorter than REM dreams — think of them as a fleeting thought rather than a complex dream featuring storylines and details. That’s because the brain is much more active during REM sleep than during NREM, leading to more vivid dreams.
Because NREM dreams are short, incomplete thoughts rather than full narratives, they don’t account for much of our total dreaming time. This is why researchers use REM duration as a proxy for estimating total dream time, leading to the estimate of roughly two hours of dreaming per night, according to the National Institutes of Health.
Have you ever wondered why you don’t fall out of bed during an especially animated dream? The brain has a special protective mechanism to keep us safe and sound: It temporarily paralyzes us during REM, the stage of sleep involving vivid dreaming. When we’re in REM sleep, many physiological changes can occur, including increases in blood pressure, heart rate, brain activity, and breathing. In fact, most neurons in our brains fire just as much in deep sleep as they do when we’re awake — and sometimes even more.
This allows for very emotional, intense, and elaborate dreams during the REM cycle. Of course, our brains must also protect our bodies from acting out these scenarios. To accomplish this, the pons (the part of the brainstem that handles unconscious processes) and the rostral ventromedial medulla (the part that can block or amplify pain signals sent to the spinal cord) work together to suppress skeletal muscle tone, a process known as muscle atonia.
That near-total paralysis of voluntary muscles turns physical readiness off during REM sleep, allowing us to sleep soundly. During NREM sleep, muscle tone is reduced but not eliminated, though NREM dreams are typically less vivid and less physically demanding.
Bad dreams can take different forms: Nightmares are more common and generally less intense, while night terrors can be severe and disruptive.
Nightmares are often related to real-life stressors, such as a child’s fear of separation or an adult’s job insecurity. But they may also be fictional and unrelated to waking events. An estimated 20-30% of children and 5-8% of adults experience frequent nightmares, which often occur in the second half of the night, during those longer stretches of REM sleep.
A night terror is far more intense than a nightmare, often startling the dreamer awake. Usually occurring early in the night during NREM sleep, night terrors are caused by the overstimulation of the central nervous system, leading to sudden waking, crying, screaming, confusion, and other unpleasant reactions. Despite those intense responses, night terrors thankfully have a limited recall period, whereas nightmares are more often remembered.
Fortunately, night terrors are relatively uncommon, especially in adults. An estimated 1-6.5% of children (1 to 12 years of age) and 1-4% of adults experience night terrors. They’re most common in toddlers and young kids, but as the nervous system matures, night terrors typically fade without treatment, making them rare in adults.
Realizing we’re in a dream — known as lucid dreaming — is a phenomenon that approximately 51% of people have reportedly experienced at least once. Though the ability to control our dreams was once considered a myth, in 1981, a study conducted by psychophysiologist Stephen LaBerge of Stanford University established the scientific validity of lucid dreaming. While in REM sleep and dreaming, study participants were able to perform eye movement patterns that LaBerge had previously asked them to perform, demonstrating that some dreamers can control their actions.
Researchers continue to investigate why we lucid dream. Though many theories are actively being explored, cognitive neurophysiology expert Nicolas Zink, author of the 2015 paper “Theories of Dreaming and Lucid Dreaming,” believes the best explanation is the protoconsciousness theory, which proposes dreaming during REM sleep represents a fundamental state of brain organization that supports waking consciousness and maintains emotional balance.
For many people, the appeal of lucid dreaming lies more in “how” to experience it than “why.” Lucid dreaming can often be pleasant — from traveling the world to soaring through the sky — so some people seek to induce it. One of the most popular techniques is “Mnemonic Induction of Lucid Dreams” (MILD), developed by LaBerge.
The process begins with accurate dream recall upon awakening during the night. According to LaBerge, before falling back asleep, the dreamer must focus on the dream as they repeat, “Next time I’m dreaming, I will remember that I am dreaming.” Repeating this phrase while visualizing the dream may help the dreamer reenter it and become lucid.
Rachel Gresh
Writer
Rachel is a writer and period drama devotee who's probably hanging out at a local coffee shop somewhere in Washington, D.C.
The world’s biggest birds can be ranked in various ways: by weight, height, or wingspan — and then there’s the question of whether or not to include flightless birds. Penguins, for example, are quite bulky, but no penguin species can fly. Conversely, the wandering albatross is an exceptional flyer with an immense wingspan of up to 12 feet, but it weighs only about as much as a human toddler.
We decided to look at the world’s avian heavyweights by mass alone, regardless of whether or not they can fly. Our only requirement is that the bird in question must still exist, as extinct species are a whole different ball game.
Take, for example, the Vorombe titan, a species of elephant bird that once lived in Madagascar before going extinct some 1,000 years ago. That colossal bird stood as tall as 9 feet 10 inches and had an estimated weight of around 1,800 pounds — far larger than any bird living today.
Here, then, are seven of the heaviest birds roaming the Earth today, ranked in ascending order by mass, from impressively large to heavier (and taller) than an average human.
The wild turkey holds the distinction of being among the heaviest flying birds in the world. Unlike their domesticated counterparts, wild turkeys are surprisingly agile and swift fliers, despite reaching weights in excess of 25 pounds.
According to the National Wild Turkey Federation, the largest wild turkey on record — harvested by a hunter in Kentucky — weighed a mighty 37.61 pounds, about twice the size of the turkeys typically placed on a Thanksgiving table. Wild turkeys manage to gain all that bulk through opportunistic foraging and a varied, omnivorous, and protein-rich diet that includes berries, acorns, nuts, seeds, insects, and small reptiles.
Africa’s kori bustard is the world’s heaviest flying bird, with males weighing up to 40 pounds (females are much smaller, averaging 11 to 13 pounds). Unsurprisingly, the kori bustard expends a lot of energy to fly, so it remains on the ground most of the time and only takes to the air when necessary — typically to avoid predators.
When flight is required, the birds use their long legs to get a running start and take to the air with powerful wing beats (using their 7- to 9-foot wingspan) before transitioning to slower, steadier flaps once airborne. Keeping low to the ground, kori bustards typically land soon after taking off, normally within sight of their launch point.
The greater rhea is South America’s largest bird, standing up to 5 feet tall and weighing between 33 and 66 pounds. These birds, which are related to ostriches and emus, roam the grasslands and pampas of Argentina, Bolivia, Brazil, Paraguay, and Uruguay. They’re completely flightless, using their long, powerful legs to outrun predators such as cougars and jaguars.
Greater rheas have unusually long wings for flightless birds. While useless for flight, the wings are important for balance and for changing direction while running at speeds of up to 40 miles per hour. The birds are also excellent swimmers, using their legs, necks, and wings to cross rivers and marshes with surprising grace and ease.
The emperor penguin is the heaviest of all the penguin species. Adults stand at around 43 to 47 inches tall and can weigh as much as 100 pounds, though weights vary greatly by sex and season. During the brutal Antarctic winters, emperor penguins need all the blubber they can muster to insulate themselves from the extreme cold, and they huddle together in tightly packed groups to keep warm.
Of course, like all penguin species, they’re flightless — and they’re not particularly adept at walking, either, often displaying a comical clumsiness on land. But emperor penguins excel in the water: They’re exceptional swimmers, capable of diving deeper and for longer than any other bird.
The emu is Australia’s largest native bird and the second-tallest bird in the world (but third in terms of overall bulk). They can reach heights of more than 6 feet, and the largest specimens weigh as much as 120 pounds.
Unlike greater rheas, emus have tiny vestigial wings that are only about 7 inches long. Flying is certainly not an option, making running their way of life. Using their powerful legs, emus are capable of sustained speeds of at least 30 miles per hour and even faster short sprints — with each stride nearly 9 feet long. Emus use their strong legs, heavy feet, and sharp nails to defend themselves from predators, while also relying on their impressive agility when surprised — they can jump 7 feet straight up to escape trouble.
Weighing up to 170 pounds and reaching heights of 6 feet, the southern cassowary is the second-heaviest bird in the world. Found in Australia, Indonesia, and Papua New Guinea, these massive flightless birds have a distinct appearance, with bright blue faces, red wattles, and a prominent, helmet-like casque atop their heads.
These are shy, solitary birds, living alone in rainforests and only coming together when it’s time to breed. While not inherently aggressive, they are territorial and will attack if provoked or angered — and when a cassowary gets mad, it doesn’t hold back.
Widely considered the world’s most dangerous bird, the southern cassowary has incredibly powerful legs and a 4-inch, dagger-like claw on its middle toe. When threatened, they’re capable of delivering devastating kicks and slashes, including to humans, although attacks are rare and fatalities even rarer.
Common Ostriches
The common ostrich is the undisputed heavyweight of the avian world. Adult ostriches typically weigh between 250 and 300 pounds and can reach heights of up to 9 feet. (Females tend to be shorter, closer to 6 feet.) Native to Africa, these birds are well-suited to the continent’s dry, open landscapes, having sacrificed the ability to fly for incredible speed on land.
Ostriches are the world’s fastest animals on two legs, capable of sprinting at 43 mph and maintaining a cruising speed of 30 mph for 10 miles. An ostrich’s kick, meanwhile, is so powerful it can kill a lion.
Being such big birds, they also lay big eggs — the biggest eggs of any living animal, in fact. The largest ostrich egg ever recorded weighed a whopping 5 pounds 11 ounces.
You may have heard tell of ostriches burying their heads in the sand when they’re scared, but that’s just a myth. That common misconception likely arose because ostriches dig shallow holes as nests for their eggs, and when they use their beaks to turn the eggs, it appears as though they’re sticking their heads in the sand.
Tony Dunnell
Writer
Tony is an English writer of nonfiction and fiction living on the edge of the Amazon jungle.
If you were to guess Boston cream pie was invented in Boston or Nashville hot chicken originated in Nashville, you’d be correct. But sometimes, it’s not so obvious that a food is named after its place of origin. Examples of this include one of the most popular cheeses on the planet, a fruit found in every produce section, and a common source of plant-based protein. Here’s a look at six foods you may not have realized are named for the places they came from.
Long before it was produced in Vermont or Wisconsin, cheddar cheese originated in the English village of Cheddar, located in the county of Somerset about 145 miles west of London. The cheese’s origins date to the 12th century, when it was stored in caves in Cheddar that helped maintain an ideal humidity and temperature for maturation. The cheese became popular by 1170 — a year in which Baron Alured de Lincoln is recorded as buying 10,240 pounds of cheddar (though the records refer to it as just “cheese” from the Somerset region).
So when did people start calling it “cheddar cheese”? The Oxford English Dictionary cites the earliest written record of the term dating to 1659. Indeed, it was a common custom at the time for English cheesemakers to name products after their place of origin.
English speakers typically pronounce the “lima” in “lima beans” as LY-ma, which is different from how they’d say “Lima, Peru” (LEE-ma). That may be why people in the U.S. don’t often realize lima beans are named after Peru’s capital city.
What English speakers know as lima beans refers to a native Andean legume called “pallar” in the region. The name “lima beans” caught on after the 16th-century Spanish conquest of the Incas. Peru’s European rulers exported the local legumes to the United Kingdom and later the United States, contained in packaging that stated they were made in Lima, Peru. As you may suppose, that earned the legume the name “lima beans” in those English-speaking nations.
It’s a common myth that Fig Newtons were named for the English polymath Isaac Newton. In reality, the name of the fig-filled treat is a nod to its place of origin. The cookie was first manufactured in 1891 by the Kennedy Biscuit Company in Cambridge, Massachusetts.
At the time, the company liked to name its products for nearby communities (e.g., Shrewsbury biscuits, Beacon Hill cookies, etc.). So plant manager James Hazen opted to call this new cookie the “Newton” after the Boston suburb 6 miles away. In 1991, the city of Newton held a 100th anniversary celebration of the Fig Newton to honor that etymological connection.
The name “jalapeño” translates to “of Jalapa” in Spanish. Jalapa — or Xalapa, as is the more formal spelling — is the capital city of the Mexican state of Veracruz, whose name comes from the Nahuatl word “xalapan,” meaning “sand by the water.”
Though jalapeño peppers aren’t commonly grown in Xalapa, the city is where they were widely commercialized thanks to a food pickling business there. Known as La Jalapeña, the business was known for its canned goods, chorizo, and chilies. In 1922, it received a patent for pickled chilies, and thus began the successful worldwide commercialization of these spicy peppers. They were exported far and wide, and the term “jalapeño pepper” — inspired in part by the packaging, which read “La Jalapeña” — was coined in the U.S.
The Waldorf salad is named neither for a country nor a town, but rather for the Waldorf-Astoria Hotel in New York City. This isn’t to be confused with the still-standing Waldorf Astoria located on Park Avenue — instead, it refers to the historic hotel that was razed in 1929 so the Empire State Building could be built at the site.
It was at that world-famous establishment that the leafless salad was created by Oscar Tschirky, a former busboy and popular maître d’hôtel who was a bit of a celebrity in his own right. In 1896, Tschirky published The Cook Book by “Oscar” of the Waldorf, which contained recipes he’d crafted in the hotel kitchen. The book included a recipe for the hotel’s namesake salad, though at the time the dish contained only apples, celery, and mayonnaise. Grapes and nuts were added later, sometime before the late 1920s.
Wiener schnitzel has nothing to do with sausage, hot dogs, or any other foods that English speakers commonly refer to as “wieners.” Rather, the “wiener” in the name is German for “Viennese,” as “Wien” is the German name for Austria’s capital city, Vienna. “Schnitzel,” meanwhile, is the word for the breaded veal cutlet that serves as the dish’s primary component. Though variants of this dish have existed since the late 18th century, the term “wiener schnitzel” only dates to the 1850s, according to the Oxford English Dictionary.
That said, the name “wiener” as a nickname for hot dogs also has Viennese origins. In that case, the word refers to a Viennese-style sausage called “wienerwurst.” The culinary nickname “wiener” was coined in the United States no later than 1880, and it originally referred specifically to sausages from Vienna. But by the 1930s, Americans had begun saying “wiener” to describe hot dogs and other sausages, regardless of whether they came from Vienna.
Bennett Kleinman
Staff Writer
Bennett Kleinman is a New York City-based staff writer for Inbox Studio, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
In the U.S. and many other parts of the world, students are graded on an “A” to “F” scale, seldom questioning why one letter is missing. “E” isn’t found on most modern report cards — but why?
This isn’t a simple oversight, but rather the result of centuries of evolving grading practices. By tracing the history of student evaluations, we can uncover why the letter “E” quietly disappeared from report cards across the United States.
In 1785, Yale University President Ezra Stiles introduced what is believed to be the first proper grading system in the American colonies. That four-point scale, written in Latin, comprised the following categories: optimi (best), second optimi (second best), inferiores boni (less good), and pejores (worse).
By 1837, mathematics and philosophy professors at Harvard had adopted a 100-point grading system, though it looked different than the one we use today. The modern 100-point scale features corresponding letter values and typically looks like this: “A” is 90-100, “B” is 80-89, “C” is 70-79, and “D” is 60-69.
But Harvard used a strictly numerical scale without any corresponding letters, and the ranges were as follows: 100 (perfect), 75-99, 51-74, 26-50, and 25 or below. The average grades followed a bell curve, with most students hovering around 50. Scores on both extremes (above 75 and below 25) were rare.
Numerical grades gained traction across the country, and by the early 20th century, this had become the most common grading method. Teachers at schools of all levels began assigning and recording grades using this 100-point scale, and, for the first time, modern grades inched toward a universal system.
Although numerical grading was the most popular method of assessing students from the mid-1800s through the early 1900s, another system emerged and evolved alongside it: letter grades. Teachers at Mount Holyoke College in Massachusetts began using letter grades as early as 1884. By the 1896-1897 school year, Mount Holyoke had become the first U.S. school to have documented use of a uniform letter grading system.
Letters were assigned to numerical ranges, but those ranges differed from Harvard’s. Instead, the system looked similar to what’s used in schools today: An “A” grade (excellent) was 95-100, “B” (good) was 85-94, “C” (fair) was 76-84, “D” (barely passed) was 75, and “E” was a failing grade, though it didn’t have a corresponding number.
The following year, Mount Holyoke altered its grading system, adding an “F” for the first time. The numerical ranges were adjusted to include the new letter, and the grading scale spanned “A” through “F.” Interestingly, though, the college also retained the “E,” thus increasing its grading scale from five categories to six. But this move proved unpopular, and other schools began removing the “E” grade.
Experts have several theories about why “E” began to fade, including a push for a more efficient system. By the early 20th century, educators believed that fewer grading categories would streamline the process for teachers and simplify the system.
Isidor Edward Finkelstein, author of The Marking System in Theory and Practice (1913), was influential in this line of thought. Specifically, he and his colleagues believed that five divisions was the optimal number on a marking scale. That meant the modern grading system needed to drop one letter. As researchers Kimberly Tanner and Dr. Jeffrey Schinske wrote in their article “Teaching More by Grading Less (or Differently),” the “E” grade was an easy target because “F” so clearly stood for “fail.”
The article cites another issue with “E”: Some students assumed it stood for “excellent” despite it marking unsatisfactory grades, making it the most misunderstood letter out of the bunch. It was a perfect storm — “F” was a clearer stand-in for “fail” while “E” confused and crowded the grading scale. By the 1930s, “E” grades had disappeared from American schools.
During the latter half of the 20th century, the letter grading system “A” to “F” (excluding “E”) became standard across the country. But that doesn’t mean there aren’t exceptions. In elementary schools, a different letter scale is often seen on report cards, especially for younger students in kindergarten through third grade.
For instance, grades may include “D” (“developing”), “E” (“expanding”), “S” (“satisfactory”), and “N” (“needs improvement”). As these grading systems vary by school district, the “E” may also mean “excellent” or “exceeding expectations.” So no, “E” hasn’t been entirely banished from the modern education system, but it has undoubtedly lost its place in the standard lineup of U.S. letter grading, remaining a curious omission from student report cards across America.
Rachel Gresh
Writer
Rachel is a writer and period drama devotee who's probably hanging out at a local coffee shop somewhere in Washington, D.C.
Blue jeans have long been a staple of wardrobes around the world, worn by everyone from construction workers to rock stars. Certain looks, from James Dean’s cuffed 1950s denim to Steve Jobs’ faithful 501s, have even become ingrained in our cultural imagination. The classic color of jeans now feels essential to their identity, but it wasn’t originally chosen for stylistic reasons. So why are jeans typically blue?
Long before Levi Strauss patented riveted work pants and created the modern blue jean in 1873, laborers across medieval Europe also wore trousers that were dyed blue, first with locally grown woad (a plant in the mustard family) and later with imported indigo. Those early work pants weren’t jeans as we know them today, but they did set the stage for our modern version — particularly the color.
Blue dye wasn’t used simply because it was available — it proved handy for other reasons, too. The dark color hid the grime that came with sweating in the sun or toiling away in soil, and indigo’s unique properties made it a particularly durable choice. As fabric comes out of the dye vat, exposure to the air causes indigo to oxidize and solidify, forming a thin coating around the fibers. This helps indigo resist fading far better and for far longer than most other natural dyes.
There may have been another, subtler benefit to dyeing trousers blue: The indigo plant has long been valued in traditional Chinese and Indian medicine for its antibacterial properties, and it’s possible that indigo-dyed garments resisted odor slightly better than undyed cloth, an obvious advantage at a time when washing clothes was infrequent.
By the mid-19th century, cotton pants had become standard workwear for miners, railroad workers, and other laborers in the American West, and the textile industry that supplied them was booming. American mills such as the Amoskeag Manufacturing Company in Manchester, New Hampshire — then one of the largest textile producers in the world — were reliably producing indigo-dyed cotton twill known as denim, a proven workhorse material that had achieved popularity throughout Europe and the Americas.
One of Amoskeag’s customers, a San Francisco dry-goods merchant named Levi Strauss, stocked the company’s blue denim fabric, and Reno tailor Jacob Davis purchased it. When Davis began reinforcing work trousers with metal rivets, he did so on pants made of both undyed duck canvas and blue denim. His customers gravitated toward the most practical color: dark blue.
Eventually, synthetic indigo, which was developed in the 1890s, made blue denim cheaper and easier to produce than ever before. By the 1900s, undyed workwear was all but discontinued, and blue became the de facto dye for work pants.
Not only was blue denim standard workwear by the 20th century, but it also became a cultural juggernaut. Hollywood’s cowboys of the 1930s cemented blue jeans as symbols of rugged Americana; soldiers wore denim abroad during World War II, spreading the look’s popularity overseas; and teenagers in the postwar years embraced the garment as a uniform of rebellion. Women who had worn denim in wartime factories also continued reaching for it long after the war ended.
By that point, blue jeans’ ability to retain their sturdiness even as the dye subtly faded at creases and edges had become a signature style. Today, jeans are a timeless wardrobe staple — around 3 billion pairs were sold in 2022 alone — and though they’re available in many colors, the classic pair of jeans will always be blue.
Nicole Villeneuve
Writer
Nicole is a writer, thrift store lover, and group-chat meme spammer based in Ontario, Canada.
You probably reach for some kind of kitchen utensil every day, whether a fork, spoon, spatula, or cheese grater. These tools seem pretty straightforward, but some conceal clever uses that may go unnoticed. For instance, have you ever used the little loop on your vegetable peeler? Or measured pasta with your serving spoon?
Some of these features were designed intentionally, while others have been happily found to be useful in more ways than one. Here are a few of the ingenious hidden features found on everyday kitchen tools.
Kitchen shears are an often-overlooked utensil. A good pair can do more than just tear open food packaging in a pinch; they’re an easy way to cut through meat, chop vegetables, finely snip herbs, and even slice up a rustic pizza.
But that’s not all they do. Take a look at the blades on your shears — if one edge has a crescent-shaped cut-out, that’s a bone notch, meant to help stop slippage and cut through bones in poultry or fish.
There’s another neat feature, too: On the inside of the handles, shears often have serrated metal teeth that form a circle, which can be used to twist open a stubborn jar or twist-top bottle or even to crack shelled nuts or shellfish.
That hole in the center of your pronged pasta server isn’t just for drainage when scooping pasta out of boiling water. Depending on the spoon’s design, the opening may also work as a measuring tool for approximating a single serving of dry spaghetti noodles.
Keep in mind, however, that while the latter can be handy, it’s not universal. Pasta servers aren’t made to one specific standard, so the size and shape of the center hole can vary widely. Serving sizes vary too; a standard package of store-bought spaghetti considers a portion to be roughly 2 ounces (about a quarter-sized diameter when held in a bunch), but no one spoon will reliably measure that exact amount every time.
Many vegetable peelers are adorned with a small, barely noticeable notch. On older-style straight peelers, it can usually be found right at the top, while on newer, wider Y-shaped peelers, it’s usually on the side.
This notch is designed to remove potato eyes or other vegetable and fruit blemishes without having to switch to a paring knife. It’s a small but useful detail that can save time and frustration — and maybe a nicked fingertip or two.
It’s simple to use: Place the notch over the unwanted spot and scrape or scoop it away. This works on potatoes, carrots, or any firm fruit or vegetable with imperfections, and you barely even have to break your peeling stride.
Many people associate box graters so strongly with cheese that they simply refer to them as cheese graters. And, yes, a box grater is primarily used for grating cheese. One side grates the long, thick shreds ideal for melting (the most commonly used side), while another shreds cheese or vegetables into finer strands, and another cuts thin slices. But what about those tiny, nearly imperceptible holes that feel almost dangerous to the touch?
That’s known as the rasp-style grater, and while it can indeed be used for cheese — hard varieties such as parmesan work best — it truly shines when zesting citrus. You’ll want to press the rind or other food lightly against this side and move it in short strokes, taking care not to scrape your knuckles. It’s also excellent for grating tough spices such as cinnamon and nutmeg.
Many cutting boards have wide handles that make it easier to hang them up or pull them out from inside a cupboard, but they also work as a funnel for transferring chopped food or food scraps. Enterprising home cooks have figured out that if you slide the handle opening over the edge of the counter, you can easily push scraps through the hole and into a garbage or compost bin below — no more fluttering onion skins escaping as you carry your board from point A to point B.
Additionally, you can lift the cutting board over to your cooktop and safely slide prepped ingredients into a pot or pan. The board can also act as a splashguard in the process if the water or cooking oil is already hot.
Of course, not every cutting board has a handle with a hole, nor will the holes always be the ideal shape and size for moving food or waste through. If yours does, however, you may just have a new way to use one of your trustiest tools.
Colanders aren’t just for draining pasta — their perforations can also be used to quickly and neatly strip herb leaves from their stems. Simply feed the stems through the holes from the inside to the outside, pull, and the leaves slide off with minimal effort right into the colander.
You’ll want to use the medium-sized holes on your colander and make sure you’re not using stems or leaves that have gone too limp or woody. It makes for a little bit of extra cleanup, but it may be worth it to avoid taking up drawer space with a one-use tool such as a specialized herb stripper.
We’ve all been there: standing over a tomato sauce simmering away on the stove with a trusty wooden spoon in hand. But when you’re done stirring and don’t have a specialized spoon rest, where do you rest the spoon without leaving a mess on the counter or dirtying an extra dish?
As it turns out, many saucepan handles solve this problem with the small hole at the end of the handle. While that hole is primarily meant for hanging the pot on a hook or peg, it can also double as a spoon rest with the right setup.
Simply stick the bottom end of the spoon’s handle into the hole, leaving it slanted up toward the pot so any drips fall back into the sauce and not all over your counter. Just be cautious and give the spoon a quick check for any heat transfer before grabbing it to finish your dish.
Nicole Villeneuve
Writer
Nicole is a writer, thrift store lover, and group-chat meme spammer based in Ontario, Canada.