Original photo by Charles T. Peden/ Shutterstock

April showers bring May flowers… and in some cases, many more flowers than expected. Superblooms — aka the massive, all-at-once bloom of millions of wildflowers in the same area — create vibrant floral tapestries that cover hillsides, valleys, and even deserts. Superblooms are a remarkable reminder of Mother Nature’s unpredictability, often drawing in crowds by the thousands to witness rare seas of flowers. Let these six facts about wildflower superblooms grow your knowledge about this floral phenomenon.

Wildflower superbloom in Southern California.
Credit: MODpix/ Shutterstock

Superblooms Are Years in the Making

Superblooms are an astounding sight to see — valleys, meadows, and deserts filled with the burst of thousands upon thousands of blooms. But much of that awe comes from the fact that superblooms are generally unpredictable; a successful superbloom requires perfectly timed weather conditions. The burst of flowers typically occurs in more arid regions of the country — think some areas of Southern California — when warming spring temperatures pair with an adequate amount of precipitation during the previous fall and winter months. Billions of wildflower seeds, which have sometimes lain dormant underground for years waiting for the right conditions, emerge all at once, creating the phenomenon of thousands of buds opening simultaneously.

However, not every winter produces enough water to trigger a superbloom. Years with soaking rains, especially after years of drought, offer the best chances. Even with enough precipitation, a potential wave of blooms is up against other challenges. Temperatures that are too hot or too cold can impede germination or stunt the plants that do sprout, while herbivores in search of food can feast on vulnerable seedlings before they can debut their buds. And if the flowers bloom before pollinators are ready to emerge, there can be fewer viable seeds for future plants.

Death Valley wildflower superbloom.
Credit: saraporn/ Shutterstock

Some Superblooms Follow a Pattern

Death Valley is one of the hottest places on Earth (once reaching a record temperature of 134 degrees Fahrenheit in 1913), and also the driest spot in North America — two characteristics that don’t seem particularly flower-friendly. Yet Death Valley is home to a regularly occurring superbloom that fills the desert with millions of blossoms, featuring species such as the yellow desert sunflower and the pink desert sand verbena. On average, the area receives a scant 2 inches of rain per year, but in years with more frequent precipitation and few damaging windstorms (which can batter delicate plants), superblooms are more likely. Death Valley’s conditions seem almost timed, combining perfectly about once a decade, with the most recent superblooms occurring in 2016, 2005, and 1998 — and even then, the phenomenon is short-lived, often lasting just a few weeks until temperatures tick upwards.

Close-up of a poppy field under Mount Tsukuba.
Credit: Alexander Pyatenko/ iStock

Any Big Bloom Can Be Called a “Superbloom”

The word “superbloom” has come to mean a massive bloom of the same flower species all at once, but the term doesn’t have a scientific basis. Instead, it’s a phrase that emerged among news agencies to describe the phenomenon to the general public. Many researchers agree that since there’s no scientific definition of what does (and doesn’t) count as a superbloom, a bloom of any size could technically be described as one. According to Richard Minnich, an earth sciences professor, “it’s all in the eye of the beholder.”

Beautiful apricot flowers and super moon with the blue sky background.
Credit: loveallyson/ Shutterstock

Some Superblooms Can Be Seen From Space

Acres of blooming fields can overwhelm the senses from the ground, making it hard to see just how large a superbloom might be. Amazingly, those floral blooms can sometimes be seen from space, giving Earth-dwellers a chance to appreciate their magnitude with help from satellite imagery. In 2019, the Landsat 8 satellite used by NASA and the U.S. Geological Survey recorded an orange poppy superbloom in Southern California’s Walker Canyon. The photos, taken 480 miles above Earth, show miles of hillside covered in the state’s official flower, along with hundreds of parked cars from visitors who flocked to the area.

Southern California wildflower superbloom.
Credit: mikeledray/ Shutterstock

Superblooms Can Disappear As Quickly As They Happen

Predicting when, or if, a superbloom might happen is difficult, but so is determining how long one might last. In California and other Western states, superbloom season generally begins in late winter, with lower elevations seeing blooms emerge between mid-February and mid-April. Higher elevation areas tend to remain cooler for longer, meaning superblooms in those regions are more likely to occur between April and July. When flowers do emerge, there’s no guarantee they’ll stay around long. Depending on the species, some wildflower superblooms can last upwards of two months, but weather conditions can quickly shrivel the show — as in March 2015, when a heat wave in the Mojave Desert ended a poppy bloom after just two weeks.

A person walking a dog amidst the flowering poppies.
Credit: Simone Hogan/ Shutterstock

You Can Help Protect Superblooms

Massive wildflower blooms can attract thousands of visitors hoping to snap the perfect picture and experience the amazing view, but steady streams of admirers can actually harm the potential for future superblooms. Floral ecosystems are fragile and can become stressed from large numbers of people who trek off-trail and through the blooms. Walking through the flowers can also cause the spread of invasive plants if seeds are carried in on shoes and gear, and heavy foot traffic can trample blooms and keep them from dropping seeds that would fuel the next generation of flowers. Many botanists say it takes just two steps to ethically enjoy current flower explosions (and those in years to come): Stay on the trail and take with you only photos and memories of the wildflower wonder.

Nicole Garner Meeker
Writer

Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.

Original photo by Dabarti CGI/ Shutterstock

The world runs on deoxyribonucleic acid, or DNA. Apart from being an excellent spelling bee word, DNA also provides the genetic instructions for the growth, function, and reproduction of all living organisms and many viruses. Two polynucleotide chains coil around one another and form the double helix DNA structure that makes you, you. Despite DNA being the microscopic engine that makes life on Earth possible, humans have only known about its existence for about 150 years. In that time, scientists have discovered a lot about these genetic building blocks — so much so that doctors can now use gene therapy to treat cancer, while biologists ponder whether to bring back entire extinct species, such as the woolly mammoth. These six facts explore the incredible science of DNA: its discovery, its function, and its impact on human history.

scientist sketching DNA structure.
Credit: Wichy/ Shutterstock

Human DNA Contains 3 Billion Base Pairs

Base pairs form the rungs on the twisted DNA ladder, in which each “rung” is composed of nucleotides containing the nitrogen bases adenine (A), thymine (T), guanine (G), and cytosine (C). Because adenine always pairs with thymine, and guanine with cytosine, DNA chains are often expressed as just a series of letters (e.g., “AGGTCCAATG” is an expression of 10 base pairs). Human DNA contains 3 billion of these base pairs stretched across 23 pairs of chromosomes, each with different instructions.
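For readers who want to see that pairing rule in action, here is a minimal Python sketch (not part of the original article, just an illustration) that builds the complementary strand for the 10-letter example above.

```python
# Complementary-base rule described above: A pairs with T, and G pairs with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(sequence: str) -> str:
    """Return the base-paired partner strand for a DNA sequence."""
    return "".join(PAIRS[base] for base in sequence)

print(complementary_strand("AGGTCCAATG"))  # prints TCCAGGTTAC
```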

Of the total 46 chromosomes, we receive half from our mother and half from our father. The nucleus of every somatic cell (i.e., not sperm or eggs) in the human body contains these chromosomes, but each cell accesses only the genetic instructions relevant to its particular function (eye color, for example, is restricted to a certain section of chromosome 15). DNA usually codes for a protein or group of proteins, which build cells that form living tissue, which in turn coalesces into organs that, put together, wind up as you.

Captive Chimpanzee in an outdoor habitat.
Credit: Crystal Alba/ Shutterstock

Humans Share 98.8% of Their Genome With Chimpanzees

Sit a human next to another great ape, such as a chimpanzee or bonobo, and the differences are pretty stark — but our DNA suggests otherwise. The closest living genetic cousins to Homo sapiens, chimpanzees and bonobos each share about 98.8% of our DNA sequence. The similarity comes from the fact that humans shared a common ancestor with these primate species around 9.3 million to 6.5 million years ago, which is basically last week in the context of Earth history. But as humans, chimps, and bonobos evolved separately, the differences slowly grew, with each species adding its own divergent DNA. Although a 1.2% difference doesn’t seem like a lot, small changes in DNA can have major consequences. After all, across 3 billion base pairs, that still amounts to roughly 35 million differences between humans and chimps. It’s also worth noting that even if we share genes with chimps, those genes can be expressed differently, with some turned up high in humans, while the same gene can be a low hum in a chimp or bonobo. All of these differences combined are what separate humans from their primate cousins — and from all other living things, for that matter. In fact, humans share around 60% of their genes with bananas, and that’s something any self-respecting primate can get behind.

Johannes Friedrich Miescher (1844 – 1895).
Credit: GL Archive/ Alamy Stock Photo

In the 19th Century, DNA Was Called “Nuclein”

Although our current understanding of DNA really started to take off with the description of the double helix structure in 1953, scientists had already known about the existence of DNA for nearly a century by that time. Swiss chemist Johann Friedrich Miescher discovered DNA in 1869, although arriving at that discovery required some less-than-savory science. At the time, Miescher was studying white blood cells, which fight infections and diseases. Although notoriously tricky to extract from a human’s lymph nodes, white blood cells could be found in abundance on used bandages. So Miescher traveled to local health clinics, took their used bandages, and wiped off the pus and grime. He then bathed the cells in warm alcohol to remove the lipids and used enzymes to eat through the proteins. What was left behind was a grayish substance that Miescher correctly identified as a previously unknown biological material, which he called “nuclein.” In the early 1880s, German physician Albrecht Kossel discovered the substance’s acidic properties as well as the aforementioned nitrogen bases, and by the end of the decade, nuclein was renamed the more accurate “nucleic acid.”

Molecule of DNA in a double helix.
Credit: Kateryna Kon/ Shutterstock

The Discovery of DNA’s Double Helix Is Controversial

On May 6, 1952, British chemist Rosalind Franklin oversaw the taking of the first photograph depicting DNA’s double helix structure, at King’s College London. Technically her 51st X-ray diffraction pattern, the image became known simply as “Photo 51.” Yet the 1962 Nobel Prize for the discovery of the molecular structure of DNA only honored her colleague Maurice Wilkins, along with English physicist Francis Crick and American biologist James Watson. So what gives?

In 1953, Crick and Watson published a paper revealing DNA’s twisting shape to the entire world, and only in the paper’s final paragraph did they mention that the discovery was “stimulated by a knowledge of the general nature of the unpublished experimental results and ideas” of two scientists at King’s College. In his own autobiography, Watson mentions that Franklin had no idea her results had been shared with him and Crick via her colleague Wilkins, and when she published her own paper later, the reception wasn’t nearly as earth-shattering.

Recent studies have suggested that Franklin was a true collaborator with Watson and Crick, despite receiving much less credit than her male colleagues. Franklin died of ovarian cancer in 1958 at age 37 (likely a result of her work with X-rays), which made her ineligible for the 1962 Nobel Prize. (By custom, the award was not handed out posthumously at the time, a rule that became codified in 1974.) Thankfully, history has slowly brought Franklin’s contributions to light, and in 2019, the European Space Agency even announced that its newest Mars rover would be officially renamed the “Rosalind Franklin” — a pretty stellar constellation prize.

Neanderthal skull in the foreground, with a human skull in the background.
Credit: Petr Student/ Shutterstock

All Humans Have Some Trace of Neanderthal DNA

DNA contains all the information that makes up all living things, but it also reveals interesting facts about our past. For one thing, all humans share 99.9% of the same genes, with the remaining 0.1% coming from substitutions, deletions, and insertions in the genome (an important tool for understanding diseases). We also know, thanks to DNA, that humans are much less genetically diverse than many other animal species. This low diversity suggests that all 8 billion humans alive today descend from a population of only about 10,000 breeding pairs of Homo sapiens, and that our ancestors likely experienced genetic bottlenecks (periods of serious population decline).

Amazingly, glimpses into our human lineage are also locked away in our DNA, because every human on the planet has some genetic material inherited from a completely different species of human — Neanderthals. Although Homo sapiens are the only human species on the planet today, the Earth has played host to upwards of 20 different human species over millions of years. For a time, Homo sapiens shared the planet with Neanderthals (Homo neanderthalensis) and even interbred with them. Remnants of those dalliances still live within our chromosomes, passed on from generation to generation. Although Europeans and Asians carry the largest percentage of Neanderthal DNA (around 2%), Africans also carry a small percentage (which wasn’t discovered until 2020).

Human genome DNA sequence.
Credit: Gio.tto/ Shutterstock

Scientists Didn’t Finish Sequencing the Complete Human Genome Until 2022

In October 1990, the Human Genome Project was formed to accomplish one goal: to sequence the entire human genome. Regarded as “one of the most ambitious and important scientific endeavors in human history,” the project essentially mapped a blueprint of human biology that greatly improved medicine and sequencing technology. It took 13 years to map the human genome (there are 3 billion base pairs, after all), but the project finally declared success in 2003: “We have before us the instruction set that carries each of us from the one-cell egg through adulthood to the grave,” said leading genome sequencer Robert Waterston at the time.

However, the announcement technically jumped the gun, because the Human Genome Project had only sequenced what was technologically possible, which came out to about 92% of the genome. The last 8% proved to be much trickier, because these regions contained highly repetitive DNA. Over the next two decades, advancements in DNA sequencing methods and computational tools allowed scientists to close the gap, and on April 1, 2022, the Telomere-to-Telomere consortium announced that all 3 billion base pairs had finally been sequenced.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by rarrarorro/ iStock

The United States Constitution is one of the most famous documents in history. The landmark document is the culmination of four months of vigorous debate at the 1787 Constitutional Convention in Philadelphia, Pennsylvania. The adoption of the finalized Constitution established the laws of the land for the start of President George Washington’s administration on March 4, 1789.

While some of its passages and amendments are familiar from history class, here are a few lesser-known facts behind the processes that shaped the backbone of the American republic.

Close-up of the Preamble to the US Constitution. It starts with the phrase We The People.
Credit: Joe Sohm/Visions of America/ Universal Images Group via Getty Images

The U.S. Constitution Is Among the World’s Shortest and Oldest Governing Documents

The original Constitution checks in at 4,543 words, including signatures, and expands to 7,762 words when adding in the 27 amendments. Sometimes cited as the shortest and oldest governing document in use by any major nation, it is outdone in brevity only by the Constitution of Monaco, which measures a trim 3,814 words, and in age by the Constitution of San Marino, which dates back to 1600.

Desk where the U.S. Constitution was signed.
Credit: Christian Offenberg / Alamy Stock Photo

An Assistant Clerk Engrossed the Original for $30

The job of officially putting the framers’ legalese to paper fell to Jacob Shallus, assistant clerk of the Pennsylvania State Assembly, who had limited time to scrawl the words across four pieces of parchment before the signing date of September 17, 1787. In his haste, Shallus made several mistakes, prompting him to insert words in some areas and scrape out others with a penknife, but the founding fathers were satisfied with the effort and paid him $30.

Independence Hall of Philadelphia.
Credit: SeanPavonePhoto/ iStock

The State of Pennsylvania Is Misspelled … Or Is It?

One of the Constitution’s apparent glaring errors was not the fault of the harried engrosser but that of Alexander Hamilton, who took it upon himself to categorize each group of signers by state and designated the host group as “Pensylvania.” Then again, the state’s name also appears that way on the Liberty Bell, evidence that the esteemed statesman was not so much careless as simply following an accepted spelling at the time.

Writing the Declaration of Independence.
Credit: Photo 12/ Universal Images Group via Getty Images

Benjamin Franklin Was the Convention’s Oldest Delegate

Plagued by gout and kidney stones, Benjamin Franklin reportedly was carried to the Pennsylvania State House on a chair held by four prisoners from the Walnut Street Jail. Despite his weakened condition, the 81-year-old statesman made his mark on the convention by brokering compromises between the warring factions. He also penned a powerful speech, delivered by Pennsylvania’s James Wilson, that urged his colleagues to set aside their doubts and formally approve the fruits of their labor.

The Signing of the Constitution of the United States.
Credit: GraphicaArtis/ Archive Photos via Getty Images

Only 39 of 55 Delegates Signed the Constitution

Despite Franklin’s impassioned push for unity, less than three-quarters of the delegates applied their signatures to the Constitution. Several left before the conclusion of the convention, while three who stuck around to the end — George Mason and Edmund Randolph of Virginia and Elbridge Gerry of Massachusetts — refused to accept the many compromises and endorse the document. Delaware’s John Dickinson also departed early, due to illness, but had fellow state delegate George Read “sign” his name in absentia.

Springtime in Providence, Rhode Island.
Credit: DenisTangneyJr/ iStock

Rhode Island Was the Final Original State to Approve the Constitution

Concerned about handing too much power to the central government, Rhode Island boycotted the Constitutional Convention altogether and earned a measure of infamy as the only one of the 13 original states not to be a signatory. Local support lagged even after New Hampshire became the ninth state to ratify the Constitution, thereby rendering it binding, in June 1788. It took an explicit threat from the Senate, which passed a bill prohibiting interstate commerce with Rhode Island in May 1790, for the holdouts to vote for ratification.

United States Bill of Rights document.
Credit: leezsnow/ iStock

The Bill of Rights Initially Consisted of 12 Amendments

Addressing the Constitution’s lack of individual protections, Congress approved 12 of the 19 amendments proposed by James Madison, before the states excised two more and ratified the 10 that became the Bill of Rights on December 15, 1791. One of the rejected articles, which establishes parameters for ever-increasing membership in the House of Representatives, technically remains pending before the states. The other, which prohibits lawmakers from awarding themselves a raise until the following session of Congress, resurfaced and was finally ratified in 1992 as the 27th Amendment.

American flag waving with the Capitol Hill.
Credit: rarrarorro/ iStock

A Record 61 Years Elapsed Between the Passage of the 12th and 13th Amendments

After the Bill of Rights went into effect, it took approximately three years until the 11th Amendment, which limited lawsuits against the states, was added in February 1795, and another nine-plus years for the 12th Amendment, which separated voting for Presidents and Vice Presidents, to become official in June 1804. It then took a whopping 61.5 years to formally eliminate slavery with the December 1865 ratification of the 13th Amendment, the longest period to date between constitutional amendments in U.S. history.

Tim Ott
Writer

Tim Ott has written for sites including Biography.com, History.com, and MLB.com, and is known to delude himself into thinking he can craft a marketable screenplay.

Original photo by 66 north/ Unsplash

There’s more to Antarctica than cold weather and penguins, though it does have plenty of both. And while we’ve learned much about the elusive continent since it was first discovered around 200 years ago, it maintains an air of mystery like few other places on Earth. Interesting facts abound when it comes to Antarctica — here are eight of them.

A section of the West Antarctic Ice Sheet with mountains.
Credit: Mario Tama/ Getty Images News via Getty Images

Antarctica Is the World’s Largest Desert

The word “desert” tends to evoke images of extreme heat, cacti, and vast expanses of sand. The technical definition is less fanciful: an area that receives no more than 25 centimeters (10 inches) of precipitation per year. With that in mind, it’s perhaps less surprising that Antarctica is the world’s largest desert. At 5.5 million square miles, it edges out both the Arctic (5.4 million square miles) and Sahara (3.5 million) deserts, with the Arabian and Gobi deserts rounding out the top five. Antarctica only receives about 6.5 inches of precipitation in a given year, almost all of it as snow.

King penguins in the snow in South Georgia, Antarctica.
Credit: elmvilla/ iStock

Antarctica Is Also the Coldest, Windiest, Driest, and Highest Continent

Antarctica is a land of extremes, and it ranks first among the seven continents on several scales. In addition to being the coldest continent, it’s also the windiest, driest, and highest one. The coldest Antarctic temperature (and thus the coldest on Earth) was recorded at Vostok Station in July 1983 at -128.6°F. The highest wind speed recorded on the continent was at the Dumont d’Urville station in July 1972 at 199 mph. The average elevation is 8,200 feet — by comparison, the average elevation in the U.S. is a measly 2,500 feet.

Clocks in an airport showing the time of different major cities.
Credit: ymgerman/ Shutterstock

There’s No Official Time Zone in Antarctica

What time is it in Antarctica right now? There are a lot of different ways to answer that question, as the world’s fifth-largest continent doesn’t have an official time zone. Instead, some research stations (there are about 50 permanent stations on the continent) are synched up to the local time in the countries they hail from, while others observe the local time of whichever country is closest (for example, the Palmer Station, an American outpost, keeps Chile Summer Time, or CLST). Daylight Saving Time complicates matters further, with stations such as Troll (from Norway) switching from Greenwich Mean Time (GMT) to Central European Summer Time (CEST) when the clocks change in Europe.

Emilio Marcos Palma is cradled by his father after becoming the first baby born in the Antarctic.
Credit: Horacio Villalobos/ Corbis Historical via Getty Images

At Least 11 Babies Have Been Born in Antarctica

On January 7, 1978, something happened that had never happened before: A human was born in Antarctica. His name was Emilio Marcos Palma, and his parents were part of Esperanza Base, an Argentine research station. Ten more babies came into the world on the continent throughout the rest of the decade and into the ’80s, all of them either Argentine or Chilean, with some commentators suggesting this was a concerted effort by both countries to strengthen their respective claims to Antarctica. Because all 11 survived, Antarctica technically has the lowest infant mortality rate of any continent: 0%.

A view of a crystal blue water flooding an ice cave.
Credit: DCrane/ Shutterstock

Antarctica Has a Lake So Salty It Doesn’t Freeze

Antarctica is known for its permafrost, but at least one part of it never freezes: Deep Lake, which is so salty — about 10 times saltier than the ocean, putting it on a similar level to the Dead Sea — that it stays liquid even at extreme temperatures. It’s considered one of the planet’s least productive ecosystems, as the cold and hypersalinity prevent almost all life from thriving there (although it is home to a collection of extremophiles — organisms that thrive in the most extreme conditions on Earth). Deep Lake is 180 feet below sea level and only gets saltier at increased depths.

A rocky ice mountain in the Weddell Sea of Antarctica.
Credit: 66 north/ Unsplash

Antarctica Is Bigger Than the United States

Though most map projections don’t convey it very well, Antarctica is big — really big. With an area of 5.4 million square miles, it’s the fifth-largest continent by size (ranking ahead of both Europe and Australia) and roughly one and a half times the size of the United States.

Big cruise ship in the Antarctic waters.
Credit: Volodymyr Goinyk/ Shutterstock

No One’s Sure Who Discovered Antarctica

Long before a human set foot on Antarctica, explorers were obsessed with learning more about the Antarctic Circle. The circle was first crossed in 1773 by Captain James Cook, but it took another 47 years before Antarctica was actually seen by human eyes. The question of who can actually lay claim to that achievement remains disputed more than 200 years later, with Russian explorer Fabian Gottlieb von Bellingshausen reporting having seen “an ice shore of extreme height” on January 27, 1820 and Edward Bransfield of the Royal Navy describing “high mountains covered with snow” on January 30 of the same year.

What’s known as the Heroic Age of Antarctic Exploration wouldn’t begin until the end of the 19th century, with Norwegian explorer Roald Amundsen and his team first reaching the South Pole on December 14, 1911 — a feat matched just five weeks later by Brit Robert Falcon Scott.

Flags of original signatory nations of the Antarctic Treaty at the South Pole, Antarctica.
Credit: Colin Harris / era-images/ Alamy Stock Photo

Antarctica Is Officially Dedicated to Peaceful Purposes

Though some countries have tried to claim it for their own, Antarctica doesn’t belong to any nation, government, or other entity. That was made official when 12 countries — Argentina, Australia, Belgium, Chile, France, Japan, New Zealand, Norway, South Africa, the Soviet Union, the United Kingdom, and the United States — signed the Antarctic Treaty on December 1, 1959. That this happened during the Cold War is no coincidence — the treaty was, among other things, an arms-control agreement setting aside the entire continent as a scientific preserve where no military activity is allowed.

A total of 54 countries now abide by the agreement, which has three key provisions: “Antarctica shall be used for peaceful purposes only,” “freedom of scientific investigation in Antarctica and cooperation toward that end … shall continue,” and “scientific observations and results from Antarctica shall be exchanged and made freely available.” All signatories have abided by the treaty, and Antarctica remains a hub of important research today.

Michael Nordine
Staff Writer

Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.

Original photo by chris_tina/ Shutterstock

From Grape Nuts to Lucky Charms, breakfast cereal exists on a broad spectrum that ranges from nutritious to cavity-inducing. These toasty, ready-to-eat grains have been around since 1860 and have been a staple of American mornings since the 1950s. Between convenience and clever marketing, they’ve become ubiquitous among children and adults alike. But there’s a lot you might not know about them. For example, what’s the deal with cereal box prizes? How did Cheerios get their name? Do Froot Loops actually have different flavors? These eight facts are part of a complete breakfast.

A little boy pouring cereal into a bowl.
Credit: PeopleImages.com – Yuri A/ Shutterstock

The First Cereal Prize Was (Probably) a Book

Prizes included with boxes of sugary cereals used to be a mainstay of product marketing. Notable examples include cheap plastic toys, baseball cards, and even a video game on CD-ROM — a Doom mod called Chex Quest.

What’s likely the earliest example was a little more literary, though. That honor is usually given to Kellogg’s, which offered a book to customers who checked out at the grocery store with two boxes of Corn Flakes in 1910. The book, The Funny Jungleland Moving Picture Book, featured horizontal flaps that could be moved to create different pictures and stories.

Prizes started appearing inside cereal boxes in the 1920s, when Malt-O-Meal began packaging whistles at the bottom of the box.

Aerial view of Cheerios in a bowl of milk.
Credit: freestocks/ Unsplash

Cheerios Used to Be Called Cheerioats

There are few, if any, cereals more iconic than Cheerios, but if you thought the name came from their round shape, you’d be mistaken. When the cereal originally launched in 1941, it was called Cheerioats. In 1945, Quaker Oats claimed that it had exclusive rights to “oats” for its oatmeal — laughable in today’s oat-heavy market — and General Mills dropped the “at” from the end of the name. As of 2018, Cheerios is the bestselling cereal in the United States (just above Honey Nut Cheerios in second place), so General Mills really came out ahead in the end.

Cereal bowl and spoon with milk.
Credit: Linda Studley/ Shutterstock

Some Common Cereal Is Magnetic

It’s incredibly common for cereal to be fortified with extra vitamins and minerals, including iron. Just like any other iron — whether it’s in a skillet or a fence — the iron added to breakfast cereal is magnetic. Cereals with a lot of iron in them (like fortified cornflakes) even react to magnets when they’re floating in liquid. While some whole cereal pieces contain enough iron to respond to a magnet on their own, for a more in-depth, science fair-style experiment, you could try crushing up cereal and seeing how much pure iron you can pull out of it.

A bowl full of Froot Loops cereal.
Credit: Sergio Rojo/ Shutterstock

Froot Loops Are All the Same Flavor

The O’s of Froot Loops come in a variety of fruity colors, as if they each represent a different fruit flavor. However, the color is the only real difference between those O’s, because the flavor is the same throughout the box. You may still taste a difference between the colors, but it’s probably because your vision tells you to expect something different.

Speaking of fruity misconceptions, it’s always been spelled “Froot Loops” — contrary to a popular belief that the name changed because of a lawsuit over the cereal’s lack of real fruit.

Boxes of Kellogg's Frosted Flakes cereal.
Credit: SAUL LOEB/ AFP via Getty Images

Tony the Tiger Beat Other Animals to Become the Frosted Flakes Mascot

Imagine for a second that the Frosted Flakes slogan isn’t “they’re grrrrrreat,” because the mascot is not a tiger, but a kangaroo, and the kangaroo makes more of a coughing sound. When Kellogg’s launched Frosted Flakes in 1952, it experimented with several mascots — including Katy the Kangaroo, Elmo the Elephant, and Newt the Gnu — to see which one would be most popular with consumers. Tony turned out to be the favorite across demographics, and Katy, Elmo, and Newt are now just distant memories.

Close-up of lucky Charms cereal marshmallows.
Credit: Sergio Rojo/ Shutterstock

Pink Hearts Are the Only Original Lucky Charm Marshmallow Left

If you haven’t had Lucky Charms since you were a kid, you may be in for a surprise, because General Mills makes adjustments to its lineup every so often. Today’s cereal has a whopping eight marshmallow shapes (they’re called “marbits”), and when a new one comes along, another steps out. But Lucky Charms launched in 1964 with just four marbits: green clovers, pink hearts, orange stars, and yellow moons. Now the moons are blue, the stars are yellow-and-orange shooting stars, and the green clovers are part of a hat. The pink hearts are the only ones that remain close to their original form.

Other shapes have come and gone completely, like the blue diamond, pot of gold, crystal ball, and green tree. The most recent addition is the purple unicorn, which replaced the hourglass.

Wheaties featuring Michael Jordan.
Credit: Keith Homan/ Alamy Stock Photo

Michael Jordan Has Appeared on More Wheaties Boxes Than Anyone Else

Wheaties, aka the Breakfast of Champions, has existed since 1924 and has featured athletes on its boxes since 1934; Lou Gehrig was the first. Over 90 years of sporty branding, there have been a few repeats, but Michael Jordan has graced the front of the box the most, at 19 times over 30 years. The five-time NBA MVP and Space Jam star most recently appeared on a box design commemorating the cereal’s 100th anniversary.

An advertisement for Rice Krispies cereal.
Credit: Picture Post via Getty Images

There’s a Secret Fourth Rice Krispies Elf

Everybody knows about elves Snap, Crackle, and Pop, named for the sounds the cereal makes when it mingles with milk. The trio have been promoting Rice Krispies in one form or another since the 1930s, starting with Snap as a solo act, before Crackle and Pop joined him in 1941. But few remember the fourth cereal brother, a nonverbal space-elf named Pow, who showed up for a very brief time in the early 1950s. He appeared in only two commercials, riding a hovercraft and drawing attention to the cereal’s “power from whole grain rice.” According to Kellogg’s, he was never meant to be an “official character.”

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.

Original photo by PictureLux / The Hollywood Archive/ Alamy Stock Photo

Judy Garland, sometimes billed as “the world’s greatest entertainer,” accomplished a lot during her short and storied life, from her childhood vocal performances and breakout role in The Wizard of Oz to her dramatic comeback in A Star Is Born. Known for her giant voice, even at an early age, and magnetic stage presence, Garland won the hearts of showbiz executives, other entertainers, and adoring fans alike.

Today, Garland is well known not just for her performances but for a volatile behind-the-scenes life that made her career both successful and inconsistent — leading to tabloid scandals, breaks from public life, and, eventually, epic resurgences. Despite those slower periods, Garland managed to create an outsized legacy in just 47 years alive on this planet. Where did her stage name come from? Why is her only Academy Award pint-sized? What happened later in her career? These six facts about Judy Garland will have you strolling down a yellow brick memory lane.

Judy Garland costumed in ruffles for her first major role in the Kiddie Revue in Los Angeles.
Credit: Bettmann via Getty Images

She Started Performing When She Was 2 Years Old

Judy was born to a pair of vaudeville performers and theater operators, and by the time she came along, her two older sisters had already started appearing onstage — so in some ways, showbiz was inevitable. After begging her parents to let her perform, she got her big debut at the family’s theater when she was just 2 years old. She had been tasked with singing her favorite holiday song, “Jingle Bells,” and got so excited that she sang it more than once in a row.

This started a new era of Judy and her sisters performing as a trio, although she quickly emerged as the standout of the group. While all three were talented, and even appeared together in the 1929 short film The Big Revue, it was Judy who caught the attention of performers and promoters on the road. At just 13 years old, she signed with Metro-Goldwyn-Mayer and started going to school on the MGM lot with other child stars, including Mickey Rooney.

Hoagy Carmichael, the film actor is shown here at the piano.
Credit: Bettmann via Getty Images

The Stage Name “Judy” Came From a Hoagy Carmichael Song

Judy’s legal name was Frances Ethel Gumm, after her parents, Frank and Ethel. The couple had expected a boy after having two girls, and planned to name him Frank Jr., so Frances was both a compromise and an inside joke. In everyday life, she was simply known as “Baby” or “Baby Gumm.”

The last name “Garland” came about while she and her sisters, then known as the Gumm Sisters, were touring. “Gumm Sisters” didn’t exactly roll off the tongue, and a popular comedian emceeing a series of performances came up with “Garland Sisters.”

“Judy,” however, didn’t come until later, and for a time she was known professionally as Frances Garland. The first name came along after one of her older sisters decided to go by a stage name. Sick of both “Baby” and “Frances,” she picked her own fresh moniker from Hoagy Carmichael’s latest hit, “Judy.” She was especially drawn to one line: “If she seems a saint but you find that she ain’t, that’s Judy.”

She encountered some family resistance to the new name, but refused to respond to anything but “Judy” as soon as she’d made her decision, so it stuck pretty quickly.

Photo of Garland singing on a MGM show.
Credit: Bettmann via Getty Images

MGM Made Her Wear Nose-Altering Accessories

Garland rose to superstardom with her doe-eyed look, but in her days at MGM, she was considered, however unfairly, a kind of ugly duckling compared to the more willowy starlets in the MGM stable. In her earlier years, when the priority was preserving her childlike look, she carried rubber discs in a small carrying case, along with caps for her teeth. She’d insert the discs in her nose to give it a more upturned look. Because the studio wanted to keep her looking as young as possible, her breasts were also often bound.

Once she was a little older and starring in less-childlike roles, such as Esther Smith in Meet Me in St. Louis, she started wearing a canvas and metal corset that required two people on either side to pull the strings tight. (It’s a wonder she was still able to sing.)

The Tin Man (Jack Haley), Dorothy (Judy Garland) and the Scarecrow in a feature film.
Credit: FPG/ Moviepix via Getty Images

“The Wizard of Oz” Helped Earn Her an Itty-Bitty Academy Award

While she was nominated a few times, Garland’s only Academy Award came in 1940, and it was actually a miniature version of the iconic statuette. Garland was one of just a handful of people to win the special award known as the “Juvenile Oscar,” first awarded to six-year-old Shirley Temple in 1935.

The award typically celebrated a young actor’s achievement in the previous year, and in 1939 Garland had starred in two films: Babes in Arms and The Wizard of Oz. At the time she accepted the award, presented by her former classmate and previous Juvenile Oscar recipient Mickey Rooney, she was just a few months shy of her 18th birthday. The award really does look tiny with a teenager holding it — and even tinier next to full-size Academy Awards, like the one her daughter Liza Minnelli won for Cabaret in 1973.

The Juvenile Oscar wasn’t awarded every year, so it took a special situation to warrant the special trophy. Just 12 were awarded in the 26 years it existed; the last one was awarded in 1961 to Hayley Mills, who appeared in Pollyanna the year before. A 16-year-old Patty Duke won a regular Best Supporting Actress award two years later.

Photograph of Judy Garland and Bing Crosby singing into a microphone in a studio.
Credit: Bettmann via Getty Images

She Was Under 5 Feet Tall

Another thing that set her apart from other singer-actresses at the time was her height. Garland stood just 4 feet, 11 inches. When she was a child actress, she was still around the same height as her frequent costar Mickey Rooney, but was already noticeably shorter than other MGM stars such as Deanna Durbin. Ever notice how the famous ruby slippers from The Wizard of Oz have enough of a heel to add a couple of inches?

And Garland didn’t get any taller with age. Because she had a particularly short upper body, the height difference was a little more noticeable when she was seated next to someone — high heels can’t do much for you if you’re not standing up.

Judy Garland performing on stage at the Fontainebleau Hotel.
Credit: Ray Fisher/ The Chronicle Collection via Getty Images

In 1959, She Was Told She’d Never Work Again

After decades of overwork, substance abuse, and mental health struggles, Garland was in pretty rough shape by the time her late 30s rolled around. In late November 1959, she was admitted to New York’s Doctors Hospital with a barrage of symptoms both physical and mental, severe enough to be life-threatening. Over the next three weeks, her future seemed uncertain, and doctors drained 20 whole quarts of fluid from her body. After a bevy of tests, the lead physician on her case said that this would be the end of her career, permanently — she was to limit physical activity for the rest of her life, and “under no circumstances” could she work again.

Garland apparently responded “whoopee!” before collapsing back into bed. She’d later tell LIFE that the news made her feel “relieved”: “The pressure was off me for the first time in my life.”

She did stop working… for a time. She was in the hospital for a total of five months, then four more recovering at her Beverly Hills home. By the end of the summer, she was excited about music again.

In a career full of comebacks, this may have been her greatest. Garland embarked on a national tour that got rave reviews, and took a serious, non-singing role in the drama Judgment at Nuremberg that earned her an Oscar nomination. After delivering consistent, showstopping performances across the country, Garland recorded her famous live album, Judy at Carnegie Hall. The show was legendary: Despite almost no promotion, it sold out within hours. Audience members left their seats to crowd the stage (she asked them to sit down so others could see, and they obliged). After multiple encores, fans lingered at the stage entrance for an hour and a half.

After it was released, the album won four awards at the 1962 Grammys: Album of the Year (beating the soundtracks to both West Side Story and Breakfast at Tiffany’s), Best Female Solo Vocal Performance, Best Engineering, and Best Album Cover. The first two were credited to its star performer.

Garland would continue to have ups and downs for the last several years of her life, but the proclamation her doctor made in 1959 certainly didn’t hold up.

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.

Original photo by ronstik/ iStock

Houseplants adorn our homes with beautiful foliage, but they’re not just for decoration. They enrich our lives by giving us something to care for and can be a rewarding hobby, especially for those who focus on complex plants like orchids. It’s possible that houseplants even reduce indoor air pollution (though this is controversial) and can dampen noise from the street.

While plants may seem pretty straightforward from a distance, we’re learning more every day about their inner workings — like whether and how they respond to music. Home in on specific species and there are all kinds of things to learn, such as how some trick pollinators into coming their way, while others can survive for years without water. These six facts about houseplants might have you running to the nursery for some new additions, or at least thinking differently about your leafy roomies.

Close-up of a girl spraying water in her lucky bamboo plant.
Credit: sansubba/ iStock

Lucky Bamboo Is Not Real Bamboo

Lucky bamboo has great cultural significance, especially when given as a gift or placed to improve feng shui. It’s often sold in stunning sculptural arrangements, like braids and twirls, and adorned with colorful ribbon, although it grows just fine as straight stalks in soil or plain water. Lucky bamboo thrives in all kinds of settings, and even does fine in low light, so it’s a mainstay of bookshelves and cubicles all over the world.

While it looks a little like true bamboo, it’s not related to the panda food. Its scientific name is Dracaena sanderiana, and it’s more closely related to dragon trees, ti plants, and snake plants. Like real bamboo, it’s super easy to grow, but unlike real bamboo, it’s not going to invade your entire garden.

close up of a decomposed fly in half opened venus flytrap.
Credit: Andreas Häuslbetz/ iStock

Poaching a Venus Flytrap Is a Felony

Venus flytraps are incredibly popular houseplants for both pest control and sheer curiosity appeal, but in the wild, their habitat is itty-bitty. They grow only in a 75-mile radius in damp savannas, mostly located in North Carolina, and they’re in danger of losing their very specific habitat, which needs fire to thrive. Venus flytraps are considered a “species of special concern” in North Carolina, and people digging them up for use as houseplants is a major threat to their population. As of 2015, only about 35,000 of them were left growing in the wild.

In 2014, the state made poaching them a felony, punishable by up to 29 months in prison and steep fines. The U.S. Fish and Wildlife Service recommends checking flytraps before you buy them; plants grown in a nursery will likely be consistent in size and have uniform soil free from weeds.

Young tangerine or kumquat tree with fruits in a wicker pot.
Credit: t.sableaux/ Shutterstock

Citrus Plants Can Grow Indoors

Those of us who don’t live in perpetually sunny climates may not be able to grow lime trees in our backyards, but that doesn’t mean giving up on a citrus tree dream. In colder climates, many citrus varieties make great houseplants; you’re not going to grow a giant orange tree in your living room, but dwarf trees can be a great option near sunny windows. Acidic fruits, like lemons and limes, need less heat to ripen, making them slightly better candidates for indoor growing, but some varieties of satsuma, citron, and kumquat are also great choices. Speaking of fruit…

Monstera deliciosa plant closeup including fruit bodies.
Credit: prill/ iStock

Monstera Plants Produce Tasty Tropical Fruit

As a houseplant, Monstera deliciosa (also known as split-leaf philodendron or Swiss cheese plant) is better known for its giant, heart-shaped leaves marked by beautiful patterns of notches and holes. Indoors, the plants typically stop there, but in their native Central American tropical forests, these already larger-than-life plants eventually climb nearby trees and reach heights of up to 70 feet, with large blossoms that resemble peace lilies. Eventually, those flowers become scaly fruit that tastes like a cross between a banana, pineapple, and mango — hence, deliciosa.

Above view of woman hands holding Rose of Jericho, Selaginella lepidophylla.
Credit: FotoHelin/ iStock

Resurrection Plants Can Go Seven Years Without Water

Even for those who have never been able to keep a houseplant alive, Selaginella lepidophylla — also known as a resurrection plant, the Rose of Jericho, or, confusingly, the False Rose of Jericho — is a pretty low lift. The plants are native to the Chihuahuan Desert in northern Mexico and parts of Texas, New Mexico, and Arizona, and have an incredible survival strategy for the dry heat. They allow themselves to dry out, and then bounce around as tumbleweeds until they find somewhere damp to settle down. Once hydrated, they spread out their fern-like fronds and turn green. If that area dries out, they just curl back up into a ball and repeat the cycle. Once dormant and dead-looking again, the plant can survive for up to seven years.

Selaginella lepidophylla does not need to be rooted to come back to life, and it only takes a couple of hours to go from dormant to vibrant. As houseplants, they are nearly impossible to kill. Just make sure nobody mistakes them for actually being dead — their biggest household hazard is becoming accidental trash.

Close-up of female hands holding an orchid plant.
Credit: Maryviolet/ iStock

Some Orchids Use Trickery to Reproduce

Orchids are bizarre plants, and there are a lot of them. With more than 25,000 varieties, they make up around 10% of the world’s plant species, and many of them have their own set of tricks. When it’s time to spread pollen, a number turn to mimicry to lure in beneficial insects. Some release pheromones that smell like female insects, while others go so far as to visually imitate their pollinators — that is, specialized petals lure in male bees or wasps, who “mate” with the flower before moving on to another, picking up pollen in the process.

Sarah Anne Lloyd
Writer

Sarah Anne Lloyd is a freelance writer whose work covers a bit of everything, including politics, design, the environment, and yoga. Her work has appeared in Curbed, the Seattle Times, the Stranger, the Verge, and others.

Original photo by Dom Slike/ Alamy Stock Photo

Few figures in American history are as well regarded as Martin Luther King Jr. is today. A civil rights leader who worked tirelessly in the fight for justice and equality, even as he was threatened and attacked for doing so, King organized and participated in countless marches and protests to combat racial discrimination, laying the groundwork for important victories such as the Civil Rights Act of 1964 and the Voting Rights Act of 1965. His name is synonymous with the movement, and his message — most famously expressed in his “I Have a Dream” speech at the March on Washington for Jobs and Freedom in 1963 — continues to resonate and inspire today. But for all we know about his trailblazing activism, there are still a few details about his life that may surprise you. Here are six lesser-known facts about MLK.

Close-up of the Reverend Dr. Martin Luther King, Jr.
Credit: Bettmann via Getty Images

His Birth Name Was Michael

When Martin Luther King Jr. was born on January 15, 1929, his name wasn’t what we know it to be today. According to MLK’s original birth certificate, filed on April 12, 1934, his given name was Michael King Jr. His switch to a new name had to do with his father, who served as senior pastor at Atlanta’s Ebenezer Baptist Church. In 1934, King Sr. traveled to Germany, where he witnessed the budding rise of hate-fueled Nazism throughout the country. Germany was also where, in 1517, theologian and monk Martin Luther wrote his Ninety-Five Theses, which in turn inspired the Protestant Reformation. That movement held great significance to King Sr., who, upon returning to the states, chose the name “Martin Luther” for both himself and his son. MLK Jr. would rise to prominence under this new name, though he didn’t officially amend his birth certificate until July 23, 1957, when the name “Michael” was crossed out and the words “Martin Luther Jr.” were printed next to it.

Dr. Martin Luther King, Jr. receives honorary degree at Hofstra University.
Credit: Newsday LLC via Getty Images

He Received a “C” at School for Public Speaking

Although he’s known now for being a prolific public speaker, MLK Jr. wasn’t always appreciated for his eloquence. In fact, while attending Crozer Theological Seminary in Chester, Pennsylvania, King received a “Pass,” a “C+,” and a “C” in his public speaking course during the 1948-49 school year. This proved to be an anomaly, though; by the end of King’s time at the seminary, he was a straight-A student, class valedictorian, and student body president. He later attended Boston University, where he received his Ph.D. in systematic theology in 1955 at the age of 26, thus earning the title of doctor.

Slideshow of Martin Luther King Jr. at the Grammys.
Credit: Kevin Winter/ WireImage via Getty Images

He Was a Three-Time Grammy Nominee

King was not a musician, but the spoken-word recordings of his most famous speeches earned him several Grammy nominations. The first came in 1964 at the 6th Annual Grammy Awards, where “We Shall Overcome (The March On Washington… August 28, 1963)” was nominated for Best Documentary, Spoken Word, Or Drama Recording (Other Than Comedy). Two other nominations were bestowed upon him posthumously, at the 11th Grammy Awards in 1969 for his recording of “I Have A Dream” (in the Best Spoken Word Recording category), and in that same category for “Why I Oppose the War in Vietnam” at the 13th Grammys in 1971. The latter was his first and only win, but his “I Have A Dream” speech was later voted into the Grammy Hall of Fame in 2012.

Martin Luther King, sculpted by Tim Crawley.
Credit: John Stillwell – PA Images via Getty Images

London’s Westminster Abbey Features a Statue of MLK

In 1998, a statue honoring Dr. King was unveiled at Westminster Abbey in London, a city where he famously spoke in 1964 while visiting Europe to accept his Nobel Peace Prize. The statue was among a group of 10 depicting some of the 20th century’s most celebrated Christian martyrs, which were installed above the Great West Door in niches that had stood vacant for 35 years. Queen Elizabeth II presided over the unveiling, which also honored notable religious figures such as El Salvador’s Archbishop Oscar Romero and Franciscan friar Maximilian Kolbe of Poland. Designed by Tim Crawley, the statues are made of French Richemont limestone and weigh almost a ton each.

Of course, Westminster Abbey is far from the only place to honor King artistically. There are several statues and memorials in the U.S., too, though perhaps none is more prominent than the Stone of Hope, a 30-foot-tall granite statue of King unveiled on D.C.’s National Mall in 2011.

The Star Trek crew.
Credit: Bettmann via Getty Images

He Was a Huge Fan of “Star Trek”

MLK was not only a huge fan of Star Trek but also a pivotal figure in the career trajectory of one of the show’s most beloved actors. Star Trek was the only program King allowed his children to stay up late to watch, in large part because of the character Uhura, played by African American actress Nichelle Nichols. King viewed Nichols’ role as one of the few examples of equality on television — a belief that he expressed to Nichols upon meeting her at a fundraiser for the NAACP. After the show’s first season ended in 1967, Nichols had been leaning toward departing Star Trek for a role on Broadway. In the end, however, she was swayed by King’s passionate words about her power and influence as a role model for Black women, and decided to remain a member of the cast.

Civil rights activist Dr. Martin Luther King with his wife Coretta Scott.
Credit: Hulton Deutsch/ Corbis Historical via Getty Images

King and His Wife Spent Their Honeymoon at a Funeral Parlor

MLK met the woman who would become his wife, Coretta Scott, in Boston in January 1952. They married the next year on June 18, 1953, on Scott’s parents’ lawn in Alabama, though their ensuing honeymoon took an unusual turn. After being turned away from several whites-only hotels throughout Marion, a town where segregationist attitudes ran deep, MLK and his wife were invited by a friend to spend their wedding night in the back room of a funeral parlor. It wasn’t until five years into their marriage that the pair took a more traditional honeymoon trip to Mexico.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by e'walker/ Shutterstock

More than 2 million people visit Mount Rushmore each year, making the towering presidential monument South Dakota’s biggest tourist attraction. The sculptor behind the visionary memorial, Gutzon Borglum, called it “a shrine to democracy,” with the carved granite faces of Presidents George Washington, Thomas Jefferson, Theodore Roosevelt, and Abraham Lincoln symbolizing the founding of the nation and its timeless values. From its controversial location to its starring role in a classic Hitchcock caper, here are six fascinating facts you might not know about Mount Rushmore.

View of Sculptured Faces of Former Presidents at Mount Rushmore.
Credit: Bettmann via Getty Images

Mount Rushmore Has Been Controversial From the Start

The idea for a monument in the Black Hills of South Dakota generated controversy even before the first blast of dynamite took place. The 1868 Treaty of Fort Laramie, signed by the U.S. government and Sioux nations, reserved the Black Hills for the exclusive use of the Sioux peoples. The mountain that became Mount Rushmore is a sacred site in Lakota culture (the Lakota are part of the Sioux); its name translates to “the Six Grandfathers,” representing the supernatural deities that the Lakota believe are responsible for their creation. But within a decade, the U.S. had broken the treaty, leading to armed conflicts including the U.S. defeat at the Battle of the Little Bighorn in 1876. The federal government used its loss in the battle to justify its occupation of the Black Hills (a seizure later found to be unconstitutional).

Despite this bloody history, South Dakota state historian Doane Robinson proposed the Black Hills as the site for a new tourist attraction. He contacted sculptor Gutzon Borglum, who had recently been working on a monument to Confederate leaders on Stone Mountain in Georgia, and invited him to South Dakota. In 1925, Borglum began designing the colossal monument at Mount Rushmore (renamed for New York lawyer Charles E. Rushmore, who traveled to the Black Hills to review legal titles of properties in 1884). Once the project was approved and funded by Congress, carving commenced in October 1927.

Workman on Mt. Rushmore repairing Lincoln's nose.
Credit: Bettmann via Getty Images

Each Head on Mount Rushmore Is Over Five Stories Tall

With such a large canvas, chipping away with chisels was not going to cut it. Borglum used dynamite to blast away 90% of the granite rock face, which left between 3 and 6 inches of granite to be carved more finely. Workers suspended in slings from the top of the monument drilled a series of closely spaced holes to weaken the rock, then removed the excess chunks by hand. The detailed facial features were achieved with hand tools that left perfectly smooth surfaces.

In all, nearly 400 men and women spent over 14 years working on the monument. When the entire sculpture was completed in 1941, each presidential head measured about 60 feet tall. The Presidents’ eyes are roughly 11 feet wide, their noses measure about 21 feet long, and their mouths are around 18 feet wide. And because Mount Rushmore’s granite erodes extremely slowly — about 1 inch in 10,000 years — those features won’t shrink any time soon.

Faces of Presidents George Washington, Thomas Jefferson, Theodore Roosevelt & Abraham Lincoln.
Credit: Harold M. Lambert/ Archive Photos via Getty Images

The Four Presidents Represent Specific Aspects of American History

The four Presidents depicted on Mount Rushmore were chosen for their key roles in American history. The carved face of George Washington, completed in 1930, is the most prominent figure on the memorial and represents the founding of the nation. Thomas Jefferson, dedicated in 1936, stands for the growth of the United States, thanks to his authorship of the Declaration of Independence and his roles in the Louisiana Purchase and the Lewis and Clark expedition. Borglum chose the figure of Abraham Lincoln, dedicated in 1937, to represent American unity for his efforts to preserve the nation during the Civil War. Theodore Roosevelt, finished in 1939, symbolizes the development of the United States as a world power (he helped negotiate the construction of the Panama Canal, among other achievements) and honors his role as a champion of the worker in his fight to end corporate monopolies.

View of Mount Rushmore.
Credit: Bettmann via Getty Images

Some Have Proposed Additions to Mount Rushmore

Various additions to Mount Rushmore’s pantheon have been suggested over the decades. In 1937, a woman named Rose Arnold Powell enlisted the help of First Lady Eleanor Roosevelt in her proposal to add the head of Susan B. Anthony, but Congress ultimately refused to allocate funds. Other suggestions have included Presidents John F. Kennedy, Ronald Reagan, Franklin D. Roosevelt, and even Barack Obama. However, the mountain’s original sculptor, Gutzon Borglum, insisted that the rock could not support further carving, and modern engineers have backed up that claim.

View of the Mount Rushmore Hall of Records.
Credit: Science History Images/ Alamy Stock Photo

There’s a Semi-Secret Vault Behind the Sculpture

Borglum’s initial plans for Mount Rushmore called for a grand Hall of Records within the mountain behind Lincoln’s head. The 100-foot-long hall was planned to house the Declaration of Independence, U.S. Constitution, and other documents key to the nation’s history. Borglum envisioned the hall to be decorated with busts of notable Americans and a gigantic gold-plated eagle with a 38-foot wingspan over the entrance. But the twin tragedies of Borglum’s death in 1941 and the start of World War II put an end to his vision. The half-excavated chamber sat empty until 1998, when officials placed 16 panels explaining the history of the monument and its sculptor — along with the words to the Bill of Rights, U.S. Constitution, and Declaration of Independence — in a box and sealed it inside the vault. The chamber remains closed to the public.

Hanging from a cliff at Mount Rushmore in 'North By Northwest', directed by Alfred Hitchcock.
Credit: Silver Screen Collection/ Moviepix via Getty Images

The National Park Service Didn’t Want Hitchcock to Film “North by Northwest” at Mount Rushmore

The climax of Alfred Hitchcock’s 1959 thriller North by Northwest has Roger Thornhill (played by Cary Grant) and Eve Kendall (played by Eva Marie Saint) dangling over the precipice of Mount Rushmore as a villain tries to push them off. However, prior to filming, the National Park Service worried that the filmmakers would desecrate the monument. The agency made Hitchcock agree to strict rules, including promising not to shoot any violent scenes at Mount Rushmore or have live actors scrambling over the Presidents’ faces, even on a soundstage mock-up.

But the director failed to keep his word. Days before the film’s premiere, the Park Service issued a terse statement calling the movie “a crass violation of its permit” and saying it wanted to “make the record clear on what the agreement provided and who failed to live up to it.” Still, the editor of the local Sioux Falls newspaper recognized the movie’s legacy. “Obviously,” he wrote, “this picture is worth its weight in gold to Rushmore from a publicity viewpoint.”

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Lifestyle pictures/ Alamy Stock Photo

Few films have had as profound an impact on cinema as the original Star Wars and the multibillion-dollar franchise it inspired. For nearly 50 years, fans have been dressing up as Jedi, stormtroopers, and Sith, and imagining their own adventures in a galaxy far, far away. In fact, the films have had such a cultural impact that May 4 (“May the Fourth Be With You”) is essentially an official holiday for Star Wars fans the world over. Here are seven little-known facts about Star Wars, exploring both the production of the films and the inspiration behind the saga’s most iconic characters.

Star Wars movie site in Tunisia.
Credit: Education Images/ Universal Images Group via Getty Images

Filming the Original “Star Wars” Almost Caused an International Conflict

Although Star Wars is famously set in a galaxy far, far away, George Lucas used real-world sets and locations to stand in for extraterrestrial locales throughout the original trilogy. The ice planet Hoth in The Empire Strikes Back was filmed near the town of Finse, Norway, while the scenes set on the forest moon of Endor made use of the giant redwoods near Crescent City, California.

One of the most iconic locations in all nine films is the Skywalker homestead on the desert planet of Tatooine. Lucas decided to shoot these scenes, which kick off the entire Star Wars franchise, in the desert of Tunisia (though parts were also filmed in Death Valley, California). In the mid-1970s, Tunisia had a tense relationship with the Libyan government, run by Muammar Gaddafi. Star Wars was filmed in Nefta, Tunisia, not far from the Tunisian-Libyan border. The biography George Lucas: A Life details how the Libyan government initially perceived the production as a military buildup along the border, mistaking a Jawa Sandcrawler for military hardware. Libyan inspectors even crossed the border to confirm that these otherworldly vehicles posed no real military threat. Thankfully, the matter was resolved peacefully.

Darth Vader at the European premiere of "Star Wars: The Rise of Skywalker".
Credit: Gareth Cattermole/ Getty Images Entertainment via Getty Images

Darth Vader’s Look Is Based on a Real Japanese Samurai

The inspiration behind the original Star Wars is famously pulled from a variety of sources. The iconic title crawl that sets up the space drama in the film’s opening seconds was borrowed from 1930s adventure serials like Flash Gordon and Buck Rogers. The space battles between TIE fighters and X-wings are a direct reference to WWII dogfighting, and the concept of the Jedi is likely lifted from the pages of Frank Herbert’s Dune.

But the most iconic character in the entire saga is undoubtedly Darth Vader, and his look is based on a very real historical figure: a Japanese samurai warlord named Date Masamune. Ralph McQuarrie, the concept artist behind the original trilogy of films, was influenced by Japanese samurai armor, and especially the jet-black armor of Masamune, who was born in 1567. The helmets bear the closest resemblance, but McQuarrie also borrowed the extended neck guard from Masamune’s armor. Vader’s helmet includes additional influences from helmets worn by the German army during WWII, all combining to create the most ominous villain the galaxy (and moviegoers) have ever seen.

Harrison Ford, as Han Solo, on the set of Star Wars: Episode IV.
Credit: Sunset Boulevard/ Corbis Historical via Getty Images

“I Have a Bad Feeling About This” Is Said in Every “Star Wars” Film

The entire Star Wars saga is filled with little Easter eggs and references to other characters and events throughout the franchise. One that can be easily missed is the phrase “I have a bad feeling about this,” said in every single Star Wars film (and sometimes even uttered multiple times). The phrase is also found in one-off live-action films, animated TV shows, video game series, and novels, and has become a kind of “in-joke” among Star Wars creators.

Notably, The Last Jedi, the eighth film in the Star Wars saga, appears to be the only exception, as no character audibly utters the famous phrase on screen. But director Rian Johnson confirmed that BB-8 actually delivers the line in binary, after which Poe Dameron, played by Oscar Isaac, retorts, “Happy beeps here, buddy, come on.”

A close-up of a Porg in Star Wars.
Credit: PictureLux / The Hollywood Archive/ Alamy Stock Photo

“The Last Jedi” Invented Porgs To Digitally Mask Real-Life Puffins

One of the most important locations in Rian Johnson’s The Last Jedi is the remote island on the planet Ahch-To, where a disgruntled Luke Skywalker spends his self-imposed exile and subsequently trains a determined Rey. These scenes were shot on a very real Irish island called Skellig Michael. Although perfect for creating a much-needed sense of isolation, the island is also a wildlife preserve for puffins. The birds became a real problem during the many scenes filmed on the island, constantly flying into shots and disrupting production. By law, The Last Jedi crew couldn’t interfere with them, so according to Jake Lunt Davies, a creature concept designer on the film, the team designed an in-universe creature that lived on the island and digitally replaced any stray puffins with it. Hence, Porgs were born.

Members of the American pop group ‘N Sync pose in front of a green screen.
Credit: Mikki Ansin/ Archive Photos via Getty Images

‘N Sync Was Almost in “Attack of the Clones”

Turn back the clock to 2001, and pop culture was obsessed with both the new Star Wars prequel franchise and the boy band ‘N Sync. At the behest of George Lucas’ daughter (along with the daughter of producer Rick McCallum), the members of ‘N Sync were offered minor roles during the final battle on Geonosis. Justin Timberlake and Lance Bass declined the invitation, supposedly too tired from touring, but the other three band members — Joey Fatone, JC Chasez, and Chris Kirkpatrick — donned Jedi robes and shot their scenes for the film. The moment was particularly special for Fatone, who had an entire room of his house dedicated to Star Wars memorabilia. Sadly, the footage wasn’t used in the final cut, and the blink-and-you’ll-miss-it cameo instead became a little-known piece of Star Wars history.

Darth Vader accepts the Ultimate Villain award from George Lucas onstage.
Credit: Kevin Winter/ Getty images Entertainment via Getty Images

The Original “Star Wars” Almost Wasn’t Made

It’s almost unfathomable that a movie studio would pass up the opportunity to make Star Wars, but in the mid-1970s, George Lucas’ little indie film was perilously close to never being made. Lucas first tried to acquire the rights to Flash Gordon in order to make a big-screen version, but when he was unable to secure a deal, he decided to create his own space adventure. Once he had the idea, he needed the money, but United Artists, Universal, and even Disney (which later bought Lucasfilm for $4.05 billion in 2012) all passed on funding the film.

Finally, 20th Century Fox agreed to finance the project, not because they thought the film would be any good, but mostly to secure a relationship with the up-and-coming director. With an initial budget of only $8 million (eventually bumped up to $11 million) and plenty of disasters during filming and post-production, Star Wars was born from both financial and artistic adversity, yet it has gone on to inspire generations of fans around the globe.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.