Original photo by Dennis Forster/ Shutterstock

Fish are hardly exotic animals to those of us who spend our days on land. After all, we can spot them in a nearby lake or river, view the colorful varieties at a pet store, or peruse the edible ones at the grocery store. And yet, because they occupy a sphere of the planet inherently inhospitable to sustaining our own lives, they remain often-mysterious creatures. Here are nine things to know about our fishy friends, so near to us and still a world apart.

Colorful aquarium, showing different colorful fishes swimming.
Credit: Goinyk Production/ Shutterstock

There Are More Than 36,000 Species of Fish

We’ve all likely heard the post-breakup adage that there are other fish in the sea, without stopping to consider the actual numbers behind the statement. By some estimates, there are 3.5 trillion fish in the world’s oceans. While that amount is difficult to verify, the oft-updated Eschmeyer’s Catalog of Fishes has zeroed in on more than 36,000 distinct species, a total that surpasses the combined species counts of the world’s other vertebrates: amphibians, reptiles, birds, and mammals.

A snakehead fish on the shore.
Credit: Vladimir Konstantinov/ Shutterstock

Some Fish Can Breathe on Land

Although fish are largely equipped with gills to absorb oxygen from water, at least 450 species have developed some sort of physiological adaptation that also allows for air breathing. African lungfish, for example, possess a modified swim bladder that functions as a lung when these fish burrow into mud during times of drought. Snakeheads also sport primitive lungs that enable them to survive out of water for up to four days. And mudskippers are known to be quite active in swampy environments thanks to the ability to breathe through their skin and mouth lining.

View of Smallmouth bass fish underwater.
Credit: Fine Art/ Corbis Historical via Getty Images

Fish Have Been Around for More Than 500 Million Years

As the planet’s first vertebrates, fish mark an important place on the evolutionary ladder. The earliest fish, which left behind fossils more than 500 million years old, were jawless types known as agnathans. Although some of these creatures developed formidable-looking armor and spikes, agnathans were largely phased out after jawed fish, the gnathostomes, surfaced around 450 million years ago. Fish today largely fall into one of two categories: Osteichthyes, or bony fish, and Chondrichthyes, which include cartilaginous species like sharks and rays. The only jawless fish to survive into modern times are lampreys and hagfishes.

Whale shark and underwater photographer.
Credit: Krzysztof Odziomek/ Shutterstock

Fish Sizes Range From a Few Millimeters to 40 Feet Long

The largest fish in the world is the whale shark, which regularly surpasses 40 feet in length and 20 tons in weight, but has also reportedly reached extremes of 65 feet and 42 tons. The sunfish, known to top out at around 10 feet and around 5,000 pounds, is the largest bony fish. At the other end of the spectrum is the blink-and-you’ll-miss-it stout infantfish, which generally grows to about 7 millimeters and a weight of less than 1 milligram.

Fish school on underwater coral reef in the ocean.
Credit: Rich Carey/ Shutterstock

Fish Schools Have No Clear Leader

As opposed to shoals, which are loose collections of fish in a given area, schools are marked by the highly synchronized movements of a group of swimmers who turn, dart, and dive in near-unison. Beyond providing an impressive underwater display, schools serve a practical function by helping individuals swim and eat more efficiently and providing safety in numbers. Unlike with, say, a flying flock of geese, there is no one leader who sets the tone for the rest of the school to follow; fish instead react to the actions of their immediate neighbors, thanks to a sensory system that detects changes in movement and pressure, known as the lateral line.

Koi Fish near green underwater plants.
Credit: koifish/ Shutterstock

Fish Communicate in Numerous Ways

Given the massive numbers and diversity of fish found throughout the world’s waterways, it should come as no surprise that these animals have developed wide-ranging methods of communication. Some relay information about schooling, mating, and territorial marking by way of chemical signals, while others get the message across by emitting electrical pulses or light. Additionally, researchers have come to realize that fish communicate with sounds far more frequently than previously thought, with more than 800 species expressing themselves by way of various grunts, groans, barks, moans, and hoots.

A fish scale (striped bass) used to determine the fish's age by reading the growth rings.
Credit: gary corbett/ Alamy Stock Photo

You Can Tell a Fish’s Age by Counting Its Rings

They may not sport wooden trunks like trees, but bony fish do have ear stones known as otoliths, which grow at varying intervals through the course of a lifetime. Calcium carbonate accumulates on these structures in a thick, opaque layer during heavier feeding periods, usually in summer, but accrues in a thinner, more translucent layer when food supplies are sparse; taken together, a set of alternating light-dark rings counts as one year of growth. Similarly, ever-growing fish scales produce rings of alternating thickness as surrounding water temperatures heat, cool, and heat again, and can be paired off to delineate a passing year.

Two goldfish swimming in an Aquarium.
Credit: Mirek Kijewski/ Shutterstock

Fish Have Shown a Capacity to Learn and Solve Problems

Although the consumption of fish is known to be good for our brains, fish are largely dismissed when it comes to naming the smartest animals. Yet recent evidence has shown that fish possess stronger cognitive capabilities than originally believed — they can use tools to obtain food and even perform rudimentary math. The scientists behind a 2014 study also demonstrated that goldfish could be trained to “drive” a tank mounted on wheels, revealing navigational abilities that transcend the context of their immediate environments.

Jar with goldfish in the hands of a young girl.
Credit: Konstantin Tronin/ Shutterstock

Fish Rank Third Among America’s Most Popular Pets

While not as playful or snuggly as other common domestic animals, fish are among the most popular pets around. According to the American Pet Products Association, freshwater fish resided in approximately 11.1 million U.S. households in 2023; that placement ranked third behind dogs (65.1 million households) and cats (46.5 million). And the reason for this may well be the relatively low financial expenditures required: In 2018, per Business Insider, pet owners were shelling out nearly $140 per month for dogs and $93 for cats, but just $62.53 per month for their aquatic companions.

Tim Ott
Writer

Tim Ott has written for sites including Biography.com, History.com, and MLB.com, and is known to delude himself into thinking he can craft a marketable screenplay.

Original photo Joe Hendrickson/ iStock

If you’re anything like the average American, you spend plenty of time in the kitchen — over 400 hours a year, according to a 2022 poll. The kitchen is considered by many to be one of the most important rooms in the home, and if the facts that follow are any indication, it may also be one of the most interesting. Here are six delicious, surprising facts about the room in which you cook, socialize, and store your food.

View of an architect Margaret Schütte-Lihotzky inspired kitchen.
Credit: ullstein bild Dtl via Getty Images

The Modern Apartment Kitchen Was Invented by a Member of the Anti-Nazi Resistance

If you live in an apartment or condo, your kitchen is likely modeled off of a design pioneered by Austrian architect Margarete Schütte-Lihotzky in 1926. Her model kitchen, known as the Frankfurt Kitchen, was invented in response to rising urbanization and the emerging field of domestic science. Schütte-Lihotzky was inspired by the tight quarters of railroads and ships, and created the efficient, small galley kitchen we know today.

Each Frankfurt kitchen had labeled storage bins, easy-to-clean surfaces, built-in accessories like ironing boards and a drawer for garbage, and a swiveling stool so housewives could achieve what Schütte-Lihotzky called “the rationalization of housework.” Designed with busy, independent women in mind, the model kitchen was adopted throughout Europe. Over 10,000 were produced between 1925 and 1930.

Yet Schütte-Lihotzky wasn’t just a kitchen pioneer. She was also an ardent Communist who became a member of the anti-Nazi resistance during World War II. She was arrested and imprisoned by the Gestapo, and barely escaped with her life after Vienna was liberated in 1945. As a Communist, she suffered a lack of commissions in Austria during the Cold War, but worked in China, Cuba, the Soviet Union, the German Democratic Republic, and elsewhere.

View of a turnspit dog next to cooking supplies.
Credit: Chronicle/ Alamy Stock Photo

Dogs Used to Cook Their Owners’ Food

Before there were modern kitchens, there were open-fire facilities where cooks roasted, braised, and boiled their food. But those cooks weren’t always human: In the 16th century, British breeders actually created a breed of dog designed to turn the wheels of spits in kitchens. Known as Canis vertigus (“dizzy dog” in Latin), the “turnspit dog” ran in a hamster-like contraption that kept meat-roasting spits turning … and turning … and turning.

Short and sturdy, turnspit dogs were often given Sundays off so they could accompany their owners to church. But the hardy dogs couldn’t survive the drop-off in demand that accompanied more modern kitchens, and the breed has since gone extinct.

View of a kitchen microwave and stovetop.
Credit: inhauscreative/ iStock

Microwaves Were Discovered By Accident

The development of radar helped the Allies win World War II — and oddly enough, the technological advances of the war would eventually change kitchens forever. In 1945, American inventor Percy Spencer was fooling around with a British cavity magnetron, a device built to make radar equipment more accurate and powerful, when he realized it could do something else: cook food.

With the help of a bag of popcorn and, some say, a raw egg, Spencer proved that magnetrons could heat and even cook food. First marketed as the Radarange, the microwave oven launched for home use in the 1960s. Today, microwaves are as ubiquitous as the kitchen sink — all thanks to the Allied push to win the war.

Bananas on banana hanger on a kitchen counter.
Credit: Jun Zhang/ iStock

There’s a Reason Your Bananas Don’t Taste Like Banana Flavoring

Have some bananas ripening on your kitchen counter? You may have wondered why they don’t taste anything like the “banana flavor” you find in candy and ice pops. That’s because banana flavor is thought to be based on the Gros Michel banana — not the banana variety most commonly sold today. Also known as Big Mikes, these bananas were prized for their sweet flavor and reigned supreme on the world banana market. Then, a soil fungus called Panama disease decimated the variety in the 1950s, and farmers turned to Cavendish bananas to fill world demand instead.

Julia Child sits inside her kitchen.
Credit: TIM SLOAN/ AFP via Getty Images

This Cooking Icon’s Entire Kitchen Is in the Smithsonian

Culinary guru Julia Child hosted her TV shows, and her foodie friends, inside a roomy, efficient kitchen in her Cambridge, Massachusetts, home. Complete with high countertops to accommodate her 6-foot-3 frame, pegboards for wall storage, and a restaurant stove the chef bought in the 1950s, the kitchen was packed with cookbooks, appliances, and Child’s iconic pots and pans.

Though the chef died in 2004, her kitchen lives on at the Smithsonian Institution’s National Museum of American History in Washington, D.C. Child donated it to the museum in the early 2000s, and museum workers painstakingly documented it down to the items in each cabinet, then moved the entire thing to the museum and reassembled it there.

According to the Smithsonian, the kitchen — still on display on the museum’s ground floor — was arranged exactly as Child had it when she donated it. “Only the walls and floor were fabricated by the museum and the bananas and tomatoes are replicas,” the museum writes. “Everything else was Julia’s.”

Close up of woman hands cutting fresh avocado in a modern kitchen.
Credit: BONDART PHOTOGRAPHY/ Shutterstock

This Could Be the Most Dangerous Item in Your Kitchen

If you’re like most home cooks, you’re cautious with home appliances and open flames. But there’s reason to fear an unexpected item in your kitchen: the avocado. The pitted fruit is a must-buy for anyone who loves guacamole or avocado toast, but it presents a clear and present danger for many home cooks.

One 2020 study describes an “epidemic” of hand injuries due to attempts to cut the green guys, citing an estimated 50,413 avocado-related injuries between 1998 and 2017. Women between 23 and 39 years of age were most at risk, with the majority of avocado-related ER visits occurring on Sundays between April and July. So be forewarned — and treat the delicious delicacies with the respect they deserve.

Erin Blakemore
Writer

Erin Blakemore is a Boulder, Colorado journalist who writes about history, science, and the unexpected for National Geographic, the Washington Post, Smithsonian, and others.

Original photo by Cinematic/ Alamy Stock Photo

It’s pumpkin spice season, and that means it’s time for Linus, Lucy, Snoopy, That Round-Headed Kid, and the whole gang to appear in It’s the Great Pumpkin, Charlie Brown. One of the most beloved of the animated “Peanuts” specials, it’s based on Charles M. Schulz’s long-running comic strip. The newspaper comic debuted in 1950, and the nearly 18,000 strips published before Schulz’s death in 2000 make “Peanuts” perhaps the longest-running story ever told by one person. Whether you’re waiting in the pumpkin patch with Linus or trick-or-treating (not for rocks!) with everyone else, here are five fun facts about some of America’s favorite cartoon specials.

Bill Melendez, the animator for Peanuts film and television productions, at his Sherman Oaks studio.
Credit: David Bohrer/ Los Angeles Times via Getty Images

First, There Were Fords

In 1956, country and gospel singer Tennessee Ernie Ford became the host of the prime-time musical variety program The Ford Show, which was sponsored by the Ford Motor Company (no relation). In 1959, Ford licensed the “Peanuts” comic strip characters to do TV commercials and intros for the show, hiring film director and animator José Cuauhtémoc “Bill” Melendez to bring the figures to life. Melendez, who started his career at Walt Disney Studios, was the only artist whom Schulz would authorize to animate the characters. The multitalented Melendez also provided the “voices” for Snoopy and Woodstock.

A CHARLIE BROWN CHRISTMAS, 1965 film.
Credit: Allstar Picture Library Limited/ Alamy Stock Photo

And Then Came Christmas

The animated commercials (and The Ford Show) were a huge hit. On December 9, 1965, the 30-minute A Charlie Brown Christmas made its debut on CBS. Some predicted that the show’s use of child actors, lack of a laugh track, and jazz soundtrack would render it a flop. Instead, A Charlie Brown Christmas won an Emmy and a Peabody and became an annual tradition, airing on broadcast television for 56 years before moving to the Apple TV+ streaming service in 2020. Jazz composer and pianist Vince Guaraldi’s score became a bestselling album, with more than 5 million copies sold. It’s the second-oldest recurring holiday animation, coming after Rudolph the Red-Nosed Reindeer, which made its first appearance in 1964.

A CHARLIE BROWN CHRISTMAS, 1965 film.
Credit: Allstar Picture Library Ltd/ Alamy Stock Photo

A “Peanuts” Special Probably Killed (Aluminum) Christmas Trees

A Charlie Brown Christmas was a critique of the materialism and commercialism of the Christmas season, and was especially harsh on the mid-’60s mania for shiny aluminum trees. The Mirro Aluminum Company (then known as the Aluminum Specialty Company) of Manitowoc, Wisconsin, began producing Evergleam aluminum trees in 1959, and at its peak in 1964, made around 150,000 of them a year. In the special, Lucy orders Charlie Brown to “get the biggest aluminum tree you can find … maybe paint it pink!” Charlie Brown instead chooses a half-dead, barely needled little fir. Sales of the shiny fake trees plummeted soon after.

IT'S THE GREAT PUMPKIN CHARLIE BROWN, 1966 film.
Credit: Allstar Picture Library Ltd/ Alamy Stock Photo

Halloween and Thanksgiving Came After Christmas

The first “Peanuts” special was such a hit that it soon spawned an entire industry of “Peanuts” specials. Many were themed around holidays, including Arbor Day. It’s the Great Pumpkin, Charlie Brown, which aired in 1966, has our poor hero receiving rocks instead of candy while trick-or-treating. The plot of A Charlie Brown Thanksgiving, meanwhile, which aired in 1973, has Peppermint Patty inviting the gang to Charlie Brown’s house for dinner — even though he’s supposed to eat with his grandmother. Linus, Snoopy, and Woodstock pull together a feast of toast, popcorn, pretzels, and jelly beans … but there’s a happy traditional turkey for everyone at the end.

A Boy Named Charlie Brown, 1969 film.
Credit: LMPC via Getty Images

The “Peanuts” Empire Extends Beyond the Holidays

In addition to the holiday-themed programs (which included shows for New Year’s, Valentine’s Day, and Easter), the “Peanuts” specials empire includes a full-length feature, A Boy Named Charlie Brown. Released in 1969 after the success of other specials, A Boy Named Charlie Brown has its namesake competing in the National Spelling Bee, only to blow his chances by misspelling the word “beagle.” There are also documentaries and television series, including new releases like Welcome Home Franklin, which aired for the first time in 2024.

Cynthia Barnes
Writer

Cynthia Barnes has written for the Boston Globe, National Geographic, the Toronto Star and the Discoverer. After loving life in Bangkok, she happily calls Colorado home.

Original photo by ju_see/ Shutterstock

There are many things to love about fall — from the brisk air to the tantalizing scent of pumpkin spice — but there’s one striking visual that sets autumn apart from any other season: the brilliant hues of red, orange, and yellow foliage. From the chemical composition of leaves to their surprising place in the Japanese culinary world, here are six fascinating facts about the science and culture of autumn leaves that may leave you (sorry) wanting more.

Beautiful landscape with magic autumn trees and fallen leaves in the mountains.
Credit: Andrij Vatsyk/ Shutterstock

Deciduous Trees Change Color, But Coniferous Trees Don’t

The bright crimson and gold tones of fall foliage are found primarily on the branches of deciduous trees, an arboreal subset that includes oaks, maples, birches, and more. The word “deciduous” itself stems from the Latin decidere, meaning “to fall off,” and the term is used to describe trees that — unlike conifers and other evergreens — lose their leaves during the autumn as they transition into seasonal dormancy. Deciduous trees have broadleaves: flat, wide leaves that are more susceptible to weather-induced changes compared to the thin needles of their coniferous counterparts.

As sunlight decreases and temperatures drop, chlorophyll production in these broadleaf trees slows and eventually stops, which in turn gives way to other pigments that produce the red, orange, and yellow tones of autumn. There are some geographic exceptions to this rule, however. Deciduous trees in the southern United States are more likely to maintain their green color than those in the North, primarily due to the region’s milder winters.

No matter the location, most coniferous trees — a group that includes pines, spruces, and firs — will maintain their green needles year-round. The needles feature a waxy coating that protects them from the elements and contain a fluid that helps them resist freezing. Those factors create conditions that allow coniferous trees to survive harsh winters with their verdant colors intact, although conifers will lose some of their oldest needles each fall. A rare subset, called deciduous conifers, crosses both worlds, with needles that change to brilliant hues and then drop off each fall.

Close-up of bright autumn leaves in water.
Credit: ju_see/ Shutterstock

A Leaf’s Color Is Determined by Its Tree Type

There are three different pigments responsible for the coloration of autumn leaves: chlorophyll, carotenoids, and anthocyanin. Chlorophyll, the most basic pigment that every plant possesses, is a key component of the photosynthetic process that gives leaves their green color during the warmer, brighter months. The other two pigments become more prevalent as conditions change. Carotenoids are unmasked as chlorophyll levels deplete; these produce more yellow, orange, and brown tones. Though scientists once thought that anthocyanin also lay dormant during the warmer months, they now believe that production begins anew each year during the fall. The anthocyanin pigment not only contributes to the deep red color found in leaves (and also fruits such as cranberries and apples), but it also acts as a natural sunscreen against bright sunlight during colder weather.

During the transformative autumnal months, it’s easier to discern the types of trees based on the color of their leaves. Varying proportions of pigmentation can be found in the chemical composition of each tree type, leading to colorful contrasts. For example, red leaves are found on various maples (particularly red and sugar maples), oaks, sweetgums, and dogwoods, while yellow and orange shades are more commonly associated with hickories, ashes, birches, and black maples. Interestingly, the leaves of an elm tree pose an exception, as they shrivel up and turn brown.

Autumn leaves falling to the ground in city park.
Credit: borchee/ iStock

The Etymology of the Word “Fall” Refers to Falling Leaves

Prior to the terms “fall” and “autumn” making their way into the common lexicon, the months of September, October, and November were generally referred to as the harvest season, a time of year for gathering ripened crops. Some of the first recorded uses of the word “fall” date back to 1500s England, when the term was a shortened version of “fall of the year” or “fall of the leaf.” The word “autumn,” which came into English even earlier from the French automne, was popular among writers such as Chaucer and Shakespeare. By the 18th century, “autumn” had become the predominant name for the season in England, though over the following century, the word “fall” would grow in popularity across the Atlantic. And while some British English purists consider “fall” to be an Americanism, the term actually originated in England; both “autumn” and “fall” are used interchangeably today.

Close-up of tempura maple leaves.
Credit: Aflo Co., Ltd./ Alamy Stock Photo

Tempura-Fried Maple Leaves Are a Japanese Delicacy

While most Americans rake up autumn leaves and throw them into a garbage bin, in Japan, they are the main ingredient of a delicacy. Momiji tempura is a popular snack that originated in the city of Minoh, about 10 miles north of Osaka, where the first commercial fried leaf vendor opened in 1910. Legend has it that around 1,300 years ago, a traveler was so taken by the beauty of the autumn maple leaves in the region that he decided to cook them in oil and eat them. Fear not if you’re a germaphobe, though — the leaves used in momiji tempura are freshly picked off trees, never scooped up from the ground. Preparation involves soaking the maple leaves in salt water (sometimes for up to a year), frying them in a tempura batter, and coating them with sugar and sesame seeds for a sweet, crunchy treat.

American flag in the autumn trees.
Credit: RobShea/ Shutterstock

American Trees Produce Redder Leaves Than Northern European Ones

While America is home to a wide array of both reddish and yellow autumnal hues, trees in Northern Europe are more universally yellow in color. One fascinating theory traces the difference back some 35 million years, to a long series of ice ages. During those glacial periods, America’s north-to-south mountain ranges allowed animals on either side to migrate south to warmer climates, whereas the east-to-west Alps of Europe trapped many animal species, which went extinct as freezing conditions took hold in the north. The result was American trees producing more anthocyanins — and thus a darker red color — to help ward off insects, whereas European trees didn’t need to do the same, since many of the insects that once threatened them had died out. This phenomenon also occurred in East Asia, where forests closely resemble those in America, as opposed to the uniquely yellow forests of Northern Europe.

Momijigari by Okada Saburosuke (Isetan Mitsukoshi)
Credit: UtCon Collection/ Alamy Stock Photo

Modern-Day Leaf Peepers Have a Centuries-Old Japanese Precedent

There’s a term for serious fans of fall leaves: leaf peepers. These hobbyists travel near and far to view some of the most spectacular autumn forests around the world. In certain areas across the United States, such as New England, leaf peeping has proven to be a multi-billion-dollar industry, with those states welcoming millions of tourists annually to view the breathtaking foliage. People can even join fellow leaf peepers on online forums to report fall foliage sightings and view webcams, handy tools for anyone unable to travel to see these forests in person.

A similar tradition in Japan is called momijigari, or “autumn-leaf hunting.” The custom is believed to have started among the elite during the Heian Period toward the end of the eighth century, and later gained widespread popularity during the Edo Period in the 18th century. With around 1,200 species of trees (out of an estimated 73,300 on Earth), Japan is certainly a prime location for leaf peeping.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by New Africa/ Shutterstock

The color yellow is one of the more perplexing hues on the color wheel. Humans perceive the color as energetic and warm, but also as aggressive and frustrating. Some cultures consider it a symbol of power and good fortune, while others conflate the color with weakness (“yellow-bellied”) or manipulation (“yellow journalism”). These six facts explore the ins and outs of the tone and all the various ways it fills our everyday lives.

Yellow Pages directory showing categories of information.
Credit: Stephen VanHorn/ Shutterstock

Yellow Pages Were Created Because a Printer Ran Out of White Paper

One day in 1883, Cheyenne, Wyoming-based printer Reuben H. Donnelley was busy printing the latest edition of the phone directory when he unexpectedly ran out of white paper. Unwilling to put off production until he could restock, he instead resorted to finishing the job with yellow paper, unknowingly creating an icon of the then-nascent information age. After subscribers commented on how these yellow pages were easy to find amid piles of white-hued publications, Donnelley produced the first official Yellow Pages phone book three years later. Using the color yellow for telephone business directories then became the norm around the world.

Drawing of a happy smiling emoticon on a yellow paper.
Credit: Gonzalo Aragon/ Shutterstock

The Creator of the Yellow Smiley Face Only Made $45

The bright-yellow smiley face is a symbol baked into the fabric of the digital age, providing the foundation for the emoji that have become the techno-hieroglyphics of modern life. But that very first exaggerated face had to come from somewhere — and that somewhere was Worcester, Massachusetts. In 1963, the State Mutual Life Assurance Company reached out to graphic designer Harvey Ball to create a symbol to help boost company morale. “I made a circle with a smile for a mouth on yellow paper, because it was sunshiny and bright,” Ball later told the Associated Press. For 10 minutes of work, he received $45 — not a bad rate, but not exactly commensurate with the $500 million business that yellow-hued grin inspired.

Huangdi, First Emperor of a unified China.
Credit: Pictures from History/ Universal Images Group via Getty Images

The First Emperor of China, Huangdi, Was Known As the Yellow Emperor

Although purple is often associated with emperors, kings, and queens throughout European history, yellow is the color of royalty in ancient China. This has to do in large part with the country’s quasi-mythological emperor Huangdi, who supposedly ruled around the 27th century BCE. It’s traditionally believed that Huangdi (huang means “yellow” in Chinese) introduced wooden houses and the bow and arrow, and defended China against bands of marauding barbarians. For these (supposed) efforts, Huangdi now stands as a legendary figure and mythical progenitor of the Han Chinese people. Although the historicity of Huangdi has been called into question by historians in the 20th century (CE, that is), his story has exalted the color yellow in Chinese culture as a hue that embodies royalty, power, and good fortune.

A view of the blinding hot sun.
Credit: Ed Connor/ Shutterstock

The Sun Isn’t Actually Yellow

Many kindergarten drawings magnetized to fridges around the world feature a big yellow sun with bright rays shooting in all directions. But is the sun actually yellow? How our eyes perceive the sun’s color relies on a variety of things, including the light’s intensity, environmental factors, and the limitations of human biology. Because Earth’s atmosphere is so effective at scattering blue light, the light that eventually reaches our eyes has a slight yellow tint to it. When the sun is closer to the horizon at sunset, its light passes through even more of Earth’s atmosphere, scattering even more blue light and giving our host star a warmer, reddish hue. But when astronauts escape the confines of Earth’s atmosphere, the sun they see is completely white, as the star emits across the entire electromagnetic spectrum, from radio waves to gamma rays. According to NASA, however, the sun emits most of its energy at around 500 nanometers, in the visible spectrum. This means the sun is technically blue-green, but the physical limitations of our eyes prevent us from perceiving it.
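
The roughly 500-nanometer figure follows from basic blackbody physics. Below is a minimal sketch (not from the article) that estimates the sun’s peak emission wavelength using Wien’s displacement law; the temperature and constant are standard textbook values.

```python
# A minimal sketch, assuming the sun radiates roughly like an ideal blackbody
# at its effective surface temperature of about 5,772 K. Wien's displacement
# law then gives the wavelength at which emission peaks.

WIEN_CONSTANT = 2.898e-3  # Wien's displacement constant, in meter-kelvins
SUN_TEMPERATURE_K = 5772  # approximate effective surface temperature of the sun

peak_wavelength_nm = (WIEN_CONSTANT / SUN_TEMPERATURE_K) * 1e9
print(round(peak_wavelength_nm))  # ~502 nm, in the blue-green part of the spectrum
```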

Yellow school bus.
Credit: ErickN/ Shutterstock

We Are Biologically Wired To See School Bus Yellow

Parked outside schools across the U.S. are bright-yellow school buses, and that not-so-subtle color assault on your eyes is by design. In 1939, school transportation officials met at Columbia University to standardize buses in an effort to make them both safer and cheaper to mass-produce. During this meeting, 50 shades of yellowish orange were pinned to the wall, with Color 13432 — known today as National School Bus Glossy Yellow — eventually emerging as the winner. Humans are trichromatic, meaning our eyes have three types of photoreceptor cells (red, blue, and green), and the wavelength of school bus yellow sits at the point where two of our three photoreceptors (red and green) are equally stimulated — so it essentially sends double the signal to the brain.

Yellow flowers given on a first date.
Credit: Prostock-studio/ Shutterstock

Maybe Avoid the Color Yellow on a First Date

While yellow is beloved by some cultures, school transportation boards, and certain springtime pollinators, the color scores low marks when it comes to fashion. According to a 2010 study published in the journal Evolutionary Psychology, yellow ranks among the lowest for both men and women when it comes to a “mean attractiveness score.” Conversely, the highest-scoring colors among both sexes were red and black.

Additionally, an unrelated survey held in 2013 by a U.K. online retailer asked around 2,500 adults about the attractiveness of certain clothing colors, and yellow again scored low marks. To add insult to injury, yet another survey found the color yellow inspired the least amount of confidence (along with orange and brown).

However, some of these preferences may be regional. A 2019 study surveyed 6,000 people across 55 countries about their feelings for yellow in general, and found that preferences for yellow increased in areas with rainy weather far from the equator. This is likely because the color yellow is more closely associated with warm, sunny weather, which is rarer the farther you move away from the globe’s middle.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by Kristoffer Tripplaar/ Alamy Stock Photo

The Girl Scouts organization is known for exuding compassion, promoting leadership, and perhaps most famously of all, selling cookies. Since the group was established in 1912 by Juliette Gordon Low, Girl Scouting has blossomed into a global movement — a far cry from its humble origins as a single troop of 18 girls in Savannah, Georgia. In the United States, Girl Scouts raise money for their cause by selling their highly popular and ultra-decadent namesake brand of cookies. In honor of those mouthwatering snacks (which are on sale now!), here are six delectable facts about Girl Scout Cookies to sink your teeth into.

Two boxes of Girl Scout Cookies and milk.
Credit: LES BREAULT/ Alamy Stock Photo

There Are Three Mandatory Flavors Sold Each Year

Though there have been many changes to the kinds of Girl Scout Cookies sold over the decades, three stalwart flavors are mandated each year: Thin Mints, Do-si-dos (also called Peanut Butter Sandwiches), and Trefoils. None of these varieties existed in their current form in the earliest years of cookie sales, but a version of Thin Mints can be traced back to 1939, when troops started selling a flavor known as “Cooky-Mints.” By the 1950s, shortbread had joined the lineup, alongside the renamed Chocolate Mints and sandwich cookies in vanilla and chocolate varieties. Peanut Butter Sandwiches hit the scene soon after, and by 1966, all three of the aforementioned flavors were among the group’s bestsellers. Other cookies came and went in the decades that followed, but Thin Mints, Do-si-dos, and Trefoils have been staples since the 1970s — and for good reason.

Thin Mints are the Girl Scouts’ No. 1 bestselling cookie variety, and the most searched-for Girl Scout Cookies in the majority of U.S. states. Do-si-dos rank fifth in sales (after Samoas/Caramel deLites, Peanut Butter Patties/Tagalongs, and Adventurefuls), and Trefoils feature a version of the Girl Scout logo and were inspired by the original Girl Scout Cookie recipe.

Colorful boxes of Girl Scout cookies fill the back of a minivan.
Credit: MediaNews Group/Orange County Register via Getty Images

One Girl Scout Sold 100,000 Boxes of Cookies

Elizabeth Brinton may not be a household name, but she’s a legend among Girl Scout Cookie sellers. From 1978 to 1990, Brinton sold 100,000 boxes of cookies before ultimately hanging up what she called her “cookie coat.” She began by selling cookies door to door, but in 1985 she pivoted to setting up shop at a local Virginia metro station to sell the treats to passengers during rush hour. Brinton sold 11,200 boxes in that year alone, and was soon dubbed the “Cookie Queen” by the media. She went on to set the record for the most Girl Scout Cookies sold in a single year, with 18,000 boxes, though that number was nearly doubled in 2021 by Girl Scout Lilly Bumpus, who sold a staggering 32,484 boxes. Brinton’s career record of 100,000 boxes has since been surpassed, too, but the Girl Scout who broke it, Katie Francis, actually consulted the Cookie Queen for advice.

Brinton told Francis to “think outside of the box” — a maxim that served her well back in the 1980s. In 1985, Brinton wrote to her local congressman, Frank Wolf, to ask for his help in selling cookies to then-President Ronald Reagan, and in 1986, Wolf accompanied her to the White House, where she sold one box of every flavor to President Reagan. She also sold a few boxes to Reagan’s Vice President, George H.W. Bush, and Supreme Court Justices Sandra Day O’Connor, Harry A. Blackmun, and William H. Rehnquist.

Girl Scout displaying a calendar.
Credit: Bettmann via Getty Images

Girl Scouts Sold Calendars Instead of Cookies During World War II

Due to wartime shortages, the Girl Scouts briefly pivoted away from the culinary world during World War II. The U.S. government began rationing sugar in May 1942, and butter in March 1943 — both integral ingredients in the Girl Scout Cookie creation process. Because of this, the Girl Scouts had trouble filling orders, though in certain instances local troops were supplied ingredients by benefactors, or Girl Scouts baked cookies specifically for members of the military. Most troops, however, had to find other ways to raise money, so in 1944, the Girl Scout National Equipment Service began producing calendars to be sold for 25 cents.

Fortunately for both the Scouts and their customers, the cookie drought was only temporary. By 1946, ingredients were no longer being rationed, and cookie sales resumed and then grew; by 1950, the line of Girl Scout Cookies had been expanded to add new flavors.

Girl Scout offers cookie to TV's Jinx Falkenburg McCrary.
Credit: Bettmann via Getty Images

Girl Scout Cookies Were Originally Homemade

It may be hard to fathom today, given the sheer breadth of the current cookie operation, but Girl Scout Cookies were originally homemade. A troop in Muskogee, Oklahoma, baked and sold the first cookies in a school cafeteria in 1917, and other troops soon followed suit. A few years later in 1922, a Chicago-based magazine called The American Girl published a recipe to be used by Girl Scouts all over the country. It was just a simple sugar cookie containing butter, sugar, milk, eggs, vanilla, flour, and baking powder, but it was a hit with consumers.

Throughout the 1920s, Girl Scout Cookies were baked by troop members with help from their parents and members of the local community. The treats were subsequently packaged in wax paper, sealed with a sticker, and sold for 25 to 35 cents per dozen. It wasn’t until 1934 that the Girl Scouts of Greater Philadelphia Council became the first council to sell commercially baked cookies; within two years, the national organization began licensing the cookie-making process to commercial bakeries.

An array of Girl Scouts cookies.
Credit: Mariah Tauger/ Los Angeles Times via Getty Images

Girl Scout Cookies Differ Slightly Depending on Which Bakery Made Them

In the late 1940s, 29 bakers were licensed to make Girl Scout Cookies. Today, Girl Scouts get their goods from just two licensed bakeries: ABC Bakers in Virginia and Little Brownie Bakers in Kentucky. Depending on which bakery produces the cookies your local troop sells, you may find that the snacks have slightly different names. For instance, Tampa residents receive Samoas from Little Brownie Bakers, whereas people who live just a few hours away in Orlando chow down on the virtually identical Caramel deLites from ABC Bakers.

And it’s not just the branding that may differ from city to city. Cookies might also look or taste different due to minor discrepancies in each bakery’s recipes. For example, ABC’s Thin Mints are crunchier and mintier than Little Brownie’s richer and chocolatier version, and Caramel deLites are heavier on the coconut flavor than Samoas. A few cookies are also specific to one bakery: Currently, S’mores are made only by Little Brownie Bakers, while Lemonades are exclusive to ABC Bakers. (Little Brownie has a completely different lemon cookie called Lemon-Ups.) No matter which bakery provides the cookies, though, you’re in for an indulgent treat.

A pile of different Girl Scout Cookies.
Credit: The Washington Post via Getty Images

Over 50 Flavors Have Been Discontinued

Some Girl Scout Cookie flavors are likely never to go away, due to their enduring popularity, but not all cookies are so lucky. Some 51 former varieties have come and gone in the decades since the snacks were first introduced. That’s not to say these bygone flavors didn’t have their fans, of course; many people look back fondly upon these scrumptious but discontinued treats, which include Kookaburras, a combination of Rice Krispies and chocolate, and Golden Yangles, a savory cheddar cheese cracker. There’s always the possibility of a comeback, though, as Lemon Chalet Cremes made a brief return in 2007 after having been phased out in the 1990s. It was a short-lived run, but you can still hold out hope that your favorite former flavor may return someday.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Optimism Media, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by andrew1998/ Shutterstock

If Americans know of only a handful of U.S. poets, one of them is likely to be Emily Dickinson. She’s described as a kind of nun of poetry — a single woman who stayed home in a white dress writing poems only for herself.

But her full story is more complex. Emily Dickinson was not entirely reclusive; she thrived on her written correspondence, something like people today who spend hours socializing online, and shared her poetry with her inner circle. Despite her voluminous output, very few of her poems were published during her lifetime — and none with her permission. Find out more about this fascinating woman of letters below.

Portrait of Emily Elizabeth Dickinson c. 1846, American poet.
Credit: Culture Club/ Hulton Archive via Getty Images

Emily Dickinson Kept Up Lifelong Friendships

Emily Dickinson rarely left the grounds of her home after 1865, when she was in her mid-30s, for reasons that are unclear. But she was social. Scholars have about a thousand of her letters to at least a hundred friends and family members — which may only be a fraction of all the letters she wrote. She often enclosed a poem in her letters. In a 40-year friendship, she sent Susan Dickinson, her brother’s wife, more than 250 poems. Thomas Wentworth Higginson, the author of an Atlantic Monthly article that encouraged young people to write, received about 100 of her poems.

A Large Christian Cross Silhouette with a Rainbow.
Credit: D. Lentz/ iStock

Emily Dickinson Stopped Attending Church in Her 30s

In her most famous poem, “I heard a Fly buzz–when I died–”, she takes on a question most of us wonder about at some point: What is it like when we die? In the poem, she imagines people at her bedside waiting for the “King” to “Be witnessed-in the Room.” Instead she hears a fly and then enters darkness.  

Does this suggest a lack of faith? Again, it’s not clear. As a young girl, Emily attended services with her family every week, and participated in daily religious observances at home. Her mother was Calvinist, and in Emily’s teen years, several of her friends and her sister, father, and eventually her brother joined that Protestant movement. She resisted, saying to a friend, “I am one of the lingering bad ones.” By her late 30s, she had stopped attending services. However, her poetry continued to approach religious themes.

Portrait of Edward Dickinson.
Credit: Reading Room 2020/ Alamy Stock Photo

Her Father Was a U.S. Congressman

Dickinson was born into a prominent family. Her grandfather, one of the founders of Amherst College in Amherst, Massachusetts, overextended himself and lost the family’s land holdings. While most of the family left, her father, Edward Dickinson, stayed in Amherst and worked hard to reestablish the family’s standing. He succeeded, eventually entering politics and becoming a state representative and senator, then a U.S. representative.  

As a child, Emily was afraid of him. She came to see him as “the oldest and the oddest sort of foreigner,” a lonely man, despite his longtime residence in Amherst. However, he was appreciated. After he collapsed while giving a speech in the state legislature on a hot June morning in 1874 and died soon after, the town of Amherst shut down for his funeral.

Close-up of an Emily Dickinson poem in a book.
Credit: Andrii Chernov/ Alamy Stock Photo

She Composed Almost 1,800 Poems

Envelopes and scraps of paper suggest that Dickinson wrote notes spontaneously. Relatives say that she wrote at a table in her bedroom and at one in the dining room, where she could see the plants in her conservatory, and recited aloud in private. “I know that Emily Dickinson wrote most emphatic things in the pantry, so cool and quiet, while she skimmed the milk; because I sat on the footstool behind the door, in delight, as she read them,” her cousin Louisa wrote in her journal.

Among her papers discovered after Dickinson’s death were 40 handmade books that included more than 800 of her poems. All in all, she wrote close to 1,800 poems.

Portrait of Emily Dickinson sitting in a chair.
Credit: The History Collection/ Alamy Stock Photo

Only 10 of Her Poems Were Published in Her Lifetime

While Dickinson was alive, only 10 of her poems were published. Scholars believe that she did not authorize any of the publications, half of which appeared in the Springfield Daily Republican. (All were credited to “anonymous.”) The paper’s founder, Samuel Bowles, was a family friend, to whom Emily had sent about 40 of her poems.

Close-up of a white dress.
Credit: Evgenyrychko/ Shutterstock

Her White Dress Was Not Unusual

In her late 40s and early 50s, Dickinson wore a white cotton dress with mother-of-pearl buttons that was typical attire at home for women at the time. It could easily be cleaned with bleach. Did it mean anything to Emily that her dress was white? Her survivors chose to have her buried in white, within a white casket, when she died at age 55. It’s also true that the poet heroine of Aurora Leigh, a nine-book novel in verse by Elizabeth Barrett Browning, one of Emily’s favorites, wore white. But scholars don’t know if choosing that color had any particular significance for her.

Temma Ehrenfeld
Writer

Temma Ehrenfeld has written for a range of publications, from the Wall Street Journal and New York Times to science and literary magazines.

Original photo by tandemich/ Shutterstock

Instituted in 1582, the Gregorian calendar is one of humanity’s more accurate attempts to capture the journey of our Earth around the sun. Yet it’s far from the most complex calendar ever created. That accolade belongs to the Maya calendar, created by the Mesoamerican civilization that flourished around the sixth century BCE (although descendants of the Maya still exist today in Guatemala and elsewhere).

Marking time for thousands of years, the Maya calendar has been the subject of more than a century of scholarship, while also forming the bedrock of many kooky doomsday predictions (some that even NASA felt compelled to debunk). This strange dichotomy gives the Maya calendar the unenviable distinction of being the most widely known and simultaneously misunderstood historical calendar in existence.

These six facts clarify a few things about this fascinating cyclical calendar, and explore an entirely different way to count the days.

Maya calendar of Mayan or Aztec vector hieroglyph signs and symbols.
Credit: NurPhoto via Getty Images

The Maya Calendar Isn’t Entirely Maya

While historians refer to this ancient Mesoamerican calendar as “Mayan,” parts of the calendar likely originated with the Olmecs, an ancient civilization in southern Mexico that thrived from 1600 BCE to 350 BCE (or perhaps even some older, pre-Olmec society). Before the infamous Spanish conquistador Hernán Cortés arrived in the New World in 1519, every Mesoamerican society used some form of this ancient calendar, including the Aztecs, Zapotecs, and of course, the Maya, who improved upon this original calendar greatly. Although different societies had different names for aspects of the calendar, the Mayan vocabulary is what’s most commonly used.

View of one of the Mayan calendar variations.
Credit: Panther Media GmbH/ Alamy Stock Vector

The Maya Calendar Is Actually Made Up of Three Calendars

There are aspects of the Maya calendar that are fundamentally different from our own. In the Gregorian calendar, we consider days and months to be cyclical, in that they repeat over and over, but years are considered linear — it will never be the year 2022 again, for example. Mesoamerican cultures thought differently; in the Maya calendar, all time is cyclical, and the cycle of named years repeats every 52 years.

It helps to understand that the Maya calendar is actually many calendars. First, there’s the Calendar Round, which itself is made of two calendars. There’s the Sacred Round, which lasts 260 days. Theories suggest that it represents the human gestation period or possibly the Maya agricultural cycle. Then, there’s the haab, or Solar Round, a 365-day calendar. Together, each day using this two-calendar system has four designations — a day number and day name from the Sacred Round and a day number and month name from the Solar Round, which creates names like 3 Manik 14 Pohp or 4 Ahau 8 Cumku. Every 52 years, or 18,980 days, these names repeat.
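
As a quick check of that arithmetic, here is a minimal Python sketch (purely illustrative, not anything the Maya themselves used) showing why the two interlocking rounds realign only after 18,980 days, or 52 solar years:

```python
# The 260-day Sacred Round and the 365-day Solar Round cycle independently, so
# a full Calendar Round lasts their least common multiple in days.
import math  # math.lcm requires Python 3.9 or newer

sacred_round = 260  # days in the Sacred Round
solar_round = 365   # days in the Solar Round (haab)

calendar_round = math.lcm(sacred_round, solar_round)
print(calendar_round)                 # 18980 days
print(calendar_round // solar_round)  # 52 -> the 52-year cycle described above
```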

How do you keep track of time longer than 52 years? Well, that’s where the third calendar — the Long Count calendar — comes into play. In this calendar, a day is known as a k’in, 20 k’in make a uinal, and 18 uinal make a tun (which is about a year). After that, 20 tun make a k’atun, and 20 k’atun make a b’ak’tun — a unit of time nearly 400 years long.

A view of a long count calendar date.
Credit: Reading Room 2020/ Alamy Stock Photo

The Long Count Calendar Uses Five Numbers to Mark a Day

While the Maya used all three calendars concurrently, the Long Count calendar is the one that’s usually talked about when people are discussing the “Maya calendar,” and today, scholars use five Arabic numerals when talking about dates associated with this calendar (the Maya used hieroglyphs). For example, on our calendar, U.S. Independence Day is 7/4/1776 (or 4/7/1776 in many other countries). The Maya Long Count calendar marks that day as 12.8.0.1.13. This number includes the b’ak’tun, k’atun, tun, uinal, and k’in, from left to right. So how do we read these numbers? To figure that out, we have to go back to what the Maya believed to be the beginning of creation (which isn’t as far back as you might think).

Close-up of an Aztec calendar.
Credit: VojtechVlk/ Shutterstock

The Long Count Calendar Starts on August 11, 3114 BCE

The Maya believed a cycle of creation lasted 13 b’ak’tun. When retroactively calculated, that means the Long Count calendar began on August 11, 3114 BCE. The 19th day (or k’in) of this calendar (August 30, 3114 BCE) would look like 0.0.0.0.19 before turning over to 0.0.0.1.0, and so on using the units mentioned above. July 4, 1776, or 12.8.0.1.13, is thus k’in 13 of uinal 1, in tun 0 of k’atun 8 of b’ak’tun 12.
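
Here is a minimal Python sketch of how those five digits can be computed from a modern date, assuming the widely used GMT correlation, which places the August 11, 3114 BCE epoch at Julian Day Number 584,283 (a correlation constant scholars use, not something stated in the article); the helper function names are illustrative.

```python
# A minimal sketch converting a Gregorian date to a Long Count date, assuming
# the standard GMT correlation (Long Count epoch = Julian Day Number 584,283).

def gregorian_to_jdn(year, month, day):
    """Julian Day Number for a (proleptic) Gregorian calendar date."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

def long_count(year, month, day):
    """Return the five Long Count digits for a Gregorian date."""
    days = gregorian_to_jdn(year, month, day) - 584283  # days since the epoch
    baktun, rem = divmod(days, 144000)  # 1 b'ak'tun = 20 k'atun = 144,000 days
    katun, rem = divmod(rem, 7200)      # 1 k'atun   = 20 tun    = 7,200 days
    tun, rem = divmod(rem, 360)         # 1 tun      = 18 uinal  = 360 days
    uinal, kin = divmod(rem, 20)        # 1 uinal    = 20 k'in   = 20 days
    return baktun, katun, tun, uinal, kin

print(long_count(1776, 7, 4))    # (12, 8, 0, 1, 13) -> 12.8.0.1.13
print(long_count(2012, 12, 21))  # (13, 0, 0, 0, 0)  -> the 13th b'ak'tun ticks over
```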

This system continued counting the days for thousands of years until the Long Count calendar approached its 13th b’ak’tun, the moment the Maya believed creation started anew. That milestone inspired some far-fetched doomsday predictions on the part of more creative prognosticators — though notably not the Maya themselves.

Mayan calendar for the year 2012.
Credit: Kram78/ Shutterstock

December 21, 2012 Was Not Doomsday According to the Maya

On December 21, 2012, the Long Count calendar clicked over to the 13th b’ak’tun and finished what’s known as a “Grand Cycle,” which lasts 5,125.366 years. Although the Maya believed this to be an important day, they simply assumed that a new “Grand Cycle” would begin on December 22, 2012. In other words, there is absolutely no evidence of apocalyptic Mesoamerican folklore surrounding this day.

But in the months leading up to this date, doomsayers nonetheless believed natural disasters or a colliding Planet X would end all life as we know it. Of course, none of this happened. In fact, for most of the 6 million Maya still living today, December 21, 2012, was just like any other day — perhaps even better because it was a Friday.

Close-up portrait of Ernst Förstemann.
Credit: The Picture Art Collection/ Alamy Stock Photo

The Maya Calendar Was Deciphered by a German Librarian

No one knew how to understand the complex system of Maya timekeeping for hundreds of years — most Mayan cities were abandoned by about 900 CE, for reasons historians are still debating, and the civilization went into a mysterious decline. But in the late 19th century, things began to change when Ernst Förstemann, a librarian at the Royal Library in Dresden, Germany, started poring over one of the most important documents of pre-Columbian civilization. Known as the Dresden Codex, it’s one of the oldest surviving books written in the Americas, dating to around 1200 CE — and it’s filled with Maya hieroglyphics.

Through his analysis, Förstemann discovered the inner workings of the three Maya calendars, and that the date 4 Ahau 8 Cumku, as found in the Calendar Round, was actually the start date of the Long Count calendar. Over the next century, scholars continued deciphering the intricate calendar and language of the Maya, revealing a complex people of great cultural and scientific sophistication.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by Collection Christophel/ Alamy Stock Photo

Undead. Hungry for blood. Centuries-old. Hollywood movie monsters have been keeping audiences awake at night and fearful about turning dark corners for more than seven decades. First popularized on the big screen in the 1930s during the silent-to-sound transition, these iconic black-and-white creations continue to frighten moviegoers and inspire modern updates in film, TV, and beyond. From a blood-thirsty vampire and an oversized ape to a creature lurking from the deep, here are the origins of seven haunting old-school movie monsters.

Bela Lugosi and Helen Chandler in Dracula.
Credit: Bettmann via Getty Images

Dracula (1931)

Universal Pictures hesitated before making a Dracula movie. When first presented with the idea in the 1920s, the studio worried about negative audience reactions to a supernatural tale centering around a bloodthirsty vampire. But then a successful play based on Bram Stoker’s novel Dracula (1897) arrived in theaters, and Universal became desperate for a hit. Dracula was greenlit with the intention of placing silent screen superstar Lon Chaney Sr., known as the “Man of a Thousand Faces,” in the title role. However, Chaney died in 1930. Bela Lugosi, who’d won over audiences in U.S. theatrical productions of Dracula, was subsequently hired to portray the vampire onscreen. His good looks, Hungarian accent, and ability to carry off a tux and cape (attire that had initially been seen on the stage) helped make the movie a hit.

Boris Karloff in his role as the resurrected monster in the classic Universal horror film 'Frankenstein'
Credit: Archive Photos/ Moviepix via Getty Images

Frankenstein (1931)

Dracula’s success prompted Universal to search for another monster movie for Lugosi, now a star. The studio opted for Frankenstein, based on Mary Shelley’s 1818 book, with Lugosi slated to play Frankenstein’s monster, a dead criminal’s body reanimated by science. Lugosi wasn’t thrilled about a role that called for his face to be hidden under layers of makeup, but he needn’t have been concerned. When James Whale was brought on to direct, he didn’t want Lugosi in the part, and instead selected Boris Karloff. The monster’s makeup was applied by Jack Pierce, who used his skills to create a flat dome on Karloff’s head to reflect the skull surgery the monster would have endured. Other touches, such as neck bolts and shortening the sleeves of Karloff’s coat to suggest long arms, resulted in an unforgettable archetype. Paired with Karloff’s acting abilities, which communicated the monster’s existential pain, this film won over critics and succeeded at the box office.

Boris Karloff in the 1932 motion picture The Mummy.
Credit: Bettmann via Getty Images

The Mummy (1932)

Universal soon wanted to feature Karloff in another monster movie: The Mummy. Rather than being based on a book or play, this movie was partially inspired by the Egypt-mania that swept the world following the discovery of King Tut’s tomb in 1922. The screenplay was penned by John L. Balderston, a former reporter who had written about the tomb. The film also played on fears of a so-called Curse of Tutankhamun, which had supposedly claimed the lives of several people with ties to the tomb’s opening. In the story, an Egyptian priest (Karloff) who was buried alive for trying to resuscitate his dead lover is himself restored to life when someone reads a magical scroll. Karloff appeared onscreen in bandages and in makeup that gave him an ancient, withered face (again thanks to Pierce’s skills). The movie, another hit for Karloff and Universal, installed mummies forever in the pantheon of movie monsters.

King Kong rises over NYC with Fay Wray in his clutches.
Credit: Bettmann via Getty Images

King Kong (1933)

Universal featured many cinematic monsters, but wasn’t the only studio to cash in on the phenomenon. In 1933, RKO Pictures wowed moviegoers with a rampaging giant ape known as King Kong. King Kong’s beginnings can be traced to Merian Coldwell Cooper filming exotic locations across the globe in the 1920s. His voyages sparked an idea for a movie that would feature a real gorilla in New York City — but then the Great Depression nixed any notion of getting the funds to shoot abroad or transport a gorilla. Cooper found a job at RKO, where he saw Willis O’Brien using stop-motion animation on another film. Cooper and RKO head David O. Selznick believed that this technique could work for a movie about an enormous ape on the loose in New York City. The result, which Cooper co-directed, was the perennially popular King Kong.

Lon Chaney Jr. grabbing actress Evelyn Ankers in the Universal picture "The Wolf Man."
Credit: George Rinhart/ Corbis Historical via Getty Images

The Wolf Man (1941)

Universal’s Werewolf of London (1935) wasn’t a big hit, but the studio eventually decided to try another werewolf film. In The Wolf Man, Lon Chaney Jr. plays Larry Talbot, who returns from America to his family’s Welsh estate. He’s bitten by a wolf soon after his arrival, which leads to his transformation into the Wolf Man. Screenwriter Curt Siodmak drew on legends of men transforming into destructive wolves, as well as lore that a werewolf emerges during a full moon and can only be killed by silver. The movie’s original title was Destiny, to evoke how outside forces can overshadow personal will. Audiences flocked to the film and empathized with the Wolf Man, cursed with an affliction he cannot control.

The 'gill man', a scaly creature with webbed and clawed hands.
Credit: John Kobal Foundation/ Moviepix via Getty Images

Creature From the Black Lagoon (1954)

Creature From the Black Lagoon was the last of Universal’s old-school monster successes. The initial idea came from writer and producer William Alland, who heard a tale in the 1940s about a fish-man living in the Amazon and wrote a story treatment in 1952. But it was the look of the monstrous creature that made the film stand out. The design was largely the work of Milicent Patrick, an artist employed by Universal’s special effects shop (though her male boss claimed credit at the time). For her creature designs, Patrick studied prehistoric life from the Devonian period, roughly 400 million years ago, when some species were beginning to leave the oceans for land. Though it had the option to shoot in color, Universal stuck with black and white, a cost-saving choice that visually links the creature to its predecessors.

Screen still of Godzilla 1954.
Credit: United Archives/ Hulton Archive via Getty Images

Gojira (1954) / Godzilla (1956)

Hollywood wasn’t the only place to birth movie monsters. In 1954, Japan’s Toho Studios released Gojira, the story of an ancient reptile violently awakened by nuclear testing. Director Honda Ishiro wanted to make the movie in part because of the devastation wreaked by the atomic bombs dropped on Hiroshima and Nagasaki during World War II. Further inspiration came early in 1954, when the crew of a Japanese fishing vessel suffered radiation sickness after being exposed to fallout from a nuclear test. Gojira, whose name combines the Japanese words for whale (“kujira”) and gorilla (“gorira”), was embraced by Japanese audiences. The film was re-edited for its U.S. release, with added scenes in which Raymond Burr played an American reporter following the monster’s rampage, but this version erased the original’s message about the dangers of nuclear weapons and testing. Gojira was given a new name as well, becoming Godzilla, King of the Monsters.

Michael Nordine
Staff Writer

Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.

Original photo by FotoDuets/ iStock

What’s not to love about blue? While not a common pigment in nature — our cave-painting ancestors rarely messed with the stuff — the color fills our days thanks to blue skies and deep blue oceans. Blue has been revered since Neolithic times, and today it remains one of humanity’s favorite shades. Here are six amazing facts about the color that prove why this little slice of the rainbow will always fascinate us.

Wooden pencils of different blue shades on light blue background.
Credit: H_Ko/ Shutterstock

Your Favorite Color Is Blue (Probably)

Since the first color surveys in the 1800s, blue has been our species’ favorite color overall, and the trend is truly global. One YouGov study from 2015 surveyed 10 countries across four continents and found that blue reigned supreme in all of them, scoring 8 to 18 percentage points higher than the second-place color. In the U.S., 32% of respondents preferred blue, with men (40%) choosing it far more often than women (24%). Green took the silver medal, and purple nabbed the bronze. Scientists who study color theorize that this widespread love of blue likely stems from its association with positive experiences, such as Earth’s bright blue skies and oceans. Oh, and humanity’s least favorite color? Dark yellowish brown.

An aerial view of Hong Kong and the blue sky.
Credit: Bogdan Okhremchuk/ iStock

The Sky Is Blue Because of the Color’s Wavelength

The sun sends its rays to Earth as white light, meaning they contain every color of the visible spectrum (red, orange, yellow, green, blue, indigo, violet). Blue stands apart because its relatively short wavelength (between 450 and 495 nanometers) is scattered much more strongly by molecules in the atmosphere, a process known as Rayleigh scattering. At midday, the sky is a paler blue because sunlight travels through less of the atmosphere; as the sun heads toward the horizon, light passes through more air, scattering more blue light and deepening the sky’s hue.
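To get a feel for how lopsided that scattering is, Rayleigh scattering grows with roughly the inverse fourth power of wavelength, so shorter-wavelength blue light is scattered several times more strongly than longer-wavelength red light. The sketch below is a back-of-the-envelope illustration under that assumption; the chosen wavelengths and the helper function are illustrative only.

```python
# A rough sketch of Rayleigh scattering's wavelength dependence
# (scattered intensity scales with roughly 1 / wavelength^4). The wavelengths
# below are illustrative picks, and real sky color also depends on the solar
# spectrum and the sensitivity of the human eye.

def relative_scattering(wavelength_nm: float, reference_nm: float = 700.0) -> float:
    """How strongly light at `wavelength_nm` is scattered, relative to red light."""
    return (reference_nm / wavelength_nm) ** 4

for color, nm in [("violet", 400), ("blue", 470), ("green", 530), ("red", 700)]:
    print(f"{color:>6} ({nm} nm): scattered ~{relative_scattering(nm):.1f}x as much as red")
```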

However, this is only half of the answer, because indigo and violet have even shorter wavelengths than blue, which raises the question: Why isn’t the sky violet? Answering that requires a closer look at the human eye. The cones inside the eye are tuned to perceive red, green, and blue (what’s known as trichromatic vision), and it’s the combination of these inputs that produces every variation of color we see. Because our eyes are far more sensitive to blue than to violet, the sky takes on that particular hue. Other animals likely perceive the sky (and the rest of the world) differently, because most mammals are dichromatic, meaning they have only two types of cones.

Close up of a little boy with bright blue eyes.
Credit: loops7/ iStock

Blue Eyes Aren’t Actually Blue

Until somewhere between 6,000 and 10,000 years ago, all humans had brown eyes; then a single genetic mutation caused one person to be born without the usual brown-black melanin pigment that colors irises brown. Irises lacking this pigment exhibit what’s known as the Tyndall effect, a scattering of light closely related to the Rayleigh scattering that makes the sky blue. Because of blue’s short wavelength, that part of the spectrum is scattered back most strongly by the fibers in the iris, so the eyes appear blue even though no blue pigment is present at all. Today about 10% of the world’s population has blue eyes, though that figure is skewed heavily by northern Europeans. In Finland and Estonia, for example, 89% of people have blue eyes, the highest percentage in the world. The U.S. comes in much lower, at around 27%.

Close-up of lapis natural gemstone on a wooden floor.
Credit: Luen Wantisud/ iStock

Lapis Lazuli Was a Blue Gem Prized by Many Ancient Civilizations

Ancient civilizations across Mesopotamia, Egypt, China, Greece, Rome, and India prized a blue gem known as lapis lazuli. The stone appears in the myths of the Sumerians as far back as 4000 BCE, and in ancient Egypt, pharaohs wore it in ostentatious pieces of jewelry (likely set alongside other blue gemstones such as sapphire and turquoise).

Lapis lazuli was largely mined in present-day northeastern Afghanistan starting as early as the seventh millennium BCE. The stone appears in some of the world’s earliest civilizations, including the Indus Valley Civilization in present-day India and Pakistan. Lapis lazuli continued playing a role in human culture for thousands of years, even up to the Renaissance, when painters often ground the gem to produce the highly sought-after pigment ultramarine. The pigment was so expensive that even legendary painters such as Michelangelo couldn’t afford it.

A blue wall of Chefchaouen, Morocco.
Credit: Anton_Ivanov/ Shutterstock

There’s a City in Morocco That’s Painted Blue

Chefchaouen is a city nestled in the Rif Mountains of northwestern Morocco. Home to one of the country’s most popular medinas outside of Marrakech, the city is already on the map for many tourists, but Chefchaouen also goes by a more famous moniker: “the Blue Pearl.” That’s because a large portion of the city is painted in various shades of blue.

There is no official explanation for why the city is awash in blue, though there are many local legends. Some say the color keeps buildings cool, others say it deters mosquitoes, and still others say it honors the nearby Ras el-Maa Waterfall. The leading theory, however, dates back to the 15th century, when Chefchaouen served as a refuge for Jews and Muslims fleeing persecution during the Spanish Inquisition. Per Jewish tradition, these refugees painted their homes blue (an important color in Judaism) as a reminder of God’s power. Although the Jewish population has since largely left the area, the Muslim-majority city kept up the tradition, and the Blue Pearl still shines centuries later.

Man hand with large veins on a blue background.
Credit: Yekatseryna Netuk/ Shutterstock

Your Blue Veins Are Actually an Illusion

Look at your arm and you’ll see blue veins crisscrossing just beneath the skin. That’s an optical illusion. Human veins are not blue; they’re essentially colorless. While deoxygenated blood is a darker shade of red (as you’ve likely seen if you’ve ever donated blood or had blood drawn), the blue appearance comes from the way skin scatters light, which makes the veins beneath it look blue. The perceived color can also change with skin tone, as darker skin tends to make veins appear more greenish. And while blue blood doesn’t occur naturally in humans, it is found in animals such as the horseshoe crab (whose blue blood has saved countless human lives). Horseshoe crab blood is blue because it carries oxygen with a copper-based pigment rather than iron-based hemoglobin.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.