Original photo by AleksandarNakic/ iStock

Think about the last time you stepped outside on a cold winter morning. You may have noticed the air smelled different: crisp, clean, even invigorating — though you couldn’t put your finger on why. It’s not your imagination; across climates and continents, people experience winter air as feeling and smelling fresher than at any other time of year.

But what makes cold air smell good? Is it the snow, the pine trees, or something intangible in the air itself? Scientists say it’s a combination of factors, involving both the world’s natural rhythms and the inner workings of the human brain.

Credit: Daniela Jovanovska-Hristovska/ iStock

There’s Less To Smell

One of the primary reasons winter air smells so good is that there are simply fewer aromas competing for your attention. Warm air holds more moisture, and that moisture helps carry smells. In summer, heat and humidity intensify odors from soil, plants, pavement, garbage, and pollution, creating a thick mix of scents — some pleasant and many less so.

Cold air, however, behaves differently. As temperatures drop, the tiny airborne molecules responsible for smell — called volatile organic compounds — move more slowly and evaporate less easily. Fewer of those odor molecules are released into the air from sources such as plants, soil, or decaying organic matter.

Wintry air is also usually drier, especially after a freeze. Without humidity to help transport odors, many everyday smells fade into the background. Pollen disappears, plant growth slows, and bacteria that cause decay become less active. 

The result isn’t that winter air smells like anything especially good — more accurately, it smells like less. And our brains tend to interpret that absence of competing smells as clean and fresh.

Credit: Tamer ALKIS/ iStock

Cold Air Sharpens the Senses

Cold weather changes not only the air but also how your body experiences it. When you inhale cold air, nerve endings inside your nose react to the temperature, which you perceive as a sharp, tingling sensation. That response comes partly from a nerve system that detects cold and irritation — the same system that makes mint or menthol feel refreshing.

At the same time, cold, dry air can slightly reduce the sensitivity of your smell receptors, the specialized cells that detect odors, meaning fewer smells register as strongly. But rather than dulling the sensation, this often has the opposite effect: With fewer odors coming in, each inhale feels clearer and more distinct.

The contrast also matters. Stepping from a warm indoor space into cold outdoor air creates an immediate sensory shift, prompting your brain to pay closer attention. Even if there’s less to smell, the physical sensation of cold air makes the experience feel sharper and more vivid.

Credit: BbenPhotographer/ iStock

You’re Not Smelling Snow

Many people, including beloved TV character Lorelai Gilmore, swear they can smell snow before it falls. But while winter may have a unique fragrance, snow is just frozen freshwater and therefore has no odor.

Those people are simply sensing the atmospheric changes that often precede snowfall. Humidity tends to rise, air pressure drops, and existing scents — trees, soil, distant wood smoke — can become more noticeable.

Cold air is also denser, allowing smells to linger longer and travel farther without dispersing as quickly. Over time, some people learn to associate that specific mix of cold, moisture, and stillness with approaching snow. So if you think you can smell an impending snowstorm, it’s not the snowflakes you’re smelling — it’s your brain recognizing a familiar winter pattern.

Credit: Jason/ iStock

Winter Chemistry at Work

Even in winter, plants continue to influence the way the air smells. Evergreen trees including pine, fir, and spruce produce aromatic compounds known as terpenes, which give the trees their characteristic scents and can be noticeable even in cold weather. In winter, with many deciduous plants dormant, those evergreen aromas stand out more clearly against a backdrop of muted seasonal smells.

Lower temperatures also suppress biological activity such as microbial decomposition, which otherwise releases musty, earthy odors, leaving relatively fewer unpleasant natural scents in the air. In some cases, subtle chemical reactions in the snow, soil, or frozen plants can even generate new, faint aromas unique to winter landscapes.

Credit: redtea/ iStock

Cold Doesn’t = Clean

Winter air may smell fresh, but that doesn’t always mean it’s objectively cleaner. In cities, for example, cold weather can actually trap pollutants close to the ground. A layer of cold, dense air can act like a lid, preventing exhaust and other pollutants from rising and dispersing. As a result, air quality can worsen even in winter. 

At the same time, cold temperatures slow the evaporation of odor-causing chemicals, so fewer strong or unpleasant smells reach your nose — which can make the air seem fresher than it really is.

Whether or not it’s cleaner, the crispness of winter air — which tends to be dry rather than humid — can make breathing feel more refreshing. And the simpler mix of scents and slower outdoor chemical activity can create a sense of clarity that smells great and feels restorative.

Kristina Wright
Writer

Kristina is a coffee-fueled writer living happily ever after with her family in the suburbs of Richmond, Virginia.

Original photo by Unsplash+ via Getty Images

Dreams are a universal human experience, yet they remain one of the most mysterious aspects of sleep. Researchers continue to explore why we dream, how long dreams last, and what we dream about — work that has uncovered some key insights, from the frequency of nightmares to the science behind lucid dreaming. 

While some dreams feel empowering and exciting, others can be stressful or scary, but all of these experiences are typically side effects of healthy brain activity during sleep. Below are five facts that explore the fascinating world of dreams.

Credit: Unsplash+ via Getty Images

Most People Have Recurring Dreams

An estimated 60-75% of adults have experienced at least one recurring dream in their lifetime. According to psychology professor Antonio Zadra of the University of Montreal, most recurring dreams are unpleasant, often appearing during periods of real-life stress and dissipating once the stressor is resolved. Many researchers agree that dreaming helps us process emotions and work through unresolved stress, which is why negative recurring dreams may fade once the underlying issue is addressed.

If you’ve ever dreamt about your teeth falling out or being late to a class, you’re not alone. Certain kinds of recurring dreams are common among both adults and children, including falling, being chased, flying, losing teeth, being naked in public, being late, and taking a test. Psychologists speculate that these themes may represent core emotions that emerge at certain points in our lives. For instance, dreams about falling may indicate feelings of anxiety or instability and may arise during times of transition or high stress.

These themes occur across age groups, though the content often differs. Children’s dreams about being chased, for example, often feature the dreamer being pursued by monsters, wild animals, witches, or ghoulish creatures. By contrast, adults may experience being chased by more grown-up concerns such as burglars, strangers, mobs, or shadowy figures.

Credit: Ron Lach/ Pexels

We Spend an Average of Two Hours Dreaming Each Night

Have you ever had a dream that felt like it lasted an entire day? Chances are it was only minutes long in reality, but those short scenes add up to around two hours of total dreaming per night. 

Most dreams occur during rapid eye movement (REM) sleep, a stage that lasts between 10 minutes and an hour. Because our bodies cycle through sleep stages multiple times per night, we experience an average of four to six REM periods nightly. The first REM period after falling asleep lasts for only a few minutes, so those early dreams are exceptionally brief.

Toward the early hours of the morning, REM periods lengthen, lasting around half an hour, with a maximum length of one hour. Still, it’s rare for a single dream to span that long. Instead, REM periods usually consist of multiple shorter dreams. 

Scientists use several methods to determine the lengths of dreams. One of the most common is electroencephalography (EEG), which measures brainwaves, allowing researchers to determine when participants are in REM sleep and dreaming. Similarly, fMRI (a type of brain imaging) measures blood flow, showing which areas of the brain are active during dreaming. 

Another valuable tool is dream reporting, wherein scientists wake participants after a timed REM period and ask for dream reports, linking REM duration with perceived dream length. Combining those methods helps scientists determine roughly how long the average sleeper dreams. 

Dreams can also occur during the other stages of sleep. There are four sleep stages in total: REM and three phases of non-rapid eye movement (NREM) sleep, N1, N2, and N3. While research indicates NREM dreams do occur, they’re less frequent and much shorter than REM dreams — think of them as a fleeting thought rather than a complex dream featuring storylines and details. That’s because the brain is much more active during REM sleep than during NREM, leading to more vivid dreams.

Because NREM dreams are short, incomplete thoughts rather than full narratives, they don’t account for much of our total dreaming time. This is why researchers use REM duration as a proxy for estimating total dream time, leading to the estimate of roughly two hours of dreaming per night, according to the National Institutes of Health.

Credit: nadia_bormotova/ iStock

Our Brains Temporarily Paralyze Us While We Dream

Have you ever wondered why you don’t fall out of bed during an especially animated dream? The brain has a special protective mechanism to keep us safe and sound: It temporarily paralyzes us during REM, the stage of sleep involving vivid dreaming. When we’re in REM sleep, many physiological changes can occur, including increases in blood pressure, heart rate, brain activity, and breathing. In fact, most neurons in our brains fire just as much, or sometimes more, during REM sleep than they do when we’re awake. 

This allows for very emotional, intense, and elaborate dreams during the REM cycle. Of course, our brains must also protect our bodies from acting out these scenarios. To accomplish this, the pons (the part of the brainstem that handles unconscious processes) and the rostral ventromedial medulla (the part that can block or amplify pain signals sent to the spinal cord) work together to suppress skeletal muscle tone, a process known as muscle atonia.

That near-total paralysis of voluntary muscles turns physical readiness off during REM sleep, allowing us to sleep soundly. During NREM sleep, muscle tone is reduced but not eliminated, though most NREM dreams are typically less vivid and physically demanding.

Credit: Rawpixel/ Adobe Stock

Nightmares Are Different From Night Terrors

Bad dreams can take different forms: Nightmares are more common and generally less intense, while night terrors can be severe and disruptive. 

Nightmares are often related to real-life stressors, such as a child’s fear of separation or an adult’s job insecurity. But they may also be fictional and unrelated to waking events. An estimated 20-30% of children and 5-8% of adults experience frequent nightmares, which often occur in the second half of the night, during those longer stretches of REM sleep.

A night terror is far more intense than a nightmare, often startling the dreamer awake. Usually occurring early in the night during NREM sleep, night terrors are caused by the overstimulation of the central nervous system, leading to sudden waking, crying, screaming, confusion, and other unpleasant reactions. Despite those intense responses, night terrors are thankfully rarely remembered, whereas nightmares more often are.

Fortunately, night terrors are relatively uncommon, especially in adults. An estimated 1-6.5% of children (1 to 12 years of age) and 1-4% of adults experience night terrors. They’re most common in toddlers and young kids, but as the nervous system matures, night terrors typically fade without treatment, making them rare in adults.

Credit: gremlin/ iStock

Some People Can Control Their Dreams

Realizing we’re in a dream — known as lucid dreaming — is a phenomenon that approximately 51% of people have reportedly experienced at least once. Though the ability to control our dreams was once considered a myth, in 1981, a study conducted by psychophysiologist Stephen LaBerge of Stanford University established the scientific validity of lucid dreaming. While in REM sleep and dreaming, study participants were able to perform eye movement patterns that LaBerge had previously asked them to perform, demonstrating that some dreamers can control their actions.

Researchers continue to investigate why we lucid dream. Though many theories are actively being explored, cognitive neurophysiology expert Nicolas Zink, author of the 2015 paper “Theories of Dreaming and Lucid Dreaming,” believes the best explanation is the protoconsciousness theory, which proposes dreaming during REM sleep represents a fundamental state of brain organization that supports waking consciousness and maintains emotional balance.

For many people, the appeal of lucid dreaming lies more in “how” to experience it than “why.” Lucid dreaming can often be pleasant — from traveling the world to soaring through the sky — so some people seek to induce it. One of the most popular techniques is “Mnemonic Induction of Lucid Dreams” (MILD), developed by LaBerge.

The process begins with accurate dream recall upon awakening during the night. According to LaBerge, before falling back asleep, the dreamer must focus on the dream as they repeat, “Next time I’m dreaming, I will remember that I am dreaming.” Repeating this phrase while visualizing the dream may help the dreamer reenter it and become lucid.

Rachel Gresh
Writer

Rachel is a writer and period drama devotee who's probably hanging out at a local coffee shop somewhere in Washington, D.C.

Original photo by indianeye/ iStock

The world’s biggest birds can be ranked in various ways: by weight, height, or wingspan — and then there’s the question of whether or not to include flightless birds. Penguins, for example, are quite bulky, but no penguin species can fly. Conversely, the wandering albatross is an exceptional flyer with an immense wingspan of up to 12 feet, but it weighs only about as much as a human toddler. 

We decided to look at the world’s avian heavyweights by mass alone, regardless of whether or not they can fly. Our only requirement is that the bird in question must still exist, as extinct species are a whole different ball game. 

Take, for example, the Vorombe titan, a species of elephant bird that once lived in Madagascar before going extinct some 1,000 years ago. That colossal bird stood as tall as 9 feet 10 inches and had an estimated weight of around 1,800 pounds — far larger than any bird living today.   

Here, then, are seven of the heaviest birds roaming the Earth today, ranked in ascending order by mass, from impressively large to heavier (and taller) than an average human.

Credit: Robert Winkler/ iStock

Wild Turkeys 

The wild turkey holds the distinction of being among the heaviest flying birds in the world. Unlike their domesticated counterparts, they’re surprisingly agile and swift fliers, despite reaching weights in excess of 25 pounds. 

According to the National Wild Turkey Federation, the largest wild turkey on record — harvested by a hunter in Kentucky — weighed a mighty 37.61 pounds, about twice the size of the turkeys typically placed on a Thanksgiving table. Wild turkeys manage to gain all that bulk through opportunistic foraging and a varied, omnivorous, and protein-rich diet that includes berries, acorns, nuts, seeds, insects, and small reptiles.

Credit: Chris Stenger/ Unsplash

Kori Bustards

Africa’s kori bustard is the world’s heaviest flying bird, with males weighing up to 40 pounds (females are much smaller, averaging 11 to 13 pounds). Unsurprisingly, the kori bustard expends a lot of energy to fly, so it remains on the ground most of the time and only takes to the air when necessary — typically to avoid predators. 

When flight is required, the birds use their long legs to get a running start and take to the air with powerful wing beats (using their 7-to-9-foot wingspan) before transitioning to slower, steadier flaps once airborne. Keeping low to the ground, kori bustards typically land soon after taking off, normally within sight of their launch.

Credit: Gerald Corsi/ iStock

Greater Rheas 

The greater rhea is South America’s largest bird, standing up to 5 feet tall and weighing between 33 and 66 pounds. These birds, which are related to ostriches and emus, roam the grasslands and pampas of Argentina, Bolivia, Brazil, Paraguay, and Uruguay. They’re completely flightless, using their long, powerful legs to outrun predators such as cougars and jaguars. 

Greater rheas have unusually long wings for flightless birds. While useless for flight, the wings are important for balance and for changing direction while running at speeds of up to 40 miles per hour. The birds are also excellent swimmers, using their legs, necks, and wings to cross rivers and marshes with surprising grace and ease.

Credit: VargaJones/ iStock

Emperor Penguins 

The emperor penguin is the heaviest of all the penguin species. Adults stand at around 43 to 47 inches tall and can weigh as much as 100 pounds, though weights vary greatly by sex and season. During the brutal Antarctic winters, emperor penguins need all the blubber they can muster to insulate themselves from the extreme cold, and they huddle together in tightly packed groups to keep warm. 

Of course, like all penguin species, they’re flightless — and they’re not particularly adept at walking, either, often displaying a comical clumsiness on land. But emperor penguins excel in the water: They’re exceptional swimmers, capable of diving deeper and for longer than any other bird.

Credit: JohnCarnemolla/ iStock

Emus 

The emu is Australia’s largest native bird and the second-tallest bird in the world (but third in terms of overall bulk). They can reach heights of more than 6 feet, and the largest specimens weigh as much as 120 pounds. 

Unlike greater rheas, emus have tiny vestigial wings that are only about 7 inches long. Flying is certainly not an option, making running their way of life. Using their powerful legs, emus are capable of sustained speeds of at least 30 miles per hour and even faster short sprints — with each stride nearly 9 feet long. Emus use their strong legs, heavy feet, and sharp nails to defend themselves from predators, while also relying on their impressive agility when surprised — they can jump 7 feet straight up to escape trouble.

Credit: BirdImages/ iStock

Southern Cassowaries 

Weighing up to 170 pounds and reaching heights of 6 feet, the southern cassowary is the second-heaviest bird in the world. Found in Australia, Indonesia, and Papua New Guinea, these massive flightless birds have a distinct appearance, with bright blue faces, red wattles, and a prominent, helmet-like casque atop their heads. 

These are shy, solitary birds, living alone in rainforests and only coming together when it’s time to breed. While not inherently aggressive, they are territorial and will attack if provoked or angered — and when a cassowary gets mad, it doesn’t hold back. 

Widely considered the world’s most dangerous bird, the southern cassowary has incredibly powerful legs and a 4-inch, dagger-like claw on its middle toe. When threatened, they’re capable of delivering devastating kicks and slashes, including to humans, although attacks are rare and fatalities even rarer.

Credit: Unsplash+ via Getty Images

Common Ostriches 

The common ostrich is the undisputed heavyweight of the avian world. Adult ostriches typically weigh between 250 and 300 pounds and can reach heights of up to 9 feet. (Females tend to be shorter, closer to 6 feet.) Native to Africa, these birds are well-suited to the continent’s dry, open landscapes, having sacrificed the ability to fly for incredible speed on land. 

Ostriches are the world’s fastest animals on two legs, capable of sprinting at 43 mph and maintaining a cruising speed of 30 mph for 10 miles. An ostrich’s kick, meanwhile, is so powerful it can kill a lion.

Being such big birds, they also lay big eggs — the biggest eggs of any living animal, in fact. The largest ostrich egg ever recorded weighed a whopping 5 pounds 11 ounces.

You may have heard tell of ostriches burying their heads in the sand when they’re scared, but that’s just a myth. That common misconception likely arose because ostriches dig shallow holes as nests for their eggs, and when they use their beaks to turn the eggs, it appears as though they’re sticking their heads in the sand.

Tony Dunnell
Writer

Tony is an English writer of nonfiction and fiction living on the edge of the Amazon jungle.

Original photo by Candice Bell/ iStock

If you were to guess Boston cream pie was invented in Boston or Nashville hot chicken originated in Nashville, you’d be correct. But sometimes, it’s not so obvious that a food is named after its place of origin. Examples of this include one of the most popular cheeses on the planet, a fruit found in every produce section, and a common source of plant-based protein. Here’s a look at six foods you may not have realized are named for the places they came from.

Credit: Jim Monk/ Alamy Stock Photo

Cheddar Cheese

Long before it was produced in Vermont or Wisconsin, cheddar cheese originated in the English village of Cheddar, located in the county of Somerset about 145 miles west of London. The cheese’s origins date to the 12th century, when it was stored in caves in Cheddar that helped maintain an ideal humidity and temperature for maturation. The cheese became popular by 1170 — a year in which Baron Alured de Lincoln is recorded as buying 10,240 pounds of cheddar (though the records refer to it as just “cheese” from the Somerset region).

So when did people start calling it “cheddar cheese”? The Oxford English Dictionary cites the earliest written record of the term dating to 1659. Indeed, it was a common custom at the time for English cheesemakers to name products after their place of origin.

Credit: Boris Stroujko/ Adobe Stock

Lima Beans

English-speakers typically pronounce the “lima” in “lima beans” as LY-ma, which is different from how they’d say “Lima, Peru” (LEE-ma). That may be why people in the U.S. don’t often realize lima beans are named after Peru’s capital city. 

What English-speakers know as lima beans refers to a native Andean legume called “pallar” in the region. The name “lima beans” caught on after the 16th-century Spanish conquest of the Incas. Peru’s European rulers exported the local legumes to the United Kingdom and later the United States, contained in packaging that stated they were made in Lima, Peru. As you may suppose, that earned the legume the name “lima beans” in those English-speaking nations.

Credit: stresstensor/ iStock

Fig Newtons

It’s a common myth that Fig Newtons were named for the English polymath Isaac Newton. In reality, the name of the fig-filled treat is a nod to its place of origin. The cookie was first manufactured in 1891 by the Kennedy Biscuit Company in Cambridge, Massachusetts. 

At the time, the company liked to name its products for nearby communities (e.g., Shrewsbury biscuits, Beacon Hill cookies, etc.). So plant manager James Hazen opted to call this new cookie the “Newton” after the Boston suburb 6 miles away. In 1991, the city of Newton held a 100th anniversary celebration of the Fig Newton to honor that etymological connection.

Credit: Andrei Filippov / Stockimo/ Alamy Stock Photo

Jalapeño Peppers

The name “jalapeño” translates to “of Jalapa” in Spanish. Jalapa — or Xalapa, as is the more formal spelling — is the capital city of the Mexican state of Veracruz, whose name comes from the Nahuatl word “xalapan,” meaning “sand by the water.” 

Though jalapeño peppers aren’t commonly grown in Xalapa, the city is where they were widely commercialized thanks to a food pickling business there. Known as La Jalapeña, the business was known for its canned goods, chorizo, and chilies. In 1922, it received a patent for pickled chilies, and thus began the successful worldwide commercialization of these spicy peppers. They were exported far and wide, and the term “jalapeño pepper” — inspired in part by the packaging, which read “La Jalapeña” — was coined in the U.S.

Credit: Pictorial Press Ltd/ Alamy Stock Photo

Waldorf Salad

The Waldorf salad is named neither for a country nor a town, but rather for the Waldorf-Astoria Hotel in New York City. This isn’t to be confused with the still-standing Waldorf Astoria located on Park Avenue — instead, it refers to the historic hotel that was razed in 1929 so the Empire State Building could be built at the site. 

It was at that world-famous establishment that the leafless salad was created by Oscar Tschirky, a former busboy and popular maître d’hôtel who was a bit of a celebrity in his own right. In 1896, Tschirky published The Cook Book by “Oscar” of the Waldorf, which contained recipes he’d crafted in the hotel kitchen. The book included a recipe for the hotel’s namesake salad, though at the time the dish contained only apples, celery, and mayonnaise. Grapes and nuts were added later, sometime before the late 1920s.

Credit: Ekaterina Pokrovsky/ Adobe Stock

Wiener Schnitzel

Wiener schnitzel has nothing to do with sausage, hot dogs, or any other foods that English speakers commonly refer to as “wieners.” Rather, the “wiener” in the name is German for “Viennese,” as “Wien” is the German name for Austria’s capital city, Vienna. “Schnitzel,” meanwhile, is the word for the breaded veal cutlet that serves as the dish’s primary component. Though variants of this dish have existed since the late 18th century, the term “wiener schnitzel” only dates to the 1850s, according to the Oxford English Dictionary.

That said, the name “wiener” as a nickname for hot dogs also has Viennese origins. In that case, the word refers to a Viennese-style sausage called “wienerwurst.” The culinary nickname “wiener” was coined in the United States no later than 1880, and it originally referred specifically to sausages from Vienna. But by the 1930s, Americans had begun saying “wiener” to describe hot dogs and other sausages, regardless of whether they came from Vienna.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Inbox Studio, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by Michael Burrell/ iStock

In the U.S. and many other parts of the world, students are graded on an “A” to “F” scale, seldom questioning why one letter is missing. “E” isn’t found on most modern report cards — but why? 

This isn’t a simple oversight, but rather the result of centuries of evolving grading practices. By tracing the history of student evaluations, we can uncover why the letter “E” quietly disappeared from report cards across the United States.

Credit: Science History Images/ Alamy Stock Photo

Early Modern Grading Didn’t Use Letters

In 1785, Yale University President Ezra Stiles introduced what is believed to be the first proper grading system in the American colonies. That four-point scale, written in Latin, comprised the following categories: optimi (best), secondi optimi (second best), inferiores boni (less good), and pejores (worse). 

By 1837, mathematics and philosophy professors at Harvard had adopted a 100-point grading system, though it looked different from the one we use today. The modern 100-point scale features corresponding letter values and typically looks like this: “A” is 90-100, “B” is 80-89, “C” is 70-79, and “D” is 60-69.

But Harvard used a strictly numerical scale without any corresponding letters, and the ranges were as follows: 100 (perfect), 75-99, 51-74, 26-50, and 25 or below. The average grades followed a bell curve, with most students hovering around 50. Scores on both extremes (above 75 and below 25) were rare. 

Numerical grades gained traction across the country, and by the early 20th century, this method had become the most common grading system. Teachers at schools of all levels began assigning and recording grades using this 100-point scale, and, for the first time, modern grades inched toward a universal system.

Credit: Michael Burrell/ iStock

The Rise and Fall of “E”

Although numerical grading was the most popular method of assessing students from the mid-1800s through the early 1900s, another system emerged and evolved alongside it: letter grades. Teachers at Mount Holyoke College in Massachusetts began using letter grades as early as 1884. By the 1896-1897 school year, Mount Holyoke had become the first U.S. school to have documented use of a uniform letter grading system.

Letters were assigned to numerical ranges, but those ranges differed from Harvard’s. Instead, the system looked similar to what’s used in schools today: An “A” grade (excellent) was 95-100, “B” (good) was 85-94, “C” (fair) was 76-84, “D” (barely passed) was 75, and “E” was a failing grade, though it didn’t have a corresponding number. 

The following year, Mount Holyoke altered its grading system, adding an “F” for the first time. The numerical ranges were adjusted to include the new letter, and the grading scale spanned “A” through “F.” Interestingly, though, the college also retained the “E,” thus increasing its grading scale from five categories to six. But this move proved unpopular, and other schools began removing the “E” grade.

Experts have several theories about why “E” began to fade, including a push for a more efficient system. By the early 20th century, educators believed that fewer grading categories would help teachers streamline the process, simplifying the system. 

Isidor Edward Finkelstein, author of The Marking System in Theory and Practice (1913), was influential in this line of thought. Specifically, he and his colleagues believed that five divisions was the optimal number on a marking scale. That meant the modern grading system needed to drop one letter. As researchers Kimberly Tanner and Jeffrey Schinske wrote in their article “Teaching More by Grading Less (or Differently),” the “E” grade was an easy target because “F” so clearly stood for “fail.”

The article cites another issue with “E”: Some students assumed it stood for “excellent” despite it marking unsatisfactory grades, making it the most misunderstood letter out of the bunch. It was a perfect storm — “F” was a clearer stand-in for “fail” while “E” confused and crowded the grading scale. By the 1930s, “E” grades had disappeared from American schools. 

Credit: jarector/ iStock

“E” Remains in Some Grading Scales

During the latter half of the 20th century, the letter grading system “A” to “F” (excluding “E”) became standard across the country. But that doesn’t mean there aren’t exceptions. In elementary schools, a different letter scale is often seen on report cards, especially for younger students in kindergarten through third grade. 

For instance, grades may include “D” (“developing”), “E” (“expanding”), “S” (“satisfactory”), and “N” (“needs improvement”). As these grading systems vary by school district, the “E” may also mean “excellent” or “exceeding expectations.” So no, “E” hasn’t been entirely banished from the modern education system, but it has undoubtedly lost its place in the standard lineup of U.S. letter grading, remaining a curious omission from student report cards across America.

Rachel Gresh
Writer

Rachel is a writer and period drama devotee who's probably hanging out at a local coffee shop somewhere in Washington, D.C.

Original photo by EKKAPHAN CHIMPALEE/ Shutterstock

Blue jeans have long been a staple of wardrobes around the world, worn by everyone from construction workers to rock stars. Certain looks, from James Dean’s cuffed 1950s denim to Steve Jobs’ faithful 501s, have even become ingrained in our cultural imagination. The classic color of jeans now feels essential to their identity, but it wasn’t originally chosen for stylistic reasons. So why are jeans typically blue? 

Credit: FS/ iStock

Feeling Blue

Long before Levi Strauss patented riveted work pants and created the modern blue jean in 1873, laborers across medieval Europe also wore trousers that were dyed blue, first with locally grown woad (a plant in the mustard family) and later with imported indigo. Those early work pants weren’t jeans as we know them today, but they did set the stage for our modern version — particularly the color. 

Blue dye wasn’t used simply because it was available — it proved handy for other reasons, too. The dark color hid the grime that came with sweating in the sun or toiling away in soil, and indigo’s unique properties made it a particularly durable choice. As fabric comes out of the dye vat, exposure to the air causes indigo to oxidize and solidify, forming a thin coating around the fibers. This helps indigo resist fading far better and for far longer than most other natural dyes.

There may have been another, subtler benefit to dyeing trousers blue: The indigo plant has long been valued in traditional Chinese and Indian medicine for its antibacterial properties, and it’s possible that indigo-dyed garments resisted odor slightly better than undyed cloth, an obvious advantage at a time when washing clothes was infrequent.

Credit: Pornthiwa/ Adobe Stock

Modern Blue Jeans Are Born

By the mid-19th century, cotton pants had become standard workwear for miners, railroad workers, and other laborers in the American West, and the textile industry that supplied them was booming. American mills such as the Amoskeag Manufacturing Company in Manchester, New Hampshire — then one of the largest textile producers in the world — were reliably producing indigo-dyed cotton twill known as denim, a proven workhorse material that had achieved popularity throughout Europe and the Americas.

One of Amoskeag’s customers, a San Francisco dry-goods merchant named Levi Strauss, stocked the company’s blue denim fabric, and Reno tailor Jacob Davis purchased it. When Davis began reinforcing work trousers with metal rivets, he did so on pants made of both undyed duck canvas and blue denim. His customers gravitated toward the most practical color: dark blue. 

Eventually, synthetic indigo, which was developed in the 1890s, made blue denim cheaper and easier to produce than ever before. By the 1900s, undyed workwear was all but discontinued, and blue became the de facto dye for work pants.

Credit: Stockfotos-MG/ Adobe Stock

A Style Icon Emerges

Not only was blue denim standard workwear by the 20th century, but it also became a cultural juggernaut. Hollywood’s cowboys of the 1930s cemented blue jeans as symbols of rugged Americana; soldiers wore denim abroad during World War II, spreading the look’s popularity overseas; and teenagers in the postwar years embraced the garment as a uniform of rebellion. Women who had worn denim in wartime factories also continued reaching for it long after the war ended.

By that point, blue jeans’ ability to retain their sturdiness even as the dye subtly faded at creases and edges had become a signature style. Today, jeans are a timeless wardrobe staple — around 3 billion pairs were sold in 2022 alone — and though they’re available in many colors, the classic pair of jeans will always be blue.

Nicole Villeneuve
Writer

Nicole is a writer, thrift store lover, and group-chat meme spammer based in Ontario, Canada.

Original photo by PDerrett/ iStock

You probably reach for some kind of kitchen utensil every day, whether a fork, spoon, spatula, or cheese grater. These tools seem pretty straightforward, but some conceal clever uses that may go unnoticed. For instance, have you ever used the little loop on your vegetable peeler? Or measured pasta with your serving spoon?

Some of these features were designed intentionally, while others have been happily found to be useful in more ways than one. Here are a few of the ingenious hidden features found on everyday kitchen tools.

Credit: grzymkiewicz/ Adobe Stock

Notches on Kitchen Shears

Kitchen shears are an often-overlooked utensil. A good pair can do more than just tear open food packaging in a pinch; they’re an easy way to cut through meat, chop vegetables, finely snip herbs, and even slice up a rustic pizza. 

But that’s not all they do. Take a look at the blades on your shears — if one edge has a crescent-shaped cut-out, that’s a bone notch, meant to help stop slippage and cut through bones in poultry or fish. 

There’s another neat feature, too: On the inside of the handles, shears often have serrated metal teeth that form a circle, which can be used to twist open a stubborn jar or twist-top bottle or even to crack shelled nuts or shellfish. 

Credit: marietjieopp/ iStock

Spaghetti Spoon Measuring Holes

That hole in the center of your pronged pasta server isn’t just for drainage when scooping pasta out of boiling water. Depending on the spoon’s design, the opening may also work as a measuring tool for approximating a single serving of dry spaghetti noodles.

Keep in mind, however, that while the measuring hole can be handy, it’s not universal. Pasta servers aren’t made to one specific standard, so the size and shape of the center hole can vary widely. Serving sizes vary too; a standard package of store-bought spaghetti considers a portion to be roughly 2 ounces (with roughly a quarter-sized diameter when held in a bunch), but no one spoon will reliably measure that exact amount every time. 

Credit: Luis Echeverri Urrea/ iStock

Vegetable Peeler Scoops

Many vegetable peelers are adorned with a small, barely noticeable bump. On older-style straight peelers, it can usually be found right at the top, while on newer, wider Y-shaped peelers, it’s usually on the side. 

This notch is designed to remove potato eyes or other vegetable and fruit blemishes without having to switch to a paring knife. It’s a small but useful detail that can save time and frustration — and maybe a nicked fingertip or two.

It’s simple to use: Place the notch over the unwanted spot and scrape or scoop it away. This works on potatoes, carrots, or any firm fruit or vegetable with imperfections, and you barely even have to break your peeling stride.

Credit: Qwart/ iStock

Box Graters’ Zesting Side

Many people associate box graters so strongly with cheese that they simply refer to them as cheese graters. And, yes, a box grater is primarily used for grating cheese. One side grates the long, thick shreds ideal for melting (the most commonly used side), another shreds cheese or vegetables into finer strands, and another cuts thin slices. But what about those imperceptibly tiny holes that feel almost dangerous to the touch? 

That’s known as the rasp-style grater, and while it can indeed be used for cheese — hard varieties such as parmesan work best — it’s best suited to zesting citrus. You’ll want to press the rind or other food lightly against this side and move it in short strokes, taking care not to scrape your knuckles. It’s also excellent for grating tough spices such as cinnamon and nutmeg.

Credit: SchulteProductions/ iStock

Holes in Cutting Board Handles

Many cutting boards have wide handles that make it easier to hang them up or pull them out from inside a cupboard, but they also work as a funnel for transferring chopped food or food scraps. Enterprising home cooks have figured out that if you slide the handle opening over the edge of the counter, you can easily push scraps through the hole and into a garbage or compost bin below — no more fluttering onion skins escaping as you carry your board from point A to point B.

Additionally, you can lift the cutting board over to your cooktop and safely slide prepped ingredients into a pot or pan. The board can also act as a splashguard in the process if the water or cooking oil is already hot. 

Of course, not every cutting board has a handle with a hole, nor will the holes always be the ideal shape and size for moving food or waste through. If yours does, however, you may just have a new way to use one of your trustiest tools.

Credit: New Africa/ Adobe Stock

Colander Holes for Herbs

Colanders aren’t just for draining pasta — their perforations can also be used to quickly and neatly strip herb leaves from their stems. Simply feed the stems through the holes from the inside to the outside, pull, and the leaves slide off with minimal effort right into the colander. 

You’ll want to use the medium-sized holes on your colander and make sure you’re not using stems or leaves that have gone too limp or woody. It makes for a little bit of extra cleanup, but it may be worth it to avoid taking up drawer space with a one-use tool such as a specialized herb stripper.

Credit: ben-bryant/ iStock 

Spoon Holders in Saucepan Handles

We’ve all been there: standing over a tomato sauce simmering away on the stove with a trusty wooden spoon in hand. But when you’re done stirring, where do you rest the spoon without leaving a mess on the counter or grabbing an extra dish if you don’t have a specialized spoon rest? 

As it turns out, many saucepan handles solve this problem with the small hole at the end of the handle. While that hole is primarily meant for hanging the pot on a hook or peg, it can also double as a spoon rest with the right setup. 

Simply stick the bottom end of the spoon’s handle into the hole, leaving it slanted up toward the pot so any drips will fall back into the sauce and not all over your counter. Just be cautious and give the spoon a quick check for any heat transfer before grabbing it to finish your dish.

Nicole Villeneuve
Writer

Nicole is a writer, thrift store lover, and group-chat meme spammer based in Ontario, Canada.

Original photo by Andriy Popov/ Alamy Stock Photo

Weather forecasts often list two temperatures side by side: the actual air temperature and the “feels like” temperature. While the first is straightforward, the second is more complex — and often more important. 

The “feels like” value reflects how your body perceives temperature in real-world conditions rather than how a thermometer measures it in a controlled environment. It accounts for the fact that humans warm up, cool down, sweat, shiver, and respond to the environment in ways that can make a mild day feel sweltering or a breeze feel freezing. These factors can dramatically impact your comfort level and, in some cases, your safety.

This adjusted temperature is the result of careful calculations that combine physics, meteorology, human biology, and environmental science. Multiple elements interact to determine how heat transfers between your body and the surrounding air, and each of those elements can push the perceived temperature higher or lower. 

Whether you’re wondering why humid days feel oppressive or why a winter wind seems to cut right through your layers, the “feels like” temperature offers a scientific explanation for the sensations you experience every day.

Credit: VicVaz/ iStock

Wind Chill and Heat Index

The “feels like” temperature is based almost entirely on two standardized measures: the wind chill and the heat index. Those formulas estimate how efficiently your body exchanges heat with the surrounding air under cold or hot conditions. 

Wind chill represents how cold your skin feels on a windy day, while the heat index reflects how hot it feels during summer humidity. Both indices assume a standard set of conditions: typical clothing and dry skin (as opposed to wet conditions such as rain).

While those formulas can’t capture every variable, they provide a far more accurate picture of real-world conditions than air temperature alone. With this metric, meteorologists can interpret raw data and apply it to the human experience, giving people clearer guidance on how to dress, how long to stay outside, and when to take precautions against extreme weather.

Credit: Zinkevych/ iStock

How Does Wind Chill Work?

On winter days, the wind can make temperatures feel lower than what the thermometer reads. This effect is captured by the wind chill index, which calculates how much faster heat leaves your skin when the wind is blowing. 

Normally, your body warms a thin insulating layer of air around your skin, helping retain heat. Wind sweeps that warm layer away, forcing your body to lose heat at a faster rate. The stronger the wind, the more intense the heat loss, and the colder you feel.

Meteorologists use formulas that account for wind speed and air temperature to produce the wind chill number, which reflects how quickly skin will cool under those conditions. While wind chill doesn’t lower the actual air temperature, it can increase the risk of frostbite and hypothermia, making it an important measure in winter weather advisories.
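As a rough illustration of how such a formula works, the wind chill equation used by the U.S. National Weather Service combines air temperature (in degrees Fahrenheit) and wind speed (in mph) — a minimal sketch in Python:

```python
def wind_chill(temp_f, wind_mph):
    """NWS wind chill formula (intended for temps at or below 50°F
    and wind speeds above 3 mph)."""
    v = wind_mph ** 0.16  # wind speed enters through a fractional power
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# A 20°F day with a 20 mph wind feels like roughly 4°F:
print(round(wind_chill(20, 20)))  # → 4
```

Note how the wind term grows slowly (an exponent of 0.16): doubling the wind speed lowers the perceived temperature, but with diminishing returns.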

Credit: fcafotodigital/ iStock

How Does the Heat Index Work?

If you’ve ever visited Florida or another notoriously humid area, you’ve probably heard someone say, “It’s not the heat, it’s the humidity.” That’s because humidity makes the air feel warmer than the measured temperature. The body cools itself by sweating, but sweat only reduces heat if it can evaporate. High humidity slows evaporation, meaning the body struggles to release heat efficiently.

The heat index combines air temperature and relative humidity to estimate how hot it feels when evaporation (and therefore cooling) is impaired. So a humid 90-degree day may feel like 100 degrees or higher because your body can’t shed heat efficiently. This is why deserts can feel scorching yet tolerable, while humidity in places like the Southern U.S. can feel oppressive even at lower temperatures.
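The heat index calculation itself is a regression fitted to how the human body sheds heat. A simplified sketch of the Rothfusz regression used by the National Weather Service (reasonably accurate for temperatures of 80°F and up with moderate-to-high humidity):

```python
def heat_index(temp_f, rh):
    """Rothfusz heat index regression (temp in °F, rh in percent)."""
    t, r = temp_f, rh
    return (-42.379 + 2.04901523 * t + 10.14333127 * r
            - 0.22475541 * t * r - 6.83783e-3 * t * t
            - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
            + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)

# 90°F at 70% relative humidity feels like roughly 106°F:
print(round(heat_index(90, 70)))  # → 106
```

The same 90-degree air at 40% humidity comes out to only about 91°F, which is exactly the desert-versus-Southern-humidity contrast described above.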

Credit: deberarr/ iStock

What Other Variables Affect How Cold or Hot You Feel?

Although wind chill and heat index are the only standardized components of a “feels like” forecast, real-world comfort is shaped by a much wider set of factors. Sunlight, for example, can significantly raise perceived warmth because radiant heat warms skin, clothing, and surrounding surfaces — something the heat index doesn’t account for. Moisture plays a major role, too: Wet skin, soaked clothing, or heavy sweat accelerates heat loss in cold conditions and interferes with efficient cooling in hot conditions.

Even the landscape can influence comfort — paved surfaces, shaded parks, waterfronts, and wind tunnels between buildings all create microclimates that feel warmer or cooler than the official forecast may indicate. In urban areas, this effect is especially pronounced due to the “urban heat island” phenomenon, in which asphalt, concrete, and dense building clusters absorb and re-emit heat, raising temperatures relative to surrounding rural areas. 

Even small changes in street orientation, building height, or surface materials can create noticeable temperature differences, meaning two locations even just a few blocks apart can feel significantly different to the human body.

Credit: Barney Low/ Alamy Stock Photo

Another Measure for Assessing Human Comfort

While meteorologists don’t build those additional variables into the standard “feels like” number, other experts often do. Occupational safety specialists, sports science experts, the military, and public health researchers use more comprehensive tools — such as the Wet Bulb Globe Temperature (WBGT) — to assess heat-related stress on the human body. This metric takes sunlight, humidity, wind speed, cloud cover, and other environmental conditions into account for the most precise assessment.

Ultimately, however, comfort is personal and can vary greatly between individuals. Your clothing, activity level, location, sun exposure, and even your own physiology can shift how conditions truly feel to you. The “feels like” forecast provides a helpful baseline — the rest is up to your body and the environment around you.

Kristina Wright
Writer

Kristina is a coffee-fueled writer living happily ever after with her family in the suburbs of Richmond, Virginia.

Original photo by GTCRFOTO/ Alamy Stock Photo

From Oscar winners to Hall of Fame musicians and world-class athletes, there are some celebrities whose accomplishments in the classroom are equally impressive as the talents that brought them fame. 

Many of these well-educated A-listers have put their advanced degrees to use, juggling careers in the world of entertainment and academia. Here are six multitalented stars who can boast Ph.D.s among their other accomplishments.

Credit: PA Images/ Alamy Stock Photo

Brian May: Ph.D. in Astrophysics

Before playing guitar in the legendary rock band Queen, Brian May had his sights set on a career in astrophysics. May was an accomplished student from a young age, studying advanced physics, mathematics, and applied mathematics at Hampton Grammar School in London. He also received a bachelor’s degree in physics from Imperial College London in 1968 — the same year he cofounded the band Smile, the predecessor to Queen.

From 1970 to 1974, May pursued a Ph.D. in astrophysics at Imperial College London. But his budding rock ’n’ roll career was taking off at the same time. The band released their first two albums in 1973 and 1974 to such success that May decided to put his academic career on hold.

The rock star suspended his studies for more than three decades, before re-registering for a Ph.D. in 2006. The following year, he submitted a thesis titled “A Survey of Radial Velocities in the Zodiacal Dust Cloud,” after which he was awarded his advanced degree at long last. 

May attended graduation in 2008 and now works with both NASA and the European Space Agency (ESA) to design stereoscopes of images taken during celestial missions. May has also used his own publishing house, the London Stereoscopic Company, to publish 3D books about astronomy.

Credit: GTCRFOTO/ Alamy Stock Photo

Shaquille O’Neal: Doctor of Education

In addition to being a Hall of Fame basketball player, movie star, professional wrestler, and former police officer, Shaq is also a Doctor of Education. The Lakers legend earned his EdD from Miami’s Barry University; in a statement released by the school, O’Neal said he pursued the degree to honor his mother, “who always stressed the importance of education.”

The Big Aristotle, as he’s affectionately known, spent four and a half years juggling his studies while playing in the NBA. He achieved an admirable 3.81 GPA, virtually attending classes between games. 

Shaq retired from the NBA in 2011 and graduated with his doctorate the following year. This was Dr. Shaq’s third college-level degree, having also earned a Bachelor of Arts in General Studies from Louisiana State University in 2000 and an MBA from the University of Phoenix in 2005.

Credit: Victor Decolongon/ Getty Images Entertainment via Getty Images

Mayim Bialik: Ph.D. in Neuroscience

Actress Mayim Bialik shot to stardom portraying the title character on the sitcom Blossom in the early 1990s before portraying neuroscientist Amy Farrah Fowler on The Big Bang Theory in the 2010s. The latter character wasn’t far removed from Bialik’s real-life persona, as she herself has earned a Ph.D. in neuroscience.

Bialik shifted her focus away from acting and toward academics in the early 2000s. She earned a bachelor’s in neuroscience from the University of California, Los Angeles in 2000 and soon returned to UCLA to pursue a doctorate in that same field. 

The actress earned her Ph.D. in 2007, with a focus on the effects of oxytocin and vasopressin on obsessive-compulsive disorder in adolescents with Prader-Willi syndrome. In addition to her resurrected entertainment career, Bialik is now a vocal advocate encouraging young girls to pursue careers in STEM. Bialik also put her profound intelligence to work while co-hosting TV’s Jeopardy! from 2021 to 2023.

Credit: Jason LaVeris/ FilmMagic via Getty Images

Peter Weller: Ph.D. in Italian Renaissance Art History

You may know him best as RoboCop, or perhaps as the Academy Award-nominated director of the 1993 short film Partners, but Peter Weller is also an expert in Italian art. 

Weller’s interest in art began to percolate in the 1970s as he hobnobbed with artists while living in New York City. But it wasn’t until seeing Pablo Picasso’s “Guernica” at the Museum of Modern Art that Weller had a revelation. He later told Artnet that he finally “started to see the connective tissue of visual art.” 

Weller was later drawn to artists such as the Italian Renaissance painter and architect Giotto, and began his focus on that era. With decades of acting behind him — he starred in RoboCop in 1987 — Weller enrolled at Syracuse University in 2004 and completed a master’s degree focused on Roman and Renaissance art before pursuing a Ph.D. in the field. 

Weller wrote and defended a dissertation in 2013 about humanist artist Leon Battista Alberti and earned his doctorate the following year. He continues to be involved in art education — in 2023 he established UCLA’s Weller Family Graduate Art History Fund, which helps fund travel and research for graduate art students.

Credit: Epsilon/ Getty Images Entertainment via Getty Images

George Miller: Doctor of Medicine

Director George Miller is one of the few people who can say they’ve achieved both an Oscar and a doctorate in medicine. A film career wasn’t always the primary aim for this visionary behind the Mad Max franchise; he originally studied medicine at the University of New South Wales. 

Miller graduated from medical school in 1971 and went on to complete a residency at St. Vincent’s Hospital in Sydney, Australia.

While working as an emergency room doctor in the 1970s, Miller was deeply affected by the carnage caused by car accidents, inspiring him to imagine the dystopian universe of Mad Max. Miller released the first Mad Max movie in 1979, which proved to be a huge success and saw him transition out of the medical industry to pursue filmmaking full time.

Bennett Kleinman
Staff Writer

Bennett Kleinman is a New York City-based staff writer for Inbox Studio, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.

Original photo by lielos/ iStock

Among all your senses, touch is the only one that spans your entire body. The skin — your largest organ — not only protects you from injury and infection but also constantly gathers information about pressure, temperature, vibration, texture, pleasure, pain, and potential threats. 

Unlike vision or hearing, which operate at a distance, touch is immediate and immersive, feeding the brain continuous updates about your body and surroundings. Scientists are increasingly discovering that touch is more than a simple bodily function; it’s a foundation of social connection, emotional regulation, and even memory.

What makes touch especially interesting is how complex and psychologically layered it is. Each sensation engages a network of specialized receptors and neural pathways the brain must interpret — rapidly and often unconsciously. The way those signals are processed reveals just how nuanced this seemingly simple sense really is.

Credit: SanyaSM/ iStock

Touch Is the First Sense To Develop

Touch develops remarkably early in human life. By around eight weeks of gestation, a developing fetus can respond to light pressure around the lips, and sensitivity quickly spreads across the body as the nervous system forms. Specialized receptors for pressure, temperature, and movement become active months before birth, creating the foundation for how you later interpret the physical world.

This early sense helps shape the developing brain and is crucial for survival and healthy growth. Touch guides fetal movements, supports neural organization, and after birth, it becomes essential for bonding, emotional stability, and healthy social development long before vision and hearing fully mature.

Credit: Moyo Studio/ iStock

Gentle Touch Helps Regulate Emotions

One of the most interesting discoveries in touch research is the role of C-tactile afferents, aka nerve fibers tuned specifically to gentle, caressing strokes. Those fibers send signals directly to areas of the brain involved in emotional processing, including the insular cortex. 

When activated, they can reduce stress hormones, lower heart rate, and trigger the release of oxytocin, a hormone associated with bonding and trust. Those physiological responses help explain why a gentle touch from a trusted person can immediately soothe you, soften distress, and create a sense of safety.

The emotional effects of gentle touch are especially profound in early development. Studies on newborns and premature infants show that skin-to-skin contact — sometimes called “kangaroo care” — can regulate breathing, stabilize body temperature, and promote healthier weight gain, all while strengthening parent-infant bonding. 

In adults, similar forms of nurturing touch continue to buffer stress and enhance social connection. Experiments have found that people who receive supportive touch from a partner experience reduced neural responses to threat and even perceive painful stimuli as less intense.

Credit: Pra-chid/ iStock

Touch Can Influence Our Decisions

Touch can subtly shape the decisions we make, often without our awareness. Research on embodied cognition shows that physical sensations such as softness, firmness, warmth, or weight can influence how we interpret situations and behave in response. 

For example, holding a warm object can momentarily increase feelings of trust and generosity, while rough textures can make social interactions seem more difficult. Even something as small as an item’s weight can affect judgment: People holding heavier objects have been found to rate issues as more serious or consequential than people holding something light. That effect reveals how the brain uses tactile cues as shortcuts, blending physical sensation with abstract evaluation.

In one 2010 study, participants engaged in a simulated negotiation with a car dealer while seated in a chair that was either soft or hard. Those in soft chairs tended to make higher second-round offers than those in hard chairs, suggesting physical comfort can increase psychological flexibility.

Credit: Halfpoint/ iStock

There’s a Reason Your Skin Gets Pruney in Water

When your fingers or toes wrinkle in water, it’s not just soggy skin; it’s a nervous system-driven adaptation to help your sense of touch work better in a slippery environment. While the outermost layer of skin does absorb some water, the distinctive prune-like wrinkling pattern is triggered by your sympathetic nervous system. Blood vessels beneath the skin constrict, changing the tension in the tissue and creating those familiar ridges.

The water-formed wrinkles enhance how you interact with wet surfaces. By channeling water away from the fingertips — much like tire treads — they improve tactile control and surface contact. Pruney fingers are the way your body preserves fine touch and dexterity when the normal friction enabled by dry skin disappears.

Credit: Mohamed Nohassi/ Unsplash+

Touch Helps Your Brain Know What’s Part of Your Body

Touch is central to how your brain determines what is part of your body. When visual and tactile signals align — such as seeing a hand that’s not yours touched while feeling the same touch — the brain can interpret the touched surface as “you.” Experiments with delayed or mismatched touch show how quickly that system can falter, revealing how actively — and continuously — the brain maintains a sense of bodily ownership.

The classic rubber hand illusion is an example of this. When a visible fake hand is stroked near and at the same time as a hidden real hand, the brain merges the visual and tactile signals. Within minutes, many people begin to feel the rubber hand as their own, demonstrating how touch, vision, and proprioception (the body’s internal sense of movement and position) are woven together to create the feeling of self.

This fluid sense of self becomes especially clear in phantom limb experiences. After an amputation, many people continue to feel sensations — warmth, pressure, pain — in the missing limb. Those sensations arise from the brain’s map of the limb, which remains intact even after the limb is gone.

Techniques such as mirror therapy show how this map can be reshaped. In mirror therapy, a mirror is positioned so the reflection of an intact limb appears where the missing limb would be, creating the visual illusion that the lost limb is still present and moving. 

That visual feedback can help the brain reorganize its internal body model, reducing phantom sensations or pain. The success of such interventions shows that even deeply rooted bodily sensations can shift when the brain receives new sensory information.

Credit: magicmine/ iStock

Touch Can Create Sensations That Aren’t Really There

Touch doesn’t always reflect the physical world exactly as it is. When sensory signals clash or are incomplete, the brain fills in the missing information — sometimes incorrectly. In many cases, the nervous system must infer what a sensation means, especially when signals are conflicting or ambiguous, and this guesswork becomes particularly noticeable when it comes to temperature.

One striking example is the thermal grill illusion, in which alternating warm and cool bars placed against the skin produce a burning or painful sensation. Neither temperature is painful on its own, yet the combination activates overlapping neural pathways that the brain misreads as extreme heat. 

A similar phenomenon happens when you put your very cold hands under warm water — the warmth can briefly feel uncomfortable or even painful. In both cases, the sensation is created by the nervous system’s attempt to reconcile conflicting temperature signals.

Credit: Jenpol/ iStock

Expectation Shapes What You Think You Feel

Touch also relies on prediction. Before you even make contact with an object, your brain estimates how heavy, smooth, sharp, or firm it should be, and those assumptions shape what you expect to feel. 

This is obvious in the size-weight illusion, in which smaller objects feel heavier than larger objects of the same mass — because the brain expects the larger objects to be heavier. When the object is lifted, the mismatch between expectation and sensation creates a strange, persistent perceptual error.

These cycles of prediction, comparison, and correction happen constantly, usually without our awareness. But illusions of temperature and weight prove touch isn’t a simple reflection of physical reality, but a continuous interpretation. The brain draws on assumptions, shortcuts, and memory to construct what you think you feel, and those constructions can occasionally take you by surprise.

Kristina Wright
Writer

Kristina is a coffee-fueled writer living happily ever after with her family in the suburbs of Richmond, Virginia.