Original photo by Eva Kristin Almqvist/ Shutterstock

Optical illusions are more than just magic tricks for the eyes — they’re a fascinating peek into the mysterious workings of the brain. At their core, these illusions are about how we interpret visual information: Our eyes take in light and send signals to the brain, but sometimes these signals get mixed up, leading to perceptions that don’t align with reality. Some optical illusions use contrast, perspective, and light refraction to deceive the brain; others work on a more complex cognitive level, tapping into the subconscious mind.

Aside from being fun, optical illusions also serve a practical purpose. While the brain uses more than 50% of its cortex for visual processing, the exact mechanisms of how we perceive our sensory input remain a mystery. Optical illusions can help us understand how we see the world, but they also demonstrate the power of the human mind — including just how easily it can be deceived. Here, we delve into the secrets behind five popular optical illusions.

Credit: eyeTricks 3D Stereograms/ Shutterstock

Magic Eye

Magic Eye images are perhaps some of the most well-known optical illusions, having spawned a pop culture craze when engineer Tom Baccei and 3D artist Cheri Smith debuted them in the 1990s. While they may at first look like nothing more than colorful static, each mosaic-like picture conceals a 3D image.

The secret to seeing the hidden image lies in the repeating patterns. Each eye picks up slightly different parts of the pattern, so the key is for viewers to relax their eyes until they see double, focusing on the image while attempting to look “through” it. This allows the brain to merge the different signals, creating the perception of a 3D image, not unlike how the brain gauges depth in the real world.

Magic Eye illusions are officially known as autostereograms — a fancy name for 3D images hidden within 2D graphics. To create a Magic Eye image, designers first choose a shape and render it in grayscale, with the lighter areas reading as closer and the darker areas reading as farther away. Next, they cover the initial shape with a colorful repetitive pattern. Specially designed computer software mixes the pattern with the gray shape, adjusting the pattern’s spacing to highlight how close or far away different parts of the shape are. When you look at it the right way, allowing your eyes to relax, your brain uses the pattern’s depth clues to reveal the hidden 3D image.
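The software behind Magic Eye is proprietary, but the core trick described above (copying a repeating strip across the image while nudging each pixel according to a depth map) is simple enough to sketch. The Python snippet below is a minimal, hypothetical illustration of the general autostereogram technique rather than the designers’ actual tool; the function name, strip width, and shift values are all invented for the example.

```python
import numpy as np

def autostereogram(depth_map, pattern_width=80, max_shift=20):
    """Build a grayscale random-dot autostereogram from a 2D depth map.

    depth_map: array of values in [0, 1], where 1 = nearest and 0 = farthest.
    pattern_width: width in pixels of the repeating random strip.
    max_shift: how far (in pixels) the pattern is squeezed at full depth.
    """
    height, width = depth_map.shape
    # Start with random noise; every pixel to the right of the first strip
    # copies a pixel roughly one strip-width to its left, shifted according
    # to the depth at that point. Tighter spacing reads as "closer."
    image = np.random.randint(0, 256, size=(height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(pattern_width, width):
            shift = int(depth_map[y, x] * max_shift)
            image[y, x] = image[y, x - pattern_width + shift]
    return image

# Example: hide a raised rectangle in the middle of the frame.
depth = np.zeros((200, 300))
depth[60:140, 100:200] = 1.0
hidden_3d = autostereogram(depth)
```

Viewed with relaxed, “look-through” eyes, the slightly tighter pattern spacing over the rectangle is what the brain reads as a shape floating above the background.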

Credit: Original work by Edward H. Adelson

Checker Shadow Illusion

In the checker shadow illusion, two labeled squares among a gray-and-white checkered pattern appear to be wildly different shades of gray — only they’re not. There’s an explanation for this astounding phenomenon: While both squares send exactly the same amount of light to your eyes, the brain uses its past experience with shadows to determine that the square marked B, which sits in the shadow falling across the board, must be lighter than A — a square that is also, conveniently, surrounded by contrasting lighter-colored squares.

The checker shadow illusion was created by neuroscientist and vision science professor Edward H. Adelson in 1995. While human vision has been compared to a camera, the brain doesn’t actually measure light the same way a camera does. Instead, it guesses what we see based on what it’s learned from the past — a clever reminder that context, not just raw sensory data, plays a large part in what we see.

Credit: NICK FIELDING/ Alamy Stock Photo

Rotating Snakes

Seeing motion where there is none is a rather disorienting visual trick, and “rotating snakes,” one version of the peripheral drift illusion, perfectly exemplifies this. Created by Japanese psychologist Akiyoshi Kitaoka in 2003, the image itself is static, but a quick glance appears to cause the colorful circles to spontaneously spin in a slow, mesmerizing swirl. Yet if you pick a single spot and stare at it, the “motion” ceases. What gives?

Research suggests that our visual systems are wired to detect motion even when there is none. When we look at repeating, asymmetric patterns such as the ones in the rotating snakes illusion, our brains interpret the shifting contrasts as cues for motion. This triggers the type of neural activity that typically occurs when we observe actual movement. Essentially, the illusion uses contrast and light to trick our brains into thinking there’s motion by mimicking the neural signals associated with seeing something move.

Credit: Hill, W. E. (William Ely), 1887-1962, artist/ Library of Congress

My Wife and My Mother-in-Law

The image known as “My Wife and My Mother-in-Law” is a famous example of how our brains can interpret the same ambiguous image in multiple ways. At first glance, you may see an image of a hunched-up older woman with a large nose and a kerchief on her head — but on second glance, an elegant younger woman with a veil and chin angled away from the viewer emerges (or vice versa). There have been a few different versions of this popular illusion; though it first appeared on a German postcard in 1888, the most popular version was adapted by American cartoonist William Ely Hill for a humor magazine called Puck in 1915.

Research has suggested that the image you see first, or most easily, may depend on your age. Younger participants in one study tended to see the “wife” (or younger woman) first, while older participants saw the “mother-in-law” (or older woman). It’s believed this can be chalked up to initial subconscious interpretation of an image according to one’s personal age-based biases. This serves as yet more proof that our perception of reality is strongly influenced by context.

Credit: SrdjanPav/ iStock

Ames Room

At first glance, an Ames room looks like any ordinary rectangular space. Step inside one, though, and everything suddenly feels off. People standing in opposite corners appear comically different in size — one towering, the other tiny. Ames rooms are built with skewed walls, floors, and ceilings that create a trapezoidal shape; the unexpected size discrepancies are a trick of forced depth perception. 

Typically, an Ames room is viewed through a small peephole or from a specific viewing angle. This limits the viewer to a single perspective on the scene, stripping away the binocular depth cues that normally come from seeing with two eyes. As a result, the distorted shape of the room is obscured, and the brain perceives the scene as if the objects inside are changing size.

The Ames room was first developed in 1946 by American ophthalmologist Adelbert Ames Jr. while exploring how experience affects perception. It’s since become an iconic fixture of amusement parks and pop culture — famous films such as 1971’s Willy Wonka & the Chocolate Factory and The Lord of the Rings trilogy have employed it to create deceptive visual dynamics, specifically to make certain characters appear much smaller than others.

Nicole Villeneuve
Writer

Nicole is a writer, thrift store lover, and group-chat meme spammer based in Ontario, Canada.

Original photo by COSPV/ iStock

If you added up all the money in the world — from dimes and dollars to stocks — how much would there actually be? The question isn’t as simple as it may initially appear. To begin answering it, we must first decide how to define “money.” Are we talking physical cash? Bank reserves? Mutual funds? What if we take all those things into account? Let’s attempt to break down just how much money — and wealth — exists worldwide.

Credit: fcafotodigital/ iStock

Money, Money, Money

Economists estimate only 8% of the world’s money exists as physical cash. This leaves us with a whole slew of other intangible forms of funds to consider. The global economy contains three main categories of money, known as monetary aggregates: M0, M1, and M2. They vary based on liquidity — how easily and quickly an asset can be converted into cash — with M0 being the most liquid. This category includes physical cash (paper money and coins) and bank reserves.

M1, also called “narrow money,” includes all of M0 plus demand deposits, traveler’s checks, and other checkable deposits — think of this category as cash you can use quickly, even if it’s not physical. M2, or “broad money,” encompasses the previous two categories plus less liquid forms of money, such as savings deposits, money market securities, mutual funds, and other time deposits. It’s not as accessible as M1, but it still qualifies as money because it can be turned into cash eventually.

Credit: Pixabay/ Pexels

How Much Cash Actually Exists?

Let’s start with the most basic category of money: M0, the physical cash in the wallets of people around the globe. It isn’t easy to nail down the worldwide M0 supply due to constantly fluctuating totals and exchange rates, but according to a 2022 report from Visual Capitalist, there’s roughly $8 trillion in cash circulating around the globe. (Although that total has been converted to USD, it accounts for all types of currency in all countries.) As for the total amount of cash circulating in the U.S. alone, the Federal Reserve estimates it at $2.3 trillion as of October 2024.

To put into perspective how relatively small these numbers are, the total wealth (as in net worth, not just cash on hand) of New York City residents alone is around $3 trillion — more than the amount of cash in the entire country. This wealth exists primarily in other forms of money and assets, including real estate and valuables, which we’ll delve into next.

Credit: JohnKwan/ Shutterstock

“Narrow Money” and “Broad Money”

Now that we’ve quantified the relatively small amount of physical cash that exists globally, let’s move on to more significant monetary amounts. As previously mentioned, M1 includes all of M0 plus checking deposits and other liquid financial assets. Visual Capitalist’s 2022 estimate puts the global total for M1 at a whopping $28.6 trillion. Meanwhile, the U.S. M1 supply was estimated at around $18.2 trillion in October 2024 by the Federal Reserve.

Unsurprisingly, the category with the widest scope, M2 or “broad money,” also contains the highest amount of money. It includes M0 and M1 plus savings deposits, mutual funds, and other less liquid assets. In 2022, Visual Capitalist estimated a total M2 value of $82.6 trillion worldwide. According to the October 2024 Federal Reserve report, the M2 value of the U.S. was roughly $21.3 trillion.

Credit: Worldspectrum/ Pexels

Does Crypto Count?

But wait — what about cryptocurrencies? Technically, digital currencies such as Bitcoin and Ethereum aren’t officially recognized as money by central banks, so they aren’t included in M1 or M2 measurements. However, some experts argue that cryptocurrencies could eventually impact those numbers. As of 2024, the global cryptocurrency market cap was around $3.5 trillion, with Bitcoin leading the charge at $1.9 trillion. These values are significant, and if they continue to rise, crypto could reshape the definition of “money” in the future.

Credit: savva_25/ Shutterstock

How Much Wealth Is in the World?

Money isn’t the same as wealth, but they do go hand in hand. As of September 2024, the global Gross Domestic Product (GDP) — the value of all goods and services — amounted to nearly $110 trillion. This differs from “money” because it measures economic activity, not just cash or monetary assets, but it’s still an important measure of wealth. The 2024 U.S. GDP alone was $24 trillion, making it the largest economy in the world.

It’s when we examine private wealth (of individuals and private entities) that we enter the big leagues. According to Visual Capitalist, the total net global private wealth — cash, stocks, bonds, real estate, business ownership, cars, jewelry, and other valuables — stood at around $454.4 trillion at the end of 2022. That value is expected to soar to an astonishing $629 trillion by 2027.

Speaking of wealth, let’s not forget about gold. Though it’s no longer tied to the value of currencies (and hasn’t been since Switzerland ended its gold-backed currency in 1999), it’s still a large element of the global wealth equation. As of 2024, the world’s total gold supply is valued at about $17.7 trillion.

Credit: Rawpixel/ Shutterstock

What’s the Total Global Debt?

Of course, debt is another major player in the worldwide economy. By most estimates, the total debt incurred globally by governments, corporations, and households was around $300 trillion as of 2022. The world, then, contains unimaginably large amounts of both wealth and debt, a combination that paints a nuanced portrait of the global economy.

Rachel Gresh
Writer

Rachel is a writer and period drama devotee who's probably hanging out at a local coffee shop somewhere in Washington, D.C.

Original photo by Estudio Gourmet/ Pexels

Eggs are a culinary superstar, widely used in kitchens around the globe thanks to their incredible versatility and nutritional value. American egg farmers alone produce an astounding 100 billion eggs each year, as the food is one of the most common ingredients for restaurant menu items and home-cooked meals alike. In fact, more than 90% of U.S. households have eggs in their refrigerator at any given time.

Be it fried, boiled, or used in baked goods, there’s much to appreciate about the humble egg. But eggs are more than just a breakfast staple — they’re also a marvel of nature with some fascinating traits. Whether you like them sunny-side up or over easy, here are some egg-cellent facts about this versatile food.

Credit: Melena-Nsk/ iStock

Egg Yolks Can Indicate the Hen’s Diet

The color of an egg yolk can vary from pale yellow to deep orange, a difference directly related to the hen’s diet. Hens that consume a diet rich in carotenoids — a class of pigments responsible for red, orange, and yellow hues found in plants such as corn and marigold petals — produce eggs with darker, more vibrant yolks. On the other hand, hens fed a diet low in carotenoids will lay eggs with paler yolks. While yolk color doesn’t significantly impact taste or nutritional value, many people find darker yolks more visually appealing.

Credit: mophojo/ iStock

The Color of the Shell Doesn’t Affect Nutrition

Eggshells come in a variety of colors, most commonly white and brown, but also blue and green in certain chicken breeds. Despite common myths, the color of an eggshell doesn’t influence its nutritional content or taste. The difference in shell color is simply a result of the breed of chicken that laid the egg. For example, white-feathered chickens with white earlobes typically lay white eggs, while brown-feathered chickens with red earlobes lay brown eggs. All eggs, regardless of shell color, provide the same essential nutrients, including protein, vitamins, and minerals.

Credit: Biserka Stojanovic/ iStock

Eggs Can “Breathe” Through Their Shells

An eggshell might look solid, but it’s actually porous, containing thousands of tiny pores. These pores allow the transfer of gases through the shell: Carbon dioxide and moisture leave the shell and are replaced by atmospheric gases including oxygen. This exchange of gases is crucial for the respiration of the developing embryo inside a fertilized egg. The unique porous quality of eggs is why they should be stored in their carton, as it helps maintain their quality by preventing the loss of moisture and protecting them from external odors and possible contaminants.

Credit: AtlasStudio/ Shutterstock

Older Eggs Are Easier to Peel When Boiled

If you’ve ever struggled to peel a hard-boiled egg, the egg’s age may be to blame. Fresh eggs have a lower pH level and a tightly bonded membrane, making the shell cling stubbornly to the cooked white. As eggs grow older, the pH level of the white increases and the membrane becomes less adhesive, making older eggs easier to peel after boiling. For the easiest peeling experience, eggs should be a week or two old (close to their expiration date) and plunged into ice water after boiling.

Credit: Oksana Mizina/ Shutterstock

Eggs Are Jam-Packed With Nutrients

Eggs are sometimes referred to as “nature’s multivitamin” by health professionals and nutritionists — and for good reason. A single large egg contains about 70 to 80 calories, 6 grams of protein, and a range of essential vitamins and minerals, including vitamin D, vitamin B12, selenium, and choline. Choline, a vitamin-like compound that’s also produced in the liver, is particularly important for brain health and development. One large egg provides about 147 milligrams of choline, a notable contribution toward the 400 to 550 milligrams recommended per day for adults and teens (roughly 27% to 37% of that target). Eggs are also one of the few natural sources of vitamin D, which supports bone health and immune function.

Credit: tsurukamedesign/ iStock

Eggs Age More Quickly at Room Temperature

Eggs age significantly faster when stored at room temperature compared to refrigeration. In just one day at room temperature, an egg can age as much as it would in a week in the fridge. Lower temperatures slow down the degradation of proteins and other components inside the egg, keeping it fresher for longer. As eggs age, their quality changes — the yolks and whites become thinner and more prone to breaking when separated. This explains why some recipes specifically call for fresh eggs, because the egg’s age can affect the outcome of the dish.

Credit: Diy13/ iStock

Eggs Purchased in the U.S. Must Go in the Fridge

In the U.S., commercially sold eggs are washed according to FDA regulations to remove contaminants, a process that also removes the natural protective coating known as the cuticle. This makes eggs more susceptible to bacteria, necessitating that they be refrigerated to maintain optimal safety and freshness. In fact, the USDA recommends eggs not be left at room temperature for more than two hours. To maximize their shelf life, eggs should be stored in their carton in the coldest part of the refrigerator, at 40 degrees Fahrenheit or below.

Some other places, including various European countries, don’t wash eggs the way they’re washed in the U.S., so the cuticle remains on the shell. This allows the eggs to be safely stored at room temperature.

Kristina Wright
Writer

Kristina is a coffee-fueled writer living happily ever after with her family in the suburbs of Richmond, Virginia.

Original photo by ABO PHOTOGRAPHY/ Shutterstock

The human body is a complex biological system, and it can’t function without certain vital organs such as the brain, heart, and lungs. But not all organs are essential; in fact, some can be removed with limited complications or drawbacks. These organs are sometimes vestigial remnants from our ancestors that have evolved to have little to no use today. Other expendable organs provide some benefits but can still be removed or replaced without causing harm. Here are five organs the human body can live without.

Credit: Kateryna Kon/ Shutterstock

Appendix

The appendix is a small, narrow pouch that hangs off the digestive tract on the right side of the abdomen. The Cleveland Clinic estimates 5% of the U.S. population will at some point suffer from a condition called acute appendicitis, in which the organ swells and can potentially burst, spewing toxins throughout the body. At the first signs of appendicitis, it’s common to remove the appendix through a procedure called an appendectomy, which leaves behind a small scar but usually has little impact on the body.

The medical community has researched the appendix quite extensively to determine its purpose. For many years, experts believed the organ evolved to be entirely vestigial; it was theorized to have been used by our primate ancestors to help them digest leaves. But recent studies suggest the appendix may actually perform a useful, albeit inessential function. 

Some newer theories suggest the appendix acts as a host for good bacteria, which are essential for proper gut function. If true, it means the organ can help replenish the colon when its good bacteria numbers get low. Another theory is that the appendix may actually have a negative immune effect, as it’s considered a potential risk factor for developing ulcerative colitis, an inflammatory bowel disease. While scientists are still studying the purpose of the appendix, what we do know is that you can live a perfectly healthy life without it.

Credit: transurfer/ Shutterstock

Spleen

The spleen is an important part of the body’s immune system, but if it’s removed, other organs can step in and perform its duties. The spleen, which is roughly the size of a fist, is located in the upper-left portion of the abdomen behind the left set of ribs. It works hard to keep the body healthy, as it contains white blood cells that can be used to fight off infection. The spleen also helps control and manage the levels of the various types of blood cells in your body and filters blood to remove old, damaged blood cells. 

If the spleen ruptures — for instance, due to blunt force trauma to the abdomen — you’ll likely need to have it removed. In addition, an enlarged spleen may accidentally start to filter out healthy blood cells, which can cause serious problems such as anemia. Thankfully, the spleen can often be removed with a minimally invasive keyhole surgery called a laparoscopic splenectomy, which doesn’t require any large cuts. Once the spleen is taken out, the liver steps up to fill the void, as it can also manage and filter blood cells.

Adenoids

Adenoids are lumps of tissue located behind the nose and above the roof of the mouth. These tiny organs perform a key function in children, trapping harmful bacteria that would otherwise be swallowed and cause illness. But as we get older, our adenoids play a less vital role: They shrink and can even disappear by age 13, and by adulthood they’re essentially vestigial.

The downside to having adenoids is that they can become infected and swollen, which may develop into a chronic condition for some people. This may obstruct your ability to breathe properly, cause issues with the sinuses, and even contribute to worsening cases of sleep apnea and loud snoring. Should this happen, you can always have them removed with an adenoidectomy, a relatively simple surgery that people generally recover from in a matter of days or weeks.

The procedure can be a no-brainer in adulthood, by which time the adenoids no longer serve a purpose. But even children can benefit from adenoid removal, as it can lead to clearer breathing and fewer sinus infections. Adenoids are a very small part of the immune system, and the body has plenty of other ways to stave off illness.

Credit: marvinh/ iStock

Gallbladder

The gallbladder is a pear-shaped organ located underneath the liver. Its primary use is to store and — when the time is right — release bile, a digestive fluid produced by the liver that helps break down fatty foods. After you eat, bile drains from the gallbladder and is carried over to the small intestine through the biliary tract. Once it reaches the small intestine, the bile helps break down any fats you’ve consumed. This process leaves the gallbladder drained, though it’ll fill up with bile again as the liver continues to produce more.

Unfortunately, the gallbladder is susceptible to several diseases such as the development of gallstones — pebble-like objects that can cause pain and inflammation. People may also experience a blockage within the gallbladder, and in rare cases, can develop gallbladder cancer. Should the need arise, you can undergo a cholecystectomy to have the gallbladder removed. 

Without a gallbladder, bile simply skips over the storage process and flows directly from the liver into the digestive system. The main potential drawback is that it can be more difficult to digest fats, so people without a gallbladder are generally advised to avoid greasy foods. The Cleveland Clinic suggests that fat calories should take up no more than 30% of your diet if you’re lacking a gallbladder. This shouldn’t be too drastic a change for most people, as dietary guidelines for the general population recommend getting 20-35% of your daily calories from fat.

Credit: Explode/ Shutterstock

Colon

The colon is a long tube located at the end of the digestive tract and is part of the large intestine. Its purpose is to receive digested food from the small intestine, absorb nutrients and water, and finally pass the remaining waste along to the rectum. Despite these important functions, it’s entirely possible to live normally without your colon. The removal procedure is called a colectomy, which is usually performed to treat existing diseases such as cancer or to prevent potential future ailments from forming.

Recovery from a colectomy isn’t immediate, as it usually requires several weeks on a liquid-only diet before reincorporating solid foods. The good news is those who’ve undergone a colectomy have reported a better quality of life afterward. In a 2015 study, 84% of respondents claimed their lives improved after having their colons removed.

As part of a colectomy, a surgeon may connect the small intestine directly to the anus, thus allowing the patient to expel waste normally. Another option is to connect the intestine to an opening created in the abdomen, where a temporary or permanent colostomy bag can be attached to collect waste. While this latter option may take some getting used to, those who’ve undergone colectomies are able to acclimate and live a healthy life even without a colon.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Ground Picture/ Shutterstock

Since ancient humans first polished reflective stone, mirrors have played a sizable role in history. Mirrors feature prominently in fairy tales, science fiction, and fantasy stories, but they’ve also helped us gaze at distant planets, moons, and stars while also aiding in our exploration of the subatomic world. Mirrors form the very foundations of today’s most advanced technologies, and, of course, they play a role in how we perceive ourselves. 

The very concept of a mirror predates the technology itself, as ancient hominins likely stared at their reflections in the still waters of pools, ponds, and lakes. But as our understanding of mirrors has grown, so has their usefulness. It’s not a stretch to say the modern world wouldn’t be possible without the humble yet fascinating mirror.

Credit: Mara Fribus/ iStock

The First Mirrors Weren’t Made of Glass

Although mirrors work thanks to complicated physics and a dash of quantum mechanics, these objects have actually been around for thousands of years. Crystal-clear water notwithstanding, the first mirrors were likely made in Anatolia (modern-day Turkey) around 8,000 years ago from obsidian, a kind of reflective volcanic glass. In the following millennia, artisans dabbled with different polished metals, such as bronze, copper, tin, and silver. Silver, along with cheap aluminum (introduced toward the end of the 19th century), became the metals of choice for their high reflectivity, meaning they reflect nearly all light back to the viewer.

Of course, a white piece of paper also reflects all light in the visible spectrum, so why aren’t mirrors white? Due to its ultra-smooth surface, a mirror produces a “specular reflection.” This means the angle at which light hits the mirror (known as the angle of incidence) forms an equal angle when the light is reflected (the angle of reflection). A white sheet of paper, on the other hand, has surface imperfections that create a “diffuse reflection,” meaning the light scatters in all directions, which explains why mirrors show a reflection and paper doesn’t.

Some factors can alter reflections, such as the shape of the mirror (think of funhouse mirrors), the materials used to make the mirror, as well as the surrounding atmosphere. But under normal conditions, your mirror follows the above sequence of optical physics, known more broadly as the law of reflection.
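For anyone who wants the textbook statement, the law of reflection is usually written as an equality of angles measured from the surface normal, and a vector form pins down exactly where a reflected ray ends up (standard physics notation, not anything specific to this article):

\theta_{\text{incidence}} = \theta_{\text{reflection}}, \qquad \mathbf{r} = \mathbf{d} - 2\,(\mathbf{d} \cdot \hat{\mathbf{n}})\,\hat{\mathbf{n}}

Here \mathbf{d} is the direction of the incoming light, \hat{\mathbf{n}} is the unit normal to the mirror’s surface, and \mathbf{r} is the direction of the reflected ray.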

To achieve this reflective physics in modern mirrors, a sheet of glass is cleaned and polished to remove impurities, and then a coating of silver or aluminum is applied along with a chemical activator so the metal of choice bonds with the glass. Light enters through the glass, reflects off the silver or aluminum, and travels back to you, so you can finally check your hair or see if anything is stuck in your teeth.

Credit: Fly View Productions/ iStock

Don’t Like How You Look in Photos? Blame Mirrors

Mirrors don’t actually reproduce the exact version of you that other people see. Imagine standing in front of your reflection: The mirror doesn’t flip you left to right, as you might expect, but it does flip your image along the z-axis, which represents depth. This z-axis inversion is a bit like turning a glove inside out, and it’s why words on a T-shirt appear backwards in a mirror.
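In linear-algebra terms (a brief aside using standard 3D coordinates rather than anything from the original article), a flat mirror lying in the xy-plane applies the map

(x,\, y,\, z) \;\mapsto\; (x,\, y,\, -z)

which is simply multiplication by the matrix \mathrm{diag}(1, 1, -1): left-right and up-down are untouched, and only depth is inverted, producing the “glove turned inside out” effect described above.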

This phenomenon is why many people don’t like how they look in photos. Because no one’s face is perfectly symmetrical, there are subtle differences between the face you see in the mirror and the face a camera captures on its sensor. However, when you take a selfie on a smartphone, the camera flips the image like a mirror would, so it appears more natural or familiar to us. Once you’ve taken the photo, the image that’s saved to your device is inverted so the final product looks the way other people (and the camera lens) see you.

Credit: Katelyn Perry/ Unsplash+

Mirrors Are Also Surprisingly High-Tech

Mirrors have often been associated with vanity (think of the Evil Queen in Snow White), but in reality, their usefulness far exceeds self-admiration. In fact, mirrors are the backbone of many scientific fields. NASA’s James Webb Space Telescope — the most advanced space telescope ever made — uses an array of mirrors, or reflectors, to gather light and create stunning images of the universe. Many detectors at the CERN particle accelerator in Switzerland, the laboratory responsible for breakthrough discoveries such as the Higgs boson, use a system of mirrors for reflecting light. Oh, and microscopes? Yeah, we have mirrors to thank for them too.

Then there are lasers, which rely on mirrors for building up and manipulating a focused beam of light. In December 2022, the National Ignition Facility at Lawrence Livermore National Laboratory in California used a complicated series of lasers and mirrors to achieve the first net-gain fusion reaction — the same physics that powers the sun — in hopes of one day providing limitless clean energy. So not only have mirrors helped form parts of today’s world, they’re also at the forefront of tomorrow.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by WLDavies/ iStock

The United Kingdom — or, to use its full name, the United Kingdom of Great Britain and Northern Ireland — is a sovereign country comprising mainland Great Britain (England, Scotland, and Wales) and Northern Ireland, as well as numerous smaller islands and overseas territories. While most people know that England and Scotland are countries (although not sovereign ones), Wales is a bit more of a conundrum to the outside world. Is it a country? Or maybe a county, or a region? Or even, as some would say, a principality? Well, there is a clear answer to this question, although it’s helpful to have some background to get the full picture.

Credit: Polly Chong/ Shutterstock

First, What Is the United Kingdom?

Delving into the sprawling geopolitical history of the United Kingdom isn’t for the faint of heart. It’s a complex, muddled, and endlessly colorful tale of kings and queens, unions and betrayals, and war and peace that spans many centuries. Historians can even place the origins of the United Kingdom as far back as the Anglo-Saxon King Æthelstan, who in the 10th century secured the allegiance of neighboring Celtic kingdoms. These allegiances, however, didn’t last.

Until the early 17th century, England and Scotland were entirely independent kingdoms. The Principality of Wales, however, had been under the control of English monarchs ever since the 1284 implementation of the Statute of Rhuddlan, which legally assimilated Wales into the Kingdom of England. Then, in 1707, the English and Scottish Parliaments passed the Act of Union, leading to the creation of a united single kingdom to be called Great Britain. 

After that came the Act of Union of 1801, which formally united Great Britain (which included England, with Wales still annexed to it, and Scotland) with Ireland under the name United Kingdom of Great Britain and Ireland. The partition of Ireland in 1921, which saw the Emerald Isle split into Northern and Southern Ireland, resulted in the formation of the current United Kingdom of Great Britain and Northern Ireland. 

Credit: Ian Shaw/ Alamy Stock Photo

Is Wales a Principality?

So where does Wales stand in all this? If you walked the streets of an English city today and asked people what they would label Wales, there’s a strong possibility that at least one English person would call it a principality. This notion has historical context, as the English King Edward I conquered Northern Wales (though not the whole of Wales as we know it today) and made it a principality in 1284. The union between Wales and England lasted a long time, whereas Scotland, for example, was an independent kingdom through the Middle Ages. This partially explains why there’s more confusion surrounding the status of Wales as a country than there is with Scotland.

Then there’s the existence of the Prince of Wales — a title formerly held by King Charles III, the current monarch of the U.K., and now by his son, Prince William — which can lead people to believe Wales is a principality (a state ruled by a prince). The media, too, sometimes refers to Wales as such. But Wales as a whole is not, and has never been, a principality, at least as it’s defined territorially today; only North Wales was an actual principality of the English crown. 

Credit: CHUNYIP WONG/ iStock

Wales Is a Country

Wales is, in fact, a country. To be more specific, it’s a country within a country. The United Kingdom of Great Britain and Northern Ireland is a sovereign state, but the nations it’s made up of are also countries in their own right. It’s interesting to note, however, that only the United Kingdom (as a whole) is considered a member of the United Nations. You won’t find England, Wales, Scotland, or Northern Ireland listed as member states, as they’re considered part of the U.N. as constituent nations of the United Kingdom.

Today, Wales has its own government and legislature — the Welsh Parliament is called the Senedd Cymru — but Wales also falls under the purview of the government of the United Kingdom of Great Britain and Northern Ireland. Each governing body has power and responsibility over different aspects of the country. For example, Senedd Cymru is responsible for agricultural, economic, educational, and environmental matters, among other things, while the U.K. government deals primarily with defense, foreign affairs, immigration, and crime. Beyond the technicalities, Wales is also very much a country in terms of cultural identity, with its own language, national symbols, rich traditions, and more.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by PRESSLAB/ Shutterstock

Nature is filled with an extraordinary assortment of patterns that are fundamentally unique, defying replication even under seemingly identical conditions. These unique patterns emerge through a series of complex interactions that can involve environmental factors, random variations, and, in the case of living things, genetic information, all helping to form these one-of-a-kind designs. 

Here are some of the most notable examples of the unique patterns that occur in nature, from our own fingerprints to individual snowflakes to the spots and stripes on wild animals. 

Credit: Meg Jenson/ Unsplash

Fingerprints

Fingerprints are the quintessential example of a unique natural pattern. Formed during fetal development, these intricate ridges and whorls result from a combination of genetic factors and developmental conditions. The complex interactions between genetic instructions and environmental influences (even down to the chemical environment inside the womb) create patterns so distinctive that even identical twins don’t have the same fingerprints. For this reason, fingerprints have been used to identify people for more than a century, especially in criminal investigations. The first U.S. criminal trial to use fingerprints as evidence took place in 1910, and fingerprinting remains a vital part of forensic analysis.

Credit: Aaron Burden/ Unsplash

Snowflakes

Snowflakes are another naturally occurring phenomenon that’s famously distinctive. Snowflakes form when water vapor travels through the air and condenses on a particle, creating a slow-growing ice crystal. They can start off quite similar to each other, but nearly imperceptible changes in temperature and the amount of water in the air change how the molecules act and how they condense. Because of this, no matter how many billions of snowflakes fall from the sky, no two will ever be the same.

Credit: Bruno Almeida/ Pexels

Certain Animal Coat Patterns

Some animals possess unique markings that are genetically influenced but randomly distributed, serving to distinguish one animal of a certain species from another. Zebras, for example, may look very similar at first glance, but no two zebra stripe patterns are exactly alike. Scientists believe zebra stripes may have the social purpose of allowing the animals to recognize each other, while also possibly confusing predators such as lions and perhaps even helping regulate body temperature. Other animals with unique coat patterns include giraffes, tigers, and leopards. Their coats are the equivalent of human fingerprints, helping scientists identify individual animals simply by their unique patterns.

Credit: Trevor Scouten/ Shutterstock

Humpback Whale Flukes

Scientists have recently discovered that unique marks on the underside of humpback whale flukes — the flat, horizontal lobes that make up each side of their tails — are useful identifiers just like human fingerprints. Each whale tail has unique patterns and coloration, which has proved invaluable to researchers, who can identify the animals by looking at the fluke when it rises above the water (which happens almost every time a whale starts to dive). Unique scarring and even barnacle distribution on the whale’s tail can make the identification process even easier.

Credit: Yaya Photos/ Shutterstock

Blood Vessel Networks

The intricate network of blood vessels in human and animal bodies alike develops in a singular manner for each individual, making these networks as unique as fingerprints. In humans, vascular pattern recognition, also known as vein pattern authentication, is a fairly new form of biometric identification. It uses near-infrared light to scan the patterns of subcutaneous blood vessels in a hand or finger, verifying the identity of the scanned individual. The uniqueness of blood vessel networks has also been used to distinguish between individual animals. Rodents, for example, can be identified through the distinct blood vessel patterns of their ears, providing a quick, noninvasive, and painless way of telling these diminutive creatures apart. 

Credit: AnnaVel/ Shutterstock

Eye Structures

The eyes, so they say, are the windows to the soul. As it happens, eyes are indeed unique identifiers in humans and other animals. Retinal scanning as a form of identification has been a staple of science fiction for decades but has also been utilized for security purposes in real life by government agencies including the FBI, CIA, and NASA. This form of identification works due to the aforementioned unique blood vessel patterns, which can be seen on a person’s retina. In fact, the retina is the only part of the human body where blood vessels can be viewed directly, without the need for an additional apparatus.

Our eyes offer another easy means of identification: the iris. The intricate arrangements of pigments and strands in an iris create a pattern so unique that it, too, can be used as a reliable biometric identification method. Iris recognition made the news in 2016 when it was revealed that the FBI had collected nearly 430,000 iris scans from people who’d been arrested — a highly controversial move, but one that clearly demonstrates the potential and efficacy of iris scanning for identifying humans.

Tony Dunnell
Writer

Tony is an English writer of nonfiction and fiction living on the edge of the Amazon jungle.

Original photo by Suzy Hazelwood/ Pexels

Since its creation in the 1930s, Scrabble has grown into a staple of family game nights and the source of spirited competitions around the world. In fact, according to Mattel, more than 165 million copies of the game have been sold in 120 countries since 1948. 

Each game of Scrabble depends not only on your lexical knowledge but also on the luck of the draw; sometimes you’ll grab the perfect tiles to play on a triple word score, while other times you’re left holding five Es and two Os with no idea how to use them. Not only is Scrabble known for providing hours of family fun, but its history is also full of fascinating tidbits that only add to its mystique. Here’s a look at some of the most spellbinding facts about this classic spelling game.

Credit: Yvonne Hemsey/ Hulton Archive via Getty Images

Scrabble Wasn’t Always Called “Scrabble”

During the Great Depression, architect Alfred M. Butts, like many other Americans, found himself unemployed and with some extra free time. He decided that creating a new game could provide a helpful distraction from the bleak economy. In 1931, he wrote a “Study of Games,” an essay analyzing the popularity of three distinct types of games: board games, number games, and letter/word games. 

In this treatise, Butts suggested the proper way to play word games was “not with a jumble of letters but with a mixture so proportioned that the individual letters will occur in the same frequency as they do in normal word formation.” This idea was inspired by Edgar Allan Poe’s “The Gold-Bug,” a short story in which a character solves a cipher based on the popularity of English letters.
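Butts is said to have done his tallying by hand, but the underlying idea (count how often each letter appears in ordinary English text, then proportion the tile set to match) is easy to sketch. The short Python example below illustrates only that counting step; the function name and sample sentence are invented for the demo, and it is not a reconstruction of Butts’ actual analysis.

```python
from collections import Counter

def letter_frequencies(text):
    """Return each letter's share of all the letters in a text sample."""
    letters = [ch.upper() for ch in text if ch.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {letter: counts[letter] / total for letter in sorted(counts)}

# Toy example; a real tally would use a much larger body of text.
sample = "The quick brown fox jumps over the lazy dog"
print(letter_frequencies(sample))
```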

As a lover of crossword puzzles, Butts sought to create a similar game that incorporated his theories about letter frequency. He first developed Lexiko, a game wherein players selected nine random letter tiles to form words. Using his architectural knowledge, he manufactured each of the 100 tiles by hand. 

By August 1934, however, Butts had only sold 84 sets of Lexiko for a net loss of $20.43 (around $480 today). He tinkered with the rules and came up with a new variant called Criss-Cross Words, in which each letter was assigned a point value and each player’s hand decreased from nine tiles to seven. Much like Lexiko, Criss-Cross Words failed to attract interest from any major manufacturer. Fortunately, Butts’ luck was about to change.

Credit: JHVEPhoto/ iStock Editorial via Getty Images Plus

Macy’s Helped Popularize Scrabble

While things initially looked bleak for Butts’ word game, that all changed in 1947, when a man named James Brunot reached out to buy the rights. Butts obliged in exchange for royalties, and Brunot got to work on making a few changes. He adjusted the rules ever so slightly, implementing a 50-point bonus for playing all seven tiles in your hand at once, and he also changed the colors of the board to be more reminiscent of those we know today. Brunot finally renamed the game “Scrabble” — a word meaning “to scratch or scrape,” according to Dictionary.com — a name he chose “out of thin air largely because it was a name that had never been copyrighted,” as the Los Angeles Times reports.

Brunot and his friends began working out of an abandoned Connecticut schoolhouse to produce 12 Scrabble boards every hour. Yet the team only managed to sell around 200 sets per week, resulting in an overall financial loss. Then, according to an unconfirmed but widely believed legend, Brunot got the break of a lifetime in 1952. 

In the book Word Freak by Stefan Fatsis, the author details a story in which then-Macy’s chairman Jack I. Straus played a game of Scrabble while on vacation. Straus was delighted by the game and also upset to learn it wasn’t stocked at Macy’s flagship store in New York City. Upon his return from vacation, Straus placed a massive Scrabble order, which triggered other retailers to do the same. In a fortunate turn of events, Brunot found himself producing thousands of Scrabble sets each week. More than one million copies of Scrabble were sold in 1953, and 3.8 million sold the following year.

Credit: Teacher Photo/ Shutterstock

The Longest Scrabble Word Can Score You More Than 1,700 Points

The goal of Scrabble is to net the highest score possible. Amateurs often surpass 100 points per game, while experts average between 330 and 450 points. But in theory, it’s possible to score more than 1,700 points with a single word, though that requires a highly unlikely set of circumstances. The word in question is “Oxyphenbutazone,” the name of an anti-inflammatory drug. As the word is 15 letters long, it’s impossible to have all those tiles in your initial seven-tile hand. This means not only would you need to draw the right set of letters, but certain tiles would also need to already be on the board in just the right places.

Scrabble aficionado Dan Stock calculated that playing “OXYPHENBUTAZONE” could score you 1,778 points. According to his strategy, you’d need to be holding the tiles O, Y, P, B, A, Z, and E. You’d also need to have already placed eight specific tiles on certain squares on a standard 15 x 15 Scrabble board: X, H, E, N, U, T, O, and N. The following words also would need to be strategically placed on the board: “PACIFYING,” “ELKS,” “REINTERVIEWED,” “RAINWASHING,” “MELIORATIVE,” “ARFS,” and “JACULATING.”

If you managed to recreate that exact scenario, “OXYPHENBUTAZONE” would score you 1,458 points on its own (thanks to all the word score bonuses). You’d also get 50 points for playing your entire hand and 270 points from creating seven new words running from left to right: “OPACIFYING,” “YELKS,” “PREINTERVIEWED,” “BRAINWASHING,” “AMELIORATIVE,” “ZARFS,” and “EJACULATING.” All told, that’s 1,458 + 50 + 270 = 1,778 points. Although Stock was the first person to calculate this strategy, there have been more recent proposals that could theoretically score as many as 1,784 or 1,786 points with the right placement of “OXYPHENBUTAZONE.” However, the specific circumstances needed for those high scores have never arisen in actual gameplay, to anyone’s knowledge.

Credit: Kerry Taylor/ Alamy Stock Photo

There Are More Than 280,000 Words in the Collins Scrabble Dictionary

Collins is the producer of Official SCRABBLE Words — a book used by English-language Scrabble players to determine which words are legal in competitive play. Nearly 2,000 new words were added in 2024, marking the first major update since the 2020 edition, which contained 279,073 words. These newer additions include slang words such as “yeet,” culinary terms such as “birria,” and gaming lingo such as “esport.” This updated version of Official SCRABBLE Words has served as the gold standard for tournaments since January 1, 2025.

While the Collins Scrabble dictionary remains the official source for tournament play, Hasbro — the current manufacturer of Scrabble — works with Merriam-Webster to produce The Official Scrabble Players Dictionary. This text is comparatively limited in what words are considered legal, as there are a little more than 100,000 words included. This book was most recently updated in 2022, with the addition of 500 new words including shorthand terms such as “guac” and playful verbs such as “adulting.”

Credit: son Photo/ Shutterstock

Scrabble Has Been Turned Into Multiple TV Shows

Starting in 1984, Scrabble was adapted from the dining room table to the small screen in the form of a game show hosted by Chuck Woolery. The TV version revised the board game slightly: Instead of playing words from your hand, the game was played more like a crossword puzzle. In round one, two contestants were given a punny crossword-style clue and a random letter to build on to guess the word. The winner advanced to the Sprint Round, where players guessed words against a clock. This original version of Scrabble earned a Daytime Emmy nomination in 1988 and was briefly revived in 1993 with Woolery reprising his role as host.

A new Scrabble TV show debuted on the CW Network on October 3, 2024, with Raven-Symoné as host. Unlike the 1980s version, this adaptation adheres a bit more closely to the actual rules of Scrabble. In the first round, contestants are tasked with solving anagrams based on a clue. But in the second round, players alternate making actual Scrabble moves as they would in the board game. The difference here, however, is that both contestants use the same set of tiles to make those words. The final round involves the most traditional Scrabble gameplay, where contestants play their personal hand of tiles in hopes of surpassing 200 points to win.

Credit: John Benitez/ Unsplash

Competitive Scrabble Has Had Some Major Cheating Scandals

The world of competitive Scrabble prides itself on maintaining the integrity of the game, but some participants have let the pressure drive them to cheating. One of the first major accusations of competitive cheating came during the 2011 World Scrabble Championships, where a player was nearly strip-searched to find a missing “G” tile before authorities concluded there wasn’t enough evidence to take things that far. One year later, a player was found to have been concealing blank tiles for use at opportune times. After being confronted, the player admitted to stealing blank tiles and was subsequently ejected from the competition.

In 2017, the competitive Scrabble world was rocked yet again when Allan Simmons, the former U.K. national champion and coauthor of Official Scrabble Words, was banned from competitive play for three years amid accusations of cheating. Multiple competitors claimed they saw Simmons peeking into the bag and returning less helpful tiles in exchange for ones that were more valuable. Simmons denied any wrongdoing, but the ban still held.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by PaKApU/ Shutterstock

Popcorn is such a common, simple snack that we rarely stop to think too much about it. Unlike the sweet yellow corn we eat on the cob, popcorn comes from a variety of corn called Zea mays everta. This type of corn possesses a few unique traits that quite literally make it pop.

Popcorn kernels have a hard outer shell and high levels of starch and moisture inside. As the insides heat up, they’re forced through the exterior into the fluffy treat we know and love. Other types of corn can slightly burst open when heated; even some grains such as quinoa and sorghum can “pop.” But none of them come close to the size, crunch, and timeless appeal of popcorn — which is probably why archeological evidence suggests people have been popping it for more than a thousand years. Here are a few more interesting facts about popcorn.

Credit: virtustudio/ iStock

Popcorn Has Two Main Shapes: “Mushroom” and “Snowflake” 

At a quick glance it may seem like all popcorn looks the same, but it actually pops into two distinct shapes: mushroom and snowflake. Mushroom-shaped popcorn is rounded and dense, making it ideal for coating in caramel or chocolate since it’s less likely to crumble under pressure. 

Snowflake-shaped popcorn, on the other hand, has an airy, irregular shape with plenty of crevices and a light crunch — optimal for butter and salt to cling to. The particular shape a kernel produces isn’t random; it’s determined by genetic structure and the popping technique used. Popcorn makers often choose a shape depending on how their popcorn will be flavored or packaged.

Credit: New Africa/ Shutterstock

People Used To Eat It for Breakfast

It may seem strange now, but in the late 19th and early 20th centuries, popcorn was commonly served as breakfast cereal. Recipes in early 1900s issues of Good Housekeeping magazine suggested grinding up popped corn and serving it with milk and fruit. Meanwhile, the U.S. Department of Agriculture suggested boiling it with water and serving it like oatmeal.

In her 1916 cookbook Popcorn Recipes, Mary Hamilton Talbott recommended soaking popcorn in cold water overnight, then cooking it in milk in the morning — perhaps a precursor to overnight oats. Despite the plethora of recipes, popcorn cereal eventually gave way to more refined packaged options, and by World War II, its time as a breakfast staple came to an end.

Credit: Viniciosferreira/ Shutterstock

It’s Foamy When It First Pops

When popcorn first pops, it isn’t crunchy — it’s actually foamy. As the kernel heats up, the moisture inside turns to steam. Eventually, the pressure from the steam causes the outer hull to burst open, and as the starches inside quickly expand, the fluffy structure we know as popcorn is formed. 

In the instant after popping, though, the popped kernel is more like a gelatinous foam than a crisp snack. As it’s exposed to the air, the foam almost immediately cools and hardens into the light, crunchy texture we know and love. 

Credit: iStock_Oles/ iStock

The Great Depression Led To Movie Popcorn

Popcorn and movies may be an inseparable duo now, but it wasn’t always that way. In fact, when U.S. movie theaters first emerged in the early 1900s, they even outright banned the snack because employees didn’t want to deal with the mess. 

By 1930, as silent films gave way to “talkies,” theater attendance soared. Popcorn vendors stationed themselves outside theaters, and since customers increasingly brought the snack in with them, theater owners began leasing lobby space to the vendors. Eventually, many theaters realized selling their own popcorn could be crucial for helping their business survive the Great Depression — and they never looked back. By the mid-1940s, more than half the popcorn consumed in the U.S. was eaten at the movies.

Credit: Piotr Wytrazek/ iStock

Popcorn Was Involved in the Invention of the Microwave

During World War II, radar technology was instrumental in helping Allied forces achieve victory. After the war, U.S. aerospace and defense company Raytheon continued its research into radar and the magnetrons used in it. While working on a radar system one day, engineer Percy Spencer noticed a candy bar in his pocket had melted. The next day, he brought in popcorn kernels and, after placing them near the device, watched them pop. 

He continued experimenting with other snacks, including a much messier egg, and confirmed that microwaves could indeed cook food. Raytheon patented the idea, and in 1947, the first microwave oven was released — as was Spencer’s patent for a process to pop entire corncobs whole, although that never really took off.

Credit: Natalya Stepowaya/ Shutterstock

It Used To Be Called “Pearl”

By roughly the mid-1820s, popcorn was a beloved snack throughout the Eastern United States — but it wasn’t called popcorn at that time. It was first sold as “pearl” or “nonpareil,” the latter of which is now associated with a flat chocolate candy topped with sprinkles.

Throughout the 1800s, the popularity of the snack spread; Henry David Thoreau even wrote of “popped corn” in 1842. The name also started to evolve, and by 1848, the word “popcorn” — said to be derived from the noise the kernels made when they burst open — was included in linguist John Russell Bartlett’s Dictionary of Americanisms.

Nicole Villeneuve
Writer

Nicole is a writer, thrift store lover, and group-chat meme spammer based in Ontario, Canada.

Original photo by FotografiaBasica/ iStock

What’s the capital city of New York? California? Nevada? If you were to answer New York City, Los Angeles, and Las Vegas, we wouldn’t blame you, but you’d be wrong. It would be reasonable to assume a state’s capital city would be its most prominent or populous metropolis. However, this is the exception rather than the rule.

From geographic location to historical significance, there are many reasons why a state might select a specific city as its capital. While there are commonalities across the board, each state capital has a unique history of how it came to be. From the oldest capital city (Santa Fe, New Mexico, which has been the capital since 1610) to the youngest (Oklahoma City, Oklahoma, which became the capital in 1910), here’s how the U.S. capital cities came to achieve their prominent status. 

Credit: omersukrugoksu/ iStock

Location, Location, Location

Far and away the most common theme in choosing a state capital is a city’s central location, which makes a lot of sense when you consider the purpose of a capital. A capital city houses the state’s governmental body, meaning legislators frequently travel there from all parts of the state whenever the government is in session. The city also serves as a representative of the entire population, and a central location makes it more accessible to all of the state’s residents. Capitals chosen largely for their centrality include Richmond, Virginia (in 1782), Columbia, South Carolina (1786), Columbus, Ohio (1816), and Salt Lake City, Utah (1896).

Credit: TonyBaggett/ iStock

Wartime Decisions 

When America was fighting for independence, the 13 colonies along the East Coast discovered another notable perk of capital cities located in the middle of the state rather than on the coast: the protection afforded by inland locations during wartime. Central locations didn’t just have political benefits; they also made it more difficult for enemy forces (in this case, the British) to seize these seats of government.

During the Revolutionary War, Delaware moved its state capital from New Castle to Dover because the legislative body considered the latter city safer from attacks by British raiders traveling along the Delaware River, which was accessible from the Atlantic Ocean via the Delaware Bay. Dover was officially established as the capital of Delaware in 1781. North Carolina followed suit in 1788, opting to move its capital city from New Bern westward to Raleigh, where it was far less susceptible to British attacks from the Atlantic Ocean.

The Civil War also played a role in the establishment of some state capitals, though for different reasons. The capitals established after the Civil War were largely chosen to end lingering territorial and social disputes. The establishment of Topeka, Kansas, set the course for this kind of resolution. Before Kansas achieved statehood in 1861, violence broke out between pro- and anti-slavery factions in a period beginning in 1854 nicknamed “Bleeding Kansas.” Topeka represented the Free State side. When these territorial fights ended in Topeka’s favor the same year Kansas achieved statehood, the city became the capital.

Arizona’s capital was also influenced by the Civil War, which had split the territory into two halves: Confederate sympathizers and Union supporters. Tucson and Prescott were the capitals of the Confederate and Union halves, respectively. After the end of the war and the reunification of Arizona’s two territories, the capital ping-ponged between Tucson and Prescott until territorial legislators decided to adopt a city located roughly halfway between the two. Thus, Phoenix was established as Arizona’s capital city in 1889.

Credit: Christine_Kohler/ iStock

Historic Hubs

Most of us would probably guess that a capital city should be the state’s most economically significant metropolis, and for many U.S. states, this holds true. However, in some cases, capital cities that were once economic strongholds no longer hold that position today. The state of New York is a perfect example of this economic shift. When Albany was named the state capital in 1797, the city had already been a trade and military hub during the Revolutionary War. Despite New York City becoming the more economically significant city in the decades that followed, Albany remains the capital to this day.

Other state capitals chosen for their historic economic importance include Honolulu, Hawaii, which King Kamehameha III declared the island kingdom’s capital in 1850 after Honolulu Harbor became a major commerce and transportation hub between Hawaii and the mainland United States. When Hawaii was granted statehood more than a century later, in 1959, Honolulu was kept as the state capital.

Capital cities of landlocked U.S. territories such as Idaho, Montana, and North Dakota tended to be chosen from among the territory’s major commerce and trade hubs. Boise, Idaho, was established as the Gem State’s capital city in 1864 after miners and farmers began flooding into the Boise Basin, prompting the capital’s move from Lewiston, the then-territory’s first capital city.

Helena, Montana followed a similar path from a regular city to state capital after gold was discovered in the surrounding area. It was selected as the capital of the Montana Territory in 1875, and after Montana became a state, Helena was named state capital in 1894. Bismarck, North Dakota’s railroads and proximity to the Missouri River made it the natural choice for the capital of the Dakota Territory in 1883, and it became the capital of the state of North Dakota six years later when the territory split.

Other capital cities that were established for their status as a transportation or economic hub include Portland, Maine (in 1820), Jackson, Mississippi (1821), and St. Paul, Minnesota (1858).

Credit: Denis Kvarda/ Shutterstock

History and Natural Landscape

Other state capital cities were selected for their historical significance or natural landscape. Boston, Massachusetts, is an example of the former. Boston was already a major cultural hub in the early 1630s, making it the obvious choice for the capital of Massachusetts in 1632. Trenton became New Jersey’s capital city for similar reasons in 1790. Not only was the city close to other major metropolises, including New York City and Philadelphia, but it was also a historically significant site, since some of the earliest settlers landed in the area a century prior.

Some capitals were selected due to the area’s natural features. For example, Mirabeau B. Lamar, president of the Republic of Texas, chose Austin as the capital in 1839 for its climate and scenic beauty. Other preferences were more practical. Lawmakers in Louisiana recognized the economic significance of New Orleans but wanted a capital city safe from the regular flooding of the Mississippi River. Thus, they established Baton Rouge, located about 100 miles north of the port town, as the capital city in 1845. Santa Fe, New Mexico, the oldest capital city, was the obvious choice for this Southwestern state due to the ample water supply from the Santa Fe River.

Credit: PeskyMonkey/ iStock

Money and Political Influence 

As the old adage goes, “money talks,” and this was sometimes true when it came to establishing a state capital in the 18th and 19th centuries. Frankfort, Kentucky, though overshadowed by nearby cities Louisville and Lexington, became the Commonwealth’s capital when it outbid competing cities for the privilege in 1792. A Kentucky commission to establish a capital city had been founded that year with the goal of sorting out a geographically and financially suitable location. Frankfort landowner Andrew Holmes offered land, money, and building materials for a capitol building in Frankfort, sealing the deal for the commission.

Similarly, Abe Curry, a wealthy land and property owner in Nevada, helped establish Carson City as the capital by financing a courthouse building. As no other comparable cities had Curry’s level of monetary backing, Carson City was the obvious choice for the then-territory’s capital city.

When the money didn’t talk, political power did. Many state capitals were chosen simply because state lawmakers preferred them at the time. This was the case in Providence, Rhode Island (in 1831), Lansing, Michigan (1847), Nashville, Tennessee (1843), Sacramento, California (1854), Atlanta, Georgia (1868), and Denver, Colorado (1881).

Credit: Tudoran Andrei/ Shutterstock

By Public Vote 

In true democratic fashion, some states established their capital city either through a public vote or based on which city had the largest population. Of course, these two factors tend to go hand in hand: The greater the population of a city, the higher the likelihood that it would win a statewide vote on where to put the capital.

Before Hartford became Connecticut’s capital city, the small state technically had two capitals: Hartford and New Haven, dating back to before the Connecticut Colony and New Haven Colony merged. However, maintaining two capitals was financially and logistically arduous, prompting politicians to call for a public vote to determine which city would take over as the sole capital. Hartford won the popular vote in 1873. 

West Virginia’s capital city was also decided by popular vote. During the Civil War, the Appalachian Mountain state’s citizens were largely divided between Union and Confederacy supporters, resulting in the state capital moving from city to city to appease one party or the other. A popular vote was eventually held in 1877, granting Charleston its status as the state’s capital.

Another example is Oklahoma. Before it became a state, Guthrie was the territory’s capital city, and it remained so for a few more years after the Sooner State achieved statehood in 1907. Eventually, other cities were given the chance to compete with Guthrie for the status of capital city. Governor Charles Haskell held an election in the summer of 1910, and Oklahoma City was the overwhelming winner.

Melanie Davis-McAfee
Writer

M. Davis-McAfee is a freelance writer, musician, and devoted cat mom of three living in southwest Kentucky.