Original photo by Album/ Alamy Stock Photo

It’s strange to imagine a scientist achieving the widespread recognition and adulation enjoyed by movie stars and elite athletes, but Albert Einstein’s life was seemingly propelled by such contradictions. He was an undistinguished student (according to some accounts) who blew the doors off centuries of established Newtonian physics; a pacifist who encouraged the creation of the atomic bomb; and an inherently soft-spoken individual who became a reliable dispenser of timeless wisdom. Here are six quick-hitting facts about this legendary 20th-century luminary.

Einstein as a 14-year-old boy.
Credit: ullstein bild Dtl via Getty Images

Einstein’s Speech Was Slow to Develop in Childhood

Although he would eventually discover ways to communicate the far-reaching concepts percolating in his imagination, a young Einstein was slow to learn to talk properly. According to Walter Isaacson’s Einstein: His Life and Universe, the future physicist didn’t begin speaking until after turning 2 years old, and for several years after that would whisper words quietly to himself before saying them out loud. This behavior sparked concerns that he had an intellectual disability, with the family maid nicknaming him “der Depperte” — “the dopey one.” Nowadays, a child who is slow to pick up language but otherwise exhibits sound analytical thinking is sometimes said to have Einstein syndrome.

The total solar eclipse of May 29, 1919.
Credit: Science & Society Picture Library via Getty Images

Einstein Rose to Fame After a 1919 Eclipse Confirmed His Theory of General Relativity

Still largely unknown even after publishing a string of revolutionary papers in his “annus mirabilis” of 1905, Einstein was primed for another breakthrough after uncovering the equations to support his theory of general relativity in 1915. However, as Germany was entrenched in warfare with much of the rest of Europe, it took a notable act of international goodwill for English astronomers Arthur Eddington and Frank Watson Dyson to test the German physicist’s work. Their expeditions to examine the solar eclipse of May 29, 1919, confirmed Einstein’s prediction that gravity would cause light to “bend” around the sun, and the public revelation of those findings a few months later marked the beginning of Einstein’s ascension to the status of world-renowned genius.

Mathematical physicist Albert Einstein plays a violin in a music room.
Credit: Hulton Archive via Getty Images

Einstein Was an Enthusiastic Amateur Musician

When not immersed in mathematical minutiae, Einstein was known to unwind by playing the violin or piano. He reportedly traveled almost everywhere with his violin — although he owned several throughout his life, he nicknamed all of them “Lina” — and hosted regular Wednesday night chamber sessions during his years living in Princeton, New Jersey. So just how good was this master of the universe at his musical endeavors? He apparently struggled to stay in sync, but otherwise drew solid praise from acquaintances, who described his violin talents with comments ranging from “accurate but not sensuous” to “a good technique and an opulent tone.”

Albert Einstein sticking out his tongue.
Credit: Bettmann via Getty Images

The Famous “Tongue Photo” Was Shot on His 72nd Birthday

Known for an irreverent personality to match his astonishing brainpower, Einstein showcased his cheeky nature following an evening spent celebrating his 72nd birthday at Princeton University on March 14, 1951. Reportedly tired of dealing with the press that swarmed the event, the professor climbed into a car with two colleagues, stuck out his tongue in response to a request for one more photo, and zoomed off into the night. UPI photographer Arthur Sasse timed the shot perfectly, and whatever irritation Einstein felt at the moment the photo was taken, he liked the outcome enough to order nine prints to use for personal greeting cards.

Albert Einstein chatting with Israeli Prime Minister David Ben-Gurion.
Credit: Bettmann via Getty Images

Einstein Declined an Offer to Become President of Israel

After publicly supporting the Zionist movement (even though his relationship with Zionism was complex), Einstein had the opportunity to become Israel’s second president following the death of Chaim Weizmann in 1952. The pitch came late that year in a letter from Israeli ambassador Abba Eban, who promised the academic icon “freedom to pursue your great scientific work” but also stipulated that the move to Israel would be required. Einstein wrote back that he was “saddened and ashamed” he could not accept, citing his advancing age and an inability to “deal properly with people” as reasons for declining the honor.

Albert Einstein giving a lecture in front of a microphone.
Credit: Keystone/ Hulton Archive via Getty Images

The FBI Kept a Thick File on the Outspoken Physicist

Although he escaped Nazi persecution by fleeing to the United States in 1932, Einstein soon drew attention from government watchdogs of his adopted home country. FBI concern was initially moderate over his anti-war views and friendships with far-left figures such as actor, singer, and activist Paul Robeson, but bureau chief J. Edgar Hoover upped the ante after Einstein criticized the development of the hydrogen bomb during a TV appearance in 1950. The FBI tried — and failed — to obtain permission to wiretap Einstein’s phone and have him deported, but nevertheless monitored his correspondence and investigated his personal and professional relationships. By the time of his death in 1955, Einstein’s FBI file had swollen to a whopping 1,427 pages.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by sema srinouljan/ Shutterstock

If you grew up in the United States, chances are you learned the names of all 50 states at a pretty young age. But while it may be hard to imagine now, the U.S. map could have looked very different. Here are eight states that almost had entirely different names — and the fascinating stories behind them.

Sign Welcome to Nevada in Death Valley.
Credit: Byelikova_Oksana/ iStock

Nevada

Anyone who has traveled around the West has probably come across the name Humboldt. It appears in county names, street signs, rivers, and mountain ranges — and if history had gone a little differently, the state of Nevada would bear this name, too.

The Humboldt name found its way across the region because of the exploits of an explorer and naturalist named Alexander von Humboldt. Born in 1769, Humboldt helped popularize scientific exploration with his book Kosmos. He had a fascination with geology, and he ended up traveling approximately 6,000 miles across Central and South America, exploring the oceans and landscapes. On his travels, Humboldt became the first person to figure out that altitude sickness was caused by lack of oxygen.

However, Humboldt never actually set foot in the western U.S. It was fellow explorer John C. Fremont who chose to name many locations after him in honor of his scientific contributions. When Nevada became a state in 1864, Humboldt was seriously considered as a name — but ultimately, the government chose Nevada, the Spanish word for “snow-covered,” instead.

Welcome to Utah sign.
Credit: turtix/ Shutterstock

Utah

The origins of Utah are closely tied to the history of the Mormons, who initially wanted to name this state Deseret after a name in the Book of Mormon. While the Mormon church began in New York, its members faced persecution there and elsewhere, forcing them to hit the road as they searched for a place to settle.

Leader Brigham Young decided to move the Mormons west to the Salt Lake basin. As they began to settle, Young petitioned Congress to create a new state for them. The initial suggested boundaries of Utah were enormous, spreading across what is now Nevada and stretching all the way to the coastline of Southern California.

Young’s petition was initially declined, at least in part due to the prevailing anti-Mormon bias in American society at the time. However, after the Mormons publicly abandoned polygamy several decades later, they were finally granted statehood in 1896. The resulting state was much smaller than they had hoped, and they didn’t get to name it Deseret. Instead, the government chose the name Utah, after the Ute tribe that lived there.

Highway sign for Interstate road in Maine with map in front of clouds.
Credit: Grafissimo/ iStock

Maine

New Somerset, Yorkshire, Columbus, and Lygonia were all potential names for Maine, but, of course, none of them stuck. In fact, King Charles reportedly hated the name New Somerset so much that he responded adamantly that the region should be known as “the County of Mayne and not by any other name or names whatsoever.”

The name Mayne first appeared in writing as early as 1622, but to this day, no one is quite sure how it morphed into Maine instead — and where the name ultimately came from. The most prevalent belief is that the region was named after the nautical term “main land” to distinguish it from the many islands located in the sea around the coast of Maine. An alternate theory is that it was named after an English village or a French province of the same name. However it came to be, King Charles can rest easy knowing that the name New Somerset never stuck (though Somerset is the name of a county in Maine).

Welcome to Kentucky road sign.
Credit: AndreyKrav/ iStock

Kentucky

We’re all familiar with Kentucky bourbon and the Kentucky Derby, but if history had gone another way, we could have been drinking Transylvania bourbon while watching the Transylvania Derby. The name has nothing to do with Dracula, although T-shirts for Lexington’s Transylvania University are always a popular tourist souvenir.

In 1750, physician and explorer Thomas Walker came across a long-rumored path through the Appalachian Mountains, which he named the Cumberland Gap in honor of the Duke of Cumberland. Nearly 20 years later, explorer Daniel Boone crossed the Gap; Fort Boonesborough was established in 1775.

Around the same time, businessman Richard Henderson set up the Louisa Company to negotiate the purchase of some land in what is now Kentucky. The company soon changed its name to the Transylvania Company, and in 1775, Henderson signed the Treaty of Sycamore Shoals with the Cherokee tribe, granting him a large tract of land. It became known as the colony of Transylvania. The Latin root “sylvania” refers to a wooded area, and “trans” means “across” (as in, across the Appalachians).

Unfortunately, Henderson’s treaty was quickly struck down since Virginia had already laid claim to the land and declared ownership of all rights. Hopes for Transylvania faded, and in 1792, this part of Virginia’s land broke away to become the state of Kentucky. However, no one can quite agree on the origin of the name. Possible translations include “prairie,” “land of tomorrow,” and “river of blood.”

Cancelled stamp from the United States: Greetings from Oklahoma.
Credit: SunChan/ iStock

Oklahoma

Fifty-five Native American tribes live in Oklahoma, and at one time, it was proposed that Oklahoma would be named after one of their most renowned figures — Sequoyah, who introduced reading and writing to the Cherokee language. In 1890, the Oklahoma Organic Act passed in Congress, with the intention of creating a new state. At the time, the land included in the proposal covered two territories: the Oklahoma Territory in the west and the Indian Territory in the east, where multiple tribes had been forcibly moved as a result of the 1830 Indian Removal Act.

The Cherokee, Creek, Seminole, Choctaw, and Chickasaw Nations united in a proposal to seek statehood, which would allow them to maintain control over the lands originally granted to them during the previous treaties and resettlements. The state would be run in accordance with tribal governments, with each tribe having its own county. In 1905, several bills were filed in Congress to request the state of Sequoyah. However, politicians in D.C. refused to even consider the possibility of a Native American-led state. Instead, President Theodore Roosevelt suggested that the two territories be joined, and in 1906 he signed the law that created the state of Oklahoma, a name that comes from Choctaw words meaning “red people.”

West Virginia welcome sign.
Credit: LesPalenik/ Shutterstock

West Virginia

In 1863, West Virginia was formed after taking the unusual step of seceding from the state of Virginia. The move protested Virginia’s secession from the Union in support of the Confederacy. The original proposed name for the new state was Kanawha, although some were worried that this might be confused with the existing county of the same name. Eventually, Kanawha gave way to simply West Virginia.

This wasn’t the region’s first attempt to form a separate state. Benjamin Franklin proposed the State of Vandalia in the 1770s. (The name was in honor of George III’s wife Charlotte, reputedly a descendant of the Vandal people.) The state would have encompassed what is now West Virginia, as well as parts of Maryland, Virginia, Kentucky, and Pennsylvania. However, the Revolutionary War superseded those plans.

In 1775, locals petitioned the Continental Congress to create Westsylvania, comprising roughly the same area as the proposed Vandalia. Both that petition and another in 1783 went ignored. Historians suspect that the Continental Congress did not want to rile up Virginia or Pennsylvania at a time when they needed to show a united front.

Wyoming on a wooden sign.
Credit: MisterStock/ Shutterstock

Wyoming

Wyoming’s name is derived from the Delaware Native American word mecheweamiing, which means “large plains.” But the original Wyoming wasn’t out west — it was the name of a valley in Pennsylvania.

In 1865, when a new territory was being considered in what is now Wyoming, James Ashley, a U.S. representative for Ohio, suggested the name Wyoming. Born in Pennsylvania, he was familiar with the Wyoming Valley and believed that the name would reflect the verdant valleys of the newly expanding American West. But this was before he’d actually visited the region — after doing so, he expressed regret about the name choice, deeming the land not fertile enough to produce crops or sustain a population. However, by this time, the name had already caught on.

When Wyoming finally achieved statehood in 1890, alternatives more fitting to the area’s peoples and history were considered. Potential names included Cheyenne, Yellowstone, Big Horn, Sweetwater, and others. But Wyoming was how most people referred to the land, and so the state retained its historical link with Pennsylvania.

Welcome to colorful Colorado street sign along Interstate I-76.
Credit: miroslav_1/ iStock

Colorado

Before Idaho achieved statehood in 1890, its name was almost used for another state: Colorado (which joined the Union in 1876). While some claim that the name Idaho came from a Kiowa word for “enemy,” historians say that there is no trace of the word before it was mentioned in Congress in 1860. When much of the West was opening up to mining, lobbyist George M. Willing proposed the name for what is now Colorado, claiming it was a Shoshone word. Although this was disputed, few people paid attention at the time. Later, though, an amateur historian who had originally joined Willing in the proposal did a little more research and came to the conclusion that the word was made up. He asked the Senate to change the name, and Colorado (Spanish for “red-colored”) was chosen instead. Despite the misconceptions, the Idaho name stuck around in popular consciousness. When Congress later decided to create another mining territory farther north, the name was chosen for the territory.

Fiona Young-Brown
Writer

Fiona Young-Brown is a Kentucky-based writer and author. Originally from the U.K., she has written for the BBC, Fodor’s, Atlas Obscura, This England, Culture, and other outlets.

Original photo by Anastasios71/ Shutterstock

When studying history, a few big military names come to mind — Julius Caesar, Attila the Hun, Genghis Khan — but none eclipse the conqueror known as Alexander the Great. After becoming king of Macedonia at age 20 in 336 BCE, Alexander completely redrew the world map with his conquests. His empire eventually covered some 2 million square miles, stretching from Greece to Egypt to India, and Alexander proved himself to be one of the greatest military commanders in history — if not the greatest. Although he only sat on the throne for 13 years, his life forever changed the course of history. These six facts highlight his extraordinary, yet brief, life.

Alexander the Great in a chariot pulled by winged griffins.
Credit: Print Collector/ Hulton Archive via Getty Images

Alexander the Great Wasn’t a Self-Made Conqueror

Although Alexander the Great is known for his impressive military achievements, the young conqueror got a huge assist from his father. Known to history as Philip II of Macedon, this king of Macedonia subdued the Greek city-states of Athens and Thebes and established a new federation of Greek states known as the League of Corinth before turning his attention toward Persia. He was assassinated by a royal bodyguard in 336 BCE before he could launch the invasion. His son Alexander, after violently eliminating his rivals, inherited a war machine ready to conquer the known world.

Alexander the Great, as he is taught by Aristotle.
Credit: Photo Researchers/ Archive Photos via Getty Images

Aristotle Was Young Alexander’s Teacher

In 343 BCE, Philip II summoned Aristotle to be the tutor for his son Alexander. The great Greek philosopher taught the young prince for seven years, until Alexander’s ascension to the throne in 336 BCE. Aristotle then returned to Athens, but Alexander brought the great thinker’s works with him on his conquests, and the two remained in touch through letters. Today, historians believe that the relationship between Aristotle and Alexander — along with the latter’s successful conquests — helped spread Aristotelian ideas throughout the conquered regions.

A detail of the Pompeian mosaic known as Alexander the Great Mosaic.
Credit: Marco Cantile/ LightRocket via Getty Images

Alexander the Great Never Lost a Battle

Although Alexander inherited a well-oiled war machine and was taught by arguably the greatest mind of his age, the young king more than earned his eventual fame. During 13 years of war, Alexander the Great never lost a battle, making him the most successful military commander in human history. In fact, Alexander was so impressive that some military academies still teach his tactics to this day. Alexander’s strength as a leader came from the unwavering loyalty of his army, as well as his ability to leverage terrain and gain the advantage over his enemies. Even when facing superior numbers, Alexander’s strong, decisive, and unrelenting leadership always led his forces to victory.

Alexander and Bucephalus, between 1757 and 1760.
Credit: Heritage Images/ Hulton Fine Art Collection via Getty Images

An Ancient City Was Named After His Favorite Horse

Alexander was the greatest general who ever lived, but some of that glory is shared with the horse he rode in on. Described as a black horse with a white star on its forehead, Bucephalus was Alexander’s war horse. One famous account states that the Macedonian prince was able to tame the animal after he realized the creature was afraid of its own shadow. Alexander rode the horse into every battle until its death after the Battle of Hydaspes in 326 BCE. The king subsequently named a town near the battle site, in modern-day Pakistan, Bucephala. Scholars believe that Bucephalus is likely the horse depicted in the Alexander Mosaic, a famous Roman artwork that shows Alexander’s clash with Persian king Darius III.

Painting depicting the death of Alexander the Great at Babylon.
Credit: Universal History Archive/ Universal Images Group via Getty Images

Many Theories Surround the Death of Alexander the Great

Just as Alexander changed the trajectory of history by creating one of the largest empires the world had ever known, so too did his sudden death at the age of 32. There are various accounts of his demise, which describe a days-long paralysis or an agonizing, drawn-out poisoning. Modern theories posit that Alexander was done in by typhoid fever, or perhaps a rare neurological disorder known as Guillain-Barré syndrome, which would explain reports of his paralysis. Many doctors and historians have explored his death, yet mystery still remains about what finally put an end to the greatest warrior the world had ever seen.

Equestrian statue of Alexander the Great in bronze.
Credit: DEA / G. NIMATALLAH/ De Agostini via Getty Images

Alexander’s Vast Empire Did Not Last for Long

While Alexander the Great fashioned an impressive empire, his death sent the region into a tailspin of war and uncertainty for four decades as his generals vied for power. The Hellenistic world eventually settled into four kingdoms, each ruled by one of his former companions or generals as a successor: Lysimachus, Cassander, Ptolemy I, and Seleucus I Nicator. The Ptolemaic Dynasty in Egypt was the last to fall, in 30 BCE, when Cleopatra (an Egyptian pharaoh of Macedonian heritage) died after losing in battle to Octavian, later known as Caesar Augustus of Rome.

Darren Orf
Writer

Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.

Original photo by Pictorial Press Ltd/ Alamy Stock Photo

Although humans often prefer stories with a simple beginning, middle, and end, history doesn’t always line up so nicely. These six moments from the past represent some of the most head-scratching conundrums that still stump scientists, FBI investigators, and even amateur sleuths. Some of them might never be solved, but that doesn’t mean it’s not fun to try.

John White and others as they find a tree into which is carved the word 'Croatoan.'
Credit: Stock Montage/ Archive Photos via Getty Images

What Happened to the “Lost Colony” of Roanoke?

The legend of the Roanoke colony is so enduring because it lies at the heart of the founding of America. Starting in 1584, 23 years before the establishment of the Jamestown colony in nearby Virginia, three English expeditions landed at Roanoke Island, nestled between the Outer Banks and mainland North Carolina, although these initial forays failed to establish a permanent settlement.

In 1587, John White, along with roughly 115 colonists, traveled from England and established a colony on Roanoke Island. White sailed back to England later the same year to get supplies, but upon his return three years afterward (having been delayed by the Spanish Armada), he found Roanoke completely abandoned. There was no sign of foul play. The houses had been dismantled and replaced with a palisade fortification, and the word “Croatoan” had been carved into a post — a reference to the nearby island of Croatoan, now called Hatteras Island, as well as the tribe that lived there.

White tried to travel to the island but storms prevented him from doing so, and he sailed back to England. He died in 1593 unable to return to Roanoke, and no one truly knows what happened to the colonists — no bodies have ever been found. Theories range from the practical (confrontation or assimilation with Native Americans) to the supernatural or extraterrestrial, but it’s unlikely historians will ever know for sure.

View of the Mary Celeste boat.
Credit: Keystone/ Hulton Archive via Getty Images

What Happened Aboard the Mary Celeste?

The world’s oceans have swallowed many ships since the dawn of the Age of Sail in the 16th century, but no story is quite like the curious case of the Mary Celeste. On November 7, 1872, the Mary Celeste set sail for Genoa, Italy, loaded with 1,700 barrels of alcohol as cargo. Fast-forward nearly a month later, and a British merchant vessel named Dei Gratia spotted the ship some 400 miles east of the Azores in the mid-Atlantic. But something was wrong — no one on board the Mary Celeste was responding to the Dei Gratia’s signals.

After boarding, sailors found the ship mostly undamaged, but abandoned. There was little to no sign of struggle, and six months of food onboard. Only the lifeboat and navigational tools were missing. The ship’s captain, his family, and his crew have never been found.

The theories put forward to explain the ship’s abandonment include pirates, an earthquake, and a mutiny. The most colorful theory, however, involves a giant squid attack.

D. B. Cooper, portrait of an American Hijacker.
Credit: StudioB/ Alamy Stock Photo

Who Was D.B. Cooper?

On November 24, 1971, a man calling himself Dan Cooper (later erroneously reported as D.B. Cooper) boarded Northwest Orient Flight 305 traveling from Portland, Oregon, to Seattle, Washington. Described as a mid-40s white man dressed in a business suit, Cooper ordered a bourbon and soda before alerting the stewardess that he had a bomb in his briefcase. Cooper then handed the stewardess a list of demands, saying that he wanted parachutes, a refueling truck, and $200,000 in cash waiting for him when the plane landed in Seattle. He added the phrase, “no funny stuff.”

After an exchange of the flight’s passengers for the money and other goods, the plane took off for Cooper’s requested destination in Mexico City — but he didn’t get far. While flying over southern Washington, Cooper strapped on one of the parachutes he had demanded and jumped out of the plane. Nine years later, a boy found $5,800 in southern Washington with serial numbers that matched the money stolen by Cooper. The FBI has described the case as “one of the longest and most exhaustive investigations in our history,” although it is no longer actively investigating it. Over 100 suspects have been evaluated, but the mysterious criminal has yet to be identified.

The “tree” and “hands” lines in the Nazca Desert, with an observation tower.
Credit: dmitry_islentev/ Shutterstock

What Is the Purpose of the Nazca Lines?

The Nazca Lines are massive geoglyphs — sometimes more than a thousand feet long — carved into the ground some 250 miles south of Lima, Peru. At first glance, these lines might look similar to crop circles, and they can only be fully appreciated from the air. Depicting animals, plants, and various shapes, the Nazca Lines were created by the Nazca people some 2,000 years ago. Archaeologists have studied the lines for 80 years (and are still discovering new geoglyphs), but still don’t know for sure why ancient people created such massive monuments they couldn’t even see. Early theories suggested the lines had some sort of astronomical or calendrical purpose — not unlike Stonehenge — although more recent theories suggest the structures could’ve been tied to irrigation or elaborate religious ceremonies. Whatever the reason, the Nazca Lines remain a mystery etched into the very face of the planet.

The Dutch Room at the Isabella Stewart Gardner Museum.
Credit: Bettmann via Getty Images

Where Are the Gardner Museum Paintings?

Museum heists are common throughout history (and Hollywood), but the ne’er-do-wells are usually captured in the following months, or sometimes years. Unfortunately, the Isabella Stewart Gardner Museum in Boston, Massachusetts, wasn’t so lucky. In the early morning of March 18, 1990, two burglars dressed as police officers subdued the museum’s two security guards and purloined 13 artworks worth over $500 million, including works by Johannes Vermeer, Rembrandt van Rijn, Edgar Degas, Govaert Flinck, and Édouard Manet. By 8:30 a.m., several hours after the heist, the police (the actual police) found the guards handcuffed in the basement.

Four years later, a mysterious letter sent to the museum offered to return the paintings for $2.6 million. Although the museum agreed, a second letter revealed the mysterious author was clearly spooked by FBI involvement, and the deal fell through. A Netflix documentary and a popular podcast have explored the heist, and a $10 million reward has been offered for information leading to the artworks’ recovery, but despite it all, the 13 masterpieces — as well as the two burglars — have yet to be found.

Amelia Earhart in the cockpit of her autogiro.
Credit: Bettmann via Getty Images

What Happened to Amelia Earhart?

In the 1930s, Amelia Earhart wasn’t just one of the most famous pilots in the world — she was arguably the most famous woman in the world. In 1928, she had become the first woman to fly across the Atlantic; in 1932, she became the first woman to make a solo nonstop transcontinental flight, from L.A. to Newark. So it’s no wonder her disappearance on July 2, 1937, while trying to circumnavigate the globe, sent a shockwave through society whose ripples can still be felt. On that fateful summer day in 1937, Earhart and her navigator, Fred Noonan, set out from Lae, New Guinea, flying a Lockheed Model 10 Electra and headed for Howland Island, a Pacific island that measures only 1 square mile.

Although Earhart was in contact with a U.S. Coast Guard ship stationed near the island, the famous pilot never arrived. In her last transmission, she noted her position and that she was running low on fuel. Neither Earhart, her navigator, nor her plane was ever seen again. The leading theory is that Earhart simply crashed into the ocean, but an extensive search of the surrounding area has turned up nothing. Other theories suggest Earhart possibly landed on a nearby island in line with her last coordinates. In 2017, another theory suggested that Earhart survived as a Japanese prisoner, and some argued that she can be seen in a grainy photo taken on the then-Japanese Marshall Islands shortly after the crash (though some experts have poured cold water on the idea). It’s unlikely we’ll ever know what happened to one of history’s most famous aviators, but that won’t keep people from looking for answers.

Interesting Facts
Editorial

Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.

Original photo by Everett Collection/ Shutterstock

Who’s the only president to file a patent? Which VP wrote a hit pop song? And which Civil War general gave us the word “sideburns”? These 25 facts about leaders past and present, selected from around the website, shed some light on lesser-known aspects of some of the most famous figures in history.

Emperor Napoleon I of France (1769 - 1821), known as Bonaparte.
Credit: Hulton Archive via Getty Images

Napoleon Wasn’t That Short

You’ve probably heard the phrase “Napoleon complex,” which refers to the idea that small creatures often act as if they’re much bigger than they really are in an attempt to overcompensate for their lack of stature. Of course, it’s also a reference to Napoleon Bonaparte, the early 19th-century French emperor who wreaked havoc on the European continent for nearly two decades. Yet French sources say Napoleon probably stood at about 5 feet, 5 inches. While that might seem somewhat short by today’s standards, it was only an inch shorter than the average height of a Frenchman at the time. It’s possible he even stood an inch or two taller than this estimate.

So why does history remember Napoleon as such a tiny tyrant? Turns out, it’s actually British propaganda. In 1803, British political cartoonist James Gillray — arguably the most influential caricaturist of his time — introduced the character “Little Boney,” which portrayed Bonaparte as both diminutive and juvenile. In Gillray’s cartoons, Napoleon was often seen throwing tantrums while stomping around in oversized boots, military garb, and bicorne hats. The image stuck, and the sight of a raging, pint-sized Napoleon echoed through history. Before his death in 1821, the twice-exiled Napoleon even admitted that Gillray “did more than all the armies of Europe to bring me down.”

Queen Elizabeth II smiling.
Credit: Stuart C. Wilson/ Getty Images Entertainment via Getty Images

Queen Elizabeth Had a Longtime Body Double

Leaders have historically used body doubles to thwart would-be assassins, but Queen Elizabeth II’s double served a different — and significantly less bloody — purpose. A big part of being the queen of the United Kingdom was simply showing up. Whether opening a hospital or hosting a foreign dignitary, the queen was always busy. A majority of her events required rehearsals, and that’s where Ella Slack came in. Although she doesn’t look like Her Majesty, Slack is about the same height and build, so if an event needed to test camera angles or see if the sun would be in the queen’s eyes, Slack was the person for the task.

Slack got the job while working for the BBC’s events department in the 1980s. She stood in for the queen more than 50 times, including riding in the royal carriage and attending rehearsals for the opening of Parliament. However, Slack didn’t get to enjoy all the comforts of royalty. As a strict rule, she was never allowed to sit on the throne in the House of Lords and instead just “lurked” above it. Slack was never paid for her stand-in efforts, but considered her role “a pleasure and an honor.”

Marie Antoinette, Portrait.
Credit: brandstaetter images/ Hulton Archive via Getty Images

Marie Antoinette Never Said, “Let Them Eat Cake”

Marie Antoinette’s most famous line has echoed for more than 200 years, reportedly adding fuel to the fire of France’s revolution. The only problem is that the French queen’s supposed declaration is a myth — historians don’t think Marie Antoinette ever said, “Let them eat cake,” after being told her subjects had no bread. Researchers point to two main plot holes in the quote’s supposed backstory, the first being its phrasing in English. In fact, the French queen is supposed to have said, “Qu’ils mangent de la brioche,” or “Let them eat brioche,” a reference to a decadent bread made with eggs and butter.

The second problem is that the outline of the tale predates Marie Antoinette’s reign. At least one similar story cropped up around the 16th century in Germany, wherein a noblewoman suggested the poorest citizens in her kingdom eat sweetened bread. However, the first person to print the line about brioche was likely Jean-Jacques Rousseau, a French philosopher who mentioned the story around 1767 in his book Confessions, attributing the comment to a “great princess.” Rousseau’s text was published when Marie Antoinette was still a child in Austria, though it’s possible the story inspired French revolutionaries decades later, and was repeated with the addition of Marie Antoinette’s name as propaganda against the French monarchy. Yet there is no historical evidence that proves the queen ever uttered the phrase.

Pope Francis stands among a general audience.
Credit: TIZIANA FABI/ AFP via Getty Images

Pope Francis Was Once a Bouncer at a Nightclub

However strange it may be to think of popes having day jobs outside the church, some of them did — including Pope Francis, who was once a bouncer at a nightclub. Long before he assumed the papacy in 2013 after his predecessor, Benedict XVI, became the first pope to resign in nearly six centuries, the future leader of the Catholic Church helped keep the peace at a bar in his hometown of Buenos Aires, Argentina. It wasn’t his only odd job, as he also swept floors and worked in a chemical lab. These humble beginnings may help explain why the “people’s pope” is known for his humility and modesty, especially compared to his flashier predecessor.

Civil rights leader Reverend Martin Luther King, Jr. relaxes at home in May 1956.
Credit: Michael Ochs Archives via Getty Images

Martin Luther King Jr.’s Birth Name Was Michael

When Martin Luther King Jr. was born on January 15, 1929, his name wasn’t what we know it to be today. According to MLK’s original birth certificate, filed on April 12, 1934, his given name was Michael King Jr. His switch to a new name had to do with his father, who served as senior pastor at Atlanta’s Ebenezer Baptist Church. In 1934, King Sr. traveled to Germany, where he witnessed the budding rise of hate-fueled Nazism throughout the country. Germany was also where, in 1517, theologian and monk Martin Luther wrote his Ninety-Five Theses, which in turn inspired the Protestant Reformation. That movement held great significance to King Sr., who, upon returning to the states, chose the name “Martin Luther” for both himself and his son. MLK Jr. rose to prominence under this new name, though he didn’t officially amend his birth certificate until July 23, 1957, when the name “Michael” was crossed out and the words “Martin Luther Jr.” were printed next to it.

The Meeting of Antony and Cleopatra.
Credit: Heritage Images/ Hulton Archive via Getty Images

Cleopatra Wasn’t Egyptian

Although she ruled Egypt as pharaoh from 51 BCE to 30 BCE, Cleopatra wasn’t of Egyptian descent. She was instead Greek, specifically Macedonian. Cleopatra was the last of a line of rulers of the Ptolemaic Kingdom, a dynasty founded by her distant ancestor Ptolemy I Soter. While the kings of this dynasty often fashioned their names after its originator, Ptolemaic queens preferred names such as Arsinoë, Berenice, and of course, Cleopatra (she was the seventh of the dynasty to bear the name, hence “Cleopatra VII”).

Although Cleopatra wasn’t ethnically Egyptian, she does hold the honorable distinction of being the only Ptolemaic ruler who could actually speak the Egyptian language — along with half a dozen or so other languages.

King Charles at the throne.
Credit: WPA Pool/ Getty Images News via Getty Images

Charles III Is the Oldest Person to Ascend to the British Throne

Given that his mother and predecessor, Queen Elizabeth II, was the longest-reigning monarch in British history (ruling for over 70 years), it makes sense that Charles III holds the distinction of being both the longest-serving British heir-apparent and the oldest individual to assume the British throne. Having ascended to the role at 73 years, 9 months, and 23 days old, Charles is almost a decade older than the previous record-holder, King William IV, who was 64 years, 10 months, and 3 days old upon becoming the king of England in 1830. On the flip side, Henry VI holds the record of being the youngest individual to assume the British throne — he became king in 1422 at just 8 months and 25 days old.

Vice President Charles Gates Dawes.
Credit: Keystone/ Hulton Archive via Getty Images

U.S. Vice President Charles Dawes Wrote a Hit Pop Song

Not many Americans know the name Charles G. Dawes today, but they should. As one of only three U.S. Vice Presidents to receive the Nobel Peace Prize during their lifetimes (for his work to preserve peace in Europe), he’s earned a place in the history books alongside Theodore Roosevelt and Al Gore. But perhaps even more notably, he’s also the only veep with a No. 1 hit pop song. Dawes was a self-trained pianist and flautist as well as a banker, and in 1911, 14 years before he became Calvin Coolidge’s Vice President, he wrote a short instrumental piece titled “Melody in A Major.” The song received some attention during Dawes’ lifetime, but it wasn’t until 1951 — the year he died — that American songwriter Carl Sigman put lyrics to Dawes’ creation and called it “It’s All in the Game.” Seven years later, Tommy Edwards became the first Black artist to top the Billboard Hot 100 with his doo-wop-influenced rendition of Sigman’s song.

But that wasn’t the end of Dawes’ posthumous music stardom. The song soon transformed into a pop standard, and was covered by a variety of artists across several genres. There’s Nat King Cole’s big band affair (1957), Elton John’s upbeat cover (1970), Van Morrison’s sorrowful take (1979), Isaac Hayes’ soulful remix (1980), and Merle Haggard’s country creation (1984), just to name a few. To this day (and for likely many days to come), Dawes remains the only U.S. President or Vice President to score a hit on the Billboard Hot 100.

Aristotle And Alexander The Great.
Credit: Photo Researchers/ Archive Photos via Getty Images

Alexander the Great Was Tutored by Aristotle

In 343 BCE, Philip II summoned Aristotle to be the tutor for his son Alexander. The great Greek philosopher taught the young prince for seven years, until Alexander’s ascension to the throne in 336 BCE. Aristotle then returned to Athens, but Alexander brought the great thinker’s works with him on his conquests, and the two remained in touch through letters. Today, historians believe that the relationship between Aristotle and Alexander — along with the latter’s successful conquests — helped spread Aristotelian ideas throughout the conquered regions.

Portrait of Emperor Peter I the Great.
Credit: Heritage Images/ Hulton Archive via Getty Images

Russian Czar Peter the Great Established a Tax on Beards

A few years into his reign, Russian Czar Peter I (aka “Peter the Great”) decided to study abroad. Worried that Russia was lagging behind in key technological areas, especially when it came to shipbuilding, Peter traveled incognito from 1697 to 1698 to various European countries, including Prussia, Holland, and England, in an effort to modernize his own nation. Afterward, with his newly learned shipbuilding know-how, he created Russia’s first navy.

But it wasn’t just maritime skills Peter learned on his “Grand Embassy.” He also picked up a few fashion and grooming ideas — including a particular interest in the freshly shaven chins of most Western European men. Determined to integrate Russia into the increasingly powerful club of European countries, Peter established (around 1705) a tax that fiscally punished anyone sporting a beard. The tax was progressive, with the well-to-do shelling out more for their facial adornments than the peasantry; nobility and merchants could pay as much as 100 rubles a year, while peasants might pay one kopek (1/100 of a ruble). Yet the tax was almost universally reviled — and even helped spark a few riots. The biggest opponent of the tax was the Russian Orthodox Church, which regarded clean-shaven faces as sinful. Despite this stiff opposition, Peter I stuck with the tax and was even known to shave off the beards of his guests at parties, much to their horror.

First Lady Eleanor Roosevelt types as newswomen watch.
Credit: Bettmann via Getty Images

Eleanor Roosevelt Wrote a Newspaper Column for Nearly 30 Years

Starting at the very end of 1935 and continuing until her death in 1962, Eleanor Roosevelt kept a regular, nationally syndicated newspaper column called “My Day.” Eventually, it appeared in 90 different U.S. newspapers, detailing both her actions of the day and causes she supported — including ones that perhaps diverged a little from FDR’s views. After her husband’s death, she spoke even more freely about her viewpoints, and chose to keep advocating through her writing instead of running for office herself. Some newspapers dropped her column after she advocated for the election of Adlai Stevenson II in his run against Dwight D. Eisenhower in 1956, leading United Features Syndicate to instruct her to limit her support for candidates, which she did not do. For the majority of the run, Eleanor published six columns a week; only after her health began to decline in the last couple of years of her life did she cut that down to three.

General George Washington rallying his troops at the Battle of Princeton.
Credit: Buyenlarge/ Archive Photos via Getty Images

George Washington Lost More Battles Than He Won

General George Washington embodies the phrase “losing the battle but winning the war,” because during the American Revolution, he lost more battles than he won. Despite having fought alongside the British army during the French and Indian War, Washington had little experience fielding a large fighting force, and the Continental Army was filled with soldiers who were far from professional fighters. However, Washington’s resilience, determination, and long-term strategy eventually won the day. According to Washington’s aide Alexander Hamilton, the plan was simple: “Our hopes are not placed in any particular city, or spot of ground, but in preserving a good army … to take advantage of favorable opportunities, and waste and defeat the enemy by piecemeal.” Washington, also aided by competent generals such as Nathanael Greene and assisted by the French navy, decisively ended British ambitions in the colonies at the Battle of Yorktown in 1781.

Trooping the Colour, the Queen's birthday ceremony at Windsor Castle.
Credit: Pool/Samir Hussein/ WireImage via Getty Images

Queen Elizabeth Celebrated Two Birthdays

While the queen’s actual birthday fell on April 21, she also had a second “official” birthday in the summer. It was marked with a ceremony called Trooping the Colour, a practice that has existed for over 260 years to ensure that British sovereigns whose birthdays fall during colder months also have a ceremony that happens during nicer weather. More than 1,400 soldiers, 200 horses, and 400 musicians participated in the military parade, which usually happened in June. (The “colour” in the ceremony’s name refers to the regimental flags of the British army; “trooping” refers to officers marching up and down while carrying them.) The public turned out in droves to take part, and members of the royal family also joined the procession on horseback or in carriages.

Portrait Of Louis Antoine of France.
Credit: Heritage Images/ Hulton Archive via Getty Images

Louis XIX Had the Shortest Reign in History

King Louis XIX of France holds an unfortunate Guinness World Record: shortest reign of a monarch in history. He reigned over France for a mere 20 minutes in 1830 following the abdication of his father, Charles X, before he himself stepped down as part of the July Revolution. (Legitimists — supporters of the Bourbon dynasty — didn’t accept this, however, and considered him the rightful king for the rest of his life.)

Some consider Louis XIX’s record to be a shared one, however. Luís Filipe, Prince Royal of Portugal, was fatally wounded in the same attack that killed his father, King Carlos I, on February 1, 1908, but survived 20 minutes longer. The 20-year-old was technically king for those few minutes, but never formally declared ruler, and his younger brother Manuel II became the last king of Portugal on that fateful day instead. His reign wasn’t especially long, either: Portugal became a republic as a result of the October 5, 1910, revolution and Manuel spent the remainder of his life exiled in England.

John Adams and Thomas Jefferson.
Credit: UniversalImagesGroup via Getty Images

John Adams and Thomas Jefferson Died on the Same Day

John Adams and Thomas Jefferson, bitter political rivals and, at times, close friends, died on the very same day — July 4, 1826, 50 years after signing the Declaration of Independence. The two were among the last survivors of the original revolutionaries who helped forge a new nation after breaking with the British Empire. During their presidencies, the men diverged on policy and became leaders of opposing political parties, but at the urging of another founding father, Benjamin Rush, around 1812, Adams and Jefferson began a correspondence that lasted the rest of their lives. On his deathbed at the age of 90, Adams reportedly uttered the last words “Jefferson still lives,” but he was mistaken — Jefferson had died five hours earlier at Monticello, his Virginia estate.

Pedro Lascuráin, president of Mexico.
Credit: GL Archive/ Alamy Stock Photo

Pedro Lascuráin Was President of Mexico for Only 45 Minutes

Mexico’s 38th president, Pedro Lascuráin, set an unfortunate record by being in office for a mere 45 minutes on February 19, 1913, following a coup that overthrew his predecessor, Francisco I. Madero. As foreign secretary, Lascuráin was third in the line of succession following the vice president and attorney general; because both of those men had likewise been ousted, Lascuráin was appointed president for just enough time to make General Victoriano Huerta — the architect of the coup — interior secretary. After that, he immediately resigned so that Huerta could replace him. This odd maneuvering was Huerta’s idea, as he believed it would make his rise to power look more legitimate in the eyes of Mexican citizens.

Portrait of Ambrose Everett Burnside.
Credit: Bettmann via Getty Images

“Sideburns” Come From Union General Ambrose Burnside

Sideburns have been found on the faces of several famous figures, from Alexander the Great to Charles Darwin, but it wasn’t until the U.S. Civil War (1861–1865) that the term “sideburns” came into being, thanks to a particularly hirsute Union general. Ambrose Burnside wasn’t much of a general: At the Battle of Antietam, his ineffective command meant his soldiers struggled to take a stone bridge (now called Burnside Bridge), turning what could’ve been a Union victory into a draw. At Fredericksburg, things went from bad to worse, as Burnside led several failed assaults against Robert E. Lee’s forces. But what Burnside might’ve lacked in military acumen, he made up for with his luxurious facial hair, which connected his side-whiskers to his mustache (his chin remained clean-shaven). After the war, many men copied the general’s look, and these facial facsimiles were called “burnsides.” Over the years, the term eventually flipped into its modern spelling.

Boat invention patent by Lincoln.
Credit: Chronicle/ Alamy Stock Photo

Abraham Lincoln Was the Only President to Receive a Patent

Abraham Lincoln had a lifelong fascination with machinery and often tinkered with mechanical devices and tools. He also spent much time traveling and working on the riverboats that sailed along the Mississippi River and other waterways, where vessels were prone to running aground in shallow waters. In 1848, while Lincoln was serving his sole term as a U.S. congressman, a boat he was traveling home to Illinois on got stuck on a sandbar — forcing the captain to empty the cargo barrels on board so he could use them to buoy and lift the ship back onto the water.

The incident sparked a new idea in Lincoln, who spent the congressional break working on a design for inflatable bellows that could be attached to a ship’s hull to lift it over sandbars or other impediments. He had a scale model created and submitted the idea to the U.S. Patent Office. In May 1849, he received U.S. Patent No. 6469, although his flotation system was never put to practical use.

Prince of Wales at Cambridge in his Trinity College gown.
Credit: PA Images via Getty Images

Charles III Is the First British Monarch to Hold a University Degree

While King Charles was born into a life of luxury with every resource at his fingertips, he decided to forgo the traditional at-home tutoring for royals and seek out higher education. In 1970, Charles received a bachelor’s degree from Trinity College at Cambridge University, becoming the first heir to the British crown to earn a university degree. At school, Charles studied anthropology, archaeology, and history, an impressive range of topics to balance alongside his royal duties as Prince of Wales — a role he officially took on at an investiture ceremony in 1969 at age 20.

After graduating, Charles enlisted in the Royal Navy and Royal Air Force, a decision he made to follow in the footsteps of his father. While serving in the armed forces from 1971 until 1976, Charles also earned a Master of Arts degree from Cambridge in 1975.

Mahatma (“Great Soul”) Gandhi, Indian nationalist leader.
Credit: Print Collector/ Hulton Archive via Getty Images

Gandhi Never Said, “Be the Change You Wish to See in the World”

It’s a lovely saying, but it wasn’t Gandhi. He did say something similar: “If we could change ourselves, the tendencies in the world would also change. As a man changes his own nature, so does the attitude of the world change towards him. … We need not wait to see what others do.” According to Quote Investigator, the more succinct version of the phrase didn’t start appearing until the mid-1970s — decades after Gandhi’s death.

Coretta Scott King and Children Greeting Martin Luther King, Jr.
Credit: Bettmann via Getty Images

Martin Luther King Jr. Was a Huge Fan of “Star Trek”

Martin Luther King Jr. was not only a huge fan of Star Trek but a pivotal figure in the career trajectory of one of the show’s most beloved actors. Star Trek was the only program King allowed his children to stay up late to watch, in large part because of the character Uhura, played by African American actress Nichelle Nichols. King viewed Nichols’ role as one of the few examples of equality on television — a belief that he expressed to Nichols upon meeting her at a fundraiser for the NAACP. After the show’s first season ended in 1967, Nichols had been leaning toward departing Star Trek for a role on Broadway. In the end, however, she was swayed by King’s passionate words about her power and influence as a role model for Black women, and decided to remain a member of the cast.

Portrait of Elbridge Gerry.
Credit: Bettmann via Getty Images

James Madison’s VP Elbridge Gerry Gave Us the Word “Gerrymandering”

Elbridge Gerry’s political chicanery as the governor of Massachusetts was so legendary, he gave his name to the practice of redistricting with political aims: gerrymandering.

The word was coined after Gerry’s party drew some absurd state Senate districts in order to elect more Democratic-Republicans, at the expense of their rival party, the Federalists. Redistricting with political aims wasn’t a new practice, but this was a particularly brazen example — one district resembled a salamander — and after Gerry signed off on the bill, critics dubbed it a “gerry-mander.”

Jackie Kennedy listening to her husband speak.
Credit: Bettmann via Getty Images

Jackie Kennedy Coined the Term “Camelot” to Refer to the Kennedy Administration

Shortly after her husband’s funeral, Jackie Kennedy welcomed Life magazine reporter Theodore H. White to the family compound in Hyannis Port, Massachusetts, in an effort to ensure JFK’s lasting legacy. During the interview, she coined a term that’s now synonymous with her husband’s administration: “Camelot,” a reference to both Arthurian legend and JFK’s favorite Broadway musical. In likening his presidency to the storied court, Jackie sought to establish her husband as an almost mythical figure. Quoting the musical, she stated, “Don’t let it be forgot, that once there was a spot, for one brief, shining moment that was known as Camelot.” She went on to add that while there would be other Presidents, there would “never be another Camelot again.” Editors at Life reportedly objected to the Camelot theme running throughout the interview, but Jackie was insistent on keeping it and even added her own edits to White’s notes.

Alexander the Great Mosaic, depicting the Battle of Issus.
Credit: Marco Cantile/ LightRocket via Getty Images

Alexander the Great Never Lost a Battle

Although Alexander inherited a well-oiled war machine and was taught by arguably the greatest mind of his age, the young king more than earned his eventual fame. During 13 years of war, Alexander the Great never lost a battle, making him the most successful military commander in human history. In fact, Alexander was so impressive that some military academies still teach his tactics to this day. Alexander’s strength as a leader came from the unwavering loyalty of his army, as well as his ability to leverage terrain and gain the advantage over his enemies. Even when facing superior numbers, Alexander’s strong, decisive, and unrelenting leadership always led his forces to victory.

Rustic welcome sign to Talkeetna, Alaska.
Credit: John Greim/ LightRocket via Getty Images

A Cat Named Stubbs Was Honorary Mayor of a Town in Alaska

Most politicians are at least somewhat divisive. One notable exception: Stubbs, a cat who served as the honorary mayor of Talkeetna, Alaska, for more than 18 scandal-free years. He first entered office around 1998, when the town (technically an unincorporated census-designated place) and its 900 residents chose him as their leader. (Rumors that Stubbs was officially elected as a write-in candidate are incorrect, but locals loved their feline “mayor” nonetheless.) Over the course of the next two decades, Stubbs became a popular tourist attraction and performed such mayoral duties as, in the words of Smithsonian Magazine, “wandering around the town, drinking catnip-laced water from margarita glasses, and of course, sleeping a lot.” Take note, human politicians.


Original photo by gcafotografia/ Shutterstock

From Eve’s metaphorical apple to the deadly flowers in folktales, poisonous plants have shaped human history and belief for a very long time. Somewhat alarmingly, many of the world’s most lethal plants are also widely grown in gardens and flower beds, made into jewelry, or cultivated for medicine. Here are a few plants you should think twice about getting near.

View of the manchineel, the deadliest tree in the world.
Credit: Karuna Eberl/ Shutterstock

Manchineel Tree (Hippomane mancinella)

The manchineel is officially the world’s deadliest tree. Found amid mangrove forests in the Caribbean, Central America, and the Florida Keys, the manchineel produces sap that can cause severe blisters and blindness if it comes into contact with one’s skin or eyes. Rainwater dripping from its leaves, or smoke wafting from its burning wood, can induce sores and pain in mucous membranes. Its applelike fruit is also extremely toxic.

An account published in the medical journal BMJ described a firsthand experience of accidentally eating manchineel fruit: “We noticed a strange peppery feeling in our mouths, which gradually progressed to a burning, tearing sensation and tightness of the throat. The symptoms worsened over a couple of hours until we could barely swallow solid food because of the excruciating pain … Recounting our experience to the locals elicited frank horror and incredulity, such was the fruit’s poisonous reputation.”

Close-up of a rosary pea plant.
Credit: Caner Cakir/ Shutterstock

Rosary Pea (Abrus precatorius)

The rosary pea gets its name from the frequent use of its dried berries as rosary beads, as well as in jewelry and even children’s toys. Though native to tropical Asia, the woody vine has been cultivated as an ornamental plant in Florida and Hawaii and is now considered invasive. Its shiny red seeds, uniform in size and each crowned with a black dot, contain a deadly toxin called abrin; chewing and swallowing even a single seed can be fatal. Symptoms of Abrus precatorius poisoning include bloody diarrhea, internal bleeding, nausea, severe vomiting, and abdominal pain.

Ricinus communis, the castor bean or castor oil plant close-up.
Credit: Ksenia Lada/ Shutterstock

Castor Bean (Ricinus communis)

Another highly toxic plant grown as an ornamental, castor bean features star-shaped leaves in greenish-bronze hues and small scarlet flowers on upright stalks, making it an exotic addition to gardens — despite being the source of ricin, one of the world’s deadliest poisons. All parts of the plant are toxic: Touching its leaves can cause painful skin rashes, while chewing the seeds or inhaling powdered seeds releases the poison. Numerous countries have used (or attempted to use) ricin in espionage or warfare. The U.S. military investigated its efficacy as a chemical weapon during both World Wars, and in 1978, an assassin — possibly a Soviet operative — injected a Bulgarian dissident writer with a ricin pellet on London’s Waterloo Bridge. The victim died four days later.

White snakeroot plant.
Credit: Nahhana/ Shutterstock

White Snakeroot (Ageratina altissima)

White snakeroot, a native wildflower with crowns of small white blooms on leafy stems, is found in meadows, backyards, pastures, and wooded areas across the eastern United States. It can grow almost anywhere, which makes it a particularly dangerous plant to humans. Livestock such as cows and goats can eat the poisonous leaves and root systems of the plant and pass a toxin called tremetol to the humans who drink their milk. Among early settlers, tremetol poisoning was known as milk sickness, the trembles, the staggers, or puking fever, among other colorful colloquialisms. Its symptoms included muscle tremors, weakness, cardiomyopathy, and difficulty breathing, and it was often fatal. Nancy Hanks Lincoln, Abraham Lincoln’s mother, died of milk sickness in 1818.

Conium maculatum/ poison hemlock white flowers blooming in spring.
Credit: jessicahyde/ Shutterstock

Poison Hemlock (Conium maculatum)

Aptly named poison hemlock — which is actually part of the carrot family — closely resembles wild parsnip and parsley, so it may not be surprising that most cases of poisoning occur when foragers mistake the toxic plant for one of its edible look-alikes. Its purple-spotted stems, leaves, seeds, and clusters of small white flowers contain a toxin called coniine that, when eaten, affects the nervous system and causes respiratory paralysis, eventually leading to death. It was used to execute prisoners in ancient Greece, the most famous victim being the philosopher Socrates in 399 BCE. Though it’s native to Europe, North Africa, and western Asia, poison hemlock was imported to the U.S. in the 19th century as an ornamental “fern” and is now found growing wild across the country. It is often confused with its even deadlier cousins, western water hemlock (Cicuta douglasii) and spotted water hemlock (Cicuta maculata), both of which are native to North America.

Atropa belladonna in autumn season.
Credit: Simon Groewe/ Shutterstock

Deadly Nightshade (Atropa belladonna)

The name of this dangerous plant’s genus comes from Atropos, one of the three Fates in Greek mythology, who presided over death. Deadly nightshade contains tropane alkaloids that, if consumed, disrupt the body’s ability to regulate heart rate, blood pressure, digestion, and other involuntary processes, resulting in convulsions and death. This quality made it a handy tool for offing Roman emperors and Macbeth’s enemies. The effects of deadly nightshade poisoning are said to include sensations of flying, suggesting that the plant was the source of alleged witches’ “flying ointment” in early modern European folklore. Ironically, deadly nightshade is also the source of atropine, a drug used to treat low heart rate and even counteract the effects of eating toxic mushrooms.

Lilac opium poppy flowers blooming in a sunny field.
Credit: Alex Manders/ Shutterstock

Opium Poppy (Papaver somniferum)

Papaver somniferum produces opium, the source of morphine, heroin, oxycodone, codeine, and other narcotics — in other words, some of the most addictive substances on Earth. Opium poppies have been cultivated for medicinal purposes since at least 2100 BCE, according to a Sumerian clay tablet that is believed to contain the world’s oldest prescriptions, including one for opium. The Greek physicians Dioscorides and Galen, as well as the Muslim physician Avicenna, also wrote about opium’s therapeutic qualities. Opium-derived laudanum was the Victorian remedy of choice for calming crying babies, soothing headaches, and overcoming insomnia; in the U.K., its widespread use fueled addiction crises dubbed “morphinomania” until laws restricted the sale of opium in the early 20th century. Today, it’s not a good idea to grow opium poppies even for ornamental purposes: In the early 1990s, the Drug Enforcement Administration pressured Monticello’s gardeners to cease cultivation of Thomas Jefferson’s historic poppy plots.


Original photo by beavera/ iStock

The meteorological conditions we refer to as the weather can be the source of some pretty serious myths and misconceptions. Some are simply funny superstitions (like using onions to predict the severity of the coming winter). Others, once dismissed as hoaxes or hallucinations (like ball lightning), have since been shown to be real phenomena. Here are eight common myths about the weather — including some that actually have a grain of truth to them.

Lightning strike and high voltage network.
Credit: Dan Martin Maghiar/ Shutterstock

Lightning Never Strikes the Same Place Twice

While everyone wishes it were true, this weather “fact” is false. Unfortunately, lightning can strike in the same location repeatedly — even during the same thunderstorm. This is especially true when it comes to tall objects, like TV antennas. For example, the Empire State Building is struck by lightning about 25 times per year.

Other common lightning myths include the idea that trees can provide safe shelter (your best bet is always to go indoors) and that touching a lightning victim might get you electrocuted. Fortunately, the human body does not store electricity — which means you can perform first aid on someone struck by lightning without that particular fear.

View of a waterspout in the ocean.
Credit: Aramiu/ Shutterstock

Waterspouts Turn Into Tornadoes on Land

This one is both true and false. That’s because there are actually two types of waterspouts — those thin, rapidly swirling columns of air above water, sometimes seen in the Gulf of Mexico, Gulf Stream, and elsewhere.

The first is a “fair weather waterspout.” These form from the water up, move very little, and are typically close to fully developed by the time they’re visible. If they do move to land, they generally dissipate very quickly. “Tornadic waterspouts,” on the other hand, are exactly what their name suggests: tornadoes that form over water, or move from land to water. Associated with severe thunderstorms, tornadic waterspouts can produce large hail and dangerous lightning. If they move to dry land, the funnel will pick up dirt and debris, just as a land-formed tornado would.

Woman talking on the phone staring at the thunderstorm clouds.
Credit: Kikujiarm/ Shutterstock

It’s Not Safe to Use Your Cellphone During a Thunderstorm

It’s not safe to use a landline when thunder and lightning are making the skies dramatic, just like it’s not safe to use any other appliances that are plugged in. But an (unplugged) cellphone should be fine, so long as you’re safely indoors. This myth may have arisen from situations in which people were struck by lightning and their cellphones melted, but it’s not because their cellphone “attracted” the lightning in any way. Of course, plugging in your cellphone (or laptop) to charge may present a danger.

Groundhog emerging from a snow covered den.
Credit: Brian E Kushner/ Shutterstock

Groundhogs Can Predict the Weather

The Groundhog Day tradition continues every February 2, when the members of the Punxsutawney Groundhog Club trek to Gobbler’s Knob, seeking weather wisdom from a series of woodchucks, all named “Punxsutawney Phil.” If Phil emerges from his burrow and sees his shadow (in bright sunshine), supposedly winter will hang around for six more weeks. If the day is overcast: Yay, early spring! The whole event is based on old Celtic superstitions, though, and Phil’s “predictions” are only correct about 40% of the time — but at least he’s no longer eaten after making the call.

View of a green sky filled with dark clouds.
Credit: Andrey tiyk/ Shutterstock

A Green Sky Means a Tornado Is Coming

It’s a pretty rare event, but deep storm clouds filled with raindrops later in the day may scatter light in a way that makes the sky look green. Such storm clouds likely mean severe weather — thunder, lightning, hail, or even a tornado — is on its way, but it’s no guarantee of a twister per se. One thing’s for sure: It’s definitely not a sign that frogs or grasshoppers have been sucked into the sky by the storm, as people used to think.

Cars driving on a highway in a pouring rain with a thunderstorm.
Credit: Eshma/ iStock

Car Tires Protect Us From Lightning

It isn’t the rubber tires that keep a person inside a car safe from a direct lightning strike; it’s the metal cage of the vehicle, which channels the strike’s enormous electrical charge (a lightning flash can involve some 300 million volts) around the occupants and into the ground. If you can’t get to shelter during a thunderstorm and must be in your (hard-topped) car, keep the windows rolled up and your hands off the car’s exterior frame.

Spider spinning web in nature.
Credit: Just dance/ Shutterstock

Spiders Spin Webs, Dry Weather Ahead

This saying has some truth to it. Spider webs are sensitive to humidity, absorbing moisture that can eventually cause their delicate strands to break. For this reason, most spiders will remain in place when rain is imminent. So it stands to reason (at least according to folklore) that if spiders are busily spinning their webs, they may know something that we don’t. In other words: Prepare for a beautiful day! (It’s also true that most spiders seek out damp places, so if you don’t want them taking up residence in your house, a dry home is less hospitable.)

Destroyed House after earthquake in Italy.
Credit: SimonSkafar/ iStock

Doors Are the Best Place to Be in an Earthquake

It’s not “weather” in the sense of atmospheric conditions, but earthquakes can be a pretty dramatic show of the Earth’s forces. Many of us learned this “tip” in school. In reality, though, the advice mainly applied to older, unreinforced structures. Today, doorways generally aren’t stronger than other parts of the house, and the door itself may hit you in an earthquake. You’re far safer underneath a table or desk, particularly if it’s away from a window. (The CDC offers additional earthquake safety tips.)

Whether created through purposeful experimentation or as the result of a happy accident, inventions have transformed our world. And they’re a rich source of fascinating facts, too — for example, did you know that the inventor of the stop sign couldn’t drive, or that a dentist helped create the cotton candy machine? Here are some of our greatest invention stories from around the site.

Credit: Photo courtesy Library of Congress via Getty Images

Mark Twain Invented Bra Clasps

The long-term uses for a product do not always materialize during the inventor’s lifetime. Such was the case with Mark Twain — the celebrated writer born Samuel Clemens — who filed a patent for a clothing accessory when he was 35 years old. Twain found wearing suspenders uncomfortable, so he came up with a device he called an “Improvement in Adjustable and Detachable Straps for Garments.” What he envisioned was a versatile two-piece strap — preferably elastic — that fastened with hooks. The hooks were inserted into a series of rows of small holes, chosen depending on how snug (or loose) the wearer wanted their garment. Twain thought this simple, gender-neutral tool could customize the fit of a wearer’s vests, shirts, pantaloons, or stays, a corset-like garment that women wore under dresses. However, thanks to changing fashions, his garment straps were not produced for several decades. In 1914, four years after Twain’s death and long after his hard-won patent expired, Mary Phelps Jacob patented the first modern bra, fashioned from handkerchiefs and ribbon. When she sold her patent to the Warner Brothers Corset Company, the company added Twain’s straps to the back to keep the garment in place.

Credit: bhofack2/ iStock

Chinese Takeout Containers Were Invented in America

In the U.S., plenty of Chinese restaurant fare features produce that didn’t traditionally grow in China, such as broccoli. Thus it shouldn’t be terribly surprising that Americans also took liberties with how Chinese food is packaged. While plastic containers are used to hold delivery and takeout dishes in China, diners in the U.S. prefer a folded, six-sided box with a slim wire handle. Chicago inventor Frederick Weeks Wilcox patented this “paper pail” on November 13, 1894. Borrowing from Japanese origami, Wilcox elected to make each pail from a single piece of paper. This decision eventually proved critical in the transportation of Chinese cuisine, lessening the likelihood of leaks and allowing steam from hot foods to escape through the top folds. During the 1970s, a graphic designer at the Riegel Paper Corporation (the successor to carton maker Bloomer Brothers) embellished the boxes to include a pagoda and the words “Thank You” and “Enjoy” — all in red, a color that represents luck in China. The Riegel Paper Corporation evolved into Fold-Pak, the world’s top producer of takeout containers, which assembles 300 million cartons per year. Composed of solid-bleached-sulfate paperboard and boasting an interior polycoating, each food carrier expands into a handy plate if you remove the wire handle.

Credit: Wachiwit/ iStock

Bubble Wrap Was Invented as Wallpaper

Bubble Wrap is one of the 20th century’s most versatile — and dare we say most beloved — inventions. The pliable, air-pocketed sheets have been used for decades to insulate pipes, protect fragile items, and even make dresses. And that’s not to mention the fascination some people have with popping the bubbles. But when it was first created in 1957 in New Jersey, inventors Al Fielding and Marc Chavannes had a different vision in mind for their ingenious padding: home decor. The pioneering duo hoped their creation — which trapped air between two shower curtains run through a heat-sealing machine — would serve as a textured wallpaper marketed to a younger generation with “modern” taste. The initial idea was a flop, however, and it took another invention of the time — IBM’s 1401 model computer  — to seal Bubble Wrap’s fate as a packing material.

Under the company name Sealed Air, Fielding and Chavannes approached IBM about using the air-filled plastic in shipping containers, replacing traditional box-fillers like newspaper, straw, and horsehair. After passing the test of transporting delicate electronics, Sealed Air became a shipping industry standard. Over time, Fielding and Chavannes were granted six patents related to Bubble Wrap manufacturing, and Sealed Air continues to create new versions of the remarkable wrap — including a cheaper, unpoppable version that’s popular with cost-minded shippers (but not so much with bubble-popping enthusiasts).

Credit: SteveLuker/ iStock

The Inventor of the Stop Sign Never Learned How To Drive

Few people have had a larger or more positive impact on the way we drive than William Phelps Eno, sometimes called the “father of traffic safety.” The New York City-born Eno — who invented the stop sign around the dawn of the 20th century — once traced the inspiration for his career to a horse-drawn-carriage traffic jam he experienced as a child in Manhattan in 1867. “There were only about a dozen horses and carriages involved, and all that was needed was a little order to keep the traffic moving,” he later wrote. “Yet nobody knew exactly what to do; neither the drivers nor the police knew anything about the control of traffic.”

After his father’s death in 1898 left him with a multimillion-dollar inheritance, Eno devoted himself to creating a field that didn’t otherwise exist: traffic management. He developed the first traffic plans for New York, Paris, and London. In 1921, he founded the Washington, D.C.-based Eno Center for Transportation, a research foundation focused on multimodal transportation issues that still exists today. One thing Eno didn’t do, however, was learn how to drive. Perhaps because he had such extensive knowledge of automobiles, Eno distrusted them and preferred riding horses. He died in Connecticut at the age of 86 in 1945, having never driven a car.

Credit: byryo/ iStock

Love Seats Were Originally Designed to Fit Women’s Dresses, Not Couples

The two-seater upholstered benches we associate with cozy couples were initially crafted with another duo in mind: a woman and her dress. Fashionable attire in 18th-century Europe had reached voluminous proportions — panniers (a type of hooped undergarment) were all the rage, creating a wide-hipped silhouette that occasionally required wearers to pass through doors sideways. Upper-class women with funds to spare on trending styles adopted billowing silhouettes that often caused an exhausting situation: the inability to sit down comfortably (or at all). Ever astute, furniture makers of the period caught on to the need for upsized seats that would allow women with such large gowns a moment of respite during social calls.

As the 1800s rolled around, so did new dress trends. Women began shedding heavy layers of hoops and skirts for a slimmed-down silhouette that suddenly made small settees spacious. The midsize seats could now fit a conversation companion. When sweethearts began sitting side by side, the bench seats were renamed “love seats,” indicative of how courting couples could sit together for a (relatively) private conversation in public. The seat’s new use rocketed it to popularity, with some featuring frames that physically divided young paramours. While the small sofas no longer act as upholstered chaperones, love seats are just as popular today — but mostly because they fit well in small homes and apartments.

Credit: miodrag ignjatovic/ iStock

Canned Food Was Invented Before the Can Opener

On January 5, 1858, Ezra J. Warner of Connecticut invented the can opener. The device was a long time coming: Frenchman Nicolas Appert had developed the canning process in the early 1800s in response to a 12,000-franc prize the French government offered to anyone who could come up with a practical method of preserving food for Napoleon’s army. Appert devised a process for sterilizing food by half-cooking it, storing it in glass bottles, and immersing the bottles in boiling water, and he claimed the award in 1810. Later the same year, Englishman Peter Durand received the first patent for preserving food in actual tin cans — which is to say, canned food predates the can opener by nearly half a century.

Before Warner’s invention, cans were opened with a hammer and chisel — a far more time-consuming approach than the gadgets we’re used to. Warner’s tool (employed by soldiers during the Civil War) wasn’t a perfect replacement, however: It used a series of blades to puncture and then saw off the top of a can, leaving a dangerously jagged edge. As for the hand-crank can opener most commonly used today, that wasn’t invented until 1925.

Credit: Scott Raichilson/ iStock

Benjamin Franklin Invented the Lightning Rod

In Benjamin Franklin’s time — and for centuries before — lightning was a fear-inspiring phenomenon, known for starting fires, destroying buildings, and injuring people and livestock. Because little was known about how lightning worked, some people undertook unusual preventative measures against it, like ringing church bells to avert lightning strikes (even though that sent bell ringers dangerously high into steeples during storms). Perhaps that was why Franklin, the prolific inventor and founding father, was so captivated by lightning and devoted so much of his scientific study to experimenting with electricity. In 1752, Franklin undertook his now-storied kite exercise during a storm, correctly surmising that lightning must be electricity and that the mysterious energy was attracted to metal (though some historians have questioned whether the experiment actually ever happened).

With this concept in mind, Franklin designed the Franklin Rod, crafted from a pointed iron stake. Heralded as a new, lifesaving invention that could guide the electrical current from lightning into the ground, lightning rods sprang up atop roofs and church steeples throughout the American colonies and Britain, and some were even anchored to ship masts to prevent lightning strikes at sea. Initially, some clergy were unwelcoming of the protective devices, believing lightning rods interfered with the will of the heavens; Franklin brushed off the criticism and continued his exploration of electricity, even developing some of the language — like the word “battery” — we use to talk about the force today.

Credit: Colorsandia/ iStock

A Dentist Helped Invent the Cotton Candy Machine

When folks learn that one of cotton candy’s creators cleaned teeth for a living, jaws inevitably drop. Born in 1860, dentist William J. Morrison became president of the Tennessee State Dental Association in 1894. But Morrison was something of a polymath and a dabbler, and his varied interests also included writing children’s books and designing scientific processes: He patented methods for both turning cottonseed oil into a lard substitute and purifying Nashville’s public drinking water. In 1897, Morrison and his fellow Nashvillian — confectioner John C. Wharton — collaborated on an “electric candy machine,” which received a patent within two years. Their device melted sugar into a whirling central chamber and then used air to push the sugar through a screen into a metal bowl, where wisps of the treat accumulated. Morrison and Wharton debuted their snack, “fairy floss,” at the Louisiana Purchase Exposition of 1904 (better known as the St. Louis World’s Fair). Over the seven-month event, at least 65,000 people purchased a wooden box of the stuff, netting Morrison and Wharton the modern equivalent of more than $500,000.

Credit: Patti McConville/ Alamy Stock Photo

Cool Whip, Pop Rocks, and Tang Were Invented by the Same Person

Growing up in Minnesota, William A. Mitchell spent his teenage years as a farmhand and carpenter, working to fund his college tuition. It took a few years for the future inventor to venture into food production after graduation, chemistry degree in hand; he first worked at Eastman Kodak creating chemical developers for color film, as well as at an agricultural lab. He then went to work at General Foods in 1941, contributing to the war effort by creating a tapioca substitute for soldier rations. In 1956, his quest to create a self-carbonating soda led to the accidental invention of Pop Rocks. A year later, he developed Tang Flavor Crystals, which skyrocketed to popularity after NASA used the powder in space to remedy astronauts’ metallic-tasting water. And by the time he’d retired from General Foods in 1976, Mitchell had also developed a quick-set gelatin, powdered egg whites, and a whipped cream alternative — the beloved Cool Whip that now dominates grocery store freezers.

Credit: Eugene Sergeev/ Alamy Stock Photo

Kevlar Was Originally Developed for Car Tires

In the mid-1960s, chemist Stephanie Kwolek was working in a Wilmington, Delaware, research lab for the textile division of the chemical company DuPont, which had invented another “miracle” fiber called nylon 30 years earlier. Fearing a looming gas shortage — one that arrived in earnest in 1973 — DuPont was searching for a synthetic material that could make tires lighter and stronger, replacing some of their steel and improving overall fuel efficiency. One day, Kwolek noticed that a particular batch of dissolved polyamides (a type of synthetic polymer) had formed a cloudy, runny consistency rather than the usual clear, syrup-like concoction. Although colleagues told Kwolek to toss it out, she persisted in investigating this strange mixture closely, discovering that it could be spun to create fibers of unusual stiffness. Kevlar was born. DuPont introduced the “wonder fiber” in 1971, and the material began undergoing tests in ballistic vests almost immediately. By one estimate, it has saved at least 3,000 police officers from bullet wounds in the years since. Despite its myriad applications, Kevlar still delivers on its original purpose as an automotive component, whether baked into engine belts, brake pads, or yes, even tires.

Credit: vuk8691/ iStock

Humans Invented Alcohol Before We Invented the Wheel

The wheel is credited as one of humankind’s most important inventions: It allowed people to travel farther on land than ever before, irrigate crops, and spin fibers, among other key benefits. Today, we often consider the wheel to be the ultimate civilization game-changer, but it turns out, creating the multipurpose apparatus wasn’t really on humanity’s immediate to-do list. Our ancient ancestors worked on other ideas first: boats, musical instruments, glue, and even alcohol. The oldest evidence of booze comes from China, where archaeologists have unearthed 9,000-year-old pottery coated with beer residue; in contrast, early wheels didn’t appear until around 3500 BCE, in what is now Iraq. But even when humans began using wheels, they had a different application — rudimentary versions were commonly used as potter’s wheels, a necessity for mass-producing vessels that could store batches of brew (among other things).

Credit: sstop/ iStock

Writing Systems Were Independently Invented at Least Four Times

Much human innovation is a collective effort — scientists, innovators, and artisans building off the work of predecessors to develop some groundbreaking technology over the course of many years. But in the case of writing systems, scholars believe humans may have independently invented them four separate times. That’s because none of these writing systems show significant influence from previously existing systems, or similarities among one another. Experts generally agree that the first writing system appeared in the Mesopotamian society of Sumer in what is now Iraq. Early pictorial signs appeared some 5,500 years ago, and slowly evolved into complex characters representing the sounds of the Sumerian language. Today, this ancient writing system is known as cuneiform.

However, cuneiform wasn’t a one-off innovation. Writing systems then evolved in ancient Egypt, in the form of hieroglyphs, around 3200 BCE — only an estimated 250 years after the first examples of cuneiform. The next place that writing developed was China, where the Shang dynasty set up shop along the Yellow River and wrote early Chinese characters on animal bones during divination rituals around 1300 BCE. Finally, in Mesoamerica, writing began to take shape around 900 BCE, and influenced ancient civilizations like the Zapotecs, Olmecs, Aztecs, and Maya. Sadly, little is known about the history of many Mesoamerican languages, as Catholic priests and Spanish conquistadors destroyed a lot of the surviving documentation.

Credit: AscentXmedia/ iStock

Parachutes Were Invented Before Airplanes

While most grade school students can tell you that the first airplane was flown by Orville and Wilbur Wright in 1903, the origins of the parachute go back further — significantly further, depending on your criteria. The Shiji, composed by Chinese historian Sima Qian, describes how, as a young man, the legendary Emperor Shun jumped from the roof of a burning building, using bamboo hats as a makeshift parachute. Leonardo da Vinci famously sketched a design for a pyramid-shaped parachute made from linen around 1485. Approximately 130 years later, Venetian Bishop Fausto Veranzio unveiled his own design in his Machinae Novae, and allegedly even tested the contraption himself.

But the first modern parachutist is generally considered to be France’s Louis-Sebastien Lenormand. Along with actually coining the term “parachute,” Lenormand initially tested the concept by leaping from a tree with two umbrellas, before flinging himself from the Montpellier Observatory with a 14-foot parachute in December 1783. Fourteen years later, another Frenchman, André-Jacques Garnerin, delivered the first truly death-defying parachute exhibition when he plunged from a hydrogen balloon some 3,200 feet above Paris, riding out the bumpy descent under his 23-foot silk canopy to a safe landing.

Credit: mustafa güner/ iStock

Bagpipes Were Invented in the Middle East, Not Scotland

The reedy hum of bagpipes calls to mind tartan attire and the loch-filled lands of Scotland, which is why it might be surprising to learn that the wind-powered instruments weren’t created there. Music historians believe bagpipes likely originated in the Middle East, where they were first played by pipers thousands of years ago. The earliest bagpipe-like instruments have been linked to the Egyptians around 400 BCE, though a sculpture from the ancient Hittites — an empire centered in present-day Turkey — dating to around 1000 BCE may also depict bagpipes.

Bagpipes slowly made their way throughout Europe, occasionally played by notable names in history like Roman Emperor Nero, and becoming widespread enough to be depicted in medieval art and literature. By the 15th century they had made their way to Scotland, where Highland musicians added their own influence. By some accounts, they modified the pipes into their modern form by adding more drones, which emit harmonized sounds. Highland musicians also began the practice of hereditary piping, passing the knowledge and skill of bagpiping through families, along with the duty of playing for Scottish clan leaders. All pipers of the time learned music by ear and memorization, a necessity considering the first written music for the pipes may not have appeared until the 18th century. One family — the MacCrimmons of the Scottish island of Skye — was particularly known for its influence in bagpiping, with six generations continuing the art, composing music, and teaching through their own piping college in the 17th and 18th centuries.

Credit: AscentXmedia/ iStock

Chocolate Chips Were Invented After Chocolate Chip Cookies

Ruth Wakefield was no cookie-cutter baker. In fact, she is widely credited with developing the world’s first recipe for chocolate chip cookies. In 1937, Wakefield and her husband, Kenneth, owned the popular Toll House Inn in Whitman, Massachusetts. While mulling new desserts to serve at the inn’s restaurant, she decided to make a batch of Butter Drop Do pecan cookies (a thin butterscotch treat) with an alteration, using semisweet chocolate instead of baker’s chocolate. Rather than melting in the baker’s chocolate, she used an ice pick to cut the semisweet chocolate into tiny pieces. Upon removing the cookies from the oven, Wakefield found that the semisweet chocolate had held its shape much better than baker’s chocolate, which tended to spread throughout the dough during baking to create a chocolate-flavored cookie. These cookies, instead, had sweet little nuggets of chocolate studded throughout. The recipe for the treats — known as Toll House Chocolate Crunch Cookies — was included in a late 1930s edition of her cookbook, Ruth Wakefield’s Tried and True Recipes.

The cookies were a huge success, and Nestlé hired Wakefield as a recipe consultant in 1939, the same year they bought the rights to print her recipe on packages of their semisweet chocolate bars. To help customers create their own bits of chocolate, the bars came pre-scored in 160 segments, with an enclosed cutting tool. Around 1940 — three years after that first batch of chocolate chip cookies appeared fresh out of the oven — Nestlé began selling bags of Toll House Real Semi-Sweet Chocolate Morsels, which some dubbed “chocolate chips.” By 1941, “chocolate chip cookies” was the universally recognized name for the delicious treat. An updated version of Wakefield’s recipe, called Original Nestlé Toll House Chocolate Chip Cookies, still appears on every bag of morsels. For her contributions to Nestlé, Wakefield reportedly received a lifetime supply of chocolate.    

Credit: GMVozd/ iStock

Popsicles Were Invented by an 11-Year-Old

A dessert accidentally created by a California kid has managed to stick around for over a century. One frigid night in the San Francisco Bay Area, young Frank Epperson took a glass of water and mixed in a sweet powdered flavoring using a wooden stirrer. He left the concoction on his family’s back porch overnight, and by morning, the contents had frozen solid. Epperson ran hot water over the glass and used the stirrer as a handle to free his new creation. He immediately knew he’d stumbled on something special, and called his treat an Epsicle, a portmanteau of his last name and “icicle.” Throughout his life, Epperson claimed that this experiment occurred in 1905, when he was 11 years old. While most publications agree, the San Francisco Chronicle’s website counters that local temperatures never reached freezing in 1905; they did, however, in nearby Oakland, where the Epperson family moved around 1907, meaning the fateful event may have happened a few years later.

In 1922, Epperson brought his frozen treat — which had since become beloved by friends and neighbors — to the Fireman’s Ball at Neptune Beach amusement park. It was a hit. Within two years, he had patented his ice pop on a wooden stick. Around the same time he began referring to his desserts as “popsicles” (a play on his children’s term for their father’s creation, “pop’s sicle”), but the word was absent from his patent, and a Popsicle Corporation quickly established itself elsewhere. “I should have protected the name,” Epperson later lamented. Although he briefly set up a royalty arrangement with the Popsicle Corporation, by 1925 he sold his patent rights to the Joe Lowe Company, which became the exclusive sales agent for the Popsicle Corporation. Over the decades, Epperson’s naming oversight cost him considerable profits.

Credit: CTRPhotos/ iStock

PEZ Candy Was Created To Help People Quit Smoking

Decades before doctors began to publicize the harmful effects of cigarettes, a 30-year-old Austrian executive decided to invent a refreshing alternative. In 1927, Eduard Haas III was managing his family’s baking goods business — the Ed. Haas Company — when he expanded the product line to include round, peppermint-flavored treats known as PEZ Drops. The German word for peppermint is pfefferminz, and Haas found the name for his new candies by combining the first, middle, and last letters of the German term. Clever advertising built national demand for the candy, which adopted its iconic brick shape in the 1930s and eventually nixed the “Drops.” PEZ were packaged in foil paper or metal tins until Haas hired engineer Oscar Uxa to devise a convenient way of extracting a tablet single-handedly. Uxa’s innovation — a plastic dispenser with a cap that tilted backward as springs pushed the candy forward — debuted at the 1949 Vienna Trade Fair.

A U.S. patent for the dispenser was obtained in 1952, but Americans of the day showed little interest in giving up smoking. So PEZ replaced the mint pellets with fruity ones and targeted a new demographic: children. In 1957, after experimenting with pricey dispensers shaped like robots, Santa Claus, and space guns, PEZ released a Halloween dispenser that featured a three-dimensional witch’s head atop a rectangular case. A Popeye version was licensed in 1958, and since then PEZ has gone on to produce some 1,500 different novelty-topped dispensers. An Austrian original that was revolutionized in America, PEZ is now enjoyed in more than 80 countries — and it’s still owned by the Ed. Haas Company.

Credit: 400tmax/ iStock

Jennifer Lopez Inspired the Creation of Google Images

Jennifer Lopez has worn a lot of memorable dresses on a lot of red carpets over the years, but only one broke the internet to such an extent that it inspired the creation of Google Images. The multi-hyphenate entertainer first wore the plunging leaf-print silk chiffon Versace gown to the 2000 Grammy Awards in L.A., which former Google CEO Eric Schmidt later revealed led to “the most popular search query we had ever seen.” The problem was that the then-two-year-old search engine “had no surefire way of getting users exactly what they wanted: J.Lo wearing that dress.” Thus, in July 2001, “Google Image Search was born.”

Two decades later, to the delight of everyone in attendance, Lopez also closed out Versace’s Spring 2020 show in Milan by wearing a reimagined version of the dress, after other models walked the catwalk to the tune of her hit 2000 single “Love Don’t Cost a Thing.” After a projected montage of Google Images searches for the original dress and a voice saying, “OK, Google. Now show me the real jungle dress,” J.Lo herself appeared in an even more provocative and bedazzled rendition of the gown.

Credit: redarmy030/ iStock

Silly Putty Was Developed During World War II as a Potential Rubber Substitute

World War II ran on rubber. From tanks to jeeps to combat boots, the Allied Forces needed an uninterrupted flow of rubber to supply fresh troops and vehicles to the front lines. Then, in late 1941, Japan invaded Southeast Asia — a key supplier of America’s rubber — and what was once a plentiful resource quickly became scarce. Americans pitched in, donating household rubber (think old raincoats and hoses) to help the war effort, but it wasn’t enough. So scientists set to work finding an alternative. Researchers working separately at Dow Corning and General Electric independently developed a silicone oil/boric acid mixture that appeared promising. It was easily manipulated and could even bounce off walls, but in the end its properties weren’t similar enough to rubber to be useful in the war.

U.S. government labs eventually found a workable rubber substitute using petroleum, but the previously developed “nutty putty” stuck around until it fell into the hands of advertising consultant Peter Hodgson. Sensing an opportunity, Hodgson bought manufacturing rights, renamed it “Silly Putty,” and stuck some of it inside plastic eggs just in time for Easter 1950. But it wasn’t until Silly Putty’s mention in an issue of The New Yorker later that year that sales exploded, with Hodgson eventually selling millions of eggs of the strange, non-Newtonian fluid (that is, a fluid whose viscosity changes under stress; ketchup and toothpaste are other examples). Since then, Silly Putty has found various serious uses, from teaching geology to physical therapy, and even took a ride on Apollo 8 in 1968, when it was used to keep the astronauts’ tools secure. A pretty impressive résumé for a substance that was initially considered a failure.

Credit: AegeanBlue/ iStock

Marshmallows Were Invented as a Divine Food in Ancient Egypt

Today marshmallows are largely reserved for campfires and hot chocolate, but in ancient Egypt they were a treat for the gods. The ancients took sap from the mallow plant (which grows in marshes) and mixed it with nuts and honey. Scholars aren’t sure what the treat looked like, but they know it was thought suitable only for pharaohs and the divine. It wasn’t until 19th-century France that confectioners began whipping the sap into the fluffy little pillows we know and love today.

Credit: triloks/ iStock

Velcro Was Inspired by a Walk Through the Woods

Amazing inventions come to curious minds, and that’s certainly the case for Swiss engineer George de Mestral. While on a walk in the woods with his dog, de Mestral noticed how burrs from a burdock plant stuck to his pants as well as his dog’s fur. Examining the burrs under a microscope, de Mestral discovered that the burrs’ spines weren’t straight (as they appeared to the naked eye), but instead ended in tiny hooks that could grab hold of the fibers in his clothing. It took nearly 15 years for de Mestral to recreate what he witnessed under that microscope, but he eventually created a product that both stuck together securely and could be easily pulled apart. In 1954, he patented a creation he dubbed “Velcro,” a portmanteau of the French words velours (“velvet”) and crochet (“hook”).

Credit: gorodenkoff/ iStock

The First Home Security System Was Developed by a Nurse

If you’ve ever checked in on your home from vacation or caught a porch pirate making off with a recent delivery, you have Marie Van Brittan Brown to thank. As a nurse in New York City in the 1960s, Brown worked irregular shifts that often had her coming home at odd hours while her husband, an electronics technician, was away. Concerned about crime in their neighborhood and a lack of help from law enforcement, the Browns worked together to create the first home security system.

Marie’s design was extensive: It featured a motorized camera that could be repositioned among a set of peepholes, a TV screen for viewing outside in real time (one of the earliest examples of closed-circuit TV, or CCTV), and a two-way microphone for speaking to anyone outside her apartment. The security system also included a remote-controlled door lock and an alarm that could reach a security guard. (One newspaper account of the Browns’ invention suggested the alarm could be used by doctors and businesses to prevent or stop robberies.) Brown applied for a patent on her thoroughly designed security system in 1966, and it was granted three years later; she never pursued large-scale manufacturing of her product. Regardless, she still receives credit for her ingenuity, with a significant number of security system manufacturers recognizing her device as the grandmother of their own security tools.

Credit: clubfoto/ iStock

An Extended Vacation Led to the Accidental Discovery of Penicillin

If you ever need to stress to your boss the importance of vacation, share the tale of penicillin. On September 3, 1928, Scottish physician Alexander Fleming returned to his laboratory at St. Mary’s Hospital in London after a vacation of more than a month. Sitting next to a window was a Petri dish filled with the infectious bacteria known as staphylococcus — but it’s what Fleming found in the dish alongside the bacteria that astounded him.

Inside the Petri dish was a fungus known as penicillium, or what Fleming at the time called “mould juice.” This particular fungus appeared to stop staphylococcus from spreading, and Fleming pondered whether its bacteria-fighting superpowers could be harnessed into a new kind of medicine. Spoiler: They could, and in the coming years penicillin was developed into the world’s first widely used antibiotic, earning Fleming a share of the 1945 Nobel Prize in Physiology or Medicine for his accidental yet world-changing discovery. “I did not invent penicillin. Nature did that,” Fleming once said. “I only discovered it by accident.”

Credit: Ladanifer/ iStock

Prosthetic Appendages Have Been in Use Since at Least 950 BCE

Scientists knew that the ancient Egyptian civilization was advanced, but they didn’t know just how advanced until they tested a prosthetic toe that came from the foot of a female mummy from about 950 to 710 BCE. While false body parts were often attached to mummies for burial purposes, experts agree that this toe was in fact used while the person was still alive. The wear and tear on the three-part leather and wood appendage (which was thought to be tied onto the foot or a sandal with string) proved that it was used to help the person walk, and tests using a replica of the toe fitted to a volunteer missing the same part of their foot showed that it significantly improved their gait in Egyptian-style sandals.

Credit: Edwin Tan/ iStock

The Microwave Was Invented by Accident, Thanks to a Melted Chocolate Bar

The history of technology is filled with happy accidents. Penicillin, Popsicles, and Velcro? All accidents. But perhaps the scientific stroke of luck that most influences our day-to-day domestic life is the invention of the microwave oven. Today, 90% of American homes have a microwave, according to the U.S. Bureau of Labor Statistics, but before World War II, no such device — or even an inkling of one — existed.

During the war, Allied forces gained a significant tactical advantage by deploying the world’s first true radar system. The success of this system increased research into microwaves and the magnetrons (a type of electron tube) that generate them. One day circa 1946, Percy Spencer, an engineer and all-around magnetron expert, was working at the aerospace and defense company Raytheon when he stepped in front of an active radar set. To his surprise, microwaves produced from the radar melted a chocolate bar (or by some accounts, a peanut cluster bar) in his pocket. After getting over his shock — and presumably cleaning up — and conducting a few more experiments using eggs and popcorn kernels, Spencer realized that microwaves could be used to cook a variety of foods. Raytheon patented the invention a short time later, and by 1947, the company had released its first microwave. It took decades for the technology to improve, and prices to drop, before microwaves were affordable for the average consumer, but soon enough they grew into one of the most ubiquitous appliances in today’s kitchens.

Featured image credit: Original photo by stocksnapper/ iStock


Original photo by sharply_done/ iStock

Science has been able to shed light on many of life’s mysteries over the centuries, offering explanations for diseases, animal behavior, the cosmos, and more. We’ve come a long way from the days when life forms were thought to appear through spontaneous generation and bloodletting was used to cure almost any illness. But many scientific mysteries remain embedded in our daily lives. Here are five common occurrences that continue to defy explanation by the top scientific minds.

A bottle of Tylenol on a dark background.
Credit: Brendan Smialowski via Getty Images

How Acetaminophen Works

You’d think that the accessibility of acetaminophen (Tylenol) as an over-the-counter painkiller would indicate a full understanding of its medicinal properties, but Big Pharma is still trying to figure this one out. Certainly scientists know the dangers of excessive doses, but exactly how the medication works to ease pain is still a mystery. It was once thought that acetaminophen functioned in the same manner as nonsteroidal anti-inflammatory drugs (NSAIDs) such as aspirin and ibuprofen, which block the formation of pain-producing compounds in the central nervous system. However, further testing indicated that this enzyme suppression only happens under certain chemical conditions in the body. Other researchers have examined the effects of acetaminophen on neurotransmission in the spinal cord, but a definitive mechanism remains elusive.

A cat looks directly at the camera while laying on the floor.
Credit: VioletaStoimenova/ iStock

Why Cats Purr

This one’s easy: Cats purr because they’re happy you’re petting them, right? Except they also purr when they’re hungry, nervous, or in pain, so there are more complex matters to consider. One theory put forth by bioacoustician Elizabeth von Muggenthaler suggests that purring functions as an “internal healing mechanism,” as its low-frequency vibrations correspond to those used to treat fractures, edema, and other wounds. Additionally, since humans generally respond favorably to these soothing sounds, it’s possible that purring has evolved, in part, as a way for domesticated kitties to interact with their owners. And researchers at least believe they now know how purring happens (a “neural oscillator” in the cat brain is thought to trigger the constriction and relaxation of muscles around the larynx), so it may not be long before they home in on more precise reasons for this common, but still mysterious, form of feline communication.

A man walks his bicycle along a crosswalk on a city street.
Credit: Pekic/ iStock

How Bicycles Remain Upright

It’s one of the great ironies of life that we supposedly never forget how to ride a bicycle, yet we lack a firm understanding of the mechanics that enable us to pull it off in the first place. Early attempts at rooting out answers gave rise to the “gyroscopic theory,” which credits the force created by spinning wheels with keeping bikes upright. This theory, however, was disproven in 1970 by chemist David Jones, who created a functional bike with a counter-rotating front wheel. Jones then floated his “caster theory,” which suggests that because a bike’s steering axis meets the ground ahead of the point where the front wheel touches down, it produces a stabilizing “trail” similar to that of a shopping cart caster. However, this theory also has holes, as researchers demonstrated in a 2011 Science article showing that a bike with a negative trail (a steering axis that meets the ground behind the wheel’s contact point) could maintain balance with proper weight distribution. All of which goes to show that while biking is largely a safe activity, there remains a glaring question mark at the heart of a $54 billion global industry.
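
For the curious, a bike’s trail can be estimated with a little trigonometry. As a rough back-of-the-envelope sketch only (assuming a typical road bike with a wheel radius of about 0.34 meters, a head angle of about 72 degrees, and a fork offset of about 45 millimeters, none of which come from the studies above), the calculation looks like this:

trail = (wheel radius × cos(head angle) − fork offset) / sin(head angle) ≈ (0.34 × 0.31 − 0.045) / 0.95 ≈ 0.06 meters

A positive trail of roughly 60 millimeters means the front wheel touches the ground behind the point where the steering axis meets it, like a shopping cart caster; a negative trail means the contact point sits in front of that spot, which is the configuration the 2011 study showed could still keep its balance.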

A flock of geese flying over a body of water while the sun sets.
Credit: sharply_done/ iStock

How Animals Migrate

Maybe you’ve seen flocks of birds flying overhead to mark the changing of seasons or read about salmon fighting upstream to return to their birthplaces, but exactly how do these animals navigate across such long distances and shifting conditions? In some cases, there are strong olfactory senses in play; a salmon can detect a drop of water from its natal source in 250 gallons of seawater, helping to guide the way “home.” But the possibilities get even stranger, as scientists are exploring the concept that light-sensitive proteins in the retinas of birds and other animals create chemical reactions that allow them to “read” the Earth’s magnetic field. It may seem far-fetched to think that birds rely on principles of quantum mechanics, but there may be no better explanation for how, say, the Arctic tern stays on target while annually migrating more than 40,000 miles from pole to pole.

An alarm clock sits on a nightstand while a woman sleeps in the background.
Credit: fizkes/ iStock

Why We Sleep

Given that we can pinpoint the health benefits and problems associated with proper and insufficient amounts of sleep, it’s baffling that we still don’t fully understand what this all-important restorative state does for the body. Older theories followed the notion that sleep helps people conserve energy while keeping them away from the dangers of the night, while more recent research explores how sleep contributes to the elimination of toxic neural buildups and promotes plasticity, the brain’s ability to adjust and reorganize from its experiences. Other experts hope to come across answers by studying glial cells, which are abundant in the central nervous system and possibly involved with regulating when we nod off and awaken. And if these diligent researchers ever do crack the code of what sleep does for us, maybe it will shed light on related nighttime mysteries — like why we dream.


Original photo by ZoranOrcik/ Shutterstock

Humans have long sought to control time. While it’s generally considered impossible to bend time to our will, there are two days of the year when we have a little sway over the clock. Daylight saving time — officially beginning on the second Sunday of March and ending on the first Sunday in November — is loathed as much as it is loved, but these six facts just might help you see the time warp in a whole new light.

Portrait of Benjamin Franklin.
Credit: Library of Congress/ Corbis Historical via Getty Images

Ben Franklin Didn’t Invent Daylight Saving Time

Ben Franklin is often credited as the inventor of daylight saving time — after all, the concept seems on-brand for the founding father who once championed early waking and bedtimes as the key to success. It’s a myth that Franklin invented daylight saving time, though he did once suggest a similar idea. In 1784, Franklin (then living in France) wrote a letter to the Journal de Paris, suggesting that French citizens could conserve candles and money by syncing their schedules with the sun. Franklin’s proposal — wittily written and considered a joke by many historians — didn’t recommend adjusting clocks; the idea was to start and end the day with the sun’s rising and setting, regardless of the actual time.

Franklin’s proposal didn’t get far, but nearly 100 years later, another science-minded thinker devised the daylight saving time strategy we’re familiar with today. George Vernon Hudson, a postal worker and entomologist living in New Zealand, presented the basics of the idea in 1895. Hudson’s version moved clocks ahead two hours in the spring in an effort to extend daylight hours; for him, the biggest benefit of a seasonal time shift would be longer days in which he could hunt for bugs after his post office duties were finished. Hudson’s proposal was initially ridiculed, but three decades later, in 1927, New Zealand’s Parliament gave daylight saving time a trial run, and the Royal Society of New Zealand even awarded Hudson a medal for his ingenuity.

Closeup of a young man adjusting the time of a clock.
Credit: nito/ Shutterstock

Only 35% of Countries Adjust Their Clocks Seasonally

Germany paved the way for daylight saving time in 1916, becoming the first country to enact Hudson’s idea as an energy-saving measure in the midst of World War I. Many countries followed suit — mostly in North America, Europe, parts of the Middle East, and New Zealand — but most of the world’s 195 countries never adopted the practice. In fact, around the globe it’s now more common not to make seasonal clock adjustments, especially in countries close to the equator, which don’t experience major changes in day length. In total, around 70 countries observe the time shift, and even in the U.S., where daylight saving time has been a standard practice mandated by federal law since 1966, two states don’t participate: Arizona and Hawaii.

Two women point to a man moving clock hands.
Credit: Bettmann via Getty Images

The First U.S. Daylight Saving Time Was a Disaster

Marching into World War I, the U.S. adopted the European strategy of rationing energy by adjusting civilian schedules. With more evening daylight, homes and businesses could somewhat reduce their reliance on electricity and other fuels, freeing up those resources for the war effort. But in the early part of the 20th century, timekeeping across the country was far from consistent, so in March 1918, President Woodrow Wilson signed legislation that created the country’s five time zones. That same month, on Easter Sunday, daylight saving time went into effect for the first time — though the government’s efforts to create consistent clocks were initially a mess. Holiday celebrations were thrown off by the change, and Americans lashed out with a variety of complaints, claiming the shift diminished attendance at religious services, cut into early morning recreation, and provided too much daylight, which supposedly destroyed landscaping.

The time shift was temporary: It was repealed in 1919 at the war’s end and wouldn’t be seen again at the federal level until World War II. However, some cities and states kept the idea alive, adjusting their clocks each spring and fall as they saw fit.

Back-lit photo of soldiers walking in WWI.
Credit: Frank Hurley/ Hulton Archive via Getty Images

In the U.S., Daylight Saving Time Once Had a Different Nickname

Because of its association with energy rationing during World War I, daylight saving time originally went by a different nickname: war time. When the U.S. entered World War II more than two decades later, war time returned and remained in place year-round from February 1942 until September 1945, when it was ditched at the war’s end.

The time change earned its modern name in 1966, when Congress passed the Uniform Time Act, which further standardized time zones and set uniform start and end dates for daylight saving time, among other things. Many countries that observe the shift use the same terminology, though it goes by different labels in some regions: In the U.K., for instance, it’s known as British Summer Time (BST).

Farmer with milk churns and his cows.
Credit: Edler von Rabenstein/ Shutterstock

Dairy Cows (And Farmers) Aren’t Big Fans

Daylight saving lore has it that the spring and fall clock changes provide the biggest benefit for farms, though if cows could speak, they might say otherwise. Farmers — who supposedly benefit from the extra hour of light in the afternoon — have heavily lobbied against the time change since it was first enacted in 1918. That’s partially because it’s confusing for livestock such as dairy cows and goats, throwing off their feeding and milking schedules. Some farmers say the loss of morning light also makes it more difficult to complete necessary chores early in the day, and impacts how they harvest and move crops to market.

While farmers have pushed to drop daylight saving time, some industries — like the golf industry — have campaigned to keep it for their benefit. The extra evening daylight is known for bringing more putters to the courses, generating millions in golf gear sales and greens fees. Other big business supporters include the barbecue industry (which sells more grills and charcoal in months with longer daylight hours) and candy companies (which benefit from longer trick-or-treating hours on Halloween).

Two o'clock on a clock face.
Credit: janzwolinski/ iStock

The 2 A.M. Start Time Is All Because of Trains

President Woodrow Wilson knew that rolling clocks forward and backward twice a year would be somewhat disruptive, so his 1918 wartime plan tried to be minimally bothersome. Instead of having clocks jump at the stroke of midnight, Wilson chose 2 a.m., a time when no passenger trains were running in New York City. While the shift did impact freight trains, there weren’t as many as there are today, so daylight saving time was considered a relatively easy workaround for the railroads. The 2 a.m. adjustment is still considered the least troublesome time today, since most bars and restaurants are closed and the vast majority of people are at home, asleep — either relishing or begrudgingly accepting their adjusted bedtime schedules.

Nicole Garner Meeker
Writer

Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.