“If it ain’t broke, don’t fix it” is a motto that works well for Rome. Because of the incredibly advanced craftsmanship of ancient Rome’s architects, as well as their remarkably long-lasting building materials (more on that below), many of the ancient empire’s most marvelous construction projects can still be seen by millions of tourists today — some 6 million people visit the Colosseum each year alone. However, the most amazing engineering achievement might be Rome’s eye-catching aqueducts, one of which still supplies the city with water millennia after it was built.
Rome has more water fountains than any other city in the world.
As befits Rome’s millennia-long history of being at the forefront of water engineering, the Italian capital still boasts more fountains than any other city in the world. Although estimates for the number of fountains run to 3,000 or more, many are no longer in use.
While the Romans didn’t invent the aqueduct — primitive irrigation systems can be found in Egyptian, Assyrian, and Babylonian history — Roman architects perfected the idea. In 312 BCE, the famed Roman statesman Appius Claudius Caecus commissioned the city’s first aqueduct, the Aqua Appia, which brought water to the growing population of the Roman Republic. Today, the Acqua Vergine — first built during the reign of Emperor Augustus in 19 BCE as the Aqua Virgo — still supplies Rome with water more than 2,000 years after its construction (though it’s been through several restorations).
The main reason for the aqueduct’s longevity, along with that of many of Rome’s ancient buildings, is its near-miraculous recipe for concrete. An analysis by the Massachusetts Institute of Technology discovered that Roman concrete could essentially self-heal due to its lime clasts (small mineral chunks) and a process known as “hot mixing” (mixing in the lime at extremely high temperatures). Today, researchers are studying how the material functioned in the hopes of applying secrets from the “Eternal City” to today’s building materials.
The famous Trevi Fountain is one of the end points of the Acqua Vergine aqueduct.
New York’s Croton Aqueduct, built in 1842, was based on ancient Roman engineering.
The fall of Rome in the fifth century coincided with a decline in sanitary conditions in many of the world’s cities. By the 18th and 19th centuries, disease ran rampant due to poor sanitation and water management. One of the first aqueducts in the U.S. was the Croton Aqueduct, designed by engineer John B. Jervis, which provided fresh water for the growing metropolis of New York City. Although ancient Rome’s last aqueduct had been built some 1,600 years prior, Jervis based his design on those impressive examples of Roman engineering, and his aqueduct similarly used simple gravity to carry water 41 miles from the Croton River to reservoirs in Manhattan. Upon its completion in 1842, the aqueduct drastically improved health and hygiene in New York City and continued providing the booming city with fresh water until it was decommissioned in 1955.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
There are all sorts of (false) rumors and superstitions floating around about redheads: They bring bad luck. They have fiery tempers. They’re more susceptible to some kinds of pain and hate going to the dentist. On that last account, though, there’s a decent amount of research that might explain the anecdotal evidence.
One of the earliest studies supporting that last notion, published in 2004, found that redheaded subjects required 19% higher dosages of an anesthetic (desflurane) to achieve a satisfactory effect. The following year, another study found redheads to be more sensitive to thermal pain, and resistant to the effects of a different injected anesthetic (lidocaine). The apparent difference, for those natural carrot tops, involves the presence of melanocortin 1 receptor (MC1R) gene variants in the pigment-producing cells known as melanocytes. These variants stymie the hormones that would otherwise turn red hair a different shade, while also seemingly influencing secretions related to pain tolerance.
Although many reputable sites repeat the claim that red hair turns white instead of gray, it's contradicted by the testimonials of gingers with gray locks.
However, research doesn’t support the idea that redheads have a lower pain tolerance generally, and they are actually more sensitive to opioid analgesics. A 2021 study found that red-haired mice, which also possess the MC1R variants, have a higher threshold for certain types of pain induction. This followed a 2020 study that suggested the MC1R variants tied to pain sensitivity are distinct from those that affect hair color. That said, it does seem wise to offer redheads an extra novocaine boost at the dentist.
The first public demonstration of an effective anesthetic took place in 1846.
Patients can wake up during surgery despite receiving anesthesia.
Regardless of hair color (or lack of hair), people have been known to briefly regain consciousness during surgery despite being under the effects of general anesthesia. According to the American Society of Anesthesiologists, this situation, called anesthesia awareness, happens once or twice in every 1,000 medical procedures. These rare cases tend to happen when lighter doses of sedatives are administered to avoid endangering the patient during certain procedures, including emergency C-sections and cardiac surgeries. Those who experience anesthesia awareness typically do not report feeling pain, but nevertheless may require counseling afterward to cope with what can be a jarring occurrence.
Tim Ott
Writer
Tim Ott has written for sites including Biography.com, History.com, and MLB.com, and is known to delude himself into thinking he can craft a marketable screenplay.
Compared to dinosaurs, humans have occupied only a speck on the timeline of Earth’s history. Modern humans appeared on the scene 200,000 years ago (or up to 7 million years ago if you include the whole human family), while dinosaurs roamed the globe for about 165 million years. Despite that span stretching across three distinct geologic periods (Triassic, Jurassic, and Cretaceous), many people view the “Age of the Dinosaurs” as a monolithic moment in history when all dinosaurs lived together. In fact, more time separates the stegosaurus and the Tyrannosaurus rex than separates modern humans from “the King of the Dinosaurs.”
It’s not just state flowers and birds — some states also have fossils to represent them. For example, Colorado’s state fossil is a stegosaurus, Kansas’ is a pteranodon, and Utah’s is an allosaurus. Other states have mammal fossils, like mastodons, whales, and saber-toothed cats.
Stegosaurus roamed what’s now North America during the late Jurassic period, about 155 million to 145 million years ago. Although it didn’t live alongside the ferocious T. rex, its actual contemporary, the allosaurus, was a similar nightmare of powerful teeth. T. rex didn’t arrive on the scene until some 68 million years ago, during the late Cretaceous — a difference of some 80 million years. So while a comfortable 66 million years separate humans from the dinosaurs’ dramatic, likely asteroid-induced downfall, the stegosaurus and T. rex lived even further apart in time. This startling fact doesn’t even take into account Triassic dinosaurs, such as herrerasaurus and eoraptor, which are twice as chronologically distant from the T. rex as the stegosaurus is. Turns out, the “Age of the Dinosaurs” is much more complex than its name suggests.
The T. rex roar in “Jurassic Park” was a composite of sounds from a tiger, alligator, and baby elephant.
Scientists aren’t sure why stegosauruses had plates.
The word “stegosaurus” comes from the Greek for “roof lizard,” a reference to the giant dino’s most recognizable feature — the series of plates running nearly the length of its body. In the dino world, these plates are as iconic as a triceratops’s triple horns or a T. rex’s small (but surprisingly strong) arms, yet scientists still don’t really know why this icon of the late Jurassic had them. Instead, we’re left with several theories that could help explain the fossilized mystery. One idea is that the plates were a sexual characteristic, with bigger, pointier plates considered more attractive. Another theory suggests the plates helped regulate temperature, soaking up heat during the day and dissipating it at night. Other scientists argue the plates might have been used to communicate, intimidate, or defend against predators. Whatever their purpose, these mysterious plates have made the stegosaurus one of the most recognizable dinos in the world.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
If you ever wondered how everyone’s favorite not-quite-planet got its name, rest assured that the answer has nothing to do with Mickey Mouse’s dog. Discovered in 1930 and now considered a dwarf planet — a downgraded designation that fans call a grave injustice — Pluto first came to the attention of a young Brit named Venetia Burney via her grandfather, who read a newspaper article about its discovery to her on the morning of March 14, 1930. An unusually bright 11-year-old whose knowledge of celestial objects was surpassed only by her passion for classical mythology, Burney simply said, “Why not call it Pluto?” (Pluto was the Roman god of the underworld.)
Neither Mercury nor Venus has any natural satellites, most likely because they’re too close to the sun. Any moon too far from them would likely be captured by the sun, and one too close would be destroyed by tidal gravitational forces.
Most children’s suggestions on topics such as this wouldn’t travel beyond their own breakfast table, but Venetia happened to be the granddaughter of a retired Oxford University librarian who knew a well-placed astronomer. “I think PLUTO excellent!!” that astronomer responded to the grandfather’s suggestion, before passing along the idea to his colleagues at the observatory in Arizona who had discovered the “dark and gloomy” planet. The astronomers voted unanimously in favor of it. Seventy-eight years later, in 2008, the wholesome astronomical episode was the subject of a short documentary that no less an authority than Burney herself deemed “a masterpiece.”
The smallest planet in the solar system is Mercury.
Many astronomers believe there’s a “real” ninth planet.
Disrespect to Pluto notwithstanding, a growing number of researchers have theorized the existence of a “real” ninth planet — and found indirect evidence that it may exist. With a “bizarre, highly elongated orbit” and mass perhaps 10 times that of Earth, the hypothetical heavenly body might be easy to find were it not for the fact that it’s thought to be 20 times farther from the sun than Neptune (and much farther away than Pluto). It could take 7,400 Earth years for Planet Nine to fully orbit the center of our solar system, whereas Pluto takes just 248. Skeptics suggest that Planet Nine doesn’t exist, however, and attempts to locate it have thus far proven unsuccessful.
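Those orbital periods can be sanity-checked with Kepler’s third law, which ties a body’s period to its average distance from the sun. Here’s a minimal sketch in Python; the roughly 380 AU distance it derives for Planet Nine is our own inference from the quoted 7,400-year figure, not a number from the astronomers.

```python
# Kepler's third law for bodies orbiting the sun: T^2 = a^3, where T is the
# orbital period in Earth years and a is the orbit's semi-major axis in
# astronomical units (AU). Pluto's quoted 248-year orbit falls out directly.

def orbital_period(a_au: float) -> float:
    """Orbital period in Earth years for a semi-major axis in AU."""
    return a_au ** 1.5

def orbital_distance(t_years: float) -> float:
    """Semi-major axis in AU implied by an orbital period in Earth years."""
    return t_years ** (2 / 3)

print(f"Pluto (39.5 AU): ~{orbital_period(39.5):.0f} years")   # ~248 years

# A 7,400-year period implies a semi-major axis of roughly 380 AU (an
# inference from the quoted period, not a figure from the article).
print(f"Planet Nine (7,400 years): ~{orbital_distance(7400):.0f} AU")
```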
Michael Nordine
Staff Writer
Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.
Catnip is best known for producing bouts of euphoria in cats of all sizes, from house cats to their big cat brethren (including bobcats, jaguars, and lions). In addition to giving felines a healthy release from stress and anxiety, however, some studies show it offers up an additional perk: repelling mosquitoes. Related to mint, basil, and lemon balm, Nepeta cataria (aka catnip) emits a chemical compound called nepetalactone when crushed, which naturally wards off some mosquito species. What’s more, catnip-addled cats often chew and rub the leaves into their coats, unwittingly spreading the natural bug repellent around. While catnip is all fun for cats, it’s not so great for mosquitoes; Nepeta leaves may be effective at fending off the pests because they cause pain to the buzzing bugs. Researchers initially theorized that catnip’s aroma alone was enough to repel the insects, but some studies show mosquitoes exposed to nepetalactone actually feel pain or itchiness in much the way humans experience the sensation of wasabi.
Domesticated house cats have around 20 facial whiskers, along with a set on the back of their front legs called carpal whiskers. Since most cats have poor up-close vision, these special strands detect movement from captured prey held in their paws.
Indoor cats may not need mosquito protection, but catnip still provides a safe, effective way for them to calm down — although scientists aren’t fully sure how it works. It’s possible that catnip affects cat brains in the same way opioids work in humans to relieve pain: One study found that cats given naloxone — a lifesaving medication that blocks opioid receptors and is used to treat narcotics overdoses — didn’t have a reaction to catnip. Even so, catnip doesn’t work on all cats. Kittens won’t respond to the plant’s minty leaves until they’re 3 to 6 months old; plus, catnip sensitivity is hereditary, and an estimated 50% of cats don’t experience any reaction at all. But if your cat just so happens to turn up its nose at fresh catnip, don’t worry. Humans can use it for a calming tea similar to chamomile.
Scientists think mosquitoes may be key to developing painless needles.
From the human perspective, we don’t cohabit well with mosquitoes and their seemingly voracious summer appetites. But some researchers believe we can learn more about pain-free blood extraction from these ancient pests, which have inhabited Earth 200 million years longer than humans. Only female mosquitoes bite, and they do so by using their proboscis, a long tube that pierces the skin and removes blood. These miniature needles use a combination of features for undetected and painless feeding: a chemical in mosquito saliva that numbs the bite zone, a serrated edge that more easily pierces the skin, and tiny vibrations that reduce how much force a mosquito needs to puncture its prey. Scientists think incorporating these elements in new needles — which haven’t seen major improvements in decades — could be pivotal in developing microneedles that deliver pain-free vaccinations and medications. Another benefit: Gentle injections could reduce trypanophobia (aka the fear of needles), which affects an estimated 66% of kids and 25% of adults.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
Taxes fund many of the services we need, but no one enjoys paying them — and it’s likely that many of our ancestors didn’t, either. Governments worldwide have levied taxes for thousands of years; the oldest recorded tax comes from Egypt around 3000 BCE. But England — which relied heavily on taxes to fund its military conquests — is known for a slate of fees that modern taxpayers might consider unusual. Take, for instance, the so-called “window tax,” initially levied in 1696 by King William III, which annually charged citizens a certain amount based on the number of windows in their homes. Some 30 years before, the British crown had attempted to tax personal property based on chimneys, but clever homeowners could avoid the bill by temporarily bricking up or dismantling their hearths and chimneys before inspections. With windows, assessors could quickly determine a building’s value from the street. The tax was progressive, charging nothing for homes with few or no windows and increasing the bill for dwellings that had more than 10 (that number would eventually shrink to seven).
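That tiered structure is, at heart, a simple bracket calculation. Here’s a minimal sketch of how such an assessment might have worked; the shilling amounts and bracket cutoffs are illustrative assumptions, not documented 1696 rates.

```python
# A minimal sketch of a bracketed window-tax assessment: nothing owed for
# modest homes, a bigger bill above the threshold. The shilling amounts
# below are illustrative assumptions, not documented 1696 rates.

def window_tax_shillings(windows: int, threshold: int = 10) -> int:
    """Annual assessment, in shillings, for a house with `windows` windows."""
    if windows <= threshold:
        return 0      # few or no windows: nothing owed
    if windows <= 2 * threshold:
        return 4      # mid-sized homes: a flat surcharge
    return 8          # the largest homes: a doubled surcharge

# The threshold eventually shrank from 10 windows to 7, pulling smaller
# homes into the tax (one more reason owners bricked up their windows).
for w in (6, 9, 12, 25):
    print(f"{w:>2} windows: {window_tax_shillings(w, threshold=7)} shillings")
```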
The British government taxed many everyday items, including salt, candles, and beer. But its 1643 tax on soap created a lather among soapmakers and drove up prices for shoppers — many of whom turned to French soap-smuggling rings for the lower-cost suds they needed.
Not surprisingly, homeowners and landlords throughout the U.K. resented the tax. It didn’t take long for windows to be entirely bricked or painted over (much like fireplaces had been), and new homes were built with fewer windows altogether. Opponents called it a tax on “light and air” that hurt public health, citing reduced ventilation that in turn encouraged disease. Even famed author Charles Dickens joined the fight to dismantle the tax, publishing scathing pieces aimed at Parliament on behalf of the poor citizens who were most affected by the lack of fresh air. Britain repealed its window tax in July 1851, but the architectural impact is still evident — many older homes and buildings throughout the U.K. still bear their iconic bricked-up windows.
The United States had its own (short-lived) “glass tax.”
Just two decades after declaring independence from Britain, the U.S. was in need of funds to bolster its military — this time in preparation for potential conflict with France. President John Adams knew that building up troops wasn’t a cheap initiative and that the country would need to raise the money somehow. That’s why Congress passed the 1798 U.S. Direct Tax (more commonly called the “window tax” or “glass tax”) with the goal of adding $2 million to its coffers. Each of the 16 states was responsible for assessing the property of its residents; homes and property worth more than $100 were taxed. Considering the cost of pane glass during the late 18th century, buildings with glass windows could quickly reach that threshold even if they were modest in size, hence the tax’s nickname (although windows were not directly taxed as in Britain). The tax was controversial and repealed just a year later, but some records still exist, giving genealogists and historians a glimpse into how early Americans lived.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
Sculpture from classical antiquity is often presented in museums, textbooks, and more as a world of white marble. Whether unearthed from the ground or perched upon crumbling temples, these supposedly pale masterpieces also influenced Renaissance artists such as Michelangelo, who — in the throes of a classical art obsession — created sculptures meant to highlight the natural beauty of stone. Other Renaissance masterpieces, such as Raphael’s early 1500s fresco “The School of Athens,” placed colorful figures of antiquity against a backdrop of white marble. But these representations aren’t an accurate portrayal of the past: Ancient Athens and Rome were full of eye-popping color, with statues sporting vibrant togas and subtle skin tones — in fact, no sculpture was considered complete without a dazzling coat of paint.
European Renaissance artists invented oil painting.
The very first known oil paintings were created far from Europe. In the seventh century CE, Buddhist monks in Afghanistan used oil paints to create murals on cave walls.
Over time, these impermanent paints — left unprotected from the elements — wore away, leaving behind unblemished stone and a false legacy of monotone marble. This perception of the “whiteness” of antiquity was cemented in the 18th century, tied to racist ideals that equated the paleness of the body with beauty. When German scholar Johann Winckelmann (sometimes called the “father of art history”) glimpsed flecks of color on artifacts found near the ancient Roman cities of Pompeii and Herculaneum, he brushed off the work as Etruscan — a civilization he considered beneath the grandeur of ancient Rome. Besides the bits of color still clinging to some statues, other evidence of the Mediterranean’s colorful past survives in frescoes from Pompeii (one of which even depicts a Roman painting a statue); the Greek playwright Euripides also mentions colored statues in his play Helen. In recent decades, the art world has been busy recovering this vivid past: Archaeologists use UV light to illuminate lingering pigments, and traveling exhibitions around the world unshroud the true palette of these ancient civilizations.
The oldest evidence of Homo sapiens paint-making comes from a prehistoric cave in South Africa.
The Egyptian pyramids were originally polished white.
Even 4,500 years after its construction, the Great Pyramid of Giza never fails to impress. The largest of the pyramids at 455 feet tall, it’s the last surviving of the Seven Wonders of the Ancient World, and every year it hosts several million visitors. However, the pyramid would likely be a sorry sight to the ancient Egyptians who witnessed its beauty back in the 26th century BCE. Today, the pyramid’s earthy color matches the surrounding desert, but archaeologists believe the original structure was encased in highly polished white limestone, making the pyramids appear white and glistening. Some experts believe that the capstones, called pyramidions, were also plated in gold. One leading theory suggests these limestone casings were repurposed millennia later to build mosques, a process that exposed the pyramids we know and love today.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
Alaska is big — in more ways than one. Not only is it the largest U.S. state by a wide margin, but it’s also home to the 10 highest mountain peaks in the U.S., far more volcanoes than any other state, and more coastline than all the other states combined. Of the United States’ estimated 12,479 miles of coastline, Alaska accounts for some 6,640 miles all on its own, at least according to one count by the Congressional Research Service. (Coastlines can be notoriously difficult to measure, and counts do vary.)
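That parenthetical hints at the so-called coastline paradox: the jagged, fractal-like shape of a coast means its measured length grows as the measuring stick shrinks. Here’s a minimal sketch of the effect in Python, using a hypothetical wiggly curve rather than real coastline data.

```python
import math

# The "coastline paradox" in miniature: we model a coast as a jagged curve
# (a hypothetical stand-in, not real Alaska data) and measure its length by
# summing straight chords between sample points at different resolutions.

def coast_y(x: float) -> float:
    """A jagged, coastline-like profile built from layered sine waves."""
    return sum(math.sin((3 ** k) * x) / (2 ** k) for k in range(8))

def measured_length(num_segments: int) -> float:
    """Total length when the curve is approximated by num_segments chords."""
    xs = [i * (10 / num_segments) for i in range(num_segments + 1)]
    points = [(x, coast_y(x)) for x in xs]
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# A finer resolution (a shorter "ruler") picks up more wiggles, so the
# measured length keeps growing, which is why coastline counts vary.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} segments -> length {measured_length(n):.1f}")
```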
Alaska’s state flag was designed by a 13-year-old.
In 1926, the territory of Alaska held a design competition for a new flag. The winner — Polaris (the North Star) and the Big Dipper, part of the constellation Ursa Major, on a blue field — was created by a 13-year-old boy living in an orphanage. Alaska stuck with the flag when it became a state in 1959.
Alaska’s coastline borders three seas — the Beaufort, Bering, and Chukchi — along with the Pacific and Arctic oceans, and it rests in some of the most extreme climates in the world. The coasts themselves have been formed over millions of years by fault tectonics, volcanism, glaciation, fluvial processes, and sea level changes. These beaches aren’t usually very balmy: The southeast section of Alaska’s coast is filled with rocky shores and sheltered fjords, while in the north, sediment from rivers draining from the Brooks Range and the Canadian Rockies forms deltas. Although these rivers are often frozen, wind pushes sea ice along the shore during the coldest months. So if you’re looking for a place to swim, maybe stick to Key West.
Canada has the most coastline of any country, at 125,567 miles.
Alaska is home to 227 federally recognized Indigenous tribes, more than any other state in the U.S.
The U.S. federal government recognizes 574 tribes throughout the country — and 227 of those are in Alaska alone (the Bureau of Indian Affairs lists an additional two, bringing the total to 229). In fact, nearly one in six Alaskans is considered Native American, the highest rate of any U.S. state (although California is home to the most Indigenous people overall). Although the U.S. government has formally recognized Alaska’s 229 tribes, the state of Alaska didn’t follow suit until the summer of 2022, when Governor Mike Dunleavy signed legislation recognizing the tribes and their indelible contributions to the history and culture of Alaska.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
There’s a good reason why both main characters in “Finding Nemo” are male, at least initially. All clownfish are born that way, and it’s only when a group’s dominant female dies or disappears that a male will develop into a female and become the new matriarch. All clownfish have the ability to turn female, and the change is permanent once it occurs. The transformation begins almost immediately after the dominant female leaves, starting in the brain before manifesting itself in the sex organs. Had the beloved Pixar film been devoted to scientific accuracy, Nemo’s father, Marlin, might not have remained merely his caregiver after tragedy befalls the boy’s mother — he might literally have become his mother.
Along with their equine appearance, seahorses are well-known for another unique trait: the fact that males, not females, get pregnant and bear young. The same is true of leafy seadragons and pipefish, fellow members of the Syngnathidae family.
Clownfish aren’t the only reef-dwellers that can change sex. The bluehead wrasse does it as well, only in reverse: When a dominant male leaves its group, the largest female transforms into a male over the course of just 21 days. Researchers have identified no fewer than 500 fish species capable of changing sex; some, like the coral-dwelling species of gobies, can even switch back and forth. The process is believed to have reproductive benefits, as it allows a single fish to reproduce as both sexes throughout its life.
Though the orange-and-white look is the most recognizable, it’s not the only one clownfish can sport. Across the nearly 30 different species of clownfish, there are other colors, too: yellow, red, and black are also common, though most species still have the characteristic thick white stripes. And despite their playful name and bright colors, clownfish aren’t especially friendly when paired with other fish — in fact, they’re downright aggressive.
Michael Nordine
Staff Writer
Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.
Today, nutmeg is used in the kitchen to add a little zing to baked goods and cool-weather drinks, though at various times in history it’s been used for fragrance, medicine… and its psychotropic properties. That last use is possible thanks to myristicin, a chemical compound found in high concentrations in nutmeg but also present in other foods, such as parsley and carrots. Myristicin can cause hallucinations by disrupting the central nervous system, causing the body to produce too much norepinephrine — a hormone and neurotransmitter that transmits signals among nerve endings. While the idea of conjuring illusions of the mind might sound intriguing, nutmeg intoxication also comes with a litany of unpleasant side effects, including dizziness, confusion, drowsiness, and heart palpitations.
Nutmeg grows on trees, but it doesn’t come from a nut. It’s actually produced from a seed that grows inside an apricot-shaped fruit on tropical Myristica fragrans trees. The harvested seeds are dried and ground into the seasoning commonly found on kitchen spice racks.
Nutmeg’s inebriating effects have been noted since the Middle Ages, when crusaders would ingest large amounts to inspire prophetic visions (and to help with travel-related aches and pains). Medieval doctors and pharmacists with the Salerno School of Medicine noted that it needed to be used carefully, warning that “one nut is good for you, the second will do you harm, the third will kill you” (which some doctors today say may have been an exaggeration). In fact, nutmeg is a vitamin-rich source of antioxidants and can even act as a mood booster — a healthy addition to your spice rack, so long as it’s used in small quantities.
Nutmeg trees also produce mace, a spice created from the coating on nutmeg seeds.
Manhattan became a British colony thanks to nutmeg.
Spice trading was a lucrative business in the 17th century, which is why many countries sought to control areas where they could monopolize spice production. Back then, nutmeg was considered one of the rarest spices in the world, making it a costly substance to acquire. Two European powers — the British and the Dutch — fought to control Indonesia’s Banda Islands, the only place where nutmeg was originally found. As part of the 1667 Treaty of Breda that ended the Second Anglo-Dutch War, the two nations agreed to swap colonies, with the Dutch giving up their claim to Manhattan in exchange for Run, a British-controlled island in the Banda chain. Both countries were content with their wins, although their successes proved short-lived: The Dutch monopoly loosened in the 1700s, when trees smuggled out of Indonesia increased competition for nutmeg. And just over 100 years after the treaty was signed, of course, Britain’s colonies in America declared independence and split from the crown.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.