Stage names are hardly uncommon in Hollywood, but false initials are rarer — if not unheard of. To wit: Michael J. Fox’s middle name doesn’t start with “J.” The “Back to the Future” star’s middle name is actually Andrew, but there was already a Michael A. Fox in the Screen Actors Guild when Fox wanted to join it. So why the “J”? The letter is an homage to Michael J. Pollard, a character actor Fox admires. Pollard had more than 100 acting credits to his name by the time he died in 2019, and earned an Academy Award nomination, a BAFTA nomination, and two Golden Globe nominations for his role as gas station attendant-turned-accomplice C.W. Moss in 1967’s “Bonnie and Clyde.”
Fox wasn’t originally cast in “Back to the Future.”
John Cusack, Charlie Sheen, Ralph Macchio, and many others all auditioned for the role of Marty McFly, but Eric Stoltz was cast. It wasn’t until six weeks into production that director Robert Zemeckis let Stoltz go, feeling he wasn’t right for the part, and Fox got the role instead.
Some stage names are so successful that most people don’t realize they’re stage names. Sir Elton John was born Reginald Kenneth Dwight, for instance, while Jamie Foxx’s real name is Eric Marlon Bishop, and Whoopi Goldberg’s is Caryn Elaine Johnson — to name just a few. Fox, who was diagnosed with Parkinson’s disease in 1991 and announced his condition in 1998, retired from acting in 2020. He founded the Michael J. Fox Foundation for Parkinson’s Research in 2000 and remains devoted to finding a cure for the disease.
“Back to the Future” was almost named “Spaceman From Pluto.”
Middle names date back to ancient Rome.
Well, kind of. Many Romans had three names, but their second name wasn’t quite a middle name. There was the praenomen (personal name), nomen (family name), and cognomen, which indicated which branch of a family you were from. (For instance, Julius Caesar’s full name was actually Gaius Julius Caesar.) There was also a hierarchical element to the Roman naming system, as women generally had only two names and enslaved people often had only one. Middle names as we know them today arose in the Middle Ages, a time when devout Europeans struggled to choose between giving their children a family name and the name of a saint. Eventually deciding that both were preferable to one, they began the tradition of a child receiving a given name, a baptismal name (a saint’s name), and a surname. That custom eventually reached America along with the people who emigrated there, with secular middle names becoming more common over time.
Michael Nordine
Staff Writer
Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.
top picks from the Inbox Studio network
Interesting Facts is part of Inbox Studio, an email-first media company. *Indicates a third-party property.
Juneteenth is a more recent addition to most American calendars, and the first new federal holiday in nearly four decades. So it might come as a surprise that the celebration already has a flag of its own, and one that’s been around for more than 25 years. Designed back in 1997, the flag came about at a time when advocates were rallying for the holiday — which celebrates the end of slavery in the U.S. — to gain federal recognition. In the years since, the design (originally crafted by activist Ben Haith) has undergone minor changes but remains heavy with symbolism. The banner’s solid white star represents freedom and nods to Texas (aka the Lone Star State) for its role in the creation of Juneteenth; on June 19, 1865, enslaved people were officially emancipated in Texas more than two years after President Abraham Lincoln signed the Emancipation Proclamation to abolish slavery in Confederate states. Also featured on the flag is a white sunburst, portraying new beginnings. Both stars sit atop an arching blue and red horizon that signifies optimism for the future. The three-toned flag purposely uses the same colors as the American flag as a reminder that formerly enslaved people and their descendants are Americans, too, despite the country’s history of unequal rights.
The last enslaved people in the U.S. were set free on Juneteenth.
Juneteenth honors a date when enslaved Americans in Texas were liberated, but it wasn’t the true ending of slavery. Despite Lincoln’s famous decree, some areas (such as Delaware and Kentucky) permitted the practice until the 13th Amendment ending slavery was ratified in December 1865.
Juneteenth’s official flag is raised alongside the Stars and Stripes above government buildings, on college campuses, and in front yards around the country, but it also sometimes appears next to the Pan-African flag at reunions, block parties, and other Juneteenth festivities. Introduced in 1920 by a group led by political activist Marcus Garvey, the three-striped banner of the Pan-African flag is identifiable by its horizontal red, black, and green bands, which represent the blood, people, and growth of the African diaspora. Both flags are meant as inspiring symbols of unity and remembrance — ideas worth celebrating on America’s second Independence Day.
Texas was the first state to make Juneteenth an official holiday, in 1980.
The Emancipation Proclamation is rarely on display.
The United States’ most famous founding documents are relatively easy to see in person. Take the Declaration of Independence and the Constitution, for example, which are on permanent display at the National Archives Museum in Washington, D.C. But despite its historical significance, the original Emancipation Proclamation is seldom showcased. That’s because the handwritten, double-sided document hasn’t aged well (physically). Archivists attribute its deterioration to the paper used back in 1863, which hasn’t withstood time as well as the animal-skin parchment used for older documents such as the Constitution. In addition, the U.S. Department of State had custody of the original Emancipation Proclamation until transferring it to the National Archives in 1936, by which point it had sustained considerable damage from handling and light exposure. Today, the Emancipation Proclamation is stored in an environmentally controlled vault and viewable on rare occasions; the pages never leave storage all at once, and are always displayed under extremely low light for short periods of time to maximize their life span.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
While you might associate the development of modern universities with intellectual movements like the Renaissance or the Enlightenment, the first universities predate those major periods in history — not by years but by centuries. One of the oldest universities in the world is Oxford University, where teaching began back in 1096. That’s much older than Harvard (established in 1636) or Yale (1701), and it’s even older than some well-known Indigenous civilizations in the Americas, including the Incas, who lived in the Andean region of South America from around the 13th century CE to the mid-16th century. (Other groups and empires have occupied the Andes since at least 10,000 BCE.)
The University of al-Qarawiyyin in Fez, Morocco, was founded by Fatima al-Fihri, the daughter of a rich merchant, in 859 CE. By the 10th century, it had evolved into the largest Arab university in North Africa. Today, it is the oldest continuously operating university in the world.
The first universities were not like the sprawling campuses of today. Instead, they were more like guilds devoted to certain subjects or crafts. Slowly, the influence of these schools grew throughout the High Middle Ages (1000–1300), and many of them became hot spots during later intellectual movements. Meanwhile, as Europe was busy cementing the importance of its universities (and fighting half a dozen Crusades), the Incas were building sprawling road networks and reliable postal systems — they even had highly skilled brain surgeons.
Machu Picchu means “old hill” in the Quechua language.
The Incas used string and knots to record information.
Although the Incas had no known written language, they weren’t without a means of recording important information. Quipu were Andean textiles that used a system of colored string and knots to record data. That data was both recorded and read by officials known as “quipucamayocs.” Evidence suggests that quipu were first developed by the Wari civilization, which flourished in Peru between about 450 and 1000 CE. Scholars believe the Incas used quipu both to record hard data — such as census figures, inventory, and other administrative information — and as a way to encode Incan myths and histories. Because of the Andes’ arid climate, the quipu were well preserved for centuries. Today, hundreds of quipu are displayed in museums around the world, with the biggest collection now residing at the Ethnological Museum of Berlin in Germany.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
In the 1950s, Americans were looking for ways to spend less time in the kitchen. Generations of home cooks, the overwhelming majority of them women, had made food preparation the focus of their day; historians estimate that in 1900, an average household spent 58 hours per week on housework. But a few decades later, postwar innovations such as affordable appliances created more free time — and so did a new wave of commercially prepared and processed foods, an emerging industry fueled by scientists such as William A. Mitchell. While Mitchell’s name isn’t widely known today, his most popular inventions are major name brands, including Cool Whip, Pop Rocks, and Tang.
The vitamin-infused powder is forever linked with space exploration, but General Foods originally planned Tang as a travel-friendly drink mix for consumers. NASA, looking for easier ways to transport beverages to space, took notice and first stocked it on the 1962 Friendship 7 mission.
Growing up in Minnesota, Mitchell spent his teenage years as a farmhand and carpenter, working to fund his college tuition. After graduating with a chemistry degree, the future inventor took a few years to venture into food production; he worked at Eastman Kodak creating chemical developers for color film, as well as at an agricultural lab. He then went to work at General Foods in 1941, contributing to the war effort by creating a tapioca substitute for soldier rations. (Overseas, GIs nicknamed the gelatin and starch blend “Mitchell’s Mud.”) The postwar years saw Mitchell churn out a few flops, like carbonated ice, as well as now-iconic hits. In 1956, his quest to create a self-carbonating soda led to the accidental invention of Pop Rocks. A year later, he developed Tang Flavor Crystals, which skyrocketed to popularity after NASA used the powder in space to remedy astronauts’ metallic-tasting water. And by the time he retired from General Foods in 1976, Mitchell had developed a quick-set gelatin, powdered egg whites, and a whipped cream alternative — the beloved Cool Whip that now dominates grocery store freezers.
Pop Rocks were originally named Gasified Confection.
Pop Rocks were briefly discontinued because of safety concerns stemming from a notorious urban legend.
Pop Rocks are known as a totally rad treat of the 1980s, but the candy’s first release in the 1970s was a dud. General Foods initially released the candy in 1975, hoping to capitalize on its innovative appeal. But soon after the confection hit stores, rumors began to spread that it was dangerous, even deadly — supposedly, the carbon dioxide that caused the miniature explosions could mix with carbonated soda and cause children’s stomachs to explode. General Foods and inventor William Mitchell tried to combat the unfounded stories with newspaper ads, a telephone hotline, and letters sent to 50,000 school principals around the U.S. But amid persistent rumors and slumping sales, General Foods stopped marketing the candy and sold the brand to Kraft in 1985, which marketed it as “Action Candy” — though today’s sweet tooths can once again find the candy under its original name.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
Should you ever come across an Academy Award on eBay, there’s a good chance it shouldn’t be there. That’s because Oscar winners aren’t allowed to sell their statuettes without first offering them back to the Academy of Motion Picture Arts and Sciences for the nominal fee of $1, which is meant to maintain their prestige. As the Academy’s official regulations explain, honorees “have no rights whatsoever in the Academy copyright or goodwill in the Oscar statuette or in its trademark and service mark registrations” and “shall not sell or otherwise dispose of the Oscar statuette, nor permit it to be sold or disposed of by operation of law” before first giving the Academy the chance to buy it back. Presumably, the Academy always accepts that $1 offer in order to protect the brand, though it’s not clear how often, if ever, it’s actually happened.
Both “The Godfather Part II” (1974) and “The Lord of the Rings: The Return of the King” (2003) won Best Picture. Just as impressive, all three films in both trilogies were nominated for Best Picture — and the original “Godfather” won it as well.
The rule is strictly enforced, with winners having to sign a contract before taking possession of their statuette. It also applies to their family members and descendants. Not everyone has abided by it, however. To take just one example: The trophy awarded to art director Joseph C. Wright, who won for his work on 1942’s “My Gal Sal,” was sold to an auction house for $79,200 in 2015. This led to the Academy winning a lawsuit enforcing the rule — and likely discouraging any future honorees from trying to break it.
The first person presented with an Oscar was German actor Emil Jannings.
The Oscars weren’t televised until 1953.
For nearly 25 years, you had to be in the room to truly know what went down at the Oscars. That changed on March 19, 1953, when NBC aired the ceremony live from the RKO Pantages Theatre in Hollywood. Bob Hope hosted, something he went on to do a record-setting 19 times. Prior to that, the Academy Awards were broadcast on the radio — except for the first ceremony, a private affair that lasted just 15 minutes and was exceptionally undramatic, given the fact that the winners had been announced several months earlier.
Michael Nordine
Staff Writer
Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.
At 3:45 p.m. on September 9, 1947, a computer programmer working on the Mark II at Harvard recorded in a logbook that the team had discovered the “first actual case of [a] bug being found.” But the programmer wasn’t referring to some poorly written lines of code — this was an actual bug. A moth, to be exact, which had flown into a room where the Mark II, one of the world’s first computers, was housed at the university. Attracted by the warmth of the 25-ton machine, the winged creature met its end in one of the many electromagnetic relay contacts. The team removed the moth with tweezers.
Nikola Tesla was the first to coin the engineering term “bug.”
American inventor Thomas Edison made several references to “bugs” in his notebooks in the mid-1870s, defining them as “bug — as such little faults and difficulties are called.” By 1889, newspapers reported on how Edison was hard at work fixing a “bug” in his phonograph.
While this event is often mistakenly cited as the birth of the programming term “bug” to mean a flaw or imperfection, the word had actually been used in engineering circles for over half a century. But the 1947 moth misadventure was popularized by Grace Hopper, a mathematician and computer science pioneer who worked with the team as they “debugged” the Mark II. Early computers such as Harvard’s Mark series did give rise to other modern programming lingo, though: For example, a “patch” comes from the punched cards used in early machines that programmers physically “patched” with tape to fix errors. Today, the original Mark II logbook — with the original “bug” taped to it — is at the Smithsonian’s National Museum of American History.
The word “bug” likely first appeared in an early English translation of the Bible.
“Spam” took on the additional meaning of junk email thanks to a sketch by the British comedy troupe Monty Python.
The sketch begins with a simple request: A couple in a diner wants to order food. Unfortunately, the proprietor of the establishment serves a very Spam-heavy menu, including “Spam Spam Spam Spam Spam Spam Spam baked beans Spam Spam Spam and Spam.” Originally airing on Monty Python’s Flying Circus in 1970, the sketch later became associated with annoying floods of data, ads, or massive amounts of useless text. The word likely first appeared online in late 1980s MUDs (multi-user dungeons), where users could “spam the database” by using a program to create lots of objects in the shared digital space, among other pesky, repetitive behaviors. By 1990, archived MUD chats show that the use of the term “spam,” along with its sketch comedy origins, had been officially established.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
Today Leonardo da Vinci’s “Mona Lisa” is probably the most famous painting in the world — and it deserves the accolade. Painted between 1503 and 1519, this portrait (commonly believed to be of Italian noblewoman Lisa del Giocondo) reflects the Renaissance polymath’s deep understanding of his art form and has been analyzed in depth for decades, if not more. Although certain sectors of the art world regarded the portrait as a masterwork by the 1860s, the general public knew little about it until the 20th century. Then, the unthinkable happened — the “Mona Lisa” was stolen.
The “Mona Lisa” once hung in Napoleon Bonaparte’s bedroom.
First displayed by a Leonardo da Vinci patron, French King François I, the “Mona Lisa” was a mainstay of royal residences but rode out the French Revolution (at the end of the 18th century) in a warehouse. Later, it hung in Napoleon’s bedroom in Paris’ Tuileries Palace for four years.
In the early morning hours of August 21, 1911, after spending the night hiding in an art-supply closet in the Louvre, three Italian “handymen” crept over to the “Mona Lisa,” unhooked it from its protected spot, tossed a blanket over their pilfered prize, and snuck away undetected, boarding a train at the Quai d’Orsay station at 7:47 a.m. The theft became an international scandal, and newspapers around the world ran stories about the more than two-year-long search for the missing masterpiece. Finally, in December 1913, the painting was found in Florence, Italy, after an attempted sale by the heist’s ringleader, Vincenzo Peruggia — who had actually worked at the Louvre for a time, installing glass cases over the paintings. The treasure then went on a tour of Italy until it returned to the famous French museum in early 1914. Although the “Mona Lisa” and her mischievous smile survived unharmed, the painting’s reputation had changed forever, with the many headlines about the theft making her a household name that has endured to this day.
The hazy background of the “Mona Lisa” is a specific painting style known as sfumato.
Edvard Munch’s “The Scream” was also stolen… twice.
No painting captures existential dread quite like Edvard Munch’s “The Scream.” Created in 1893, Munch’s masterpiece depicts a ghostly figure, not mid-scream as many assume, but instead hearing “the great scream throughout nature,” according to the artist’s own inscription on a lithograph edition of the work. The painting is so famous that it’s one of the few works of art to receive the rare honor of its own emoji. Of course, popularity can also inspire the wrong kind of attention, and in February 1994, on the opening day of the Winter Olympics in nearby Lillehammer, Norway, two thieves stole Munch’s masterwork from Oslo’s National Gallery. The burglars left behind only a brief note: “Thousand thanks for the poor security.” Fortunately, the painting was recovered — identified as genuine thanks to a splash of candle wax on its front — three months later in Åsgårdstrand, Norway, a town where Munch lived and worked for years. Then, in 2004, another version of “The Scream” (Munch painted several) was stolen from the Munch Museum in Oslo; it was recovered two years later. As happened with the “Mona Lisa,” these thefts — though terrible crimes — only added to the painting’s international renown.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
With an ever-expanding catalogue of more than 171 million items occupying 838 miles of bookshelves, the Library of Congress is the largest library on the planet as measured by collection size. Its prodigious holdings include more than 40 million books and other print materials, 74 million manuscripts, and the largest collection of rare books in North America. “Old King Cole,” which is about a millimeter tall (tinier than a grain of rice), is the library’s smallest book, while a 5-by-7-foot collection of photos of Bhutan is the largest.
The Library of Congress is the country’s oldest cultural institution.
Founded in 1800, the library predates every other federal cultural institution in the U.S. — it’s so old, in fact, that it was brought into existence by the same bill that relocated the capital to Washington, D.C., from Philadelphia.
The library doesn’t just house printed materials, of course. It contains everything from the contents of Abraham Lincoln’s pockets on the night he was assassinated to hundreds of billions of tweets and Amelia Earhart’s palm print. The British Library, with its massive catalogue, is next on the list of the world’s largest libraries, with the top five rounded out by the New York Public Library, Library and Archives Canada, and the Russian State Library.
The Library of Congress was first proposed by James Madison.
The original library was burned down in the War of 1812.
The Library of Congress was comparatively tiny for the first 14 years of its existence, but that didn’t make it any less tragic when its collection of 3,000 books was destroyed along with the Capitol building on August 24, 1814. The conflagration that took it down, part of the War of 1812, necessitated a new location. Enter Thomas Jefferson, who offered his own collection of 6,487 books (then the largest personal library in the nation) as a replacement for the lost volumes. Though he didn’t do so for free — Congress paid him $23,950 — Jefferson did provide the foundation for what the library would eventually become. Sadly, a second fire destroyed most of his contribution as well as nearly two-thirds of the entire collection on Christmas Eve 1851, but the institution rose from the ashes once again.
Michael Nordine
Staff Writer
Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.
“If it ain’t broke, don’t fix it” is a motto that works well for Rome. Because of the incredibly advanced craftsmanship of ancient Rome’s architects, as well as their remarkably long-lasting building materials (more on that below), many of the ancient empire’s most marvelous construction projects can still be seen by millions of tourists today — the Colosseum alone draws some 6 million visitors each year. However, the most amazing engineering achievements might be Rome’s eye-catching aqueducts, one of which still supplies the city with water millennia after it was built.
Rome has more water fountains than any other city in the world.
As befits Rome’s millennia-long history of being at the forefront of water engineering, the Italian capital still boasts more fountains than any other city in the world. Although estimates for the number of fountains run as high as 3,000 and beyond, many are no longer in use.
While the Romans didn’t invent the aqueduct — primitive irrigation systems can be found in Egyptian, Assyrian, and Babylonian history — Roman architects perfected the idea. In 312 BCE, the famed Roman leader Appius Claudius Caecus erected the first aqueduct, the Aqua Appia, which brought water to the growing population of the Roman Republic. Today, the Acqua Vergine — first built during the reign of Emperor Augustus in 19 BCE as the Aqua Virgo — still supplies Rome with water more than 2,000 years after its construction (though it’s been through several restorations).
The main reason for the aqueduct’s longevity, along with that of many of Rome’s ancient buildings, is its near-miraculous recipe for concrete. An analysis by the Massachusetts Institute of Technology discovered that Roman concrete could essentially self-heal due to its lime clasts (small mineral chunks) and a process known as “hot mixing” (mixing in the lime at extremely high temperatures). Today, researchers are studying how the material functioned in the hopes of applying secrets from the “Eternal City” to today’s building materials.
The famous Trevi Fountain is one of the end points of the Acqua Vergine aqueduct.
New York’s Croton Aqueduct, built in 1842, was based on ancient Roman engineering.
The fall of Rome in the fifth century coincided with a decline in sanitary conditions in many of the world’s cities. By the 18th and 19th centuries, disease ran rampant due to poor sanitation and water management. One of the first aqueducts in the U.S. was the Croton Aqueduct, designed by engineer John B. Jervis, which provided fresh water for the growing metropolis of New York City. Although ancient Rome’s last aqueduct had been built some 1,600 years prior, Jervis based his design on those impressive examples of Roman engineering, and his aqueduct similarly used simple gravity to carry water 41 miles from the Croton River to reservoirs in Manhattan. Upon its completion in 1842, the aqueduct drastically improved health and hygiene in New York City and continued providing the booming metropolis with fresh water until it was decommissioned in 1955.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
Compared to dinosaurs, humans have occupied only a speck on the timeline of Earth’s history. Modern humans appeared on the stage 200,000 years ago (up to 7 million years ago if you include the whole human family), while dinosaurs roamed the globe for about 165 million years. Despite that reign stretching across three distinct geologic periods (Triassic, Jurassic, and Cretaceous), many people view the “Age of the Dinosaurs” as a monolithic moment in history when all the dinosaurs lived together. In fact, more time separates stegosaurus and Tyrannosaurus rex than separates modern humans from “the King of the Dinosaurs.”
It’s not just state flowers and birds — some states also have fossils to represent them. For example, Colorado’s state fossil is a stegosaurus, Kansas’ is a pteranodon, and Utah’s is an allosaurus. Other states have mammal fossils, like mastodons, whales, and saber-toothed cats.
Stegosaurus roamed what’s now North America during the late Jurassic period, about 155 million to 145 million years ago. Although it didn’t live alongside the ferocious T. rex, its contemporary, the allosaurus, was also a nightmare of powerful teeth. T. rex didn’t arrive on the scene until some 68 million years ago, during the late Cretaceous — a gap of roughly 80 million years. So while a comfortable 66 million years separate humans from the dinosaurs’ dramatic, likely asteroid-induced downfall, the stegosaurus and T. rex lived even farther apart. This startling fact doesn’t even take into account Triassic dinosaurs, such as herrerasaurus and eoraptor, which are twice as chronologically distant from the T. rex as stegosaurus is. Turns out, the “Age of the Dinosaurs” is much more complex than its name suggests.
The T. rex roar in “Jurassic Park” was a composite of sounds from a tiger, alligator, and baby elephant.
Scientists aren’t sure why stegosauruses had plates.
The word “stegosaurus” is Greek for “roof lizard,” a reference to the giant dino’s most recognizable feature — the series of plates that runs nearly the length of its body. In the dino world, these plates are as iconic as a triceratops’s triple horns or a T. rex’s small (but surprisingly strong) arms, yet scientists still don’t really know why this icon of the late Jurassic had them. Instead, we’re left with several theories that could help explain this fossilized mystery. One idea is that the plates were a sexual characteristic, with bigger, pointier plates considered more attractive. Another theory suggests the plates helped regulate temperature, as they could soak up heat during the day and then dissipate it at night. Other scientists argue the plates might have been used to communicate, intimidate, or defend stegosaurus against carnivorous predators. Whatever their purpose, these mysterious plates have made the stegosaurus one of the most recognizable dinos in the world.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.