In the early days of the internet, settling on the perfect username sometimes required finding the right niche email service — like the first G-mail, which gave cat lovers the ability to show off their feline fervor. Those first email accounts weren’t handled by Google, owner of today’s incredibly popular Gmail service; instead, they were run by the studio behind the Garfield comic strip. Paws, Inc. — owned by Garfield creator Jim Davis — launched “Garfield’s G-mail” around 1997, though internet historians have few details to go on about its origins or eventual demise. What is known is that the service allowed users to sign up for their own email address that ended with “@catsrule.garfield.com.” G-mail was, after all, marketed as “email with cattitude.”
“Garfield” cartoonist Jim Davis was inspired by the “Peanuts” comics.
Davis drew inspiration from Snoopy when creating his Garfield character. Garfield was a hit with readers, though it’s believed “Peanuts” creator Charles Schulz was not a fan.
Internet lore suggests the original G-mail was shut down when Google’s Gmail emerged, though online sleuths say that’s unlikely, considering that Google didn’t launch its email service until 2004, and Paws, Inc., moved its email service to the “@e-garfield” domain around 2001. Plus, Paws, Inc., never used the “@gmail” domain name. It’s more likely the digital mailboxes were eventually shuttered once interest died off, as happened with many now-outdated remnants of the internet’s past. Garfield comics, however, have remained popular with cartoon enthusiasts, and a new animated film hit theaters in 2024, returning the fictional tabby cat to the screen for the first time in 15 years.
Today, “Google” is both a noun and a verb, but at one time, the tech giant’s name was simply a typo. In 1997, Google co-founder Larry Page and fellow Stanford student Sean Anderson were brainstorming names for a data-indexing website when the name emerged. Anderson initially suggested “googolplex” (one of the largest named numbers), which was then shortened to “googol.” Anderson went online to see if the term was available to purchase as a web domain, but misspelled the word, typing “google” instead. The name stuck: Google.com was registered as a domain in September 1997, and its search engine debuted a year later. But building Google’s more popular services would take some time — the company wouldn’t release its email accounts for six more years, and at first only by invitation. Eventually, of course, Gmail grew into the digital mainstay it is today.
Nicole Garner Meeker
Writer
Nicole Garner Meeker is a writer and editor based in St. Louis. Her history, nature, and food stories have also appeared at Mental Floss and Better Report.
top picks from the Inbox Studio network
Interesting Facts is part of Inbox Studio, an email-first media company. *Indicates a third-party property.
In early 1901, English inventor Hubert Cecil Booth traveled to Empire Music Hall in London to witness a strange invention — a mechanical aspirator designed to blow pressurized air to clean rail cars. Booth later asked the demonstrator why the machine (invented by an American in St. Louis) didn’t simply suck up the dust rather than blow it around. “He became heated,” Booth later wrote, “remarking that sucking out dust was impossible.” Unconvinced, Booth set about creating such a contraption, and later that same year he filed a patent for a vacuum machine he named the “Puffing Billy.”
The Hoover vacuum is named after President Herbert Hoover.
Herbert Hoover’s presidency (1929 to 1933) arrived decades after the debut of the Hoover vacuum company, named after Ohio businessman William H. Hoover. The vacuum mogul has no relation to the nation’s 31st president.
This machine wasn’t quite as fancy as modern DustBusters, Dirt Devils, Hoovers, or Dysons. Instead, the Puffing Billy was red, gasoline-powered, extremely loud, and big — really big. So big, in fact, that the machine needed to be pulled by horses when Booth’s British Vacuum Cleaner Company made house calls. Once outside a residence, 82-foot-long hoses snaked from the machine through open windows. Because turn-of-the-century carpet cleaning wasn’t cheap, Booth’s customers were often members of British high society; one of his first jobs was to clean Westminster Abbey’s carpet ahead of Edward VII’s coronation in 1902. By 1906, Booth had created a more portable version of the Puffing Billy, and two years later, the Electric Suction Sweeper Company (later renamed Hoover) released the “Model O,” the first commercially successful vacuum in the United States.
The world’s largest vacuum chamber is located at a NASA facility in the U.S. state of Ohio.
Engineers in the 19th century used horses to power boats.
Although animal-powered boats trace their origins back to Roman times, team boats (also known as “horse boats” or “horse ferries”) became especially popular in the 19th-century United States. Horses walked either in a circle or in place to turn wheels that propelled the boat forward. The first commercially operated animal-powered boat in the U.S. plied the waters of the Delaware River around 1791. Well suited to journeys of only a few miles, horse boats soon worked the waters of Lake Champlain and the Hudson River before eventually spreading to the Ohio and Mississippi rivers and the Great Lakes. By the 1850s, these horse-powered creations had largely been replaced by paddle steamers — the beginning of the horse’s decades-long slide from supremacy to irrelevancy, at least when it comes to transportation.
Darren Orf
Writer
Darren Orf lives in Portland, has a cat, and writes about all things science and climate. You can find his previous work at Popular Mechanics, Inverse, Gizmodo, and Paste, among others.
Food and drink often taste different on an airplane — usually blander. But ginger ale maintains a crisp, dry flavor that many travelers say is even better when enjoyed in the air. It all has to do with the way cabin conditions affect our taste buds. Humidity levels inside an airplane cabin generally hover around just 20%, though they can dip even lower. This dryness — combined with low cabin pressure — reduces oxygen saturation in the blood, which in turn lessens the effectiveness of some taste receptors.
Though it’s primarily enjoyed as a cold soda today, Dr Pepper was marketed as a hot drink from the late 1950s into the 1970s. Seasonal ads ran during winter to increase sales, and consumers were told to heat Dr Pepper to 180 degrees, pour it over a thin slice of lemon, and enjoy.
A 2010 study commissioned by German airline Lufthansa found that typical cabin conditions inhibit our taste buds’ ability to process salty flavors by as much as 30% and sweet flavors by as much as 20%. And a 2015 study suggests that loud cabin noise affects the chorda tympani, a branch of the facial nerve involved in taste, which also lessens the intensity of sweet-tasting fare.
In the case of ginger ale specifically, passengers typically report that it tastes less sweet than normal in the air. However, while our taste buds may not be able to sense the sugar, the beverage still possesses a sharp, extra-dry flavor, which is often thought to feel more refreshing than ginger ale on the ground. The crispness comes from the slightly spicy nature of ginger flavoring. It makes ginger ale an especially popular beverage aboard planes, and many travel guides recommend ordering the drink in flight for its unique flavor.
The five basic tastes are sweet, salty, sour, bitter, and umami.
The first in-flight meals were sold on a 1919 flight from London to Paris.
When the first scheduled commercial flights began in 1914, they lacked many modern amenities, including in-flight meals, which weren’t served until 1919 aboard a Handley Page Transport plane connecting London and Paris. On October 11, the company offered passengers boxed lunches containing sandwiches and fruit, which cost 3 shillings (equal to around $11 today).
In-flight dining made its way to United States airlines by the late 1920s, with Western Air Express helping pioneer the concept. It offered passengers meals containing fried chicken, fruit, and cake on flights between Los Angeles and San Francisco, though they were unheated and prepped prior to departure. In 1936, United Airlines became the first major airline to install galleys and ovens on planes, allowing crews to heat meals in flight for the first time.
Bennett Kleinman
Staff Writer
Bennett Kleinman is a New York City-based staff writer for Inbox Studio, and previously contributed to television programs such as "Late Show With David Letterman" and "Impractical Jokers." Bennett is also a devoted New York Yankees and New Jersey Devils fan, and thinks plain seltzer is the best drink ever invented.
When George Washington died in 1799, Congress could think of no better way to honor the first president than by laying him to rest in the U.S. Capitol. The building had been under construction since Washington himself laid the cornerstone in 1793, and plans were quickly approved to add a burial chamber two stories below the rotunda with a 10-foot marble statue of Washington above the tomb. Visitors would be able to view the grave via a circular opening in the center of the rotunda floor. There was just one problem: Washington had already designated his Mount Vernon estate to be his final resting place, meaning neither he nor anyone else is actually buried in what’s still called the Capitol Crypt.
George Washington won both of his presidential elections unanimously.
He ran essentially unopposed in both 1788 and 1792, thereby winning every available electoral vote — 69 the first time, 132 the second — in each election.
This crypt, finally completed in 1827, has gone by a few different names over the years. Architect William Thornton’s 1797 plan labeled the space the “Grand Vestibule”; architect Benjamin Henry Latrobe’s 1806 plan referred to it as the “General Vestibule to all the Offices”; and an 1824 report of the Commissioners of Public Buildings simply called it the “lower rotundo.” For those who’d like to see the crypt today, it’s included in most tours of the Capitol.
The Capitol Crypt’s sandstone floor was sourced from a quarry in Seneca, Maryland.
The U.S. Capitol was burned down in the War of 1812.
The United States has engaged in many international conflicts, most of which haven’t been fought on the country’s own soil. One exception to this is the War of 1812, a kind of sequel to the Revolutionary War in which the U.S. once again went to battle against its British frenemies across the pond.
Mostly spurred by violations of maritime rights, the war reached a retaliatory pitch when, in response to American troops burning the Canadian capital, York (now Toronto), British troops made their way to D.C. and burned everything they could — including the Capitol. This happened on August 24, 1814, a day that also saw the White House set ablaze. Restoration began immediately, and though the Library of Congress’ 3,000-volume collection was ultimately lost, the Capitol was rebuilt and a new library was begun with the purchase of Thomas Jefferson’s personal collection of 6,487 books.
Michael Nordine
Staff Writer
Michael Nordine is a writer and editor living in Denver. A native Angeleno, he has two cats and wishes he had more.
In 1851, German physician Carl Wunderlich conducted a thorough experiment to determine the average human body temperature. In the city of Leipzig, Wunderlich stuck a foot-long thermometer inside 25,000 different human armpits, and discovered temperatures ranging from 97.2 to 99.5 degrees Fahrenheit. The average of those temperatures was the well-known 98.6 degrees — aka the number you hoped to convincingly exceed when you were too “sick” to go to school as a kid. For more than a century, physicians as well as parents have stuck with that number, but in the past few decades, experts have started questioning if 98.6 degrees is really the benchmark for a healthy internal human temperature.
Primates have the highest known internal body temperature.
Although primates (such as Homo sapiens) are warm-blooded creatures, birds have some of the highest internal body temperatures in the animal kingdom. Some hummingbirds, for example, have body temperatures as high as 112 degrees Fahrenheit.
For one thing, many factors can impact a person’s temperature. The time of day, where the temperature was taken (skin, mouth, etc.), whether the person ate recently, and their age, height, and weight can all impact the mercury. Furthermore, Wunderlich’s equipment and calibrations might not pass scientific scrutiny today. Plus, some experts think humans are getting a little colder, possibly because of our overall healthier lives. Access to anti-inflammatory medication, better care for infections, and even better dental care may help keep our body temperatures lower than those of our 19th-century ancestors.
In 1992, the first study to question Wunderlich’s findings found a baseline body temperature closer to 98.2 degrees. A 2023 study refined that further and arrived at around 97.9 degrees (though oral measurements were as low as 97.5). However, the truth is that body temperature is not a one-size-fits-all situation. For the best results, try to determine your own baseline body temperature and work with that. We’re sure Wunderlich won’t mind.
Many mammals — from the humble ground squirrel to the majestic grizzly — practice some form of hibernation, slowing down certain bodily functions to survive winters. Naturally, that raises a question: Humans are mammals, so can we hibernate? While the situation is slightly more complicated than it is for a pint-sized rodent, the answer is yes … with caveats.

The main component of hibernation is lowering body temperature. When this occurs, the body kicks into a low metabolic rate that resembles a state of torpor, a kind of extreme sluggishness in which animals require little to no food. Because a large share of our calories is burned keeping our bodies warm, de-prioritizing that requirement would essentially send humans into hibernation — but this is where it gets tricky for Homo sapiens. First, humans don’t store food in their bodies the way bears do, so we’d still need to be fed intravenously; second, sedatives would be needed to keep us from shivering (and burning energy). In other words, it would be a medically induced hibernation, but hibernation nonetheless.

A NASA project from 2014 looked into the possibility of achieving this kind of hibernation for long-duration space travel, and while the findings weren’t put into practice, there were no red flags suggesting a biological impossibility. Today, NASA continues its deep sleep work by gathering data on the hibernating prowess of Arctic ground squirrels.
Darren Orf
Writer
Winter, spring, summer, fall — outside of the tropics and the planet’s poles, most temperate areas of the globe experience the four seasons to some extent, although how we choose to view those weather changes can differ from country to country. Take, for example, ancient Japan’s calendar, which broke the year into 72 microseasons, each lasting less than a week and poetically in tune with nature’s slow shifts throughout the year.
There are two ways to celebrate the start of a new season. Astronomical seasons are marked on the solstices and equinoxes, which fluctuate annually. For consistent data collection, scientists prefer meteorological seasons (for example, meteorological fall starts on September 1).
Japan’s microseasons stem from ancient China’s lunisolar calendar, which tracked the sun’s position in the sky along with the moon’s phases for agricultural purposes. Adopted in Japan in the sixth century, the lunisolar calendar broke each season into six major divisions of 15 days, called sekki, which synced with astronomical events. Each sekki was further divided into three ko, the five-day microseasons named for natural changes experienced at that time. Descriptive and short, the 72 microseasons can be interpreted as profoundly poetic, with names like “last frost, rice seedlings grow” (April 25 to 29), “rotten grass becomes fireflies” (June 11 to 15), and “crickets chirp around the door” (October 18 to 22).
In 1685, Japanese court astronomer Shibukawa Shunkai revised an earlier version of the calendar with these names to more accurately and descriptively reflect Japan’s weather. And while climate change may affect the accuracy of each miniature season moving forward, many observers of the nature-oriented calendar find it remains one small way to slow down and notice shifts in the natural world, little by little.
Nearly 80% of Japan’s land is covered in mountains.
Japan recently added more than 7,200 new islands to its territory.
Japan’s archipelago has four major islands and thousands of smaller ones, though a recount in 2023 found far more islets than previously known. The island nation had recognized 6,852 islands in its territory since 1987, but advancements in survey technology revealed many more — a staggering 14,125 isles in total. In the 1980s, Japan’s Coast Guard relied on paper maps to count islands at least 100 meters (328 feet) in circumference, and surveyors now realize many small landmasses were mistakenly grouped together, artificially lowering the earlier total. Today, the country’s Geospatial Information Authority uses digital surveys to map the chain’s smaller islands for a more accurate count, while also looking for new islands created by underwater volcanoes. However, only about 400 of Japan’s islands are inhabited; the rest remain undeveloped due to their size, rugged terrain, and intense weather conditions.
Nicole Garner Meeker
Writer
Tiny, hidden survival tools packed into the waistband of your pants may sound like something fantastical from a spy movie, but in the case of British wartime pilots, they were a reality. During World War II, the Royal Air Force sent its aviators skyward with all the tools they’d need to complete a mission, along with a few that could help them find their way home if they crash-landed behind enemy lines. One of the smallest pieces of survival gear pilots carried was a compass built into the button of their trousers.
Britain was the first country to create an air force.
Established in April 1918, seven months before the end of World War I, Great Britain’s Royal Air Force was the first of its kind. The sky-patrolling force launched with 300,000 service members nearly three decades before the U.S. designated its own Air Force branch in 1947.
Three months after entering World War II, the British military launched its MI9 division, a secret intelligence department tasked with helping service members evade enemy forces or escape capture. Between 1939 and 1945, masterminds at MI9 created a variety of intelligence-gathering and survival tools for troops, such as uniform-camouflaging dye shaped like candy almonds, ultra-compressed medications packed inside pens, and button compasses. The discreet navigational tools were typically made from two buttons, the bottom one featuring a tiny spike. When balanced on that spike, the top button acted as a compass that aligned with the Earth’s magnetic poles; two dots of luminous paint on the metal indicated north, and one indicated south.
MI9 distributed more than 2.3 million of its button compasses during the war. They could be paired with secret maps smuggled to captured service members inside care packages delivered to prisoner-of-war camps. Often printed on silk for durability and waterproofing, the 44 different maps (sent to different camps based on location) were tucked discreetly into boot heels and board games. The ingenuity worked — by the war’s end, MI9 was credited with helping more than 35,000 Allied soldiers escape and make their way home.
Historians believe the first magnetic compasses were invented in China.
Some American colonists were banned from wearing fancy buttons.
Buttons can be an innocuous way to add panache to a piece of clothing … unless you were a colonist living in Massachusetts during the 17th century, that is. Choosing the wrong type of buttons for your garment could land you in court and cost you a fine. Puritans in Massachusetts during the 1600s were governed by a series of sumptuary laws — legal codes that restricted how people dressed and interacted with society on moral or religious grounds. Massachusetts passed its first sumptuary law in 1634, prohibiting long hair and “new fashions” (aka overly swanky clothes), and five years later even banned short-sleeved garments. By 1652, the colony further restricted lower-wage earners from wearing “extravagant” accessories such as silks, fine laces, and gold or silver buttons, unless they had an estate valued at more than 200 British pounds — more than $38,000 in today’s dollars. However, the law did include some loopholes: Magistrates, public officers, and militia members (and their families) were free to choose buttons and other adornments without fear of penalty, as were the formerly wealthy and those with advanced education.
Nicole Garner Meeker
Writer
Uniforms convey a sense of competency across professions ranging from delivery person and airline staff to chef and firefighter. The psychological implications may be even stronger when it comes to matters of health: According to one study published in the medical journal BMJ Open, doctors who don the traditional white coat are perceived as more trustworthy, knowledgeable, and approachable than those who attend to patients in scrubs or casual business wear.
Physicians at the acclaimed Mayo Clinic do not wear white coats.
The Mayo Clinic's founders felt that the white coat would create an unnecessary separation between doctor and patient, and thus established a tradition in which physicians wear business attire to demonstrate respect for the people they serve.
The 2015-16 study drew from a questionnaire presented to more than 4,000 patients across 10 U.S. academic medical centers. Asked to rate their impressions of doctors pictured in various modes of dress, participants delivered answers that varied depending on their age and the context of proposed medical care. For example, patients preferred their doctors to wear a white coat atop formal attire in a physician's office, but favored scrubs in an emergency or surgical setting. Additionally, younger respondents were generally more accepting of scrubs in a hospital environment. Regardless, the presence of the white coat rated highly across the board — seemingly a clear signal to medical professionals on how to inspire maximum comfort and confidence from their patients.
Yet the issue of appropriate dress for doctors isn't as cut and dried as it seems, as decades of research have shown that those empowering white coats are more likely to harbor microbes that could be problematic in a health care setting. In part that’s because the garments are long-sleeved, which offers more surface area for microbes to gather — a problem that’s compounded because the coats are generally washed less often than other types of clothing. Although no definitive link between the long-sleeved coats and actual higher rates of pathogen transmission has been established, some programs, including the VCU School of Medicine in Virginia, have embraced a bare-below-the-elbows (BBE) dress code to minimize such problems. Clothes may make the man (or woman), but when it comes to patient safety, the general public may want to reassess their idea of how our health care saviors should appear.
The term for anxiety-induced high blood-pressure readings in a doctor's office is “white coat syndrome.”
Western doctors dressed in black until the late 1800s.
If the idea of a physician or surgeon wearing black seems a little morbid, well, that may have been part of the point in the 19th century. After all, the medical field had more than its share of undertrained practitioners who relied on sketchy procedures such as bloodletting, and even the work of a competent doctor could lead to lethal complications. However, Joseph Lister’s introduction of antisepsis in the 1860s dramatically cut the mortality rate for surgical patients, and with it, the perception of the possibilities of medicine underwent a major shift. While black had once been worn to denote seriousness, doctors began wearing white lab coats like scientists to demonstrate their devotion to science-based methodology, a sartorial presentation that also reflected an association with cleanliness and purity. By the turn of the century, the image of the black-clad physician was largely consigned to the remnants of an unenlightened age.
Interesting Facts
Editorial
Interesting Facts writers have been seen in Popular Mechanics, Mental Floss, A+E Networks, and more. They’re fascinated by history, science, food, culture, and the world around them.
What do you call someone who’s fallen for a prank? There’s no punchline here — in most English-speaking places, you’d probably just call them gullible. But in France, you might use the term poisson d’avril, meaning “April fish.” The centuries-old name is linked to a 1508 poem by Renaissance composer and writer Eloy d’Amerval, who used the phrase to describe the springtime spawn of fish as the easiest to catch; young and hungry April fish were considered more susceptible to hooks than older fish swimming around at other times of year. Today, celebrating “April fish” in France — as well as Belgium, Canada, and Italy — is akin to April Fools’ Day elsewhere, complete with pranks; one popular form of foolery includes taping paper fish on the backs of the unsuspecting.
April Fools’ celebrations last two days in Scotland.
In Scotland, April Fools’ is also called “Huntegowk,” a day when “gowks” (the Scottish word for cuckoo birds) are sent on phony errands. The second day of celebrations, called Tailie Day, is a bit more mischievous: Tails or “kick me” notes are placed on people’s backsides.
While the first reference to poisson d’avril comes from d’Amerval’s poem, historians aren’t sure just how old the April Fools’ holiday is. It’s often linked to Hilaria, a festival celebrated by the ancient Romans and held at the end of March to commemorate the resurrection of the god Attis. However, many historians believe that while Hilaria participants would disguise themselves and imitate others, there’s little evidence that the festival is the predecessor of April Fools’. Other theories suggest that April 1 trickery stems from the switch to the Gregorian calendar. One such explanation dates to 1564, the year French King Charles IX moved to standardize January 1 as the start of the new year, which had often been celebrated on Christmas, Easter, or during Holy Week (the seven days before Easter). Despite the royal edict, some French people kept to the Holy Week tradition and celebrated the new year in late March to early April, becoming the first “April fools.”
An 1878 newspaper hoax reported Thomas Edison’s newest invention could turn dirt into food and wine.
The BBC once claimed spaghetti noodles grew on trees.
The most convincing April Fools’ pranks often come from the most unexpected sources, which could be why the BBC has a history of successful hoaxes. This includes a 1957 joke, considered to be one of the first April Fools’ TV pranks, wherein the British broadcaster aired a two-and-a-half-minute segment claiming spaghetti noodles grew on trees in Switzerland. Footage showed Swiss noodle harvesters on ladders collecting noodles and drying them in the sun before dining on a large pasta dinner. While the prank likely would have fallen flat today, spaghetti wasn’t commonly eaten in the U.K. during the 1950s, which meant the dish was entirely unfamiliar to most viewers. But the hoax didn’t just prank viewers. Many BBC staffers were also fooled after being purposefully kept in the dark about the fictitious story — the production brainchild of cameraman Charles de Jaeger and a small crew — and were taken aback by a deluge of callers looking to acquire their own spaghetti trees.
Nicole Garner Meeker
Writer
April Fools’ Day is a time when dubious pranksters run rampant, so it’s wise to be a little more skeptical of everything you see and hear on April 1. But when the clock strikes midnight and the calendar flips over to April 2, we celebrate the polar opposite of trickery and deception: International Fact-Checking Day. This global initiative is spearheaded by the Poynter Institute’s International Fact-Checking Network (IFCN) — a community established in 2015 to fight misinformation and promote factual integrity. IFCN works with more than 170 fact-checking organizations to encourage truth-based public discourse, debunk myths, and ensure journalistic accountability.
Taco Bell once claimed to have purchased the Liberty Bell.
On April 1, 1996, Taco Bell pulled off a legendary April Fools’ prank, claiming it had purchased the Liberty Bell and renamed it the “Taco Liberty Bell.” Full-page ads ran in seven major newspapers, leading to hundreds of Americans calling the National Park Service to express outrage.
The organization held the first official International Fact-Checking Day in 2016 as part of an effort to highlight the importance of fact-checkers’ role in a well-informed society. The occasion has been celebrated on April 2 every year since, standing in stark contrast to the prior day’s foolishness. Celebrants may choose to honor the holiday however they please, be it by promoting trusted sources on social media or by taking advantage of the events and information produced by IFCN. In past years, the organization has held a public webinar to discuss the “State of the Fact-Checkers” report, and published lists of relevant essays written by fact-checkers around the world. Be sure to capitalize on this opportunity to get your facts straight, as April 4 ushers in yet another day of duplicity: National Tell a Lie Day.
A 1957 BBC April Fools’ broadcast pranked viewers into thinking spaghetti grows on trees.
The Spanish-American War was fueled by “yellow journalism.”
In the late 19th century, the American press found itself in the grip of a phenomenon known as “yellow journalism” — a sensationalized form of reporting that prioritized eye-catching headlines over cold, hard facts. These unverified claims sometimes had serious consequences, most notably in the case of the Spanish-American War.
On February 15, 1898, the USS Maine battleship exploded and sank in Havana Harbor in Cuba (a country controlled by Spain at the time). Within days, major newspapers including William Randolph Hearst’s New York Journal and Joseph Pulitzer’s New York World published accusations that Spain was responsible for the sinking, despite a lack of evidence. But the exaggerated headlines still swayed public opinion, fueling a desire to go to war.
Tensions escalated to the point that on April 20, the U.S. Congress issued an ultimatum for Spain to withdraw from Cuba, which Spain declined to do, opting to sever diplomatic ties with the U.S. instead. Spain declared war on the U.S. on April 23, with Congress following suit two days later.
Bennett Kleinman
Staff Writer