Now, more than ever, it is important to read the news.
With access to thousands of news sites and billions of opinions at our fingertips, we consume media in almost every waking moment of our day. But just reading the news may not be enough; we must also know how to process it.
To become media literate, we first need to understand what media literacy means:
“[Media Literacy is] the ability to access, analyze, evaluate, create, and act using all forms of communication,” according to the National Association for Media Literacy Education.
Knowing how to identify media and categorize it as true, false, biased, or unbiased is crucial in an age when opinions are constantly hurled at us from every platform.
The rapid growth of artificial intelligence (AI) technology makes staying knowledgeable about current events, news, culture, and society even more difficult.
AI is altering what we see online and developing at breakneck speed; users everywhere are creating videos, photos, and other forms of media that are uncannily close to real life, replicating human behavior, appearance, mannerisms, and speech almost perfectly.
Deepfakes, in which a person’s body, face, or voice is manipulated or digitally altered, and auto-generated articles flood our feeds, making it harder to identify credible sources and evaluate information critically. Experiments conducted by Penn State’s College of Information Sciences and Technology found that humans can distinguish AI-generated text only about 53 percent of the time, barely better than the 50 percent expected from random guessing.
As a result, media literacy skills—like recognizing bias, verifying facts, and especially distinguishing AI-generated content from human-created work—have become more essential than ever.
AI-generated images can spread like wildfire in the media, the University of Southern California (USC) Center for Generative AI and Society reported. While some AI-generated images might not be harmful, they are sent, saved, and reposted so quickly that, even if they seem false at first glance, the constant circulation can trick us into believing them.
People who lack media literacy skills may be more susceptible to false information. Grandparents, aunts, uncles, parents, and even friends who are out of touch with how AI can manipulate our perception of what is real may be led to believe harmful, hateful, or just blatantly incorrect things. We must hold ourselves accountable, too: no one is immune to falling for a fake AI video, not even Gen Z, born and raised alongside rapidly developing technology.
So how do we adapt to AI’s influence in our media? How do we convince a panicked friend that the convincingly realistic video of extraterrestrials invading a nearby city is not real?
It’s important to think critically. When confronted with a piece of media that seems questionable or counterintuitive, doubt is the first step. Thinking critically allows us to recognize whether the media carries helpful or harmful messages. The reliability of our research depends on the sources we use, social studies teacher Tara Tate said.
“I have a media literacy unit in our civics class [and] what we focus on with the students is the phrase called vertical lateral reading,” Tate said. “[This] means that if you see something on the internet, the first thing you should do is open up another tab and do some research. If other people aren’t talking about it, it’s not true.”
If we educate ourselves on media literacy and adapt to our current technology, we can sharpen our critical thinking and learn to differentiate between real and fake. A loss of literacy is a loss of intelligence. Mass misinformation generated by AI could sow chaos and confusion. That’s why it is so important for us to gain the ability to question, to pause, and to think critically.
Human intelligence, used to write, create, and investigate, must always be valued first.
