Author Summary

This paper explores censorship through social media. By using algorithms to group like-minded individuals, online applications indirectly isolate users by limiting the perspectives they see. Preexisting beliefs are consequently reinforced, leaving little room for intellectual growth. This affects users first, but those users can then perpetuate biased opinions through voting, exacerbating the issue. In this paper, I propose solutions for fostering a more diverse media environment through online interventions that promote fact-checked articles and support a fuller understanding of a topic. With artificial intelligence, interventions can be personalized rather than universal, making them more effective. I further argue that these interventions also reduce the amount of believable fake news in circulation, increasing users' overall media literacy. While also addressing the drawbacks of these solutions, this paper argues for the importance of taking action against biased news in order to protect democratic ideals.

Chichén Itzá, a complex archaeological site on the Yucatán Peninsula that thrived beginning in 600 AD, used advanced mathematics to create the illusion that the feathered serpent Kukulcán, a god to the inhabitants, would appear to descend as the sun set (McLeod, 2018). Concealing the mathematical mechanism behind this illusion from the general populace gave rise to uninformed worship and obedience to the authorities. Subjects in Chichén Itzá were left in the dark and feared unknowingly. The Catholic Church maintained a firm hold over individuals in Europe by establishing the Inquisition, a tribunal for trying cases of heresy, in the 13th century (Hamilton and Peters, 2023), a hold disrupted only when Martin Luther set the Protestant Reformation in motion in the 16th century (Wilkinson, 2022). A more recent example is Stalin's dictatorship, under which censorship in the USSR peaked: he had six million people killed whom he claimed were working against the USSR's agenda, while simultaneously being responsible for the Holodomor, a genocide unknown to the rest of the world during its occurrence (Applebaum, n.d.). These examples illustrate how censorship has been used as a tool by those in power to silence dissenting voices and maintain their authority, often at great human cost.

Censorship has been a concern throughout history and across various cultures. The practice of limiting and suppressing the exchange of information, expression, advancements, and thoughts has been utilized by governments, religious institutions, and officials. One common motive is the aim of maintaining political and social control over a group of people. The pyramid at Chichén Itzá, the Inquisition, and Stalin's rule are all extreme accounts of censorship, difficult to live under and easy to recognize because they were so obvious to the people being controlled. More often, however, the execution of censorship conceals its true extent and impact. This paper focuses on factors that contribute to censorship online, including epistemic and filter bubbles as well as social media algorithms. The paper argues that to effectively mitigate the dissemination of misinformation, it is imperative to establish mechanisms that address and dismantle epistemic bubbles, fostering a climate that allows people to engage in cross-ideological discourse and promoting critical introspection within one's own value framework.

I. Censorship Through Epistemic Bubbles and Echo Chambers

We first need to establish the difference between two types of censorship: direct and indirect. Establishing clear distinctions between direct and indirect censorship is essential for comprehending the extent to which each manifests. Direct censorship is often conspicuous, occurring through overt, government-driven measures or significant events that draw immediate attention. In contrast, indirect censorship operates more subtly, gradually accumulating through language barriers, misunderstandings, and cultural nuances, often disseminating inadvertently and remaining concealed from plain sight.

The former, direct censorship, resembles the more traditional view of what one imagines when picturing isolation. A dictatorship in which discussing ideas that conflict with the political agenda is prohibited is an example of direct censorship. Most historical accounts follow this form of censorship, and it is therefore the more widely known type.

The latter, indirect censorship, is more common but harder to recognize, as it can be found in cases as simple as a person's diction. The word Holocaust, for example, is the German-given term for the genocide, deriving from the Greek word holokauston, which translates to "a burnt sacrifice offered whole to God" ("Filter Bubbles and the Public Use of Reason: Applying Epistemology to the Newsfeed," n.d.). In locations such as Israel and France, the term Sho'ah, a Hebrew word meaning "catastrophe," is preferred. By repeatedly using the term Holocaust, one implicitly spreads the idea that the genocide was a 'deserving sacrifice' rather than a disastrous atrocity, thus creating indirect censorship.

Over time, as the precise historical meaning of the Holocaust became less familiar to the public, subsequent generations continued to employ the term when discussing the genocide, perpetuating a harmful choice of diction. This indirect censorship insidiously contributed to the dissemination of anti-Semitic beliefs, emphasizing the power of language to shape perceptions and ideologies.

Direct censorship can significantly influence how people perceive suppression. When individuals are exposed to only one viewpoint, repeatedly promoting a single approach to understanding complex issues can lead to reliance on the source of those ideas. This dependence makes it more challenging to escape such an environment. Indirect censorship can involve controlling what information is accessible to the public, thereby creating a perception that the perspective being presented is the dominant and prevailing one. In the end, both impair human judgment.

Two renderings emerge from this: epistemic bubbles and echo chambers. Both are forms of isolation from the diversity of beliefs, but to varying extents. An epistemic bubble is more common and can be self-inflicted; it arises when certain relevant voices have been left out (Nguyen, 2018). It is easily formed through personal interests, where a person fills their online media with similar content and is therefore not exposed to opposing views. It can be accidental, but the consequences are significant. By relying on this now-restricted set of online sources for news, users prevent themselves from encountering conflicting ideas from which they could grow, both intellectually and socially.

Epistemic bubbles also foster a misleading sense of certainty, as they reinforce preexisting beliefs without subjecting them to scrutiny, creating a skewed perception of consensus (Nguyen, 2018). Intolerance of alternative viewpoints therefore soars when a person inside an epistemic bubble encounters opposition, creating a more polarized social climate. When individuals or groups encounter perspectives that challenge their own, the inclination to dig deeper into their existing beliefs and resist alternative viewpoints intensifies. This heightened intolerance often translates into greater ideological divisions, making it increasingly challenging for diverse segments of society to find common ground and engage in constructive dialogue. The result is a society marked by greater tension, reduced empathy, and an amplified reluctance to seek compromise or understanding across differing viewpoints.

An echo chamber, however, occurs when relevant voices have been consciously invalidated by an external source, stripping them of their credibility. It works by first isolating members from any outside epistemic source, similar to the behavioral conditioning found in cults. Once trust in the outer world is lost, members become entirely dependent on the internal participants. This dependence erodes epistemic responsibility within each member, as they can no longer evaluate information for themselves (Nguyen, 2018).

Dependence on echo chambers is detrimental for several reasons. Firstly, echo chambers reinforce preexisting beliefs and biases, preventing individuals from engaging in constructive dialogue and expanding their perspectives. This limits personal growth and hinders the development of critical thinking skills. Additionally, in an echo chamber, misinformation and falsehoods can spread unchecked, as dissenting voices are often silenced. This can have severe consequences for society, as false information can lead to misguided decisions and actions. Furthermore, echo chambers can foster a sense of tribalism and division, as individuals within them are more likely to see those outside as enemies rather than potential allies. In a diverse and interconnected world, fostering informed discussions is crucial for societal progress and unity. Dependence on echo chambers only serves to isolate and polarize, ultimately undermining the pursuit of truth and the promotion of a well-informed society.

While both epistemic bubbles and echo chambers take place online, the terms cannot be used interchangeably. An epistemic bubble simply lacks outside connections, which restricts the pursuit of outside knowledge and inflates self-confidence through constant validation. Echo chambers actively prevent the introduction of differing ideas by segregating their members from the outside world.

Becoming indoctrinated is a process often facilitated by exposure to one-sided information, social pressure, and emotional manipulation. This process gradually solidifies a singular perspective while discouraging independent thinking. In today's digital age, epistemic bubbles have become increasingly common and pervasive, affecting a wide range of internet users. Epistemic bubbles can easily form within the context of regular social media use. When users curate a following list that predominantly includes individuals they admire or who share commonalities with them, it creates a bubble of like-minded voices. These shared characteristics can span a wide range, including geography, age, education, income, gender, religion, race, and politics, among many others. As users interact primarily with others who align with their values and beliefs, this illusory unity fosters an environment ripe for intolerance, where genuine diversity of thought and experience goes unaccounted for, ultimately impeding open dialogue and the exploration of alternative viewpoints.

In the realm of epistemic bubbles, confirmation bias plays a significant role, as individuals are more likely to encounter information that confirms their preexisting beliefs while disregarding or dismissing dissenting perspectives. This insular exposure reinforces the individual’s existing convictions and can intensify their allegiance to a particular ideology. The nature of these bubbles poses a significant challenge to promoting open-mindedness, critical thinking, and the ability to engage in constructive discourse with those who hold different views. Recognizing and addressing the prevalence of epistemic bubbles is essential to fostering a society that values diversity of thought and can engage in reasoned, empathetic, and respectful conversations on complex issues.

Epistemic bubbles also increase susceptibility to fake news, which decreases the accuracy of circulating information. The deeper one sinks into one side of an argument, the more controversial and speculative the information presented becomes. The bubble encourages full commitment to already-decided beliefs instead of opening the discussion for others to challenge them. Such exposure can damage the fabric of democracy, as values are not questioned and advanced but instead remain stagnant.

Echo chambers, however, are much more compounded in their manifestations. One of the most prevalent examples can be observed in cults, where their insular practices create a perfect breeding ground for cognitive isolation. These groups often utilize the notion of a higher power as a potent tool to instill fear within their members, thereby consolidating their grip on authority. By leveraging the fear of divine consequences, cult leaders manipulate the psychological vulnerabilities of their followers, rendering them more compliant and obedient. This manipulation of faith and fear is a sinister strategy employed to solidify control over individuals, eroding their critical thinking and promoting unwavering allegiance to the group’s distorted ideology. Within such echo chambers, dissenting voices are continuously discouraged, further entrenching the group’s isolation from diverse perspectives and reinforcing their narrow worldview. Consequently, these insular environments perpetuate intolerance and stunt individuals from engaging in open, rational discussions with the broader society.

Online echo chambers can also manifest through the deliberate design of certain apps, which prioritize one perspective while excluding all others. A notable example is Truth Social, an app developed by the Trump Media & Technology Group. This platform was specifically engineered to provide a streamlined communication channel for discussing political figures' status and actions, intentionally excluding opposing viewpoints that might clash with the intended narrative. This approach can be detrimental to users, as it effectively isolates them from opinions and values that might challenge their perspectives. The platform actively amplifies the echo chamber effect by exclusively promoting and featuring content aligned with users' preexisting beliefs, reinforcing their existing viewpoints.

Although echo chambers and epistemic bubbles both involve the basic act of partitioning a person from differing ideas, the two are meaningfully different and must not be used interchangeably. Clear definitions that distinguish between them are crucial for comprehending their effects.

II. The Perpetuation of Epistemic Bubbles Through Social Media

Social media algorithms are the sets of rules and instructions that personalize the content displayed to users across various applications. The data they draw on comes from many sources, such as users' search histories (utilized by platforms like Facebook, Google, and Amazon), the accounts they follow, their followers, and the preferences they express through likes. These numerical insights hold tremendous significance in today's data-driven landscape. Notably, companies like Acxiom, a data-brokerage firm, possess and monetize vast datasets covering "96 percent of American households and half a billion people worldwide" ("Filter Bubbles and the Public Use of Reason: Applying Epistemology to the Newsfeed," n.d.). These staggering holdings are not confined to private hands; they are accessible to government entities and anyone willing to meet the financial cost of acquisition. This allows private corporations with compromised motives to shape the media consumed by many. Morality is not a consideration when economic success is at stake, and the ethics of delivering unbiased information to users' screens will be overlooked if need be. When financial interests are on the line, the imperative to prioritize ethical standards is easily overshadowed, jeopardizing the very foundation of a society's commitment to truth and transparency. This, in turn, allows misinformation to spread more widely in the pursuit of profit.

Once a person likes something that follows a general trend, that trend will be shown to them consistently afterward. Social media companies want to lure people in and keep their users constantly stimulated and on the app, so even a slight interaction with a fandom, celebrity, or discussion can take over their recommendations entirely. Algorithms that create a personalized online environment by showing users content that reinforces their existing beliefs and opinions produce filter bubbles, another term for epistemic bubbles (Hertwig et al., 2020), as sketched below.
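The feedback loop described above can be illustrated with a short sketch. The Python example below is a minimal, hypothetical illustration; the function name, post data, and ranking rule are assumptions for demonstration, not the recommendation logic of any real platform.

```python
# Minimal, hypothetical sketch of a like-driven feedback loop: the feed is
# re-ranked around topics the user has already liked, so each interaction
# narrows future recommendations. All names and data are illustrative.
from collections import Counter

def rank_feed(posts, liked_topics):
    """Order candidate posts by how often the user has liked their topic."""
    weights = Counter(liked_topics)
    return sorted(posts, key=lambda post: weights[post["topic"]], reverse=True)

liked_topics = ["celebrity_x"]  # a single interaction with one fandom
posts = [
    {"id": 1, "topic": "celebrity_x"},
    {"id": 2, "topic": "local_politics"},
    {"id": 3, "topic": "celebrity_x"},
    {"id": 4, "topic": "science_news"},
]

for _ in range(3):
    feed = rank_feed(posts, liked_topics)
    liked_topics.append(feed[0]["topic"])  # the user engages with the top item

# After a few rounds the user's history is dominated by the original topic,
# mirroring how one slight interaction can take over the recommendations.
print(Counter(liked_topics))  # Counter({'celebrity_x': 4})
```

Even this toy loop shows how quickly a single liked topic can crowd everything else out of the top of the feed.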

The ubiquity and value of such data have prompted debates and concerns regarding privacy, surveillance, and ethics, and there must be limits on how information is tracked. Indeed, a frequently heard phrase in the media world is that if users are not paying for a product, it is quite likely that someone else is paying for information about them (Orlowski, 2020).

While algorithms may yield a more immediately relevant feed in the short term, their long-term consequence is often the disappearance of fresh, thought-provoking content capable of broadening one's pre-established convictions. As repetitive media is shown to a person who constantly supports one argument, it pulls them away from neutrality and deeper into conspiratorial thinking. People quickly become consumed by that perspective and therefore grow less tolerant of any other side, boosting egotistical confidence and reducing the likelihood of progression.

III. Nudging and Boosting Interventions

There are, however, proposed solutions to this: interventions. An intervention is a social media mechanism that redirects users to unbiased, objective news. Two types of interventions are the focus of this paper: nudging and boosting. "Nudging" leverages insights from human psychology to guide individuals away from misinformation and, ideally, toward a more beneficial course of action (Basol et al., 2022). A frequent example is adding friction. Our initial cognitive reactions to any idea tend to be impulsive because they are formed under time pressure, yet when we are prompted to engage in deliberate contemplation and reevaluate a subject, we may diverge from that initial response. So every time someone has the urge to repost an idea, a prompt asks them to reconsider, slowing the rapid spread of misinformation.
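As a rough sketch of what such a friction nudge might look like in practice, the following Python snippet asks the user to pause before a repost goes through. The prompt wording and function name are hypothetical, not drawn from any platform or from Basol et al. (2022).

```python
# Minimal sketch of a "friction" nudge: before a repost is published, the user
# is asked to pause and reconsider. Prompt text and names are illustrative.
def repost_with_friction(post_text, ask=input):
    """Ask the user to reconsider before sharing; return True only if they confirm."""
    print(f'You are about to share: "{post_text}"')
    answer = ask("Have you read the full article and checked its source? (yes/no) ")
    if answer.strip().lower() != "yes":
        print("Repost cancelled. Take a moment before sharing.")
        return False
    print("Repost published.")
    return True

# Example (interactive):
# repost_with_friction("Breaking: unverified claim about candidate X")
```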

There are several types of nudging interventions, one being a policy intervention. Here the user still makes a choice, remains exposed to other options, and is not swayed by any economic incentive (Schmauder et al., 2023). Adherence to these principles ensures the morality of nudging, where users maintain autonomy in their decision-making. Any deviation from these principles risks sliding into censorship, thus contradicting the original intent of nudging.

The second intervention, known as “boosting,” operates independently by equipping users with the essential skill of distinguishing misinformation from credible, factual information (Hertwig et al., 2020). An illustrative example of this approach could involve implementing educational programs in schools, designed to effectively instruct students on the methods for identifying and mitigating fabricated information, while also providing them with the tools to locate trustworthy and reputable articles.

In the sequence of these interventions, “nudging” precedes “boosting” as it serves as the foundational step where individuals learn to differentiate between fake news and reliable sources. Nudging involves subtle prompts and reminders that encourage critical thinking and information evaluation. By gently nudging users toward fact-checking and source verification, it helps build essential cognitive skills (Hertwig et al., 2020). This initial phase is pivotal in equipping individuals with the tools to discern misinformation and maintain a healthy level of skepticism when encountering news and information online.

Once the foundation of critical thinking is established through nudging, the “boosting” phase can be implemented more effectively. Boosting entails the promotion of credible and accurate sources of information and teaches users to search more independently (Hertwig et al., 2020), enabling users to access trustworthy content easily. It not only reinforces the value of reliable news outlets but also aids in breaking the echo chamber effect by introducing diverse viewpoints and perspectives. Together, these two stages in the intervention process form a comprehensive strategy to combat the spread of misinformation and enhance digital literacy among internet users.

IV. Involvement of Artificial Intelligence

Just as artificial intelligence serves as the foundation of these algorithms, it can also supply the remedy. Standard nudging interventions are designed for society at large; they are not customized to each individual and are therefore not always effective. Algorithmic nudging, by contrast, can be built from an individual's personal data. For an individual inside an epistemic bubble, customized interventions assume a more supportive role. These nudges must align with the "pro-self" and "pro-social" criteria, signifying their purpose as tools designed to aid and benefit the user (Schmauder et al., 2023). To be "pro-self" means a nudge serves the best interest of its user, which differs from person to person: a young mother would derive value from advertisements promoting educational opportunities, while an elderly woman would find relevance in commercials about retirement homes. "Pro-social," by contrast, is more utilitarian: promoting green energy is universally beneficial, and therefore relevant to all, but not as personal.
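To make the pro-self versus pro-social distinction concrete, here is a small Python sketch of how a personalized nudge might be chosen. The user groups, nudge texts, and fallback rule are illustrative assumptions rather than anything specified by Schmauder et al. (2023).

```python
# Hypothetical sketch: prefer a "pro-self" nudge tailored to the user's
# situation, and fall back to a universal "pro-social" nudge otherwise.
PRO_SELF_NUDGES = {
    "young_parent": "Local adult-education courses are enrolling now",
    "retiree": "A guide to comparing retirement communities",
}
PRO_SOCIAL_NUDGE = "How switching to green energy lowers community emissions"

def choose_nudge(user_profile):
    """Return a tailored nudge when one exists; otherwise the universal one."""
    return PRO_SELF_NUDGES.get(user_profile.get("group"), PRO_SOCIAL_NUDGE)

print(choose_nudge({"group": "young_parent"}))  # pro-self: tailored to the user
print(choose_nudge({"group": "student"}))       # pro-social: universal fallback
```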

Nevertheless, harnessing AI as the arbiter of the information each individual encounters brings certain drawbacks. For example, the term 'black box' is used for AI systems whose internal mechanisms remain concealed and unreadable to human comprehension: we can observe the data the model processes and the results it produces, but its intermediate reasoning remains elusive (Schmauder et al., 2023). This is perilous because, while AI interventions are initially deployed to prevent censorship, without continuous monitoring and control they can swiftly shift toward censoring users instead, with no human able to intervene. This can lead to slander, suppression of free speech, information control, lack of accountability, social stagnation, and human rights violations, all outcomes of censorship. AI lacks emotional intelligence and therefore cannot judge what is morally right and wrong, and without any human oversight it is open to corrupt outcomes.

Despite this, the prevalence of online epistemic bubbles has reached an alarming level, and their impact on society is undeniable. It’s imperative that we acknowledge this growing problem and take proactive steps to address it. Failing to take action would not only be a missed opportunity but could also lead to the increasing spread of misinformation and the erosion of critical thinking skills. Fortunately, AI has demonstrated its potential to play a revolutionary role in mitigating this issue by deploying its capabilities to curtail the spread of false information and by empowering individuals to become more self-reliant in their information evaluation.

V. Conclusion

In conclusion, this paper has examined the pervasive issue of censorship throughout history and its manifestation in the form of epistemic bubbles and echo chambers, primarily driven by social media algorithms. While censorship takes on both direct and indirect forms, the nature of indirect censorship through epistemic bubbles and echo chambers presents a unique challenge in today's digitally interconnected world. Epistemic bubbles and echo chambers, while different, both contribute to a polarized and intolerant society that is often resistant to alternative viewpoints. The impact of algorithms in shaping these epistemic bubbles and echo chambers is substantial, as they reinforce preexisting beliefs and limit exposure to diverse perspectives. To make progress, it is essential to recognize and address this issue and to foster a society that values diversity of thought and open-mindedness.

Interventions such as nudging and boosting offer potential solutions to combat the spread of misinformation and enhance digital literacy. Nudging, which encourages critical thinking and information evaluation, serves as the foundational step in building essential cognitive skills. Once critical thinking is established, boosting can be implemented to promote credible and accurate sources of information and introduce diverse viewpoints, surfacing certain content while leaving the user to decide whether it is credible. Artificial intelligence can assist by facilitating personalized interventions rather than a more generalized approach. This adaptability to individual preferences and interests can increase the interventions' effectiveness for online users. Even though black-box AI systems introduce additional complexities and concerns, taking proactive steps is imperative as we strive to advance and address these issues effectively.

In today's digital age, knowledge consumption about elections, natural disasters, product launches, terrorist attacks, stock market fluctuations, space exploration, cultural events, and job opportunities has shifted online, encompassing virtually every aspect of our lives. To keep up in this booming digital world, students must be properly equipped to distinguish authentic from false news. To target the situation at its root, implementing mandatory media literacy curricula in classrooms could reduce the need for interventions later on. The very essence of democracy demands educated citizens with seamless access to unbiased, fact-checked news; anything less risks the shadow of an escalating oligarchy. Proficiency in navigating the digital realm is essential for fostering informed voters who can actively contribute to society without being misled by misinformation, ensuring that diverse perspectives are integrated into our government.


Acknowledgements

I am immensely grateful to the Polygence Research Academy for providing me the opportunity to publish my work and offering access to their exceptional resources. Throughout my five-month tenure, I received invaluable guidance from my research program mentor, Elisabeth Holmes. Her mentorship sessions were instrumental in broadening my understanding and knowledge base, as she provided curated articles that significantly bolstered the depth of this paper. She often incorporated her own external information into our discussions, sparking countless engaging conversations and debates, and recommended at least one book or documentary during every session, adding a recreational aspect to this project. Her insightful contributions not only enhanced this paper but also enriched my overall awareness. I would also like to thank my showcasing mentor, Dilyara Agisheva, who assisted as a third-party perspective, bringing clarity and structure to my paper. Without both their guidance, this composition would not have been possible.