May 4, 2026

Who Decides What You Believe?

Chanyoung Hur ’26

Think about how often you open an app without really deciding to. TikTok between classes. Instagram before bed. X when something big happens in the news. It feels automatic, and honestly, that’s kind of the point. What shows up on those platforms isn’t neutral. It’s filtered and ranked by systems that decide what’s worth your attention. And once a platform is making that decision for you, it’s also shaping what feels relevant, and eventually, what feels true. 

Most of us don’t really experience the internet as an open space anymore. We experience it as a feed. Instead of looking for information, we’re handed a stream of content that’s already been selected and ordered for us. That sounds convenient, and it is. No one has time to sort through everything online. But when the same kinds of arguments and emotional reactions keep showing up in front of you, they start to feel obvious just because they’re familiar. Researchers have linked that kind of environment to polarization and more extreme views, not because anyone is directly pushing beliefs on users, but because repetition does its own quiet work. 

That said, it’s not as simple as saying algorithms just manipulate people. Some research has actually found the opposite, that social media exposure can make people better informed. One study had users in France and Germany follow news accounts for two weeks and found real improvements in current-events knowledge and belief accuracy, with no significant effect on polarization. So the problem isn’t exposure itself. It’s more about what kind of exposure, and what kind of attention the platform is actually built to produce. 

A feed doesn’t need to lie to mess with your sense of reality. It can get there by consistently rewarding outrage over depth, or by keeping things familiar instead of challenging. That shift is gradual, and most people don’t notice it happening. 
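To make that a little more concrete, here is a toy version of what a ranking rule can look like. None of this is how any real platform scores posts, and every weight below is made up, but it shows the basic point: someone picks the numbers, and the numbers pick what rises to the top.

```python
# A toy ranking function, not any real platform's scoring system.
# The weights are invented; the point is that someone has to choose them,
# and that choice decides what surfaces in a feed.

def score(post):
    return (
        3.0 * post["angry_reactions"]   # outrage is cheap engagement, weighted heavily here
        + 2.0 * post["shares"]          # shares spread content fast
        + 1.0 * post["likes"]
        + 0.2 * post["seconds_read"]    # depth barely moves the needle in this toy model
    )

posts = [
    {"title": "Careful 2,000-word explainer", "angry_reactions": 4,
     "shares": 10, "likes": 120, "seconds_read": 300},
    {"title": "Inflammatory one-liner", "angry_reactions": 800,
     "shares": 400, "likes": 90, "seconds_read": 8},
]

# Sort the feed by score, highest first: the one-liner wins easily.
for post in sorted(posts, key=score, reverse=True):
    print(f"{score(post):8.1f}  {post['title']}")
```

Nothing in that snippet lies about anything. It just rewards the reactions that are easiest to provoke, and the feed quietly reorganizes itself around them.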

Fixing it is harder than it sounds. Switching to chronological feeds, for example, didn’t produce meaningful changes in how informed or polarized users felt in studies that tested it. And stronger regulation runs into its own problems. Rules broad enough to break up echo chambers in every case would bump up against freedom of expression and privacy. There’s no straightforward solution, and the research doesn’t really suggest there is one. 

So the question of responsibility becomes important. Users do have some say in what they engage with and share, but that can’t be the full answer. Platforms aren’t just neutral containers. They design the environment, decide what signals to reward, and make tradeoffs between engagement, profit, and harm. Some researchers have noted that companies may actually benefit from echo chambers and outrage-driven content, which means these systems might not just be shaping belief by accident.

And it goes beyond individual feeds. Tech executives and their platforms now influence politics in ways that go well past ordinary lobbying, shaping what gets to circulate in public life at all. The algorithm isn’t just neutral math. It exists within a structure of ownership and incentive, and the people who build it make choices that reflect that. 

As a computer science student, that’s something I think about. Recommendation systems, ranking models, moderation tools, they don’t appear out of nowhere. They’re built by people, which means they reflect someone’s choices. A developer doesn’t have to be trying to manipulate anyone for the system they help build to still contribute to an environment that rewards misinformation or keeps people inside ideological bubbles. The technical side of this work is never fully separate from the ethical side when what you’re building shapes how millions of people see the world. 
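Here is a tiny, made-up example of what I mean. The recommender below isn’t anyone’s real system, and the articles and vectors are invented, but it does the most ordinary thing imaginable, suggesting more of what you already clicked on, and that alone is enough to start narrowing the feed.

```python
# A toy "more like what you clicked" recommender. Nothing here is malicious:
# it simply ranks articles by similarity to what the user already engaged with.
# The topics and vectors are invented for illustration.
import numpy as np

articles = {
    "local politics, left lean":  np.array([1.0, 0.1]),
    "local politics, right lean": np.array([0.1, 1.0]),
    "city budget explainer":      np.array([0.5, 0.5]),
}

def recommend(user_profile, k=2):
    # Cosine similarity between the user's running profile and each article.
    sims = {
        title: float(vec @ user_profile / (np.linalg.norm(vec) * np.linalg.norm(user_profile)))
        for title, vec in articles.items()
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]

# The user clicks one left-leaning piece; their profile drifts toward it,
# and the next round of recommendations drifts with them.
profile = np.array([0.5, 0.5])
profile = 0.5 * profile + 0.5 * articles["local politics, left lean"]
print(recommend(profile))
```

No developer in that loop set out to build a bubble. The narrowing falls out of a design choice that looked neutral on its own.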

The research doesn’t support either extreme: that social media is simply corrupting everyone, or that individuals are fully in control of how it affects them. The real issue is what kind of attention these systems are designed to produce, and whose interests are behind the choices that built them.

We can’t control the algorithms. But we can decide whether to treat the feed as reality. The next time an app shows you what it thinks you want, it might be worth asking: why this post, why now, and what is this system quietly training me to accept?
