Imagine a world where machines think for us, shaping our thoughts and decisions. Is this the future we want? Media professor Petter Bae Brandtzæg warns that AI’s unparalleled ability to generate ideas and answers is quietly eroding our critical thinking skills. AI promises efficiency, but it may be costing us something far more valuable: the ability to reason independently.
Just three years ago, ChatGPT was unheard of. Today it is used by 800 million people worldwide, and AI’s rapid integration into our lives, from social media to email programs, has become the new normal. But unlike social media, which we can choose to avoid, AI is inescapable. As Brandtzæg puts it, ‘We’re all partners with AI, whether we like it or not.’ What most people miss is that its pervasive presence is subtly reshaping how we think, communicate, and even perceive the world.
In his project ‘An AI-Powered Society,’ Brandtzæg explores how generative AI, AI that creates content, affects both individuals and society. Strikingly, even the Norwegian Commission for Freedom of Expression largely overlooked AI’s societal implications in its 2022 report. ‘AI affects our language, our moral judgment, and how we understand the world,’ Brandtzæg explains. With ChatGPT launching shortly after that report, his research became even more urgent.
One of the most intriguing concepts Brandtzæg introduces is ‘AI-individualism.’ Building on the idea of ‘networked individualism’ from the early 2000s, the term describes how AI is blurring the lines between humans and systems. AI is no longer just a tool; it is becoming a relational partner that meets personal, social, and emotional needs. Students, for instance, turn to ChatGPT for academic help and emotional support, preferring its instant, tailored responses to traditional resources. ‘It strengthens individual autonomy but may weaken community ties,’ Brandtzæg notes. This shift could fundamentally alter our social structures.
One finding stands out: in a study, over half of participants preferred mental health advice from a chatbot to advice from a human professional. This raises a provocative question: are we trading genuine human connection for the convenience of AI? Brandtzæg’s concept of ‘model power’ sheds light on this. AI’s influence is not just about providing answers; it is about shaping the very models of reality we accept. When AI generates summaries for Google searches or informs public reports, it wields a monopoly on information that can distort beliefs and behaviors.
More unsettling still are AI’s ‘hallucinations’: fabricated information presented as fact. These have already had real-world consequences, such as a Norwegian municipality basing a decision to close schools on AI-generated data. And although 91% of Norwegians say they are concerned about AI-spread misinformation, we often follow AI’s advice uncritically. ‘It’s one-way communication masquerading as a dialogue,’ Brandtzæg warns.
There is also a cultural dimension: AI’s dominance is rooted in American data and values. Norwegian content makes up less than 0.1% of the training data behind models like ChatGPT, so we are increasingly shaped by an American monoculture. What does this mean for global diversity and local values? As Brandtzæg points out, AI is not a democratic project but a commercial one, controlled by a handful of U.S. companies. This raises urgent questions about regulation and about balancing technological advancement with human values.
So here’s the big question: as AI becomes more integrated into our lives, are we enhancing our capabilities or outsourcing our humanity? Let’s discuss: do you think AI’s benefits outweigh its risks, or are we walking into a future where critical thinking becomes a relic of the past? Share your thoughts below!