We Didn’t Ask for This Internet
Ragebait, sponcon, A.I. slop — the internet of 2026 makes a lot of us nostalgic for the internet of 10 or 15 years ago. What exactly went wrong? How did the early promise of the internet get so twisted? And what kinds of policies could actually make our digital lives meaningfully better? Cory Doctorow and Tim Wu have two different theories of the case, which I thought would be interesting to put in conversation with each other. Doctorow is a science fiction writer, an activist with the Electronic Frontier Foundation and the author of “Enshittification: Why Everything Suddenly Got Worse and What to Do About It.” Wu is a law professor who worked on technology policy in the Biden White House; his latest book is “The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity.”
What Do the People Building AI Believe?
In The Atlantic’s Galaxy Brain, Charlie Warzel explores the culture of the AI boom with the writer Jasmine Sun, who has been chronicling San Francisco’s AI scene. Sun describes what this moment feels like on the ground, including a subculture of massive salaries and a weird pride in leaning into tech’s strangeness. Together, Warzel and Sun unpack two major factions shaping the industry: the AI “doomers” and the accelerationists. The conversation also traces Silicon Valley’s rightward drift — the “founder mode” backlash against regulation and employee activism, and the rise of Trump-style, provocation-first tech marketing. Finally, Sun and Warzel address the jagged reality of today’s models, which are brilliant at some tasks and weak at others.
Tech Billionaires Want Us Dead
Taylor Lorenz argues that a strand of Silicon Valley ideology treats biological humanity as temporary and sees AI or digital beings as our “successors.” She traces this worldview from early cyber-utopianism through transhumanism, long-termism, and accelerationism, claiming billionaires are funding AI, bunkers, life-extension, and escape plans while accepting human harm as collateral. Her conclusion is that this future is not inevitable: society should regulate big tech, challenge billionaire power, and defend a human-centred technological future.
AI agents could pose a risk to humanity. We must act to prevent that future
Moltbook, an online platform for AI systems to communicate autonomously, raises concerns about the potential for rogue AI. While AI agents offer convenience, their increasing autonomy and lack of safety measures pose risks, including loss of control and potential harm to humanity. The author argues for a halt to the rapid advancement of AI capabilities and the implementation of international limits on AI development.
‘Our consciousness is under siege’: Michael Pollan on chatbots, social media and mental freedom
Michael Pollan argues that human consciousness, a precious realm of mental freedom, is under siege from various forces. He suggests adopting “consciousness hygiene” to protect this space, including practising meditation, being mindful of social media’s influence, and recognising the limitations of chatbots. Pollan also highlights the potential of psychedelics as a radical form of consciousness hygiene, drawing parallels with meditation in their ability to foster self-awareness and control.
What technology takes from us – and how to take it back
The article describes the dangers of relying too heavily on technology and AI, particularly in areas like relationships and creativity. It argues that technology often replaces human connection and intimacy with superficiality and efficiency, and it concludes by emphasizing the importance of cherishing the human experience, even if that means embracing the imperfections and uncertainties that come with it.
Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more
Big tech companies like Amazon, Google, and Apple dominate the web, raising concerns about data privacy, environmental impact, and monopolistic power. However, there are ethical and often European alternatives available for search engines, browsers, email services, office tools, and smartphones. These alternatives prioritise privacy, sustainability, and independence, offering viable options for those seeking to reduce their reliance on big tech.
This Is What It Looks Like When Nothing Matters
The internet is experiencing a nihilism crisis, characterised by a pervasive sense of meaninglessness and a disregard for traditional norms and institutions. This is evident in the rise of trolling, the normalisation of offensive content, and the use of memes to trivialise significant events. The phenomenon is fuelled by social media platforms’ lax moderation and the proliferation of AI-generated content, leading to a culture where self-promotion and shock value are prioritised over substance.
Without stronger privacy laws, Australians are guinea pigs in a real-time dystopian AI experiment | Peter Lewis
Bunnings’ use of facial recognition technology highlights Australia’s unpreparedness for the AI era. Outdated privacy laws, which haven’t been updated in 40 years, are a significant barrier to protecting citizens’ data and rights. The government’s National AI Plan, which prioritises existing laws over new regulations, risks leaving citizens vulnerable to exploitation by powerful tech companies.
Why your kid is yelling “chicken banana”
The phrase “chicken banana,” originating from a Swedish techno song, has become a popular and nonsensical catchphrase among children, spreading through social media and becoming a part of their cultural lexicon. This phenomenon highlights the influence of social media on children’s culture and the growing overlap between AI-generated content and human silliness. While the meaning of “chicken banana” is unclear, its absurdity and humour resonate with children, allowing them to express themselves and create a sense of belonging within their peer group.