The Problem with AI Conference Culture
We are letting techno-optimist corporate shills dominate the chat. Get these men off the stage.
I was just at a conference with the obligatory ‘AI and the Future of Work’ speaker. Five years ago, it was the Resilience speaker. Five years before that, Innovation. Same guys, new buzzword. Somehow, they’re all futurists, too.
These dudes spend 45 minutes bouncing through a badly formatted PowerPoint while saying, “Look what you can do with AI these days!” They’re urging uptake, leverage, adoption, and excitement completely without context.
Which means they’re not just annoying; they’re dangerous. One cliché after another:
The jobs of tomorrow don’t exist yet / It’s not AI that’s the problem; it’s humans who don’t use it properly / Change is the only constant / Embrace AI or be left behind / Get comfortable with discomfort
It is motivational malpractice by the mediocre.
There are important conversations to be had about AI, especially among the executives and policymakers who attend these conferences. What they don’t need is to be bamboozled, impressed, or urged to simply get on board.
We need to discuss how to engage with and prepare for potential social and economic shifts that will change jobs, organisations, and communities. We also need thoughtful dialogue about safety, ethics, and deceleration.
We need clear-eyed, well-informed analysis that distinguishes the real from the hype, the good from the bad, and the probable from the improbable. We need to put away the smoke, mirrors, and fireworks and explain that while this is a big shift, it is neither inevitable nor uncontrollable.
Letting tech companies shape this dialogue makes policymakers and professionals feel powerless, and the consequences could be existential.
Tech exceptionalism is a lie. This technology can and should be regulated, just as industrial and pharmaceutical innovations have been, and bamboozling conference attendees with waffle only makes regulation look further out of reach.
I am working on a piece that lays out some of these conversations plainly. Tech-chat is not my usual domain, but silence leaves the floor open to cannibalisation by SaaS sales pitches and cheerleaders.
I might not be a whizz at AI models, but I do understand systems, power, spin, and policymaking. I know how politicians and officials make decisions and how those choices shape our lives and careers. And I reckon it’s time those insights entered the AI chat. We’ll demystify the magic, ask some tricky questions, and put these conference hype-boys back in their box.
Here’s what I plan to write about. If you have anything you’d like added to the conversation or someone you’d like to put me in touch with, please comment below or DM me.
Topics for the upcoming AI piece:
· What AI actually is and why you should care
· What’s happening – beyond the public facade
· What’s real, and what’s not
· The risks of AI
· The benefits of AI
· Why and how to regulate AI
· Things for policymakers and professionals to consider
· Things for the average person to start thinking about
· How to improve the quality of conversations about AI.
Stay tuned,
AM
A couple of topics that I’d like to see debated. Can we opt out & still participate in modern society? How?
And making sure people understand the energy demands and their impact.
Such a great article, Alicia. You've perfectly articulated some of the most problematic issues with AI culture.
When AI first became a thing, I was frustrated at how one-sided the conversation was. It seemed to me that the precautionary principle was being thrown out the window, and no systems thinking (or even critical thinking) was being applied in the roll-out of this tech.
I'm so glad to see more balanced conversations emerging that aren't all about how AI is "inevitable" and that we just need to get on board or be left behind. I'm seeing a rise in voices that focus on things we can do to slow it down, regulate it properly, and make it helpful to humans, rather than harmful. It's such a relief to see.