Thursday: Are we all doomed?
How do we ensure a safe future for all?
Morning Change Makers,
How dangerous is AI, and what should we do about it? It’s a topic that has governments’ ears pricked and news organisations buzzing.
It's also a topic on which Jack and I are in stark disagreement. Here is my take...we can all wait with bated breath for Jack's!
Will
There are few people on the planet more polarising than Edward Snowden or Julian Assange. Snowden leaked information from the NSA, blowing the whistle on the invasive surveillance practices the US was illegally undertaking. Assange founded WikiLeaks, uncovering the truth of numerous illegal US military operations. Are they martyrs of a cause greater than themselves, or traitors?
History is scattered with Snowdens and Assanges whose moral courage was called into question. And we might just have another name to add to the list: a man called Jan Leike.
At the end of last week, Jan resigned from his position as head of Superalignment (OpenAI's safety team). His resignation thread on Twitter gave an extremely damning account of his reasons for departure and the state of safety culture at the company:
"Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI."
"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity."
"But over the past years, safety culture and processes have taken a backseat to shiny products."
— Jan Leike (@janleike), May 17, 2024
This newsletter was created to highlight the amazing work people and businesses are doing to transform our world, so it's only right that we also highlight the actions of those who seek to protect it and give the change makers of tomorrow the freedom to flourish.
This is exactly what Jan is doing, and I’d like to spell out some of the concerns Jan raises and why his protest makes him a Change Maker.
But first, what is the risk of AI?
Technological revolutions always have winners and losers. Social media's winners? Meta, Snapchat and data centres. The losers? The young, whose attention spans have been fried and who are now in a mental health crisis.
Here’s another example. The invention of the printing press around 1440 fuelled widespread violence and religious conflict. The press enabled Martin Luther's 95 Theses, criticising Catholic practices, to spread rapidly, sparking the Protestant Reformation.
In the years that followed, witch trials and burnings increased as Catholics and Protestants waged war on each other. We even named a queen "Bloody Mary" after how many Protestants she killed.
All because someone managed to get ink onto a page in large quantities.
So now, as we look at the invention of another form of intelligence, we should ask what the second-order effects could be and do our best to prevent the most dangerous ones. How do we prevent there being more losers than winners?
AI is dangerous - but where are we now?
At the moment expectations for AI are sky-high, while in reality few of the applications and workflows we’ve seen have been truly revolutionary. Yes, ChatGPT is cool, but I don’t think it has fundamentally changed people’s lives.
As people start to realise their expectations aren’t being met, those expectations will fall. Meanwhile, the models keep getting better and better. At some point expectations and reality will converge, and then perhaps diverge again, with AI far more capable than people realise.
The risk is that as our expectations fall, so will the political attention and the will to regulate these technologies.
And this is where the risk lies: in a lack of smart regulation. Not regulation that inhibits innovation of this amazing technology, but regulation that prevents catastrophic events to the detriment of our species.
Any sane person would argue it is correct to regulate the nuclear weapons industry or the chemical weapons industry, for example. Yet while there have been calls for AI regulation in the US, so far nothing meaningful has happened.
It’s being left to companies to self-regulate, and while they claim to act in the best interests of humanity, they do only what’s in the interests of their shareholders.
Back to OpenAI and Jan
OpenAI's safety was supposed to be governed by Jan and his team. Jan and that team are now gone.
If Jan felt his best course of action to protect against this dangerous future was to quit, it’s safe to say things must be pretty serious.
And now the problem is: if OpenAI’s own safety team is not ensuring these models are put into production safely, then who the hell is?
No one, seems to be the answer.
This is partly because OpenAI takes a closed-source approach to model distribution, meaning you can’t see what’s going on under the hood; you can only access the model’s outputs. That is, you can prompt ChatGPT and see the results (the inference), but you have no idea what’s happening in the middle.
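To make that concrete, here's a minimal sketch of what "output-only" access looks like, using OpenAI's Python client (the model name and prompt are just placeholders, not anything specific from this story):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# All we can do is send a prompt in...
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any available chat model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

# ...and read the text that comes back. The output is all we ever see.
print(response.choices[0].message.content)

# The weights, the architecture, the training data: none of it is visible.
```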
By contrast, some of the biggest companies in the world (Meta with its Llama models, for example) are taking an open-source approach and allowing the public to see into the “architecture” of how the model is composed.
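As a rough illustration of what that openness enables, here's a sketch using the Hugging Face transformers library, with GPT-2 standing in purely as a familiar example of an openly released model:

```python
from transformers import AutoConfig

# With an openly released model, the architecture itself is public:
# anyone can download the config and read off how the model is built.
config = AutoConfig.from_pretrained("gpt2")

print(config.n_layer)  # number of transformer layers (12 for GPT-2)
print(config.n_head)   # attention heads per layer (12)
print(config.n_embd)   # embedding dimension (768)
```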
We do not have this luxury with OpenAI.
Where this leaves us
It appears Jan shares the concerns many have with OpenAI closed-sourcing its models: nobody knows what the fuck they’re up to. And usually with technology that’s kind of OK - who cares all that much about the intricacies of Airbnb’s marketplace infrastructure? But we should care a lot about a technology that, for the first time, has intelligence that can rival humans.
If, like me, you share a certain optimism for the future - and by subscribing to this newsletter, I suspect you do - then building the future of non-human intelligence in a safe and sustainable way becomes an imperative.
To allow the Change Makers of tomorrow to succeed it’s critical that the Change Makers of today build a future that is prosperous and safe for all.
Further resources:
If you’re interested in a more in-depth look at the specific risks of AI, watch this:
A great essay on problems with open-sourcing AI: