
Inspiring Tech Leaders
Dave Roberts talks with Tech Leaders across the industry, exploring their insights and experiences and providing advice to the next generation of technology professionals. A podcast that offers listeners practical leadership guidance and inspiring motivation for their own career development.
Elon Musk vs. OpenAI - The Billionaire Battle for AI’s Future and the Global Regulation Dilemma
It has been another week of high-stakes drama in the AI world! On February 14, OpenAI rejected a massive $97.4 billion bid from an investor group led by Elon Musk. But was it ever a serious offer? Or just a strategic move to disrupt OpenAI’s for-profit shift?
Meanwhile, the Paris AI Summit saw world leaders signing a pledge for ethical AI, but the UK and US refused to sign. Is this about protecting innovation or avoiding much-needed regulation?
In this episode of Inspiring Tech Leaders, we break down:
💡 Elon Musk vs. OpenAI – A billionaire battle over the future of AI
💡 The $500B Stargate Project – OpenAI’s next big move in AI infrastructure
💡 The AI Regulation Divide – Why the UK & US refused to sign the Paris AI Summit agreement
💡 What this means for YOU – The impact of AI’s rapid commercialisation on jobs, privacy & innovation
AI is evolving fast, and the battle for control is heating up. Should AI be more heavily regulated, or should innovation take the lead?
I’m truly honoured that the Inspiring Tech Leaders podcast is now reaching listeners in over 70 countries and 925+ cities worldwide. Thank you for your continued support! If you enjoyed the podcast, please leave a review and subscribe to ensure you're notified about future episodes. For further information, visit https://priceroberts.com
Welcome to the Inspiring Tech Leaders podcast, with me, Dave Roberts. It has been another week where AI is grabbing headlines as tech titans battle it out for control of OpenAI, the creator of ChatGPT. On Friday 14th February, OpenAI rejected an unsolicited $97.4 billion bid from an investment group led by Elon Musk.
OpenAI responded that the firm was not for sale, emphasising that its mission remains ensuring Artificial General Intelligence, or AGI as it is known, benefits all of humanity rather than enriching a select few.
But here is the thing, Musk’s bid might not have been a serious acquisition attempt in the first place. It could be a strategic move to slow down OpenAI’s transition into a fully for-profit company.
We should remember, though, that Musk co-founded OpenAI alongside Sam Altman, but he stepped away as OpenAI pivoted toward profit-driven investments. Since then, he has been vocal about his belief that OpenAI has strayed from its nonprofit mission.
Musk has said he wants to return OpenAI to its nonprofit roots. But at the same time, he is building his own for-profit AI company, xAI. Founded in March 2023, xAI's stated goal is to understand the true nature of the universe and to create AI capable of advanced mathematical reasoning.
OpenAI’s CEO, Sam Altman, called Musk’s bid an attempt to buy out a competitor rather than a genuine effort to ensure AGI benefits humanity. Altman’s response on X was a simple, snarky “no thank you but we will buy twitter for $9.74 billion if you want.” Musk fired back with just one word: “swindler.”
The tech world thrives on these billionaire feuds, and Musk seems to love the spotlight. But beyond the social media jabs, this bid might have actually achieved something for Musk: it has made OpenAI’s path to a for-profit transition much more complicated. OpenAI’s nonprofit board is now under pressure to prove it is not simply seeking economic advantage. Musk, intentionally or not, has raised the valuation floor, making it harder for Altman to push through structural changes without scrutiny.
The bid may have been rejected, but this is not over. OpenAI is currently valued at around $300 billion to $350 billion, and Musk’s consortium has indicated they are open to raising their offer. Could OpenAI’s board eventually cave?
It is possible, but unlikely. And then there is Musk’s lawsuit against OpenAI and Microsoft, claiming they have breached OpenAI’s original nonprofit mission. If that lawsuit gains traction, it could further complicate OpenAI’s business structure.
Meanwhile, OpenAI is pushing forward with major AI infrastructure projects, including the $500 billion Stargate Project alongside SoftBank, Oracle, Arm, Microsoft, NVIDIA and international investors. That is a huge play to keep AI dominance in the United States and drive productivity and innovation.
OpenAI’s move towards for-profit structures signals the growing commercialisation of AI. And Musk, whether you see him as a disruptor or a controller, is ensuring that OpenAI does not make that transition quietly.
This week also saw the Paris AI Summit, which was intended to bring together world leaders to sign a declaration promising an open, inclusive, and ethical approach to AI. However, the UK and US refused to sign the agreement. Why?
First, let us look at what this agreement actually says. The AI Action Statement, signed by 60 countries including France, China, and India, commits to making AI more transparent, safe, and sustainable. It also acknowledges the growing energy demands of AI, which some experts predict could rival the energy consumption of small nations in the near future.
Sounds reasonable, right? So why did the UK and the US hold back?
For the UK, national security concerns were at the forefront. Officials said the declaration lacked practical clarity on governance and did not sufficiently address the risks AI poses to security. A government spokesperson insisted that this was not about following the US, but about protecting UK interests.
Meanwhile, the US took a firm stance, arguing that too much regulation could kill a transformative industry before it has the chance to grow. In other words, for the Trump administration, the economic potential of AI outweighs the risks.
On one side, proponents of regulation, such as French President Emmanuel Macron, argue that rules are necessary to move AI forward responsibly. They believe that unregulated AI could lead to risks like misinformation, job displacement, and even cybersecurity threats.
On the other hand, industry advocates, such as UKAI, a trade body representing British AI businesses, support the UK’s decision. They argue that too much regulation could stifle innovation and that the focus should be on striking a balance between responsibility and economic growth.
There is a risk that unregulated AI could be exploited by hackers or even used to develop bioweapons. Many people believe a global approach is essential to manage these risks, and that without it AI will be developed too quickly and without proper safeguards.
Beyond the politics, what does this all mean for everyday people? Well, unregulated AI could impact jobs, privacy, and even national security. As AI systems become more advanced, they will become more embedded into our lives and every aspect of society.
There is a significant risk that those with less access to technology may be left out of AI’s benefits, while being disproportionately affected by its risks. This raises concerns about digital inequality and fairness in AI systems.
So, what happens next? The UK and US may not have signed this particular agreement, but that does not mean they are ignoring AI governance altogether. The UK has signed other agreements on AI sustainability and cybersecurity, and it has previously led discussions on AI safety.
Meanwhile, the US continues to prioritise economic growth over strict regulation, though that could change depending on who is in power after the next election.
Well, that is all for today’s episode of the Inspiring Tech Leaders podcast. What do you think? Should AI be more heavily regulated, or should we let innovation take the lead?
If you enjoyed today’s episode, do not forget to subscribe and leave a review! Thanks again for listening, and until next time, stay curious, stay connected, and keep pushing the boundaries of what’s possible in tech.