Mark Cuban says Elon Musk, Sam Altman are right to warn of potential AI doomsday: ‘It’s not an overreach’
Billionaire Mark Cuban has voiced his support for tech leaders like Elon Musk and OpenAI CEO Sam Altman, who have publicly raised concerns about the potential negative effects of advanced artificial intelligence. Cuban acknowledges that AI is not currently an existential threat to humanity, but he says it is essential to pay attention to the impact it could have on society in the future. Altman recently signed a statement with other AI experts warning of a “risk of extinction” from AI, while Musk has suggested there is a “non-zero chance” AI could seek to destroy or control humanity. Critics have also raised concerns about job displacement and the potential for “bad actors” to use AI to cause societal chaos or attack humanity. Cuban adds that AI may begin to pose a threat in 15 to 20 years and that “real science fiction” scenarios involving advanced AI could emerge in a hundred years.
FAQs:
Q: What is advanced artificial intelligence?
A: Advanced artificial intelligence, or AI, refers to machine learning technologies that can process and analyze large amounts of data, recognize patterns and perform complex tasks with little or no human intervention.
Q: What are the potential risks of advanced AI?
A: Critics and some tech leaders warn that advanced AI could lead to significant job displacement, fuel misinformation, allow “bad actors” to cause societal chaos, or even pose a threat to humanity’s existence.
Q: Why are tech leaders raising concerns about AI now?
A: As AI technology becomes more advanced, there is growing concern about the potential negative effects it could have on society. Tech leaders like Elon Musk and OpenAI CEO Sam Altman are urging caution, warning of future scenarios where AI could pose a threat to humanity.
Q: Is AI currently a threat to humanity?
A: No, AI is currently not considered an existential threat to humanity. However, there is uncertainty about how AI will impact society in the future, which is why tech leaders are raising concerns about its potential risks.
Q: What is artificial general intelligence?
A: Artificial general intelligence, or AGI, is a still-unachieved concept in which advanced AI would develop human-level cognitive abilities. Much of the panic about AI technology has centered on the rise of AGI.

‘It’s not an overreach’: Mark Cuban supports Elon Musk and Sam Altman’s caution about the potential AI apocalypse
Tech industry heavyweights, including Elon Musk and OpenAI CEO Sam Altman, have been justified in raising alarm over the doomsday scenarios that could emerge from advanced artificial intelligence, according to billionaire Mark Cuban. Though Cuban acknowledged that current AI is not an existential threat to humanity, he rejected the idea that warnings from the likes of Musk and Altman were too extreme. Instead, he emphasized that there remains significant uncertainty surrounding the impact AI may have on society, saying: “It’s not an overreach. It’s a request for people to pay attention.”

Critics have cited the potential for AI to cause major job losses, fuel misinformation, and even pose a risk of societal chaos or attack. Altman and other AI experts recently warned that advanced AI could carry risks on par with those associated with nuclear weapons or pandemics.

Cuban acknowledged that while AI is not a threat today, it is conceivable that within 15 to 20 years AI could be used to simulate the creation of new threats, and responses to risks that have not yet been considered. He also argued that the more far-fetched scenarios involving AI might be 100 years away, as scientists and engineers develop ways for humans to interface with AI systems.