Yahoo Web Search

Search results

  1. May 30, 2023 · Media coverage of the supposed "existential" threat from AI has snowballed since March 2023 when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the...

  2. May 30, 2023 · Top AI researchers and executives, including the CEOs of Google DeepMind, OpenAI, and Anthropic, have co-signed a 22-word statement warning against the ‘existential risk’ posed by AI.

  3. Jun 1, 2023 · 350 experts sign statement warning of AI extinction threat; among them are OpenAI's Sam Altman, Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei. Superintelligence to be achieved in the next 5 years?

  4. Oct 12, 2024 · Meta’s Yann LeCun says worries about AI’s existential threat are ‘complete B.S.’. AI pioneer Yann LeCun doesn’t think artificial intelligence is actually on the verge of becoming intelligent.

    A debate is brewing between those who think artificial intelligence (AI) could destroy humanity and others who see the technology as a super advanced word processor.

    A group of industry leaders warned this week that AI may one day pose an existential threat to humanity and should be considered a societal risk at the same level as nuclear war. But other experts respond with shoulder shrugs. 

    "While it's tempting to speculate about superhuman AI or rogue machines bringing about an apocalypse, the reality is that we're nowhere near that level of sophistication yet," Adnan Masood, the Chief AI Architect of the tech firm UST told Lifewire in an email interview. "AI, as we know it today, doesn't pose an existential risk. We have yet to create anything approaching the sentience of even a dog, let alone human-level intelligence."

    The recent statement by some AI leaders shows the level of concern among the doom and gloom camp. The letter was signed by more than 350 executives, researchers, and engineers working in AI, including pioneers in the field such as Geoffrey Hinton and Yoshua Bengio.

    "Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” read the one-sentence manifesto released by the Center for AI Safety, a nonprofit organization. 

    The statement was released as governments worldwide are considering regulations for AI. Even many AI industry insiders say it’s time to regulate the technology, or to temporarily halt its development until a safety plan can be devised.

    But Masood is among those who say that the risks from AI are overblown.

    "When we compare AI to truly existential threats such as pandemics or nuclear war, the contrast is stark,” he added. “The existential threats we face are tied to our physical survival—climate change, extreme poverty, and war. To suggest that AI, in its current state, poses a comparable risk seems to create a false equivalence.”

    Even AI boosters like Masood acknowledge that AI isn't without risk. He pointed to cases of algorithmic bias leading to unfair treatment and of AI technologies being leveraged for disinformation campaigns. A recent incident in which an AI tool supplied false legal information in a court filing, he said, shows that AI is not infallible and that the information it generates needs to be treated with the same scrutiny as information from any other source.

    "But these risks, while important, are not existential," he said. "They are risks that we can manage and mitigate through thoughtful design, robust testing, and responsible deployment of AI. But again, these are not existential risks—they are part of the ongoing process of integrating any new technology into our societal structures."

    Other experts point to more mundane risks from AI than human extinction, such as the possibility that chatbot development could lead to the leaking of personal information. Davi Ottenheimer, the vice president of digital trust and ethics for Inrupt, said in an email that OpenAI, the maker of ChatGPT, only recently clarified that it would not use data submitted by customers to train or improve its models unless they opt in.

    "Being so late shows a serious regulation gap and an almost blind disregard for the planning and execution of data acquisition," he added. "It's as if stakeholders didn't listen to everyone shouting from the hilltops that safe learning is critical to a healthy society."

    Many experts argue that AI's potential benefits outweigh the risks. For example, AI is used in healthcare to develop vaccines and improve diagnoses.

    "AI holds tremendous promise for helping to advance the quadruple aim—patient experience, the health of the population, reducing cost, and care team wellbeing," Christine Swisher, the Chief Scientific Officer at Project Ronin, a team that uses technologies to solve clinical problems, said in an email interview. "I always think patients are waiting, and we have a health system that stands so much to gain if we can use AI thoughtfully, with safety and ethics ensured at every phase of the life cycle of an AI system."

  5. In 2022, a survey of AI researchers with a 17% response rate found that a majority believed there is a 10% or greater chance that human inability to control AI will cause an existential catastrophe.
