NEW YORK – Many experts in A.I. and computer science say the technology is likely a watershed moment for human society. But 36 percent of the researchers surveyed for an annual report on the technology by Stanford University’s Institute for Human-Centered A.I., published earlier this month, don’t mean that as a positive, warning that decisions made by A.I. could lead to “nuclear-level catastrophe.”

Almost three-quarters of researchers in natural language processing—the branch of A.I. concerned with enabling computers to understand and generate human language—say the technology might soon spark “revolutionary societal change,” according to the report. And while an overwhelming majority of researchers say the future net impact of A.I. and natural language processing will be positive, concerns remain that the technology could soon develop potentially dangerous capabilities, while A.I.’s traditional gatekeepers are no longer as powerful as they once were.

“As the technical barrier to entry for creating and deploying generative A.I. systems has lowered dramatically, the ethical issues around A.I. have become more apparent to the general public. Startups and large companies find themselves in a race to deploy and release generative models, and the technology is no longer controlled by a small group of actors,” the report said.

A.I. fears over the past few months have mostly been contained to the technology’s disruptive implications for society. Companies including Google and Microsoft are locked in an arms race over generative A.I., systems trained on troves of data that can generate text and images based on simple prompts. But as OpenAI’s ChatGPT has already proven, these technologies can quickly wipe out livelihoods. If generative A.I. lives up to its potential, up to 300 million jobs could be at risk in the U.S. and Europe, according to a Goldman Sachs research note last month, with legal and administrative professions the most exposed.

Goldman researchers noted that A.I.’s labor-market disruption could be offset in the long run by new job creation and improved productivity, but generative A.I. has also sparked fears over the technology’s tendency to be inaccurate. Both Microsoft’s and Google’s A.I. offerings have frequently made untrue or misleading statements, with one recent study finding that Google’s Bard chatbot can create false narratives on nearly eight out of 10 topics. A.I.’s imprecision, along with its tendency toward disturbing conversations during extended use, has pushed developers and experts to warn that the technology should not be used to make major decisions just yet.
