Ilya Sutskever’s new start-up, Safe Superintelligence, aims to build A.I. technologies that are smarter than a human but not dangerous.
Ilya Sutskever, the OpenAI co-founder and chief scientist who in November joined other board members to force out Sam Altman, the company’s high-profile chief executive, has helped found a new artificial intelligence company.
The new start-up is called Safe Superintelligence. It aims to produce superintelligence — a machine that is more intelligent than humans — in a safe way, according to Lulu Cheng Meservey, a spokeswoman for the company.
Dr. Sutskever, who has said he regretted moving against Mr. Altman, declined to comment. The news was reported earlier by Bloomberg.
Dr. Sutskever, 37, left OpenAI last month and announced at the time that he would be starting a new project but did not provide details. Ms. Meservey declined to say who is funding the company or how much it has raised. She said the company would not release other products while it works toward building a safe superintelligence.
Dr. Sutskever founded the company alongside Daniel Gross, who worked on A.I. at Apple, and Daniel Levy, who worked with Dr. Sutskever at OpenAI. Dr. Sutskever’s title at the new company will be chief scientist, but he describes his role, according to Ms. Meservey, as “responsible for revolutionary breakthroughs.”
In November 2022, OpenAI captured the world’s imagination with the release of ChatGPT, an online chatbot that could answer questions, write term papers, generate computer code and even mimic human conversation. The tech industry rapidly embraced what it called generative artificial intelligence: technologies that can generate text, images and other media.