Ilya Sutskever: The 100 Most Influential People in AI 2023
There is a precedent, according to Ilya Sutskever, for a less intelligent being ensuring that radically smarter and more powerful ones act in their interests. That precedent is the human baby. “We know that it’s possible,” says Sutskever, chief scientist at OpenAI. “Parents care very deeply about the well-being of their children. It can be done. How does this imprinting work?”
The man pondering that question is one of the industry’s most celebrated technical minds. Before joining OpenAI as a founding member in 2015, Sutskever was already famous for breakthroughs that turbocharged the fields of computer vision and machine translation. Poaching him from Google was OpenAI’s earliest coup; without him, the company’s long list of innovations that followed would likely look very different. Sutskever’s name is listed on research papers that led to ChatGPT and the image generator DALL-E, as well as dozens of others. But talk to Sutskever, and you quickly get the sense he feels his most important work is ahead of him.
In July, OpenAI announced that Sutskever, 37, would be the co-lead of its new Superalignment team, which aims to solve the technical challenge of how to ensure superintelligent AI acts in the interests of humanity. The work is separate from, though parallel to, the company's shorter-term research efforts into aligning the weaker AI systems that exist today or will in the near future.
In Sutskever’s view, the task of understanding how to impress certain values onto significantly superhuman systems could not have higher stakes. “The upshot is, eventually AI systems will become very, very, very capable and powerful,” he says. “We will not be able to understand them. They’ll be much smarter than us. By that time it is absolutely critical that the imprinting is very strong, so they feel toward us the way we feel toward our babies.”