What this year’s Nobels can teach us about science and humanity.
Alan Burdick and
Technology observers have grown increasingly vocal in recent years about the threat that artificial intelligence poses to the human variety. A.I. models can write and talk like us, draw and paint like us, crush us at chess and Go. They express an unnerving simulacrum of creativity, not least where the truth is concerned.
A.I. is coming for science, too, as this week’s Nobel Prizes seemed keen to demonstrate. On Tuesday, the Nobel Prize in Physics was awarded to two scientists who helped computers “learn” closer to the way the human brain does. A day later, the Nobel Prize in Chemistry went to three researchers for using A.I. to invent new proteins and reveal the structure of existing ones — a problem that stumped biologists for decades, yet could be solved by A.I. in minutes.
Cue the grousing: This was computer science, not physics or chemistry! Indeed, of the five laureates honored on Tuesday and Wednesday, arguably only one, the University of Washington biochemist David Baker, works in the field in which he won.
The scientific Nobels tend to reward concrete results over theories, empirical discovery over pure idea. But that schema didn’t quite hold this year, either. Tuesday’s prize went to scientists who used physics as a foundation for building computer models that were aimed at no groundbreaking result in particular. Wednesday’s laureates, on the other hand, had created computer models that produced major advances in biochemistry.
These were outstanding and fundamentally human accomplishments, to be sure. But the Nobel recognition underscored a chilling prospect: Henceforth, perhaps scientists will merely craft the tools that make the breakthroughs, rather than do the revolutionary work themselves or even understand how it came about. Artificial intelligence designs and builds hundreds of molecular Notre Dames and Hagia Sophias, and a researcher gets a pat on the back for inventing the shovel.