The New York Times recently wrote about one of our latest preprints. In it, we introduce a generative diffusion model for protein design called RFdiffusion.
[Update: This research was subsequently published in Nature, and RFdiffusion is now free and open source.]
From the New York Times:
Last spring, an artificial intelligence lab called OpenAI unveiled technology that lets you create digital images simply by describing what you want to see. Called DALL-E, it sparked a wave of similar tools with names like Midjourney and Stable Diffusion. Promising to speed the work of digital artists, this new breed of artificial intelligence captured the imagination of both the public and the pundits — and threatened to generate new levels of online disinformation.
Social media is now teeming with the surprisingly conceptual, shockingly detailed, often photorealistic images generated by DALL-E and other tools. “Photo of a teddy bear riding a skateboard in Times Square.” “Cute corgi in a house made out of sushi.” “Jeflon Zuckergates.”
But when some scientists consider this technology, they see more than just a way of creating fake photos. They see a path to a new cancer treatment or a new flu vaccine or a new pill that helps you digest gluten.
Read the full story by Cade Metz at nytimes.com