Nat Methods. 2024 May 14.
Gustaf Ahdritz,
Nazim Bouatta,
Christina Floristean,
Sachin Kadyan,
Qinghui Xia,
William Gerecke,
Timothy J O'Donnell,
Daniel Berenberg,
Ian Fisk,
Niccolò Zanichelli,
Bo Zhang,
Arkadiusz Nowaczynski,
Bei Wang,
Marta M Stepniewska-Dziubinska,
Shang Zhang,
Adegoke Ojewole,
Murat Efe Guney,
Stella Biderman,
Andrew M Watkins,
Stephen Ra,
Pablo Ribalta Lorenzo,
Lucas Nivon,
Brian Weitzner,
Yih-En Andrew Ban,
Shiyang Chen,
Minjia Zhang,
Conglong Li,
Shuaiwen Leon Song,
Yuxiong He,
Peter K Sorger,
Emad Mostaque,
Zhao Zhang,
Richard Bonneau,
Mohammed AlQuraishi.
AlphaFold2 revolutionized structural biology with its ability to predict protein structures with exceptionally high accuracy. Its implementation, however, lacks the code and data required to train new models. These are necessary to (1) tackle new tasks, such as protein-ligand complex structure prediction, (2) investigate the process by which the model learns and (3) assess the model's capacity to generalize to unseen regions of fold space. Here we report OpenFold, a fast, memory-efficient and trainable implementation of AlphaFold2. We train OpenFold from scratch, matching the accuracy of AlphaFold2. Having established parity, we find that OpenFold generalizes remarkably robustly even when the size and diversity of its training set are deliberately limited, including under near-complete elision of entire classes of secondary structure elements. By analyzing intermediate structures produced during training, we also gain insight into the hierarchical manner in which OpenFold learns to fold. In sum, our studies demonstrate the power and utility of OpenFold, which we believe will prove to be a crucial resource for the protein modeling community.