Responsible AI in Protein Science

Machine learning is transforming protein science, unlocking powerful technologies that will improve human and planetary health. To ensure these tools benefit everyone, we champion initiatives that foster safe and ethical research practices in our field.

Our Commitment to Responsible AI

We are proud to have developed leading software for protein structure prediction and design. As tool builders, we also recognize our responsibility to create beneficial technologies that do not increase security risks. This is what we call Responsible AI.

AI Safety Summit

In 2023, we convened the world’s first AI safety summit focused on protein science. This landmark event brought together leading computational biologists, ethicists, science publishers, and representatives from key organizations, including the White House Office of Science and Technology Policy.

The meeting launched new partnerships and sparked a global effort to develop guidelines that researchers can follow as they create and share advanced tools for biomolecular research.

Community Values, Principles, and Commitments

Over 170 scientists who lead research teams in our field have signed new community standards for responsible AI development. These standards encourage ethical behavior by individual researchers, for example by creating obligations to report concerning research practices and to purchase synthetic DNA only from providers that adhere to industry-standard biosecurity screening practices.

Signatories from the developer community include IPD director David Baker, who was awarded the 2024 Nobel Prize in Chemistry for computational protein design.

We invite all senior researchers in the field to join this international, scientist-led effort. Learn more at responsiblebiodesign.ai.

Strengthening Global Health Security

AI-driven protein design can significantly accelerate the development of vaccines, therapeutics, and diagnostics, thereby improving preparedness for and response to infectious diseases. IPD executive director Lynda Stuart, Microsoft’s Chief Scientific Officer Eric Horvitz, and former BARDA director Rick Bright explain this opportunity in an article published by the US National Academy of Medicine.

Image: SKYCovione manufacturing facility in South Korea. Credit: SK bioscience

Ensuring Science Benefits Everyone

Our approach to creating safe and beneficial AI technologies includes:

  • Enhancing Pandemic Preparedness: We collaborate with global partners to create essential tools for public health — diagnostics, therapeutics, and vaccines. Notably, we contributed to the development of the first computationally designed protein medicine.
  • Securing the Digital-to-Physical Divide: To preserve biosecurity in the age of AI, we advocate for universal screening and logging of all synthesized DNA sequences (a simplified sketch of this idea follows this list). We welcome collaborations to strengthen these security measures.
  • Building Global Partnerships: To ensure that AI tools for protein science are safe and trustworthy, we work with government agencies, non-profits, and other international organizations. Given the global nature of science, these partnerships are crucial for fostering innovation and maintaining security.
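
To make the screening-and-logging idea above concrete, here is a minimal, self-contained sketch of what a synthesis provider's order-intake step could look like in software. It is illustrative only: the SEQUENCES_OF_CONCERN list, the flagged() substring check, and the log_order() record format are hypothetical placeholders, not IPD's recommendation or any provider's actual screening method; real screening relies on curated databases and homology-based matching rather than exact matches.

    # Toy illustration only: screening and logging a DNA synthesis order.
    # All names and the matching rule are hypothetical placeholders.
    import hashlib
    import json
    import time

    # Placeholder list; a real screening database holds curated sequences of concern.
    SEQUENCES_OF_CONCERN = {
        "ATGCGTACGTTAGC",  # placeholder entry, not a real sequence of concern
    }

    def normalize(seq: str) -> str:
        """Uppercase the sequence and strip whitespace so comparisons are stable."""
        return "".join(seq.split()).upper()

    def flagged(order_seq: str, window: int = 12) -> bool:
        """Flag an order if any window of it overlaps an entry in the concern list.

        Real screening uses homology search against curated databases; this
        exact substring check only stands in for that idea.
        """
        seq = normalize(order_seq)
        targets = {normalize(t) for t in SEQUENCES_OF_CONCERN}
        for i in range(max(len(seq) - window + 1, 1)):
            fragment = seq[i:i + window]
            if any(fragment in t or t in fragment for t in targets):
                return True
        return False

    def log_order(order_seq: str, customer_id: str,
                  logfile: str = "synthesis_orders.log") -> dict:
        """Append a record of who ordered what (as a hash) and whether it was flagged."""
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "customer_id": customer_id,
            "sequence_sha256": hashlib.sha256(normalize(order_seq).encode()).hexdigest(),
            "flagged_for_review": flagged(order_seq),
        }
        with open(logfile, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    # Example: log a short demo order; flagged orders would go to human review.
    print(log_order("atgcgtacgttagcaa", customer_id="demo-lab-001"))

In practice, screening happens on the provider side and any shared log would need to be tamper-evident and privacy-preserving; the point of the sketch is only that screening and logging can be paired at the moment an order is placed.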

Additionally, Lynda Stuart co-chairs the US National Academies consensus study on biosecurity and AI, which seeks to define the benefits, applications, and security implications of AI in life sciences research.

We welcome partnerships on these crucial topics. If you are interested in working with us, please write to contact@ipd.uw.edu.