Responsible AI in Protein Science

Machine learning is transforming protein science, unlocking powerful tools to improve human and planetary health. To ensure these advancements benefit everyone, we champion initiatives that foster safe and ethical practices in our field.

Our Commitment to Responsible AI

We are proud to have developed leading software for protein structure prediction and design. As tool builders, we also recognize our responsibility to create beneficial technologies that do not increase security risks. This is what we call Responsible AI.

AI Safety Summit

In 2023, we convened the world’s first AI safety summit focused on protein science. This landmark event brought together leading computational biologists, ethicists, science publishers, and representatives from key organizations, including the White House Office of Science and Technology Policy.

The meeting launched new partnerships and sparked a global effort to develop guidelines that researchers can follow as they create and share advanced tools for biomolecular research.

Community Values, Principles, and Commitments

Over 170 senior scientists who lead research teams in our field have signed new community standards on responsible AI development. Signatories from the developer community include our Institute’s director David Baker, who was awarded the 2024 Nobel Prize in Chemistry for computational protein design.

These community standards encourage ethical behavior by individual researchers, for example by creating obligations to report concerning research practices and to purchase synthetic DNA only from providers that adhere to industry-standard biosecurity screening.

We invite all researchers, developers, and institutions in protein science to join this collaborative effort. Learn more at responsiblebiodesign.ai.

Strengthening Global Health Security

AI-driven protein design can significantly accelerate the development of vaccines, therapeutics, and diagnostics, thereby improving preparedness and response to infectious diseases. Our Institute’s executive director Lynda Stuart, Microsoft’s Chief Scientific Officer Eric Horvitz, and former BARDA director Rick Bright recently explored this opportunity in an article published by the US National Academy of Medicine.

SKYCovione manufacturing facility in South Korea. Image: SK bioscience

Ensuring Science Benefits Everyone

Our approach to creating safe and beneficial AI technologies includes:

  • Enhancing Pandemic Preparedness: We collaborate with global partners to design diagnostics, therapeutics, and vaccines, essential tools for public health. Notably, we contributed to the development of the first computationally designed protein medicine.
  • Securing the Digital-to-Physical Divide: To preserve biosecurity in the age of AI, we advocate for universal screening and logging of all synthesized DNA sequences and are eager to explore new collaborations to strengthen these security measures.
  • Building Global Partnerships: To ensure that AI tools in protein science are safe and trustworthy, we work with government agencies, non-profits, and other international organizations. Given the global nature of science, these partnerships are crucial for fostering innovation and maintaining security.

Additionally, our Institute’s executive director Lynda Stuart co-chairs the U.S. National Academies Consensus Study on biosecurity and AI. This study, mandated by President Biden’s Executive Order 14110, seeks to define the benefits, applications, and security implications of AI in life sciences research.

We welcome new partnerships on these crucial topics. If you are interested in working with us, please write to contact@ipd.uw.edu.