Responsible AI in Protein Science

Machine learning is revolutionizing protein science, unlocking new insights and tools that can improve the health of people and the planet. To ensure this benefits everyone, we’re advancing initiatives to promote safe and ethical research practices in our field.

Our Commitment to Responsible AI

We are proud to have developed leading software for protein structure prediction and design. As tool builders, we also recognize our responsibility to create beneficial technologies that do not increase security risks. This is what we call Responsible AI.

AI Safety Summit

In 2023, we convened the world’s first AI safety summit focused on protein science. This landmark event brought together leading computational biologists, ethicists, science publishers, and representatives from key organizations, including the White House Office of Science and Technology Policy. The meeting launched new partnerships and sparked a global effort to develop guidelines that researchers can follow as they create and share advanced tools for biomolecular research.

Community Values, Principles, and Commitments

Over 170 senior scientists who lead research teams in our field have now signed community standards on responsible AI development. Signatories from the developer community include our Institute’s director David Baker, Nobel laureate Frances Arnold, and Microsoft’s Chief Scientific Officer Eric Horvitz. Supporters include former BARDA director Rick Bright and other members of the pandemic preparedness community.

These community standards encourage ethical behavior on the part of individual researchers by, for example, creating obligations to report concerning research practices and to purchase synthetic DNA only from providers that adhere to industry-standard biosecurity screening practices.

We invite everyone involved in AI development for protein science to join this community effort. Learn more at responsiblebiodesign.ai.

[Image: SKYCovione manufacturing facility in South Korea. Credit: SK bioscience]

Ensuring Science Benefits Everyone

Our approach to creating safe and beneficial AI technologies includes:

  • Enhancing Pandemic Preparedness: We collaborate with global partners to design diagnostics, therapeutics, and vaccines, essential tools for public health. Notably, we contributed to the development of the first computationally designed protein medicine.
  • Securing the Digital-to-Physical Divide: To preserve biosecurity in the age of AI, we advocate for universal screening and logging of all synthesized DNA sequences and are eager to explore new collaborations to strengthen these security measures.
  • Building Global Partnerships: To ensure that AI tools in protein science are safe and trustworthy, we work with government agencies, non-profits, and other international organizations. Given the global nature of science, these partnerships are crucial for fostering innovation and maintaining security.

Additionally, our Institute’s executive director Lynda Stuart co-chairs the U.S. National Academies Consensus Study on biosecurity and AI. This study, directed by President Biden’s Executive Order 14110, seeks to define the benefits, applications, and security implications of AI in life sciences research.

We welcome new partnerships on these crucial topics. If you are interested in working with us, please write to contact@ipd.uw.edu.