Skip to content


Be sure to consider the unintended consequences.

  • Sundar Pichai, Google's CEO

Bias and Fairness

Mitigating bias in data and models Evaluating model fairness Inclusive model development Transparency and Explainability


Techniques for explainability Right to explanation Safety

Risk Mitigation

Risk assessment Safeguards against misuse Privacy

Data privacy

Anonymization and de-identification Encryption and secure computing


Internal auditing processes External oversight Accountability measures Access and Inclusion

Fair and equitable access

Digital divides Participatory design Compliance

Laws and regulations

Responsible development guidelines Ethics review processes

To sort


Principles and Guidelines

Key principles of the living guidelines:

First, the summit participants agreed on three key principles for the use of generative AI in research ā€” accountability, transparency and independent oversight.

Accountability. Humans must remain in the loop to evaluate the quality of generated content; for example, to replicate results and identify bias. Although low-risk use of generative AI ā€” such as summarization or checking grammar and spelling ā€” can be helpful in scientific research, we advocate that crucial tasks, such as writing manuscripts or peer reviews, should not be fully outsourced to generative AI.

Transparency. Researchers and other stakeholders should always disclose their use of generative AI. This increases awareness and allows researchers to study how generative AI might affect research quality or decision-making. In our view, developers of generative AI tools should also be transparent about their inner workings, to allow robust and critical evaluation of these technologies.

Independent oversight. External, objective auditing of generative AI tools is needed to ensure that they are of high quality and used ethically. AI is a multibillion-dollar industry; the stakes are too high to rely on self-regulation.


The OWASP Top 10 for Large Language Model Applications project aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs). The project provides a list of the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications. Examples of vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution, among others. The goal is to raise awareness of these vulnerabilities, suggest remediation strategies, and ultimately improve the security posture of LLM applications. You can read our group charter for more information