

Governance is an essential component of effective AI usage, especially within large organizations or when an AI-enabled product has greater potential to cause harm. Applications of AI need to be evaluated for their risk of causing harm and for whether they are used ethically.

Why govern?

In order to have the greatest positive impact in your use of AI, governance is essential. The larger the organization, the more important governance becomes for minimizing needlessly duplicated internal systems and efforts. Even for smaller organizations, effective governance from the beginning will enable you to create and deliver effective, responsible AI-enabled solutions.

How to Govern

  1. Establish an appropriate body of leadership and a surrounding community that supports the development of AI that is both responsible and effective.
  2. Create or adopt a set of AI principles that align with your company.
  3. Create or adopt a set of procedures for creating, evaluating, and managing your AI systems.
  4. Create, license, or otherwise adopt AI/ML-ops observability platforms and tools for implementing and maintaining AI-enabled projects, consistent with your procedures and principles.
  5. Transparently communicate the development and status of your AI-enabled systems to internal stakeholders and regulatory bodies.


It is possible, if not likely, that more powerful generative and general AI will emerge. Consequently, it is essential to prepare in a way that scientifically and effectively mitigates potential risks, including catastrophic ones. To this end, OpenAI has established a Preparedness Framework; other companies may wish to follow suit. In summary, the framework considers three things:

  1. The categories and classes of risk.
  2. A scorecard model that indicates the level and class of risks.
  3. Governance that minimizes risks and enables effective action when risks emerge or are identified.

Categories and classes of risks

The classes of risk are the following:

  1. Low
  2. Medium
  3. High
  4. Critical

The meaning of these classes depends on the category and is thoroughly described in the framework.

The categories are partitioned into the following:

  1. Cybersecurity
  2. Chemical, biological, radiological, and nuclear (CBRN)
  3. Persuasion
  4. Model autonomy
  5. Unknown unknowns

Scorecards

Scorecards describe the risk level in each category, both before and after risk mitigation.
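To make the scorecard idea concrete, here is a minimal sketch of how the risk classes, categories, and pre/post-mitigation scores could be modeled in code. All names, the overall-risk rule, and the deployment gate are illustrative assumptions, not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    # The four classes of risk from the framework, ordered by severity.
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


# The five risk categories named in the framework.
CATEGORIES = [
    "Cybersecurity",
    "CBRN",
    "Persuasion",
    "Model autonomy",
    "Unknown unknowns",
]


@dataclass
class CategoryScore:
    """One category's risk level before and after mitigation."""
    category: str
    pre_mitigation: RiskLevel
    post_mitigation: RiskLevel


@dataclass
class Scorecard:
    scores: list[CategoryScore]

    def overall_pre_mitigation(self) -> RiskLevel:
        # Assumed rule: the headline risk is the worst category score.
        return max((s.pre_mitigation for s in self.scores),
                   key=lambda level: level.value)

    def deployable(self) -> bool:
        # Hypothetical deployment gate: every category must be
        # Medium or below after mitigation.
        return all(s.post_mitigation.value <= RiskLevel.MEDIUM.value
                   for s in self.scores)
```

A scorecard built this way makes the before/after comparison explicit: a system scoring High on Cybersecurity pre-mitigation but Medium post-mitigation would still pass the assumed gate, while any Critical post-mitigation score would block deployment.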


Governance consists of **safety baselines**:

  • Asset Protection
  • Deployment restrictions
  • Development restrictions

Operations: an operational structure that coordinates the actions and activities of a Preparedness team, a Safety Advisory Group (SAG), the OpenAI leadership, and the OpenAI Board of Directors.