Enterprises are adopting generative artificial intelligence at scale, using it to transform business processes from sales support to security operations. The benefits are substantial: increased productivity, improved quality, and faster time to market.
With this advancement come risks that must also be considered. These include software vulnerabilities, cyberattacks, improper system access, and exposure of sensitive data. There are also ethical and legal considerations, such as breaches of copyright or data privacy laws, bias or toxicity in generated output, the spread of disinformation and deepfakes, and a further widening of the digital divide. We are already seeing the worst of it in public life, with algorithms used to spread disinformation, manipulate public opinion, and undermine trust in institutions. All of this underscores the importance of security, transparency, and accountability in how we build and use AI systems.
There is good work being done. In the United States, President Biden’s executive order on artificial intelligence aims to promote responsible use of AI and address issues such as bias and discrimination. The National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework for building trustworthy AI systems. The EU has advanced the AI Act, a regulatory framework to ensure the ethical and responsible use of AI. The UK AI Safety Institute is working to develop safety standards and best practices for AI deployments.
The responsibility for establishing a common set of AI guardrails ultimately lies with governments, but we’re not there yet. Today, we have a rough patchwork of guidelines that are inconsistent across regions and unable to keep up with the rapid pace of AI innovation. In the meantime, the responsibility for using AI safely and responsibly falls on us: the AI vendors and our enterprise customers. We need our own set of guardrails.
A new obligations matrix
Forward-thinking companies are already being proactive. They are establishing internal steering committees and oversight groups to develop and implement policies consistent with their legal obligations and ethical standards. I have read more than a hundred requests for proposal (RFPs) from these organizations, and they are instructive. They have helped shape the framework we use at Writer to build our own trust and security plans.
One way to organize your thinking is to build a matrix with four areas of responsibility: data, models, systems, and operations; and map them to three responsible parties: vendors, enterprises, and government.
Guardrails in the data category include data integrity, provenance, privacy, storage, and legal and regulatory compliance. In the model category, they include transparency, accuracy, bias, toxicity, and abuse. In the systems category, they include security, reliability, customization, and configurability. In the operations category, they include the software development lifecycle, testing and verification, access and other policies (human and machine), and ethics.
Within each guardrail category, I recommend listing your key obligations, clarifying the stakes, defining what “good” looks like, and establishing a measurement system. Each area will look different among vendors, businesses, and government entities, but ultimately they should dovetail and support each other.
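As a purely illustrative sketch, the matrix can also be captured as a simple data structure: each guardrail records its category, the obligation, the stakes, what “good” looks like, how it is measured, and who owns it. The field names and example entries below are hypothetical, not a prescribed schema.

```python
# Illustrative only: a hypothetical representation of an AI obligations matrix.
# Category names mirror the article; field names and example entries are invented.
from dataclasses import dataclass, field

@dataclass
class Guardrail:
    category: str             # Data, Model, Systems, or Operations
    obligation: str           # the specific guardrail, e.g. "Privacy" or "Bias"
    stakes: str               # what happens if this obligation is not met
    definition_of_good: str   # what acceptable performance looks like
    metrics: list[str] = field(default_factory=list)  # how it is measured
    owners: list[str] = field(default_factory=list)   # vendor, enterprise, government

matrix = [
    Guardrail(
        category="Data",
        obligation="Privacy",
        stakes="Exposure of sensitive customer or employee data",
        definition_of_good="Sensitive data is anonymized, encrypted, and access-controlled",
        metrics=["% of sensitive fields encrypted", "access-review cadence"],
        owners=["vendor", "enterprise"],
    ),
    Guardrail(
        category="Model",
        obligation="Bias",
        stakes="Unfair or discriminatory outputs in decisions the model influences",
        definition_of_good="Bias testing meets an agreed tolerance and is repeated over time",
        metrics=["fairness test pass rate", "re-test frequency"],
        owners=["vendor", "enterprise"],
    ),
]

# Example use: list the obligations an enterprise owns, with how each is measured.
for g in matrix:
    if "enterprise" in g.owners:
        print(f"{g.category} -> {g.obligation}: {', '.join(g.metrics)}")
```

The point is not the code itself but the discipline it forces: every obligation gets an owner, a definition of success, and at least one metric.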
Below, I’ve selected sample questions from customer RFPs to show how one guardrail in each category plays out in practice, from both the enterprise and the vendor perspective.
| Guardrail | Enterprise | Vendor |
| --- | --- | --- |
| Data → Privacy | Key questions: What information is sensitive? Where is it stored? How might it be exposed? What’s the harm if it is exposed? What’s the best way to protect it? | RFP language: Do you anonymize, encrypt, and control access to sensitive data? |
| Model → Bias | Key questions: Where are our biases? Which AI systems influence our decisions or outputs? What are the dangers if we get it wrong? What does “good” look like? What is our tolerance for error? How do we measure ourselves? How do we test our systems over time? | RFP language: Describe the mechanisms and methods you use to detect and mitigate bias. Describe your approach to bias/fairness testing over time. |
| Systems → Reliability | Key questions: What level of reliability does our AI system need to reach? What is the impact if we miss our uptime SLA? How do we measure downtime and assess system reliability over time? (A minimal measurement sketch follows this table.) | RFP language: Do you document, practice, and measure your response plan for AI system downtime events, including measuring response times and downtime? |
| Operations → Ethics | Key questions: What role do humans play in our AI programs? Do we have a framework that informs those roles and responsibilities? | RFP language: Do you have policies and procedures in place to define and differentiate the various human roles and responsibilities when interacting with or monitoring AI systems? |
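To make the reliability question concrete, here is a minimal, purely illustrative sketch of one way an enterprise might measure uptime against an SLA from a log of downtime incidents. The incident format and the 99.9% target are assumptions for the example, not figures prescribed by any vendor or standard.

```python
# Illustrative only: measuring uptime against a hypothetical 99.9% monthly SLA
# from a simple list of downtime incidents (start, end).
from datetime import datetime

SLA_TARGET = 99.9  # assumed monthly uptime target, in percent

incidents = [  # hypothetical downtime windows for the month
    (datetime(2024, 5, 3, 2, 15), datetime(2024, 5, 3, 2, 47)),
    (datetime(2024, 5, 19, 11, 0), datetime(2024, 5, 19, 11, 9)),
]

month_start = datetime(2024, 5, 1)
month_end = datetime(2024, 6, 1)
total_minutes = (month_end - month_start).total_seconds() / 60

downtime_minutes = sum((end - start).total_seconds() / 60 for start, end in incidents)
uptime_pct = 100 * (1 - downtime_minutes / total_minutes)

print(f"Downtime: {downtime_minutes:.0f} min, uptime: {uptime_pct:.3f}%")
print("SLA met" if uptime_pct >= SLA_TARGET else "SLA missed")
```

However the measurement is implemented, the enterprise and the vendor should agree in advance on what counts as downtime and how it will be reported.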
As we leverage generative AI to transform business, it is critical to recognize and address the risks associated with its implementation. While government initiatives are ongoing, today the responsibility for safe and responsible use of artificial intelligence falls on our shoulders. By proactively implementing AI guardrails across data, models, systems, and operations, we can reap the benefits of AI while minimizing the harms.
May Habib is CEO and co-founder of Writer.
The opinions expressed in Fortune commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.