
Responsible AI: The Plug to Close the Gaps Left in AI Models

Artificial intelligence (AI) is no longer restricted to sci-fi books. From predicting gas pipeline corrosion rates to detecting suspicious activity in banking, AI delivers accuracy at a speed humans could not achieve on their own. However, recent incidents—such as Amazon’s AI-enabled recruitment tool being biased against women—have exposed the risks of using AI. Countries and technology companies worldwide are now aware of AI’s power to go rogue if it is not controlled and managed.

Where are we now?

AI regulations, ethics, and control measures have become common discussion points in global tech forums. Over the past three years, Responsible AI (RAI) has driven ethically minded leaders to introduce guidelines or enforce mandates worldwide. For example, the EU proposed the AI Act to regulate AI applications and categorize them by level of risk, from unacceptable and high risk down to low risk. Japan released its Governance Guidelines for the Implementation of AI Principles, which ask AI-using organizations to examine a system’s impact in terms of social acceptance, company goals, and security.

The US has drawn companies’ attention to the algorithms that form the base of AI programs. The Algorithmic Accountability Act of 2022 asks companies to assess automated systems before using or selling them. The act focuses on creating a transparent environment that empowers consumers to make educated decisions when purchasing systems that automate critical actions. Similarly, initiatives in Singapore, Canada, China, and India focus on ensuring companies understand their responsibilities while implementing AI.

AI-focused organizations have also begun employing different approaches to earn their customers’ confidence. For instance, one of the Big Tech companies brought together its AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, its Office of Responsible AI (ORA), and Responsible AI Strategy in Engineering (RAISE) to create and operationalize an RAI governance framework. A technology solutions firm, with the help of the World Economic Forum, developed a cross-disciplinary, multi-dimensional approach to upholding AI ethics in its AI systems.

We are now on the right path toward avoiding an AI-led takeover.

How is the market evolving?

The growing competition to introduce AI chatbots, such as ChatGPT and the more recent Bard, is exposing the technology’s abilities to the masses. While these bots are currently just text-based tools that can automatically search the web, they already display AI’s capacity to make judgments based on mere human inputs and permissions. In the future, those permissions may be automated as well, equipping the AI program with the power to make decisions on its own. This is where miscalculations can occur, introducing bias and triggering inappropriate actions.

So, while companies are realizing benefits from deploying AI programs, RAI is becoming more significant than ever to prevent the impending takeover. Leaders are moving toward standardizing the ethical rulebooks. The EU and the US are aligning their AI regulations and developing a common risk management framework to make AI models trustworthy. Future-forward companies are embedding the principles of trustworthy, actionable AI in their business applications.

These principles are based on maintaining fairness, consistency, explainability, data privacy, robustness, and accountability in the training data and algorithms that are the AI system’s building blocks. The challenge lies in embedding such principles into the program, which requires a sophisticated understanding of machine learning models. Currently, only a few companies truly understand the thresholds at which AI fails and can take steps to address those failures. A typical organization is still unequipped to handle the issues that arise when operationalizing machine learning models, such as poor model explainability, racial bias, and adversarial AI.

Where should we go next?

It’s time to take matters into our own hands and ensure AI systems are more reliable, secure, and effective. The following best practices will enable AI developers to design and scale applications while avoiding the gaps that seep into algorithms during development.

Focus the AI design on humans – A human-centered approach must be a critical factor during the design process. Create a variety of user personas and use-case scenarios, along with an appropriate feedback channel, to incorporate as many perspectives as possible. Disclose the training data and algorithm in full so that users retain control and fully understand the program’s outcomes.

Democratize data with metrics – Multiple metrics allow the training data to represent a greater number of subgroups and make it easier to monitor for a variety of errors. Build the metrics from user survey feedback, key performance indicators, and false-positive and false-negative rates, and align them with the system’s context and desired outcomes. For example, an alarm system for detecting smoke or fire must be sensitive to the slightest indication.
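As a concrete illustration, the sketch below computes false-positive and false-negative rates per subgroup on a hypothetical labeled dataset. The field names, grouping, and sample records are assumptions for illustration only, not a prescription for any particular system.

```python
# A minimal sketch of per-subgroup error metrics, assuming a hypothetical
# dataset with a "group" field and binary labels and predictions.
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute false-positive and false-negative rates per subgroup.

    `records` is an iterable of dicts with keys:
      "group"      - subgroup identifier (e.g. an age band or region)
      "label"      - ground-truth outcome, 0 or 1
      "prediction" - model output, 0 or 1
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["pos"] += 1
            if r["prediction"] == 0:
                c["fn"] += 1
        else:
            c["neg"] += 1
            if r["prediction"] == 1:
                c["fp"] += 1

    rates = {}
    for group, c in counts.items():
        rates[group] = {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None,
        }
    return rates

# Illustrative usage: compare error rates across two assumed subgroups.
sample = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 0},
]
print(subgroup_error_rates(sample))
```

Large gaps between subgroups on either rate are a signal to revisit the training data or the decision threshold before the system goes further.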

Perfect your raw data – Ultimately, it is the raw data that enables machine learning models to deliver insights and automated actions, so it is paramount to keep a check on the data that trains the models. Periodically check the datasets for incorrect values, missing values, sampling timelines, and relatedness to other values. You wouldn’t want a self-driving AI system that was trained on city roads in summer to drive in the countryside during winter.
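To make these checks concrete, here is a minimal sketch of a routine data audit on a hypothetical pandas DataFrame. The column names ("sensor_reading", "recorded_at") and the thresholds are illustrative assumptions, not part of any specific pipeline.

```python
# A minimal sketch of routine training-data checks, assuming a hypothetical
# DataFrame with "sensor_reading" and "recorded_at" columns.
import pandas as pd

def audit_training_data(df, max_missing=0.05, max_age_days=180):
    """Flag common raw-data problems before the data reaches a model."""
    report = {}

    # Missing values: share of empty cells per column.
    report["missing_share"] = df.isna().mean().to_dict()

    # Obviously incorrect values: e.g. negative readings for a physical sensor.
    report["negative_readings"] = int((df["sensor_reading"] < 0).sum())

    # Sampling timeline: is the newest record recent enough to be representative?
    newest = pd.to_datetime(df["recorded_at"]).max()
    report["days_since_last_sample"] = (pd.Timestamp.now() - newest).days

    # Simple pass/fail flags the team can act on.
    report["too_many_missing"] = any(
        share > max_missing for share in report["missing_share"].values()
    )
    report["data_too_old"] = report["days_since_last_sample"] > max_age_days
    return report

# Illustrative usage (file name is hypothetical):
# report = audit_training_data(pd.read_csv("training_data.csv"))
```

Running such an audit on a schedule, rather than once at project kickoff, is what keeps the training data aligned with the conditions the model actually faces.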

Understand your limitations and responsibilities – The logic behind responsible AI starts with accepting that AI is neither infallible nor permanent. Circumstances and behaviors will constantly change. Climate change, for example, will throw curveballs every now and then—with droughts, sudden fires, flooding, and shifting biodiversity in a particular location. You will need to recognize and communicate such changes in patterns to keep control over the AI system’s outcomes. Testing, tracking, and updating the system must become a regular practice in every AI-using organization.
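One lightweight way to operationalize this is to compare recent production inputs against the data the model was trained on. The sketch below uses a population-stability-style drift score for a single numeric feature; the feature, the synthetic data, and the 0.2 threshold (a common rule of thumb) are assumptions for illustration.

```python
# A minimal sketch of drift monitoring, assuming access to a training-time
# baseline and a window of recent production inputs for one numeric feature.
import numpy as np

def feature_drift_score(baseline, recent, bins=10):
    """Population-stability-style score: higher means the recent data has
    drifted further from the distribution the model was trained on."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero and log of zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Illustrative example: outdoor temperatures shifting between seasons.
rng = np.random.default_rng(0)
summer = rng.normal(30, 3, 5000)   # conditions at training time
winter = rng.normal(2, 4, 5000)    # current conditions
if feature_drift_score(summer, winter) > 0.2:   # 0.2 is a common rule of thumb
    print("Drift detected: retrain or re-validate the model.")
```

A check like this, scheduled alongside regular testing and tracking, turns "update the system when the world changes" from a principle into a repeatable practice.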

In conclusion, responsibly building the blocks of AI systems lays the foundation for future business growth and sustainability. Careful and constant assessment of datasets and machine learning models is imperative for an ethical and unbiased business environment. We surely cannot let an unreliable AI program drive humans around, maneuver satellites, or treat patients. Right?
