Ethical and Explainable AI Are Startup Imperatives in 2025


In 2024, generative AI permeated the workplace, with businesses using the technology for various purposes, from internal productivity tools to customer-facing products and services.

However, many businesses are concerned about using these technologies safely, compliantly, and ethically. In 2025, AI governance will be a central topic of conversation, particularly around ethical and explainable AI.

Many companies are still new to generative AI and have data security and privacy concerns, particularly about public large language models (LLMs), which could expose sensitive data. As a result, enterprises often protect their data by building on an existing LLM — frequently an open source model — fine-tuning it with their proprietary data, augmenting it with retrieval-augmented generation (RAG), and then deploying it for inference in their private data centers.
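
To make that pattern concrete, here is a minimal sketch of the retrieval step, assuming a toy embed_text() embedding and a generate() stand-in for a privately hosted, fine-tuned model; a real deployment would use a production embedding model and vector database rather than these placeholders.

```python
# Minimal RAG sketch: embed proprietary documents, retrieve the closest ones
# for a query, and pass them to a privately hosted LLM as context.
# embed_text() and generate() are illustrative placeholders only.
import numpy as np

def embed_text(text: str) -> np.ndarray:
    # Toy character-frequency "embedding"; swap in a real embedding model.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    return vec

def generate(prompt: str) -> str:
    # Placeholder for a call to the privately deployed, fine-tuned LLM.
    return f"[model response to: {prompt[:60]}...]"

class VectorStore:
    """Tiny in-memory stand-in for a real vector database."""
    def __init__(self):
        self.docs, self.vectors = [], []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vectors.append(embed_text(doc))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed_text(query)
        scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
                  for v in self.vectors]
        top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        return [self.docs[i] for i in top]

def answer(store: VectorStore, question: str) -> str:
    # Retrieved internal documents are injected into the prompt as context.
    context = "\n".join(store.search(question))
    prompt = f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)
```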

For vendors, this scenario is table stakes, but now that more companies are managing these AI models, they are confronting several AI governance issues. To provide products and services that users trust and can confidently use, organizations need to develop a roadmap for explainable and ethical AI.

Here’s how.

Explainable AI

Generative AI is known for its ability to analyze and interpret information and to respond to requests or prompts. Trained on large data sets, LLMs learn to identify patterns in, and relationships across, the underlying information. Then, when given a prompt, a model generates answers based on what it has learned.

Because of this, exactly how an LLM arrives at a particular answer is often unclear, even to the people who build these models. Companies that fine-tune their own models, however, want more transparency from their LLMs.

Explainable AI comprises the approaches that make AI transparent, comprehensible, and trustworthy to humans, and it is an increasingly common requirement for any enterprise deploying generative AI. As AI technology accelerates in speed and ability, it will be incorporated into more everyday products and will do things that previously seemed impossible. To address this remarkable step change, people want to understand how these AI products arrive at answers or decisions, what their impacts are, and where their biases or weaknesses lie. Once people have this understanding, they can trust AI just as they have come to trust other new technologies.

One way for companies to provide explainable AI is to make their AI systems readily auditable. The complexity of auditing technology systems has grown dramatically with the use of generative AI. Yet, according to the IEEE, responsible organizations need the capability to audit their LLMs so that it will “always be possible to understand why and how a system behaved” in a certain way. This includes documentation of AI models, sources of training data, documentation of algorithms, and evaluation metrics. Taken far enough, this amounts to building a time machine for the LLM: the ability to go back to any point and show, for example, exactly which data the model had been fine-tuned on at that moment.
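
One way to sketch that kind of paper trail is an append-only log of model lineage events. The schema below is an illustrative assumption, not a standard: each fine-tuning or deployment event records the dataset fingerprint, configuration, and metrics in effect at that moment.

```python
# Illustrative append-only audit log for model lineage. Field names are
# assumptions; a real system might write to WORM storage or a ledger.
import hashlib, json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelAuditEvent:
    model_name: str
    model_version: str
    event_type: str            # e.g. "fine_tune", "deploy", "evaluate"
    training_data_sha256: str  # fingerprint of the dataset used
    hyperparameters: dict
    evaluation_metrics: dict
    timestamp: str

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_event(log_path: str, event: ModelAuditEvent) -> None:
    # Append-only JSON Lines file: one record per lineage event.
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

event = ModelAuditEvent(
    model_name="internal-llm",
    model_version="2025.01.2",
    event_type="fine_tune",
    training_data_sha256=fingerprint(b"...training corpus bytes..."),
    hyperparameters={"epochs": 3, "lr": 2e-5},
    evaluation_metrics={"helpfulness": 0.87},
    timestamp=datetime.now(timezone.utc).isoformat(),
)
append_event("model_audit.jsonl", event)
```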

This digital paper trail could be needed for several reasons. Companies using AI in highly regulated sectors such as healthcare, finance, and law may be required by regulatory agencies, or by their own governance or legal committees, to provide an audit.

Regarding compliance, regulators are increasingly focusing on businesses’ AI use. Legislation is evolving quickly, from the EU’s AI Act, which entered into force in August 2024, to the 2022 U.S. Blueprint for an AI Bill of Rights and California’s recently passed AB 2013 on generative AI training data transparency. Companies are relying on emerging technology tools to help them track and manage AI compliance.

In a RAG pipeline, new data does not change the model’s weights; instead, documents are converted into embeddings and stored in a vector database that the LLM draws on at query time. A technology partner who can audit this process and take a virtual snapshot of the vector database any time new information is ingested can be helpful. This provides traceability, a key value-add for enterprises seeking to limit their risk.
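
A lightweight version of that snapshot idea, under the assumption that ingestion happens in batches, is to record content hashes and timestamps alongside every batch added to the vector store, so it is possible to reconstruct what the model could retrieve at any given moment. The function and field names here are illustrative.

```python
# Sketch: record a "snapshot" entry each time new documents are ingested
# into the vector store, so retrievable content at any point can be replayed.
import hashlib, json
from datetime import datetime, timezone

def snapshot_ingestion(snapshot_path: str, batch_id: str, documents: list[str]) -> None:
    record = {
        "batch_id": batch_id,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "doc_hashes": [hashlib.sha256(d.encode()).hexdigest() for d in documents],
        "doc_count": len(documents),
    }
    with open(snapshot_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Called alongside every call that adds documents to the vector database:
snapshot_ingestion("vector_store_snapshots.jsonl", "policies-2025-01",
                   ["Refund policy v4 ...", "Onboarding guide v2 ..."])
```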

As more companies use generative AI, they will need solid, verifiable, and provable answers to questions such as:

  • How did you arrive at that conclusion?
  • How did you generate this document?
  • Do you have rights to the IP generated by that generative AI?
  • Are you infringing on someone’s IP?

Ethical AI

In addition to explainable AI, the related area of ethical AI ensures that an organization’s AI systems adhere to foundational principles such as transparency, privacy, fairness, and accountability.

In terms of privacy, AI systems ingest tremendous amounts of data, and some of it may be personally identifiable information (PII). Companies should therefore follow data minimization practices in their products, in line with the nonbinding 2022 White House Blueprint for an AI Bill of Rights, and anonymize personal data whenever possible. However, even when data is anonymized, AI can potentially re-identify individuals by combining multiple data sources. Because releasing details about individuals could cause privacy violations, protecting data with strong cybersecurity protocols is another necessity.
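
As one simplistic illustration of data minimization, obvious identifiers can be stripped before text reaches a model or a training pipeline. The patterns below are assumptions for demonstration and will miss many cases; real deployments would use a dedicated PII-detection service.

```python
# Simplistic PII redaction before text is sent to an LLM or training job.
# Regex patterns are illustrative only.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def minimize(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(minimize("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> "Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED]."
```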

Bias can be introduced into AI systems in a variety of ways. Training data may skew toward a particular group or perspective (only men, for example). Algorithms can introduce bias through how they are designed and which signals they weigh in decision-making. The result can be biased mortgage decisions or biased healthcare diagnoses and treatment recommendations. Companies have a responsibility to ensure that their datasets, and the algorithms and AI tools that use them, accurately reflect the populations they will serve. Because AI tools are used for critical services, from healthcare to mortgage approvals, mistakes can have serious consequences for the end customer, so companies must regularly examine their algorithms and models for bias against particular groups.
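
One starting point for that kind of examination is a simple demographic-parity spot-check: compare positive outcome rates across groups and flag large gaps. The data and threshold here are made up for illustration; a real fairness review would use richer metrics and statistical testing.

```python
# Minimal fairness spot-check: compare approval rates across groups.
from collections import defaultdict

def approval_rate_by_group(records: list[dict]) -> dict[str, float]:
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += int(r["approved"])
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates = approval_rate_by_group(decisions)
print(rates)                                                  # {'A': 1.0, 'B': 0.5}
print("max gap:", max(rates.values()) - min(rates.values()))  # flag if the gap is large
```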

Companies also need accountability built into their organizational structures to deliver ethical, bias-free AI. That means clearly defined roles and responsibilities for managing AI, to avoid confusion or mistakes. Those responsible could include the chief legal officer, the chief technology officer, or other executives accountable for integrity, ethics, compliance, or data; some companies are also creating AI ethics committees. Human oversight of critical business or customer-facing decisions is also necessary, ensuring that important decisions are not overlooked or mistakenly delegated entirely to AI.

A broader ethics of AI underlies all of these considerations. A strong AI ethics framework or policy guides decision-making with principles and standards, supporting consistency, reliability, transparency, and explainability.
