Managing the Risks of Generative AI
March 27, 2024
Indranil Chakraborty
Business leaders, lawmakers, academics, scientists, and many others are looking for ways to harness the power of generative AI, which could transform the way we learn and work. In the corporate world, generative AI has the power to change how businesses interact with customers and drive growth. The latest research from Salesforce indicates that 2 out of 3 (67%) IT leaders plan to deploy generative AI in their business over the next 18 months, and 1 out of 3 call it their top priority. Organizations are exploring how this disruptive technology could affect every aspect of their business, from sales and marketing to service, commerce, engineering, and HR.
While there is no doubt about the promise of generative AI, business leaders want a trusted and secure way for their workforce to use this technology. Nearly 4 out of 5 (79%) business leaders have voiced concerns that the technology brings security risks and biased outcomes with it. More broadly, businesses must recognize the importance of ethical, transparent, and responsible use of this technology.
A company using generative AI to interact with customers is in an entirely different setting from an individual using it for personal purposes. Businesses have a pressing need to adhere to the regulations relevant to their industry. Irresponsible, inaccurate, or offensive outputs from generative AI could open a Pandora’s box of legal, financial, and ethical consequences. For instance, the harm caused when a generative AI tool gives incorrect steps for baking a strawberry cake is much lower than when it gives a field technician incorrect instructions for repairing a piece of machinery. If your generative AI tool is not founded on ethical guidelines with adequate guardrails in place, it can have unintended harmful consequences that come back to haunt you.
Companies need a clearly defined framework for using generative AI, aligned with their business goals, that spells out how it will help existing employees in sales, marketing, service, commerce, and the other departments generative AI touches.
In 2019, Salesforce published a set of trusted AI practices covering transparency, accountability, and reliability to help guide the development of ethical AI systems. These can be applied to any business looking to invest in AI. But having a rule book of best practices for AI isn’t enough; companies must commit to operationalizing those practices during the development and adoption of AI. A mature and ethical AI initiative puts its principles into practice through responsible AI development and deployment, combining the disciplines associated with new product development, such as product design, data management, engineering, and copyright review, to mitigate potential risks and maximize the benefits of AI. There are existing models for how companies can initiate, nurture, and grow these practices, providing roadmaps for building a holistic infrastructure for ethical, responsible, and trusted AI development.
With the emergence and accessibility of mainstream generative AI, organizations have recognized that they need specific guidelines to address the potential risks of this technology. These guidelines don’t replace core values but act as a guiding light for how they can be put into practice as companies build tools and systems that leverage this new technology.
Guidelines for the development of ethical generative AI
The following set of guidelines can help companies evaluate the risks associated with generative AI as these tools enter the mainstream. They cover five key areas.
Accuracy
Businesses should be able to train their AI models on their own data and produce results that can be verified, with the right balance of accuracy, relevance, and recall (the large language model’s ability to correctly identify positive cases in a given dataset). It’s important to recognize and communicate uncertainty in generative AI responses so that people can validate them. The simplest way to do this is to cite the data sources the AI model retrieved information from to create a response, making clear why the AI responded as it did. Highlighting uncertainty and putting adequate guardrails in place also ensures that certain tasks are not fully automated.
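As an illustration, here is a minimal Python sketch of that idea: the response object carries the sources it was grounded in plus a confidence score, and low-confidence answers are flagged for human validation. The class, field names, and threshold below are hypothetical, not part of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)  # documents the answer was retrieved from
    confidence: float = 0.0                            # model- or retrieval-derived score, 0..1

def render_answer(answer: GroundedAnswer, threshold: float = 0.7) -> str:
    """Show the answer with its sources; flag low-confidence output for human review."""
    citation = "; ".join(answer.sources) if answer.sources else "no sources retrieved"
    note = "" if answer.confidence >= threshold else "\n[Low confidence - please verify before acting]"
    return f"{answer.text}\n(Sources: {citation}){note}"

print(render_answer(GroundedAnswer(
    "Reset the valve, then restart the pump.",
    sources=["maintenance-manual-v2, section 4"],
    confidence=0.55,
)))
```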
Safety
Businesses need to make every possible effort to reduce output bias and toxicity by prioritizing regular and consistent bias and explainability assessments. Companies must protect any personally identifiable information (PII) present in the training dataset to prevent potential harm. Additionally, security assessments (such as reviewing guardrails) can help companies identify vulnerabilities that could be exploited.
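A hedged sketch of the PII point: before records reach a training set, obvious identifiers can be redacted. The regular expressions below are illustrative only and nowhere near production-grade PII detection.

```python
import re

# Rough, illustrative patterns for emails and phone numbers (not exhaustive).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text is used for training."""
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    text = PHONE.sub("[REDACTED_PHONE]", text)
    return text

record = "Customer jane.doe@example.com called from +1 415 555 0100 about her order."
print(redact_pii(record))
# Customer [REDACTED_EMAIL] called from [REDACTED_PHONE] about her order.
```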
Honesty
When aggregating training data for your AI models, prioritize data provenance and make sure there is clear consent to use that data, for example by relying on open-source and user-provided data. When AI generates outputs autonomously, it’s imperative to be transparent that the content is AI-generated; this declaration (or disclaimer) can be made with watermarks in the content or through in-app messaging.
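As a rough illustration of the disclosure idea, autonomously generated text can carry an in-app disclaimer unless a human has reviewed it. The helper and wording below are hypothetical.

```python
AI_DISCLOSURE = "This content was generated by AI and has not been reviewed by a human."

def with_disclosure(generated_text: str, reviewed_by_human: bool = False) -> str:
    """Append an in-app disclosure unless a human has reviewed and approved the text."""
    if reviewed_by_human:
        return generated_text
    return f"{generated_text}\n\n--\n{AI_DISCLOSURE}"

print(with_disclosure("Thanks for reaching out! Your refund was processed today."))
```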
Empowerment
While AI can be deployed autonomously for certain basic processes that can be fully automated, in most cases AI should play a supporting role. Generative AI today is proving to be a powerful assistant. In industries such as financial services or healthcare, where building trust is of utmost importance, it’s critical to keep humans involved in decision-making: AI can provide data-driven insights, and humans can decide how to act on them, which builds trust and transparency. Furthermore, make sure your AI model’s outputs are accessible to everyone (e.g., provide ALT text with images). Lastly, businesses must respect the people who contribute content and label data.
Sustainability
Language models are classified as “large” based on the number of parameters (learned values) they contain. Some popular large language models (LLMs) have hundreds of billions of parameters and take a great deal of machine time to train, translating to high consumption of energy and water. To put things in perspective, training GPT-3 consumed 1.3 gigawatt-hours of electricity, enough to power roughly 120 U.S. homes for a year, along with about 700,000 liters of clean water.
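A quick back-of-the-envelope check of that framing, assuming an average U.S. home uses roughly 10,800 kWh of electricity per year (approximately the EIA average):

```python
# Assumes ~10,800 kWh per U.S. home per year (approximate EIA average).
training_energy_kwh = 1_300_000          # 1.3 GWh quoted for GPT-3 training
avg_home_kwh_per_year = 10_800
print(round(training_energy_kwh / avg_home_kwh_per_year))  # ~120 homes for a year
```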
When evaluating AI models for your business, larger does not necessarily mean better. As model development becomes a mainstream activity, businesses will endeavor to minimize the size of their models while maximizing their accuracy by training them on large volumes of high-quality data. Smaller models require less computation in data centers, which means less energy consumption and a reduced carbon footprint.
Integrating generative AI
Most businesses will embed third-party generative AI tools into their operations instead of building one internally from the ground up. Here are some strategic tips for safely embedding generative AI in business apps to drive results:
Use zero or first-party data
Businesses should train their generative AI models on zero-party data (data that customers proactively share and consent to having used) and first-party data, which they collect directly. Reliable data provenance is critical to ensuring that your AI models are accurate, reliable, and trusted. When you depend on third-party data or data acquired from external sources, it becomes difficult to train AI models to produce accurate outputs.
Let’s look at an example. Data brokers may hold legacy data, or data incorrectly combined from accounts that don’t belong to the same individual, or they may draw inaccurate inferences from that data. In a business context, this affects customers as soon as AI models are grounded in that data: in Marketing Cloud, for instance, if all of a customer’s CRM data came from data brokers, the resulting personalization may be inaccurate.
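One simple way to express this in practice is to filter grounding data by provenance and consent before a model ever sees it. The record fields below are hypothetical.

```python
# Keep only zero- and first-party records with explicit consent before
# grounding a model in customer data (record shape is illustrative).
records = [
    {"source": "web_form",    "party": "zero",  "consent": True},
    {"source": "crm",         "party": "first", "consent": True},
    {"source": "data_broker", "party": "third", "consent": False},
]

grounding_set = [r for r in records
                 if r["party"] in {"zero", "first"} and r["consent"]]

print(len(grounding_set))  # 2 - the broker-sourced record is excluded
```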
Keep data fresh and labeled
Data is the backbone of AI. A language model that generates replies to customer service queries will likely produce inaccurate or outdated outputs if its training is grounded in data that is old, incomplete, or inaccurate. This can lead to what are referred to as “hallucinations”, where an AI tool confidently asserts that a misrepresentation is the truth. Likewise, if the training data contains bias, the AI tool will only propagate that bias.
Organizations must thoroughly review all the data that will be used to train models and eliminate bias, toxicity, and inaccuracy. This is key to ensuring safety and accuracy.
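A minimal sketch of such a review step, with hypothetical fields: records that are stale or missing a reviewed label are excluded before training.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # illustrative freshness window

def is_usable(record: dict, as_of: date) -> bool:
    """Keep only records that are both recent and carry a reviewed label."""
    fresh = (as_of - record["last_updated"]) <= MAX_AGE
    labeled = record.get("label") is not None
    return fresh and labeled

training_data = [
    {"text": "How do I reset my password?",  "label": "account", "last_updated": date(2023, 11, 2)},
    {"text": "Pricing from the 2019 catalog", "label": None,      "last_updated": date(2019, 6, 1)},
]

usable = [r["text"] for r in training_data if is_usable(r, as_of=date(2024, 3, 27))]
print(usable)  # only the fresh, labeled record survives
```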
Ensure human intervention
Just because a process can be automated doesn’t mean it should be. Generative AI isn’t yet capable of empathy, of understanding context or emotion, or of knowing when its outputs are wrong or hurtful.
Human involvement is necessary to review outputs for accuracy, remove bias, and ensure the AI is working as intended. At a broader level, generative AI should be seen as a means to supplement human capabilities, not replace them.
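A minimal sketch of this principle: generated text is treated as a draft and released only after an explicit human approval step. The function and names are illustrative, not a real API.

```python
from typing import Optional

def send_reply(draft: str, approved_by: Optional[str] = None) -> str:
    """Treat generated text as a draft; it only goes out after human approval."""
    if approved_by is None:
        return f"QUEUED FOR HUMAN REVIEW: {draft}"
    return f"SENT (approved by {approved_by}): {draft}"

draft = "We're sorry about the delay - a replacement ships tomorrow."
print(send_reply(draft))                          # held for review
print(send_reply(draft, approved_by="agent_42"))  # released only after review
```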
Businesses have a crucial role to play in the responsible adoption of generative AI, integrating these tools into everyday operations in ways that enhance the experience of their employees and customers. This goes back to ensuring the responsible use of AI: maintaining accuracy, safety, transparency, and sustainability, and mitigating bias, toxicity, and harmful outcomes. The commitment to responsible and trusted AI should extend beyond business objectives to include social responsibility and ethical AI practices.
Test thoroughly
Generative AI tools need constant supervision. Businesses can begin by partially automating the review process: collecting AI metadata and defining standard mitigation methods for specific risks.
Ultimately, humans must remain at the helm, validating generative AI output for accuracy, bias, toxicity, and hallucinations. Organizations can also invest in ethical AI training so that engineers and managers are equipped to assess AI tools.
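A hedged sketch of what that partial automation might look like: metadata is recorded for every output, and anything that trips an automated check is routed to a human. The blocklist and field names are purely illustrative.

```python
from datetime import datetime, timezone

BLOCKLIST = {"guaranteed cure", "100% risk-free"}  # illustrative unsafe phrases only

def review(output: str, model: str = "example-model-v1") -> dict:
    """Record metadata for every output and flag anything needing human review."""
    flags = [phrase for phrase in BLOCKLIST if phrase in output.lower()]
    return {
        "model": model,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "flags": flags,
        "needs_human_review": bool(flags),
    }

print(review("This supplement is a guaranteed cure for fatigue."))
```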
Get feedback
Listening to all stakeholders in AI (employees, advisors, customers, and impacted communities) is vital to identifying risks and refining your models. Organizations must create communication channels for employees to report concerns; incentivizing issue reporting can be effective as well.
Some companies have created ethics advisory councils composed of employees and external experts to assess AI development. Keeping open channels of communication with the larger community is key to preventing unintended consequences.
As generative AI becomes part of the mainstream, businesses have the responsibility to ensure that this emerging technology is being used ethically. By committing themselves to ethical practices and having adequate safeguards in place, they can ensure that the AI systems they deploy are accurate, safe, and reliable and that they help everyone connected flourish.
As a Salesforce Consulting Partner, we are part of an ecosystem that is leading this transformation for businesses. Generative AI is evolving at breakneck speed, so the steps you take today need to evolve over time. But adopting and committing to a strong ethical framework can help you navigate this period of rapid change.