ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI, based on its advanced family of large language models. Since its launch in November 2022, it has outpaced popular social media platforms such as Instagram and TikTok in adoption and holds the record for the fastest-growing user base. I could reel off many statistics about just how quickly ChatGPT has soared in popularity since late last year, but I’m guessing you already know this because, like me, you’re hearing people discuss it just about everywhere you go.
ChatGPT at the eye of the ‘Perfect Storm’
ChatGPT’s unprecedented rate of adoption highlights its transformative potential to reshape communication and engagement practices in today’s digitally centric world. In my recent presentation, “Mastering ChatGPT,” at the ISACA Digital Trust World conference in Boston, I described the convergence of several pivotal events over the past half-year, culminating in what can be called a “perfect storm” fueling ChatGPT’s rise.
At the eye of this storm lies the rapid evolution of ChatGPT’s capabilities, marking the advent of what we refer to as the “Age of AI” or the “Fourth Industrial Revolution.” In the presentation, I shed light on ChatGPT’s transformational capabilities, especially its potential to reshape business operations. In my personal experience, ChatGPT has proven itself valuable in tasks such as drafting initial document versions and creating LinkedIn posts, even suggesting suitable emojis! However, accompanying this storm is a limited understanding of the associated risks, compounded by the absence of a regulatory framework tailored to such advanced AI models and by varying levels of organizational preparedness for an AI-driven future.
Navigating the 'Perfect Storm': The implications of ChatGPT
The integration of ChatGPT, or any AI technology for that matter, should be seen as an organizational endeavor. It calls for interdisciplinary collaboration involving technological expertise, regulatory compliance, risk management and operational understanding. By ensuring this balanced and holistic approach, organizations can fully exploit the advantages of AI technologies like ChatGPT while mitigating potential risks and pitfalls.
Following my workshop, I was approached by numerous attendees who expressed their curiosity and concerns about the implications of ChatGPT, with three key themes emerging:
- Using ChatGPT for Governance, Risk, and Compliance (GRC) roles: Participants were interested in understanding how professionals in these fields can utilize ChatGPT to their benefit. For example, could ChatGPT assist in assessing risk scenarios?
- Guiding Transformational Change Toward an AI-Driven Organization: Attendees were keen to explore strategies for transforming into AI-driven organizations. They wanted insights into managing this transformation effectively while also addressing the key risks.
- Mitigating Workforce Displacement: The concern regarding the displacement effect of this disruptive technology was also prominent. This led to discussions on identifying roles most likely to be affected and how to provide opportunities for upskilling and aligning employees’ skills with the new demands of an AI-driven workplace.
To address these questions, I have drawn on my expertise and on key concepts from the “Mastering ChatGPT” presentation to provide the pragmatic guidance that follows.
Using ChatGPT for governance, risk, and compliance roles
For GRC, ChatGPT offers several potential applications, a number of which were raised as practical examples during my discussions with conference attendees. Here are a few noteworthy instances:
- Risk and control relationships: ChatGPT can provide insights into the risks associated with a specific control, along with relevant documentation for verification and validation purposes.
- Policy and governance formulation: ChatGPT can assist GRC teams in crafting targeted messaging and establishing industry-specific policies, enhancing the governance process.
- Remediation guidance: Capable of delivering personalized recommendations, ChatGPT can outline mitigation measures and strategies to address identified gaps.
- Navigating contextual changes to processes: In situations where businesses undergo process modifications, ChatGPT can provide invaluable advice on creating new controls or adjusting existing ones to meet regulatory demands.
These real-world applications show the transformative potential of ChatGPT in bolstering the GRC function. They demonstrate how it can accelerate decision-making, policy formation, remediation actions and compliance validation, thereby enhancing efficiency and effectiveness.
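To make the risk-scenario use case concrete, here is a minimal sketch of how a GRC analyst might query ChatGPT programmatically, using the official OpenAI Python SDK. The model name, prompt wording and the `assess_control_risk` helper are illustrative assumptions rather than prescribed GRC tooling, and any output remains a draft for human review.

```python
# Illustrative sketch only: the prompt, model choice and helper name are assumptions.
# Requires the official OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def assess_control_risk(control_description: str) -> str:
    """Ask the model to outline risks, residual exposure and verification evidence."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whatever your organization has approved
        temperature=0.2,  # lower temperature for more consistent, conservative output
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a governance, risk and compliance analyst. "
                    "For the control described, list the key risks it addresses, "
                    "likely residual risks, and evidence an auditor could use to "
                    "verify it. Flag any uncertainty explicitly."
                ),
            },
            {"role": "user", "content": control_description},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = assess_control_risk(
        "Quarterly review of privileged access to the financial reporting system."
    )
    print(draft)  # a first draft for the analyst to validate, never a final answer
```

As with the use cases above, the response is a starting point for the analyst to verify against authoritative sources, not a substitute for professional judgment.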
Guiding transformational change toward an AI-driven organization
While ChatGPT undeniably presents a transformative opportunity, it is crucial to acknowledge the significant risks associated with its use:
- Hallucinations: ChatGPT can generate irrelevant, incorrect, or nonsensical responses known as “hallucinations.” For instance, if ChatGPT is asked for historical information beyond its training data or about future events, it might generate plausible but incorrect responses.
- Automation Bias: This refers to the inclination of humans to accept suggestions from automated systems. For example, a financial analyst might accept a forecast generated by an AI model without sufficient scrutiny, which could lead to poor investment decisions.
- Societal Biases: AI models can inadvertently learn and perpetuate societal biases present in the data they are trained on. For instance, if ChatGPT is trained on a dataset containing gender or racial bias, it might generate biased responses, thereby reinforcing harmful stereotypes.
- Misinformation: AI models can spread misinformation if they generate false or misleading responses. For example, if ChatGPT is asked about a conspiracy theory, it might inadvertently provide a response that seems to validate that theory, leading to the spread of false information.
- Privacy Implications: ChatGPT could generate responses that infringe on privacy, especially when it’s used in applications where it has access to sensitive data. For example, if the AI model is used in a healthcare setting, it’s essential to ensure it doesn’t reveal patients’ personal health information in its responses.
More generally, the growing application of AI has amplified concerns about the ethical, fair and responsible use of technology that assists or replaces human decision-making. Deploying AI systems requires careful oversight to prevent unintended harm, not only to an organization’s brand and reputation but, more critically, to employees, individuals and society.
Given these potentially adverse impacts, what should organizations be doing to adjust to this new era? An AI readiness assessment should be step one. The assessment should evaluate your organization’s current state, identify potential use cases where ChatGPT can bring benefits, prioritize a business roadmap to develop targeted use cases and address potential gaps in people, processes and technology that might hinder effective deployment.
AI readiness assessment example: Let me give an example of how this approach could work. Suppose an organization wants to enhance its customer service experience. It can start with an AI readiness assessment focused on ChatGPT’s capabilities.
This assessment would begin by examining its current customer service processes, performance metrics and overall customer satisfaction levels. Next, it would identify potential use cases where ChatGPT could enhance these areas, such as 24/7 customer support or faster query resolution. Once these use cases have been identified, the organization can establish a business roadmap. This could involve a staged rollout of ChatGPT in customer service, starting with a pilot program to validate effectiveness, followed by incremental scaling based on success metrics.
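As a rough illustration of how such an assessment might be organized, the sketch below scores a few readiness dimensions and flags the gaps to close before a pilot. The dimensions, scores and maturity scale are hypothetical placeholders rather than a standard methodology; each organization would substitute its own criteria.

```python
# Hypothetical sketch of an AI readiness scorecard; the dimensions, scores and
# maturity scale are placeholders for an organization's own assessment criteria.
from dataclasses import dataclass


@dataclass
class Dimension:
    name: str
    score: int   # current maturity, 1 (ad hoc) to 5 (optimized)
    target: int  # maturity needed before piloting ChatGPT in customer service


dimensions = [
    Dimension("Customer service process documentation", 3, 4),
    Dimension("Data quality and access controls", 2, 4),
    Dimension("Staff familiarity with AI tools", 2, 3),
    Dimension("Governance and acceptable usage policy", 1, 3),
]

overall = sum(d.score for d in dimensions) / len(dimensions)
gaps = [d for d in dimensions if d.score < d.target]

print(f"Overall readiness: {overall:.1f} / 5")
for d in gaps:
    print(f"Gap to close: {d.name} (current {d.score}, target {d.target})")
```

Capturing the assessment in even a simple scorecard like this makes the gaps, and therefore the roadmap priorities, explicit and comparable across teams.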
AI governance framework: AI governance frameworks are sets of principles, policies and processes that aim to ensure the ethical, transparent and trustworthy use of AI systems. Several existing frameworks can be leveraged to provide governance over AI activities, such as Singapore’s Model AI Governance Framework, Australia’s AI Ethics Framework, the European Commission’s legal framework for AI and the AIGA Framework.
As a pioneering framework, Singapore’s Model AI Governance Framework is designed to promote two essential principles: the need for explainable, transparent and fair AI-assisted decision-making, and the significance of human-centric AI systems. This framework stands out by providing best practices that offer guidance on determining human involvement in AI-augmented decision-making and conducting algorithm audits. Another feature is its technology-agnostic approach, allowing for its application across various industries.
Acceptable usage policy: Despite the risks involved with ChatGPT, its potential for productivity enhancement is hard to overlook. The benefits may outweigh the risks in some cases, but not always. Employees are your first line of defense: if you are allowing access, are they aware of the potential risks? I consider it crucial for organizations to establish a policy governing how employees use ChatGPT.
Potentially, this policy could include:
- Restrictions on specific use cases, alongside recommendations for approved applications;
- Guidelines for data that must be excluded from the system; and
- Requirements for employees to validate the model’s outputs, consider potential biases and evaluate associated risks such as confidentiality, privacy and intellectual property rights.
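As one way to operationalize the guideline on excluded data, the sketch below screens a draft prompt for obviously sensitive patterns before it is sent to ChatGPT. The patterns and the internal client-ID format are simplified assumptions, and such a screen complements rather than replaces proper data loss prevention controls and policy training.

```python
# Simplified, assumed example of a pre-submission screen for ChatGPT prompts.
# The patterns are illustrative and far from exhaustive; a production control
# would rely on proper data loss prevention tooling and policy training.
import re

PROHIBITED_PATTERNS = {
    "email address": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "payment card number": r"\b(?:\d[ -]?){13,16}\b",
    "internal client identifier": r"\bCLIENT-\d{6}\b",  # hypothetical ID format
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the prohibited data categories detected in a draft prompt."""
    return [
        name for name, pattern in PROHIBITED_PATTERNS.items()
        if re.search(pattern, prompt)
    ]


if __name__ == "__main__":
    findings = screen_prompt(
        "Summarize the complaint from CLIENT-123456, reachable at jane@example.com."
    )
    if findings:
        print("Blocked. Remove before submitting:", ", ".join(findings))
    else:
        print("No prohibited patterns detected; follow the acceptable usage policy.")
```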
Given the rapid evolution in this field, I stress the importance of continuous risk assessments and maintaining open communication within the organization about use cases, opportunities and risks. This proactive approach, I believe, allows organizations to leverage the benefits of ChatGPT responsibly and effectively.
Mitigating workforce displacement
In the late 1990s and early 2000s, I witnessed the rise of the Internet as a commercial platform and the rapid growth of e-commerce businesses in what became known as the Dot-Com era. That era created job roles that were virtually non-existent before it: web developers built the websites behind these new businesses, and SEO specialists emerged to optimize those sites for higher visibility in search engine rankings.
Now, we’re in the AI era, which, like the Dot-Com era, requires new roles and skillsets. ChatGPT is causing a shift in existing roles and generating new ones. For instance, as AI redefines customer service, I anticipate seeing AI trainers specializing in refining chatbots for more human-like interactions.
Organizations should identify these emerging roles and invest in upskilling initiatives to equip their workforce for the transition. AI is already introducing new professions, such as AI ethicists, who ensure AI applications align with ethical standards and societal values, and prompt engineers, who optimize AI model prompts for more efficient and effective responses.
Preparing for this transformative shift to an AI-integrated workforce will enable businesses to leverage the full potential of technology while minimizing the associated disruptions.
Surviving and thriving in ChatGPT's 'perfect storm'
The “Perfect Storm” brought forth by ChatGPT, while full of transformative potential, comes with its fair share of challenges that we must navigate diligently. From redefining GRC roles to ushering in a new era of AI-driven organizations and addressing the risks of workforce displacement, the storm offers us opportunities for growth as much as it does trials.
By addressing these challenges head-on through AI readiness assessments, the establishment of AI governance frameworks and workforce upskilling initiatives, we can harness this storm’s energy. This approach ensures that we don’t merely weather this storm but emerge stronger and better prepared for an AI-driven future. So, let's set sail, navigating these exciting waters with an eye on the horizon, anticipating challenges, and making the most of the opportunities that come our way. The journey may be complex, but the destination—a world where AI serves as a tool for enhancement and progress—is well worth the effort.
Mary Carmichael, CPA, CMA, ICD.D, MBA, CISA, CRISC, is director, strategy, risk, and compliance advisory, Momentum Technology, as well as vice president, ISACA, and a member of the ISACA Emerging Trends Working Group.
Originally published by ISACA Now Blog.