Tommy will be among a group of industry experts presenting at this year's CPABC Nexus Days: Information Technology Insights virtual conference, July 23-24. Discover how you and your organization can leverage cutting-edge tools and strategies to thrive in today’s business landscape. Attend the full event or individual days. Save your spot today.
Artificial intelligence (AI) is technology that can mimic human behavior and perform tasks that usually require human intelligence. AI systems understand and respond to requests, engage in reasoning, learn from past experience, and make decisions based on the inputs they are given.
AI can help the accounting and finance sector improve efficiency, accuracy, and compliance, and in doing so create new value and drive innovation. However, AI also carries security and privacy risks that accounting and finance professionals must understand and manage. This article outlines some of AI’s primary security and privacy risks in accounting and finance and how to reduce them.
What are the issues?
Security risks of AI in accounting and finance are the potential threats to the confidentiality, integrity, and availability of AI data and systems. Common security risks include:
- Data breaches: AI systems need large volumes of data to train and operate, which can expose private and confidential information to unauthorized access, theft, or leakage. Data breaches can cause financial losses, reputational damage, legal liability, and regulatory penalties for accounting and finance organizations and their clients.
- Malicious attacks: Bad actors can exploit weaknesses in AI systems or manipulate their outputs. For example, hackers can launch cyberattacks to disrupt, damage, or take over AI systems, or feed models adversarial inputs crafted to trick them into making wrong or harmful predictions or decisions. Malicious attacks can make AI systems unreliable, inaccurate, and untrustworthy.
- Insider threats: AI systems can also be harmed by insiders who have legitimate access to data and systems but use them for destructive or unauthorized purposes. For example, employees, contractors, or partners can misuse AI systems to commit fraud, sabotage, or espionage. Insider threats pose a significant risk to the security and integrity of both AI systems and the data they rely on.
AI privacy risks in accounting and finance
Privacy risks of AI in accounting and finance are the potential harms and violations affecting the rights and interests of individuals and organizations whose data are collected, processed, or shared by AI systems. Typical privacy risks include:
- Data misuse: AI systems can use data for purposes inconsistent with the original consent or expectations of the data subjects or owners. For example, data can be repurposed for marketing, profiling, or surveillance without proper consent or adequate notice. Data misuse undermines the privacy and autonomy of individuals and organizations and lowers their trust and confidence in AI systems.
- Data discrimination: AI systems can create or amplify unfair or biased outcomes that affect the rights and opportunities of individuals and groups based on personal or sensitive attributes such as age, gender, or race. For example, AI systems can discriminate against individuals or groups in credit scoring, hiring, or pricing (a minimal illustrative check for this kind of disparity is sketched after this list). Data discrimination harms the privacy and dignity of those affected and can create social and ethical problems for accounting and finance organizations and their clients.
- Data loss: AI systems can lose or delete data because of technical failures, human error, or natural disasters. Data loss degrades the quality, completeness, and accuracy of data and AI systems, hurting their performance and functionality, and it can cause permanent harm to the privacy and interests of the individuals and organizations whose data are lost.
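To make the data discrimination risk more concrete, the brief sketch below shows one simple way an organization might review a model's decisions for large gaps between groups before relying on them. It is an illustrative sketch only: the column names ("age_band", "approved") and the 20% tolerance are hypothetical and are not drawn from any particular system, standard, or regulation.

```python
# Illustrative only: hypothetical decision data and a simple disparity check.
import pandas as pd


def approval_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g., approvals) for each group."""
    return df.groupby(group_col)[outcome_col].mean()


def flag_disparity(rates: pd.Series, max_gap: float = 0.20) -> bool:
    """Flag a potential fairness issue when the gap between the best- and
    worst-treated groups exceeds the chosen (hypothetical) tolerance."""
    return (rates.max() - rates.min()) > max_gap


if __name__ == "__main__":
    # Hypothetical model decisions: 1 = approved, 0 = declined.
    decisions = pd.DataFrame({
        "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
        "approved": [1, 0, 1, 1, 0, 0],
    })
    rates = approval_rates_by_group(decisions, "age_band", "approved")
    print(rates)
    print("Potential disparity:", flag_disparity(rates))
```

A check like this does not prove or rule out discrimination, but it gives reviewers a starting point for questioning how a model treats different groups.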
What can we do to mitigate these risks?
These security and privacy risks can have severe consequences for accounting and finance organizations and their clients, as well as for society and the economy. It is therefore essential to adopt appropriate measures and best practices that reduce these risks and ensure AI is used responsibly and ethically in accounting and finance. Possible measures and best practices include:
- Data protection encompasses the policies and procedures that protect data from unauthorized or unlawful access, use, disclosure, modification, or destruction. It helps prevent data breaches, malicious attacks, and data misuse, and it preserves the confidentiality, integrity, and availability of data and AI systems. Data protection measures can include encryption, authentication, authorization, backup, recovery, and audit (encryption is illustrated in the sketch following this list).
- Data governance involves the framework and processes that define the roles, responsibilities, and rules for collecting, processing, and sharing data. It helps ensure data quality, consistency, and compliance while respecting the rights and interests of data subjects and owners. Data governance can include measures such as data inventory, data classification, data minimization, data retention, data deletion, and data ethics (data minimization is also illustrated in the sketch following this list).
- Data security encompasses the standards and practices that protect data and AI systems from internal and external threats. Implemented correctly, it helps detect, prevent, and respond to data breaches, malicious attacks, and insider threats, and it improves the reliability, accuracy, and trustworthiness of AI systems and their outcomes. Data security can include security assessments, monitoring, testing, training, and incident response.
- Data privacy includes the principles and practices that protect the rights and interests of individuals and organizations whose data are collected, processed, or shared by AI systems. Data privacy helps prevent data misuse, discrimination, and loss, and it helps maintain the privacy and autonomy of individuals and organizations. Data privacy can include privacy by design, privacy impact assessments, privacy notices, consent statements, privacy policies, and compliance with privacy regulations.
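As a concrete illustration of two of the measures above, the sketch below shows how sensitive fields might be stripped from a client record before it reaches an AI system (data minimization) and how the full record might be encrypted before storage (data protection). This is a minimal sketch under stated assumptions: the field names are hypothetical, it uses the third-party Python "cryptography" package, and a real deployment would rely on managed key storage and vetted security tooling.

```python
# Illustrative only: a hypothetical client record, minimized and encrypted.
# Assumes the third-party "cryptography" package (pip install cryptography).
import json

from cryptography.fernet import Fernet

# Hypothetical fields the AI model does not need to see.
SENSITIVE_FIELDS = {"sin", "account_number", "email"}


def minimize(record: dict) -> dict:
    """Data minimization: drop sensitive fields before the record reaches an AI system."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}


def encrypt_record(record: dict, key: bytes) -> bytes:
    """Data protection: encrypt the full record before it is stored or transmitted."""
    return Fernet(key).encrypt(json.dumps(record).encode("utf-8"))


if __name__ == "__main__":
    client_record = {
        "client_id": "C-1042",
        "sin": "123-456-789",
        "account_number": "00123-456",
        "email": "client@example.com",
        "invoice_total": 18250.00,
    }
    key = Fernet.generate_key()                  # in practice, keep keys in a secure vault
    stored = encrypt_record(client_record, key)  # encrypted copy for storage
    model_input = minimize(client_record)        # only the fields the model needs
    print(model_input)
```

The point is not the specific library but the sequence: decide what the AI system genuinely needs, remove everything else, and protect what is retained.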
Summary
AI technology can bring many benefits and opportunities to the accounting and finance sector, but it also carries security and privacy risks that must be managed. Accounting and finance professionals must be aware of these risks and apply appropriate measures and best practices to reduce them, ensuring AI is used responsibly and ethically. By doing so, they can improve the security and privacy of data and AI systems and build trust and confidence among their clients and stakeholders.
Tommy Stephens is one of the shareholders in K2 Enterprises (www.k2e.com). He affiliated with the firm in 2003 and became a shareholder in 2007. At K2, Tommy focuses on creating and delivering content and is responsible for many of the firm's management and marketing functions.