Navigating the complex landscape of artificial intelligence adoption

Artificial intelligence (AI) has rapidly become an integral part of our daily lives, reshaping our perceptions of technological solutions. Indeed, the swift emergence of AI has caught many off guard. Initially, some denied its practical application in real life, fueling skepticism about its benefits.

The pivotal question is: What are the risks and benefits associated with the adoption of AI in general? Striking the right balance between denial and resistance on the one hand and rapid adoption on the other poses a significant challenge. A middle-ground approach, marked by cautious pragmatism, is essential as we navigate the uncertainties surrounding AI. Let’s delve into the risks associated with current AI technologies.

Understanding the current state of AI

At its current stage, AI mirrors the mindset of its developers, encapsulating their conscious and unconscious biases, their understanding of the associated risks, and their assumptions about the risk tolerance of its ultimate users. AI has not yet achieved full self-development, nor can it replace human intelligence and decision-making. Its potential for society is still unfolding, with numerous iterations expected before it attains self-sustainability. While AI’s prospect of enhancing work efficiency and diminishing routine tasks is promising, it is crucial not to overlook its potential risks and biases.

What are the risks you need to consider?

1. Data and its reliability for AI feed

For AI to yield high-quality outcomes, the organization, cleanliness and reliability of the data it ingests are crucial considerations. Merely applying an AI solution to tasks such as data analytics may not yield the desired results if the organization lacks robust data governance.

It is imperative that the data feeding into the AI solution is accurate and dependable. The adage “junk in, junk out” from the days of legacy system conversions holds true: what enters the platform determines the reliability of what comes out. Ensure that your data is clean, reliable, well organized and devoid of biases at the source, before it goes into your AI solution. The integrity and quality of data are paramount for the proper functioning of AI today. While future iterations of AI may be able to clean and organize source data themselves, current use cases show that ultimate users must address this concern proactively.
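As a minimal illustration of the “junk in, junk out” principle, the sketch below profiles a batch of records before ingestion. The field names, record shape and checks are hypothetical assumptions for illustration, not features of any particular AI platform:

```python
def profile_records(records, required_fields):
    """Profile a batch of dict records before they feed an AI pipeline.

    Returns counts of records with missing required fields, exact
    duplicates, and records that pass both checks. These checks are
    illustrative only; real data governance goes much further.
    """
    seen = set()
    counts = {"missing": 0, "duplicate": 0, "clean": 0}
    for rec in records:
        key = tuple(sorted(rec.items()))  # identity used for duplicate detection
        if any(rec.get(f) in (None, "") for f in required_fields):
            counts["missing"] += 1       # "junk in": incomplete record
        elif key in seen:
            counts["duplicate"] += 1     # repeated records can skew training
        else:
            counts["clean"] += 1
        seen.add(key)
    return counts
```

A pipeline might, for instance, refuse to ingest a batch whose clean share falls below an agreed threshold, pushing the data quality conversation back to the source system rather than into the model.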

Privacy must always be considered when dealing with data. Current AI algorithms are trained to gather, process and potentially transmit information as instructed, exposing AI users to potential violations of data privacy regulations. Be mindful of the jurisdictions in which you operate and of the states and countries your data moves through.

Another critical aspect of data reliability involves assessing the security of training datasets for AI. Identify and block datasets containing illegal content early in the process. Obtain lawful consent from individuals or organizations whose nonpublic information is used in the AI feed, even for training purposes. Privacy considerations and associated risks are paramount even during the training phase of an AI solution.

Compliance also means proactively identifying and isolating potential intellectual property (IP) infringements in the training dataset for AI purposes. Establish transparent communication channels to inform users about IP risks and their responsibilities. Collaborate with legal and compliance groups to disclose information/data from others used in the training dataset. Establishing communication channels is advisable for AI users or developers working on the dataset feed to raise concerns about compliance violations, whether inadvertent or malicious.

2. Lack of understanding of AI technology

Recruiting professionally trained, knowledgeable resources to implement AI solutions has become challenging. Beyond initial deployment, organizations face the complex task of determining which AI tools best suit their needs and how to administer those solutions on an ongoing basis. A noticeable expertise gap has emerged, with technology advancing faster than the next generation of talent can keep pace.

The market is driving the rapid development of AI solutions, with not only the usual major players but also startups and middle market providers vying for a significant share. This competition is consuming available resources in the market. Meeting the demand for talent proves difficult, both in terms of the quantity and speed required to keep up with the ever-changing landscape of AI technology.

Many organizations have concerns about their position in the AI race, potential workforce layoffs because of AI adoption, and the return on investment (ROI) from adopting this new technology.

It’s important to acknowledge that many have recently adapted to technologies and regulations related to cloud solutions, cryptocurrency, cybersecurity and data privacy, just to name a few. The disruption caused by COVID-19 has reshaped perceptions of business and technology resilience in the context of business continuity and disaster recovery. Executives, who are already navigating diverse challenges with the above, must also grapple with understanding AI while concurrently managing their company’s business to meet shareholders’ expected returns. The constant learning curve about new risks and their mitigation strategies has proven overwhelming for many.

To be fair, the nontechnology workforce has made significant strides in understanding areas outside their prior knowledge. However, the introduction of AI brings about a renewed sense of fatigue and fear of the unknown among market participants. The complexity of ever-advancing solutions, particularly AI, raises a critical question: Does our organization have the right resources for AI purposes to make intelligent decisions for our businesses?

3. Professional audit firms’ depth of experience in AI to service the market

The apprehension toward technology is rapidly intensifying because of the multitude of unknowns. It is essential to step back and evaluate whether audit firms have well-trained, knowledgeable auditors in place who are capable of performing audits with the assistance of AI-based solutions.

The trajectory of technological innovations today is evident, encompassing robotics, machine learning, natural language processing and more sophisticated data analytics, including big data. Organizations should seek a firm that recruits new hires directly from college who possess AI education and proficiency in relevant languages (such as Python, R, Prolog, LISP, etc.). Seek hands-on experts in the AI field who understand the tools from the back end. Rigorous training on the subject is essential to grasp concepts and continually enhance that knowledge base.

As auditors, it is imperative to understand how AI algorithms function. A lack of knowledge can lead to incorrect conclusions or an overreliance on AI. Even basic algorithms can have a significant impact when applied at scale, and understanding them may shed light on the risks posed by more intricate algorithms. The fundamentals (the inputs, rules and outputs) remain true for any algorithm.
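To make the inputs, rules and outputs framing concrete, here is a deliberately simple, hypothetical routing rule of the kind an auditor might encounter embedded in a payables process. The threshold and field names are invented for illustration, not drawn from any real system:

```python
AUTO_APPROVAL_LIMIT = 5000  # assumed business rule, purely illustrative

def route_invoice(amount, vendor_preapproved):
    """Input: the invoice amount and the vendor's pre-approval status.
    Rule: pre-approved vendors at or under the limit skip manual review.
    Output: the routing decision applied to the transaction."""
    if vendor_preapproved and amount <= AUTO_APPROVAL_LIMIT:
        return "auto-approve"
    return "manual review"
```

Even a two-line rule like this, applied across thousands of invoices, determines which transactions a human ever sees. An auditor would test both output paths and ask who can change the limit, and under what controls.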

To identify algorithms, auditors should adopt an inquisitive mindset when scrutinizing business processes and engage in process walkthrough discussions to understand:

  • Inputs and outputs: Clarify the inputs and outputs relevant and critical for the scope of work.
  • Process execution frequency: Understand the frequency of process execution.
  • New automations: Assess whether new automations were added to the previously known processes or their components.
  • Algorithm deployment areas: Hold discussions with the clients to pinpoint areas, if any, where the organizations have deployed algorithms.
  • Output variability: Investigate whether the output varies based on specific input criteria and anticipate any expected variances for testing.
  • Logic or rules: Determine the logic or rules employed to generate the output.
  • Complexity impact: Assess whether the algorithms are complex and if they will affect the risk-assessment process.
    • Risk of algorithm input: Evaluate the risk associated with the quality and reliability of the algorithm input.
    • Assumptions and methodologies: Examine the assumptions and judgments made by management and solution developers, along with the methodologies (i.e., the underlying logic for processing and output generation) used in the algorithm. Consider IT general controls related to the algorithm.
  • Input controls: Identify input controls and understand how the users of the AI gain comfort over those controls.
  • Output review: Investigate whether there is a review of the output by the person(s) responsible for it. Assess the precision of the review and the reviewer’s understanding of what to expect from the output.

By addressing these aspects as part of understanding the client environment, auditors can gain a comprehensive view of the algorithms embedded within business processes and effectively assess the associated risks.

4. Biased and unfair outcome of AI

It’s worth emphasizing again that AI is built by humans and inherits its creators’ biases in its algorithms. Presently, current AI models lack a robust, practical capability to self-identify biases. AI solutions are only as fair and unbiased as the programmers who wrote them. Take this into consideration when forming opinions in your work involving AI.
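One simple, hedged way to surface a potentially biased outcome is the “four-fifths rule” heuristic from employment analytics: flag a concern if any group’s favorable-outcome rate falls below 80% of the highest group’s rate. The sketch below assumes labeled outcome data is available, which is itself an assumption, and a failed check is a flag for investigation, not proof of bias:

```python
def passes_four_fifths(outcomes):
    """outcomes: iterable of (group_label, favorable_bool) pairs.

    Computes each group's favorable-outcome rate and returns False when
    any group's rate falls below 80% of the best-performing group's rate
    (the four-fifths heuristic).
    """
    totals, favorable = {}, {}
    for group, is_favorable in outcomes:
        totals[group] = totals.get(group, 0) + 1
        favorable[group] = favorable.get(group, 0) + int(is_favorable)
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())
```

A check like this only catches one narrow class of disparity; it says nothing about biases hidden in how the input data or group labels were produced in the first place.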

5. Cybersecurity concerns

Like any technology, AI solutions are susceptible to cybersecurity vulnerabilities. Hackers can exploit weaknesses in AI solutions to pilfer sensitive or proprietary data. As previously noted, the rapid pace of AI development may outstrip the cybersecurity considerations of the companies building the software. The enthusiasm surrounding new technology often eclipses security concerns and requirements. Be aware of the potential vulnerabilities of AI solutions, particularly if your chosen AI has not undergone a thorough vendor selection process aligned with responsible technology adoption practices.

The takeaway

The utilization of AI tools and models holds vast, untapped potential. Factors such as data quality, ongoing education on the subject, adherence to ethical standards, caution against overreliance on technology, cyber-risk and data privacy considerations, and compliance with regulatory requirements should all be at the forefront of everyone’s mind when engaging with AI tools.

Learn more about RSM’s artificial intelligence governance services team, and how their insights and solutions can give you the tools to identify and address risks and capitalize on the power of AI.

Let’s Talk!

Call us at (325) 677-6251 to discuss your specific situation.


This article was written by RSM US LLP and originally appeared on 2024-02-16.
© 2022 RSM US LLP. All rights reserved.
https://rsmus.com/insights/services/digital-transformation/navigating-the-complex-landscape-of-artificial-intelligence-adoption.html

RSM US Alliance provides its members with access to resources of RSM US LLP. RSM US Alliance member firms are separate and independent businesses and legal entities that are responsible for their own acts and omissions, and each is separate and independent from RSM US LLP. RSM US LLP is the U.S. member firm of RSM International, a global network of independent audit, tax, and consulting firms. Members of RSM US Alliance have access to RSM International resources through RSM US LLP but are not member firms of RSM International. Visit rsmus.com/about us for more information regarding RSM US LLP and RSM International. The RSM logo is used under license by RSM US LLP. RSM US Alliance products and services are proprietary to RSM US LLP.