AI ethics a growing concern

The increased use of artificial intelligence in accounting software has brought with it growing concerns over the ethical challenges this technology creates for professionals, their clients and the public as a whole. 

The past few years have seen a growing number of accounting solutions touting their use of AI for a wide range of applications, from tax planning and audits to payroll, expenses, ERP and CAS. The accounting profession spent $1.5 billion on such software in 2021 and is projected to spend $53 billion by 2030, according to a report from Acumen Research and Consulting.

Despite this rapid growth, too little attention has been paid to the ethical considerations that come with it, according to Aaron Harris, chief technology officer of Sage, particularly when there is money to be made.

"I have seen, in a lot of cases, that the temptation of commercial success is a louder voice than any ethical concerns," he said. 

But what exactly are those ethical concerns? Harris said the current issues have less to do with the accidental creation of a robot overlord and more to do with the insertion of all-too-human biases into the code. He raised the example of something many businesses, including accounting firms, now use routinely: automated resume screening. These programs, he said, are trained on existing data to guide their decisions, much of which reflects human-created biases. If an AI is trained on biased data, the AI will act in a biased way, reinforcing structural inequalities in the business world.

"If you've created an AI that parses an applicant's resume, and makes a decision based on whether or not to proceed to an interview, if the data that you feed into that AI for training purposes disproportionately represents one ethnicity or another, or one gender … if African-American resumes, if women's resumes, are underrepresented, the AI naturally, because of the data fed into it, will favor white males because it's quite likely that was the bulk of the resumes that were in the training data," he said.

Enrico Palmerino, CEO of Botkeeper, raised a similar point, saying there have already been issues with loan-approval bots used by banks. Much like the resume bots, the loan bots use bank data to identify who is and is not a default risk, and use that assessment to determine whether someone gets a loan. The bots flagged minorities as default risks when the real predictors were bad credit or low cash on hand; the bots simply learned the wrong correlation.

"As a result of that it went on to start denying loans for people of color regardless of where they lived. It came to this conclusion and didn't quite understand how geography tied into things. So you've got to worry more about that [versus accidentally creating SkyNet]," he said.

In this respect, the problem of making sure an AI is taught the right things is similar to making sure a child grows up with the right values. Sage's Harris, though, noted that the consequences for a poorly taught AI can be much more severe. 

"The difference is if you don't raise a child right, the amount of damage that child can do is sort of contained. If you don't raise an AI right, the opportunity to inflict harm is massive because the AI doesn't sleep, it has endless energy. You can use AI to scan a room. An AI can look across a room of 1,000 people and very quickly identify 999 of them. If that's used incorrectly, perhaps in law enforcement, to classify people, the AI getting people wrong can have catastrophic consequences. Whereas a person has no capacity to recognize 1,000 people," he said. 

However, Beena Ammanath, executive director of the global Deloitte AI Institute, noted that these bias case studies can be more nuanced than they first appear. While people strive to make AI unbiased, she noted that it can never be 100% so, because it's built by people and people are biased. It's more a question of how much bias we're willing to tolerate.

She pointed out that, in certain cases, bias either is not a factor at all in AI or is even a positive, as in the case of using facial recognition to unlock a phone. If the AI were completely unbiased, it wouldn't be able to discriminate between users, defeating the purpose of the security feature. With this in mind, Ammanath said she would prefer looking at specific cases, as the technology's use is highly context-dependent. 

"So, facial recognition being used in a law enforcement scenario to tag someone as a criminal: If it's biased, that's probably something that should not be out in the world because we don't want some people to be tagged that way. But facial recognition is also used to identify missing children, kidnapping victims, human trafficking victims and it is literally [used in] the exact same physical location, like a traffic light. Yes, it is biased, but it is helping us rescue 40% more children than before. If we hadn't used it, is that acceptable or should we just completely remove that technology?" she said. 

So then, rather than think of the topic in a broad philosophical sense, Ammanath said it's more important to think about what people would actually need for AI to work effectively. One of the biggest things, she said, was trust. It's not so much about building an AI that's perfectly ethical, which is impossible, but rather one that can be trusted by everyday people. As opposed to an abstract discussion of what is and isn't ethical, she said trust can be defined and solved for, which she said is more practical.

Ethics is a big part of this, yes, but so are reliability (people need to know the program will work as expected), security (people must be confident it hasn't been compromised), safety (people need to feel confident the program won't harm them physically or mentally), explainability (its processes cannot just be a black box), respect for privacy (the data that trains the program was used with their consent), and the presence of someone — presumably human — who ultimately is accountable for the AI's actions. "All of these are important factors to consider if you want to make AI trustworthy because when we use AI in the real world, when it's out of the research labs and is being used by accountants or CEOs, you need to be able to trust that AI and know that broader context," she said.

Like Harris and Palmerino, she noted that the consequences of failure can be quite high. For just one example, she pointed to recent findings on how social media algorithms can drive things like depression and suicide. Absent some sense of responsibility, people could be setting themselves up for what she dubbed a "Jurassic Park scenario" of AIs that no one can trust to do the right thing. "[Responsibility] means asking the question, 'Is this the right thing to do? Should this AI solution even be built?' I'd like to avoid that Jurassic Park scenario: Just because your scientists could, they did it without thinking if they should," she said.

Palmerino said Botkeeper indeed takes these kinds of considerations into account when developing new products. The company's process, he said, involves looking at everything its products touch and analyzing where potentially unethical actions can creep in. Right now, he said, "We have not been able to identify those situations," but the key is that they looked in the first place and intend to keep looking. He didn't rule out the possibility of future issues along these lines, for example if the company starts focusing on the tax area.

"Say we teach the AI that there are certain buckets to [expense] categorization that bode advantageously from a tax perspective for the client, whether or not that is proper. The AI identifies things that are strategic and more beneficial than things that are not, so it could develop a bias that might categorize everything as meals and entertainment, even if it's a personal meal or out of state meal, to get that 100% deduction because it understands those incentives and this behavior could then be reinforced because the business owner starts encouraging it," he said. For such a case, he said, programmed "guard rails" of some sort will be needed.

Harris described a similar process at Sage, saying his company takes a careful approach to AI, making sure to start with a clear understanding of the ethical risks. For instance, the company would need to consider whether an AI collections bot could unfairly penalize some customers, or be more aggressive or harassing in attempting to collect from some than from others, because the data used to train it was flawed. With these scenarios in mind, Harris said it's important that human oversight and accountability be factored into the product, even if the AI is highly advanced.

"We've been pretty conservative in our approach to AI at Sage … We started off pretty early trying to balance our enthusiasm for what we can accomplish with AI with the humility that AI has immense opportunity for positive impact and making things more efficient but done wrong can have an immense negative impact as well," he said. 

Palmerino said he was encouraged that these issues are getting more public attention, and urged professionals to think carefully about the potential negative impacts of their actions.

"If you plan on having anything that will have an impact, you have to consider the good and the bad. I have to be looking at it from all angles to make sure I'm changing things for the better. … Anyone reading this should take a second to reflect and think: Do you want to be remembered for having good consequences, or be remembered for creating something negative, something you can't take back. We've only one life and the only way we live on after death is in memory. So let's hope you leave a good memory behind," he said.
