March 21, 2024
Researchers have developed a new training tool designed to improve the accuracy of artificial intelligence (AI) programs by accounting for people's tendency to lie when economic incentives are at stake, as in mortgage applications or insurance claims.
The work was led by Mehmet Caner, Thurman-Raytheon Distinguished Professor of Economics at North Carolina State University's Poole College of Management and co-author of the paper describing it. It addresses a basic challenge for AI algorithms: programs used in business settings to make predictions and forecasts typically rely on purely statistical methods, which can inadvertently reward people for supplying false information in pursuit of a favorable outcome, such as securing a mortgage or lowering an insurance premium.
“The problem is that this approach creates incentives for people to lie, so that they can get a mortgage, lower their insurance premiums, and so on,” explains Caner.
To tackle the issue, the researchers developed a set of training parameters that teach an AI algorithm to recognize and account for users' economic incentives to deceive. Trained with these parameters, the modified algorithm is better at spotting inaccurate information from users, which in turn weakens the incentive to lie in the first place.
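The article doesn't specify which estimator or training parameters the team adjusted, so the sketch below is a minimal, hypothetical illustration of the underlying idea only: if the adjustable parameter is something like a Lasso penalty on a feature that applicants can manipulate (here, reported income, with made-up numbers throughout), increasing that penalty shrinks the model's weight on the feature and directly reduces the score gain an applicant gets from misreporting it.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy mortgage-style data: one manipulable feature (income) drives a
# creditworthiness score. All values here are invented for illustration.
n = 1000
income = rng.normal(50.0, 10.0, n)                 # true income
score = 0.8 * income + rng.normal(0.0, 5.0, n)     # "true" score

# An applicant who inflates income by lie_gain units raises their
# predicted score by (model weight on income) * lie_gain.
lie_gain = 10.0
X_true = income.reshape(-1, 1)
X_lied = (income + lie_gain).reshape(-1, 1)

# Naive model vs. a hypothetical "deception-aware" variant: the only
# adjusted training parameter is a heavier Lasso penalty, which shrinks
# the weight on the manipulable feature and blunts the payoff to lying.
naive = Lasso(alpha=0.1).fit(X_true, score)
aware = Lasso(alpha=30.0).fit(X_true, score)

for name, model in [("naive", naive), ("deception-aware", aware)]:
    gain = (model.predict(X_lied) - model.predict(X_true)).mean()
    print(f"{name}: average score gain from lying = {gain:.2f}")
```

With these numbers, the naive model passes an inflated income through to the prediction almost at its full learned weight, while the more heavily penalized model passes much less of the lie through; the design choice is to trade a little accuracy on truthful data for a weaker incentive to misreport.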
In proof-of-concept simulations, the adjusted AI was indeed better at detecting inaccurate information from users, which reduced their incentive to provide false data. The researchers caution, however, that further work is needed to pin down where the line falls between small lies and significant ones.
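The article leaves that threshold open. As a purely hypothetical illustration of why one should exist at all, suppose lying yields a score gain proportional to the model's weight on the misreported feature but carries a convex cost, say a detection risk that grows with the size of the lie; both the weight and the cost coefficient below are assumptions, not figures from the paper.

```python
import numpy as np

weight = 0.5            # assumed model weight on the manipulable feature
detection_cost = 0.04   # assumed convex cost coefficient for lying

lie_sizes = np.linspace(0.0, 20.0, 201)
net_payoff = weight * lie_sizes - detection_cost * lie_sizes ** 2

# Payoff is positive for small lies and negative past the break-even
# point weight / detection_cost; here, lies beyond ~12.5 units don't pay.
best = lie_sizes[np.argmax(net_payoff)]
threshold = weight / detection_cost
print(f"most profitable lie size: ~{best:.1f} units")
print(f"lying stops paying beyond ~{threshold:.1f} units")
```

Under this toy cost model, small lies remain profitable while large ones are self-defeating, which is one way to make the small-versus-significant distinction precise.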
“While this work represents a significant step forward in combating deception in AI programs, there is still more to be done to refine and optimize these training parameters,” notes Caner.
Crucially, the researchers are making the new training parameters publicly available, so that developers can experiment with them and integrate them into their own AI systems.
“This work demonstrates that we can enhance AI programs to diminish economic incentives for human deception,” Caner asserts. “With continued advancements, we may eventually eliminate these incentives altogether.”
As AI technologies shape ever more aspects of society, efforts to improve their accuracy and integrity become increasingly important. Tools like these training parameters promise more trustworthy and reliable AI systems, benefiting industries and users alike.
The research marks a meaningful stride toward improving both the ethical and the functional dimensions of AI, paving the way for a future where AI-driven decisions are not only accurate but also trustworthy.