Humans and artificial intelligence don’t necessarily work as well together as many assume, a new study suggests. The looming question is at what point human tasks and AI tasks are best blended.

In many cases, humans and machines may work better independently of each other, suggests the study, published out of MIT’s Center for Collective Intelligence. The researchers, led by MIT’s Michelle Vaccaro, analyzed 100 experiments that evaluated the performance of humans alone, AI alone, and combinations of the two.

Collectively, these studies show that “human–AI systems do not necessarily achieve better results than the best of humans or AI alone,” Vaccaro and her colleagues suggest. “Challenges such as communication barriers, trust issues, ethical concerns and the need for effective coordination between humans and AI systems can hinder the collaborative process.”

As a result, on average, “human-AI combinations performed significantly worse than the best of humans or AI alone,” the study shows. Ultimately, humans still make the final choices in the cases explored. “Most of the human–AI systems in our dataset involved humans making the final decisions after receiving input from AI algorithms. In these cases, when the humans are better than the algorithms overall, they are also better at deciding in which cases to trust their own opinions and in which to rely more on the algorithm’s opinions.”
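
A back-of-the-envelope model (my illustration, not the study’s methodology) shows why: if deciding whom to trust is itself an imperfect skill, the blended accuracy can land below the stronger party’s.

```python
# Toy model, not the study's methodology: solo accuracies for the human
# (h) and the AI (a), plus q, the human's skill at picking the right
# answer when the two disagree. All three numbers are made up.
h, a, q = 0.80, 0.70, 0.60

# Both right -> right; both wrong -> wrong; when they disagree, exactly
# one party is right and the human sides with it with probability q.
combined = h * a + q * (h * (1 - a) + (1 - h) * a)

print(f"human alone: {h:.2f}  AI alone: {a:.2f}  combined: {combined:.3f}")
# Prints combined = 0.788, below the human's 0.80: choosing whom to
# trust is itself a skill, and q = 0.60 isn't good enough. Here the
# combination only beats the human alone once q exceeds roughly 0.63.
```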

For instance, the co-authors explained, “generating a good artistic image usually requires some creative inspiration about what the image should look like, but it also often requires a fair amount of more routine fleshing out of the details of the image. Similarly, generating many kinds of text documents often requires knowledge or insight that humans have and computers do not, but it also often requires filling in boilerplate or routine parts of the text as well.”

Is there a productive balance that can be achieved with humans and AI working in sync? Yes, industry leaders concur, as long as humans retain oversight of AI-driven processes. “You can’t just put AI on autopilot and expect a favorable outcome,” Rahul Roy-Chowdhury, CEO of Grammarly, told me. “Meaningful advancements in AI that drive actual efficiency and productivity are only possible when companies focus on building great, useful products for customers — and you can’t do that without humans in the loop.”

To achieve the most productive balance between humans and AI, “position AI as an advisor and limit its ability to make decisions,” advised Brian Chess, senior vice president of technology and AI at Oracle NetSuite. “AI is great at analyzing data, surfacing insights, and serving up recommendations, and can eliminate time-consuming and repetitive work. But these insights and recommendations need to be reviewed by a human who is ultimately responsible for decision-making.”
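
In code, that advisory pattern is straightforward to enforce: the model may only return a recommendation, and nothing executes without explicit human approval. The sketch below is a generic illustration with made-up function names, not NetSuite’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str       # what the AI suggests doing
    rationale: str    # the insight behind the suggestion

def ai_advisor(weekly_demand: list[float]) -> Recommendation:
    """Stand-in for a real model: analyzes data, surfaces a suggestion."""
    avg = sum(weekly_demand) / len(weekly_demand)
    if avg > 100.0:
        return Recommendation("reorder_stock", f"avg demand {avg:.0f} exceeds threshold")
    return Recommendation("no_action", f"avg demand {avg:.0f} within normal range")

def execute(action: str) -> None:
    print(f"executing: {action}")

rec = ai_advisor([120.0, 95.0, 130.0])
print(f"AI recommends {rec.action}: {rec.rationale}")

# The AI stops at the recommendation; only a human decision triggers execution.
if input("Approve? [y/N] ").strip().lower() == "y":
    execute(rec.action)
else:
    print("declined; no action taken")
```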

There are now many lower-level situations in which AI has earned enough trust to operate fairly autonomously. “Some hands-off AI-driven processes are already operational and trusted in manufacturing,” Artem Kroupenev, vice president of strategy at Augury, said. Examples of such autonomous processes include “providing prescriptive diagnostics for a wide range of critical industrial equipment, identifying faults and recommending precise, step-by-step maintenance actions months in advance.”

Some cutting-edge manufacturers are even “exploring AI to build a fully closed-loop digital twin on a piece of processing equipment,” said Kroupenev. “This involves leveraging a wide dataset to assess trends and anomalies in the equipment and building an algorithm to control the setpoints. The human can remove themselves from the loop and give the algorithm complete control of the equipment.”
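
As a rough sketch of what handing over “complete control” looks like, the loop below reads telemetry and adjusts a setpoint with no confirmation step. It is a generic proportional controller with illustrative names, not Augury’s algorithm.

```python
import random

def read_temperature() -> float:
    """Stand-in for live telemetry feeding the digital twin."""
    return random.gauss(75.0, 3.0)

def control_step(setpoint: float, target: float = 72.0, gain: float = 0.5) -> float:
    """One pass of a simple proportional controller over the twin's data."""
    reading = read_temperature()
    error = target - reading
    new_setpoint = setpoint + gain * error   # the algorithm moves the setpoint
    print(f"reading={reading:.1f}  error={error:+.1f}  setpoint={new_setpoint:.1f}")
    return new_setpoint

# Fully closed loop: no human confirms any of these adjustments.
setpoint = 72.0
for _ in range(5):
    setpoint = control_step(setpoint)
```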

Still, in just about all cases, especially in manufacturing, “domain expertise is critical, and AI systems initially require both first-mile and last-mile human feedback,” Kroupenev added.

In the case of industrial processes, “AI should have similar safeguards as statistical or threshold-based automation,” he continued. “Humans should be able to review and intervene in the overall plan, specific tasks, decisions, and actions for any critical part of the AI-driven process. There should also be a simple way to review and edit the process goals, guardrails, and constraints that guide AI-driven processes. With robust intervenability and guardrails, a single human supervisor can oversee multiple AI-driven processes, increasing autonomy and productivity.”
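
A minimal sketch of that intervenability, assuming a simple rule-based guardrail (the limits, names, and pause switch are hypothetical, not any vendor’s API):

```python
guardrails = {
    "max_setpoint_change": 5.0,   # supervisor-editable constraint
    "paused": False,              # supervisor's intervention switch
}

def propose_action(process_id: str, setpoint_change: float) -> bool:
    """Apply an AI-proposed change only if it clears the guardrails."""
    if guardrails["paused"]:
        print(f"{process_id}: paused by supervisor, action held for review")
        return False
    if abs(setpoint_change) > guardrails["max_setpoint_change"]:
        print(f"{process_id}: change {setpoint_change:+.1f} exceeds limit, escalating to human")
        return False
    print(f"{process_id}: change {setpoint_change:+.1f} applied autonomously")
    return True

# One supervisor's guardrails govern several AI-driven processes at once.
propose_action("compressor-7", +2.0)   # within limits: runs on its own
propose_action("chiller-2", -9.0)      # outside limits: human review
guardrails["paused"] = True            # supervisor intervenes globally
propose_action("compressor-7", +1.0)   # held until the pause is lifted
```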

Roy-Chowdhury pointed out that his firm first asks whether processes should be highly automated with AI at all. “When it comes to AI advancements, you’ve got to consider not just the implications of a hands-off approach but whether it’s ultimately even desirable,” he said. “AI should always augment people; it should really be called augmented intelligence. Keeping people at the forefront informs guardrails for human-AI collaboration.”

Similarly, when Oracle NetSuite provides AI assistants, “humans initiate the actions and confirm the results,” said Chess. “For example, when a user invokes text enhance within generative AI to help create a job post, the AI-generated content isn’t automatically submitted to the system. The generated content is available to be edited, and the manager can add any additional job description, requirements, and other information in the system.”
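
The workflow amounts to draft-then-confirm: generated text lands as an editable draft, and nothing reaches the system of record until a person submits it. Below is a hedged sketch with stand-in functions, not NetSuite’s actual API.

```python
def generate_job_post(title: str) -> str:
    """Stand-in for the generative model producing a first draft."""
    return f"We are hiring a {title}. Responsibilities include ..."

def submit_to_system(text: str) -> None:
    """Stand-in for persisting to the system of record."""
    print(f"submitted:\n{text}")

draft = generate_job_post("data engineer")        # the AI produces a draft only
draft += "\nRequirements: 3+ years with SQL."     # the human edits and extends it

human_confirmed = True                            # explicit confirmation step
if human_confirmed:
    submit_to_system(draft)                       # only now does it persist
```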

Enabling the reversal of AI-driven processes “allows users to gain comfort and confidence in the AI,” said Chess. “The level and the ease of a human overruling AI should depend on the business process, the level of trust that AI has gained within that process, the quality of data inputs, and the quality of outputs for a specific use case. A human should be able to drill down to see the matches the AI has made and the confidence it has in those matches.”

The ability to overturn AI insights or decisions “should be considered a product feature, not a bug,” Kroupenev said. “Attaching a confidence score to raw AI insights can help users trust the recommendation, but there are cases where users have made decisions contrary to AI recommendations, especially in edge cases with low confidence. In my experience, users who initially overturned AI recommendations often came to the conclusion that they made incorrect decisions, which has ultimately increased their trust in the system for future encounters.”
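
Both points reduce to a simple interface contract: show per-item confidence so users can drill down, and log each override so it can later be compared with the outcome. The sketch below uses made-up data and field names.

```python
matches = [
    {"invoice": "INV-1042", "vendor": "Acme Corp",  "confidence": 0.97},
    {"invoice": "INV-1043", "vendor": "Acme Corp?", "confidence": 0.54},
]

override_log = []

for m in matches:
    # Drill-down view: every match is shown with the AI's confidence.
    print(f"{m['invoice']} -> {m['vendor']} ({m['confidence']:.0%} confident)")
    if m["confidence"] < 0.80:           # low-confidence edge case
        accepted = False                 # the user overrules the AI here
        override_log.append({"match": m, "accepted": accepted})

# Comparing overrides with eventual outcomes is how trust accrues: if the
# AI proves right, the next low-confidence call is likelier to be accepted.
print(f"{len(override_log)} override(s) recorded for later review")
```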
