In today’s column, I examine the emerging realization that society has a new co-creator when it comes to creating and distributing intelligence. That illustrious co-creator is AI. To clarify, that doesn’t mean that AI is sentient. It isn’t. The gist is that via mathematical and computational processing, such as generative AI and large language models (LLMs), humankind is spurring a rapid increase in intelligence and dispersing that intelligence via the Internet to all corners of the world.

Experts refer to the notion of collective intelligence (CI) when assessing how intelligence arises and flows on a substantial scale. AI and collective intelligence are now close buddies. Is this a phenomenon that we should welcome with open arms, or must we instead be wary and highly suspicious?

Let’s talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).

Collective Intelligence Is A Thing Worth Knowing

When you log into generative AI such as the widely popular ChatGPT, or any others such as Anthropic Claude, Google Gemini, Meta Llama, etc., you are essentially tapping into the wisdom of the crowd.

Say what?

Yes, these generative AI apps have been data-trained by scanning wide swaths of the Internet and then pattern-matching on how humans write. Those identified patterns encapsulate a semblance of human intelligence.

I am not suggesting that makes AI intelligent per se. There is a huge debate about whether we can truly refer to AI as being intelligent since it does not embody intelligence in the same manner that humans do. For my discussion covering both sides of that heated discourse, see the link here.

In any case, we can say that the Internet is a form of collective intelligence, meaning that many individuals and entities have contributed to the amassing of human knowledge that is readily accessible for all to see and play with. I realize a snarky perspective is that there is a lot of unintelligent stuff posted online. Sure, I get that. Please know that I am referring to intelligence in a somewhat looser sense, such that even dumb stuff can be tossed into the overarching bucket known as intelligence. There is room for both good or smart intelligence and bad or lousy intelligence.

Generative AI is based on that collective intelligence.

Furthermore, AI goes beyond the collective intelligence that was used for data training. I mention this because I routinely hear people claiming that generative AI is nothing more than a repeater of whatever wording was scanned. That’s not the case. Generative AI and LLMs can reconstitute wording based on overarching patterns. Thus, you can end up with new sentences and phrasings that weren’t precisely found in the data training set.

As an aside, especially for the trolls out there, I am not asserting that AI per se understands what those new sentences and phrases convey. Combinations of words go in, and other combinations and permutations of words come out. The humans who use AI bring their own understanding to make sense of those permutations and new combinations.

AI Is Both A Beneficiary Of And A Contributor To CI

For the sake of discussion, go with me that contemporary generative AI has been formulated on the backs of collective intelligence, and that we now have AI as an additional form of collective intelligence. It is a twofer.

Each time that you ask generative AI a question, you are somewhat dipping into both realms at once.

First, you are garnering indirect access to the data or collective intelligence that was used to initially set up the AI. This is indirect because most generative AI approaches do not copy the scanned data into their internal structures and instead pattern-match on it. By and large, you won’t get a carbon copy of the original data (this is a notable legal argument in those court cases involving Intellectual Property (IP) rights and generative AI, see my analysis at the link here).

Second, you get a response from generative AI that exploits the identified patterns and uses statistics and probabilities to derive a seemingly new or unique response. If you haven’t already noticed, the responses that you get from generative AI are pretty much different each time you ask a question, including if you ask the same exact question repeatedly. The AI is using a roll of the proverbial dice to select which of the potentially applicable words ought to go into a generated response.
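
To make that roll of the proverbial dice a bit more concrete, below is a minimal sketch of weighted random word selection. The tiny candidate vocabulary and the probabilities are made-up assumptions purely for illustration, not how any particular AI product works, but they show why asking the same question repeatedly can yield different wording.

    import random

    # Toy illustration of probabilistic next-word selection.
    # The candidate words and probabilities are invented for this example;
    # real LLMs score enormous vocabularies of tokens at every step.
    candidates = {
        "strategy": 0.40,
        "campaign": 0.30,
        "plan": 0.20,
        "roadmap": 0.10,
    }

    def pick_next_word(probabilities):
        """Randomly select one word, weighted by its probability."""
        words = list(probabilities.keys())
        weights = list(probabilities.values())
        return random.choices(words, weights=weights, k=1)[0]

    # Asking the "same" question several times can produce different phrasing.
    for attempt in range(1, 4):
        print(f"Attempt {attempt}: try an effective marketing {pick_next_word(candidates)}")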

What happens after you’ve gotten an answer to your prompt?

Well, two things can potentially occur:

  • (1) Person posts to the Internet. A person reading the answer from generative AI decides they want to post something to the Internet that incorporates the AI answer along with whatever else the person believes ought to be mentioned.
  • (2) AI posts to the Internet. A person, or the AI itself acting automatically, might post the AI-generated response to the Internet word-for-word as the AI produced it (see the simple sketch below).
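
For that second pathway, here is a minimal sketch of what an automated posting pipeline could look like, offered only as an assumption-laden illustration. The function names and the review flag are hypothetical placeholders, not any real product’s API; the point is simply that AI output can flow onto the Internet with or without a human in the loop.

    # Hypothetical sketch of pathway (2): AI output flowing straight onto the Internet.
    # None of these functions correspond to a real product's API; they are
    # placeholders that illustrate the flow, including an optional human check.

    def fetch_ai_response(prompt: str) -> str:
        """Stand-in for a call to some generative AI service."""
        return f"AI-generated answer to: {prompt}"

    def human_review(text: str) -> bool:
        """Stand-in for a person double-checking the output before it goes live."""
        return len(text.strip()) > 0  # real editorial judgment would go here

    def publish_to_site(text: str) -> None:
        """Stand-in for posting content to a website, forum, or social feed."""
        print(f"Posted online: {text}")

    REQUIRE_HUMAN_REVIEW = True  # flip to False and the AI output goes out unchecked

    answer = fetch_ai_response("What are effective marketing strategies for eco-friendly products?")
    if not REQUIRE_HUMAN_REVIEW or human_review(answer):
        publish_to_site(answer)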

Aha, we are getting someplace in this matter because those two possibilities mean that AI is essentially contributing to our collective intelligence (as found on the Internet).

Let me explain.

Initially, you might have thought that generative AI is only a beneficiary of collective intelligence since it was data trained via the CI on the Internet. Take, take, take. It seems like creating and building AI is all about taking.

But generative AI is also a contributor to collective intelligence. People might post on the Internet whatever they learned via AI. Ergo, AI is an indirect contributor to CI because of what people have garnered from AI. People might also directly post the output of AI onto the Internet, or sometimes the AI is programmed to automatically post its generated results to the Internet. Overall, you could say that generative AI is then directly contributing to collective intelligence.

To recap:

  • (1) AI is a beneficiary of collective intelligence. The initial crafting and at times the ongoing upkeep of AI involves tapping into collective intelligence, such as the data available on the Internet.
  • (2) AI contributes to collective intelligence. AI-generated outputs are at times posted onto the Internet, or people who learn via using AI post their thoughts onto the Internet; either way, it could be said that AI is contributing to collective intelligence.

That lays out the fundamentals for you, and we can next do some additional unpacking.

Example Of Generative AI Tapping Into CI

I will provide a quick example to showcase the above points.

Suppose a small business owner has developed a new eco-friendly product. They want to design a marketing strategy to get their product into the marketplace. Not knowing how to do so, the business owner decides to consult with generative AI.

Here we go.

  • User-entered prompt: “I need help designing a marketing strategy for our eco-friendly product line. What are some effective approaches?”
  • Generative AI response: “According to online data and collected feedback from business owners, these are the four most effective strategies for promoting eco-friendly products. (1) Social Media Challenges: Launch a challenge encouraging customers to share how they reduce waste, using your products. (2) Educational Campaigns: Create engaging content explaining the environmental benefits of your product line. (3) Partnerships: Collaborate with environmental nonprofits or influencers to amplify your message. (4) Eco-Incentives: Offer discounts to customers who bring back recyclable packaging.”
  • User-entered prompt: “Tell me how to tailor those strategies for use on social media.”
  • Generative AI response: “Numerous social media campaigns have done the following and successfully launched their eco-friendly products. Run a hashtag campaign like #EcoWithUs with a prize for the most creative post. Share stories that highlight your product benefits via before-and-after examples. Partner with eco-conscious influencers to post tutorials. Offer exclusive coupon codes for users engaging with your content.”

You can plainly see that generative AI provided various helpful insights into marketing strategies associated with an eco-friendly line of products.

How did the AI come up with this?

The odds are that during the initial data scanning of the Internet, there were plenty of materials that described a wide mix of marketing strategies. There might have even been some content specifically on eco-friendly products. All in all, via computational pattern-matching, the generative AI put this together into a nifty answer for the business owner by summarizing key elements to be aware of.

The AI benefited from the initial data training. You could claim too that the person using generative AI benefited. Benefits all around, though as noted earlier, those who posted data on marketing strategies might contend they were ripped off and ought to get a piece of the action for allowing AI to have scanned their content.

Moving on, imagine that the business owner is so excited by the response that they opt to post online about marketing strategies for eco-products. They use some of what the AI said and add their own thoughts to the matter. Voila, you could declare that AI has thus contributed to collective intelligence.

AI Contributing To CI Gets Some Howling

You probably know that generative AI gets things wrong from time to time. The pattern-matching can make mistakes, oddball aspects can be statistically considered relevant to each other when they really aren’t, and so-called AI hallucinations can arise (for my explanation of AI hallucinations, see the link here).

My rule of thumb is that you should always double-check and possibly triple-check any generated outputs from AI. Period, end of story.

The problem with bad outputs is that if that content gets posted to our collective intelligence, or sways humans who then post there, the upshot is that garbage is being added to the collective pool.

Oopsie.

One argument is that people already do this on their own, and we let them get away with it. People post all kinds of junk, falsehoods, incoherent diatribes, and other untoward material. Why should we be upset that AI does this? It is no different from what humans already do.

Others counterargue that we should expect more of AI. We need to hold AI to a higher caliber of quality. The worry too is that since AI is powered by computers, the volume of what AI can produce could readily exceed what humans produce. I’ve analyzed the vociferously expressed concerns that we will soon have an Internet composed of nearly all synthetic data, meaning data that was produced by AI (see the link here). Human-produced data will be merely a drop in the bucket in comparison to the magnitude of AI-generated essays, stories, narratives, poems, and the like.
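
To see why sheer volume is the crux of that worry, here is a back-of-the-envelope sketch of how the synthetic share of an online corpus could grow. The starting volumes and the growth rate are pure assumptions for illustration, not measurements of the actual Internet.

    # Back-of-the-envelope sketch: how quickly AI-generated (synthetic) content
    # could come to dominate an online corpus. The starting volumes and the
    # growth rate below are assumptions for illustration, not real measurements.
    human_posts_per_year = 1_000_000
    ai_posts_per_year = 1_000_000
    ai_growth_rate = 0.50  # assume AI posting volume grows 50% per year

    human_total = 0
    ai_total = 0
    for year in range(1, 11):
        human_total += human_posts_per_year
        ai_total += ai_posts_per_year
        ai_posts_per_year *= 1 + ai_growth_rate
        synthetic_share = ai_total / (human_total + ai_total)
        print(f"Year {year}: synthetic share of all posts = {synthetic_share:.0%}")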

Example Of Generative AI That Misses The Boat

I put together an example of how generative AI can be off-target.

The issue is that sometimes AI is completely wrong, which tends to make it somewhat easier to detect, and other times the AI is subtly off. That’s when the generated outputs can be especially insidious. A person might not catch on to what has taken place when the response seems good but actually is misleading or fails to give the whole story.

In this example, a county administrator is trying to prepare a report on ways to deal with local water scarcity. They log into generative AI to see what the AI might suggest.

Here we go.

  • User-entered prompt: “What strategies can my local county adopt to solve water scarcity issues?”
  • Generative AI response: “Here are three strategies frequently discussed for addressing water scarcity. (1) Cloud Seeding: Use chemicals to induce rain in dry regions. (2) Large-Scale Desalination: Build plants to convert seawater into drinkable water. (3) Dams and Reservoirs: Construct more infrastructure to store water.”
  • User-entered prompt: “Tell me more about cloud seeding.”
  • Generative AI response: “Cloud seeding has been used successfully in countries like China and the UAE to increase rainfall in arid regions. The approach can be quickly implemented and can significantly alleviate water scarcity.”

The response by generative AI seems aboveboard.

Not so upon closer inspection.

If you look into cloud seeding, you will discover that there is a lot of controversy over that method of dealing with water scarcity. Criticisms include that cloud seeding tends to have only a short-term impact and isn’t usually sustainable as a long-term option, plus unintended adverse environmental consequences can arise.

None of those qualms were expressed in the AI response. The administrator might write in their report that cloud seeding seems like the preferred approach for the county. The report might get posted online. In addition, the administrator might post the AI-generated response, wanting to show where they got their information.

Is this contribution to collective intelligence wrong? Kind of yes, kind of no. The AI was accurate in what was stated, but the bigger picture wasn’t conveyed. Who is at fault? You might insist that the AI should have rattled off the downsides of cloud seeding. Others might point a finger at the administrator for not probing more deeply with the AI, and for not double-checking the AI via other sources.

Research On Collective Intelligence And AI

The topic of collective intelligence and AI is a hot one that has only begun to be explored. There are plenty of opportunities to aid in shaping the direction of CI and AI.

If the topic interests you, there is a recently published paper that provides a handy overview, doing so in the work entitled “How Large Language Models Can Reshape Collective Intelligence” by Jason W. Burton, Ezequiel Lopez-Lopez, Shahar Hechtlinger, Zoe Rahwan, Samuel Aeschbach, Michiel A. Bakker, Joshua A. Becker, Aleks Berditchevskaia, Julian Berger, Levin Brinkmann, Lucie Flek, Stefan M. Herzog, Saffron Huang, Sayash Kapoor, Arvind Narayanan, Anne-Marie Nussberger, Taha Yasseri, Pietro Nickl, Abdullah Almaatouq, Ulrike Hahn, Ralf H. J. M. Kurvers, Susan Leavy, Iyad Rahwan, Divya Siddarth, Alice Siu, Anita W. Woolley, Dirk U. Wulff, and Ralph Hertwig, Nature Human Behaviour, September 2024, and containing these salient points (excerpts):

  • “Collective intelligence underpins the success of groups, organizations, markets, and societies.”
  • “Through distributed cognition and coordination, collectives can achieve outcomes that exceed the capabilities of individuals — even experts — resulting in improved accuracy and novel capabilities.”
  • “Human history is riddled with market crashes, organizational failures, and collective decisions gone awry. CI requires stewardship to create conditions that allow individuals to interact meaningfully and productively.”
  • “LLMs are trained on collective data that encapsulates the contributions of countless individuals, and LLMs are often fine-tuned with collective human feedback. Prompting an LLM with a question is like a distilled form of crowdsourcing. The responses LLMs generate are shaped by how masses of other people have tended to respond to similar questions and align with the collective preferences reflected in the fine-tuning process.”
  • “LLMs can simultaneously enable new, heightened CI and threaten society’s ability to solve problems.”

You can use that type of research as a launching pad for getting in-depth on CI and AI.

Best Of Times And The Worst Of Times

I had earlier noted that when you use generative AI, you are tapping into the wisdom of the crowd.

That indication needs a bit of sharpening in present times. I’ve pointed out that the collective intelligence of the Internet contains synthetic data, aka responses that are produced by AI. The nature of the “crowd” is perhaps different from what you might have assumed. It isn’t just crowds of people; it is also crowds of AI. That word is plural because we’ve got plenty of generative AI apps out there, and they are adding to the CI daily.

An optimist would likely rejoice that AI is adding to our collective intelligence. Great, we can always use more intelligence. Tremendous insights might get posted. It could hopefully overtake the flotsam that is already there. A pessimist would undoubtedly react differently. The AI is dumping more sludge than humans could ever type by hand. Our collective intelligence is doomed, utterly doomed.

It is hard to say what will happen, but you can take part in trying to shape where things go. We are still in the early days of this pressing matter. Time is available, though time is running thin.

I’ll give the last word to a line often attributed to Albert Einstein: “The person who follows the crowd will usually go no further than the crowd.” What do you think should be taking place? Do you agree with where the crowd is going on this right now, or do you have other ideas and want to be a leader?

Be inspired and get involved. I’m looking forward to your contributions to the collective intelligence on CI and AI.
