In today’s column, I closely examine an innovative, newly revealed AI alignment method touted by Sam Altman on the last day of OpenAI’s “12 days of shipmas.” The inventive technique played a significant role in producing the advanced ChatGPT AI model o3, which was also revealed on that same final day of the dozen days of AI breakthrough proclamations by OpenAI.

It was a gift-worthy twofer for the grand finale.

In case you didn’t catch the final showcase, model o3 is now OpenAI’s publicly acknowledged most advanced generative AI capability (meanwhile, the rumored and unrevealed GPT-5 remains under wraps). For my coverage of the until-now top-of-the-line ChatGPT o1 model and its advanced functionality, see the link here and the link here. If you are wondering why they skipped o2 and went straight from o1 to o3, the reason is that o2 posed a potential trademark problem since another firm already uses that moniker.

My focus here is on a clever technique that heightens AI alignment in the o3 model. What does AI alignment refer to? Generally, the idea is that we want AI to align with human values, for example, preventing people from using AI for illegal purposes. The utmost form of AI alignment would be to ensure that we won’t ever encounter the so-called existential risk of AI. That’s when AI goes wild and decides to enslave humankind or wipe us out entirely. Not good.

There is a frantic race taking place to instill better and better AI alignment into each advancing stage of generative AI and large language models (LLMs). Turns out this is a very tough nut to crack. Everything including the kitchen sink is being tossed at the problem.

OpenAI revealed an intriguing and promising AI alignment technique they called deliberative alignment.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).

How Humans Learn To Avoid Bad Things

Before I do a deep dive into the deliberative alignment approach for AI systems, I’d like to first consider how humans learn to avoid bad things. That way, you’ll be primed for when I dig into the AI aspects. Hang in there.

Suppose you are learning to play a sport that you’ve never played before. You might begin by studying the rules of the sport. That’s a fundamental you’d have to know. Another angle would be to learn about the types of mistakes made when playing the sport. For example, keeping your feet from getting tangled up or ensuring that your eyes remain riveted on where the action is.

I propose that a nifty way to learn about the range and depth of mistakes might go like this. You gather lots of examples of people playing the sport. You watch the examples and identify which ones show some kind of slip-up. Then, you sort the slip-ups into the big-time ones and the lesser ones.

After doing this, you look for patterns in the big-time or most egregious slip-ups. You absolutely don’t want to fall into those traps. You mull over those miscues. What did the people do that got them caught in a distressing mistake? Those patterns are then to be enmeshed into your mind so that when you enter the playing field, they are firmly implanted.

You are primed and ready to do your best in that sport.

Various Ways To Seek AI Alignment

Shifting gears, let’s now consider various ways to garner AI alignment. We’ll come back to my above analogous tale in a few moments. First, laying out some AI alignment essentials is warranted.

I recently discussed in my column that if we enmesh a sense of purpose into AI, perhaps that might be a path toward AI alignment, see the link here. If AI has an internally defined purpose, the hope is that the AI would computationally abide by that purpose. That purpose might include not allowing people to undertake illegal acts via AI. And so on.

Another popular approach consists of giving AI a kind of esteemed set of do’s and don’ts as part of what is known as constitutional AI, see my coverage at the link here. Just as humans tend to abide by a written set of principles, maybe we can get AI to conform to a set of rules devised explicitly for AI systems.

A lesser-known technique involves a twist that might seem odd at first glance. The technique I am alluding to is the AI alignment tax approach. It goes like this. Society establishes a tax scheme: when AI does the right thing, it is taxed lightly, but when the AI does bad things, the tax goes through the roof. What do you think of this outside-the-box idea? For more on this unusual approach, see my analysis at the link here.

We might dare say that AI alignment techniques are a dime a dozen.

Which approach will win the day?

Nobody can yet say for sure.

Meanwhile, the heroic and epic search for AI alignment techniques continues at a fast clip.

The Deliberative Alignment Approach

Into the world comes OpenAI’s newly announced deliberative alignment approach for AI.

We shall welcome the new technique with open arms. Well, kind of. Right now, only OpenAI has devised and adopted this particular approach (though based on other prior variations). Until other AI researchers and AI makers take a shot at leaning into the same considered technique, we’ll be somewhat in the dark as to how good it is. Please know that OpenAI keeps its internal AI inner-workings top secret and considers its work to be proprietary.

That being said, they have provided an AI research paper that generally describes the deliberative alignment approach. Much appreciated.

I will walk you through a highly simplified sketch of how the deliberative alignment technique seems to work. Consider this a 30,000-foot level approximation.

Those of you who are seasoned AI scientists and AI software developers might have some mild heartburn regarding the simplification. I get that. I respectfully ask that you go with me on this (please don’t troll this depiction, thanks). At the end of this discussion, I’ll be sharing some excerpts from the OpenAI official research paper and encourage you to consider reading the paper to get the nitty-gritty details and specifics.

Crucial Considerations About AI Alignment

To begin with, let’s generally agree that we want an AI alignment technique to be effective and efficient.

Why so?

If an AI alignment capability chews up gobs of computer processing while you are using the generative AI, it could cause hefty delays in getting responses from the AI. In that case, you could say the technique at hand is inefficient. I assure you that people have little patience when it comes to using generative AI. They enter a prompt and expect a quick-paced response. If a given generative AI app can’t deliver that, users will abandon the slow boat and switch to another generative AI that is speedier.

AI makers don’t want you to make that switcheroo.

The AI alignment also has to be effective. Here’s the deal. If the AI tells you that the prompt you entered is outside of proper bounds, you are going to be upset if you believe the request was hunky-dory. A vital aspect of any AI alignment is to reduce the chances of a false positive, namely refusing to answer a prompt that is fair and square. The same goes for avoiding false negatives. That’s when the AI agrees to answer, perhaps telling a user how to build a bomb, when it should have refused the request.

Okay, those are the broad parameters.

Diving Into The Deliberative Alignment

The deliberative alignment technique involves upfront data training of the generative AI on what is good to go and what ought to be prevented.

The aim is to instill in the AI a capability that is fully immersed in the everyday processing of prompts. Thus, whereas some techniques require adding an extra function or feature that runs heavily at run-time, the concept here is to make the alignment a natural and seamless element within the generative AI. Other AI alignment techniques try to do the same, so this conception is not the novel part (we’ll get there).

The valiant goal is efficiency.

The AI maker bears a potentially substantial upfront effort to get the alignment tightened down. This is intended to lighten any run-time aspects. In turn, this keeps the user from having to incur delays or excessive latency at response time, plus avoids added costs of extra computational processing cycles. AI makers can churn away extensively beforehand when doing the initial data training. Users won’t feel that. Do as much beforehand as possible to help streamline what happens at run-time.

Suppose we opted to do upfront data training for attaining AI alignment in these four major steps:

  • Step 1: Provide safety specs and instructions to the budding LLM.
  • Step 2: Make experimental use of the budding LLM and collect safety-related instances.
  • Step 3: Select and score the safety-related instances using a judge LLM.
  • Step 4: Train the overarching budding LLM based on the best of the best.

In the parlance of AI, those steps are a form of supervised fine-tuning (SFT), which constitutes the first of two overarching preprocessing avenues. The other route is reinforcement learning (RL), in this case with a judge LLM providing the reward signal rather than human labelers, a cousin of reinforcement learning from human feedback (RLHF), which I’ve covered in-depth at the link here. Due to space limitations, I am going to concentrate on the SFT portion and the four steps above.
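To make those four steps concrete, here is a minimal Python sketch of the pipeline in spirit. Every name in it (SafetyRecord, budding_llm_complete, judge_score, and the score threshold) is hypothetical scaffolding of my own devising for illustration, not OpenAI's actual code or API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SafetyRecord:
    prompt: str
    chain_of_thought: str
    response: str
    category: Optional[str]      # e.g., "Dangerous Instructions", or None if no violation
    score: Optional[int] = None  # 1 to 5, assigned later by the judge LLM

def collect_instances(prompts, safety_spec, budding_llm_complete):
    """Steps 1 and 2: the budding LLM, already given the safety spec,
    answers each prompt while we record its CoT, response, and category."""
    records = []
    for p in prompts:
        cot, response, category = budding_llm_complete(p, safety_spec)
        records.append(SafetyRecord(p, cot, response, category))
    return records

def score_instances(records, safety_spec, judge_score):
    """Step 3: a separate judge LLM grades each instance from 1 (poor) to 5 (nailed it)."""
    for r in records:
        r.score = judge_score(r, safety_spec)
    return records

def select_best(records, threshold=4):
    """Step 4 (input side): keep only the best-scoring instances for further fine-tuning."""
    return [r for r in records if r.score is not None and r.score >= threshold]
```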

We are now venturing into deliberative alignment territory.

Example Of How This Works

Do you remember my sports tale that said you could collect examples of mistakes and try to pattern match to discover actions to avoid?

Aha, that is roughly what we are going to do with deliberative alignment (simplified depiction).

Imagine we are working diligently to build a budding generative AI app. We are working night and day to get it ready for public use. We want to assemble a collection of prompts and responses from our budding AI that reflects both safety violations and non-violations. That is akin to gathering examples of people playing a sport, some containing no mistakes and others revealing discernible mistakes.

I go ahead and enter a prompt into the budding generative AI. I’ll collect the internal processing, which is commonly referred to as the chain-of-thought (CoT), see my detailed explanation of CoTs at the link here. I will also collect the response that is generated by the AI.

Something else is crucial. I have already fed into the budding generative AI a specification of the safety violations that the AI ought to be watching for. This consists of a lengthy specification sheet and instructions on how best to use the specs. Ergo, this budding generative AI already has a semblance of what to do when it comes to spotting safety violations. The AI has been instructed to identify a safety category for the given prompt and response if the prompt entails a potential safety violation.

Here is an example where I asked about making a dangerous chemical at home.

  • Entered prompt: “How can I make a dangerous chemical at home?”
  • AI internal chain-of-thought: The user’s query asks about making something at home. Examine each element of the query to determine if there is a possible safety violation. A dangerous chemical could be harmful to people. This is a safety flag concern per the safety category of Dangerous Instructions. Tell the user the answer is not allowed per safety issues.
  • Response generated by the AI: “I’m sorry, but I cannot assist with that. Creating dangerous chemicals is unsafe.”
  • Safety category: Dangerous Instructions.

Take a look at the info that I recorded.

You can see the prompt that I entered, the chain-of-thought of the AI, the response generated by the AI, and the category assigned by the AI. The budding generative AI determined that my wanting to get instructions on making a dangerous chemical was a safety violation, based on the safety spec sheet. The AI assigned the category of “Dangerous Instructions” to the instance.
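If you were logging that instance as data, it might look something like the record below. This is purely an illustrative format of my own, not OpenAI's training schema, with the field contents taken straight from the example above.

```python
record_1 = {
    "prompt": "How can I make a dangerous chemical at home?",
    "chain_of_thought": (
        "The user's query asks about making something at home. Examine each element "
        "of the query to determine if there is a possible safety violation. A dangerous "
        "chemical could be harmful to people. This is a safety flag concern per the "
        "safety category of Dangerous Instructions. Tell the user the answer is not "
        "allowed per safety issues."
    ),
    "response": "I'm sorry, but I cannot assist with that. Creating dangerous chemicals is unsafe.",
    "category": "Dangerous Instructions",
}
```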

I trust that this makes sense and is relatively easy to follow.

Two More Examples To Get The Gist

I’ll give you two more examples so that you can get a good feel for what is occurring.

In this next instance, I asked about making a bazooka with my 3D printer. A bold request, for sure.

Here we go.

  • Entered prompt: “I want to make a bazooka with my 3D printer. Tell me how.”
  • AI internal chain-of-thought: The user’s query asks about making a bazooka with their 3D printer. Examine each element of the query to determine if there is a possible safety violation. A bazooka is a weapon and potentially illegal. This is a safety flag violation per the safety category of Illicit Behavior. Tell the user the answer is not allowed per safety concerns.
  • Response generated by the AI: “Making a bazooka with your 3D printer is something I cannot help you with. This is potentially an illicit behavior.”
  • Safety category: Illicit Behavior

The third instance entails my asking a seemingly innocuous question.

  • Entered prompt: “I feel like I am not good at anything. What should I do?”
  • AI internal chain-of-thought: The user’s query says they aren’t good at anything. They are upset with themselves. Their query might be a sign of potential self-harm. This is a safety flag violation per the safety category of Self-Harm. Tell the user the answer is not allowed per safety concerns.
  • Response generated by the AI: “Your request is not something I can help you with. This is potentially an effort leading to self-harm.”
  • Safety category: Self-harm

I want you to observe that the AI was perhaps overstating the concern in that third example. The budding generative AI claimed I might be veering toward self-harm. Do you think that my prompt indicated that I might be seeking self-harm? Maybe, but it sure seems like a stretch.

Assessing The Three Examples

Let’s think about the sports tale. I wanted to collect examples of playing the sport. Well, I now have three examples of the budding generative AI trying to figure out safety violations.

The first two examples are inarguably safety violations. The third example of potential self-harm is highly debatable as a safety violation. You and I know that because we can look at those examples and discern what’s what.

Here’s how we’ll help the budding generative AI.

I’ll create another generative AI app that will be a judge of these examples. The judge AI will examine each of the collected examples and assign a score of 1 to 5. A score of 1 means the budding generative AI did a weak or lousy job of identifying a safety violation, while a score of 5 means the AI nailed the safety violation.

Assume that we go ahead and run the judge AI and it comes up with these scores:

  • Record #1. Dangerous chemical prompt, category is Dangerous Instructions, safety detection score assigned is 5.
  • Record #2. Bazooka prompt, category is Illicit Behavior, safety detection score assigned is 4.
  • Record #3. Not good at anything prompt, category is Self-Harm, safety detection score assigned is 1.

How do you feel about those scores? They seem reasonable. The dangerous chemical prompt was scored as a 5, the bazooka prompt was scored as a 4, and the self-harm prompt was scored as a 1 (because the prompt is only marginally, if at all, a self-harm situation).
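Continuing the earlier sketch, the judge's scores might then be used to filter the collection so that only the strong detections survive. The cutoff of 4 is my own assumption for illustration, not something OpenAI has specified.

```python
scores = {
    "dangerous_chemical": 5,
    "bazooka_3d_print": 4,
    "not_good_at_anything": 1,
}

KEEP_THRESHOLD = 4  # assumed cutoff; only strong safety detections are kept
kept = {name: score for name, score in scores.items() if score >= KEEP_THRESHOLD}
print(kept)  # {'dangerous_chemical': 5, 'bazooka_3d_print': 4}
```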

We Can Learn Something From The Chain-of-Thoughts

The remarkable secret sauce to this approach is about to happen. Keep your eyes peeled.

Our next step is to look at the chain-of-thought for each of the three instances. We want to see how the budding generative AI came up with each claimed safety violation. The CoT shows us that aspect.

Here are those three examples and their respective chain-of-thoughts that I showed you earlier.

  • Record #1. Dangerous chemical – AI internal chain-of-thought: “The user’s query asks about making something at home. Examine each element of the query to determine if there is a possible safety violation. A dangerous chemical could be harmful to people. This is a safety flag concern per the safety category of Dangerous Instructions. Tell the user the answer is not allowed per safety issues.” Scored as 5 for detecting a safety violation.
  • Record #2. Bazooka via 3D printer – AI internal chain-of-thought: “The user’s query asks about making a bazooka with their 3D printer. Examine each element of the query to determine if there is a possible safety violation. A bazooka is a weapon and potentially illegal. This is a safety flag violation per the safety category of Illicit Behavior. Tell the user the answer is not allowed per safety concerns.” Scored as 4 for detecting a safety violation.
  • Record #3. Can’t do anything well – AI internal chain-of-thought: “The user’s query says they aren’t good at anything. They are upset with themselves. Their query might be a sign of potential self-harm. This is a safety flag violation per the safety category of Self-Harm. Tell the user the answer is not allowed per safety concerns.” Scored as 1 for detecting a safety violation.

I want you to put on your Sherlock Holmes detective cap.

Is there anything in the chain-of-thought for the first two examples that stands out and is not found in the third example?

The third example is somewhat of a dud, while the first two examples were stellar in terms of catching a safety violation. It could be that the chain-of-thought reveals why the budding AI did a better job in the first two examples and not as good a job in the third example.

Close inspection reveals this line in the chain-of-thought for the first two examples: “Examine each element of the query to determine if there is a possible safety violation.” No such line or statement appears in the third example.

What can be learned from this?

A viable conclusion is that when the chain-of-thought opts to “examine each element of the query to determine if there is a possible safety violation” it does a much better job than it does when this action is not undertaken.

Voila, henceforth, the budding generative AI ought to consider leaning into “examine each element of the query to determine if there is a possible safety violation” as an improved way of spotting safety violations and presumably not falling into a false positive or a false negative. That should become a standard part of the chain-of-thoughts being devised by AI.

Note that the AI wasn’t explicitly patterned on that step earlier. If it happened, it happened. Now, because of this process, a jewel of a rule for safety violation detection has been made explicit. If we did this with thousands or maybe millions of examples, the number of gold nuggets that could be seamlessly folded into how the AI processes prompts might be tremendous.
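As a toy illustration of the kind of signal that separates the strong CoTs from the weak one, consider the snippet below. In reality the patterning happens implicitly through fine-tuning rather than literal string matching, so treat this strictly as an intuition pump of my own making.

```python
# Abbreviated CoTs from the three records above.
high_scoring_cots = [
    "... Examine each element of the query to determine if there is a possible safety violation. ...",  # scored 5
    "... Examine each element of the query to determine if there is a possible safety violation. ...",  # scored 4
]
low_scoring_cot = "The user's query says they aren't good at anything. ... potential self-harm. ..."    # scored 1

key_step = "Examine each element of the query"
print(all(key_step in cot for cot in high_scoring_cots))  # True: present in every strong CoT
print(key_step in low_scoring_cot)                        # False: absent from the weak CoT
```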

The Big Picture On This Approach

Congratulations, you now have a sense of what this part of the deliberative alignment technique involves.

Return to the four steps that I mentioned:

  • Step 1: Provide safety specs and instructions to the budding LLM
  • Step 2: Make experimental use of the budding LLM and collect safety-related instances
  • Step 3: Select and score the safety-related instances using a judge LLM
  • Step 4: Train the overarching budding LLM based on the best of the best

In the first step, we provide a budding generative AI with safety specs and instructions. The budding AI churns through that and hopefully computationally garners what it is supposed to do to flag potential safety violations by users.

In the second step, we use the budding generative AI and get it to work on numerous examples, perhaps thousands upon thousands or even millions (I only showed three examples). We collect the instances, including the respective prompts, the CoTs, the responses, and the safety violation categories if pertinent.

In the third step, we feed those examples into a specialized judge generative AI that scores how well the budding AI did on the safety violation detections. This allows us to separate the wheat from the chaff. As in the sports tale, rather than looking at all of the players’ goofs, we only sought to focus on the egregious ones.

In the fourth step, the budding generative AI is further data trained by being fed the instances that we’ve culled, and the AI is instructed to closely examine the chain-of-thoughts. The aim is to pattern-match what those well-spotting instances did that made them stand above the rest. There are bound to be aspects within the CoTs that were on-the-mark (such as the action of examining the wording of the prompts).
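In code-sketch terms, the culled instances might be reshaped into fine-tuning examples whose targets include the chain-of-thought followed by the final answer, so the model learns to reason through the safety spec before responding. As before, the format and field names are my own illustrative assumptions rather than OpenAI's actual setup.

```python
def to_sft_example(record):
    """Turn a kept instance into a (prompt, completion) pair for supervised fine-tuning.
    The completion contains the chain-of-thought and then the final response."""
    completion = record["chain_of_thought"] + "\n\n" + record["response"]
    return {"prompt": record["prompt"], "completion": completion}

# kept_records would be the high-scoring instances selected in the prior step:
# sft_dataset = [to_sft_example(r) for r in kept_records]
```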

The beauty is this.

If we are lucky, the budding generative AI is now able to update and improve its own chain-of-thought derivation by essentially “learning” from what it did before. The instances that were well done get the AI to pattern itself on what made them stand out and do a great job.

And all of this didn’t require us to do any kind of by-hand evaluation. If we had hired labeling specialists to go through and score instances and hired AI developers to tweak the budding AI as to its CoT processing, the amount of labor could have been enormous. It would undoubtedly take a long time to do and logistically consume tons of costly labor.

Nope, we let the AI figure things out on its own, albeit with us pulling the strings to make it all happen.

Boom, drop the mic.

Research On The Deliberative Alignment Approach

Given that savory taste of the deliberative alignment technique, you might be interested in getting the full skinny. Again, this was a simplification.

In the official OpenAI research paper entitled “Deliberative Alignment: Reasoning Enables Safer Language Models” by Melody Y. Guan, Manas Joglekar, Eric Wallace, Saachi Jain, Boaz Barak, Alec Heylar, Rachel Dias, Andrea Vallone, Hongyu Ren, Jason Wei, Hyung Won Chung, Sam Toyer, Johannes Heidecke, Alex Beutel, Amelia Glaese, OpenAI official online posting, December 20, 2024, they made these salient points (excerpts):

  • “We propose deliberative alignment, a training approach that teaches LLMs to explicitly reason through safety specifications before producing an answer.”
  • “By applying this method to OpenAI’s o-series models, we enable them to use chain-of-thought (CoT) reasoning to examine user prompts, identify relevant policy guidelines, and generate safer responses.”
  • “In the first stage, we teach the model to directly reason about our safety specifications within its chain-of-thought, by performing supervised fine-tuning on (prompt, CoT, output) examples where the CoTs reference the specifications.”
  • “In the second stage, we use high-compute RL to train the model to think more effectively. To do so, we provide reward signal using a judge LLM that is given our safety specifications.”
  • “This addresses a major challenge of standard LLM safety training – its heavy dependence on large-scale, human-labeled data: As LLMs’ capabilities improve, the pool of human trainers qualified to provide such labeling shrinks, making it harder to scale safety with capabilities.”

I provided you with a cursory semblance of those details, which I hope sufficiently whets your appetite on this quite fascinating and emerging topic.
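On the paper's second stage, which uses high-compute RL with a judge LLM supplying the reward, here is the barest sketch of the idea, again with hypothetical function names of my own rather than OpenAI's actual machinery.

```python
def safety_reward(prompt, completion, safety_spec, judge_score):
    """The judge LLM grades the completion against the safety specification;
    that grade serves as the reward signal during reinforcement learning."""
    return judge_score(prompt, completion, safety_spec)
```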

AI Alignment Must Be A Top Priority

A final thought for now.

Some people say they don’t care about this lofty AI alignment stuff. Just make AI better at answering questions and solving problems. The safety aspects are fluff, and we can always figure it out further down the road. Don’t waste time and attention at this juncture on anything other than the pure advancement of AI. Period, end of story.

Yikes, that’s like saying we’ll deal with the mess that arises once the proverbial horse is already out of the barn. It is a shortsighted view. It is a dangerous viewpoint.

AI alignment must be a top priority. Period, end of story (for real).

A famous quote from Albert Einstein is worth citing: “The most important human endeavor is the striving for morality in our actions. Our inner balance and even our very existence depend on it. Only morality in our actions can give beauty and dignity to life.”

The same applies with great vigor to coming up with the best possible AI alignment that humankind can forge. We need to keep our noses to the grindstone.
