AI has spent two years circling design, with models that can generate images as well as outputs such as web pages, slides and documents. Now AI is stepping directly into the design game, with new model and product releases that target design processes and tools.

The use of AI is no longer just about making pretty pictures from prompts or creating output meant primarily for developers. The real fight is over who gets to create the first draft of visual work and who controls the process of what happens next. That matters for Figma and Adobe. It also matters for every company that sells tools to designers, marketers, brand teams, product managers and founders.

This new direction came into focus over the past week. Anthropic introduced Claude Design, a research preview that turns Claude into a tool for prototypes, slides, one-pagers and other visual work. OpenAI, almost in parallel, pushed its new image generation further into practical design territory with stronger text handling, editing and layout control in ChatGPT and the API. Adobe responded with its own show of force, widening the role of Firefly and pitching an assistant that can work across Photoshop, Illustrator, Premiere, Lightroom, Express and more.

Anthropic Is Moving Deeper Into Visual Work

Anthropic says Claude Design can help create polished visual work such as designs, prototypes, slides and one-pagers. This builds on Anthropic’s push to make its models more powerful and broadly applicable, while also making an early bid for the territory occupied by Figma, Canva and slices of Adobe.

The reason for encroaching on others’ territory is that the first valuable moment in design is the messy beginning, when someone has an idea and wants to see it take shape. For years, that moment belonged to the person who opened Figma, Photoshop or Illustrator first. Anthropic wants it to begin in Claude.

Canva quickly showed why AI matters in the early design process, announcing a collaboration that lets Claude Design outputs move into Canva as editable, on-brand designs. That hints at a new division of labor between AI generation and design tools: the AI model handles ideation and rough composition while the design platform handles editing, governance, templates and distribution, as long as that separation of concerns holds.

That puts pressure on companies like Figma. Figma has already expanded well beyond interface design with Figma Make, Sites, Slides and Buzz. It knows the future is broader than product design alone. Still, Claude Design pokes around the edges. If users start the work in a general AI system, Figma risks becoming the place where drafts are cleaned up rather than conceived. And as users put more of their thinking into Claude, which is integrated into more of their work environments, they may come to prefer it over tools like Figma.

OpenAI Is Making Image Models More Relevant To Designers

OpenAI is attacking the same market from the image side. Its latest release improves editing, text rendering, resolution and layout flexibility. These changes might sound technical, but they go straight to the difference between an impressive demo and something a team can ship.

For a long time, generated images looked finished until you asked for changes or greater detail. That is where they broke: text warped or changed entirely, layouts and brand details shifted, and elements you never meant to touch were replaced. OpenAI’s improvements aim to make iterative edits retain more of the intent and consistency that designers have in mind.

In addition, OpenAI’s recent push targets exactly the kinds of outputs that creative teams care about: posters, brochures, editorial layouts, multilingual assets, ad concepts and marketing graphics. The model still has limits, and OpenAI acknowledges that exact placement and consistency remain challenges, but the output quality has drawn a lot of positive attention.

Between Anthropic’s focus on the design process and OpenAI’s focus on generative output, the market is being squeezed from both ends. Image models with stronger generation and iteration can deliver much of what people once turned to tools like Canva for, and design tools lose their edge if the same assistant used for writing, analysis and coding can now draft campaign art or page layouts without asking the user to leave the chat window.

The pressure is not just coming from OpenAI and Anthropic. Open and semi-open model makers are also moving fast. Black Forest Labs continues pushing FLUX with stronger image generation and editing capabilities. Alibaba’s Qwen-Image 2.0 is explicitly targeting better text rendering and unified generation with editing. ByteDance has rolled out Seedream Lite alongside other multimodal releases as Chinese labs keep gaining ground in visual AI.

Adobe Is Leaning On Workflow, Not Just Generation

Adobe’s response to these changes in the market is telling. The company’s latest announcements center on Firefly AI Assistant and a broader story of what it means to be a creative professional. Adobe says the assistant can work across apps, keep track of context, understand assets and brand materials and carry out multi-step workflows. That plays to Adobe’s competitive strength, which has never been just creation. It is depth, finishing power and entrenchment in professional teams.

In a world where models can produce passable first drafts, Adobe insists that professional quality and finish matter more. That means review loops, rights controls, collaboration, version history and the other details that decide whether work is production-ready or just a good proof of concept. Adobe wants to be the place where AI-generated material becomes real work.

There is still pressure here. Adobe’s language around Firefly reads less like a roadmap for new functionality and more like an urgent defense of its business against the continuous onslaught of AI capabilities.

The interface for design is becoming secondary to the assistant behind it. With AI, a founder can describe a landing page in plain English, a marketer can ask for six campaign concepts sized for multiple channels, and a product manager can request a prototype before anyone opens a design file. All of this can now happen inside an AI tool from a company like Anthropic or OpenAI, without ever opening other software.

Designers are not disappearing, but the job and the tool landscape are changing quickly. There is some hope here: tools cannot replace taste and design judgment. As design tools become easier for non-designers to use, strong design judgment becomes more valuable, not less. When anyone can make six polished mockups in a snap, the real skill shifts to choosing what deserves to survive.

That is where the market is headed. Anthropic wants to own the first move. OpenAI wants image generation to grow into design work. Adobe wants to own the finish. Figma wants to remain the collaborative place where teams shape and ship ideas. Open model makers and startup labs are accelerating all of it, compressing margins and forcing faster product cycles. This new phase makes it clear that AI is going to be a durable part of the design stack.
