As marketers leverage AI to develop images, understanding how to do so effectively (and legally) is a key challenge. To better understand how AI is changing the development of images and best practices, I sought insight from Grant Farhall, Chief Product Officer at Getty Images, a leading visual content creator and marketplace with three brands: Getty Images, iStock, and Unsplash.

Getty Images

As background on Getty Images, Farhall indicates that “Getty Images is in the business of visual storytelling. As a leading visual content creator and marketplace, our brands offer impactful visuals to help any brand, business, or organization communicate more effectively with their target audience and inspire that audience to take action. For nearly 30 years, we have covered global events … and centered important conversations on the images we capture around the world, enabling fast, accurate visual reporting of events that drive the news cycle. We also maintain one of the largest and best privately owned archives in the world, filled with hundreds of millions of unique visual assets dating back to the beginning of photography.”

How AI is Changing the Development and Use of Images

Farhall suggests that compelling visual content is critical for connecting brands with their audiences and that “generative AI potentially offers another option to craft those visuals in certain, appropriate contexts. However, the fundamental core of the creative process remains unchanged; talented individuals, equipped with the appropriate tools, are ultimately responsible for bringing new ideas to life. Generative AI is another instrument to help them channel their unique human creativity, like a new brush and canvas in their hands.

While there are changes, much actually remains the same. To connect with audiences, brands must cut through an increasingly visually cluttered landscape, and they need to do that efficiently and at scale. Generative AI is one of many tools that marketers can use to achieve this, and in some exciting and innovative ways, but these new opportunities come alongside potential challenges.”

Benefits Provided by AI When Developing Images

Farhall acknowledges that AI offers significant benefits. “Generative AI allows users to create images that are very difficult or impossible to shoot with traditional means. And there have been striking, visually stunning examples of that. But there are also a lot of examples of low-quality images that are derivatives of pre-existing ideas, including blatant copies of images created without AI. At the end of the day, a ‘quality’ AI-generated image is one that helps someone reach and communicate to their audiences and is trained on quality and ‘clean’ data that is fully permissioned and clear of any risk to infringe on IP. Customers should not have to choose between creating quality AI visuals and legal safety; they should demand both.

To this end, the advent of AI image generation asks us to think differently about how to sustain a thriving future for creatives. AI is an exciting tool with a growing number of use cases, but the authenticity, diversity, creativity, and quality of human-created work are irreplicable and required to sustain effective AI models. In training our own generative AI model, Getty Images ensures that creators who have contributed to the dataset are compensated for their work on a recurring basis. Those AI services that have built their products through scraped data put the rights of artists and IP holders at risk. The potential erosion of these rights has immediate and long-term implications on the broader creative economy; without these rights, we limit the opportunity for people to conceive net-new ideas and be rewarded appropriately for them.”

Challenges Posed by AI When Developing Images

With the rise of AI, Farhall indicates that “we now live in a world where we cannot always be certain if the photos and videos we encounter are real or not. This has serious implications for brands as they seek to build and sustain trust with their customers, particularly in cases where authenticity sits at the core of the brand’s identity. Brands need to be thoughtful about when and how they use AI and the level of transparency they offer around it. AI is not new, but it has never been so widely accessible, and everyone is wrestling with the right way to use it.

In addition, image generators and other AI tools pose the greatest risk to businesses when they are not commercially safe and not based on a clean foundational model. A commercially safe AI tool is one that allows marketers to use the generated content to freely market a product without any downstream legal risks. This means a customer has a license to use the image commercially and legal indemnification to protect them.

In the context of image generation, commercially safe AI tools are those that are not trained on any copyrighted material or known likenesses. Therefore, you can’t get in legal trouble for violating copyright.

Most AI content generators can’t tout this. Many have been trained on scraped data, which is ripped from the internet without legal consent, or synthetic data, which is generated by other AI tools that may have been trained on copyrighted material. This is why brands must be especially careful and demand full transparency from any third-party AI companies they’re working with. Training using public domain datasets may sound okay on the surface, but users should seek complete clarity on what those data sets were and the risks they present. That said, being trained on a clean dataset isn’t the only factor in commercial safety. Legal indemnification is crucial to the commercial safety equation, and it should extend to the generation, download and use of the content produced with a given tool. Otherwise, you still have holes in your proverbial ship.

When working with an AI vendor for use of a third-party tool or AI-generated assets, businesses should hold those vendors to the highest standards. That requires thoroughly assessing their commercial safety. Marketers should ask vendors what data the model was trained on and how; about usage rights; about legal indemnification; and about what a user would have to do with an image to lose legal coverage. With a commercially safe tool, they have the confidence to experiment with AI and unlock creativity without worrying about potential legal trouble down the line.

At Getty Images, we’ve created an AI tool that exists in a world without known brands and likenesses—it’s completely impossible for users to violate intellectual property because the AI isn’t trained on any existing IP. We’ve built it solely on our permissioned, pre-shot creative library—what most people know as our ‘stock’ library—and we automatically include indemnification and perpetual, worldwide usage rights for visuals generated or modified using our AI services. You don’t have to ask for the asset to be reviewed or cleared because we know the visuals are legally safe.”

Best Practices for CMOs When Using AI to Develop Images

Farhall suggests that the starting point is to use “commercially safe” AI tools as they “can help brands create at a higher level in line with their unique needs, but aren’t a replacement for authentic, real-life imagery. CMOs and their marketing teams need to determine whether AI is the right tool for the job based on the audiences they are trying to reach and the message they want to land. Knowing when it’s appropriate to use AI-generated content, and perhaps more importantly when it’s not, can help marketers protect their brands’ reputation and maintain trust with consumers.

Where brands have formed trusted relationships with their customers based on authenticity, there may be elevated risk in using AI visuals, or in not being transparent about that use. Consider how worried you would be if your audience learned you are using AI to communicate with them. Ask yourself to what degree you are comfortable being transparent about that. If you are very worried about your customers’ potential reaction, it may call into question whether AI is the right tool for that project.

Marketers should also use pre-shot or custom-photographed content where they need to disclose the ages or identifying information of the models in the images; for example, liquor brands have to confirm that all models employed in a marketing campaign are over the age of 25, and you can’t confirm this in an AI-generated visual. In contrast, we can provide detailed model releases for all pre-shot creative visuals, including to confirm the ages of the people in those images.

Prompting an AI generator takes time and creative thinking. Users and brands should ensure that their AI generations belong entirely to them and that they won’t be recycled into the training dataset for the tool they’re using or offered as stock creative assets for sale to other users. This keeps the integrity of their brand intact, as well as the integrity of their hard work.

Marketing teams should also consider transparency when it comes to outputs. According to Getty Images’ VisualGPS data, 87% of consumers believe brands should disclose whether an image has been generated by AI. However, there are currently no laws in place requiring this, leaving it up to brands and marketers to use their best judgment and practice responsible disclosure until regulations can match the pace of AI tool development.”

In general, Farhall’s overall advice is to be “extremely discerning about the AI tools you’re using and to work with AI vendors that are fully transparent about their training data and processes, usage rights, and legal indemnification. Brands should be able to create in elevated ways that save them time, money and risk, and there should not be a tradeoff between creativity and protection.”

Join the Discussion: @KimWhitler
