Ask the Expert: What Are the Legal Risks of Using AI for Marketing?
The frenzy over AI—and how it will transform business—has reached a fever pitch in the past year. Brands have been rapidly pressure-testing new tools and processes, aiming to see just how well the new tech can supplement or replace the most time-consuming and repetitive aspects of our jobs.
But while the opportunities before us may seem boundless, we're learning that there are some legal lines we probably shouldn't cross when using artificial intelligence for business (just ask Scarlett Johansson).
The ethical and legal use of AI affects all of us—whether you’re a stakeholder at a brand, an agency partner or a contractor. Yet, despite (mostly) good intentions for the use of artificial intelligence, only about 32 percent of companies have guardrails in place to mitigate the risk of using the technology for business, according to consulting firm McKinsey & Company.
You may now be asking yourself: How can you responsibly use AI to enhance your creative process and maximize your business productivity—without inadvertently falling into a legal trap?
That question was top of mind during our recent Women in Content Marketing Awards (WICMA) Connect event, “AI and Marketing: Balancing Risks with Rewards.”
Masthead cofounder Julie Hochheiser Ilkovitch sat down with Sharon Toerek, intellectual property (IP) attorney and Founder/CEO of Legal + Creative (a national law firm focused on IP and marketing law for marketing agencies) to discuss the legal implications and risks of integrating AI into several aspects of your business. Here, we’ve recapped the most important takeaways.
Q: Who actually owns content created with the help of AI?
A: The truth is, we’re in a state of flux right now. There are some things we know and some we don’t. When it comes to AI, the technology always runs ahead, businesses throw on sneakers and try to keep up, and the law then solves the problems and cleans up the mess left behind—but in arrears.
It’s the U.S. Copyright Office’s position that an entirely machine-created piece of content, whether writing, visual, or audio, is not ownable by anyone. Not by agencies, not by brands; its copyright ownership status is in the ether right now.
But let’s consider this case study. An artist in New York City created a graphic novel and disclosed that she used Midjourney for all of the illustrations; she was initially granted a copyright registration for the whole work. But once the Copyright Office got wind of the AI disclosure, it canceled the registration.
Her lawyer intervened, and the Copyright Office’s ultimate position is this: If you can clearly identify the human-generated content in the work (in this artist’s case, the copy and the arrangement of the illustrations), then you can hold copyright in those elements, but not in the illustrations themselves.
The challenge lies in the mix—how can you tell what is human-generated and what’s machine-generated? Well, we just have to live in a little bit of uncertainty for the time being.
Q: So how is using AI any different from using other tech tools, like Photoshop?
A: At the moment, the consensus really is that “it just is.” As far as the Copyright Office is concerned, we have to accept that if you’ve used GenAI in any of the work that makes it into your deliverables, your copyright position is not going to be pure. Whether that matters to you comes down to your individual business factors, and each party has to think that through.
Q: In addition to ownership issues, what are the risks of using GenAI for brands, agencies, or independent contractors?
A: The risks fall into three different camps.
Using IP that another party owns. Often, you can’t know whether the work produced by GenAI is actually the copyrighted work of some third party until someone makes a claim. For agencies, the risk lies in stating in the contract that the client will own the IP in the work; you can’t represent that if you don’t own it yourself.
Data breaches. Be wary of inputting any proprietary, sensitive, or personally identifying information that could breach a trade secret, violate a confidentiality agreement, or violate data privacy rights. Before feeding information into an open system, understand the information owner’s policies around its use and know the legal considerations around exposing it.
Accuracy. Is the information that you’ve gathered accurate? Is there misinformation or misleading language about the product, service, or value promise? You have to be a fact-checker. A lot of garbage has gone into creating these AI tools, and garbage in is garbage out.
Q: What are industry standards and best practices for responsibly deploying AI into marketing campaigns?
A: First, consider which platforms make the most sense for you to use and whether there are any compliance issues associated with them—including whether they’re open or closed systems.
Then, figure out whether your use case is actually appropriate for that platform. For example, are you using it to create a document or policy for internal use, or to create deliverables the public will see in a campaign? Gauge your risk based on the use case and the platforms you’ve selected.
Q: There’s a lot of gray area surrounding what eventually becomes an end-product. From an ethical standpoint, should you disclose that AI was involved in brainstorming, for example, but not in the content creation process itself?
A: Ideas are free to use unless there’s a nondisclosure agreement between the parties. Unfortunately, there aren’t many ways to protect ideation.
The question then becomes: To what degree is this brainstorming similar to some unknown original work that may have been input into the AI system? The more you iterate, using human intervention and guardrails to build on the brainstorm and make it unique, the less risk you’re assuming.
But if not much work has been done to make the AI output materially different—though again, with open systems you often won’t know until someone calls you on it—that’s a higher-risk activity.
So we go back to: Is our audience smaller? That reduces our risk. Is our use case more discreet? That reduces our risk. The pure IP-lawyer answer doesn’t fit every corner; it’s something we have to gauge on a case-by-case basis.
Q: So what I’m hearing as a content marketer is, be careful, right? Don’t just jump in blindly.
A: Speaking as someone who was an IP lawyer before I turned my attention to marketers and agencies: Be creative, be original. This is a tool that should be additive to the work you’re already doing. It’s not meant to replace human creativity or your company’s insight into industries and verticals. It should help you be more efficient, but it should not replace originality.
Q: In terms of tools, are there certain ones that are better to use than others?
A: Any closed system is going to give you a higher degree of security for originality and non-infringement. You’ll probably have better terms and conditions around indemnification than you will with a large, open system. Closed systems carry lower risk right now.
Familiarize yourself with terms and conditions too so you understand what these platforms are and aren’t promising you. You also have to verify the work product that’s coming out and do your due diligence.
Q: What are the key considerations when communicating with partners around AI usage?
A: As a brand teaming up with an agency or consultancy, you should absolutely inform them if any of the information you’re providing them about your company is the product of GenAI. They need to know this so they can make adjustments on their end as creators of the work.
As the strategic partner, such as an agency or consultancy, first inquire about the brand’s AI usage policies: Ask whether certain platforms or use cases are forbidden. Make disclosures to one another upfront about how you use GenAI so that when you turn deliverables over, it’s clear to what degree an AI tool was used in the creation of that material.
As for public-facing disclosure, that’s a joint decision the client and agency make together before the work sees the light of day. It should be outlined in your contract before anything is created, and everybody should understand who’s assuming what risk before the work goes public.
Q: Should the work that gets produced say on the page that it was even partially created by generative AI?
A: I could advise that, but I don’t think any marketer would do it. And I’m not sure it would reduce their liability if there were any IP infringement. Unless you do it in a cheeky way that’s in alignment with the brand and feels organic, I just don’t see parties doing it. Between the client and the agency, there should be a lot of disclosure back and forth. But for the public, there’s no precedent on this yet from case law or policy, so stick to the marketing and advertising principles that say don’t be misleading or create a false impression of the company or its products.
Q: What language should companies be using in contracts about GenAI, and how granular do we need to get?
A: At the highest level, the agreements between marketers and brands should address the fact that any work or materials generated by an AI platform are not ownable. You can’t assign ownership of something you don’t own, just like you can’t own a stock image that’s incorporated into the work. It’s a very similar concept.
Address liability and indemnification for infringement or any other legal “bad news” that arises as a result of using AI-created work in public-facing campaigns.
As for specific policies about what the agency can or cannot do, those ought to be incorporated by reference in a written policy; that’s more practical than trying to write them into a master service agreement. The AI policy must evolve over time: New platforms are popping up every day. Discuss it regularly if you have a long-term relationship with a client, and update them if it changes. These policies should be dynamic, so constant communication is key.
Q: At Masthead, we’re actively trying to craft our AI policies for employees. For these internal documents, what are the key considerations?
A: As much as you need a client-facing policy, you need an internal-facing policy around your organization's position on the use of generative AI. It might cover:
Which platforms you’re comfortable with your team using, for internal work and/or client-facing work
Who within the company decides whether to adopt a new platform
What guardrails are in place for reviewing AI-assisted work before it goes to the client
What guardrails are in place before the work becomes public-facing
This policy will change from time to time, so have regular, recurring training sessions with your team about it. The goal is not to add more bureaucracy, but to provide a roadmap for your team to follow.
Q: At Masthead, we have a task force dedicated to exploring AI. Active communication is vital.
A: I love this because then you get the point of view of both the envelope pushers and the risk managers. No matter the size of the organization, you can’t have a million decision-makers, but the leaders are not always going to know about the new, shiny tools. A team approach is a perfect way to address this.
Q: The AI scene is moving so fast. How can marketers stay updated on the evolving legal framework?
A: I would highly recommend the Marketing Artificial Intelligence Institute (MAII). Founded by a former agency owner, the organization is, I think, the best navigator and curator for integrating AI into marketing if you’re serious about understanding how your craft is going to be impacted. They have a lot of free resources, too.
It’s also good to work with somebody who stays on top of changes in marketing law in general; they will also be up to speed on the AI laws that are going to impact your work and the risk around it.
Q: How can marketers effectively communicate the dangers of AI use to the C-suite?
A: Know your company’s culture around risk and proceed accordingly. Lead with the potential benefits to the organization. Present an even-keeled case for why you want to use AI, how you want to use it, what the potential business results will be, and what the potential risks are.
The Bottom Line
Every piece of content needs a human touch: the guardrails that are individual to each company and depend on its appetite for risk. As ambassadors of a brand, whether in-house or as a partner, you have a duty to protect that brand, so whatever work is created should be in alignment with brand standards. That is not something AI can do effectively.
Indeed, while AI should make your life easier, it should not (and really, cannot) replace human output. However you choose to incorporate it into your marketing strategy, proceed with caution, exercise due diligence, and remain open and communicative with your partners.
Remember: AI shouldn’t stifle innovation and creativity. It’s a tool in your arsenal. Use it wisely—and ethically!