This article is not about what you can do with AI-generated images. It is about what you should do. The legal questions — copyright, commercial rights, platform rules — are covered in The Legal Layer. This article is about the ethical ones.
Most AI art courses skip this entirely. They teach the tools and assume the ethics will sort themselves out. They will not. The EU AI Act takes full effect in August 2026. Platform disclosure rules are tightening quarterly. And the question of whether this technology was built fairly is not going away.
You do not need to have all the answers before you generate your first image. But you do need to know what the questions are.
Disclosure: say what it is
When you publish an AI-generated image, say so.
Not because every platform requires it (though more do every month). Because it is honest. Because your audience deserves to know what they are looking at. And because the creators who disclose now will be trusted later — when disclosure becomes mandatory everywhere, the people who were already doing it will have credibility. The ones who hid it will have a problem.
What to say
Keep it simple. You do not need a legal disclaimer. A line is enough:
- “Created with AI (Midjourney V7)”
- “AI-generated image, prompted and curated by [your name]”
- “Made with Nano Banana Pro. Prompt, direction, and selection are mine.”
The point is not to apologise. It is to be transparent about the process. Many of the most respected AI creators disclose prominently and treat it as a badge of craft, not a confession.
On social media, the disclosure can be part of the story: “I described this scene to Nano Banana Pro and it took four attempts to get the light right. Here’s the version that worked.” That is disclosure and content at the same time — it shows the craft, invites conversation, and is honest about the process.
Where disclosure is required (April 2026)
These are the current rules. They change frequently — check before you publish.
EU AI Act (August 2026): AI-generated content must carry machine-readable labels. Providers of AI tools must build detection markers into their outputs. Individuals publishing AI content must label it as artificially generated. This applies to anyone publishing to an EU audience, regardless of where you live. One established way to meet the machine-readable requirement is embedded metadata; see the sketch after this list of platforms.
Instagram and Facebook (Meta): AI-generated content posted without disclosure may be labelled automatically by Meta’s detection systems. The platform recommends voluntary labelling and adds its own “Made with AI” tag when detected.
TikTok: Requires creators to label AI-generated content using the built-in disclosure toggle. Four undisclosed offences result in a permanent monetisation ban.
YouTube: Requires disclosure of “altered or synthetic” content that could be mistaken for real. An AI-generated video of a realistic event, person, or place must be labelled.
LinkedIn: No formal AI disclosure policy as of April 2026, but the community norm is shifting toward transparency. Posts later discovered to use undisclosed AI-generated images tend to draw reputational backlash.
Etsy, Redbubble, Amazon (print-on-demand): Require disclosure that listings contain AI-generated content. Etsy specifically requires it in the listing description. Failing to disclose risks listing removal or account suspension.
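What does a “machine-readable label” look like in practice? One established mechanism is embedded metadata: the IPTC standard defines a Digital Source Type value, trainedAlgorithmicMedia, that marks an image as generative AI output, and automated detection systems look for signals of this kind. Here is a minimal sketch that writes the tag using ExifTool, assuming ExifTool is installed on your machine and you are calling it from Python:

```python
# Sketch: embed an IPTC "Digital Source Type" label marking an image as
# AI-generated, by shelling out to ExifTool (installed separately).
# The tag value is the standard IPTC NewsCodes URI for generative AI output.
import subprocess

AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def label_as_ai_generated(image_path: str) -> None:
    """Write a machine-readable AI-disclosure tag into the image file."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={AI_GENERATED}",
            "-overwrite_original",  # skip ExifTool's backup copy
            image_path,
        ],
        check=True,  # raise if ExifTool reports an error
    )

label_as_ai_generated("my-generated-image.jpg")
```

Two caveats: many platforms strip or rewrite metadata on upload, and an embedded tag is invisible to human readers. Treat it as a complement to a visible caption, not a replacement.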
The simple rule
When in doubt, disclose. It costs nothing and protects everything.
Consent: whose face is that?
AI image models can generate faces that look like real people. Sometimes intentionally — you uploaded a reference photo. Sometimes accidentally — the model’s training data included photographs of real humans, and statistical echoes surface in outputs.
Three situations to think about.
Using someone else’s face as a reference
If you upload a photograph of a real person as a reference image and generate AI images of them, you are creating synthetic media of that person without their consent. In many jurisdictions, this is already illegal for sexual content. The US TAKE IT DOWN Act (May 2025) makes non-consensual intimate AI imagery a federal crime. The UK Criminal Justice Act makes creating sexually explicit deepfakes punishable by up to two years in prison. For non-sexual content, the law is less clear — but the ethics are not.
Ask yourself: would this person be comfortable seeing this image? If the answer is “I don’t know” or “probably not,” do not publish it.
Public figures have more limited privacy protections, but likeness rights still apply. Generating AI images of a celebrity for commercial use — selling prints, using them in advertising, creating product mockups — exposes you to legal action in most jurisdictions.
Using your own face
Your face, your choice. Using your own photograph as a reference image is straightforward ethically. Just be aware that when you upload your face to an AI service, you are giving that service access to your biometric data. Read the terms of service. Know whether the platform stores your reference images and whether they can use them for training.
Accidental likeness
Sometimes an AI output just looks like someone famous. This is a statistical coincidence, not a deliberate creation. If you notice the resemblance, regenerate. If you publish it without noticing and someone points it out, take it down. The intent matters less than the impact.
Training data: where did this come from?
Every AI image model was trained on images made by human artists, photographers, and designers. Billions of them. In most cases, those creators were not asked for permission and were not compensated.
This is the foundational ethical tension of AI image generation. It is not resolved. It may never be fully resolved. But you should know it exists.
What happened
Models like Stable Diffusion were trained on the LAION-5B dataset — 5.85 billion image-text pairs scraped from the public web. This included copyrighted photographs, artwork, medical images, and personal photos. The creators of those images largely did not consent to their work being used for training.
Midjourney, OpenAI, and Google have been less transparent about their training data sources, but lawsuits (including Getty Images v Stability AI, and a class action by artists against Stability AI, Midjourney, and DeviantArt) allege similar practices across the industry.
Where things stand now
Getty Images sued Stability AI in the UK and lost on its primary copyright claims in November 2025 — the court ruled that scraping publicly available images for training was not infringement in that context. The US case continues. Artists used Spawning’s Have I Been Trained tool to opt 80 million images out of Stable Diffusion 3’s training data — roughly 1.4% of the 5.85 billion images in LAION-5B. Stability AI and Hugging Face honour the opt-out registry; most other model providers have not committed to doing so. Some newer models (like Adobe Firefly) are trained exclusively on licensed content.
What you should do
You do not need to stop using AI image tools. But you should:
- Know the source. Understand that the models you use were trained on other people’s work. This is not a secret, but it is often glossed over.
- Credit the tool. Saying “made with Midjourney” or “generated via Nano Banana Pro” is basic transparency. It acknowledges the process.
- Support the artists. If AI tools accelerate your work, consider putting some of that saved time or money back into the creative community — buying prints, commissioning illustrators for work AI cannot do, supporting organisations that advocate for artist rights.
- Stay informed. The legal and ethical landscape is changing fast. What is acceptable practice today may not be in six months. The Legal Layer tracks the legal side; this article tracks the ethical side.
The checklist
Before you publish any AI-generated image or video, run through these five questions:
- Did I disclose? If someone asks “was this made with AI?” would they already know the answer from what I published?
- Whose face is in this? If it is based on a real person, did they consent? If it looks like a real person by accident, would they be upset?
- Would I be comfortable if the process were visible? If someone could see my prompt, my reference images, and every step of my workflow, would I be embarrassed?
- Am I following the platform’s rules? Each platform has its own disclosure requirements. Did I check?
- Am I being honest about what this is? Not “is this technically legal” but “am I being straight with the people who will see this?”
Five questions. Ten seconds. Run through them every time.
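None of this requires software, but if your publishing runs through any kind of automated pipeline, the checklist can double as a pre-publish gate. A minimal sketch in Python, with hypothetical field names (nothing here is a real platform API): record the answers alongside each asset and refuse to publish on any “no”:

```python
# Sketch: the five-question checklist as a pre-publish gate for an
# automated pipeline. Field names are hypothetical, not a real API.
from dataclasses import dataclass

@dataclass
class PublishCheck:
    disclosed: bool               # 1. Did I disclose?
    faces_cleared: bool           # 2. Whose face is in this? Consent sorted?
    process_defensible: bool      # 3. Comfortable if the process were visible?
    platform_rules_checked: bool  # 4. Did I check the platform's rules?
    honest_framing: bool          # 5. Am I being straight with the audience?

    def ready_to_publish(self) -> bool:
        """True only when every question has been answered 'yes'."""
        return all(
            (
                self.disclosed,
                self.faces_cleared,
                self.process_defensible,
                self.platform_rules_checked,
                self.honest_framing,
            )
        )

check = PublishCheck(True, True, True, True, False)
assert not check.ready_to_publish()  # a single "no" blocks publication
```

The design point is that the gate fails closed: one negative or unanswered question blocks publication, which is exactly how the manual version should work too.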
This conversation is not over
Ethics in AI image generation is not a solved problem with a checklist answer. It is an evolving conversation — between creators, platforms, lawmakers, artists whose work was used in training, and the audiences who consume what we make.
The fact that you are reading this article means you care about getting it right. That matters. The creators who think about these questions produce better work, build more trust, and are better prepared for the regulatory changes that are coming.
Start making things. Be transparent about how you make them. And come back to these questions as the answers evolve.
The rest of the Creative Path assumes you have thought about this. Now go make something worth making.