
Imagine you’re an artist who spends countless hours creating original work. One day, you find what appears to be one of your paintings for sale online without your approval. On closer inspection, it turns out to be an AI-generated replica, eerily similar to your own work, and someone else is monetizing it. You contact the platform, but because the image is AI-generated, it falls into a legal gray area.
This played out on a global scale recently, when OpenAI released an update to the models underlying ChatGPT that could apply a “Ghibli” style to any image you uploaded. Studio Ghibli is a Japanese animation studio co-founded by Hayao Miyazaki, who directed one of my favorite animated films, Ponyo. Since the OpenAI models were likely trained on Studio Ghibli’s films, the resulting images reproduced the style with striking fidelity.
Miyazaki himself has famously called AI-generated animation an “insult to life itself.”

Another recurring ethical concern in AI systems is bias. A few years ago, Amazon scrapped an AI-powered recruiting tool after discovering that the algorithm had learned to favor male candidates from past hiring data. Instead of eliminating bias against women, the AI reinforced it. Given that a career opportunity can change the course of a person’s life, this is a stark reminder of the consequences AI can have.
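To make the mechanism concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the data is synthetic, the biased hiring thresholds are invented, and the “model” is a toy frequency-based scorer rather than anything resembling Amazon’s actual system. It simply shows how a model fitted to historically biased decisions ends up scoring two equally skilled candidates differently:

```python
# A minimal, illustrative sketch: a toy "model" trained on synthetic,
# historically biased hiring data reproduces that bias. Nothing here
# reflects any real company's system.
import random
from collections import defaultdict

random.seed(42)

# Hypothetical historical bias: at equal skill, women needed a higher
# score than men to be hired.
def past_decision(skill, gender):
    threshold = 60 if gender == "M" else 75
    return skill >= threshold

# Synthetic history: (skill 0-100, gender) pairs with past outcomes.
history = [(random.randint(0, 100), random.choice("MF")) for _ in range(10_000)]
labels = [past_decision(skill, gender) for skill, gender in history]

# "Train" by recording the observed hire rate per (gender, skill band).
counts = defaultdict(lambda: [0, 0])  # (gender, band) -> [hires, total]
for (skill, gender), hired in zip(history, labels):
    key = (gender, skill // 10)
    counts[key][0] += int(hired)
    counts[key][1] += 1

def model_score(skill, gender):
    hires, total = counts[(gender, skill // 10)]
    return hires / total if total else 0.0

# Two candidates with identical skill get very different scores.
print("Skill 70, male:  ", round(model_score(70, "M"), 2))   # ~1.0
print("Skill 70, female:", round(model_score(70, "F"), 2))   # ~0.5
```

The point is not the toy math but the pattern: the model never “decides” to discriminate; it faithfully reproduces whatever bias the historical labels contain.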
Highly realistic AI-generated images and videos can also be used to deceive people and spread misinformation. Imagine watching a video of a sitting President or Prime Minister declaring war, only to find out that the video was entirely fabricated. This isn’t science fiction; it’s already reality. Deepfakes of influential people have circulated in recent years, making it harder to distinguish fact from fiction.
Privacy is another ethical issue in AI. Most of us are well aware that our technology is constantly listening. As we discuss the weather, mention something we want to buy, listen to music, or watch a TV show, the applications around us pick up these signals, and AI uses that data to tailor the ads and content in our feeds.
AI-powered marketing raises the unsettling question: “How much do companies actually know about us, and how do we regain our privacy?” Living in a world where applications and their underlying AI track our movements, conversations, and purchases puts that privacy at constant risk.
Autonomous and robotic AI across industries also raises questions of moral responsibility. When things go wrong, who is accountable? If a self-driving vehicle strikes a pedestrian, who is at fault? This dilemma has already occurred and will occur again. Robotic systems will increasingly take over surgery, transportation, manufacturing, farming, cooking, and more, and we need to know who answers for their actions, particularly when something goes wrong. These scenarios highlight the need for clear ethical guidelines on AI responsibility.
AI is not inherently bad. It’s a remarkable tool that can change the world for the better. Some countries and regions are creating legislation and policies to regulate it, such as the Artificial Intelligence Act in the EU. But at the end of the day, the burden falls on us, as developers, users, and advocates, to develop it responsibly.
