AI, Copyright, and the Cost of Uncertainty

As an AI ethics advocate, I am often confronted with a deceptively simple question: has the law caught up with generative artificial intelligence? The short answer is no. What we currently have is not a coherent legal framework, but a collection of evolving interpretations that attempt to retrofit decades-old copyright principles onto a technology that was never anticipated. The danger lies in the growing assumption that uncertainty is the same as permission.

Kazakhstan provides an illuminating, though imperfect, example of how lawmakers are attempting to navigate this terrain. Under its approach, training an AI model on copyrighted material is permitted unless the rights holder has explicitly restricted such use. This reflects a practical reality: most artists, writers, and creators produced their work without any expectation that it could one day be absorbed into a machine learning system. As a result, the vast majority of copyrighted material carries no prohibition against AI training. Treating training as non-infringing by default may seem reasonable, but the reliance on an opt-out mechanism also underscores how novel and unresolved these questions remain.

However, focusing solely on training risks missing the more consequential issue: output. In my own testing of generative models, I intentionally used generic prompts that avoided proper names or trademarks. Requests for a cartoon image of an Italian plumber or a yellow electric mouse produced results that were instantly recognizable as copyrighted characters. The model filled in the gaps based on its training data, not on any explicit instruction to reproduce protected intellectual property.

This is where copyright risk becomes both subtle and pervasive. Users may believe they are acting responsibly by avoiding direct references, yet still generate infringing content. Intent, in these scenarios, offers little protection: in most jurisdictions, direct infringement is assessed on the output itself, not on the user's mindset. The result is a situation in which infringement can occur unknowingly, driven by probabilistic reconstruction rather than deliberate copying.

Artistic style introduces another layer of complexity. Copyright law has long held that style itself is not protectable; only specific works are. In theory, this principle should resolve the issue. In practice, generative AI has altered the scale and precision of stylistic imitation. A prompt that names a famous artist or studio can now produce images that closely mirror a recognizable aesthetic. While this may be legally permissible, it challenges long-standing assumptions about authorship, originality, and creative labor.

The ethical discomfort surrounding this practice is not abstract. Many artists view AI systems as tools that appropriate their creative identity without consent or compensation. The law may not recognize ownership over style, but the cultural and professional consequences are real. The fact that an artist is deceased or that their work has entered the public domain does little to resolve these tensions.

For businesses, these unresolved questions translate into tangible risk. Generative AI is increasingly embedded in marketing, design, and content workflows, often with the expectation of efficiency and scale. Yet the legal ambiguity surrounding both training data and outputs demands additional layers of oversight. Compliance reviews, internal audits, and risk assessments are becoming necessary components of responsible AI use.

These safeguards come at a cost. In some cases, the time and resources required to manage legal exposure may erode the productivity gains that initially justified AI adoption. More importantly, organizations that treat AI as a legally neutral tool risk reputational damage and liability as enforcement and litigation evolve.

The reality is that the rules governing AI and copyright are still being written. Kazakhstan’s approach is one experiment among many, not a global standard. Until clearer guidance emerges, businesses and creators must resist the temptation to equate technological capability with legal permission. In the current environment, caution is not a barrier to innovation; it is a prerequisite for sustainable use.


References:

Kazakhstan Approves Artificial Intelligence (AI) Law: Key Provisions on Copyright and AI 

https://www.dentons.com/en/insights/alerts/2025/november/3/kazakhstan-approves-artificial-intelligence-law


About Me:

Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.

If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.

Follow Doc Ligot on Facebook: https://facebook.com/docligotAI