A while ago, I was asked about my thoughts on music sampling, a common practice in song creation in which a part of an existing song is reused, often as a loop, in a new song.
We were discussing the merits of sampling and its impact on copyright and rightful ownership, especially since the practice has grown increasingly pervasive, particularly in hip hop and electronic music.
I posited that it was fine as long as the portion of the song used is widely recognisable, so anyone who hears the sample will recognise its origins. This means the original songwriter still gets a nod of acknowledgement, even if a passive one, for their work and creativity.
Just a sidenote here: songwriters do receive royalties if their music is used in a sample, regardless of the sample's length. Their permission for such use must be obtained and royalties negotiated; using a sample without prior consent is a copyright violation.
I see sampling as just another form of music creation and, in a way, an homage to a tune that is so instantly recognisable that its original creator still gets due credit.
The grey area for me is when samples of lesser-known songs are meshed together to create a piece of music that is presumed to be an entirely new creation, because it is a sum of parts less familiar to the general public.
To me, it’s almost akin to cheating since you’re passing off someone else’s work as your own, even if you paid for the rights and have their permission to do so.
So, sample a really well-known tune and spin something new out of that, and you're alright in my books. At least, that's what I thought until generative artificial intelligence (GenAI) entered the game.
How far is too far?
In March 2025, OpenAI showcased GPT-4o's ability to generate images, including illustrations emulating the style of anime films created by Studio Ghibli and its co-founder, Hayao Miyazaki.
It triggered an avalanche of Ghibli-inspired images online, alongside criticism that giving everyone and anyone the ability to imitate its work is disrespectful to the original creators.
It also treads into the still-murky waters of copyright surrounding the use of online data to train large language models (LLMs), such as those behind ChatGPT.
The ChatGPT-fuelled Ghibli mania also prompted an old clip of Miyazaki to resurface online, in which he went on a brief rant after watching an animated creature generated by AI. He described the AI-generated character as “an insult to life itself” and expressed disgust at its creation, presumably because it displayed poor artistic form.
Granted, this was way back in 2016, when AI did not have the training benefit of LLMs and was far less capable than today's GenAI of generating, and recreating, intricate artworks.
I’d be curious to know if Miyazaki feels the same way today and whether he feels his work is still uniquely his, now that it can be so easily imitated, or “inspire” similar work, depending on how one chooses to see it.
And would Studio Ghibli have approved had it been asked to grant explicit permission for its creations to be used as AI training material, and had it received royalties from all those ChatGPT-generated illustrations?
Herein lies the crux of the debate between AI and creative works: There currently is no clarity over the legal use of data, including the works of creators such as musicians and artists, in training LLMs and, subsequently, in AI products trained on these models.
The UK government just this week rejected proposals to compel AI vendors to reveal the data they use to train their AI models. It said “no changes” to its copyright laws would be considered until it is “completely satisfied they work for creators”, according to a BBC report.
Well-known UK singer-songwriter and musician Elton John criticised the decision, calling the government “absolute losers” for exempting tech companies from copyright laws.
The proposed legislation had called for transparency requirements to be added to the UK’s Data (Use and Access) Bill, ensuring copyright owners have given permission for their work to be used.
The House of Commons rejected the amendment, which John said robbed young artists of their income and legacy.
Permitting AI vendors to continue using artists’ content without paying would be “committing theft, thievery on a high scale”, he said. “It’s criminal in that I feel incredibly betrayed,” he noted, adding that young artists lacked the resources to fight big tech firms.
John vowed to “fight it all the way” and take ministers to court if the government did not change course.
Businesses cannot afford to stall on AI transparency
Organisations, though, should not regard this legislative inertia as licence to do nothing, especially as they ramp up their adoption of AI.
A May 2024 study from IDC estimated that spending on AI in Asia-Pacific would climb 28.9% to US$90.7 billion by 2027, with 84% of organisations anticipating GenAI would give their business a significant competitive edge.
Despite that optimism, GenAI would account for just 19% of the region's AI spend, with the remaining 81% going towards predictive and interpretative AI applications, IDC noted.
"To truly bring AI everywhere, the technologies used must provide accessibility, flexibility, and transparency to individuals, industries, and society at large," Alexis Crowell, Intel's Asia-Pacific Japan CTO, said in the IDC report. "As we witness increasing growth in AI investments, the next few years will be critical for markets to build out their AI maturity foundation in a responsible and thoughtful manner."
In particular, laws that fail to keep up with evolving business requirements may end up hurting innovation.
Some 81% of business leaders say unclear government regulations hinder AI investment and implementation, resulting in delayed adoption, according to a February 2025 report by NTT Data. The study looked at insights from 2,300 C-suite executives and decision makers across 34 countries.
And while 89% are concerned about AI security risks, just 24% of CISOs believe their organisations have a strong framework to balance AI risk and value creation.
In addition, 67% say their employees do not have the skills to work effectively with AI and 72% say they do not have an AI policy in place to guide responsible AI use.
NTT Data CEO Abhijit Dubey said in the report: "AI’s impact will only grow, but without decisive leadership, we risk a future where innovation outpaces responsibility, creating security gaps, ethical blind spots, and missed opportunities.
“By embedding responsibility into AI’s foundation -- through design, governance, workforce readiness, and ethical frameworks -- we unlock AI’s full potential, while ensuring it serves businesses, employees, and society at large equally,” Dubey added.
And as the NTT Data study highlights, this also needs to encompass clear legislation to help companies better navigate the AI landscape.
In fact, 86% believe data privacy laws have a positive impact on their organisations, according to Cisco’s 2025 Data Privacy Benchmark Study, which polled 2,600 security and privacy professionals across 12 countries.
Furthermore, despite the costs associated with compliance, 96% say the returns outweigh the investments, the report noted, while 99% expect to reallocate resources from privacy budgets to AI initiatives in the future.
"Privacy and proper data governance are foundational to responsible AI," Cisco’s chief legal officer Dev Stahlkopf said in the report. "For organisations working towards AI readiness, privacy investments establish essential groundwork, helping to accelerate effective AI governance."
Establish AI transparency to earn user trust
Organisations that fail to establish proper AI governance, and to ensure their AI initiatives are built on data transparency and fair use, risk losing the trust of their customers and users.
They also face potential copyright infringement lawsuits should content creators decide to take issue directly with businesses while governments stall on AI legislation and legal reforms.
Organisations need to provide guidelines on how their AI tools are trained and how data is used. They should also have their LLM partners commit to the same level of transparency, to support their own AI governance frameworks.
Whether it’s music sampling or digital creations inspired by real-life artists, AI doesn’t have to be seen as a necessary evil, but as a welcome catalyst that brings new layers of creativity.
The technology can elevate art forms and expose them to new audiences. However, AI-generated art must be built on the express consent of its original creators, who should receive due compensation.
Otherwise, AI's potential to augment even abstract creative forms may remain nothing more than a doodle on a napkin.