Exploitation of Large Language Models (LLM)

by Melinda Baylon
April 19, 2024

Large language models (LLMs) have been the talk of the town for the past few years as natural language processing chatbots gained mainstream prominence. An LLM, as defined by Gartner, is “a specialised type of artificial intelligence (AI) that has been trained on vast amounts of text to understand existing content and generate original content.”

LLMs are trained on large volumes of text drawn from varied data collections, which enables them to interpret prompts and generate natural, human-like output. They have become prominent helpers for use cases such as content creation and optimisation, research, and competitor analysis.

Statistics show that the global LLM market is set to grow at a 79.8% CAGR, from $1.59 billion in 2023 to $259.8 billion in 2030. By 2025, 750 million apps are expected to use LLMs, automating around 50% of digital work.

As LLMs transform technology and workflows, a question persists: what happens when they are exploited?

How LLMs are exploited

“AI is only as good as the data it is trained on; much of it is un-curated or unverified,” warned Demetris Booth, director of Product Marketing (APJ) at Cato Networks.


He noted that LLMs are advanced AI algorithms “trained on a considerable amount of internet-sourced text data to generate content, provide classification, analysis, and more.”

However, manipulated content can confuse and defraud unsuspecting victims. Below are some of the most common methods of LLM exploitation, according to Booth. 

Deepfakes. “Deepfakes are prime examples of this because the main objective is to deceive by spreading false information and confusion about important or sensitive issues,” he explained. He added that some misleading political campaigns deploy this technique. 

The Asia Pacific (APAC) region experienced a 1,530% surge in deepfake cases from 2022 to 2023, making it the region with the second-highest number of deepfake cases in the world.

Bias and misrepresentation. “Much of the data LLMs are trained on contains real-world biases, which are then embedded in the LLM results. This has the potential to deepen the burdens faced by disadvantaged or marginalised groups,” he said. Booth warned this may lead to hiring discrimination and low-quality services.


Fake data breaches. Booth explained that hackers can prompt an LLM such as ChatGPT to generate realistic-looking databases filled with fake data. Malicious players can then claim the data was stolen from legitimate companies and sell it.

"This was used earlier this year, claiming to have stolen 50 million customer data records from Europcar. This type of attack could happen to any organisation in APAC and worldwide and will become more prominent over time as methods refine and mature,” Booth said. 

Malicious prompt instructions. Booth said malicious players can inject the model with confusing or misleading instructions.

 "A common exploit of this type is an SQL Injection-like attack where the attacker forces applications to perform risky tasks that alter the main functions of these applications. Unlike in SQL Injections, however, an infinite number of prompts can manipulate the LLM into illegal actions because these prompts are in a natural language format,” he explained. 

Data Leakage. “Many website chatbots hide sensitive data. The data from these applications can be stolen by submitting specific, detailed requests for information contained in specific rows and tables of the database. This can force the backend database to transmit specific, highly sensitive information,” Booth pointed out.
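
A short sketch of that leakage pattern, assuming a chatbot whose backend simply runs whatever query the prompt implies; the table, columns, and allow-list below are invented for illustration.

```python
# Sketch of the data-leakage pattern: a chatbot backed by a database returns
# whatever its query produces unless access is constrained up front.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, email TEXT, card_last4 TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ana', 'ana@example.com', '4242')")

ALLOWED_COLUMNS = {"name"}  # the only field the chatbot may ever reveal

def leaky_answer(requested_columns: list[str]) -> list:
    # Vulnerable pattern: the backend fetches exactly what the prompt asked
    # for, including highly sensitive columns.
    cols = ", ".join(requested_columns)
    return conn.execute(f"SELECT {cols} FROM customers").fetchall()

def guarded_answer(requested_columns: list[str]) -> list:
    # Mitigation sketch: allow-list columns before the query ever runs.
    cols = [c for c in requested_columns if c in ALLOWED_COLUMNS]
    if not cols:
        return [("request refused: sensitive fields",)]
    return conn.execute(f"SELECT {', '.join(cols)} FROM customers").fetchall()

if __name__ == "__main__":
    asked = ["name", "email", "card_last4"]   # what a crafted prompt might demand
    print(leaky_answer(asked))    # leaks email and card digits
    print(guarded_answer(asked))  # returns only the permitted column
```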

Dangers of LLM exploitation 

“LLMs can generate malicious instructions for illicit purposes, which presents a major security risk to many organisations,” said Booth.

He added that malicious players use LLM to build malware attacks targeted at specific organisations and data. 

“By simply typing specific instructions of what this malware should do, most sophisticated attackers only need to refine certain lines of the code and then launch their attacks against a company,” he warned.

Aside from that, LLMs carry the possibility of compromising intellectual property (IP) and are prone to plagiarism issues as they can generate content similar to existing, sensitive, or proprietary material. 

“This raises legal concerns about copyright infringement, something that we are witnessing in the entertainment industry today,” explained Booth. 

Policies CIOs can propose 

As enterprises build LLMs into common business applications, Booth reminded companies to set ground rules for their use. Chief Information Officers can propose policies to ensure safety while deploying LLMs within an organisation.

Acceptable results policies. “Under this policy, any LLM-produced result must be accompanied by a reasoned explanation. This is an important requirement for certain regulated industries like FSI, Government, Healthcare, etc., where well-developed baselines exist for what is considered valid information,” he said. Booth highlighted that this provides a peer review-like process for LLM methodologies.
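
One way such a policy might be enforced in application code is to refuse any LLM answer that does not carry a reviewable explanation. The sketch below is a hypothetical gate; the field names and threshold are assumptions, not a standard.

```python
# Sketch of an "acceptable results" gate: an answer is only accepted if it
# arrives with a reasoned explanation that a human reviewer can audit.
from dataclasses import dataclass

@dataclass
class LLMResult:
    answer: str
    explanation: str   # the model's stated reasoning or cited sources

def accept(result: LLMResult, min_explanation_chars: int = 40) -> bool:
    """Reject any result whose explanation is missing or too thin to review."""
    return len(result.explanation.strip()) >= min_explanation_chars

if __name__ == "__main__":
    ok = LLMResult("Claim approved",
                   "Policy 12.3 covers water damage; invoice matches the claim amount and dates.")
    bad = LLMResult("Claim approved", "")
    print(accept(ok), accept(bad))   # True False
```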


Data confidentiality principle. “Whether it is Intellectual Property (IP) or Personal Identifiable Information (PII), no LLM should have access to this data. This may require operating LLMs within an air-gapped environment, in a micro-segmented network, or behind strict Data Loss Prevention (DLP) technology,” he explained. 
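
As one small piece of such a setup, a DLP-style outbound check could block prompts that contain obvious PII before they reach an external LLM. The patterns below are illustrative and far from exhaustive; a production DLP product would go much further.

```python
# Minimal sketch of a DLP-style outbound check on prompts bound for an LLM.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US SSN-style number
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),          # possible payment card number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email address
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain PII and must be blocked."""
    return not any(p.search(prompt) for p in PII_PATTERNS)

if __name__ == "__main__":
    print(is_safe_to_send("Summarise our Q3 roadmap themes"))           # True
    print(is_safe_to_send("Refund card 4111 1111 1111 1111 for Ana"))   # False
```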

Reproduction testing policy. Before deployment, Booth said that organisations must test LLM results for "accuracy, fairness, bias, and many more test points and be reproducible". He added that tests must be public and verifiable to guarantee that the organisation is mitigating any potential risk of misuse.
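
A reproduction test could look like the sketch below: run the same prompt several times with deterministic settings and fail if the answers diverge. query_model is a hypothetical wrapper around whatever model the organisation deploys.

```python
# Sketch of a reproduction test for LLM output under deterministic settings.
def query_model(prompt: str, temperature: float = 0.0, seed: int = 7) -> str:
    """Placeholder for a deterministic call to the deployed model."""
    return f"deterministic answer for: {prompt}"   # stub so the sketch runs

def test_reproducible(prompt: str, runs: int = 3) -> bool:
    answers = {query_model(prompt, temperature=0.0, seed=7) for _ in range(runs)}
    return len(answers) == 1   # identical output on every run

if __name__ == "__main__":
    print(test_reproducible("Classify this transaction as fraud / not fraud"))
```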

Mitigating the adverse effects of LLM exploitation

“As the human factor is probably the most important consideration in security, employees must be vigilant when using LLM tools in the workplace and think critically about every phase of their use,” Booth said.

As a start, Booth reminded that not all output generated by LLMs is accurate. He said it is best to verify and cross-reference results with multiple sources to ensure accuracy. He added that employees should inform the IT/Security team of any suspicious or false generated results for further investigation.

Booth listed three steps for security teams to protect their organisations against LLM exploitation. 

“First, identify trusted apps and user groups using LLMs, then categorise and manage the desired level of access for these. Second, protect data by using classifiers to identify sensitive documents in real time and address any gaps that exist in traditional data protection strategies. Lastly, use text-blocking policies to protect any sensitive document an employee may upload to an LLM like ChatGPT for review or improvement ideas. Doing so would violate company privacy policies and should be blocked,” Booth listed. 
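
The sketch below ties those three steps together in simplified form: an allow-list of apps and user groups, a naive sensitivity classifier, and a blocking decision before any text is uploaded to an external LLM. The group names, apps, and keywords are illustrative assumptions only.

```python
# Sketch of the three steps: (1) allow-list which apps and groups may use an
# LLM, (2) classify the text's sensitivity, (3) block sensitive uploads.
TRUSTED = {("marketing", "chatgpt"), ("engineering", "internal-llm")}
SENSITIVE_KEYWORDS = {"confidential", "source code", "customer list", "salary"}

def classify(text: str) -> str:
    lowered = text.lower()
    return "sensitive" if any(k in lowered for k in SENSITIVE_KEYWORDS) else "public"

def may_upload(group: str, app: str, text: str) -> bool:
    if (group, app) not in TRUSTED:          # step 1: access control
        return False
    return classify(text) != "sensitive"     # steps 2-3: classify, then block

if __name__ == "__main__":
    print(may_upload("marketing", "chatgpt", "Draft a product launch tweet"))            # True
    print(may_upload("marketing", "chatgpt", "Review this confidential customer list"))  # False
```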

Booth said that preventing LLM exploitation requires a concerted effort by the entire organisation and a multi-pronged approach.

“Preventing these requires a combination of technology, policies, and user awareness. This is the only way to prevent LLM misuse and exploitation, thus protecting the broader organisation,” he said. 

Tags: Artificial Intelligence, Cato Networks, cybersecurity, large language model (LLM)
Melinda Baylon

Melinda Baylon joins Cxociety as editor for FutureCIO and FutureIoT. As editor, she will be the main editorial contact for communications professionals looking to engage with the aforementioned media titles.

Melinda has a decade-long career in the media industry and served as a TV reporter for ABS-CBN and IBC 13. She also worked as a researcher for GMA-7 and a news reader for Far East Broadcasting Company Philippines.

Prior to working for Cxociety, she worked for a local government unit as a public information officer. She now ventures into the world of finance and technology writing while pursuing her passions in poetry, public speaking and content creation. 

Based in the Philippines, she can be reached at [email protected]
