
The Hidden Dangers of Publicly Accessible LLMs: A Case Study on Gab AI

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

In recent years, the use of large language models (LLMs) for information retrieval has become as ubiquitous as using search engines like Google. However, the widespread adoption of these sophisticated AI models, such as Gab AI, introduces significant risks due to their open-source nature and massive scale. Gab AI promotes itself as an unbiased platform, yet it provides an opportunity for users to exploit the LLM for malicious purposes. This paper explores the literature surrounding the malevolent use of LLMs and investigates how open-source platforms like Gab AI can be manipulated to generate harmful content, orchestrate attack plans, and more. By examining the potential misuse of readily accessible LLMs like Gab AI, which, unlike many nefarious tools, do not require access via the dark web, this study aims to foster awareness and prompt discussions on mitigating the risks associated with these powerful technologies.

Original language: English
Title of host publication: Digital Forensics and Cyber Crime - 15th EAI International Conference, ICDF2C 2024, Proceedings
Editors: Sanjay Goel, Ersin Uzun, Mengjun Xie, Sumantra Sarkar
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 312-330
Number of pages: 19
ISBN (Print): 9783031893629
DOIs
State: Published - 2025
Event: 15th EAI International Conference on Digital Forensics and Cyber Crime, ICDF2C 2024 - Dubrovnik, Croatia
Duration: Oct 9, 2024 - Oct 10, 2024

Publication series

Name: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Volume: 613 LNICST

Conference

Conference: 15th EAI International Conference on Digital Forensics and Cyber Crime, ICDF2C 2024
Country/Territory: Croatia
City: Dubrovnik
Period: 10/9/24 - 10/10/24

Keywords

  • Attack Prompts
  • Cybersecurity
  • Malicious LLMs

