Hi, I am Delong Li

Building Trustworthy AI

PhD student at the University of Technology Sydney (UTS).
Research Support at the School of Electrical & Data Engineering.
Member of UTS CybeR Lab.

Focusing on Machine Unlearning, AI Security, and Federated Learning.

Get In Touch

01.About Me

I am currently a PhD student at the University of Technology Sydney (UTS), serving as Research Support within the School of Electrical & Data Engineering. I am also proud to be a member of the UTS CybeR Lab.

My research sits at the intersection of privacy, security, and large language models. I am passionate about making AI systems not only more powerful but also more auditable, secure, and compliant with human values.

Research Interests

Machine Unlearning · AI for Security (AI4Security) · Federated Learning · Split Learning · Large Language Models (LLMs)

02.Selected Publications


Published
AuditableLLM: A Hash-Chain-Backed, Compliance-Aware Auditable Framework for Large Language Models
Delong Li, Guangsheng Yu, Xu Wang, Bin Liang

Proposing a framework that makes LLM updates compliance-aware and auditable via hash-chain-backed, tamper-evident logging.

Abstract: Auditability and regulatory compliance are increasingly required for deploying large language models (LLMs). Prior work typically targets isolated stages such as training or unlearning and lacks a unified mechanism for verifiable accountability across model updates. This paper presents AuditableLLM, a lightweight framework that decouples update execution from an audit-and-verification layer and records each update as a hash-chain-backed, tamper-evident audit trail. The framework supports parameter-efficient fine-tuning such as Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA), full-parameter optimization, continual learning, and data unlearning, enabling third-party verification without access to model internals or raw logs. Experiments on LLaMA-family models with LoRA adapters and the MovieLens dataset show negligible utility degradation (below 0.2% in accuracy and macro-F1) with modest overhead (3.4 ms/step; 5.7% slowdown) and sub-second audit validation in the evaluated setting. Under a simple loss-based membership inference attack on the forget set, the audit layer does not increase membership leakage relative to the underlying unlearning algorithm. Overall, the results indicate that hash-chain-backed audit logging can be integrated into practical LLM adaptation, update, and unlearning workflows with low overhead and verifiable integrity.
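The hash-chain mechanism at the core of the framework is easy to illustrate. The Python sketch below is a minimal, illustrative version of tamper-evident audit logging in the spirit of the abstract; the record fields, function names, and SHA-256 choice are my assumptions, not the AuditableLLM implementation.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash for the empty chain


def record_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous link together with the new update record."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(prev_hash.encode("utf-8") + payload).hexdigest()


def append_update(chain: list, record: dict) -> None:
    """Append an update record (e.g. a LoRA fine-tune or unlearning step)."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "hash": record_hash(prev_hash, record)})


def verify_chain(chain: list) -> bool:
    """Third-party verification: recompute every link from the records alone."""
    prev_hash = GENESIS
    for entry in chain:
        if entry["hash"] != record_hash(prev_hash, entry["record"]):
            return False  # chain broken: some record was altered or removed
        prev_hash = entry["hash"]
    return True


# Usage: log two hypothetical updates, then verify integrity.
audit_log: list = []
append_update(audit_log, {"op": "lora_finetune", "step": 1, "ts": time.time()})
append_update(audit_log, {"op": "unlearn", "step": 2, "ts": time.time()})
assert verify_chain(audit_log)
```

Because every entry's hash commits to its predecessor, altering or deleting any past update invalidates all later hashes, which is what lets a verifier detect tampering without access to model internals or raw logs.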
In Progress
A Survey of LoRA-based Machine Unlearning for LLMs: Methods, Taxonomy, and Evaluation
Delong Li, Guangsheng Yu, Xu Wang, Yanna Jiang, Wencheng Yang, Bin Liang, Wei Ni

A comprehensive survey categorizing current methodologies in parameter-efficient unlearning with taxonomy and evaluation standards.

Abstract: Machine unlearning is becoming a practical requirement for deploying large language models (LLMs) under privacy, safety, and compliance constraints. In parallel, modern adaptation pipelines increasingly rely on parameter-efficient fine-tuning (PEFT), where a frozen backbone is paired with lightweight trainable modules such as Low-Rank Adaptation (LoRA). This survey reviews LoRA-based machine unlearning for LLMs, focusing on adapter-centric settings in which forgetting is achieved by operating primarily in PEFT module space rather than updating the full model. We formalize the LoRA unlearning problem under a frozen base model and present a taxonomy of existing methods, including gradient-based objectives, influence- and Fisher-informed updates, structural and continual-unlearning schemes, bounded-dynamics approaches, and pruning-enhanced variants. To relate these families, we introduce a unified adapter-space view that expresses seemingly disparate techniques as instances of a common update operator over adapter parameters. We further summarize evaluation protocols, metrics, and practical assumptions used in prior work, and provide a worked Task Of Fictitious Unlearning (TOFU) case study to illustrate utility–forgetting trade-offs across representative categories. Finally, we highlight open challenges such as repeated deletion requests, constrained data-access settings, federated or split-adapter deployments, and integration with auditing and compliance workflows.
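The unified adapter-space view the survey describes can be written compactly in standard LoRA notation. The display below is an illustrative formalization, not the survey's exact definition: W_0 is the frozen backbone, (A, B) the trainable low-rank factors, and the operator U is a placeholder for whichever method family performs the unlearning update.

```latex
\[
  W = W_0 + BA, \qquad
  B \in \mathbb{R}^{d \times r},\;
  A \in \mathbb{R}^{r \times k},\;
  r \ll \min(d, k)
\]
\[
  (A', B') = \mathcal{U}\!\left(A, B;\; \mathcal{D}_{\text{forget}},\,
  \mathcal{D}_{\text{retain}}\right),
  \qquad
  W' = W_0 + B'A'
\]
% Example gradient-based instance of U: ascend the loss on the forget set
% while staying close to the retain-set optimum.
\[
  (A', B') = (A, B) - \eta \,\nabla_{(A,B)}
  \bigl[\, -\mathcal{L}(\mathcal{D}_{\text{forget}})
  + \lambda\, \mathcal{L}(\mathcal{D}_{\text{retain}}) \,\bigr]
\]
```

Treating each method as a choice of U acting only on (A, B) with W_0 fixed is what allows gradient-based, Fisher-informed, and pruning-enhanced variants to be compared within one frame.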
In Progress
FedVILA: Federated LoRA-based Unlearning for Large Language Models
Delong Li, Guangsheng Yu, Xu Wang, Bin Liang et al.

Exploring the intersection of Federated Learning and Machine Unlearning, using LoRA adapters for efficient data removal.

Abstract: Large language models (LLMs) trained on decentralized data create a need for federated machine unlearning, where the influence of specific clients or authors is removed without retraining from scratch. Existing benchmarks such as TOFU target centralized author-level unlearning and do not address federated optimization or parameter-efficient adaptation. We introduce FedVILA, a LoRA-based federated unlearning framework that combines TOFU-style evaluation with importance-guided parameter updates. In our setup, a small subset of TOFU authors is assigned to a forget client, while the remaining authors are distributed over retain clients. Starting from a Llama 3.2 1B model trained via FedAvg with LoRA on all clients, FedVILA runs a federated unlearning phase where the forget client optimizes an Inverted Hinge Loss and retain clients continue language modeling. Using gradient-based importance to define a low-rank LoRA subspace, we report TOFU Truth Ratio and KS metrics, showing improved federated forgetting with largely preserved utility.
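A minimal sketch of the round structure described above, assuming a simplified inverted-hinge loss and uniform FedAvg weighting; the function names are hypothetical, and the paper's gradient-based importance and low-rank subspace selection are not shown.

```python
import torch
import torch.nn.functional as F


def inverted_hinge_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Simplified inverted-hinge-style forgetting objective: minimizing
    1 + p(y) - max_{v != y} p(v) pushes each true token's probability
    below that of its strongest competitor."""
    probs = F.softmax(logits, dim=-1)
    p_true = probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Mask the true token, then take the best remaining probability.
    masked = probs.scatter(-1, targets.unsqueeze(-1), float("-inf"))
    p_best_other = masked.max(dim=-1).values
    return (1.0 + p_true - p_best_other).mean()


def fedavg(adapter_states: list) -> dict:
    """Uniform FedAvg over client LoRA adapter state dicts; only the small
    low-rank tensors travel between server and clients, never the backbone."""
    keys = adapter_states[0].keys()
    return {k: torch.stack([s[k] for s in adapter_states]).mean(dim=0) for k in keys}


# One federated unlearning round, schematically:
#   1. the server broadcasts the current LoRA adapter state;
#   2. the forget client steps on inverted_hinge_loss over its authors' tokens;
#   3. retain clients step on the usual cross-entropy language-modeling loss;
#   4. the server aggregates the returned adapters with fedavg().
logits = torch.randn(4, 10)            # 4 positions, toy 10-token vocabulary
targets = torch.randint(0, 10, (4,))
print(inverted_hinge_loss(logits, targets))  # scalar forgetting loss
```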

03.Featured Projects

Three.js & AI
Interactive 3D Particle Christmas Tree
JavaScript, Three.js, MediaPipe, WebGL

A real-time interactive 3D particle system featuring hand gesture control via MediaPipe and a dynamic photo gallery.

Project Overview: This project explores the intersection of computer graphics and computer vision in the browser. It renders a particle-based Christmas tree with Three.js, where each particle is driven by a real-time physics simulation.

Key Features:
  • Gesture Control: Integrated MediaPipe Hand Tracking to allow users to rotate and interact with the 3D scene using hand movements in real time.
  • Dynamic Particles: Custom shader materials and particle physics for visual effects.
  • Photo Wall: An interactive floating photo gallery embedded within the 3D space.
XeLaTeX
Normal_Resume: Minimalist XeLaTeX Template
TeX, LaTeX Class Design, Typography

A clean, single-column résumé template featuring a custom `resume.cls` for consistent typography and compact layout.

Project Overview: Designing a résumé that passes Applicant Tracking System (ATS) screening while maintaining high aesthetic standards can be challenging. This project provides a highly customizable XeLaTeX template focused on readability and structure.

Key Features:
  • Custom Class: A dedicated `resume.cls` file that encapsulates complex formatting logic, keeping the main `.tex` file clean.
  • Helper Macros: Pre-defined commands for education entries, skill lists, and contact information to ensure consistent vertical rhythm.
  • Modern Aesthetics: Uses FontAwesome icons and a flat design language suitable for academic and engineering roles.

04.What's Next?

Get In Touch

I am always open to discussing new research collaborations, especially regarding LLM security and privacy. Whether you have a question or just want to say hi, my inbox is open!

Say Hello