SPECTRE: Computational Methods in Natural Language Processing for Automated Hardware Trojan Insertion Using Large Language Models
DOI:
https://doi.org/10.37256/cm.7120268211

Keywords:
natural language processing, large language model, prompt engineering, computational method, code generation, hardware security, stealthy insertion

Abstract
Stealthy Processor Exploitation and Concealment Through Reconfigurable Elements (SPECTRE) is a new framework, proposed in this paper, that uses computational methods from Natural Language Processing (NLP) to automate the insertion of Hardware Trojans (HTs) into complex hardware designs. SPECTRE leverages Large Language Models (LLMs) such as Generative Pre-trained Transformer (GPT)-4, Gemini-1.5-pro, and LLaMA-3-70B to synthesize HTs with little to no human input, applying sophisticated prompting methods, including role-based prompting, reflexive validation prompting, and contextual Trojan prompting, to analyze Hardware Description Language (HDL) codebases and expose vulnerabilities. This methodology alleviates the limitations of traditional machine learning-based automation, which tends to require large datasets and long training times, by combining NLP-based code generation with an inference engine that scales dynamically to heterogeneous hardware platforms, including Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs). When tested on benchmark hardware systems such as Static Random-Access Memory (SRAM), the Advanced Encryption Standard (AES-128), and the Universal Asynchronous Receiver-Transmitter (UART), SPECTRE performs strongly: GPT-4 achieves an 88.88% success rate in generating viable, stealthy HTs that evade current state-of-the-art Machine Learning (ML)-based detection tools such as hw2vec. The framework's mathematical and computational basis, premised on few-shot learning, adversarial prompting, and iterative validation algorithms, demonstrates the dual-use potential of NLP models in hardware security: they can exploit vulnerabilities within a short time, which in turn demands adequate strategies to curb vulnerability exploitation by artificial intelligence.
License
Copyright (c) 2025 Rashid Amin, et al.

This work is licensed under a Creative Commons Attribution 4.0 International License.
