SaT-CPS '25: Proceedings of the 2025 ACM Workshop on Secure and Trustworthy Cyber-physical Systems


SESSION: Keynote & Invited Talks

Trustworthy and Secure Artificial Intelligence for Transportation Systems Security and Resiliency

This paper, which summarizes the keynote address to be given jointly by the authors at the 2025 ACM CODASPY Workshop on Secure and Trustworthy Cyber-Physical Systems, discusses the application of Artificial Intelligence (AI) to securing transportation systems. In particular, it covers attacks on transportation systems, the use of federated learning, privacy aspects, fairness issues, the use of Large Language Models, the development of resilient systems, and the governance of AI systems.

Invited Paper: Smart Autonomous Cyber-Physical Systems

Cyber-Physical Systems (CPS) integrate computing, networking, and physical processes, making them critical in applications such as smart homes, industrial control systems, autonomous vehicles, smart grids, and medical devices. Ensuring CPS security is essential, as vulnerabilities can have serious consequences. CPS share key security requirements with traditional IT systems (confidentiality, integrity, and availability) but also introduce additional challenges due to real-time constraints, interactions with physical processes, and safety considerations. Standard security practices include secure design principles, redundancy, continuous monitoring, resilient control algorithms, and rigorous verification and validation procedures. However, security techniques must be tailored to specific CPS domains. Some requirements may interact with one another; for example, added security mechanisms may violate timing constraints, while a lack of security measures may compromise safety. The complexity of securing CPS is further heightened by the integration of artificial intelligence (AI), which enables greater system autonomy in tasks such as energy optimization and security monitoring. In this paper, we present results from two previous projects that focused on smart IoT systems and avionics systems, respectively. In both cases, arriving at solutions that combine many requirements is at the heart of the methodology. Based on this past work, we discuss open research directions.

SESSION: Cyber Physical Systems Security Session I

LLM-Based Fault Detection in Connected Vehicle Time-Series Data

Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains, including time series anomaly detection. Connected vehicles (CVs) can leverage this capability to help ensure safe operation. While embedding time series data prior to LLM processing has been shown to improve performance on various tasks, this technique has not been explored for fault detection within Basic Safety Messages (BSMs), on which CVs rely for effective operation. This paper addresses this gap by exploring the effectiveness of BSM time series embeddings for LLM-based BSM fault detection. The performance of this approach is benchmarked against fine-tuning and various prompting strategies, such as zero-shot and chain-of-thought reasoning. The techniques assessed are validated using a version of the TampaCV BSM dataset modified to include four types of faults: drift, stuck-at, hard-over, and trend. We demonstrate that time series embedding improves fault detection performance compared with the other techniques considered.
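As a rough illustration of the four fault types named above, the following minimal Python sketch injects them into a generic speed trace. The fault parameterization and the specific BSM fields are assumptions for illustration only, not the authors' implementation.

    import numpy as np

    def inject_fault(signal, kind, start, magnitude=5.0, slope=0.05):
        """Inject a synthetic fault into a 1-D time series (illustrative parameters only)."""
        faulty = signal.copy()
        t = np.arange(len(signal) - start, dtype=float)
        if kind == "drift":        # error grows linearly after the fault onset
            faulty[start:] += slope * t
        elif kind == "stuck-at":   # sensor freezes at its value at the onset
            faulty[start:] = signal[start]
        elif kind == "hard-over":  # sudden jump to an extreme (saturated) value
            faulty[start:] = signal.max() + magnitude
        elif kind == "trend":      # slowly accelerating deviation over the faulty tail
            faulty[start:] += magnitude * (t / max(len(t), 1)) ** 2
        return faulty

    # Example: a nominal speed trace with a drift fault starting at sample 100.
    speed = 15.0 + 0.5 * np.sin(np.linspace(0.0, 10.0, 200))
    faulty_speed = inject_fault(speed, "drift", start=100)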

Optimized Deep Learning-based Intrusion Detection System for Connected Vehicles

Future connected vehicles will enable multiple applications on the Internet of Vehicles while maintaining safe transportation. However, securing these vehicles remains a challenging concern for future deployment, especially in light of intrusion incidents. In this paper, we propose a deep learning-driven intrusion detection system (IDS) that leverages robust and diverse vehicle intrusion datasets, such as ROAD from Oak Ridge National Laboratory (ORNL). Unlike existing approaches that rely on datasets lacking sophisticated, stealthy, and complex attack scenarios, our model is evaluated on more comprehensive and challenging threats. This ensures a rigorous assessment of its effectiveness, addressing the limitations of IDS models that struggle to detect advanced intrusions in real time, which potentially poses severe safety risks to vehicles. Using the ROAD dataset, our model achieved 97.85% accuracy with an MCC of 0.6973. To handle class imbalance in the ROAD dataset, we utilized a class weighting strategy that enabled the precise detection of rare attack categories while maintaining a low average false positive rate (FPR) of 0.0036. Random search hyperparameter tuning further optimized performance, yielding an efficient prediction time of 13.26 seconds for 307,838 ROAD dataset test samples and 71.50 seconds for HCRL test samples. These results highlight the potential of our model as a lightweight IDS optimized for resource-constrained environments.
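As a hedged sketch of the class weighting strategy described above (not the authors' exact configuration), inverse-frequency weights can be computed with scikit-learn and passed to a Keras-style training loop so that rare attack categories contribute more to the loss:

    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    # Toy, heavily imbalanced labels: 0 = benign, 1..k = attack categories.
    y_train = np.array([0] * 950 + [1] * 40 + [2] * 10)

    classes = np.unique(y_train)
    weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
    class_weight = {int(c): w for c, w in zip(classes, weights)}
    print(class_weight)  # roughly {0: 0.35, 1: 8.33, 2: 33.33}

    # A Keras-style model would then be trained with, e.g.:
    # model.fit(X_train, y_train, class_weight=class_weight, epochs=..., batch_size=...)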

PendingMutent: An Authorization Framework for Preventing PendingIntent Attacks in Android-based Mobile Cyber-Physical Systems

PendingIntent (PI) is a key Android feature that allows one app to delegate tasks to another while temporarily granting its permissions. However, this can create security risks if a malicious app gains control. Between 2020 and 2023, 106 CVEs highlighted such vulnerabilities. Our analysis of 180,606 apps found PI-related security flaws in 2.21% (3,993 apps), leading to risks such as privilege escalation and unauthorized actions. Beyond mobile apps, PI plays a key role in Mobile Cyber-Physical Systems (MCPS) by enabling scheduled workflows and automating tasks. For instance, in IoT and smart home systems, PI manages temperature adjustments based on preset schedules, while in healthcare and wearable devices, it triggers workflows such as medication reminders or orthopedic exercises for patients and the elderly. In industrial control systems, PI automates machinery operations. In all these scenarios, PI delegates control to a responsible component or app, which acts on behalf of the delegating app. However, security vulnerabilities could allow attackers to disrupt these operations.

To address this, we propose PendingMutent, a framework that combines dynamic authorization with binary analysis to secure PI. It verifies receiver permissions before execution, reducing attack risks while preserving flexibility. We tested it on 85 benchmark apps (including RAICC and StickyMutent) and five open-source applications. The results show that PendingMutent effectively prevents PI-based attacks with minimal performance impact (0.0015%), achieving an F1-score of 88% for intra-app analysis and 98% for inter-app analysis.

Position Paper: Trust, Teamwork and Digital Twins

We explore how team trust psychology can help us understand human interactions with digital twins. We first discuss trust from a psychological perspective and examine trust in human-human and human-AI dyads. Here, Artificial Intelligence (AI) broadly refers to intelligent agents, including autonomous systems. Next, we discuss aspects of trust that emerge in teams. In the final section, we summarise the roles humans and digital twins have been reported to assume in their interactions, and consider how the factors of interdependence, distinct roles, and shared goals may impact the performance of work teams involving a digital twin.

SESSION: Cyber Physical Systems Security Session II

A Proof-of-Concept Implementation of a Glitch Hardware Trojan

This work presents the proof of concept and design of an information-leakage Hardware Trojan that we call a Glitch Hardware Trojan (GHT). We describe how a GHT can be created, how the leaked information can be recovered, and mitigation techniques that help protect against this type of attack. We provide power and area analysis to better understand GHTs, and we evaluate these metrics on a GHT added to an SPI hardware module that attacks via the chip select line.

Work-in-Progress: RISC-V-Dilemma-1: A Hardware Trojan Benchmark Based on Seeker's Dilemma Approach

Hardware Trojans (HTs) pose a critical security threat to modern computing systems, yet existing detection benchmarks often fail to adequately reflect real-world complexities. This work presents a novel methodology for creating a new Hardware Trojan (HT) benchmark that helps better evaluate HT detection approaches. In creating the benchmark, we follow an approach called the Seeker's Dilemma, in which the defender is uncertain whether the design under test is HT-free or HT-infected. We anticipate that this benchmark will help better test HT detection tools against modern, previously unseen security problems. The benchmark focuses on adding code-triggered HTs to a RISC-V processor, i.e., the HTs are triggered when the processor executes a specific sequence of instructions. To study the effectiveness of the proposed benchmark, we evaluate it against HW2VEC, a state-of-the-art graph neural network (GNN)-based HT detection tool. The results show that HW2VEC fails to detect any of the inserted HTs, highlighting the limitations of current detection techniques in identifying sophisticated HTs within processor architectures.

Lova: A Novel Framework for Verifying Mathematical Proofs with Incrementally Verifiable Computation

Efficiently verifying mathematical proofs and computations has been a heavily researched topic within Computer Science. In particular, even repetitive steps within a proof become much more complex and inefficient to validate as proof sizes grow. To address this problem, we suggest viewing it through the lens of Incrementally Verifiable Computation (IVC). However, many IVC methods, including the state-of-the-art Nova recursive SNARKs, require proofs to be linear and each proof step to be identical. This paper proposes Lova, a novel framework for verifying mathematical proofs end-to-end that solves these problems. In particular, our approach contributes several novelties alongside a first-of-its-kind implementation of Nova: (i) an innovative proof splicing mechanism to generate independent proof sequences, (ii) a system of linear algorithms to verify a variety of mathematical logic rules, and (iii) a novel multiplexing circuit allowing non-homogeneous proof sequences to be verified together in a single Nova proof. The resulting Lova pipeline has linear prover time, constant verification cost, dynamic and easy modification, and optional zero-knowledge privacy for efficiently validating mathematical proofs. We outline potential use cases for Lova in securing entire Cyber-Physical Systems (CPS) pipelines, as well as localized CPS in automotive and healthcare devices. Code is available at https://github.com/noelkelias/lova.

Formal Verification of Nonlinear Hybrid Systems Using Decomposition

This paper introduces a procedure for proving the safety of large-scale nonlinear hybrid systems. Critical systems are prone to vulnerabilities as their complexity increases. Formal methods for hybrid systems struggle with the computational cost of continuous dynamics and with identifying invariants for nonlinear theorem proving. We propose an approach that integrates multiple formal verification methods for hybrid systems, using reachability analysis and differential dynamic logic to model and verify the safety of critical systems. By decomposing the system into components, our approach reduces the computational load, enhancing the formal verification of critical systems against potential hazards. This approach offers a comprehensive framework for ensuring the safety of complex critical systems.
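To make the role of differential dynamic logic (dL) in this combination concrete, a per-component safety obligation typically has the shape below; the decomposition step is an illustrative sketch under assumed component contracts, not the paper's exact formulation.

    % Per-component contract in dL: from its initial region Init_i and input
    % assumptions A_i, component i remains safe along its continuous dynamics
    % x_i' = f_i(x_i, u_i) restricted to the evolution domain Q_i.
    \[
      \mathrm{Init}_i \wedge A_i \;\rightarrow\; \bigl[\, \{ x_i' = f_i(x_i, u_i) \;\&\; Q_i \} \,\bigr] \, \mathrm{Safe}_i
    \]
    % Compositional step (assumption): if each component's guarantee discharges the
    % input assumptions of the components it feeds, the local proofs combine into a
    % system-level safety claim.
    \[
      \Bigl( \bigwedge_i \mathrm{Safe}_i \Bigr) \;\rightarrow\; \mathrm{Safe}_{\mathrm{system}}
    \]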