Improving Security and User Privacy in Learning-Based Traffic Signal Controllers (TSC)
Abstract
21st-century transportation systems leverage intelligent learning agents and data-centric approaches to improve transportation efficiency and safety, analyzing information gathered by vehicle and roadside sensors or shared directly by users. Numerous machine learning (ML) models have been incorporated to make control decisions (e.g., traffic light control schedules) by mining mobility data sets and real-time input from vehicles via vehicle-to-vehicle and vehicle-to-infrastructure communications. However, when ML models automate decisions by leveraging external inputs, security and privacy issues surface. This project studied the security of ML systems and the data privacy concerns associated with learning-based traffic signal controllers (TSCs). Preliminary work had demonstrated that deep reinforcement learning (DRL) based TSCs are vulnerable to both white-box and black-box cyber-attacks. The research goals were to 1) quantify the impact of such security vulnerabilities on the safety and efficiency of TSC operation, and 2) develop effective detection and mitigation mechanisms against such attacks. In learning-based TSCs, vehicles share their messages with the DRL agents at the TSCs, which analyze the data and select control actions. Sharing vehicular mobility data with a network of TSCs, however, may leak private information. To address this problem, differential privacy techniques were applied to the mobility datasets to protect user privacy while preserving the effectiveness of the prediction outcomes of traffic-actuated and learning-based TSC algorithms. The approaches were evaluated in vehicular simulators using real mobility data from San Francisco and other cities in California. Accomplishing these goals makes learning-based transportation systems more secure and reliable for real-time deployment.
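To make the attack surface concrete, the following is a minimal Python sketch of how a DRL-based TSC might encode vehicle-reported messages into an observation, and how falsified messages could skew that observation. The state layout, message format, and injection strategy here are illustrative assumptions, not the specific attacks studied in the project.

import numpy as np

# Hypothetical state encoding for a DRL-based TSC: per-lane vehicle
# counts and mean speeds, built from messages reported by vehicles.
def encode_state(messages, num_lanes=4):
    counts = np.zeros(num_lanes)
    speed_sums = np.zeros(num_lanes)
    for lane, speed in messages:  # each message: (lane_id, speed in m/s)
        counts[lane] += 1
        speed_sums[lane] += speed
    # Mean speed per lane; lanes with no vehicles stay at 0.
    mean_speeds = np.divide(speed_sums, counts,
                            out=np.zeros(num_lanes), where=counts > 0)
    return np.concatenate([counts, mean_speeds])

# A falsified-data attack sketch: the adversary injects fake stopped
# vehicles on one approach so the agent over-serves that lane.
def inject_fake_vehicles(messages, lane, n_fake):
    return messages + [(lane, 0.0)] * n_fake

honest = [(0, 8.2), (1, 3.1), (1, 2.7), (3, 11.0)]
attacked = inject_fake_vehicles(honest, lane=2, n_fake=10)

print(encode_state(honest))
print(encode_state(attacked))  # lane 2 now appears heavily congested

Because the agent only sees the encoded observation, it cannot distinguish the injected congestion from real demand, which is why the project's detection and mitigation goals matter.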
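The privacy mitigation can be illustrated the same way. Below is a minimal sketch of the standard Laplace mechanism from differential privacy applied to per-lane vehicle counts before they reach the controller; the sensitivity, privacy budget, and data layout are assumed values for illustration, not the project's actual parameters or pipeline.

import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    # Add Laplace noise with scale calibrated to sensitivity/epsilon.
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Per-lane vehicle counts derived from a mobility dataset; adding or
# removing one vehicle changes a count by at most 1, so sensitivity = 1.
true_counts = np.array([12, 4, 9, 7], dtype=float)
epsilon = 0.5  # privacy budget (assumed value; smaller = stronger privacy)

noisy_counts = np.array([laplace_mechanism(c, sensitivity=1.0, epsilon=epsilon)
                         for c in true_counts])
print(noisy_counts)  # the TSC would consume these instead of raw counts

A smaller epsilon yields stronger privacy at the cost of noisier counts; evaluating that privacy/efficiency trade-off on real mobility data is the balance the abstract describes.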