6G-SANDBOX
ARMOR (Adversarial Resistance and Model Optimisation for Robustness for 6G Open Radio Access Network)
The ARMOR project aimed to strengthen the robustness of Artificial Intelligence (AI) models in Open Radio Access Networks (O-RAN) against adversarial threats, which pose risks to performance, privacy, and trust. The main goal was to develop and validate an adversarial testing framework capable of systematically evaluating AI model vulnerabilities across evasion, inference, inversion, and poisoning attack vectors. The targeted use case was AI-based Intrusion Detection Systems (IDS) trained on radio telemetry data for anomaly detection in O-RAN environments.
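ARMOR's actual test harness is not published in this summary; as a rough illustration of the evasion vector it evaluates, the sketch below applies a single-step FGSM perturbation to a hypothetical PyTorch IDS classifier. The model architecture, the 32-feature telemetry input, and the perturbation budget are all assumptions made for illustration, not ARMOR's configuration.

```python
# Illustrative only: ARMOR's harness is not public. This assumes a PyTorch
# binary IDS classifier over radio-telemetry feature vectors; the architecture,
# feature count, and epsilon are hypothetical.
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Craft FGSM evasion examples: one signed-gradient step of size epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy_with_logits(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, away from the true label.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Hypothetical IDS: a small MLP over 32 telemetry features.
ids = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(128, 32)                   # stand-in telemetry batch
y = torch.randint(0, 2, (128, 1)).float()  # 1 = attack traffic, 0 = benign

x_adv = fgsm_evasion(ids, x, y)
clean_acc = (((ids(x) > 0).float() == y).float().mean()).item()
adv_acc = (((ids(x_adv) > 0).float() == y).float().mean()).item()
print(f"clean accuracy: {clean_acc:.2f}  adversarial accuracy: {adv_acc:.2f}")
```

Even this one-step attack is often enough to expose a brittle decision boundary; stronger iterative attacks (e.g. PGD) only widen the gap between clean and adversarial accuracy.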
Testing revealed critical weaknesses in the IDS models, including a complete collapse of detection accuracy under small evasion perturbations, significant privacy leakage through membership inference, and class collapse under targeted poisoning. The results confirmed the necessity of adversarial testing before deployment and demonstrated reproducibility across both the UCD and Oulu testbeds.
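The membership-inference leakage reported above can be illustrated with the standard confidence-thresholding baseline (Yeom et al., 2018): an overfitted model is systematically more confident on its training data, so a simple threshold separates members from non-members. The sketch below runs that baseline on synthetic confidence scores; ARMOR's actual attack and measurements are not reproduced here.

```python
# Illustrative baseline only: this is the generic confidence-thresholding
# membership-inference attack, not ARMOR's exact method. Samples on which the
# model is unusually confident are guessed to be training-set members.
import numpy as np

def membership_inference(conf_train: np.ndarray, conf_holdout: np.ndarray,
                         threshold: float) -> float:
    """Return attack accuracy: guess 'member' when confidence > threshold."""
    guesses = np.concatenate([conf_train > threshold, conf_holdout > threshold])
    truth = np.concatenate([np.ones_like(conf_train, dtype=bool),
                            np.zeros_like(conf_holdout, dtype=bool)])
    return float((guesses == truth).mean())

# Synthetic confidences: an overfitted IDS is more confident on training data.
rng = np.random.default_rng(0)
conf_train = rng.beta(8, 1, 1000)    # skewed towards 1.0
conf_holdout = rng.beta(4, 2, 1000)  # lower, broader confidence
acc = max(membership_inference(conf_train, conf_holdout, t)
          for t in np.linspace(0.5, 0.99, 50))
print(f"best attack accuracy: {acc:.2f}  (0.50 = no leakage)")
```

Attack accuracy well above 0.50 on a balanced member/non-member set indicates that the model's confidences leak training-set membership, which is the kind of privacy failure ARMOR's testing surfaced.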
The impact is twofold: the framework provides an essential security-testing extension for 6G applications such as AI-enabled xApps, strengthening the security and robustness of AI in 6G networks, while the ARMOR team gained technical expertise, visibility, and potential avenues for commercial exploitation in AI security.
Funding: Open Call 3rd-party funding