Tetra: GAICIA
GAICIA stands for Behavior-Based Artificial Intelligence against Cyber Industrial Attacks. It is a Tetra project about today's industrial security and the added value that AI could bring to it.
Want to know more about the future of industrial cybersecurity with AI? Join our event on March 19.
Quick facts
Tetra project
2 years (2022 - 2024)
Collaboration with UGent
What will we do and why is this project so relevant?
We will use AI in a defensive manner and investigate to what extent it can add value in detecting attacks on industrial systems.

In addition, we will use explainable AI to give end users as much information as possible, so that they do not have to treat the AI model as a black box. The intention is to provide actionable alerts when an attack has been detected, so that the end user can use these alerts to solve the actual problem. We will therefore not only let the end user know that an attack is underway, but also why the AI model thinks an attack is underway and which attack it is most likely to be.
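To illustrate only the shape of such an alert, not the project's actual models, here is a minimal sketch in which a suspected attack is reported together with the indicators that triggered it. All feature names, attack profiles, and thresholds below are invented for the example:

```python
# Minimal sketch: score a traffic window against simple per-attack indicator
# profiles and emit an alert that says WHICH attack is suspected and WHY.
# Thresholds, feature names, and attack names are all hypothetical.

ATTACK_PROFILES = {
    "Modbus flood": {"packets_per_sec": 500, "modbus_ratio": 0.8},
    "Port scan":    {"unique_dst_ports": 100, "syn_ratio": 0.9},
}

def explain_alert(window):
    """Return (attack_name, matched_indicators) for the best-matching
    profile, or (None, []) when no profile fully matches the window."""
    best = (None, [])
    for attack, thresholds in ATTACK_PROFILES.items():
        matched = [
            f"{feat}={window.get(feat, 0)} exceeds threshold {limit}"
            for feat, limit in thresholds.items()
            if window.get(feat, 0) >= limit
        ]
        # every indicator of a profile must fire before we raise its alert
        if len(matched) == len(thresholds) and len(matched) > len(best[1]):
            best = (attack, matched)
    return best

window = {"packets_per_sec": 750, "modbus_ratio": 0.95,
          "unique_dst_ports": 4, "syn_ratio": 0.1}
attack, reasons = explain_alert(window)
print(attack)          # Modbus flood
for reason in reasons:
    print(" -", reason)
```

A real explainable-AI pipeline would derive the reasons from the model itself (e.g. feature attributions) rather than from fixed rules; the point here is only the alert format: attack name plus human-readable evidence.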
We focus exclusively on industrial security because - unlike in IT security - there are no standard policies yet. This is partly because OT devices use special protocols that are sometimes brand-specific and whose full functionality is not always disclosed by the manufacturer. Over time, however, these devices have increasingly been connected to the internet, making them publicly accessible. As a result - and because hackers increasingly realize the impact of shutting down an OT network - more and more attacks are targeting industrial networks.
How will we do this?
To avoid having to take network recordings from our end users, and to avoid carrying out attacks on their networks, we have set up a test environment in collaboration with Ghent University. This test environment consists of a demo case containing a mix of OT components that lets us configure a number of OT-specific protocols as desired. We then run attack scripts against this environment and record the network traffic before, during and after each attack. This captured network data is used to train our AI models.
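Since the attacks are scripted, each captured record can be labelled from the known start and end time of the attack that was running when it was recorded. A minimal sketch of that labelling step, with invented attack names and timestamps:

```python
from datetime import datetime

# Hypothetical schedule of scripted attacks run against the test environment:
# (label, start of attack, end of attack). Times are invented examples.
ATTACK_WINDOWS = [
    ("mitm", datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 2, 10, 15)),
    ("dos",  datetime(2023, 5, 2, 11, 0), datetime(2023, 5, 2, 11, 5)),
]

def label_record(timestamp):
    """Return the attack label whose window contains the timestamp,
    or 'benign' for traffic recorded outside every attack window."""
    for name, start, end in ATTACK_WINDOWS:
        if start <= timestamp <= end:
            return name
    return "benign"

print(label_record(datetime(2023, 5, 2, 10, 7)))   # mitm
print(label_record(datetime(2023, 5, 2, 12, 0)))   # benign
```

The labelled windows can then serve directly as supervised training data for the detection models.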
Contributors
Researchers
Kyra Van Den Eynde, AI/CS Researcher, AI Lead
Want to know more about our team?
Visit the team page