About me
I am a Doctoral Researcher specializing in Machine Learning and security, currently pursuing a PhD at the Technical University of Darmstadt, Germany.
My research focuses on Collaborative Learning techniques, including Federated and Split Learning, and extends to detecting AI-generated content such as Text and Audio Deepfakes.
In my work, I employ mathematical approaches to develop sound and general solutions to complex challenges in Deep Learning and Collaborative Learning. I not only apply well-established techniques but also introduce original methods that I rigorously prove to be sound and generalizable.
I am passionate about researching threats to Distributed Learning, with a particular focus on poisoning attacks and their defenses. My work in this area has produced novel techniques for detecting and mitigating both targeted and untargeted attacks on Collaborative Machine Learning, and has been published at top-tier conferences, including NDSS and USENIX Security.
I am dedicated to advancing the challenging intersection between Machine Learning and Security. While Machine Learning thrives on leveraging vast amounts of data to achieve its potential, Security often imposes constraints to safeguard sensitive information. My work focuses on harmonizing these seemingly opposing objectives, striving to achieve innovative solutions that balance the need for data utility with the imperative of protecting privacy and security.
My Current Research Topics
- Split Learning: Collaborative Learning for Edge Devices, with a focus on privacy.
- DeepFake and GenAI: Detection of AI-generated content, with a focus on ensuring authenticity.
- DNN Watermarking: Protecting the intellectual property of Deep Neural Networks through robust watermarking techniques.
- Large Language Models: Exploring the capabilities, limitations, and security implications of Large Language Models in real-world applications.