Demo Session
12 September 2025 10:20-12:00
-
Autonomous Navigation using mmWave Radar Sensors
-
Successive Interference Cancellation Prototype for BLE and IEEE 802.15.4 in the Physical Layer
-
In-sector compressive direction-of-arrival estimation with a switched receive array
-
Beyond Search: Interactive Image Exploration and Retrieval via a Unified Similarity Graph
-
From VesselAI to TwinShip: AI-Powered Maritime Analytics and Digital Twins
-
Feature-Xniffer: On-the-fly Extraction and Compression of Network Traces for IoT Forensics
-
Visualizing Conversation Atmosphere in HSL Color Space
1. Autonomous Navigation using mmWave Radar Sensors
Mohammad Alaee-Kerahroodi, Min Bo Bo Kyaw, Bhavani Shankar Mysore R.
University of Luxembourg (SnT), Luxembourg
e-mail: min.kyaw@uni.lu
ABSTRACT
This demo presents a real-time radar-only odometry and mapping system using four Texas Instruments IWR6843ISK mmWave radar demo boards. The system estimates the motion of a mobile robot using Doppler and angle information extracted from the radar point clouds, without relying on scan matching, cameras, or inertial sensors. Velocity is estimated directly from Doppler measurements and integrated over time to reconstruct the robot trajectory and map the surrounding environment. All processing is done on a central laptop running ROS2, with real-time visualization of both odometry and mapping in RViz. The radar setup is compact, self-contained, and designed for operation in GNSS-denied or visually degraded environments. This work highlights the unique potential of mmWave radar for ego-motion estimation and mapping, with robustness to clutter, lighting changes, and occlusions. The novelty lies in the use of Doppler-only odometry from multiple synchronized TI demo boards, without traditional scan matching or SLAM back-ends.
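For illustration only, the listing below sketches the core Doppler-only step in Python: a least-squares fit of the 2D ego-velocity to the radial (Doppler) velocities reported in one radar frame, followed by simple dead reckoning. The linear model, the synthetic detections, and the frame period dt are assumptions made for the example, not the demonstrated implementation.

import numpy as np

def estimate_ego_velocity(azimuths, dopplers):
    # For a static scene, a detection at azimuth theta reports the radial
    # velocity v_r = -(vx * cos(theta) + vy * sin(theta)); stacking all
    # detections gives an overdetermined linear system in (vx, vy).
    A = -np.column_stack([np.cos(azimuths), np.sin(azimuths)])
    v, *_ = np.linalg.lstsq(A, dopplers, rcond=None)
    return v  # estimated (vx, vy) in the radar frame

def integrate_trajectory(frame_velocities, dt):
    # Dead reckoning: accumulate per-frame velocity estimates over time.
    return np.cumsum(np.asarray(frame_velocities) * dt, axis=0)

# Synthetic check: a robot moving at 0.5 m/s along x, detections at 30 azimuths.
theta = np.linspace(-np.pi / 3, np.pi / 3, 30)
doppler = -(0.5 * np.cos(theta))
print(estimate_ego_velocity(theta, doppler))  # approximately [0.5, 0.0]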
BRIEF BIO:
Min Bo Bo Kyaw is a Research Support Technician at SPARC, SnT, University of Luxembourg.
2. Successive Interference Cancellation Prototype for BLE and IEEE 802.15.4 in the Physical Layer
Diego Badillo-San-Juan, Said Alvarado-Marin, Thomas Watteyne,
Filip Maksimovic
AIO team, Inria, Paris, France
e-mail: diego.badillo-san-juan@inria.fr
ABSTRACT
This demo presents a post-processing prototype that recovers colliding Bluetooth Low Energy and IEEE 802.15.4 packets, which can interfere in Internet of Things applications due to their shared use of the 2.4 GHz ISM band.
The setup employs three software-defined radios (two transmitters, one per protocol, and a receiver) connected to a host computer.
The receiver employs a coarse-to-fine exhaustive search over frequencies and cross-correlation to implement Successive Interference Cancellation. It first demodulates the stronger signal, subtracts its reconstructed waveform from the interfered signal, and then demodulates the weaker one.
This approach enables the demodulation of both protocols despite overlaps in frequency and time. However, it assumes a direct line of sight, so that a dominant signal outweighs multipath effects. It also requires a sufficiently high signal-to-noise ratio to recover the weaker signal after subtraction, and the analogue-to-digital converter’s quantisation limits the acceptable power difference between protocols.
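The skeleton below illustrates, in Python and at complex baseband, the two pieces described above: a coarse grid search of the frequency offset via cross-correlation, and the cancellation loop itself. The demodulators, the waveform re-synthesis, and the frequency grid are placeholders standing in for the protocol-specific BLE and IEEE 802.15.4 processing, not the prototype's code.

import numpy as np

def estimate_cfo(rx, ref, fs, freq_grid):
    # Coarse grid search of the carrier-frequency offset: pick the shift of
    # the reference that maximizes cross-correlation with the capture
    # (rx and ref assumed time-aligned and of equal length).
    n = np.arange(len(ref))
    scores = [np.abs(np.vdot(ref * np.exp(2j * np.pi * f * n / fs), rx))
              for f in freq_grid]
    return freq_grid[int(np.argmax(scores))]

def successive_interference_cancellation(rx, demod_strong, remod_strong, demod_weak):
    # 1) Demodulate the stronger packet from the interfered capture.
    bits_strong = demod_strong(rx)
    # 2) Re-synthesize its waveform (with estimated timing, frequency offset
    #    and amplitude) and subtract it from the capture.
    residual = rx - remod_strong(bits_strong, rx)
    # 3) Demodulate the weaker packet from the residual.
    bits_weak = demod_weak(residual)
    return bits_strong, bits_weak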
BRIEF BIO:
Diego Badillo-San-Juan is an R&D engineer on the AIO team at Inria Paris, focusing on embedded wireless communications for the Internet of Things.
He received his degree in Electronic Engineering and a Master's degree in the same field from the Federico Santa María Technical University in Chile in 2024. Diego is particularly interested in array signal processing and its applications to acoustic signal processing.
3. In-sector compressive direction-of-arrival estimation with a switched receive array
Paul Barend Groen, Bein Frederik Jacob Kamminga, Lars Jisse Hoogland,
Boele van Schaik, Anniek Christine van der Veen, Edoardo Focante,
Hamed Masoumi, Nitin Jonathan Myers
Delft Center for Systems and Control, Delft University of Technology, The Netherlands
e-mail: E.Focante@tudelft.nl
ABSTRACT
Direction-of-arrival (DoA) estimation is a key function in 5G radios, radars, and sonars. While large arrays lead to high-resolution DoA estimates, their implementation with fully digital receive architectures incurs significant power consumption. This paper demonstrates DoA estimation with a low-power switched receive array, which consumes less power than a fully digital array. The DoA is estimated in two stages. First, the sector in which the source lies is estimated by mechanically steering a wide beam. Then, the DoA within the identified sector is estimated electronically using our switched receive array. We formulate an integer program to optimize the configuration of the switches in the receiver. Our optimized configuration results in low aliasing artifacts within the identified sector. We build a custom ultrasound receive array testbed and demonstrate DoA estimation with our optimized switch configuration.
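As a loose illustration of the second, electronic stage, the Python sketch below evaluates a matched-filter (beamforming) spectrum only over candidate directions inside the identified sector, using the positions of the active elements selected by the switch configuration. The narrowband far-field model, element positions, and grid are assumptions for the example; the paper's integer-programming switch design and estimator are not reproduced here.

import numpy as np

def in_sector_doa(y, active_positions, wavelength, sector_grid):
    # y                : snapshot from the switched (subsampled) receive array
    # active_positions : positions (metres) of the ACTIVE elements only
    # sector_grid      : candidate DoAs (radians) inside the identified sector
    k = 2 * np.pi / wavelength
    # Steering matrix: one column per candidate direction in the sector.
    A = np.exp(1j * k * np.outer(active_positions, np.sin(sector_grid)))
    spectrum = np.abs(A.conj().T @ y)
    return sector_grid[int(np.argmax(spectrum))], spectrum

# Synthetic check: source at 10 degrees, 8 active elements of a larger array.
wl = 0.008                                      # roughly 40 kHz ultrasound in air
pos = np.array([0, 1, 3, 6, 10, 13, 15, 16]) * wl / 2
y = np.exp(1j * 2 * np.pi / wl * pos * np.sin(np.deg2rad(10.0)))
grid = np.deg2rad(np.linspace(0.0, 20.0, 201))  # the identified sector
print(np.rad2deg(in_sector_doa(y, pos, wl, grid)[0]))  # close to 10 degrees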
BRIEF BIO:
Edoardo Focante (Student Member, IEEE) received the BSc degree from Università Politecnica delle Marche, Ancona, Italy, and the MSc degree from the Delft University of Technology (TU Delft), Delft, The Netherlands. He is currently a PhD student at TU Delft working on situation-aware signal processing for automotive radar in collaboration with NXP Semiconductors. His research interests include array signal processing, MIMO radars, target detection and convex optimization.
ADDITIONAL INFO:
A YouTube presentation is available at https://www.youtube.com/watch?v=-2LHcuZD1S0.
4. Beyond Search: Interactive Image Exploration and Retrieval via a Unified Similarity Graph
Kai Uwe Barthel, Konstantin Schall, Nico Hezel, Klaus Jung
Visual Computing Group, HTW Berlin, Berlin, Germany
Peter Eisert, Florian Tim Barthel
Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
e-mail: Kai-Uwe.Barthel@HTW-Berlin.de
ABSTRACT
Navigu 2.0 is the latest iteration of a system designed for fast, scalable visual exploration of large, unstructured image collections. It combines an intuitive interface with an efficient architecture optimized for real-time user interaction. The new version introduces a significant architectural change to a single, unified similarity graph. Previously, three distinct graphs were used for visual appearance, semantic meaning, and color composition of the images. An improved CLIP-based encoder generates one joint feature vector per image, integrating visual and semantic similarities. This unified embedding, used in a Dynamic Exploration Graph, enables faster and more accurate search results for both text and image queries while simplifying the backend architecture.
In response to a search query, the system retrieves a subgraph of related images and displays them as a visually sorted 2D grid. This grid is created using a novel version of the Fast Linear Assignment Sorting algorithm working solely on pairwise distances. Users can interact with this grid by dragging to explore neighboring regions of the graph or zooming to navigate between broad concepts and fine-grained similarities. These three enhancements provide a unified, efficient solution for cross-modal image search and extensive visual data exploration.
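The toy Python sketch below illustrates the query path in spirit: rank images by cosine similarity between a joint (visual plus semantic) query embedding and the image embeddings, then expand the best match through its graph neighbours to obtain a subgraph for display. The embedding model, the construction of the Dynamic Exploration Graph, and the Fast Linear Assignment Sorting layout step are outside this sketch, and the names below are illustrative.

import numpy as np

def retrieve_subgraph(query_vec, image_vecs, neighbors, k=50):
    # query_vec  : L2-normalised joint embedding of the text or image query
    # image_vecs : (N, d) matrix of L2-normalised image embeddings
    # neighbors  : dict mapping image index -> indices of its graph neighbours
    sims = image_vecs @ query_vec               # cosine similarity (unit norms)
    seed = int(np.argmax(sims))                 # best direct match
    candidates = {seed, *neighbors.get(seed, ())}
    # Return the k candidates most similar to the query, best first.
    return sorted(candidates, key=lambda i: -sims[i])[:k]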
BRIEF BIO:
Kai Uwe Barthel, Professor of Visual Computing at HTW Berlin, specializes in visual image search and automatic image understanding. In 2009, he founded Pixolution, a company that provides visual search technologies to stock agencies. His recent work includes automatic image keywording and the development of visual navigation systems.
ADDITIONAL INFO:
The live demo is available at https://www.navigu.net
5. From VesselAI to TwinShip: AI-Powered Maritime Analytics and Digital Twins
Loukas Ilias, Afroditi Blika, Ariadni Michalitsi-Psarrou, Theodoros Florakis, Giannis Xidias, Vassilis Michalakopoulos, Spiros Mouzakitis, Dimitris Askounis
DSS Lab, School of ECE, NTUA, 15780 Athens, Greece
e-mail: lilias@epu.ntua.gr
ABSTRACT
This demo showcases the VesselAI platform, which delivers advanced maritime analytics through AI-based decision support, voyage optimization, and risk forecasting by integrating diverse big data sources such as AIS, oceanographic, meteorological, and bathymetric data. Additionally, it presents key functionalities planned for the upcoming TwinShip Horizon Europe project, which builds upon VesselAI by introducing digital twin technology for real-time simulation and enhanced situational awareness. TwinShip leverages VesselAI’s platform capabilities alongside live sensor data to create digital replicas of vessels, aiming to improve safety, fuel efficiency, and operational decision making. This demonstration also highlights the transition from VesselAI’s AI-driven analytics to the innovative digital twin framework that TwinShip will offer.
BRIEF BIO:
Loukas Ilias is a postdoctoral researcher at the Decision Support Systems Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens. He completed his PhD on “Machine Learning Methods for Recognizing Brain Disorders”. Currently, Dr. Ilias is working on the TwinShip Horizon Europe project, focusing on applying AI and digital twin technologies to advance maritime operations.
6. Feature-Xniffer: On-the-fly Extraction and Compression of Network Traces for IoT Forensics
Fabio Palmese, Alessandro E. C. Redondi, Matteo Cesana
DEIB, Politecnico di Milano, Milan, Italy
e-mail: fabio.palmese@polimi.it
ABSTRACT
This demo presents Feature-Xniffer, a framework for real-time network traffic feature extraction and compression in IoT forensics scenarios. The tool operates on Wi-Fi Access Points, computing statistical features from network traffic as it flows, thus eliminating the need for storing raw PCAP files. Feature-Xniffer also implements lossy compression techniques (Scalar Quantization, Vector Quantization, and PCA) to significantly reduce storage requirements while maintaining forensic capabilities. We demonstrate its effectiveness in IoT forensics tasks such as device identification and human activity recognition.
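As a minimal illustration of the approach (the feature set, bit width, and clipping ranges below are placeholders, not the tool's actual configuration), per-flow statistics can be computed as traffic flows and stored as quantized integer codes instead of raw packets:

import numpy as np

def flow_features(pkt_sizes, pkt_times):
    # Per-flow statistics computed on the fly, so no raw PCAP needs storing.
    iat = np.diff(pkt_times) if len(pkt_times) > 1 else np.array([0.0])
    return np.array([
        len(pkt_sizes),      # packet count
        np.sum(pkt_sizes),   # total bytes
        np.mean(pkt_sizes),  # mean packet size
        np.std(pkt_sizes),   # packet-size dispersion
        np.mean(iat),        # mean inter-arrival time
        np.std(iat),         # inter-arrival jitter
    ])

def scalar_quantize(features, n_bits, lo, hi):
    # Uniform scalar quantization: each feature is clipped to [lo, hi] and
    # mapped to an n_bits integer code, trading precision for storage.
    levels = 2 ** n_bits - 1
    codes = np.round((np.clip(features, lo, hi) - lo) / (hi - lo) * levels)
    return codes.astype(np.uint16)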
BRIEF BIO:
Fabio Palmese received his PhD in Information Technology in 2025 from DEIB, Politecnico di Milano. His research interests center on network traffic analysis for security and forensics, with a focus on Internet of Things devices.
7. Visualizing Conversation Atmosphere in HSL Color Space
Akio Sashima
Research Institute on Human and Societal Augmentation, National Institute of Advanced Industrial Science and Technology (AIST), Kashiwa, Chiba, Japan
Mitsuru Kawamoto
School of Data Science and Management, Utsunomiya University, Utsunomiya, Tochigi, Japan
e-mail: mkawamoto@a.utsunomiya-u.ac.jp
ABSTRACT
When people talk in groups, the atmosphere of their conversation can change significantly depending on their interactions. However, once the conversation is over, it is hard to remember whether it was lively or boring. If we could record this atmosphere for later review, we would better understand group conversations.
In this demonstration, we present a novel system for real-time sensing and visualization of conversational atmosphere. The proposed system, which runs on a laptop PC, continuously senses the environmental sound in the demonstration space and visualizes an “atmospheric color,” a color representation of the surrounding sound environment. The system also shows the history of the atmospheric colors graphically and plots the color points on a 2D map to represent the atmospheric colors corresponding to the progression of the conversation.
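For illustration only, the Python sketch below maps one short audio frame to an HSL-style color: hue from the spectral centroid, saturation from the frame's variability, and lightness from the RMS level. This particular mapping and its scaling constants are assumptions chosen for the example, not the mapping implemented in the demonstrated system.

import colorsys
import numpy as np

def atmospheric_color(frame, sample_rate):
    # frame: a short mono audio buffer as a NumPy array, samples in [-1, 1].
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    loudness = np.sqrt(np.mean(frame ** 2))                # RMS level
    hue = float(np.clip(centroid / 4000.0, 0.0, 1.0))      # 0-4 kHz mapped to hue
    saturation = float(np.clip(np.std(frame) * 10.0, 0.0, 1.0))
    lightness = float(np.clip(loudness * 5.0, 0.05, 0.95))
    # colorsys uses HLS argument order (hue, lightness, saturation).
    rgb = colorsys.hls_to_rgb(hue, lightness, saturation)
    return (hue, saturation, lightness), rgb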
BRIEF BIO:
Mitsuru Kawamoto, Professor, School of Data Science and Management, Utsunomiya University.