Tutorials
Morning session
8 Sept. 2025 9:00-13:00
- Physics-Informed Machine Learning For Audio Processing
- Integrated Sensing and Communications: Signalling, Security, and Network
- Tropical Algebra and Geometry for Machine Learning and Optimization
- Automotive Radar Signal Processing
- Generative AI to Learn the Signal High-Order Statistics and to Solve Physical Layer Communications Challenges
- End-to-End Learned Image and Video Coding: Recent Advances and the Rate-Distortion-Complexity Trade-offs
- Parametrical Sparse Models: Atom Learning and Gridless Recovery Techniques
Afternoon session
8 Sept. 2025 14:00-18:00
- Signal Processing for IoT – Decision Fusion in Sensor Networks
- Robust Sound Zone Control with Optimal Filtering Methods
- Robust Optimization Methods and Applications to Transmit/Receive Beamforming in Radar and Wireless Communications
- Learning with Covariance Matrices: Foundations and Applications to Network Neuroscience
- Proximal Neural Networks: Wedding Variational Methods and Artificial Intelligence
- Sparse Arrays and Sparse Waveforms: Design, Processing, and Applications
- Towards Edge AI-native 6G with Semantic and Goal-Oriented Communications
Morning session | 8 Sept. 2025 9:00-13:00
1. Physics-Informed Machine Learning For Audio Processing
Presenters
– Mirco Pezzoli, Politecnico di Milano, Italy
– Diego Di Carlo, Center for Advanced Intelligence Project (AIP), RIKEN, Japan
– Shoichi Koyama, National Institute of Informatics, Japan
Abstract
Machine learning, particularly deep learning, has revolutionized numerous audio processing tasks, including source separation, speech enhancement, and spatial audio rendering. However, conventional deep learning approaches often struggle in scenarios where data is scarce or strong generalization is required, both of which are common challenges in acoustics and signal processing.
Physics-Informed Machine Learning (PIML) offers a powerful solution by embedding physical principles, such as wave equations and acoustic propagation models, directly into learning frameworks. This integration enhances model interpretability, improves generalization, and ensures adherence to fundamental physical laws.
This tutorial provides a comprehensive introduction to PIML, covering two main paradigms: Physics-Informed methods, where physical laws are incorporated as regularization terms, and Physics-Constrained methods, where models are explicitly structured to satisfy governing equations.
We will explore how these approaches extend classical techniques, such as numerical solvers and kernel methods, while naturally integrating with modern machine learning frameworks, including neural fields and implicit neural representations.
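To make the physics-informed paradigm concrete, here is a minimal, self-contained sketch (our illustration, not the presenters' material) in which a small neural field is fitted to a handful of synthetic 1-D pressure measurements while a Helmholtz-equation residual, evaluated at random collocation points, acts as the physics regularizer; the network size, frequency, and loss weights are illustrative assumptions.

```python
# Minimal physics-informed sketch: fit p(x) to sparse data, regularize with
# the 1-D Helmholtz residual p''(x) + k^2 p(x) = 0 at collocation points.
import math
import torch

torch.manual_seed(0)
k = 2 * math.pi * 500 / 343.0              # wavenumber at 500 Hz, c = 343 m/s

net = torch.nn.Sequential(                 # the neural field p(x)
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

x_mic = torch.linspace(0.0, 1.0, 8).reshape(-1, 1)   # sparse "microphone" grid
p_mic = torch.sin(k * x_mic)                         # synthetic pressure data
x_col = torch.rand(256, 1, requires_grad=True)       # collocation points

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    data_loss = torch.mean((net(x_mic) - p_mic) ** 2)
    p = net(x_col)
    dp = torch.autograd.grad(p.sum(), x_col, create_graph=True)[0]
    d2p = torch.autograd.grad(dp.sum(), x_col, create_graph=True)[0]
    pde_loss = torch.mean((d2p + k ** 2 * p) ** 2)   # Helmholtz residual
    (data_loss + 1e-4 * pde_loss).backward()
    opt.step()
```

The physics-constrained alternative would instead build the wave physics into the model structure itself, e.g. by expanding the field in plane-wave or spherical-harmonic solutions of the wave equation.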
The tutorial will present key applications of PIML in audio processing, with a focus on sound field estimation and reconstruction. Additionally, we will discuss its impact on active noise control, HRTF upsampling, beamforming, and localization, demonstrating the versatility of PIML in solving complex acoustic problems. The session will also highlight recent advancements that bridge physics-based modeling with deep learning, offering new perspectives on audio processing.
By attending this tutorial, participants will gain both theoretical insights and practical knowledge on implementing PIML in real-world applications. Through theoretical discussions and examples, they will learn how to design machine learning models that integrate physical constraints, leading to more robust, efficient, and interpretable solutions in audio and signal processing.
Morning session | 8 Sept. 2025 9:00-13:00
2. Integrated Sensing and Communications: Signalling, Security, and Network
Presenters
– Christos Masouros, University College London, UK
– Kaitao Meng, University College London, UK
– Kawon Han, University College London, UK
Abstract
The future global cellular infrastructure will underpin smart city applications, urban security, infrastructure monitoring, and smart mobility, among an array of emerging applications that require new network functionalities beyond communications. Key network KPIs for 6G involve Gb/s data rates, cm-level localization, μs-level latency, and Tb/Joule energy efficiency. Future networks will also need to support the UN’s Sustainable Development Goals to ensure sustainability, net-zero emissions, resilience, and inclusivity. This multifunctionality and the net-zero emissions agenda necessitate a redesign of the signals and waveforms for 6G and beyond. In this tutorial, we will first explore enabling the multi-functionality of signals and wireless transmissions as a means of hardware reuse and carbon footprint reduction. We will overview the emerging area of integrated sensing and communications (ISAC), a paradigm shift that enables both sensing and communication functionalities from a single transmission, a single use of spectrum, and ultimately a common infrastructure. Then, we will introduce new challenges of physical layer security in ISAC, including secure ISAC signalling design. Moreover, we will open a new research direction towards network-level ISAC in multi-cell scenarios, which can effectively expand both sensing and communication coverage and provide extra degrees of freedom (DoF) for realizing increased integration gains. We will discuss several new considerations for ISAC networks, including new metrics, optimization DoF, cooperation regimes, and trade-off resources. Finally, focusing on networked ISAC, we will present a distributed ISAC system design that offers cooperative sensing and communication functionality and increases ISAC security. We will explore signalling design, synchronization, and security benefits in distributed ISAC, taking the coherency of the distributed nodes into account. Together with the analysis of network-level ISAC, this unveils the performance gains and implementation feasibility of ISAC networks for future wireless systems.
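As a toy illustration of dual-functional signalling (ours, with hypothetical channel values, not the presenters' code), consider a single transmit beamformer that trades communication gain toward a user channel against sensing gain toward a radar target; maximizing the weighted sum under a unit-power constraint reduces to a principal eigenvector problem.

```python
# ISAC trade-off sketch: one beamformer w serves a comm user (channel h) and
# steers gain toward a radar target (steering vector a). Maximizing
# rho*|h^H w|^2 + (1-rho)*|a^H w|^2 s.t. ||w|| = 1 is an eigenvector problem.
import numpy as np

rng = np.random.default_rng(0)
N = 16                                          # transmit antennas
theta = np.deg2rad(30)                          # radar target direction
a = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))            # ULA steering
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # comm channel

for rho in (0.0, 0.5, 1.0):                     # sensing-only ... comm-only
    M = rho * np.outer(h, h.conj()) + (1 - rho) * np.outer(a, a.conj())
    eigval, eigvec = np.linalg.eigh(M)          # Hermitian eigendecomposition
    w = eigvec[:, -1]                           # principal eigenvector
    print(f"rho={rho:3.1f}  comm gain={abs(h.conj() @ w)**2:6.2f}  "
          f"sensing gain={abs(a.conj() @ w)**2:6.2f}")
```

Sweeping rho traces a simple sensing-communication trade-off curve, the kind of boundary the tutorial studies with far richer signalling models.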
Morning session | 8 Sept. 2025 9:00-13:00
3. Tropical Algebra and Geometry for Machine Learning and Optimization
Presenters
– Petros Maragos, National Technical University of Athens, School of E.C.E., Robotics Institute, Athena Research Center, HERON – Hellenic Robotics Center of Excellence, Athens, Greece
Abstract
Tropical geometry is a relatively recent field in mathematics and computer science. The scalar arithmetic of its analytic part is based on max-plus and min-plus operations, which are also used in nonlinear image processing, convex analysis and optimization, and nonlinear control. Tropical geometry has emerged as a successful tool in the analysis and extension of several classes of problems and systems in both traditional machine learning and deep learning. The tutorial will cover the following topics: (1) Brief introduction to tropical geometry and max-plus algebra. (2) Deep Neural Networks (DNNs) with piecewise-linear activations of ReLU and Maxout type, whose representation and discriminability are described by tropical geometry. (3) Morphological Neural Networks, which have max-plus nodes, enjoy fast training, and can be severely pruned while retaining adequate performance. We present methods for their training and pruning using tropical geometry and convex-concave optimization. (4) Neural Network Minimization: The output of DNNs with ReLU activations can be described via max-plus tropical polynomials. We present methods based on tropical zonotopes for approximating these tropical polynomials and their Newton Polytopes, to minimize networks trained for multiclass classification problems. (5) Tropical Approximation: Tropical Mappings, defined as vectors of tropical polynomials, can express several interesting optimization problems including tropical inversion, tropical regression, and tropical compression, whose potential applications include recommendation systems and reinforcement learning. We present a unified theoretical framework based on tropical matrix factorization, a complexity analysis, and solution algorithms. (6) Piecewise-linear Regression: We present optimal and low-complexity algorithms to fit tropical polynomials to multidimensional data, possibly in the presence of noise. Overall, tropical geometry and max-plus algebra provide a new set of algebraic and geometrical methods and tools for the analysis, understanding, and optimization of several classes of neural networks and other machine learning systems for image and signal data.
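For readers new to the max-plus world, the following small sketch (our example, with arbitrary values) evaluates a tropical polynomial, checks that ReLU is its simplest instance, and implements the tropical (max-plus) matrix product.

```python
# Max-plus basics: a tropical polynomial p(x) = max_i (a_i * x + c_i) is
# piecewise linear and convex; ReLU(x) = max(0, x) is the simplest case.
import numpy as np

def tropical_poly(x, slopes, coeffs):
    # "monomials" a_i * x + c_i combined with the tropical sum (max)
    return np.max(np.outer(x, slopes) + coeffs, axis=1)

x = np.linspace(-2, 2, 9)
relu = tropical_poly(x, slopes=[0, 1], coeffs=[0, 0])   # max(0, x)
print(np.allclose(relu, np.maximum(x, 0)))              # True

def maxplus_matmul(A, B):
    # tropical matrix product: (A (x) B)_{ij} = max_k (A_ik + B_kj)
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

A = np.array([[0., 3.], [2., 1.]])
print(maxplus_matmul(A, A))
```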
Morning session | 8 Sept. 2025 9:00-13:00
4. Automotive Radar Signal Processing
Presenters
– Igal Bilik, School of Electrical and Computer Engineering, Ben Gurion University of the Negev, Israel
Abstract
Autonomous driving is one of the automotive industry’s megatrends, and most car manufacturers are already introducing various levels of autonomy into commercially available vehicles. The main task of the sensing suite in autonomous vehicles is to provide the most reliable and dense information on the vehicular surroundings. Specifically, it is necessary to acquire information on drivable areas on the road and to report all objects above the road level as obstacles to be avoided. Thus, the sensors need to detect, localize, and classify a variety of typical objects, such as vehicles, pedestrians, poles, and guardrails. All autonomous vehicles are typically equipped with multiple sensors of multiple modalities: radars, cameras, and lidars. Lidars are expensive, while cameras are sensitive to illumination and weather conditions, have to be mounted behind an optically transparent surface, and do not provide direct range and velocity measurements. Radars are robust to adverse weather conditions, are insensitive to lighting variations, provide long and accurate range measurements, and can be packaged behind an optically nontransparent fascia. The uniqueness of automotive radar scenarios and operational conditions mandates the formulation and derivation of new signal-processing approaches beyond classical military radar concepts. The reformulation of vehicular radar tasks and new performance requirements provide an opportunity to develop innovative signal-processing approaches. This tutorial consists of five parts. The first part will describe active safety and autonomous driving capabilities and the resulting signal processing challenges. Next, various sensing modalities used in automotive applications will be overviewed, and their signal processing will be compared. The second part will first define the automotive radar performance requirements. Next, it will discuss propagation phenomena experienced by typical automotive radar, the associated challenges, and radar signal processing concepts that can address them. Radars and LiDARs complement optical cameras by providing depth information and additional robustness. Therefore, the third part will compare radar and LiDAR signal processing chains and emphasize their similarities, differences, and associated processing challenges. In the fourth part, the tutorial will focus on the automotive radar processing chain: a) target detection, b) range and Doppler measurement estimation, c) beamforming, including MIMO radar processing, d) target tracking, and e) target classification. A comparison between automotive and conventional military radar challenges and signal processing will be made. Leveraging the AI revolution, the tutorial will discuss and emphasize deep neural network (DNN) processing opportunities and challenges compared with conventional processing.
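As a concrete taste of the processing chain, the following sketch (textbook FMCW assumptions, all parameter values are our own) simulates the dechirped beat signal of a single point target and recovers its range and velocity with the standard 2-D FFT over fast and slow time.

```python
# FMCW range-Doppler processing for one simulated point target.
import numpy as np

c, fc, B, Tc = 3e8, 77e9, 300e6, 50e-6        # light speed, carrier, sweep, chirp
Ns, Nc = 256, 64                              # samples/chirp, chirps/frame
fs = Ns / Tc                                  # fast-time sampling rate
R, v = 40.0, 10.0                             # target range [m], velocity [m/s]

t = np.arange(Ns) / fs                        # fast time within a chirp
m = np.arange(Nc)[:, None]                    # chirp (slow-time) index
f_beat = 2 * B * R / (c * Tc)                 # range-dependent beat frequency
f_dopp = 2 * v * fc / c                       # Doppler frequency
beat = np.exp(1j * 2 * np.pi * (f_beat * t[None, :] + f_dopp * m * Tc))

rd_map = np.fft.fftshift(np.fft.fft2(beat), axes=0)   # Doppler axis shifted
dopp_bin, range_bin = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
R_est = range_bin * fs / Ns * c * Tc / (2 * B)
v_est = (dopp_bin - Nc // 2) / (Nc * Tc) * c / (2 * fc)
print(f"estimated range {R_est:.1f} m, velocity {v_est:.1f} m/s")
```

Real automotive chains add CFAR detection, angle estimation over the array dimension, tracking, and classification on top of this map.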
The following advanced signal processing topics, with an automotive radar focus, will be presented in the fifth part of the tutorial: a) mutual interference mitigation, b) multipath processing in practical urban scenarios, c) radar-camera sensor fusion, d) non-line-of-sight (NLOS) automotive radar processing in dense urban scenarios, e) cognitive automotive radar waveform design and beamforming, f) automotive synthetic aperture radar (SAR) processing, g) automotive radar near-field operation and tangential velocity estimation, h) performance bounds on automotive radar and modeling misspecification, i) automotive radar networked operation and processing, including graph signal processing (GSP), and j) automotive radar calibration approaches.
Morning session | 8 Sept. 2025 9:00-13:00
5. Generative AI to Learn the Signal High-Order Statistics and to Solve Physical Layer Communications Challenges
Presenters
– Andrea M. Tonello, University of Klagenfurt, Austria
Abstract
The tutorial deals with generative AI and ML models for signal analysis (learning) and synthesis (generation), and their application in communication systems. It highlights the pivotal role played by the ability to estimate the signal high-order statistics. We will review basic concepts about the high-order statistical description of random processes and conventional random signal generation methods. Then, we will discuss recent generative and discriminative models capable of first learning the hidden/implicit distribution and then generating synthetic signals. We will elaborate on the concept of copula and motivate the use of recently introduced segmented neural network architectures that operate in the uniform probability space.
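To illustrate the uniform probability space the abstract refers to, the following toy example (ours) applies the probability integral transform to correlated Gaussian samples: the marginals become uniform while the rank dependence, i.e. the copula, is preserved.

```python
# Probability integral transform: marginals -> U(0,1), dependence survives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cov = [[1.0, 0.8], [0.8, 1.0]]
x = rng.multivariate_normal([0, 0], cov, size=20000)   # correlated Gaussians

u = stats.norm.cdf(x)                  # PIT: each marginal becomes uniform
print("uniform marginal mean/var:", u[:, 0].mean(), u[:, 0].var())  # ~0.5, ~1/12
# rank (Spearman) correlation is invariant: the copula is preserved in u
print("Spearman before:", stats.spearmanr(x[:, 0], x[:, 1])[0])
print("Spearman after :", stats.spearmanr(u[:, 0], u[:, 1])[0])
```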
Several challenges in communications are then considered. Novel neural solutions are derived from a mathematical formulation of optimality criteria. In detail, we will address the following problems: a) synthetic channel emulation, b) mutual information estimation and the exploitation of the f-divergence, c) optimal neural decoding, d) classification for radio sensing and network diagnostics, e) end-to-end system design to approach capacity, and f) channel capacity estimation in unknown channels.
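As a sketch of problem b) above (our own toy setup, not the presenters' code), the following example trains a small critic network on correlated Gaussians and estimates their mutual information with the Donsker-Varadhan variational bound, a representative f-divergence-based estimator for which the ground truth is known in closed form here.

```python
# Neural MI estimation via the DV bound:
# I(X;Y) >= E_joint[T(x,y)] - log E_marg[exp(T(x,y'))].
import torch

torch.manual_seed(0)
rho = 0.9
true_mi = -0.5 * torch.log(torch.tensor(1 - rho ** 2))   # analytic, in nats

critic = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                             torch.nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.randn(512, 1)
    y = rho * x + (1 - rho ** 2) ** 0.5 * torch.randn(512, 1)
    t_joint = critic(torch.cat([x, y], dim=1)).squeeze()
    t_marg = critic(torch.cat([x, y[torch.randperm(512)]], dim=1)).squeeze()
    dv = t_joint.mean() - (torch.logsumexp(t_marg, dim=0)
                           - torch.log(torch.tensor(512.0)))
    opt.zero_grad()
    (-dv).backward()                    # maximize the lower bound
    opt.step()

print(f"DV estimate {dv.item():.2f} nats vs ground truth {true_mi.item():.2f}")
```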
The tutorial will substantiate the theoretical aspects with several application examples not only in the wireless communication and sensing context but also in the less popular power line communication domain, which is perhaps more challenging given the extremely complex nature of the channel and noise.
Morning session | 8 Sept. 2025 9:00-13:00
6. End-to-End Learned Image and Video Coding: Recent Advances and the Rate-Distortion-Complexity Trade-offs
Presenters
– Wen-Hsiao Peng, National Yang Ming Chiao Tung University, Taiwan
– Heming Sun, Yokohama National University, Japan
Abstract
End-to-end learned image and video coding has gained significant attention since the introduction of variational image compression in 2018. Over 200 papers have been published, with state-of-the-art (SOTA) learned image coding achieving comparable compression performance to H.266/VVC intra coding in PSNR-RGB and superior MS-SSIM results. Similarly, recent developments in learned video coding have demonstrated comparable PSNR-RGB/YUV results to H.266/VVC and its Enhanced Compression Model (ECM) under low-delay settings. These advancements have sparked significant interest from international standardization bodies such as JPEG and MPEG, and motivated the CLIC and ISCAS NNVC Grand Challenges at academic events. Despite these efforts, the rate-distortion-complexity trade-offs in neural image and video codecs remain underexplored. Key concerns include peak memory usage, multiply-accumulate operations per pixel (MAC/pixel), power consumption, cross-platform interoperability, parallelizability, etc.
This tutorial provides a comprehensive overview of recent advancements in learned image and video coding, with a focus on rate-distortion-complexity trade-offs.
- The first part highlights the recent progress in this field, covering the best achievable rate-distortion performance, standardization efforts in JPEG and MPEG, and coding results from grand challenges.
- The second part delves into the developments in end-to-end learned image coding. It begins by introducing the components of a typical learned image codec and then reviews advanced tool features (e.g., fast context models) from several notable works. Additionally, it addresses network pruning and quantization techniques tailored for learned image codecs.
- The third part is dedicated to learned video coding. It explores three major coding frameworks: residual coding, conditional coding, and conditional residual coding. This part pays particular attention to the rate-distortion-complexity trade-offs of these coding frameworks under various temporal buffering strategies (e.g. explicit, implicit, and hybrid approaches). It also discusses early attempts at quantizing end-to-end video codecs.
The tutorial concludes with insights and perspectives on future research directions.
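To make the rate-distortion objective that underlies these codecs concrete, here is a heavily simplified sketch (ours, loosely following the 2018 variational approach the abstract cites; the tiny one-layer transforms and all hyperparameters are placeholders, not any codec's actual design) of a single training step minimizing L = R + λD with an additive-uniform-noise quantization proxy and a learned Gaussian entropy model.

```python
# One rate-distortion training step: rate from a Gaussian entropy model,
# distortion as MSE, quantization replaced by uniform noise during training.
import torch

torch.manual_seed(0)
# tiny one-layer "transforms" as placeholders for real analysis/synthesis nets
analysis = torch.nn.Conv2d(3, 32, kernel_size=5, stride=4, padding=2)
synthesis = torch.nn.ConvTranspose2d(32, 3, kernel_size=5, stride=4,
                                     padding=2, output_padding=3)
log_scale = torch.nn.Parameter(torch.zeros(32))        # entropy-model scales
params = list(analysis.parameters()) + list(synthesis.parameters()) + [log_scale]
opt = torch.optim.Adam(params, lr=1e-4)
lam = 0.01                                             # rate-distortion weight

x = torch.rand(4, 3, 64, 64)                           # stand-in training batch
y = analysis(x)                                        # latent (16x16 here)
y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)    # quantization proxy
# rate: -log2 of each latent's probability mass under N(0, scale),
# integrated over the unit quantization bin
gauss = torch.distributions.Normal(0.0, log_scale.exp().view(1, -1, 1, 1))
p_bin = gauss.cdf(y_hat + 0.5) - gauss.cdf(y_hat - 0.5)
rate_bpp = -torch.log2(p_bin.clamp_min(1e-9)).sum() / (4 * 64 * 64)
mse = torch.mean((synthesis(y_hat) - x) ** 2)
loss = rate_bpp + lam * 255 ** 2 * mse                 # L = R + lambda * D
loss.backward(); opt.step()
print(f"rate {rate_bpp.item():.2f} bpp, distortion (MSE) {mse.item():.5f}")
```

The complexity side of the trade-off enters through the size of these transforms and the context model, which is exactly where MAC/pixel and memory budgets bite.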
Morning session | 8 Sept. 2025 9:00-13:00
7. Parametrical Sparse Models: Atom Learning and Gridless Recovery Techniques
Presenters
– Antonia Maria Tulino, Università degli Studi di Napoli, Federico II, Italy
– Matilde Sánchez-Fernández, Universidad Carlos III de Madrid, Spain
Abstract
Sparse parametrical models are fundamental in a wide range of applications, from wireless communications and radar imaging to medical diagnostics and autonomous systems. These models enable the recovery of structured signals with significantly fewer measurements than traditional methods, leveraging sparsity and parametric dependencies to enhance resolution and robustness. However, classical compressed sensing approaches often rely on predefined grids, leading to issues such as basis mismatch and computational inefficiencies.
This tutorial explores gridless sparse model recovery techniques, focusing on atom learning and dictionary-based parameter estimation. We provide a structured overview of the mathematical foundations of sparse parametrical models, emphasizing their formulation in high-dimensional spaces. The session will introduce attendees to methodologies such as atomic norm minimization, Bayesian inference for continuous dictionary learning, and emerging machine learning-based approaches for sparse recovery.
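As a self-contained taste of gridless recovery (our illustration; the tutorial itself treats atomic norm minimization and Bayesian continuous dictionary learning in depth), the classical ESPRIT estimator below recovers continuous-valued frequencies from the signal subspace, with no grid and hence no basis mismatch.

```python
# Gridless line-spectrum estimation with ESPRIT (rotational invariance).
import numpy as np

rng = np.random.default_rng(0)
n, K = 64, 3
f_true = np.array([0.1234, 0.2718, 0.3141])      # off-grid frequencies
x = sum(np.exp(2j * np.pi * f * np.arange(n)) for f in f_true)
x += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Hankel matrix -> signal subspace -> shift-invariance equation
L = n // 2
H = np.array([x[i:i + L] for i in range(n - L + 1)]).T
U, s, _ = np.linalg.svd(H, full_matrices=False)
Us = U[:, :K]                                    # signal subspace
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]           # least-squares rotation
f_est = np.sort(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi))
print("true:", np.sort(f_true), "estimated:", f_est)
```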
We will bridge the gap between classical compressed sensing and modern statistical inference, showcasing how recent advances in optimization and probabilistic modeling allow for enhanced estimation accuracy in real-world scenarios. The tutorial will include practical examples from massive MIMO systems, millimeter-wave communications, super-resolution imaging, and biomedical signal processing, illustrating the impact of these techniques on next-generation technology.
Designed for researchers, engineers, and PhD students with a background in signal processing, this tutorial provides both theoretical insights and practical tools to apply gridless sparse recovery methods effectively.
Afternoon session | 8 Sept. 2025 14:00-18:00
8. Signal Processing for IoT – Decision Fusion in Sensor Networks
Presenters
– Pramod K. Varshney, Syracuse University, US
– Pierluigi Salvo Rossi, Norwegian University of Science and Technology, Norway
– Domenico Ciuonzo, Università degli Studi di Napoli, Federico II, Italy
Abstract
The Internet-of-Things (IoT) paradigm is crucial for the digital transformation of modern society: a multitude of networked devices interact with the physical world and provide services through data collection, communication, processing, and control. Several applications (from healthcare to industry, from entertainment to communications & security) require the sophisticated design of tailored techniques that make the best use of information under resource constraints (e.g., energy-efficient IoT design enables long-life operation with reduced maintenance costs). This tutorial adopts a statistical signal processing perspective and focuses on the distributed version of the binary-hypothesis test, which supports several energy-efficient IoT applications concerning the robust detection of a phenomenon of interest (e.g., environmental hazard, oil/gas leakage, forest fire). The reference scenario is a wireless sensor network and a fusion center with multiple antennas collecting and processing the information. The presence of multiple antennas at both transmit/receive sides resembles a multiple-input multiple-output (MIMO) system and allows for array processing techniques providing spectral efficiency, fading mitigation, and low energy consumption. The problem is referred to as MIMO decision fusion. The tutorial covers both the design and performance assessment of fusion and is organized into three sections. The first section introduces classical decision fusion over parallel-access and multiple-access channels. Coherent and non-coherent decision fusion are discussed. The second section moves to the recent paradigm of MIMO decision fusion, based on array processing techniques. “Decode-and-Fuse” and “Decode-then-Fuse” approaches are introduced and compared in the coherent case, while the energy test is discussed in the non-coherent case. The third section explores the massive MIMO regime. Identifying analogies with multiuser detection and exploiting the favorable propagation condition, widely-linear fusion rules are presented, and energy-efficient transmission regimes are identified. The use of backscatter-based sensors and the aid of smart surfaces for long-term monitoring is also discussed, with examples applied to leak detection and localization in process engineering.
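As a minimal illustration of classical decision fusion (our toy numbers), the sketch below fuses the one-bit decisions of identical sensors with the optimal Chair-Varshney log-likelihood-ratio rule and compares it against a plain majority vote; with identical sensors the optimal rule reduces to a counting rule, but with a better threshold.

```python
# Chair-Varshney fusion of binary sensor decisions vs. majority voting.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 11, 20000
Pd, Pf = 0.6, 0.1                        # per-sensor detection / false alarm

H = rng.integers(0, 2, size=trials)                    # true hypothesis
p_one = np.where(H[:, None] == 1, Pd, Pf)              # P(u_i = 1 | H)
u = (rng.random((trials, N)) < p_one).astype(int)      # local decisions

# Chair-Varshney weights: log(Pd/Pf) for u=1, log((1-Pd)/(1-Pf)) for u=0
llr = u * np.log(Pd / Pf) + (1 - u) * np.log((1 - Pd) / (1 - Pf))
fused_cv = (llr.sum(axis=1) > 0).astype(int)           # equal priors
fused_mv = (u.sum(axis=1) > N / 2).astype(int)
print("Chair-Varshney accuracy:", (fused_cv == H).mean())
print("Majority vote accuracy :", (fused_mv == H).mean())
```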
Afternoon session | 8 Sept. 2025 14:00-18:00
9. Robust Sound Zone Control with Optimal Filtering Methods
Presenters
– Andreas Jonas Fuglsig, Department of Electronic Systems, Aalborg University, Denmark
– Jesper Rindom Jensen, Department of Electronic Systems, Aalborg University, Denmark
– Mads Græsbøll Christensen, Department of Electronic Systems, Aalborg University, Denmark
Abstract
This tutorial presents an overview of robust sound zone control formulated as an optimal filtering problem. The session begins with a foundational introduction to sound zones, explaining how multiple loudspeakers are utilized to create distinct regions: bright zones for desired audio reproduction and dark zones for silence. Attendees will gain insights into sound pressure modeling in both time and frequency domains, emphasizing the significance of acoustic impulse responses (AIRs). The tutorial further explores methods for AIR estimation, interpolation for enhanced spatial coverage, and efficient low-rank approximations enabling real-time adaptive control.
A critical aspect addressed is the management of environmental non-stationarities, such as listener movements, temperature offsets, humidity changes, and other uncertainties, through adaptive and robust control methods. Participants will learn about incorporating practical variations, including changes in temperature and hardware positions, into sound zone models using robust optimization techniques and parametric AIR adaptation. Additionally, methods accommodating loudspeaker nonlinearities will be discussed, alongside optimization techniques ensuring robustness in noisy environments.
Via the advanced variable span trade-off (VAST) filtering framework, the tutorial will highlight and relate recent and traditional approaches like acoustic contrast control, pressure matching, and hybrid methods. The VAST approach, which balances sound quality in bright zones against residual audio in dark zones, can further integrate adaptive filtering and perceptual masking to reduce the computational requirements and optimize the listener experience. Finally, the session will identify future challenges and potential enhancements using AI methodologies, making it ideal for researchers, industry professionals, and graduate students interested in the foundations of optimal sound zone control filtering methods, forming a basis for future AI-driven spatial audio processing techniques.
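To make the optimal-filtering viewpoint concrete, the sketch below (random matrices stand in for measured AIRs) computes the classical acoustic contrast control filter as the dominant generalized eigenvector trading bright-zone energy against dark-zone energy; VAST generalizes this by varying how many generalized eigenvectors are retained, interpolating toward pressure matching.

```python
# Acoustic contrast control: maximize w^H R_b w / w^H R_d w.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
L, Mb, Md = 8, 12, 12                   # loudspeakers, bright/dark mic points
Gb = rng.normal(size=(Mb, L)) + 1j * rng.normal(size=(Mb, L))  # plant, bright
Gd = rng.normal(size=(Md, L)) + 1j * rng.normal(size=(Md, L))  # plant, dark

Rb = Gb.conj().T @ Gb / Mb              # spatial correlation, bright zone
Rd = Gd.conj().T @ Gd / Md + 1e-3 * np.eye(L)   # dark zone + regularization

vals, vecs = eigh(Rb, Rd)               # generalized eigendecomposition
w = vecs[:, -1]                         # filter = dominant eigenvector
contrast = (w.conj() @ Rb @ w).real / (w.conj() @ Rd @ w).real
print(f"acoustic contrast: {10 * np.log10(contrast):.1f} dB")
```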
Afternoon session | 8 Sept. 2025 14:00-18:00
10. Robust Optimization Methods and Applications to Transmit/Receive Beamforming in Radar and Wireless Communications
Presenters
– Yongwei Huang, Guangdong University of Technology, China
– Sergiy A. Vorobyov, Aalto University, Finland
Abstract
Robust optimization methods are of crucial importance in modern signal processing. Thus, we will review some of the most significant such methods and their applications to beamforming. Modern optimization-based beamforming techniques are critical in radar and wireless communication systems for reducing power consumption, mitigating interference, and enhancing signal quality. However, even optimized beamformers can suffer performance degradation due to data uncertainties caused by target signal mismatches, sensor element errors, imperfect channel state information, and so on. Over the past two decades, robust optimization methods have emerged to address these challenges, employing advances in convex optimization to design beamformers that remain effective under various uncertainties. This tutorial presents recent advances in robust optimization techniques applied to both transmit and receive beamforming. It covers deterministic and probabilistic robust optimization. The former includes robust least squares, minimax/maximin signal-to-interference-plus-noise ratio problems, worst-case optimization, robust sparse optimization, the generalized S-procedure, and quadratic matrix inequality optimization. The latter focuses on probability-constrained and distributionally robust optimization methods. Emphasis will be placed on the analysis of optimality conditions, the design of efficient algorithms, and computational cost assessments. Participants will gain a comprehensive understanding of how robust optimization can mitigate the adverse effects of uncertainties in beamforming, making these techniques indispensable for next-generation signal processing systems.
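As a minimal numerical illustration (our toy scenario, far simpler than the worst-case designs covered in the tutorial), the sketch below shows one of the oldest robust fixes, diagonal loading of the sample covariance, protecting an MVDR beamformer against a small steering-vector mismatch that would otherwise cause signal self-cancellation.

```python
# MVDR with diagonal loading under a 2-degree steering mismatch.
import numpy as np

rng = np.random.default_rng(0)
N, snapshots = 10, 50
steer = lambda deg: np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(deg)))
a = steer(10)                            # presumed steering vector
a_true = steer(12)                       # actual source direction (mismatch)
j = steer(-40)                           # interferer

S = (np.outer(a_true, rng.normal(size=snapshots) + 0j)        # desired signal
     + 10 * np.outer(j, rng.normal(size=snapshots))           # interference
     + 0.7 * (rng.normal(size=(N, snapshots))
              + 1j * rng.normal(size=(N, snapshots))))        # noise
R_hat = S @ S.conj().T / snapshots       # sample covariance (signal present)

for gamma in (0.0, 1.0, 10.0):           # loading level
    Ri = np.linalg.inv(R_hat + gamma * np.eye(N))
    w = Ri @ a / (a.conj() @ Ri @ a)     # loaded MVDR, w^H a = 1
    print(f"gamma={gamma:5.1f}  gain toward true source: "
          f"{abs(w.conj() @ a_true) ** 2:.3f}")
```

Worst-case and probability-constrained designs can be read as principled ways of choosing such a loading level, or of replacing it with an explicit uncertainty set.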
Afternoon session | 8 Sept. 2025 14:00-18:00
11. Learning with Covariance Matrices: Foundations and Applications to Network Neuroscience
Presenters
– Saurabh Sihag, University at Albany, USA
– Gonzalo Mateos, University of Rochester, USA
– Alejandro Ribeiro, University of Pennsylvania, USA
Abstract
Covariance matrices have been the cornerstone of multivariate data analysis in a host of practical applications. The covariance matrix spectrum implicitly captures the structure of a dataset via the principal components, and said structure can be exploited via the workhorse principal component analysis (PCA) transform. The practical deployment of PCA-based learning approaches faces scrutiny due to various limitations that include a lack of reproducibility and generalizability. This tutorial will focus on a specific family of deep learning models, referred to as coVariance neural networks (VNNs), that overcome major limitations of PCA-based learning approaches and foster a novel, yet natural and insightful, convergence between the paradigms of PCA, graph signal processing (GSP), and modern deep learning architectures. The intellectual foundation of this tutorial will be established by elucidating the equivalence between PCA-driven statistical approaches and the information processing modules derived from GSP in graph neural networks (GNNs) that operate on covariance matrices (viewed as graphs), i.e., VNNs. With VNN models as the conduit for studying deep learning architectures over covariance matrices, this tutorial will first zoom into the recently developed theoretical foundations of VNNs that include (i) the stability of VNN outcomes to stochastic perturbations in the sample covariance matrices estimated from finite data, and (ii) transference of VNNs across multiscale or multiresolution datasets by exploiting the convergence between covariance matrices of different sizes in these settings. The extensions of the foundational principles of VNNs to settings with sparse covariance matrices and to spatio-temporal settings will also be discussed. Furthermore, this tutorial will elucidate how the impact of these foundational advances in VNNs permeates to principled designs and applications of learning methods across broad domains where covariance matrices emerge, with a specific focus on brain age gap prediction from neuroimaging data in computational neuroscience.
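To fix ideas, the following sketch (ours, with synthetic data) implements the coVariance filter at the core of a VNN layer: a polynomial in the sample covariance matrix applied to an input signal, i.e. a graph filter with the covariance matrix as the graph shift operator; stacking such filters with pointwise nonlinearities yields a VNN, and the filter's action on the eigenbasis of C is what ties it back to PCA.

```python
# A coVariance (graph) filter: z = h_0 x + h_1 C x + h_2 C^2 x + ...
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 16
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated features
C = np.cov(X, rowvar=False)                             # sample covariance

def covariance_filter(x, C, h):
    z, Ckx = np.zeros_like(x), x.copy()
    for hk in h:                       # accumulate h_k * C^k x
        z += hk * Ckx
        Ckx = C @ Ckx
    return z

x = rng.normal(size=d)                 # one input signal on the "graph"
z = covariance_filter(x, C, h=[0.5, 0.3, 0.1])
print(np.maximum(z, 0))                # a VNN layer adds a ReLU on top
```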
Afternoon session | 8 Sept. 2025 14:00-18:00
12. Proximal Neural Networks: Wedding Variational Methods and Artificial Intelligence
Presenters
– Audrey Repetti, Heriot-Watt University and Maxwell Institute for Mathematical Sciences, Edinburgh, UK
– Nelly Pustelnik, ENS Lyon, CNRS, Laboratoire de Physique, France
– Jean-Christophe Pesquet, CVN, CentraleSupélec, University Paris-Saclay, INRIA, France
Abstract
Imaging sciences are ubiquitous, assisting experts worldwide in addressing fundamental questions across observational sciences, biology, medicine, security, astronomy, and beyond. Since the early 2000s, signal and image processing has been significantly shaped by two major trends: sparsity-powered proximal algorithms and deep learning. The former rely on a clever integration of variational formalism and optimization schemes, while the latter hinges on intricately designed neural network architectures. Both approaches have demonstrated high performance across various applications, with deep learning often surpassing pure optimization methods in practical settings. However, for many decision-making processes, optimization methods may remain preferred because of their strong theoretical guarantees for generating reliable solutions. More recently, there has been a surge in hybrid methods combining optimization and deep learning, reaching performance levels at least comparable to traditional deep learning, while providing theoretical guarantees and interpretability. In an era where both proximal algorithms and deep learning have reached advanced maturity and complexity levels, there arises a valuable opportunity to investigate the interplay between these methodological families. This tutorial will aim to show that a unified framework can encapsulate these four important classes of methods to solve inverse imaging problems: (i) variational methods powered by proximal algorithms, (ii) end-to-end neural networks, (iii) unfolded neural networks, and (iv) plug-and-play/implicit prior methods.
Outline: The tutorial will hence consist of three main parts:
- Variational approaches and proximal splitting methods (including introduction to imaging problems) (approx. 60 min);
- An optimization view of neural networks (approx. 30 min);
- Hybrid methods across proximal methods and neural networks (approx. 75 min).
Finally, the remaining time (approx. 45 min) will be dedicated to a hands-on session to use some of the tools discussed in the above sections on imaging problems.
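In the spirit of the hands-on session, here is a minimal sketch (ours) of the proximal building block behind classes (i) and (iii): ISTA for the LASSO problem, whose unfolding, with step sizes and thresholds made learnable, yields LISTA-style unfolded networks.

```python
# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1; each iteration is one
# gradient step followed by the proximal operator of the l1 norm.
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 128
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3 * rng.normal(size=5)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)  # prox of l1

x = np.zeros(n)
for _ in range(300):                           # unfolding: one "layer" each
    x = soft(x - step * A.T @ (A @ x - y), step * lam)
print("support recovered:", np.flatnonzero(np.abs(x) > 0.1))
print("true support     :", np.sort(np.flatnonzero(x_true)))
```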
Afternoon session | 8 Sept. 2025 14:00-18:00
13. Sparse Arrays and Sparse Waveforms: Design, Processing, and Applications
Presenters
– Yimin D. Zhang, Temple University, USA
– Shunqiao Sun, The University of Alabama, USA
Abstract
Modern sensing and communication systems use large-scale antenna arrays to enable high spatial resolution and enhance directional gain. However, this increases hardware complexity. Sparse arrays offer a solution that maintains a large aperture with low system complexity. Advances in sparse array design and processing have improved direction-of-arrival (DOA) estimation, radar imaging, and adaptive beamforming, thus enhancing sensing accuracy and increasing communication capacity while keeping system complexity low. We explore sparse arrays and sparse waveforms, covering fundamental concepts, optimum design, signal processing algorithms, machine learning methods, and real-world applications. The tutorial is structured into three technical sections: sparse array design and processing, sparse waveform design and processing, and machine learning architectures and methods for sparsity-based processing.
Sparse array design and processing significantly improve DOA estimation and radar imaging performance with a reduced number of antennas. We discuss different sparse array designs and sparse array interpolation methods that maximize difference lags and reduce mutual coupling. Signal processing algorithms for sparse array DOA estimation using compressive sensing and structured matrix completion will be presented, along with sparse arrays incorporating frequency diversity and single-snapshot DOA estimation for automotive radar.
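To see why sparsity helps, the small example below (ours) builds a six-element nested array and its difference coarray: the coarray is hole-free up to lag 11, so such an array behaves like a much longer virtual uniform array and can resolve more sources than it has physical elements.

```python
# Difference coarray of a 6-element nested array.
import numpy as np

N1, N2 = 3, 3                       # nested array parameters, 6 elements total
pos = np.concatenate([np.arange(N1), np.arange(1, N2 + 1) * (N1 + 1) - 1])
print("element positions:", pos)    # [ 0  1  2  3  7 11]

diffs = np.unique(pos[:, None] - pos[None, :])   # all pairwise differences
lags = diffs[diffs >= 0]
print("nonnegative coarray lags:", lags)
print("hole-free coarray:", np.array_equal(lags, np.arange(lags.max() + 1)))
```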
Spectrum usage is becoming increasingly congested. Traditional radar waveforms like frequency-modulated continuous-wave (FMCW) signals offer high-resolution range and Doppler estimation but consume substantial bandwidth. Sparse waveform design strategically minimizes time-frequency occupancy while preserving high-resolution sensing, enabling multiple radars to share time-frequency resources cooperatively. We explore sparse step-frequency and slow-time pulse designs as well as sparsity-based range-Doppler processing, and present sparsity-oriented four-dimensional sensing applied to automotive radar considering real-world requirements and constraints.
Machine learning enhances sparsity-based processing with improved robustness, adaptability, and efficiency. We will discuss data-driven and model-based deep learning for DOA estimation using uniform and sparse arrays. We also present deep reinforcement learning for enhanced learning capability while optimizing sparse array design and exploiting quantized phase shifters.
Afternoon session | 8 Sept. 2025 14:00-18:00
14. Towards Edge AI-native 6G with Semantic and Goal-Oriented Communications
Presenters
– Paolo Di Lorenzo, Sapienza University of Rome, Italy
– Mattia Merluzzi, CEA-Leti, France
Abstract
In next-generation networks (6G), Artificial Intelligence (AI) and Machine Learning (ML) will play a pivotal role, converging with wireless communication into a single system that senses, stores, exchanges, and processes data. However, classical communication methods fall short of handling the tremendous amount of data generated by the network and needed by AI. This is due to energy, wireless capacity, and hardware constraints, whose limits are close to being reached. The goal of this tutorial is to provide a vision to cope with these limitations and promote a sober design of networks, towards a “just enough” perspective. This vision relies on the semantic and goal-oriented communication paradigm, whose general objective is to go beyond classical bit-level quality metrics and content-blind transmission, towards content- and knowledge-aware data representation and exchange, making communication networks more than a reliable pipe transporting bits. Starting from a general view of 6G with a focus on semantic and goal-oriented services, the tutorial will first address communication-computation trade-offs. The main questions to answer concern the cost-effectiveness of computing before or after transmitting, in terms of communication vs. computation overheads. The second part will then take a signal processing perspective on semantic communication, showing how relations among data can improve compression and transmission. The problem of language mismatch will also be tackled, answering how agents with heterogeneous logic can communicate, understand each other, and solve complex cooperative tasks. The tutorial integrates concepts from communication theory, AI, machine learning, and signal processing in a multidisciplinary approach, aiming to foster discussion and stimulate research in the emerging field of semantic and goal-oriented communication for efficient and effective wireless networking.
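As a back-of-the-envelope illustration of the compute-before-or-after-transmitting question (every number below is our own assumption, not from the tutorial), compare the energy of offloading a raw sensor frame against running local inference and transmitting only a compact task-relevant representation; where the crossover lies depends entirely on link rate, model efficiency, and data volume.

```python
# Toy communication-vs-computation energy comparison.
import numpy as np

bits_raw = 2e8            # raw high-resolution frame [bits] (assumed)
bits_semantic = 2e4       # task-relevant representation [bits] (assumed)
bandwidth, snr = 10e6, 10.0
rate = bandwidth * np.log2(1 + snr)        # Shannon rate [bit/s]
tx_power = 0.2                             # radio transmit power [W] (assumed)
e_local_inference = 0.5                    # on-device model energy [J] (assumed)

e_offload = tx_power * bits_raw / rate                         # send raw data
e_semantic = e_local_inference + tx_power * bits_semantic / rate
print(f"offload raw: {e_offload:.3f} J, compute-then-send: {e_semantic:.3f} J")
```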