Program

The number of places at the workshops and roundtable sessions is limited, so reserve your spot now!

Pre-FEARS workshops

All information about the pre-FEARS workshops can be found on the workshop page.

2 September

14:00-17:00

Workshop: How to develop an academic poster?

Prepare for the poster session by following this workshop.

9 September

14:00-17:00

Workshop: How to pitch your research for a broader audience?

Prepare for the pitch session by following this workshop.

17 September

14:00-17:00

Workshop: Pitching yourself to companies or in the job market.

Prepare for the industry roundtable session by following this workshop.

FEARS 2025 program

12:30

Registration & Poster setup

All attendees are welcome from 12:30 to enjoy a coffee. Presenters can set up their demos and posters.

13:00

Dean's speech

FEARS will open with a speech by Patrick De Baets, dean of FEA.

13:15

Keynote by Prof. Andrés Vásquez Quintero, CTO & Co-founder of Azalea Vision

Professor Andrés Vásquez Quintero is the Chief Technology Officer and Co-founder of Azalea Vision, a MedTech spinoff from Ghent University and imec that is developing smart contact lenses. He also serves as an Associate Professor at UGent. Professor Vásquez Quintero was instrumental in guiding the Azalea project from its initial incubation at UGent/imec to its industrialization as a spinoff company. Under his leadership, the company has secured €15M in venture capital investment and €5.8M in public grants, and has grown the team from one to 17 people with a focus on manufacturability and clinical strategy. His technical expertise in areas like flexible electronics, thin films, and MEMS is supported by more than 20 A1 publications and 12 patents.

13:45

(Parallel sessions)

Poster and demo session (Part 1)

13:45-14:40 @ Corridor

Stroll through interesting posters and demonstrations of FEA research and take the time to get to know colleagues.
Submit your poster or demo
Detailed poster and demo schedule

Workshop 1: Your PhD in a CV

13:45-14:25 @ Lady's Mantle

Get to know the do's and don'ts for drafting a CV with an academic background.
Register your spot at the workshop
More info on the workshops page

Workshop 2: Funding for entrepreneurial postdocs

13:45-14:25 @ Lavender Room

An introduction to the different TechTransfer practices for postdocs and PhD researchers nearing completion.
Register your spot at the workshop
More info on the workshops page

14:30

(Parallel sessions)

Coffee break

14:45-15:15 @ Corridor & Chapter House

Take a break and enjoy a coffee while networking with your colleagues, industry professionals, and other researchers.

Roundtable session 1

14:30-15:10 @ Cellarium

Dive deep into hot topics about R&D after a PhD in engineering and architecture with leading companies at our Industry Roundtables.
More info on the roundtable page

15:15

(Parallel sessions)

Pitch session

15:15-15:55 @ Chapter House

Get inspired by a selection of two-minute pitches by FEA researchers.
Submit your pitch
Detailed pitch schedule

Workshop 3: Pursuing a doctoral degree

15:15-15:55 @ Lady's Mantle

Information session for students. We help you figure out whether a PhD is something for you and how you can start one.
Register your spot at the workshop
More info on the workshops page

Start your own Spin-off

15:15-15:55 @ Lavender Room

In this session, we will explain what a UGent spin-off is and how you start the process towards a spin-off, with room for your own questions on the topic.

16:00

(Parallel sessions)

Poster and demo session (Part 2)

16:00-16:55 @ Corridor

Stroll through interesting posters and demonstrations of FEA research and take the time to get to know colleagues.
Submit your poster or demo
Detailed poster and demo schedule

Roundtable session 2

16:00-16:40 @ Cellarium

Dive deep into hot topics about R&D after a PhD in engineering and architecture with leading companies at our Industry Roundtables.
Register your spot at the table
More info on the roundtable page

17:00

Panel Session: Beyond the Dissertation — Building a Spin-Off

How do researchers enter a spin-off trajectory? And how do they eventually start their own business? Five recent and future spin-off founders, all current or former FEA researchers, share their experiences on different aspects of this challenging but rewarding journey.

Panel members: Ahmed Selema (Drive13), Ewout Picavet (FFLOWS), Thomas Thienpont (Pyro Engineering), Ashkan Joshghani (Exoligamentz), and Alexandru-Cristian Mara (Nobl.ai)

Moderator: Simon De Corte

17:45-19:30

Award ceremony and reception

We will close FEARS 2025 with a drink and some bites, while presenting awards for remarkable contributions to the symposium.

Pitch sessions

Pitch sessions are organized in the Chapter House. The pitch times are listed below.

Elementary Charge Characterization of Quantum Dots in Solution using Laser Scanning Microscopy

Time

15:15

Authors

Sumit Sumit, Lucas Oorlynck, Marieke Eliano, Alina Myslovska, Iwan Moreels, and Filip Strubbe

Abstract

Colloidal quantum dots (QDs) are used in a wide range of applications due to their exceptional properties such as size-tunable emission, high photoluminescence quantum yield, broad absorption spectra, and narrow emission linewidths. To gain detailed insights into the correlation between electric charge, size, and optical properties, as well as the interactions between nanoparticles and their environment, it is crucial to move beyond ensemble averages and use single-particle measurement techniques. However, single-QD measurements in the native liquid environment, i.e. without fixation on a substrate, are highly challenging due to the low photon emission rate and fast Brownian motion. In this study, we present a laser scanning microscopy technique that enables precise measurement of the electrophoretic mobility and size of individual CdSe/CdS core/giant-shell QDs in a non-polar liquid environment (dodecane) using a simple microfluidic device. By employing large oscillating electric fields and fast single-nanoparticle tracking, we measure the electrophoretic mobility of each particle. In the histogram of the electrophoretic mobility obtained from many individual QDs, we observe peaks that correspond to quantized charge states, i.e. -2e, -e, 0, e, 2e, on individual QDs. Consequently, this approach delivers highly accurate charge estimates and reveals information on the hydrodynamic size. We demonstrate the technique on QDs with core diameters of 15 nm and 25 nm, and we compare the observed histograms to a theoretical model of electrostatic charging in nonpolar media. Beyond QD charge analysis, this technique holds promise for probing charge–emission relationships in QD-LEDs, as well as characterizing single-molecule binding and dynamics on nanoparticle surfaces in aqueous environments, thereby opening new avenues in nanoscience and bioanalytical chemistry. Keywords: Quantum Dots, Laser Scanning Microscopy, Electrophoresis, Electrometry
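The abstract does not spell out the relation used, but for a sphere of hydrodynamic radius $a$ moving through a liquid of viscosity $\eta$, Stokes drag links the measured electrophoretic mobility to the particle charge $q$:

$$\mu = \frac{q}{6\pi\eta a}$$

so peaks in the mobility histogram spaced by $e/(6\pi\eta a)$ correspond to single elementary charges, and the peak spacing simultaneously constrains the hydrodynamic size.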

Engineered Porosity via Polystyrene Beads in Gypsum Plasters: Experimental Validation and Microstructural Control

Time

15:18

Authors

Afsar

Abstract

Like many sectors across the European Union, the gypsum industry is actively seeking strategies to reduce its environmental footprint by lowering product density while maintaining quality and performance standards. To address this, a resource-efficient strategy is pursued that reduces gypsum and water consumption, thereby lowering energy use without sacrificing mechanical performance. In this study, we establish the relationship between the three-dimensional characteristics of air voids, such as volume fraction, size distribution, sphericity, polydispersity, packing density, and tortuosity, within the gypsum matrix and the resulting mechanical performance of the material. Standard and advanced characterization techniques, such as mercury intrusion porosimetry (MIP), X-ray micro-computed tomography (μCT), and mechanical testing, were used to investigate the gypsum core structure and its mechanical properties. These characterizations served as a benchmark for quantifying the relationship between pore structure and material properties in gypsum plaster. Additionally, a straightforward methodology was developed to produce model porous samples with controlled porosity parameters. Polystyrene (PS) beads of known diameter were added to the gypsum plaster paste and mechanical tests were carried out. The samples were compared with identical counterparts from which the polystyrene had been leached. The results indicate that polystyrene can effectively serve as a surrogate for air voids in representing porosity within the gypsum matrix.

Integrated data analysis using Bayesian probabilistic methods for plasma diagnosis in the next generation of fusion reactors

Time

15:21

Authors

Jeffrey De Rycke, Geert Verdoolaege

Abstract

Nuclear fusion offers a path to safe, clean baseload power. The most promising approach is based on the confinement of a hot hydrogen gas, called plasma, using strong magnetic fields. The next generation of plants must operate with fewer diagnostics in harsher environments than today’s experiments. Extracting maximum information from limited, noisy measurements is essential. We present a Bayesian integrated data analysis (IDA) framework for diagnostic inference in fusion devices. IDA combines heterogeneous data sources through mathematical models that link the plasma state to sensor signals, together with prior information: what we a priori think the plasma probably looks like. Assumptions are stated explicitly and updated by the data. Bayes’ theorem then yields a joint posterior probability distribution, encapsulating our state of knowledge about the plasma. The result is not a point estimate, but a distribution that keeps track of errors through the entire measurement process. A key advantage of our approach is modularity: each diagnostic contributes to the final probability distribution with a likelihood distribution. Diagnostics become drop-in components. Adding, removing or upgrading sensors is straightforward. This enables coherent processing of, for example, measurements from magnetic coils, interferometry and reflectometry for inferring the plasma current jointly with the electron density in a single step. Unlike conventional parallel data analysis, Bayesian IDA allows information to accumulate rather than compete. Furthermore, traditional deterministic solutions to the inverse problem of determining the plasma state tend to provide point estimates, whilst Bayesian IDA delivers coherent uncertainty quantification. By unifying diverse diagnostics under a single probabilistic umbrella, Bayesian IDA turns fragmented measurements into a coherent picture of the plasma, complete with quantified confidence. In short: make every measurement count. J. De Rycke acknowledges the Research Foundation - Flanders (FWO) via PhD grant 1SH6424N.
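The modular structure described above can be summarised in one line of Bayes' theorem (our notation): with plasma state $\theta$ (e.g. current and density profiles) and data $d_k$ from diagnostic $k$,

$$p(\theta \mid d_1,\dots,d_N) \propto p(\theta)\,\prod_{k=1}^{N} p(d_k \mid \theta),$$

where each diagnostic enters only through its own likelihood factor $p(d_k \mid \theta)$, which is what makes sensors drop-in components.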

Photoalignment patterning for polar domain engineering with ferroelectric nematic liquid crystals

Time

15:27

Authors

B. de Vries, M. Li, Y. Hsiao, K. Neyts, I. Nys

Abstract

Ferroelectric nematic (NF) liquid crystals are a recently discovered class of soft materials that combine the fluidity and self-organisation of conventional liquid crystals with a spontaneous polar order arising from the collective alignment of the molecular dipoles. This polarity breaks the molecular inversion symmetry characteristic of standard (paraelectric) nematic phases and couples strongly to electric fields. With optical nonlinearities comparable to crystalline materials such as lithium niobate, these novel polar liquid crystals are very promising candidates for next-generation optoelectronic and nonlinear photonic devices. However, harnessing this potential requires a detailed understanding of how these materials self-organise under surface anchoring conditions relevant to device fabrication. In this study, we examine how the nematic (N) and ferroelectric nematic (NF) phases self-assemble when subjected to geometric frustration. Specifically, the bottom surface features a periodically rotating photoalignment pattern, while the top surface imposes uniform planar alignment, directing molecular orientation at the surfaces. In the conventional N phase, these conditions produce an optically symmetric, periodic lattice of topological defects (disclination lines), marking the region of rotational mismatch in bulk molecular alignment. In contrast, the NF phase develops a distinctly different structure. The emergence of polar order introduces electrostatic interactions that drive the system into an optically asymmetric twisted state. The defect lines observed in the N phase now act as nucleation sites for polarisation reversal walls (PRWs) in the NF phase, boundaries separating domains of opposite polarity. The result is a periodic arrangement of well-defined polar domains. These findings provide new physical insight into how ferroelectric nematics self-organise and establish guiding principles for engineering polar domain structures using photoalignment. Precise control over polar domains opens new opportunities for exploiting ferroelectric nematic liquid crystals for advanced photonic technologies.

Interfacial issues in concrete: Formation and characteristics

Time

15:33

Authors

Xuan Gao

Abstract

Interfacial issues have always been a major concern in materials science, especially for composite materials like concrete. The performance of concrete is influenced by all its components, especially the weakest component, according to the Cannikin Law. At the mesoscopic scale, concrete consists of aggregate, cement paste, and the interfacial transition zone (ITZ) located between them. The ITZ is generated due to the gradient distribution of cement particles near the aggregate, and its thickness typically ranges from 20 to 50 μm. The microstructure of the ITZ is very loose and porous, which provides an easy pathway for the ingress of aggressive substances and the development of microcracks, and thus it is the first to be damaged during service. Therefore, the ITZ is often regarded as the weak link in concrete, and its formation theory and microscopic/macroscopic characteristics have attracted a lot of attention. In this work, the microstructure evolution under multiple interacting mechanisms was first modelled numerically, quantifying the formation process of the ITZ at both spatial and temporal scales. Then, a novel analytical-numerical strategy was proposed to extract both local and global characteristics of the ITZ within heterogeneous concrete. Finally, considering the interactions between multiple ITZs, a 3D point cloud method and a series of mathematical expressions are proposed to accurately obtain the volume fraction of the ITZ within concrete. This work provides a new perspective for studying interfacial issues in materials science, and also provides an important theoretical basis for improving the performance of the ITZ and concrete, supporting the long-term service of infrastructure.

DFT+TN: A new method to accurately model complex materials from first principles

Time

15:36

Authors

Simon Ganne, Daan Vrancken, Daan Verraes, Tom Braeckevelt, Lukas Devos, Laurens Vanderstraeten, Jutho Haegeman, Veronique Van Speybroeck

Abstract

Modern technological applications rely on materials with precisely controlled electronic and magnetic properties. However, predicting these properties computationally remains challenging for an important class of materials where electrons interact strongly with each other. These "strongly correlated" materials exhibit extraordinary behaviours such as superconductivity at relatively high temperatures and transitions between conducting and insulating states, making them promising candidates for next-generation technologies including quantum computers and energy-efficient electronics. Current computational methods like Density Functional Theory, whilst successful for many materials, often fail to accurately predict the properties of strongly correlated systems. This limitation significantly hinders the rational design of new functional materials. We have developed a systematic computational approach that addresses this challenge by combining multiple advanced techniques in a novel framework. Our method begins with standard electronic structure calculations and systematically constructs simplified but accurate models that capture the essential physics of electron correlations. These models are then solved using state-of-the-art numerical techniques developed for quantum many-body systems called Tensor Networks (TN). Initial applications to quasi-one-dimensional materials demonstrate substantial improvements in calculated electronic properties compared to conventional methods, with results showing excellent agreement with experimental measurements. This work establishes a robust, parameter-free computational tool for predicting the properties of strongly correlated materials, accelerating the discovery and design of novel materials with tailored functionalities for technological applications.
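As an illustration only (the abstract does not specify the model form), downfolded models of correlated electrons are typically of Hubbard type,

$$\hat{H} = -t \sum_{\langle i,j \rangle,\sigma} \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} \hat{n}_{i\uparrow}\hat{n}_{i\downarrow},$$

with hopping $t$ and on-site repulsion $U$ extracted from the electronic structure calculation; for quasi-one-dimensional materials such models are well suited to tensor-network ground-state methods.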

Applying linked data pipelines in digital water: challenges and lessons

Time

15:39

Authors

Shehabeldeen Abdelfatah, Janelcy Alferes, Pieter Colpaert, Julián Andrés Rojas, Piet Seuntjens

Abstract

The digital transformation of water management requires robust, standards-based data pipelines that transform heterogeneous sensor and administrative datasets into interoperable, queryable resources for decision support. We present a modular, water-specific framework that combines semantic modelling, declarative mapping, validation, cataloguing, streaming, and fine-grained access control to produce FAIR and linked knowledge graphs from raw surface-water data. The pipeline applies W3C and domain standards (e.g., SOSA/SSN, QUDT, GeoSPARQL, RML, SHACL, DCAT, and LDES) to (1) formalize station and sensor models, (2) map heterogeneous source formats into RDF, (3) validate and enforce governance constraints, (4) publish discoverable catalogs and SPARQL endpoints, and (5) support efficient real-time distribution via linked data event streams, which allow for efficient sensor data streaming and caching, minimizing bandwidth and processing overhead. We demonstrate the framework on surface-water observations from Waterinfo Vlaanderen, describing implementation choices (mapping using RML, SHACL rules for data quality, GraphDB hosting) and trade-offs between streaming and bulk access. Results show the approach improves discoverability, semantic interoperability, and reuse while preserving data confidentiality through graph-level access controls. We discuss operational lessons, remaining semantic gaps for circular-water use cases, and a roadmap toward publishing linked data at the source to reduce deployment overhead and accelerate cross-stakeholder data reuse.
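As a flavour of what such a pipeline produces, the following minimal sketch (our own illustration, not the authors' code; the namespace and identifiers are placeholders) builds one SOSA observation with rdflib:

```python
# Hypothetical sketch: mapping one surface-water observation to SOSA-style RDF.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("https://example.org/waterinfo/")  # placeholder namespace

g = Graph()
g.bind("sosa", SOSA)

obs = EX["obs/1"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/station-42"]))        # hypothetical IDs
g.add((obs, SOSA.observedProperty, EX["property/waterLevel"]))
g.add((obs, SOSA.hasSimpleResult, Literal("1.87", datatype=XSD.decimal)))
g.add((obs, SOSA.resultTime, Literal("2025-09-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

In the real pipeline this mapping step would be expressed declaratively in RML and validated with SHACL, as described above.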

Development of a neuromorphic platform for early diagnosis of Parkinson's disease

Time

15:42

Authors

Ir. Iben Braeckevelt, Prof. Dr. Ir. Ioulia Tzouvadaki

Abstract

Parkinson’s disease affects over 8.5 million people worldwide, with incidence having doubled over the last 25 years and estimated to double again in the coming 25 years (WHO, 2023). Parkinson’s begins with changes in proteins, mitochondria, regulatory pathways and other, as yet unknown, factors. However, symptoms become apparent only when over 50% of the dopaminergic neurons have died, and only when the symptoms become clear can a diagnosis be made. This significant delay between the early stages of Parkinson’s and clinical symptoms shows the urgent need for technologies that can detect Parkinson’s at an early stage (Breen, 2013). This project aims to sense biomarkers related to the early stages of Parkinson’s by developing a novel diagnostic platform designed to measure multiple biomarkers at the same time. The platform, currently being fabricated, incorporates memristive elements – a new class of electronic components whose resistance depends on the history of applied charges (Mohanty, 2013). This property allows for more sensitive detection of biomarker variations, with recent publications reporting unprecedented sensitivity, which is crucial for detecting the early variations in biomarkers (Tzouvadaki, 2020). Measuring biomarkers is only the first step. To translate these biomarker concentrations into clinically actionable results, the project integrates neuromorphic computing. Using the same memristors, a low-power neuromorphic platform will be implemented on-chip. This approach enables low-power artificial intelligence that produces actionable results, paving the way for earlier diagnosis, better treatment strategies, and ultimately improved quality of life for individuals at risk of Parkinson’s disease.

Transient techniques for methane abatement

Time

15:45

Authors

Olivier Vloeberghs

Abstract

Methane is the second most significant greenhouse gas after CO₂, with a global warming potential 84 times higher during the first 20 years after release (1). Converting methane to CO₂ through catalytic oxidation is a feasible mitigation strategy, since the resulting increase in atmospheric CO₂ is small compared to the overall climate benefit (1). The reaction is exothermic (ΔH = –803 kJ mol⁻¹), but the high activation barrier (435 kJ mol⁻¹) requires advanced catalysts (1). Transition-metal-exchanged zeolites, particularly Cu- and Fe-based systems, have emerged as promising catalysts. However, an incomplete understanding of the complex active sites and reaction pathways constrains their performance (2). Transient characterization techniques bring new opportunities to address these challenges. Temporal Analysis of Products (TAP) provides high time resolution and precise control over the catalyst surface by pulsing nanomoles of reactant gas under vacuum. This approach reveals how catalysts respond to defined reactant inputs at different catalyst states and is largely unexplored for methane oxidation over zeolites (3). In addition, Modulation Excitation Spectroscopy (MES) applies periodic perturbations to the system, such as the reactant gas concentration, while simultaneously measuring the spectroscopic response. Active species follow the modulation while spectator species do not, effectively separating signals from reactive intermediates and inactive background contributions (4). The combined use of TAP and MES is expected to clarify the mechanistic details of methane activation and further oxidation on Cu- and Fe-zeolites. The knowledge gained will support rational optimization of catalyst design and contribute to the development of more efficient approaches for methane abatement. References: 1) Jackson, R. B., Solomon, E. I., Canadell, J. G., Cargnello, M., & Field, C. B. (2019). Methane removal and atmospheric restoration. Nature Sustainability, 2(6), 436-438. 2) Snyder, B. E., Bols, M. L., Schoonheydt, R. A., Sels, B. F., & Solomon, E. I. (2017). Iron and copper active sites in zeolites and their correlation to metalloenzymes. Chemical Reviews, 118(5), 2718-2768. 3) Morgan, K., Maguire, N., Fushimi, R., Gleaves, J. T., Goguet, A., Harold, M. P., ... & Yablonsky, G. S. (2017). Forty years of temporal analysis of products. Catalysis Science & Technology, 7(12), 2416-2439. 4) Urakawa, A., Bürgi, T., & Baiker, A. (2008). Sensitivity enhancement and dynamic behavior analysis by modulation excitation spectroscopy: Principle and application in heterogeneous catalysis. Chemical Engineering Science, 63(20), 4902-4909.
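The overall reaction referred to in the abstract is the complete oxidation of methane:

$$\mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}, \qquad \Delta H \approx -803\ \mathrm{kJ\,mol^{-1}},$$

exothermic overall but kinetically hindered, which is why catalyst design is the bottleneck.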

AI-driven solutions for optimizing 3D orthopedic knee care

Time

15:48

Authors

Roel Huysentruyt, Aleksandra Pizurica, Hannes Vermue, Emmanuel Audenaert

Abstract

The knee is the most frequently operated joint in orthopedics. Despite the clinical success of knee surgery, approximately one in five patients remains dissatisfied following the procedure. With an aging population and increasing obesity rates, the number of procedures is expected to triple by 2040. Accurate preoperative planning and postoperative evaluation are therefore critical to improve patient outcomes. Currently, many clinical measurements on medical images are performed manually, which is time-consuming, subject to inter-observer variability, and prone to error. Moreover, conventional X-rays only provide 2D information, which may lead to inaccuracies in capturing complex 3D anatomical structures. This PhD project aims to address these challenges by developing automated tools for orthopedic imaging analysis using artificial intelligence (AI) and statistical shape modeling (SSM). The approach focuses on two key objectives: (1) automating standard clinical measurements on both X-ray and CT imaging, and (2) predicting 3D anatomical information from limited 2D projections. Data acquisition involves large-scale weightbearing CT and standing X-ray datasets, enabling robust algorithm development and validation. Initial studies in the project have already demonstrated the feasibility of this approach. Automated measurements on CT achieved accuracy comparable to expert clinicians. These results highlight the potential of AI-driven and model-based methods to reduce manual effort, increase measurement reproducibility, and enhance anatomical understanding. By building on these findings, the project aims to streamline clinical workflows and improve decision-making in knee surgery planning.

Evaluating Visual Context and Social Intelligence in Human-Robot Dialogue

Time

15:51

Authors

Ruben Janssens, Thomas Demeester, Tony Belpaeme

Abstract

Large language models (LLMs) have given social robots the ability to autonomously engage in open-domain conversations. However, they are still missing a fundamental social skill: making use of the multiple modalities that carry social interactions. While previous work has focused on task-oriented interactions that require referencing the environment, or on specific phenomena in social interactions such as dialogue breakdowns, we focus on more open-ended social conversations, where the aim is to build a bond between the user and the robot - an important skill for robots in socially assistive roles, such as in education and healthcare. This research evaluates how robots can use visual context in such social conversations. We present a system using open-source Vision-Language Models (VLMs) that processes the visual context locally and in real time, taking a holistic view that encompasses environmental awareness, adaptation to the user and their behaviour, and processing of non-verbal feedback. We then evaluate how well this system works and whether it improves the conversations - using a semi-structured conversation for comparability, in a within-subject study. Our results show a clear gap between the impressive performance of VLMs at understanding visual context and their failure to integrate it appropriately into a social conversation. Furthermore, our seemingly simple dialogue task shows clear remaining limitations of LLMs at holding social conversations. With this research, we outline current challenges for LLMs and VLMs in social dialogue and offer a measurable way forward that will allow research to break through the silicon ceiling holding back foundation models from succeeding at social human-robot dialogue.

Poster and demo sessions

Poster and demo sessions are organized in the Corridor. You can find the poster IDs and their session below. The first poster session takes place from 13:45 to 14:40. The second poster session takes place from 16:00 to 16:55.

InfoClus

Demo ID

1

Session

1

Authors

Fuyin Lai, Edith Heiter, Guillaume Bied, Jefrey Lijffijt

Abstract

Developing an understanding of high-dimensional data can be facilitated by visualizing that data using dimensionality reduction. However, the low-dimensional embeddings are often difficult to interpret. To facilitate the exploration and interpretation of low-dimensional embeddings, we introduce a new concept named partitioning with explanations. The idea is to partition the data shown through the embedding into groups, each of which is given a sparse explanation using the original high-dimensional attributes. We introduce an objective function that quantifies how much we can learn through observing the explanations of the data partitioning, using information theory, and also how complex the explanations are. Through parameterization of the complexity, we can tune the solutions towards the desired granularity. We propose InfoClus, which optimizes the partitioning and explanations jointly, through greedy search constrained over a hierarchical clustering. We conduct a qualitative and quantitative analysis of InfoClus on three data sets. We contrast the results on the Cytometry data with published manual analysis results, and compare with two other recent methods for explaining embeddings (RVX and VERA). These comparisons highlight that InfoClus has distinct advantages over existing procedures and methods. We find that InfoClus can automatically create good starting points for the analysis of dimensionality-reduction-based scatter plots.

Procedural Modeling and Visualization of Electromagnetic Field Simulations with Blender and Sionna

Demo ID

2

Session

1

Authors

Felipe Oliveira Ribas, Günter Vermeeren, Wout Joseph

Abstract

Electromagnetic field simulations are essential for designing and testing wireless communication systems, yet their accuracy often depends on the quality and flexibility of the underlying scene models. Conventional scene preparation is often static, time-consuming, and limited in scope, making it difficult to explore diverse environments. In this demonstration, we present a workflow that combines Blender and Sionna to overcome these challenges. Using Blender’s procedural modeling tools, specifically Geometry Nodes, we can generate complex three-dimensional environments in a reproducible and highly adjustable way. Randomisation of scene parameters allows for the creation of multiple scenario variations, enabling systematic testing of communication setups under different conditions. These scenes are then imported into Sionna, a simulation framework for wireless research, where electromagnetic interactions such as signal coverage, multipath propagation, and reflections are computed. Simulation results are visualised back in Blender, offering an intuitive link between environment design and signal behaviour. The approach also supports the generation of diverse datasets for training and validating artificial intelligence models in communication research. The live demo will showcase the complete workflow: how procedural scenes can be created in Blender using Geometry Nodes, how parameters can be easily adjusted to generate variations, and how custom add-ons can extend the workflow for specific cases. We will then demonstrate how these scenes are used in Sionna for electromagnetic simulations, and how the results are visualised back in Blender. This provides participants with a clear, hands-on view of a flexible and open-source pipeline for wireless communication.
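A minimal sketch of the randomisation idea, assuming Blender's standard bpy scripting API (our own illustration, not the authors' Geometry Nodes setup or add-on):

```python
# Hypothetical sketch: randomising a simple street scene in Blender via bpy.
import random
import bpy

random.seed(7)  # reproducible scene variations

# Remove default objects so the scene starts empty.
bpy.ops.object.select_all(action="SELECT")
bpy.ops.object.delete()

# Place a row of "buildings" with randomised footprint and height.
for i in range(8):
    width = random.uniform(4.0, 10.0)
    depth = random.uniform(4.0, 10.0)
    height = random.uniform(6.0, 30.0)
    bpy.ops.mesh.primitive_cube_add(size=1.0, location=(i * 12.0, 0.0, height / 2))
    bpy.context.object.scale = (width, depth, height)

# Sionna's ray tracer consumes Mitsuba XML scenes, typically exported from
# Blender via the Mitsuba Blender add-on (assumed installed).
```

Re-running with a different seed yields a new scenario variation, which is what enables the systematic testing described above.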

Tweelie: A wheel-shaped tactile sensor prototype for 3D contacts

Demo ID

4

Session

2

Authors

Thijs Van Hauwermeiren, Annelies Coene, Guillaume Crevecoeur

Abstract

Tactile sensing is an essential yet underexplored sensor modality in legged robotics. Robots can be given a sense of touch using tactile sensors. For a legged robot with tactile feet, this means being able to detect subtle ground features, localize contacts, and react adaptively to interaction forces, all of which improve legged robot perception and control. In this demo, we present Tweelie, a wheel-shaped tactile sensor that can measure and localize multiple three-dimensional contact forces at the foot sole of a legged robot. Tweelie integrates 48 miniature pressure sensors on flexible printed circuit boards, cast into a spherical elastomer skin. This design provides omnidirectional 3D sensitivity to forces across its curved surface. The sensor samples the pressure sensors at 25 Hz and is robust to repeated impacts exceeding five times the nominal operating forces. A linear regression method reconstructs the applied forces in real time from the measured distributed pressure data. Experimental validation demonstrated accurate contact localization (<4° error) and a force regression error within a few newtons. At the demo booth, visitors can interact with the sensor by touching it and applying force at one or more contact points, and see the results through visualizations on a screen. A number of visualizations can be selected: pressure signals of the sensing elements, force localization, and force regression. Visitors will be able to see how the algorithms work in real time, gain a good understanding of how tactile sensors work, and get some insights into the current state of the art in tactile sensing technology. Contact: Thijs.vanhauwermeiren@ugent.be, D2Lab, Vakgroep Elektromechanica, Systeem- en Metaalengineering
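A sketch of the linear force-regression step described above, with ordinary least squares mapping 48 pressure channels to a 3D force (shapes, names, and the stand-in calibration data are our assumptions):

```python
# Hypothetical sketch of linear force regression from distributed pressure data.
import numpy as np

rng = np.random.default_rng(0)

# Calibration set: N samples of 48 pressure channels with reference 3D forces.
N = 500
P = rng.normal(size=(N, 48))                      # pressure readings (stand-in)
W_true = rng.normal(size=(48, 3))
F = P @ W_true + 0.01 * rng.normal(size=(N, 3))   # reference forces (Fx, Fy, Fz)

# Fit the linear map once from calibration data ...
W, *_ = np.linalg.lstsq(P, F, rcond=None)

# ... then reconstruct the force in real time from each new pressure frame.
p_new = rng.normal(size=48)
f_hat = p_new @ W
print("estimated force [N]:", f_hat)
```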

RES2Go: Assessing and addressing the future of renewable energy in industrial processes and clusters

Demo ID

5

Session

2

Authors

Thijs Duvillard, Nienke Dhondt, Greet Van Eetvelde

Abstract

In early 2025, the European Commission launched the Clean Industrial Deal to strengthen the competitiveness of Europe’s industry while remaining aligned with climate neutrality objectives. Energy-intensive industries (EIIs) are central to this transition, representing one-fifth of Europe’s greenhouse gas emissions, yet they face major challenges: high energy costs since the energy crisis and uncertainties linked to new low-carbon production routes. Lowering investment risks is key to reducing energy prices, while process optimisation remains essential for cutting energy consumption across both industry and policy planning. Anticipating future energy demand is therefore critical, but assessing industrial energy use is hindered by limited and often confidential data. The AIDRES project, published by DG Energy in 2022, addresses this gap by developing blueprints for key sectors such as steel, chemicals, glass, refineries, fertilisers and cement. These blueprints describe energy and material requirements per tonne of product, which can then be projected at European scale to map demand hubs and plan supply options. Building on this approach, the RES2Go web tool provides user-friendly pathways for assessing industrial energy transitions. RES2Go offers decision support for companies, industry clusters, policymakers and investors, enabling analysis of energy demand and supply scenarios. By fostering local low-carbon energy solutions, the tool contributes to the competitiveness and resilience of Europe’s energy-intensive industries.

Career Path Recommendation using Large Language Models

Poster ID

1

Session

1

Authors

Iman Johary, Alexandru Mara, Tijl De Bie

Abstract

In the contemporary job market, efficient and precise resume analysis is essential for high-level tasks such as career path recommendations and job matching. This work introduces a novel approach to streamlining the process by harnessing the power of large language models. The primary objective is to develop a comprehensive pipeline for information extraction, data mapping, and subsequent utilization in downstream recommendation and matching tasks. Resumes are often parsed from different formats, resulting in unstructured and noisy text that poses a significant challenge for automated analysis. We aim to enhance the accuracy and efficacy of information extraction by leveraging recent developments in natural language processing, particularly large language models. The proposed pipeline begins with an information extraction module that efficiently retrieves key data points, including skills, qualifications, and experiences. Subsequently, a data mapping component maps different sections, such as job titles, locations, and degrees, to a smaller taxonomy. The extracted information is then employed in downstream tasks, such as career path recommendations and job matching. For information extraction, we propose different methods, such as prompt engineering and LoRA. For data mapping, we experimented with fine-tuning and heuristic methods to increase accuracy. We evaluate the proposed pipeline using a rich dataset provided by VDAB, encompassing resumes in Dutch, French, and English. We further enhanced this dataset by annotating it to facilitate resume analysis and information extraction.

Exploring PTFE-free solutions as sustainable wear-resistant eco-polymers

Poster ID

2

Session

2

Authors

Robbe Vergieu, Ádám Kalácska, and Patrick De Baets

Abstract

Polytetrafluoroethylene (PTFE) is a widely utilised polymer in tribological applications (e.g., seals, sliding bearings, and gears), prized for its exceptionally low friction, self-lubricating properties, and high chemical and thermal stability. However, as a per- and polyfluoroalkyl substance (PFAS), its use is a major source of environmental pollution from industrial and consumer products. Certain PFAS compounds are also associated with carcinogenic and toxic effects, which necessitate a critical and urgent search for sustainable and safer alternatives. Still, the challenge lies in identifying materials that can replicate PTFE's unique combination of favourable properties, since no direct substitute polymer material currently exists. This research project, known as SWEEP (Exploring PTFE-free solutions as sustainable wear-resistant eco-polymers), directly addresses this issue by comprehensively investigating the tribological performance of a series of potential PTFE-free substitute materials, including high-performance thermoplastics and specifically engineered composites. The study employs a multi-faceted experimental approach, combining both lab-scale and component-level testing, to analyse the friction, wear, thermal response, and surface degradation of these alternative materials under a broad range of pressure-velocity (P × V) conditions. The investigation includes a detailed analysis of wear mechanisms, changes in surface topography, and the characteristics of wear particle generation and transfer film formation. As a result, this research provides crucial and fundamental insights into the tribological behaviour and characterisation of these PTFE-free alternatives, while establishing a robust framework for material ranking and evaluation. The eventual findings will be invaluable for the effective selection of polymer sliding elements and tribotechnical components, supporting the development of sustainable, high-performance solutions for various industrial sectors. The project represents a significant step towards reducing the reliance on PFAS-based materials, thereby contributing to both environmental protection and human health.

Estimating absolute flood depth using an image retrieval method

Poster ID

3

Session

1

Authors

Guangan Chen, Brian Booth, Michiel Vlaminck, Anh Minh Truong, Michiel De Baets, Wilfried Philips, Hiep Luong

Abstract

Estimating flood depth from visual imagery is a critical task for disaster response, yet remains challenging due to the scarcity of labeled data and the complexity of real-world scenes. In this work, we introduce a novel image retrieval-based framework to estimate the localized flood depth from images of partially submerged vehicles. The proposed pipeline employs a hybrid segmentation strategy that combines Mask R-CNN and the Segment Anything Model to extract vehicle masks. To support retrieval-based estimation, we construct a large-scale synthetic dataset comprising 6,912 rendered images of a 3D vehicle model under varying viewpoints and discretized flood depths. To facilitate model validation and future benchmarking, we generated a synthetic dataset of 200 flooded vehicle scene images with ground-truth depth in centimeters, created via guided inpainting using the Stable Diffusion model. Furthermore, we designed a two-stage graphical user interface that enables users to annotate flood depth in 3D space, allowing them to perceptually align a virtual water level with the vehicle in each image. Using this tool, we create a real-world dataset of 200 images, each annotated by multiple participants with depth estimates provided in centimeter-level precision. Experiments demonstrate that our retrieval-based approach yields accurate and interpretable flood depth predictions on both synthetic and real-world datasets, offering a promising alternative to conventional classification-based methods.
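The abstract does not detail the matching criterion; a minimal mask-matching sketch of the retrieval idea (our assumption: matching binary vehicle masks by intersection-over-union against the rendered database) could look like this:

```python
# Hypothetical sketch of retrieval-based flood-depth estimation.
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def estimate_depth(query_mask, db_masks, db_depths_cm):
    """Return the flood depth of the best-matching rendered mask."""
    scores = [iou(query_mask, m) for m in db_masks]
    return db_depths_cm[int(np.argmax(scores))]

# Toy example: three "rendered" masks at known depths, one noisy query.
rng = np.random.default_rng(1)
db_masks = [rng.random((64, 64)) > 0.5 for _ in range(3)]
db_depths_cm = [10, 30, 50]
query = db_masks[1] ^ (rng.random((64, 64)) > 0.95)  # noisy copy of mask 1
print("estimated depth:", estimate_depth(query, db_masks, db_depths_cm), "cm")
```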

From Sol–Gel Chemistry to Nanofiber Membranes: A Model-Based Approach

Poster ID

4

Session

2

Authors

Sofie Verschraegen, Alice Novello, Eva Loccufier, Alessandro D. Trigilio, Klaartje De Buysser, Dagmar R. D'hooge, Karen De Clerck

Abstract

Sol-gel chemistry offers powerful tools for engineering silica-based materials with tunable properties, including organosilica nanofiber membranes that are essential for applications such as chemical sensing, solvent separation, and electrochemical barriers. Despite their potential, the molecular factors that govern electrospinnability remain insufficiently understood. In particular, the complex relationship between hydrolysis kinetics, crosslinking dynamics, and rheological behavior often forces researchers to rely on empirical trial-and-error methods. To address this challenge, we developed a predictive framework for methyltriethoxysilane (MTES)-based sol-gel systems, establishing correlations between viscosity evolution and key structural parameters, such as hydrolysis degree and the distribution of crosslinking functional groups. A comparative analysis with tetraethoxysilane (TEOS), a more crosslinkable four-arm precursor, was also conducted. Using ²⁹Si NMR spectroscopy and coupled matrix-based Monte Carlo (CMMC) modeling, we extracted Arrhenius parameters for MTES hydrolysis and condensation, which were then applied under non-isothermal conditions simulating electrospinning environments, including solvent evaporation. This allowed us to extract molecular rules defining processing conditions that distinguish between no deposition, electrospraying and electrospinning. To validate the model, we randomly selected three synthesis conditions based on its predictions and tested them experimentally. Scanning electron microscopy (SEM) imaging confirmed that the resulting morphologies matched the predicted electrospinnability outcomes, demonstrating the reliability of the molecular rules derived from the model. By identifying molecular thresholds for successful electrospinning, such as siloxane yields and group fractions, this predictive framework provides a rational alternative to experimental trial-and-error, supporting the design of advanced organosilica membranes for sustainable technologies.
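The Arrhenius parameters mentioned above enter the usual rate law

$$k(T) = A \exp\!\left(-\frac{E_a}{RT}\right),$$

which is what allows rate coefficients fitted from the ²⁹Si NMR data to be extrapolated to the non-isothermal conditions of electrospinning.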

Computational Screening of Zeolitic Imidazolate Frameworks (ZIFs) for Optical Sensing of VOCs via Refractive Index Modulation

Poster ID

5

Session

1

Authors

Aparajita Ghosh, Louis Vanduyfhuys, Guillaume Maurin, and Veronique Van Speybroeck

Abstract

Volatile Organic Compounds (VOCs) pose a serious threat to indoor air quality, contributing to both short- and long-term health issues. Identifying materials capable of selectively and sensitively detecting VOCs is, therefore, essential. Among various sensing strategies, optical detection based on refractive index (RI) changes has shown great promise for detecting VOCs at low concentrations [1]. However, accurately predicting the dielectric and optical properties of crystalline materials using ab initio methods such as periodic Density Functional Theory (DFT) is computationally intensive. To efficiently identify suitable materials from a broader selection, a computational screening approach that balances accuracy and resource demands is imperative. In this study, we explore the potential of Zeolitic Imidazolate Frameworks (ZIFs), a subclass of MOFs known for their structural tuneability and chemical diversity, for RI-based VOC sensing. We begin with a fragment-based method (FBM) developed by Treger et al. [2] that decomposes each ZIF into its inorganic node and organic linker components. Polarizabilities are calculated for each fragment using DFT, and the refractive index is estimated through the Clausius-Mossotti relation [3]. This method allows rapid screening while significantly reducing the computational load compared to full periodic DFT calculations. We apply this approach to ten structurally diverse ZIFs, systematically varying in metal composition, linker chemistry, and topology, and evaluate their RI response to three representative VOCs: acetone, toluene, and methanol. As a next step, we assess the validity of the FBM by performing full periodic DFT calculations on selected systems for benchmarking. Our findings highlight that ZIFs with smaller unit cell volumes, such as ZIF-7 and ZIF-9, both incorporating benzimidazole linkers, exhibit the most pronounced RI changes upon VOC adsorption, followed by ZIF-1. While unit cell size and linker chemistry contribute to this behaviour, the framework topology also plays a critical role. ZIFs with tightly packed, low-porosity structures tend to show larger RI shifts. This subtle influence arising from the spatial arrangement of building blocks was particularly well captured by the FBM. Moreover, the consistency in the RI change trends between FBM and periodic DFT further validates the reliability of this cost-effective approach. Overall, this workflow offers an effective and scalable strategy for discovering advanced materials for optical sensing, paving the way for next-generation VOC detection technologies.
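The Clausius-Mossotti (Lorentz-Lorenz) relation used in the screening connects the fragment polarizabilities to the refractive index $n$:

$$\frac{n^2 - 1}{n^2 + 2} = \frac{N\alpha}{3\varepsilon_0},$$

where $N$ is the number density of polarizable units and $\alpha$ their mean polarizability; VOC adsorption changes both, which is what shifts $n$.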

CSD-AFNet: Computationally Efficient Atrial Fibrillation Classification from ECGs using 2D Causal Strided Dilated Convolutions

Poster ID

6

Session

2

Authors

Lennert Bontinck, Aranka Steyaert, Hongbing Chen, Tom Dhaene, Dirk Deschrijver

Abstract

Automated analysis of electrocardiogram (ECG) signals using deep learning (DL) methods has shown substantial promise in atrial fibrillation (AFib) classification, particularly for detecting subtle indicators during normal sinus rhythm and for predicting new-onset AFib. However, many existing state-of-the-art models exhibit high computational demands, characterised by large parameter and floating-point operations (FLOPs) counts. This presents a high barrier to entry for training in budget-limited institutes and hinders the models’ deployment on medical edge devices. This paper introduces CSD-AFNet, a computationally efficient DL model specifically designed for AFib-related ECG classification tasks. CSD-AFNet achieves substantial reductions in both parameter and FLOPs counts by replacing expensive temporal convolutions with novel Feature-Preserving Pooled Convolutions (FPP-Convs). FPP-Convs enable the combination of striding and dilation without input feature loss, preserving temporal coverage while reducing the computational cost. The model further incorporates two-dimensional causal padding to prevent temporal leakage in downstream representations. Evaluation on the public CODE-15% and PTB-XL datasets demonstrates that CSD-AFNet matches the classification performance of leading benchmark models while reducing parameter count by a factor of 71 and FLOPs by a factor of 122 compared to the ResNet-10 inspired baseline. These findings support the suitability of CSD-AFNet for practical clinical scenarios, enabling training under resource constraints and efficient inference on medical edge devices, thereby facilitating scalable and cost-effective ECG-based AFib screening and monitoring.
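The exact FPP-Conv design is not given in the abstract; as a rough illustration of the ingredients it names (causal padding, striding, dilation), a conventional causal strided dilated 2D convolution in PyTorch might look like this:

```python
# Illustrative sketch only: a 2D *causal* strided+dilated convolution in the
# spirit described above. This is NOT the authors' FPP-Conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalStridedDilatedConv2d(nn.Module):
    def __init__(self, c_in, c_out, k=3, stride=2, dilation=2):
        super().__init__()
        # Pad only on the "past" side of the time axis so no output position
        # sees future samples (causality), then stride to downsample.
        self.pad_t = (k - 1) * dilation
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=(1, k),
                              stride=(1, stride), dilation=(1, dilation))

    def forward(self, x):              # x: (batch, channels, leads, time)
        x = F.pad(x, (self.pad_t, 0))  # left-pad the time dimension only
        return self.conv(x)

x = torch.randn(2, 16, 12, 1000)       # e.g. features of a 12-lead ECG
y = CausalStridedDilatedConv2d(16, 32)(x)
print(y.shape)
```

The point of the authors' FPP-Conv, as described, is to combine this striding and dilation without the feature loss a plain strided convolution incurs.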

Is Particle Strain an Underlying Mechanism of Ultrasound Neuromodulation?

Poster ID

8

Session

2

Authors

Ryo Segawa, Emmeric Tanghe, Thomas Tarnaud

Abstract

Ultrasonic neuromodulation represents a promising non-invasive approach for therapeutic applications, offering precise spatial targeting without the risks associated with invasive procedures. However, the complex multi-dimensional parameter space and limited understanding of underlying ultrasound-neuron interaction mechanisms create major hurdles for clinical translation. This study addresses these limitations through the development of strain-based computational models for ultrasound-neuron coupling. Strain is the primary mechanism of interaction between ultrasonic waves and tissue, and refers here to both the oscillating and steady deformation of neurons. The primary objective is to establish a comprehensive computational framework for quantifying ultrasonic effects caused by strain mechanisms, enabling a systematic analysis of diverse ultrasonic protocols and neural targets. The secondary objective involves implementing this framework for morphologically realistic neuronal models and determining the computational feasibility of multi-scale optimisation approaches. We implemented multi-compartmental neuronal models, focusing on an unmyelinated C-fibre model as a benchmark for understanding ultrasound-induced membrane dynamics. We then developed a strain-based modelling framework incorporating membrane deflection mechanisms and accounting for ultrasound-induced changes in membrane properties across varying conditions of pressure and frequency. Furthermore, analytical expressions were derived through mathematical expansion techniques for the membrane capacitance, membrane resistance, and axial resistance, enabling systematic exploration of parameter dependencies including pressure amplitude, frequency, and axon geometry. The framework incorporates electrical coefficients derived from cable theory and Hodgkin-Huxley formulations, allowing for a comprehensive evaluation of ultrasonic effects on action potential propagation and gating kinetics. Extensive parameter space exploration across physiologically relevant ranges was conducted to evaluate computational requirements and accuracy of different modelling approaches, with particular focus on the feasibility of multi-scale optimisation with look-up tables. Multi-scale optimisation is necessary due to the numerical stiffness caused by the microsecond ultrasonic period, which improves computational efficiency and numerical accuracy. This computational framework establishes the foundation for systematic optimisation of ultrasonic protocols across diverse neurological applications through a mechanism-specific computational approach. The strain-based framework enables progression towards morphologically-realistic neural models, incorporating detailed channel properties and anatomical reconstructions to create comprehensive ultrasound-sensitive single-neuron models.
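For reference (standard background, not specific to this work), the passive cable equation underlying the coefficients mentioned above reads

$$c_m \frac{\partial V_m}{\partial t} = \frac{1}{r_a} \frac{\partial^2 V_m}{\partial x^2} - i_{\mathrm{ion}},$$

with membrane capacitance $c_m$ and axial resistance $r_a$ per unit length and $i_{\mathrm{ion}}$ the Hodgkin-Huxley ionic current; ultrasound-induced strain enters by making $c_m$, the membrane resistance, and $r_a$ time-dependent.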

Peak Exposure Focused Assessment Method for Sustainable Indoor Air Quality Management

Poster ID

9

Session

1

Authors

Lisa Corneillie, Klaas De Jonge, Jelle Laverge

Abstract

Particulate matter (PM) is one of the most harmful indoor air pollutants, contributing to over two million deaths annually. Despite these significant health risks, current research often misrepresents exposure and overlooks the role of peak concentrations, leading to an underestimation of PM’s true health impacts. This proposed research aims to address these gaps by developing a population-scale, time-resolved indoor PM exposure model tailored to residential buildings. The model will be parametric, multi-zone, and archetype-based, enabling accurate and scalable exposure assessments across diverse housing types. These exposure estimates will then be integrated into a burden of disease framework using the Disability-Adjusted Life Years (DALY) metric. By comparing health impacts under various peak exposure scenarios, the project seeks to improve the evaluation and optimization of indoor air quality (IAQ) strategies across European residential environments. Additionally, the research will explore the energy implications of these IAQ strategies to support integrated, sustainable solutions. By focusing on peak concentrations and improving exposure representation, this research intends to contribute to the ongoing paradigm shift toward health-based assessments in the built environment and related legislation. Ultimately, it aims to support the development of holistic, health-centred IAQ strategies that are both energy-efficient and effective across a wide range of residential settings.
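The DALY metric referred to above is the standard sum of mortality and morbidity burdens,

$$\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD},$$

years of life lost plus years lived with disability, which lets different peak-exposure scenarios be compared on a single health scale.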

Development of large-scale journal bearing test setup

Poster ID

11

Session

1

Authors

Á. Kalácska, K. Van Minnebruggen, K. Van Hoey, T. Depraetere, P. De Baets

Abstract

Due to the rising demand for green energy, wind turbines are often installed in offshore locations where the wind is stronger. Floating platforms are secured to the seabed using mooring lines and anchors. Self-lubricating journal bearings are installed to accommodate the platform’s oscillating movement caused by waves and wind and to mitigate the potential environmental impact of leaking lubricants. These compact bearings, with high-load, low-velocity characteristics and maintenance-free operation, serve as a replacement for traditional roller bearings. Other offshore uses for these plain bearings include hydropower plants and the submerged stern tubes of cargo ships. To optimize the efficiency and reliability of these self-lubricating bearings, it is crucial to gain insight into their tribological characteristics. Small-scale testing may not provide representative results as the size effect is not taken into account. Due to the complex interactions (running-in/out, sliding layer formation, edge effects, roughness changes, ...), large-scale tests are needed to describe real-life behaviour as accurately as possible. Therefore, a full-scale test rig was designed to accurately replicate the enormous mooring line forces and the complex tribological interactions encountered in real-life applications. In the basic working principle of the developed machine, a shrink disc secures the lever arm, which transfers the displacement piston’s motion to the shaft. During the test, the displacement of both pistons, the radial load, the load applied on the movement lever arm, and the resulting torque load are measured. Displacement laser sensors are used to measure the movement of the shaft with respect to the test bearing to qualitatively estimate the wear over time. The coefficient of friction (CoF) of these self-lubricating bearings can be obtained through two methods. The initial test results were evaluated and an asymmetric CoF behaviour was found. The static CoF was reported as the maximum value of friction during the first 25% of each stroke, averaged over the forward and backward stroke within each cycle. The dynamic CoF was reported as the average value of friction during the middle 33% of each stroke, averaged over the forward and backward stroke within each cycle.
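A sketch of the two CoF definitions quoted above (signal names and the toy data are our assumptions):

```python
# Static CoF = max over the first 25% of each stroke; dynamic CoF = mean over
# the middle ~33%; each averaged over the forward and backward strokes.
import numpy as np

def stroke_cofs(cof_trace: np.ndarray):
    """cof_trace: instantaneous friction coefficient over one stroke."""
    n = len(cof_trace)
    static = np.max(np.abs(cof_trace[: n // 4]))   # first 25% of the stroke
    mid = cof_trace[n // 3 : 2 * n // 3]           # middle ~33% of the stroke
    dynamic = np.mean(np.abs(mid))
    return static, dynamic

forward = np.abs(np.sin(np.linspace(0, np.pi, 200))) * 0.12  # toy friction trace
backward = forward[::-1]
per_stroke = [stroke_cofs(s) for s in (forward, backward)]
static_cof, dynamic_cof = np.mean(per_stroke, axis=0)        # cycle average
print(f"static CoF {static_cof:.3f}, dynamic CoF {dynamic_cof:.3f}")
```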

Measurement of renal tumor size on medical imaging: the impact of 2D versus 3D methods on the T-stage

Poster ID

12

Session

2

Authors

Saar Vermijs, Joris Vangeneugden, Pieter De Visschere, Pieter De Backer, Karel Decaestecker, Charles Van Praet, Charlotte Debbaut

Abstract

Introduction
When resecting a localized kidney tumour, surgeons should understand the patient-specific complexity. T-staging helps by classifying the tumour based on its maximal diameter: T1a (≤4 cm), T1b (>4-7 cm), T2a (>7-10 cm) or T2b (>10 cm). These thresholds were based on post-op measurements of resected tumours. For treatment planning, however, diameters are measured on pre-op medical images (e.g. CT). Radiologists use three manual methods, varying in accuracy and speed. In this study, these pre-op methods are examined and compared to post-op staging, and three new methods are proposed.

Materials and Methods
Six measurement methods were tested to calculate the tumour diameter in 173 patients. 3D models were segmented on CT for automatic analysis with PyRadiomics. Paired t-tests were used to compare all methods. To identify the method correlating best with surgical complexity, operation time (OT) and estimated blood loss (EBL) were compared (T1a vs. T1b) using unpaired t-tests. Effect sizes (Cohen’s d) were used to check their relevance.

Results
Comparing T-stages to the 2D oblique method (the most accurate method currently used), the 3D diameter method upstages 4.0% of the cases, while all other methods downstage several cases (3D volume: 16.8%; 2D ellipsoid: 13.9%; 2D axial: 5.2%; 2D perpendicular: 4.0%). The result of the 3D diameter method is on average 16.9% larger than the post-op diameter. All methods significantly differ between T1a and T1b regarding surgical complexity. For EBL, 2D axial has the largest effect size (d = 0.75), followed by 3D volume (d = 0.73). For OT, the post-op diameter has the largest effect size (d = 1.35), followed by 3D volume (d = 1.26).

Discussion and Conclusions
This investigation is part of a larger-scale project to develop a pre-op planning platform for (partial) nephrectomies. This study shows that the measurement method influences the T-staging, which may directly impact the preferred treatment plan. Hence, surgeons should be aware of the method used.
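The T-stage thresholds quoted in the abstract translate directly into code (diameters in cm; the function name is ours):

```python
def t_stage(max_diameter_cm: float) -> str:
    """Classify a renal tumour by its maximal diameter (T1a/T1b/T2a/T2b)."""
    if max_diameter_cm <= 4:
        return "T1a"
    if max_diameter_cm <= 7:
        return "T1b"
    if max_diameter_cm <= 10:
        return "T2a"
    return "T2b"

print(t_stage(3.2), t_stage(8.5))  # -> T1a T2a
```

This also makes concrete why the measurement method matters: a few millimetres of difference near a threshold can upstage or downstage a case.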

What is the impact of occupant behaviour in life cycle assessment?

Poster ID

13

Session

1

Authors

Hannelore Scheipers, Eline Himpe, Yanaika Decorte, Stijn Van de Putte, Arnold Janssens

Abstract

The renovation rate of the building stock must increase to reach the 2050 climate goals. One of the methods to reach this goal is to shift the focus to district renovations. To this end, the Horizon Europe Mission Project Neutralpath has been initiated. This project investigates how PCEDs (Positive and Clean Energy Districts) can be achieved in 5 European cities, in an inclusive manner. PCEDs are districts that produce more energy than they consume, and only use ‘clean’, i.e. renewable, energy. The design of these PCEDs is done through participative and human-centred principles. This research is part of the Neutralpath project and investigates optimal renovation scenarios in existing districts in a holistic manner, with the Muide Meulestede district in Ghent as a case study. Energy use, the environmental impact over the full life cycle, and costs will be considered. As a sub-study, the impact of integrating dynamic simulations in LCA is investigated, with a focus on the impact of occupant behaviour. The study finds that both comfort preference levels and insulation strategies (choice of insulation + finishing materials) significantly affect the total environmental impact of renovation scenarios, and thus the optimal renovation scenario. Further, comparing dynamic simulation results with those of the Belgian LCA tool TOTEM reveals large discrepancies due to the simplified energy use calculations used in TOTEM.

Understanding the entanglement between diffusion and reaction by probing the mobility of ketene in chabazites

Poster ID

14

Session

2

Authors

Wei Chen, Pieter Cnudde, Veronique Van Speybroeck

Abstract

In zeolite catalysis, diffusion and reaction are generally viewed as separate processes that independently affect catalytic performance, due to the significant variation in timescales for diffusion and reaction. Nevertheless, this study reveals that reaction and diffusion can be intertwined, a phenomenon hitherto unexplored. In particular, we highlight this complex relationship for ketene intermediates in chabazite topologies, where the diffusion properties of ketene are notably affected by the reactivity with Brønsted acid sites (BAS) and guest molecules present in the zeolite pores. Ketene is an important intermediate in zeolite-catalyzed methanol-to-hydrocarbons and COx-to-hydrocarbons conversion, and its diffusion and reaction behavior directly impacts the catalytic performance. Our ab initio molecular dynamics simulations reveal that ketene diffusion is significantly facilitated by hydrogen bonding interactions with BAS during diffusion through the 8-ring windows of chabazite, and that ketene can also readily react with other guest species along the diffusion pathway. This entanglement between reaction and diffusion can be attributed to the high activity of ketene, resulting in a strong competition between reaction and diffusion, which cannot be viewed as two independent processes. Therefore, our findings concerning the complex interconnection between diffusion and reaction not only contribute to the fundamental understanding of ketene chemistry in chabazite but also have important consequences for other fields of catalysis involving highly active intermediates.

Polonium-containing Molecules in MYRRHA

Poster ID

15

Session

1

Authors

Joachim Scheerlinck, Stefaan Cottenier

Abstract

The MYRRHA reactor, a next-generation nuclear fission installation, will inevitably generate the isotope ²¹⁰Po through nuclear reactions involving the lead and bismuth nuclei present in its liquid metal coolant. In combination with oxygen and hydrogen impurities, polonium may form volatile molecular compounds such as (Bi)Po(Pb)OₓHᵧ. Due to the radiotoxicity of ²¹⁰Po, capturing these species before release is of high importance for reactor safety. However, experimental identification of such molecules is limited, making predictive computational studies an essential tool. In this work, we investigate the structure, stability and occurrence of polonium-containing molecules under MYRRHA-relevant conditions. Using a combination of advanced quantum chemical methods and machine learning, we will derive dissociation profiles and estimate relative molecular occurrences. Traditional approaches such as CASSCF and CASPT2 provide a starting point but face challenges with geometry-dependent active space selection and intruder states. To overcome these, we employ multistate NEVPT2, which avoids intruder states, together with DMRG-based workflows that enable the treatment of larger active spaces. Furthermore, matrix product state state-interaction (MPSSI) techniques are used to incorporate spin–orbit coupling effects, which are crucial for heavy-element systems. By combining accurate electronic structure calculations with neural network models trained on high-level data, our study aims to provide reliable predictions of the most likely polonium-containing species. These results are relevant for designing chemistry-specific filtration systems and contribute to a deeper understanding of the radiochemistry in advanced nuclear reactors.

Integrating Digital Elevation Models for Enhanced Flood Segmentation from UAV Imagery

Poster ID

16

Session

2

Authors

Michiel De Baets, Brian Booth, Michiel Vlaminck, Hiep Luong

Abstract

Semantic segmentation models are crucial for flood detection from UAVs, yet their performance is often limited by the complex appearance of flood water. Shadows, reflections, suspended particles, ground colour and partial occlusions introduce large visual variability. This work describes a novel approach to enhance flood water detection by integrating Digital Elevation Models (DEMs) to post-process the output of classical deep learning methods for semantic segmentation. Our methodology is built upon two core concepts that leverage terrain data to enhance model performance. First, we utilize DEMs to partition the landscape into distinct watershed basins, which serve as local domains for analysis. This approach allows us to assume a single, flat water level within each basin, a geophysically sound simplification that traditional techniques overlook. By aggregating the activation values from a classical RGB convolutional neural network (CNN) within these basins, we create a more refined representation of the potential flood extent. Second, we introduce a custom cost function designed to maximize the model's support for a specific flood level within each basin. This function optimizes the water level so that the summed output activations of the CNN align with the physically defined flood boundaries. We demonstrate the effectiveness of our approach on a novel dataset of four drone videos from flooding events in Flanders. The results show a significant increase in Intersection over Union (IoU) scores, with gains of over 20% compared to conventional deep learning models. These findings highlight the critical role of incorporating physical domain knowledge into deep learning for flood mapping.
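A minimal sketch of the basin-wise post-processing idea (Python/NumPy; the inputs and the simplified support score are assumptions, not the authors' implementation):

```python
import numpy as np

def basin_water_levels(dem, basins, activation, n_levels=50):
    """For each watershed basin, choose one flat water level that best
    agrees with the CNN flood activations: pixels the DEM puts under
    water should have high activation, dry pixels low activation."""
    flood_mask = np.zeros_like(activation, dtype=bool)
    for b in np.unique(basins):
        sel = basins == b
        z, a = dem[sel], activation[sel]     # elevations, activations in [0, 1]
        best_level, best_score = None, -np.inf
        for level in np.linspace(z.min(), z.max(), n_levels):
            under = z <= level
            # Support for this flat water level within the basin
            score = a[under].sum() + (1 - a[~under]).sum()
            if score > best_score:
                best_level, best_score = level, score
        flood_mask[sel] = z <= best_level
    return flood_mask
```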

Semi-synthetic Dataset Generation of Thrown Objects Incidents for Safety Analysis in Sports Stadiums

Poster ID

17

Session

1

Authors

Winter Clinckemaillie, Robbe Decorte, Jelle Vanhaeverbeke, Maarten Slembrouck, Steven Verstockt

Abstract

In sports stadiums, objects thrown from the stands (such as bottles or cups) pose safety risks to players and officials. Detecting such incidents automatically is challenging because these events are rare, objects are small and fast, and privacy regulations limit access to (labelled) footage. We introduce a methodology to generate semi-synthetic training data by integrating simulated thrown objects into real CCTV footage. Our approach first reconstructs a textured 3D model of an empty stadium from drone footage, establishing a consistent coordinate frame for all simulations. Fixed CCTV cameras are then localised within this model by matching their frames to the drone imagery using hierarchical localisation with learned local features, followed by pose refinement to ensure accurate geometric alignment. Next, physically plausible throws are simulated using a game engine (Unity) and rendered with the estimated CCTV intrinsics and extrinsics at native resolution and frame rate; only the object layer is composited onto the original CCTV frames, preserving real crowd motion, lighting, and compression characteristics. Automatic masks, IDs, and bounding boxes are generated for each frame. This results in a multi-camera, pixel-accurate dataset for stadium safety monitoring, suitable for training and benchmarking computer-vision methods for detecting and analyzing incidents. This methodology scales to diverse venues and object types, delivers realistic labelled data where real incidents are scarce, and paves the way for more robust, automated incident-analysis systems.
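The object-layer compositing step could, in a simplified form, look as follows (Python/NumPy sketch; the names and the alpha threshold are assumptions):

```python
import numpy as np

def composite_object_layer(frame, obj_rgb, obj_alpha):
    """Alpha-composite a rendered object layer onto a real CCTV frame
    and derive the per-frame labels.

    frame     : (H, W, 3) uint8 CCTV frame (real crowd, lighting, noise)
    obj_rgb   : (H, W, 3) uint8 rendered object layer, same camera pose
    obj_alpha : (H, W) float in [0, 1]; 0 where no object was rendered
    """
    a = obj_alpha[..., None]
    out = (a * obj_rgb + (1 - a) * frame).astype(np.uint8)

    # Pixel-accurate mask and bounding box for the automatic labels
    mask = obj_alpha > 0.5          # assumed threshold
    ys, xs = np.nonzero(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max()) if xs.size else None
    return out, mask, bbox
```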

Ignoring the Decoy: Tackling Forensic Distractions in Image Forgery Localization using Masked Convolutions

Poster ID

18

Session

2

Authors

Xander Staelens, Peter Lambert, Glenn Van Wallendael & Hannes Mareen

Abstract

Manipulating images has become easier than ever thanks to the growing sophistication and accessibility of image editing tools. While these tools can be used for good, they have the potential to be exploited for malicious purposes, such as fabricating fake news, spreading misinformation, impersonating individuals, and facilitating fraud. The field of multimedia forensics aims to combat this problem by developing robust image forgery localization (IFL) methods, which detect and localize manipulations in images. However, in this research, we demonstrate that recent IFL methods are compromised by forensic distractions. These distractions are benign visual elements, such as captions, logos, and visible watermarks, that occur in real-world scenarios. We investigate the impact of distractions on state-of-the-art IFL models, which was never formally done before. We reveal that some models (like TruFor) are highly sensitive to forensic distractions, making them lose focus on the real forgeries, leading to large performance drops. To address this issue, we propose a novel masked-convolution approach for TruFor. We replace all convolution operations with a masked counterpart, enabling the method to ignore user-specified distraction regions during inference. Evaluation shows that the proposed masked-convolution approach significantly improves TruFor’s robustness against distractions, allowing it to regain focus on the real manipulations and nearly restoring original performance. This demonstrates its potential as a distraction-aware technique to enhance the real-world applicability of IFL models.
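A masked convolution in the spirit described here can be sketched with standard PyTorch operations (this follows the well-known partial-convolution recipe and is not necessarily the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def masked_conv2d(x, weight, bias, keep_mask, padding=1):
    """Convolution that ignores pixels flagged as distractions.

    x         : (N, C, H, W) input features
    weight    : (C_out, C, k, k) convolution kernel
    keep_mask : (N, 1, H, W) mask, 0 inside user-specified
                distraction regions, 1 elsewhere
    """
    keep_mask = keep_mask.float()
    # Zero out distraction pixels so they cannot contribute
    out = F.conv2d(x * keep_mask, weight, bias=None, padding=padding)

    # Re-normalize by the fraction of valid pixels in each kernel
    # window, as in partial convolutions
    ones = torch.ones_like(weight[:1, :1])          # (1, 1, k, k)
    valid = F.conv2d(keep_mask, ones, padding=padding)
    k_area = weight.shape[2] * weight.shape[3]
    out = out * (k_area / valid.clamp(min=1.0))

    if bias is not None:
        out = out + bias.view(1, -1, 1, 1)
    return out
```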

A Pilot Study on the Effect of Intratendinous Pressure on Shear Wave Elastography in the Achilles Tendon

Poster ID

19

Session

1

Authors

Ariana Cihan, Clara Ketele, Lauren Pringels, Luc Vanden Bossche, Hendrik J Vos, Annette Caenen, Patrick Segers

Abstract

Tendinopathy alters the biomechanics of the Achilles tendon through degeneration and swelling. This leads to elevated intratendinous pressure (ITP) [1], which comprises the interstitial fluid pressure (IFP) and solid stress. Ultrasound shear wave elastography (SWE) is a quantitative and non-invasive imaging modality that estimates tissue stiffness from the propagation speed of acoustically induced shear waves. We previously showed that IFP positively correlates with the shear wave velocity (SWV) in poroelastic tissue [2]. To test the hypothesis that the same behaviour is present in the Achilles tendon, we performed a pilot study of SWE in a pressurized ex vivo human Achilles tendon. Interstitial fluid pressurization was achieved through manual injections (~1–2 cc) of deionized water or 5% uniaxial stretching along the fibre direction. Multiple SWE measurements were acquired during these manipulations, while continuously monitoring IFP. Our findings demonstrate that both fluid injection and uniaxial stretching can alter IFP, and that IFP significantly impacts tendon SWV, both along and across fibres. This implies that IFP must be accounted for to assess intrinsic tendon stiffness more accurately, an important implication for the interpretation of tendon SWE data in the clinic. [1] J Sport Health Sci., 2024, 10.1016/j.jshs.2024.04.004. [2] Phys Med Biol., 2024, 10.1088/1361-6560/ad2d80.

Ghent University Chamber I (GUCCI): opening a window into the fundamentals of combustion engines

Poster ID

20

Session

2

Authors

Victor Sileghem, Quinten Dejaegere, Sebastian Verhelst

Abstract

Hard-to-electrify applications such as shipping and heavy machinery are in desperate need of sustainable fuels. The Ghent University Combustion Chamber I (GUCCI) is a unique experimental test facility designed to recreate the extreme conditions found inside combustion engines and to study the fundamental processes involved. This poster provides an overview of the setup’s capabilities, as well as past and planned research. Thanks to its versatility, GUCCI enables investigations ranging from fuel property measurements to high-pressure fuel spray formation, combustion processes, and the impact of different fuels on these phenomena. Moreover, its optical access, combined with high-speed imaging techniques, allows researchers to study in detail what often remains hidden inside the “black box” of the engine. The setup has been in operation since 2008 and has been used in multiple PhD and industrial projects. Initial studies focused on high-pressure fuel spray formation in marine diesel engines, providing valuable insights to optimize contemporary engines, as well as to characterize the impact of novel biofuels. More recently, the setup has been applied to measure laminar flame speeds, a key parameter for engine modelling. Today, GUCCI is being prepared to investigate E-fuels such as hydrogen and methanol. These fuels differ significantly from conventional fossil fuels, requiring novel combustion concepts to enable their use in tomorrow’s marine engines. The aim of this poster is to showcase our research and to connect with other researchers working with advanced high-speed imaging techniques.

Tracing the impact: Flemish non-residential buildings under the microscope

Poster ID

21

Session

1

Authors

Maud Haverbeke

Abstract

The urgent need to reduce the carbon footprint of buildings stems from the building sector’s substantial contribution to global CO2 emissions. Despite growing awareness, practitioners such as contractors and architects often lack actionable knowledge to effectively mitigate environmental impacts. A deeper understanding is required of the specific building components that drive these impacts, and of the influence material selection exerts throughout the life cycle. Although residential buildings have been the primary focus of environmental impact studies, non-residential typologies remain underexplored. This study addresses that gap by analysing seven recently built Belgian non-residential buildings. The sample includes an office building, a primary school, three childcare centres and an assisted living facility, representing a range of typologies and construction systems, including solid, massive timber, and timber construction. The environmental performance is assessed using Life Cycle Assessment (LCA), encompassing the embodied impact. The analysis provides initial insights into which building elements contribute most significantly to environmental impact, and the role of material choices. Future research will focus on identifying more cost-effective construction assemblies that can further reduce the environmental impact of the building sector, thereby supporting a transition toward more sustainable and economically viable practices.

Flexible, Capacitive-Based Pressure Sensing Array for Compression Bandages

Poster ID

22

Session

2

Authors

Thomas Verhelst, Herbert De Pauw, Pieter Bauwens

Abstract

Compression bandages can play an important role in accelerating the healing process of several wound types during wound treatment. This type of bandage needs to be applied within a certain pressure range, typically from 14 mmHg to over 40 mmHg, to be most effective. Integrating pressure sensors in the bandage could aid in monitoring the applied pressure, but typical rigid sensors could cause patient discomfort when placed in the general wound area. In this work, a flexible, capacitive-based pressure sensing array is presented which is suitable for integration in bandages. The pressure-sensing elements rely on microstructured electrodes to create a compressible dielectric layer between the two conductive layers, improving the pressure operating range. The structured electrode is patterned with silicone-filled, thermoformed domes that allow for tuning of the sensor's characteristics. Both electrodes within a sensing element are fabricated by screen printing a silver-based, stretchable ink on a stretchable TPU substrate. Four of these pressure-sensing elements were combined and fabricated into an array, demonstrating its capability to measure pressures ranging from 0 mmHg to 180 mmHg. Due to the use of common fabrication processes and textile-compatible materials, the sensor is highly suitable for integration into bandages, can easily be scaled up to low-cost, high-volume production, and can be adapted to other wearable applications. The work described here is part of a PhD topic in which novel, soft, 3D force sensors for wearable applications are being developed.
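As a first-order intuition for the sensing principle, a parallel-plate idealization can be written down (the real microstructured dielectric responds nonlinearly, so this is only a sketch):

```latex
% Parallel-plate idealization of one sensing element: pressure P
% compresses the dielectric, reducing its thickness d and thereby
% increasing the measured capacitance C.
C(P) = \frac{\varepsilon_0 \varepsilon_r A}{d(P)}, \qquad
d(P) \approx d_0 \left( 1 - \frac{P}{E_\mathrm{eff}} \right)
```

Here A is the electrode overlap area, d_0 the rest thickness of the dielectric, and E_eff an assumed effective compressive modulus of the microstructured layer; all three are illustrative parameters, not values from the work.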

Effluent treatment via non-thermal plasma and its effect on chickpea crop development

Poster ID

24

Session

2

Authors

Supriya More, Sunil Swami, Shital Sable, Mikhail Gromov, Sudha V. Bhoraskar, Kisan Kodam, Vikas L. Mathe, Nathalie De Geyter, Rino Morent

Abstract

The untreated outlet of textile industry effluent can harm water and soil ecosystems. Regarding effluent mineralization, the majority of the literature has addressed single pollutants, mineralized by photocatalysis, non-thermal plasma, or hybrid treatments, without putting the post-processed water to use. The goal of this study is to demonstrate the ability of non-thermal plasma technology to remove mixed organic dye contamination from water, with a view to its further use for agricultural purposes. To this end, a number of diagnostic techniques are employed to comprehensively characterize the physical properties of the plasma and the plasma-induced gas- and liquid-phase chemistry, providing important fundamental insights. It is shown that reactive oxygen species (such as O, OH·, and O3) are mostly responsible for the removal of the parent organic contaminants. Meanwhile, the effect of by-products formed after plasma treatment is experimentally evaluated by monitoring the growth efficacy of chickpea seeds. This experiment illustrates that irrigation with untreated effluent water has an adverse effect on the soil ecosystem and the resulting chickpea crop, whereas irrigating with plasma-treated water boosts the soil microflora and promotes positive crop development.

Investigation of the physicochemical properties and selective anti-cancer efficacy of in-plasma treated PBS using an exclusive liquid-submerged plasma jet

Poster ID

27

Session

1

Authors

L. Hoebus, P. Shali, N. Caz, J. Van den Bosch, R. Ghobeira, M. Narimisa, R. Morent, E. Wolfs, N. De Geyter

Abstract

Despite advances in current treatment options, cancer remains one of the leading causes of death, accounting for nearly 10 million deaths in 2020. This highlights the urgent need for novel therapeutic alternatives. Plasma-treated liquids (PTLs) have emerged as a promising alternative, primarily due to the generation of reactive oxygen and nitrogen species (RONS), such as H2O2, NO2-, and NO3-, during plasma treatment. In this study, a unique submerged plasma setup was used to produce high RONS concentrations in phosphate-buffered saline (PBS). The effects of plasma parameters, including gas flow rate, voltage, and treatment time, were systematically evaluated for their impact on H2O2 and NO2- concentrations. Results showed that increasing treatment time and voltage significantly elevated both H2O2 and NO2- concentrations, whereas the gas flow rate had minimal effect on both. Notably, the submerged setup generated exceptionally high H2O2 concentrations (>2000 µM), surpassing those typically reported in the literature. Finally, the cytotoxic effects of the plasma-treated PBS were assessed on human oral squamous cell carcinoma (OSCC) cells and healthy human keratinocytes (HaCaT) using cell proliferation and viability assays. The results demonstrated selective cytotoxicity, with a clear reduction in OSCC cell proliferation and viability, while sparing HaCaT cells. This selectivity underscores the strong potential of plasma-treated PBS as a targeted cancer therapy that minimizes damage to healthy tissue.

Alchemical Move: Chasing Rabbits, Trusting Turtles

Poster ID

28

Session

2

Authors

Parham Rezaee, Sina Safaei, An Ghysels

Abstract

Many molecular processes, including membrane permeation or protein-ligand unbinding, occur through spontaneous transitions between stable states separated by activation barriers. Such events are rare when these barriers are high, occurring on timescales far longer than those accessible to standard molecular dynamics (MD) simulations. This makes the direct application of MD inefficient for studying rare transitions. Path sampling techniques, such as Transition Interface Sampling (TIS), address this challenge by focusing on reactive trajectories rather than full equilibrium simulations. TIS employs Monte Carlo moves, such as the shooting move, to generate new trajectories that satisfy detailed balance, ensuring each elementary process is in equilibrium with its reverse process. TIS can be used to estimate transition rate constants at a reasonable computational cost. However, TIS can become inefficient in multi-channel systems, where important alternative pathways may remain undersampled. In such cases, channels are often separated by high free-energy barriers in phase space, so small perturbations rarely allow a trajectory to hop from one channel to another. In this work, we introduce a novel approach, termed the alchemical move, designed to overcome this limitation. The method alternates between two forcefields: a low-level model that rapidly explores phase space and a high-level model that provides accurate dynamics. By swapping forcefields between these systems, the approach enhances the discovery of distinct transition pathways while reducing computational cost. The workflow comprises four steps: phase-point selection, forcefield swapping, acceptance evaluation, and path generation. Preliminary simulations show that alchemical moves produce crossing probabilities and path lengths consistent with those obtained using only the high-level forcefield as the reference. Ongoing work will extend validation to one- and two-dimensional model systems and to all-atom biomolecular simulations.
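The acceptance-evaluation step can be illustrated with a schematic Metropolis criterion in the spirit of Hamiltonian replica exchange (Python sketch; the actual path-space acceptance rule of the alchemical move is more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_swap(e_low_x, e_high_x, e_low_y, e_high_y, beta):
    """Metropolis criterion for exchanging configurations x and y between
    a low-level and a high-level forcefield, as in Hamiltonian replica
    exchange. x currently lives in the low-level system, y in the
    high-level one; the e_* arguments are the respective potential
    energies and beta is the inverse temperature."""
    delta = beta * ((e_high_x + e_low_y) - (e_low_x + e_high_y))
    # Accept downhill swaps outright, uphill swaps with Boltzmann weight
    return delta <= 0 or rng.random() < np.exp(-delta)
```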

Impact of Thermal Non-Equilibrium Phenomena during Two-Phase Expansion

Poster ID

29

Session

1

Authors

Xander van Heule, Michel De Paepe, Steven Lecompte

Abstract

Today, climate change is one of the biggest issues facing humanity. One way to reduce its effects is to utilise a larger share of the total primary energy used. A well-known method is the utilisation of waste heat by an organic Rankine cycle (ORC). However, the efficiency of this method is reduced when lower-temperature heat sources are used. Promising variants of the ORC for application in low-temperature heat recovery are the trilateral flash cycle (TFC) and the partially evaporating organic Rankine cycle (PEORC). Both of these cycles require an expansion machine that is capable of expanding a two-phase liquid-vapour mixture. For this, volumetric machines lend themselves more naturally than turbomachines. However, the behaviour of these volumetric two-phase expansion machines has not been broadly studied yet. It has been shown that thermodynamic non-equilibrium phenomena occur within these expansion machines, and that these effects introduce additional losses which have been unaccounted for in studies of the TFC and PEORC. In previous work, the authors modelled these non-equilibrium expansion effects within a free linear reciprocating expander. The model predicts a reduction of the indicated work of an expansion stroke by around 20% when the stroke occurs over a duration of 0.1 seconds compared to a duration of 1 second. In this work, the same expansion profiles were also tested experimentally. The same expansion profiles and durations show a reduction of around 25% experimentally, compared to the 20% predicted by the model. This shows that the model is capable of predicting two-phase expansion behaviour, although the empirical parameters within the model will have to be slightly adapted.

Towards a biophysical dipole model for ABR generators

Poster ID

30

Session

2

Authors

Sarah Vandepitte, Sarah Verhulst, Emmeric Tanghe, Thomas Tarnaud

Abstract

The auditory brainstem response (ABR) provides a non-invasive measure of synchronous neural activity along the ascending auditory pathway and is widely used in auditory neuroscience. Changes in the ABR waveform serve as biomarkers of altered neural processing and can provide insight into the mechanisms of auditory disorders such as tinnitus. In this work, we present the first steps towards a biophysically grounded dipole model of ABR generators, linking neuronal population activity to scalp-recorded responses. Acoustic input is transformed into inner hair cell receptor potentials using the Verhulst et al. (2018) auditory periphery model, comprising middle-ear filtering, cochlear frequency decomposition, and biophysical inner hair cell transduction. These signals drive a NetPyNE/NEURON network of spiral ganglion cells (SGCs) implemented with Hodgkin–Huxley mechanisms, multicompartment morphologies, and high- and low/medium-spontaneous rate types. Simulations demonstrate tonotopic organization and phase locking, consistent with known auditory nerve physiology. A click stimulus evokes highly synchronous firing across the SGC population, producing a population dipole corresponding to ABR wave I. To extend the model, bushy cells of the cochlear nucleus (CN) and principal neurons of the inferior colliculus (IC) will be incorporated as candidate generators of waves III and V, respectively. Additionally, accurate modelling of specialized synapses such as the Endbulb of Held will be critical to capture the temporal precision of CN responses. Ultimately, this framework enables the translation of biophysical neural dynamics into scalp-recorded ABRs, providing a mechanistic tool to study auditory coding in health and pathology.

BETS: A MIMO-Based Approach for Enhancing Time-Sensitive Traffic Delivery in Industrial Wireless Networks

Poster ID

32

Session

2

Authors

Mohammadreza Heydarian, Didier Colle, Wouter Tavernier

Abstract

The rise of Industry 4.0 has driven the need for ultra-reliable, low-latency wireless communication systems capable of supporting Time-Sensitive Networking (TSN) requirements. While traditional research in Multiple-Input Multiple-Output (MIMO) has focused on maximizing spectral efficiency and bitrate, and TSN work has emphasized scheduling, there is significant potential in applying beamforming to improve TSN capacity, a topic that has received limited attention in the literature. In this work, we propose a novel joint scheduling and resource allocation framework for mmWave MIMO networks that integrates spatial multiplexing through MIMO beamforming, OFDMA, and traffic shaping for TSN streams. We formulate the problem of optimizing both frame scheduling and network provisioning as a unified, though computationally intractable, problem. To address this, we propose the Beam Enhanced Time Shaping (BETS) algorithm, a practical iterative heuristic based on alternating optimization. BETS tackles the challenge by jointly optimizing network provisioning (MIMO beamweights and bandwidth partitioning) and network demand (frame-level schedules). Simulation results in an indoor factory setting with mmWave channel models show that BETS outperforms an equal-resource-allocation baseline, improving the number of satisfied TSN streams by 39% to 50%. These results also demonstrate BETS’s robustness, scalability, and potential for deployment in future MIMO-enabled industrial networks.

DFT+TN: A new method to accurately model complex materials from first principles.

Poster ID

33

Session

1

Authors

Simon Ganne, Daan Vrancken, Daan Verraes, Tom Braeckevelt, Lukas Devos, Laurens Vanderstraeten, Jutho Haegeman, Veronique Van Speybroeck

Abstract

Modern technological applications rely on materials with precisely controlled electronic and magnetic properties. However, predicting these properties computationally remains challenging for an important class of materials where electrons interact strongly with each other. These "strongly correlated" materials exhibit extraordinary behaviours such as superconductivity at relatively high temperatures and transitions between conducting and insulating states, making them promising candidates for next-generation technologies including quantum computers and energy-efficient electronics. Current computational methods like Density Functional Theory, whilst successful for many materials, often fail to accurately predict the properties of strongly correlated systems. This limitation significantly hinders the rational design of new functional materials. We have developed a systematic computational approach that addresses this challenge by combining multiple advanced techniques in a novel framework. Our method begins with standard electronic structure calculations and systematically constructs simplified but accurate models that capture the essential physics of electron correlations. These models are then solved using state-of-the-art numerical techniques developed for quantum many-body systems called Tensor Networks (TN). Initial applications to quasi-one-dimensional materials demonstrate substantial improvements in calculated electronic properties compared to conventional methods, with results showing excellent agreement with experimental measurements. This work establishes a robust, parameter-free computational tool for predicting the properties of strongly correlated materials, accelerating the discovery and design of novel materials with tailored functionalities for technological applications.

A new approach to uncertainty quantification of PINNs

Poster ID

35

Session

1

Authors

Robbie Slos, Tom Lefebvre, Jolan Wauters, Guillaume Crevecoeur

Abstract

Physics-Informed Neural Networks (PINNs) are a deep learning approach to represent the spatial and temporal characteristics of a distributed physical phenomenon, such as a thermal field, using Neural Networks (NNs). The loss function used to train PINNs relies on a term that penalises any violation of the Partial Differential Equation (PDE) that governs the distributed phenomenon, in addition to a least-squares-error penalisation of any available observations in the relevant domain. PINNs have been shown to be successful in approximating the solutions to PDEs and have proven effective even in the low-data regime. A critical shortcoming is the lack of a systematic treatment of the uncertainty of the approximation. State-of-the-art approaches rely on Bayesian NNs, but these are computationally heavy, both during training and at inference, and the associated training procedure does not derive from first principles. To remedy these limitations, we propose Variational Inference PINNs (VI-PINNs). Our approach derives from first principles: the uncertainty intrinsic to the distributed phenomenon is explained by adopting Stochastic PDEs, while standard measurement uncertainty explains the observational uncertainty. This leads to the formulation of a posterior probability for the distributed phenomenon. Drawing parallels with Bayesian inference in finite spaces and relying on VI techniques to circumvent the otherwise intractable posterior, we derive a training objective that allows us to train two NNs, representing the mean and covariance of the approximation, respectively. The solution may be interpreted as a Bayesian belief about the true distributed phenomenon. Importantly, in the limit, the original PINN framework is recovered. We compare our approach with Bayesian PINNs (B-PINNs). Our results suggest that VI-PINNs are easier to implement, have a lower training time, and yield results that better align with reality, especially when extrapolating outside the measurement range.
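For context, the baseline PINN objective that VI-PINNs extend combines a PDE-residual penalty with a data misfit; a minimal PyTorch sketch for an assumed 1D heat equation:

```python
import torch

def pinn_loss(model, x_pde, t_pde, x_obs, t_obs, u_obs, alpha=1.0):
    """Baseline PINN loss for an assumed 1D heat equation u_t = alpha * u_xx:
    a PDE-residual penalty on collocation points plus a least-squares
    data term on the available observations."""
    x_pde = x_pde.clone().requires_grad_(True)
    t_pde = t_pde.clone().requires_grad_(True)
    u = model(torch.stack([x_pde, t_pde], dim=-1)).squeeze(-1)

    # First derivatives w.r.t. space and time, then the second space derivative
    u_x, u_t = torch.autograd.grad(u.sum(), (x_pde, t_pde), create_graph=True)
    u_xx = torch.autograd.grad(u_x.sum(), x_pde, create_graph=True)[0]

    residual = u_t - alpha * u_xx                           # PDE violation
    u_pred = model(torch.stack([x_obs, t_obs], dim=-1)).squeeze(-1)
    return (residual ** 2).mean() + ((u_pred - u_obs) ** 2).mean()
```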

Rutile-Rich TiO2 Coatings Engineered by Plasma Electrolytic Oxidation for Enhanced Mechanical Performance

Poster ID

38

Session

2

Authors

Asif Ali, Maryam Nilkar, Anton Nikiforov, Rino Morent, Kim Verbeken, and Nathalie De Geyter

Abstract

Titanium (Ti) and its alloys are the most commonly used materials for the production of metallic medical implants due to their promising characteristics, including bioinertness, corrosion resistance, and biocompatibility. This study investigates the deposition of mesoporous, rutile-rich TiO2 coatings on commercially available pure titanium substrates by plasma electrolytic oxidation (PEO) using a non-conventional phosphate- and silicate-free electrolyte. The aim is to improve the mechanical properties of the TiO2 coatings so that they are suitable for demanding applications, such as external fixation pins that have to withstand considerable drilling forces during implantation. The PEO process has been optimized to produce coatings with mesoporous structures that exhibit pore sizes from 0.5 to 5.0 µm and unique columnar, tubular oxide growth at the pore edges. The balance between ceramic-like and melt-like phases achieved during deposition improved the mechanical properties of the coating and resulted in a significantly higher surface hardness compared to bare titanium. X-ray diffraction (XRD) analysis revealed that the coatings consist of pure TiO2, with the rutile phase comprising 76.4% of the material. Rutile is known for its superior mechanical strength, stability, and resistance to degradation compared to the anatase phase, making it ideal for load-bearing applications. The coatings also showed a reduction in unwanted ionic inclusions, which can affect the properties of the coating. This work demonstrates that rutile-rich TiO2 coatings produced with a non-conventional electrolyte can significantly improve the mechanical performance of titanium-based implants by providing greater hardness. Consequently, plasma-assisted fabrication of rutile-rich TiO2 coatings emerges as a promising approach for enhancing implant performance.

Stochastic Scheduling and Quality Inspection in Remanufacturing Systems

Poster ID

39

Session

1

Authors

Sajjad Hedayati, Stijn De Vuyst

Abstract

This research provides an in-depth examination of stochastic scheduling and quality inspection within remanufacturing systems, a critical area in sustainable manufacturing that encompasses disassembly, reprocessing, and reassembly stages. The study begins with an overview of recoverable areas, tracing the material flow from disassembly shops through reprocessing shops to reassembly shops, establishing the foundational framework for subsequent analyses. A comprehensive exploration of scheduling policies and constraints is undertaken, including routing strategies such as random routes, shortest queue, round-robin, and reverse flows; inventory management practices covering work-in-progress, initial, final, and spare parts storage with associated replenishment policies; and resource allocation strategies involving parallel and serial facilities, expert capacities, and retrieving policies. Quality inspection processes are meticulously addressed through the assignment and scheduling of parts and experts, ensuring rigorous quality control throughout the remanufacturing cycle. Performance analysis is conducted using advanced simulation techniques, evaluating a broad spectrum of metrics such as simulation time, waiting time, resource utilization, inventory levels, replenishment frequency, queue lengths, and machine assignment, sequencing, and scheduling bottlenecks. Visual representations, including Gantt charts and utilization graphs, enhance the interpretation of system dynamics and resource efficiency. The research proposes two configuration topologies: one featuring a bank of serial machines with intermediate buffers to optimize flow and minimize waiting times, and another utilizing a pool of parallel machines and a team of experts/technicians to improve flexibility and throughput. Statistical analysis and visualization techniques are employed to support the findings, leveraging computational tools for process modeling. Future research directions focus on the development and application of advanced optimization techniques to enhance scheduling efficiency and the exploration of predictive and management strategies for reverse remanufacturing supply chains, contributing to the advancement of sustainable industrial practices.

Dual Estimation of States and Disturbances in Anesthesia and Hemodynamic Systems

Poster ID

40

Session

2

Authors

Bouchra Khoumeri, Dana Copot, Clara M. Ionescu

Abstract

Reliable estimation of patient states and external disturbances is essential in biomedical monitoring, where many critical variables cannot be directly measured. This work presents a dual-estimation framework for anesthesia and hemodynamic systems, domains where disturbances such as surgical stimulation and hemorrhage strongly influence observed responses and may compromise both patient safety and automated drug delivery. Three challenges are considered: limited parameter identifiability due to weak input excitation, the difficulty of separating genuine drug sensitivity changes from disturbance effects, and the need for timely hemorrhage detection despite reliance on indirect, delayed signals. The proposed methodology employs two parallel Kalman filters. The anesthesia estimator reconstructs pharmacokinetic states, effect-site concentration, output disturbances, and pharmacodynamic parameters, while the hemodynamic estimator recovers cardiovascular states and disturbances. A bidirectional exchange of information links the two, enabling joint monitoring of anesthesia depth and cardiovascular dynamics. The underlying patient model integrates a pharmacokinetic–pharmacodynamic subsystem for anesthesia with a lumped cardiovascular subsystem including fluid exchange dynamics. Parameter updates are guided by input informativity, ensuring sensitivity adaptation during periods of clinically meaningful excitation. The framework is highly relevant to clinical practice, as hemorrhage remains a major cause of preventable trauma death, responsible for up to 25% of fatalities. Future work includes theoretical validation of estimator properties, large-scale testing on virtual patient cohorts, and integration into predictive control schemes for closed-loop anesthesia and hemodynamic management.
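The building block of both estimators is the standard Kalman predict/update cycle over a state augmented with disturbance terms; a minimal NumPy sketch (all matrices are hypothetical placeholders):

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a linear Kalman filter. In the
    dual-estimation setting sketched here, the state x is augmented
    with disturbance terms (e.g. surgical stimulation, hemorrhage),
    so the filter reconstructs states and disturbances jointly."""
    # Predict
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement y
    S = C @ P_pred @ C.T + R                     # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
    return x_new, P_new
```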

Hidden value in lean black liquor: unlocking hydroxycinnamic acids

Poster ID

41

Session

1

Authors

Iva Zoteva, Ingeborg Stals, Jeroen Lauwaert, Jeriffa De Clercq

Abstract

As global demand for pulp and paper increases, the industry seeks to meet consumption needs while pursuing closed-loop valorisation of production outputs. Lignocellulosic biomass, composed of cellulose, hemicellulose and lignin, serves as the primary feedstock. Miscanthus x giganteus, a C4 grass cultivated on marginal land and known for its high biomass yield, is a promising source. To obtain the cellulose for paper and pulp production, the hemicellulose and lignin are liquefied with a NaOH solution. This generates black liquor, which upon acidification precipitates lignin, yielding a lignin-rich solid fraction and a residual liquid stream, the lean black liquor (LBL). Despite ongoing efforts to valorise lignin, the inorganics of the LBL are recycled and the remaining liquor is combusted for process energy. However, the LBL contains high-value hydroxycinnamic acids (HCAs), i.e. p-coumaric and ferulic acid. These have a strong antioxidant activity and hence potential for high-end cosmetic and pharmaceutical applications. Before possible HCA separation and recovery strategies can be selected and investigated, a full characterisation of the LBL is required. In this study, LBL was characterized using chromatography and 2D-NMR, revealing p-coumaric (30 mg/g biomass) and ferulic acid (2 mg/g biomass), as well as other low- and high-molecular-weight aromatics, lignin-carbohydrate complexes, and sugars, indicating the feasibility of HCA valorisation.

Large Eddy Simulation of flow in a T-shaped open-channel bifurcation

Poster ID

44

Session

2

Authors

Shan Gao, Okba Mostefaoui, Emmanuel Mignot, Tom De Mulder

Abstract

Open-channel bifurcations occur at river islands or delta-shaped mouths, and in engineered hydraulic systems like water intakes and combined sewer overflows. The complex flow features are often studied experimentally or numerically at laboratory scale. In such systems, flow separation and recirculation strongly influence mass transport, mixing processes, and energy losses. This numerical study focuses on the detailed flow in a laboratory-scale bifurcation, making use of straight channels with rectangular cross-sections and concordant, fixed and smooth beds, where the lateral branch is perpendicular to the main channel, yielding an intersection with sharp edges. It is designed to replicate a flow configuration studied experimentally at INSA Lyon, where 3D particle tracking velocimetry revealed helical ascending trajectories within the recirculation region. The numerical model employs a Large Eddy Simulation (LES), wall functions, a synthetic turbulence generator at the inlet, and the Volume of Fluid (VoF) method to capture the free surface. The simulation reproduces junction flow features, including flow separation at the upstream junction corner and the formation of a recirculation zone in the lateral branch. Model performance is evaluated by comparison with the experimental data, including time-averaged velocity fields, streamlines, and the rotation centres at different elevations in the recirculation zone. The numerical results demonstrate the predictive capability of the present LES model. After verifying whether a wall-resolved model version offers superior accuracy, the impact of Froude number and width-to-depth ratio on the flow features will be studied with the selected model version in future research.

Techno-Economic Optimization of Energy Systems

Poster ID

46

Session

2

Authors

Karel Herregodts, Didier Colle, Sofie Verbrugge

Abstract

The transition towards renewable energy systems presents both opportunities and challenges, particularly in ensuring economic viability under high levels of renewable penetration. Business cases are often complicated by uncertainties such as fluctuating energy prices and variations in consumer behaviour. This research addresses these challenges by advancing techno-economic models and optimisation techniques to support robust energy system sizing, with a focus on solutions that generate value locally while also participating in wider energy markets. Using real-world datasets on consumption, production, and market prices, simulations were conducted to evaluate the performance of photovoltaic (PV) installations, battery storage, and smart electric vehicle (EV) charging. The analysis explored multiple scenarios across a broad range of asset sizes, assessing both cost and emission impacts. Findings indicate that while increased PV capacity improves self-sufficiency and annual value, batteries often reduce annual value unless additional revenue is obtained through ancillary services such as grid balancing. Moreover, the optimisation of EV charging emerged as a key driver of economic performance: smart charging strategies not only enhanced the net present value of charging hubs but also supported the integration of larger PV systems, particularly when combined with office load profiles. The results suggest that mid-size batteries are unlikely to be economically viable in shared energy districts unless coupled with ancillary services, whereas PV systems remain promising for large buildings or sites, albeit with strong sensitivity to cost and price dynamics. Smart EV charging demonstrates substantial cost reduction potential without compromising user comfort, while also increasing the viability of renewable integration. Future work will focus on reducing model complexity for large-scale stochastic systems and designing evolutionary algorithms to further improve asset sizing optimisation.

Engineered Porosity via Polystyrene Beads in Gypsum Plasters: Experimental Validation and Microstructural Control

Poster ID

47

Session

1

Authors

Afsar Muhammad

Abstract

Like many sectors across the European Union, the gypsum industry is actively seeking strategies to reduce its environmental footprint by lowering product density while maintaining quality and performance standards. To address this, a resource-efficient strategy is pursued that reduces gypsum and water consumption, thereby lowering energy use without sacrificing mechanical performance. In this study, we establish the relationship between the three-dimensional characteristics of air voids, such as volume fraction, size distribution, sphericity, polydispersity, packing density, and tortuosity within the gypsum matrix, and the resulting mechanical performance of the material. Standard and advanced characterization techniques, such as mercury intrusion porosimetry (MIP), X-ray micro-computed tomography (μCT), and mechanical testing, were used to investigate the gypsum core structure and its mechanical properties. These characterizations served as a benchmark for quantifying the relationship between pore structure and material properties in gypsum plaster. Additionally, a straightforward methodology was developed to produce model porous samples with controlled porosity parameters. Polystyrene (PS) beads of known diameter were added to the gypsum plaster paste and mechanical tests were carried out. The samples were compared with identical counterparts from which the polystyrene had been leached. The results indicate that polystyrene can effectively serve as a surrogate for air voids in representing porosity within the gypsum matrix.

Integration of Contemporary Architecture in Bruges and Belgium (1970-1989)

Poster ID

48

Session

2

Authors

Warre Onnockx

Abstract

The project aims to examine the segment of postmodern architecture in Bruges, realised between 1970 and 1989, which sought to reintegrate within the historic city centre. Set in motion through state funding and local actors like the Marcus Gerardsstichting, Bruges pioneered city renewal and integral conservation programs, placing built heritage at the core of its future urban development. Unlike several historical studies on canonical projects in Brussels that reacted against the so-called “Brusselisation”, a nuanced study of postmodern architecture in the provincial city of Bruges is still lacking. The case studies selected for this project, designed by architects such as Groep Planning, Luc Dugardyn or Eugene Van Assche, were often criticised as being of little architectural interest by journalists and architectural critics in the late 20th century. In addition, the way in which these projects responded to the urban mythology of “Bruges-la-Morte” remains largely unquestioned. By combining recent disciplinary insights from architectural history and heritage studies, including the digital tools of controversy mapping, this project proposes an important revision of and contribution to the history of postmodern architecture, at a moment when key actors involved can still be interviewed and personal archives of architects have become available. In this way, the project contributes to the present-day challenges of urban regeneration, adaptive re-use and resilient cultural heritage.

Influence of Remanent Magnetisation on DC Shielding of Earth’s Magnetic Field

Poster ID

49

Session

1

Authors

Jirka Verleysen, Wolfgang Rösler, Simo Spassov, Luc Dupré, Margot Deruyck

Abstract

Magnetic shielding is essential when studying weak magnetic fields, such as those emanating from rocks (palaeomagnetism) or the human body (medicine), which are orders of magnitude smaller than the Earth’s magnetic field (EMF). Magnetically shielded rooms (MSRs) reduce the effect of the ambient EMF and technical (man-made) field disturbances to a minimum and provide the basis for precision measurements of weak magnetic fields. This passive (DC) shielding relies on two main effects, flux shunting and remanent magnetisation, which both depend on the materials used for the MSRs. Materials of highest permeability (e.g. mu-metal, permalloy) exhibit excellent flux shunting, whereas the contribution of their remanence to the shielding performance is negligible. As a result, many shielding formulas and simulations are simplified by not taking remanent magnetisation into account. Materials with high permeability and a magnetic remanence, like electrical steel (and other ferromagnetic materials), may contribute to magnetic DC shielding by moderate flux shunting and a significant magnetic remanence. If this magnetic remanence is aligned parallel to the ambient field, it will be the dominant effect of magnetic shielding (by field compensation). Here, an arrangement is presented that can be used to investigate the contribution of remanent magnetisation to magnetic shielding in mu-metal and electrical steel. Materials are selected to match those used in the construction of the MSR. These setups, together with COMSOL Multiphysics simulations, allow a better understanding of the underlying theory and of the importance of remanent magnetisation. This work is part of MAGSCREEN, an ongoing project of the Royal Meteorological Institute of Belgium (RMI), which focuses on building an MSR for palaeo- and archaeomagnetic research at the Geophysical Centre in Dourbes. The role of remanent magnetisation is being investigated to improve the understanding of shielding theory and to support the design and construction of current and future MSRs.
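An example of such a simplified, flux-shunting-only expression is the textbook quasi-static shielding factor of a single spherical shell (relative permeability mu_r, thickness t, diameter D); note that remanence does not appear in it:

```latex
% Quasi-static shielding factor of a single spherical shell,
% flux shunting only (remanent magnetisation neglected):
S = \frac{B_\mathrm{outside}}{B_\mathrm{inside}}
  \approx 1 + \frac{4}{3}\,\frac{\mu_r\, t}{D}
```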

Exact kinetics of amino acids through chiral phospholipid membranes via path sampling

Poster ID

50

Session

2

Authors

Tom Vlaar, An Ghysels

Abstract

Cell membranes, composed of phospholipid bilayers, act as barriers that enable the localization and compartmentalization essential to life. Understanding the permeation of small molecules through membranes is crucial for various biological and pharmacological applications. In molecular dynamics (MD) simulations, a permeation event is considered a rare event due to its infrequent occurrence. Capturing such a rare event and determining the kinetics requires immense computational resources. A substantial reduction of simulation time and an exact assessment of the rate kinetics can be accomplished with the replica exchange transition interface sampling (RETIS) method. RETIS works by randomly generating new trajectories from an initial trajectory via shooting moves, which are then accepted or rejected according to the Metropolis-Hastings algorithm. Enabling paths to be exchanged between different path ensembles greatly enhances the sampling efficiency. While no external bias potentials are used, the reactive paths with true dynamics resulting from this method allow qualitative information about rare events to be obtained. To further increase the exchange rate of RETIS, a parallelizable version called infinity-RETIS was developed. This advanced approach increases the exchange rate without steep factorial scaling and thus significantly increases computational efficiency while maintaining accuracy. In this study, the permeation rate of different amino acid enantiomers, i.e. of proline, is investigated. Enantiomers consist of the same atoms, but their structures are each other's mirror image. The permeation through a 5:1 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) : 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) membrane is investigated. Experimentally, the biological L-amino acids have been found to passively permeate up to 6 times faster through a membrane. The role of chiral membranes in the selective permeation of small molecules still remains partially elusive. Understanding the determinants of membrane permeation can aid in pharmacokinetic property tuning, as well as shed light on fundamental studies of symmetry in the origins of life.
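For reference, the (RE)TIS rate constant is assembled from the flux through the first interface and a product of conditional interface-crossing probabilities:

```latex
% TIS rate constant: flux through the first interface lambda_0 times
% the probability of reaching B before returning to A, factorized
% over the interfaces lambda_0 < lambda_1 < ... < lambda_n:
k_{AB} = \Phi_{A,0}\, P_A(\lambda_B \mid \lambda_0)
       = \Phi_{A,0} \prod_{i=0}^{n-1} P_A(\lambda_{i+1} \mid \lambda_i)
```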

Laser Integration on Photonic Chips Using Micro-Optics

Poster ID

53

Session

1

Authors

Neethu Edathil Valappil, Geert Van Steenberge, Kasper Van Gasse, Bart Kuyken

Abstract

The integration of lasers on photonic integrated circuits (PICs) is challenging, as submicron-level alignment accuracy is needed for low-loss coupling. There are various approaches to laser integration; flip-chip bonding and hybrid integration are two such methods. In flip-chip bonding, the laser needs to be bonded while ensuring alignment in the vertical direction with respect to the waveguide on the photonic chip. This requires so-called pedestals on the photonic chip and corresponding recess areas on the laser diode to achieve proper vertical alignment, necessitating customized lasers. For shorter wavelengths, alignment becomes even more challenging. In the hybrid integration approach, the laser diode is actively aligned with the photonic chip, glued together, and assembled on a common sub-mount. This method allows the use of off-the-shelf laser diodes and shorter wavelengths, but it is not compatible with wafer-scale processes. Currently, there is no solution for wafer-scale integration of high-power lasers at shorter wavelengths. We propose a novel method for integrating a high-power laser diode. In this approach, the laser beam is coupled to the SiN waveguides using a two-ball-lens system. This packaging method allows bonding the laser diodes with limited accuracy and then recovering the required accuracy through active alignment of the two ball lenses. It allows the use of off-the-shelf laser diodes, facilitates wafer-scale integration, and supports even shorter wavelengths, including at high power. In addition, the SiN chip can be designed to enhance the performance of the laser.

Online Sensor Selection for Object Detection via Bayesian Risk Minimization

Poster ID

57

Session

1

Authors

Arash Chaichi Mellatshahi, Tim Willems, Marwan Yusuf, David Van Hamme, Jan Aelterman

Abstract

In today’s world, object detection systems increasingly rely on multiple sensors, but processing data from all of them is intensive in computation, energy, and bandwidth. Situations such as battery-saving mode, limited network bandwidth, processor cooling requirements, or handling other tasks simultaneously make it difficult to use all sensors at once. This motivates the need for methods that dynamically select sensors, achieving high detection performance while conserving resources. We propose a decision-theoretic approach for object detection that selects the optimal sensor based on incoming evidence, rather than using all sensors simultaneously. Using Bayesian principles, the system chooses the sensor that minimizes the expected risk of using that sensor, combining the cost of computation and of detection errors given the evidence. This dynamic selection balances high detection performance with limited resource usage, producing the output expected to have minimal risk. To demonstrate our sensor selection method, we applied it to choosing between analyzing a low-resolution (LR) or a high-resolution (HR) image for car detection. In this setup, the system uses the output of the car detector on the low-resolution image as evidence to decide whether the corresponding high-resolution image should be analyzed for a given region. Our approach, which selects online between high- and low-resolution sensors, increased recall from 0.66 to 0.71 at 0.75 precision compared to using only the medium-resolution sensor on the KITTI dataset. It also reduced the number of processed pixels by 34% and outperformed the medium-resolution pipeline in both computation and detection performance. Overall, our method outperforms the low-, medium-, and high-resolution pipelines in terms of total risk.
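A stylized version of the per-region decision rule (Python sketch; the error-probability model and the costs are illustrative assumptions, not the authors' risk model):

```python
def choose_sensor(p_obj, c_hr, err_lr=0.25, err_hr=0.05, c_err=1.0):
    """Pick the sensor with the lower expected risk for one region.

    p_obj  : probability an object is present, inferred from the
             LR detector output (the 'evidence')
    c_hr   : computation cost of analyzing the HR crop
    err_lr : assumed error rate of the LR decision
    err_hr : assumed error rate of the HR decision
    c_err  : cost assigned to a detection error
    """
    # Detection errors matter most where the evidence is ambiguous
    # (p_obj near 0.5); confident regions gain little from HR analysis.
    ambiguity = 2 * min(p_obj, 1 - p_obj)
    risk_lr = c_err * err_lr * ambiguity
    risk_hr = c_hr + c_err * err_hr * ambiguity
    return "HR" if risk_hr < risk_lr else "LR"

# Example: ambiguous evidence triggers the HR analysis
print(choose_sensor(p_obj=0.55, c_hr=0.05))   # -> "HR"
print(choose_sensor(p_obj=0.98, c_hr=0.05))   # -> "LR"
```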

A Machine Learning Framework with Dimensionality Reduction for Robust Indoor Visible Light Positioning

Poster ID

58

Session

2

Authors

Negasa Berhanu Fite, Getachew Mamo Wegari, Heidi Steendam

Abstract

Indoor visible light positioning (VLP) has emerged as a promising alternative for precise localization without relying on additional infrastructure. Despite its potential, achieving reliable accuracy remains challenging due to the complex interplay of light propagation, unknown LED orientations, and dynamic environmental factors such as user mobility and obstacles. Traditional approaches, particularly those based on received signal strength (RSS), often face limitations caused by fluctuations in optical power and modeling imperfections. To address these challenges, this study introduces a machine learning–driven localization framework that integrates nonlinear regression with a dimensionality reduction technique. User movement is emulated through a systematically constructed dataset that varies receiver positions to reflect realistic displacement patterns in a static indoor environment with fixed obstacles. The regression model effectively captures the intricate relationships between optical signals and spatial coordinates, while dimensionality reduction alleviates computational overhead, accelerates training, and enhances generalization. Experimental evaluations conducted in a 12 m × 18 m × 6.8 m indoor setup with eight LEDs and a photodiode receiver demonstrate significant improvements in accuracy. The proposed framework achieves mean squared error values of 0.0062 cm for training and 0.0456 cm for testing, alongside R² scores of 99.31% and 94.74%, respectively, underscoring its strong predictive capability and low model loss. These findings highlight the effectiveness of combining machine learning regression with dimensionality reduction to optimize VLP performance, offering a robust solution for reliable indoor positioning services.
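
The abstract does not name the specific models; as a hedged sketch of the general recipe, a scikit-learn pipeline combining a dimensionality-reduction step (PCA assumed here) with a nonlinear regressor (an MLP assumed here) might look like this:

    # Hedged sketch of the recipe (the abstract does not name the exact models;
    # PCA and an MLP regressor are assumptions here).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    # X: RSS readings from the eight LEDs at each receiver position (placeholder
    # data); y: the corresponding (x, y) coordinates in the 12 m x 18 m room.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 8))
    y = rng.random((1000, 2)) * np.array([12.0, 18.0])

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=4),                       # dimensionality reduction
        MLPRegressor(hidden_layer_sizes=(64, 64),  # nonlinear regression
                     max_iter=2000, random_state=0),
    )
    model.fit(X, y)
    print(model.predict(X[:1]))                    # estimated (x, y) position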

Leveraging Bayesian Optimization for the Automatic Tuning of Loss-in-weight Feeding

Poster ID

59

Session

1

Authors

András Retzler, Ruben Waeytens, Bram Bekaert, Tom Lefebvre, Guillaume Crevecoeur

Abstract

Continuous Direct Compression systems provide a more sustainable solution than traditional batch manufacturing of pharmaceutical tablets in terms of manpower, footprint, material, and energy consumption. The downside is that such a system has more than 100 parameters, many of which must be tuned by a skilled operator to achieve a steady mass flow, i.e., to minimize the excessive overshoots and oscillations that can occur during the feeding process if the parameters are not set correctly. Bayesian optimization (BO) provides a principled way to automate such tuning problems in cases where every evaluation is expensive in time or cost. BO is a surrogate-based technique suitable for optimizing black-box functions without available analytical derivatives. In this work, we compared human tuning with BO-based autotuning, feeding a challenging raw material (Emcocel® 90 M; highly compressible and with reduced flowability) throughout all iterations. Four important feeder parameters were tuned: the minimum refill level [%], the agitator speed [%], the ARS refilling speed [rpm], and the controller aggressiveness, while keeping every other parameter constant. With BO, we achieved the best objective value at iteration #14, significantly lower than what the human expert had found; BO surpassed the performance level of the human operator at iteration #11. The outcome of BO is a steadier mass flow than the best efforts of the expert, as well as less material loss during machine operation, which is a step towards sustainability. In future work, it would be beneficial to reach the human level in fewer iterations, which could be achieved by incorporating human knowledge into BO.
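
As a hedged sketch of such an autotuning loop (assuming scikit-optimize; the bounds and the objective below are illustrative stand-ins for a real feeding experiment):

    # Hedged sketch of the autotuning loop, assuming scikit-optimize; bounds and
    # the objective are illustrative stand-ins for a real feeding experiment.
    from skopt import gp_minimize
    from skopt.space import Real

    space = [
        Real(10.0, 90.0, name="min_refill_level_pct"),
        Real(0.0, 100.0, name="agitator_speed_pct"),
        Real(5.0, 60.0, name="ars_refill_speed_rpm"),
        Real(0.1, 2.0, name="controller_aggressiveness"),
    ]

    def feeder_objective(params):
        # Placeholder: in practice this runs one feeding experiment with the
        # given parameters and returns a scalar penalising overshoot and
        # oscillation of the mass flow.
        min_refill, agitator, ars_rpm, aggressiveness = params
        return (agitator - 40.0) ** 2 / 1e3 + abs(aggressiveness - 0.8)

    result = gp_minimize(feeder_objective, space, n_calls=15, random_state=0)
    print(result.x, result.fun)   # best parameters and objective found so far

Each call to the objective is one physical feeding run, which is exactly the expensive-evaluation setting BO is designed for.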

Acousto-electric Frequency Shifting and Filtering of Deep Neuronal Activity: A New Technique for Acousto-Electrophysiological Neuroimaging

Poster ID

60

Session

2

Authors

Mehdi Soozande, Emmeric Tanghe, Thomas Tarnaud

Abstract

Acousto-electrophysiological neuroimaging (AENI) is a novel method for non-invasive and spatially selective recording of deep neuronal activity that leverages the interaction between focused ultrasound and brain tissue. When an ultrasound wave is applied to a target region, neurons within that region vibrate, producing periodic changes in their distance to recording electrodes such as invasive local field potential (LFP) or non-invasive transcranial electroencephalography (EEG) electrodes. This vibration induces mixing between the neural signals and the ultrasound carrier, resulting in a frequency shift of the local activity. Therefore, signals from the targeted region produce sideband components at frequencies centred around the ultrasound carrier, while signals from outside the targeted region remain unshifted. In this work, we propose a new frequency-domain decoding algorithm that selectively extracts the frequency-shifted components, thereby isolating activity from the region of interest while suppressing interference from other brain regions. Simulations performed in NetPyNE at 500 kHz and 1 MHz ultrasound frequencies demonstrate that the proposed decoding method achieves up to 25 dB signal-to-interference ratio (SIR) improvement using subthreshold ultrasound pressures, which are too low to induce neuromodulation. These results establish acousto-electric frequency shifting and filtering of neuronal activity as an effective technique for enhancing spatial resolution in electrophysiological recordings, with potential applications in neuroscience research and clinical diagnostics.
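
The core idea can be illustrated with a toy demodulation (a simplified stand-in for the proposed decoding algorithm): activity amplitude-modulated onto the ultrasound carrier is shifted back to baseband by mixing with the carrier, while unshifted interference is rejected by the subsequent low-pass filter.

    # Toy demodulation illustrating the principle (a simplified stand-in for the
    # proposed frequency-domain decoding algorithm).
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs, f_us = 4e6, 500e3                       # sampling rate, carrier [Hz]
    t = np.arange(0, 0.1, 1 / fs)
    lfp = np.sin(2 * np.pi * 12 * t)            # toy 12 Hz activity in the focus
    interference = np.sin(2 * np.pi * 9 * t)    # unshifted activity elsewhere
    # Vibration mixes the local activity onto the carrier as weak sidebands:
    recorded = 0.01 * lfp * np.cos(2 * np.pi * f_us * t) + interference

    mixed = recorded * np.cos(2 * np.pi * f_us * t)  # sidebands back to baseband
    sos = butter(4, 200.0, btype="low", fs=fs, output="sos")
    recovered = 2 * sosfiltfilt(sos, mixed)     # ~ 0.01 * lfp; the interference
                                                # is shifted to f_us and removed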

Neutron irradiation effects on mechanical properties of new EUROFER97 grades developed by SCK CEN and OCAS NV

Poster ID

61

Session

1

Authors

Sahil Valiyev

Abstract

Reduced Activation Ferritic Martensitic (RAFM) steels are the baseline structural materials for the Tritium Breeding Module (TBM) in ITER, with Eurofer97 serving as the reference alloy for European TBM concepts. However, its operational temperature range (350–550 °C) is limited by irradiation-induced hardening and embrittlement at low temperatures (<350 °C), and by creep deformation at high temperatures (>550 °C). To overcome these constraints, new Eurofer97 grades have been developed within the EUROfusion Materials Work Package to extend the operational temperature window. For low-temperature applications (280–300 °C), the focus is on reducing the ductile-to-brittle transition temperature (DBTT) in the unirradiated state, enhancing toughness. For high-temperature applications (up to 650 °C), strength improvements are achieved by promoting the precipitation of stable tantalum (Ta) and vanadium (V) carbonitrides, which resist coarsening under thermal exposure while preserving low-temperature properties. These developments have led to the classification of the new steels into low-temperature (LT) and high-temperature (HT) application grades. This study investigates the effects of neutron irradiation on the tensile and impact properties of Eurofer97 steel grades developed by SCK CEN and OCAS NV. Irradiation experiments were conducted at the BR2 reactor on miniature flat tensile specimens, reaching a dose of 3 dpa at 300 °C, representing the expected end-of-life exposure conditions in the ITER TBM. Five steel grades were selected for the irradiation campaign: four newly developed variants of Eurofer97 and one reference grade. For LT applications, two new grades were engineered using nonstandard thermomechanical treatment routes and an increased Ta content of 0.2 wt.% to refine the block and prior austenite grain size. For HT applications, two grades were optimized through modified thermomechanical treatments, reduced carbon content, and enhanced precipitation of MX-type particles by increasing nitrogen and Ta levels, aiming to improve creep resistance. Post-irradiation characterization of the samples included uniaxial tensile testing at both room temperature and the irradiation temperature to evaluate irradiation-induced hardening. Impact testing was performed on KLST specimens to assess the shift in DBTT, and fracture surface analysis was conducted using Scanning Electron Microscopy (SEM) to determine the fracture mode of the samples. LT grades exhibited a reduced DBTT after irradiation while maintaining comparable high-temperature strength. HT grades demonstrated similar levels of irradiation hardening; however, one of these grades showed a significantly higher post-irradiation DBTT, indicating increased embrittlement. Despite strong irradiation-induced hardening and the loss of uniform elongation, all specimens fractured in a ductile manner, as confirmed by the presence of dimples on the fracture surfaces.

Evaluating Visual Context and Social Intelligence in Human-Robot Dialogue

Poster ID

64

Session

2

Authors

Ruben Janssens, Thomas Demeester, Tony Belpaeme

Abstract

Large language models (LLMs) have given social robots the ability to autonomously engage in open-domain conversations. However, they are still missing a fundamental social skill: making use of the multiple modalities that carry social interactions. While previous work has focused on task-oriented interactions that require referencing the environment, or on specific phenomena in social interactions such as dialogue breakdowns, we focus on more open-ended social conversations, where the aim is to build a bond between the user and the robot: an important skill for robots in socially assistive roles, such as in education and healthcare. This research evaluates how robots can use visual context in such social conversations. We present a system using open-source Vision-Language Models (VLMs) that processes this visual context locally and in real time, taking a holistic view that encompasses environmental awareness, adaptation to the user and their behaviour, and processing of non-verbal feedback. We then evaluate how well this system works and whether it improves the conversations, using a semi-structured conversation for comparability in a within-subject study. Our results show a clear gap between the impressive performance of VLMs at understanding visual context and their failure to integrate it appropriately into a social conversation. Furthermore, our seemingly simple dialogue task exposes clear remaining limitations of LLMs in holding social conversations. With this research, we outline current challenges for LLMs and VLMs in social dialogue and offer a measurable way forward that will allow research to break through the silicon ceiling holding back foundation models from succeeding at social human-robot dialogue.
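
A hypothetical sketch of the kind of perception-dialogue loop described above (all helper names and behaviours are illustrative placeholders, not the authors' system):

    # Hypothetical sketch of the perception-dialogue loop; the helper names and
    # their behaviour are illustrative placeholders, not the authors' code.

    def vlm_describe(frame) -> str:
        # Stand-in for a local open-source VLM call: summarise environment,
        # user behaviour and non-verbal feedback visible in the camera frame.
        return "user smiles and points at a photo on the wall"

    def llm_reply(history: list[str], visual_context: str) -> str:
        # Stand-in for the dialogue LLM, conditioned on the transcript plus
        # the latest visual context.
        return f"(reply grounded in: {visual_context})"

    history: list[str] = []
    for frame in ["frame0", "frame1"]:           # placeholder camera frames
        context = vlm_describe(frame)            # runs locally, in real time
        utterance = llm_reply(history, context)  # visually grounded response
        history.append(utterance)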

Long lasting, bit-reproducible research with GNU Guix

Poster ID

65

Session

1

Authors

Thijs Paelman

Abstract

Even under the best of circumstances*, reproducing research from a few years ago is hard. To confirm the results of a scientific experiment, one could start from the same data and processing scripts, which should in theory lead to the same result, since most variables (e.g. measurement points) become fixed. Still, it remains almost impossible to reliably reproduce the results bit by bit due to uncontrolled variables (e.g. random numbers). Even without random numbers, one variable remains uncontrolled and overlooked most of the time: the computational environment. Unless every single library used in producing the result is meticulously recorded, the chances of bit-reproducible research drop fast. While results don't always have to be bit-reproducible to review whether the conclusion is still valid, the question remains: can a discussion about differences in results even be fruitful if it is unclear whether the variability is caused by the computational environment or by another uncontrolled variable? There are a few approaches to reproducing the computational environment of an experiment. The package manager GNU Guix is presented because of its unique approach to recording the complete software stack, with an explicit focus on long-term reproducibility. Additionally, it allows for effortless experimentation with differences in the computational environment: the effect on the final result of swapping out or patching libraries deep inside the stack is easily investigated. The author's dissertation, written for his educational master's degree, is used as an example of a bit-reproducible deliverable. *When the data, the processing scripts and the report are all available: Open Data, Open Source software and Open Access
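
As a minimal sketch of this workflow (package names are illustrative), a manifest declares the full software stack and a pinned channels file fixes the exact Guix revision, after which the identical environment can be rebuilt on demand:

    ;; manifest.scm -- declare every package the analysis depends on
    (specifications->manifest
     (list "python" "python-numpy" "python-matplotlib"))

    $ guix describe -f channels > channels.scm   # record the exact Guix revision
    $ guix time-machine -C channels.scm -- shell -m manifest.scm   # rebuild it later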

Numerical simulation of intertidal areas

Poster ID

66

Session

2

Authors

Jone Liekens, Tom De Mulder, Henk Schuttelaars, Yoeri Dijkstra

Abstract

Our research focuses on the numerical simulation of water motion and morphodynamics in tidal systems. Morphodynamics, the study of how the bed changes over time due to erosion and deposition, is important for several reasons: the bedform geometry can influence flood risks, is important for vegetation and wildlife, and affects which ships can pass. Using numerical models allows for proper ecological and economic management of these systems. As a preliminary step in this research, the detailed tide-induced water motion in an intertidal area is investigated for a schematized tidal inlet system with a rectangular plan view and a linearly sloped bed, forced by a semi-diurnal lunar tide at the seaward boundary. The flow is governed by the 1D Shallow Water Equations. For the considered configuration, velocity peaks occur in the intertidal area during both flood and ebb. We compare the coordinate transformation method with other wetting-drying approaches, such as the removal/addition of grid cells used in Delft3D, the Defina approach with partially dry cells, and perturbation methods. As a next step, we will focus more on idealized modelling: the use of simplified models based on first principles, where the most dominant mechanisms are selected on the basis of a scaling analysis, and a perturbation method and harmonic analysis can be applied. The main advantage of these idealized models is their fast execution time, which makes it possible to thoroughly investigate the sensitivity to a range of parameters and to identify equilibria and their stability.
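
For reference, a common form of the 1D Shallow Water Equations for such a configuration (the exact terms retained in this study may differ), with h the water depth, u the depth-averaged velocity, zeta the free-surface elevation, g the gravitational acceleration, and c_f a bed-friction coefficient:

    \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0,
    \qquad
    \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
      = -g\,\frac{\partial \zeta}{\partial x} - \frac{c_f\,u\,|u|}{h}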

Data-driven predictive maintenance for plasma-facing components in fusion devices

Poster ID

67

Session

1

Authors

L. Caputo

Abstract

Plasma-facing components (PFCs) such as the divertor and the first wall are exposed to extreme thermal loads, intense thermal shocks, and bombardment by plasma ions, neutral particles, and energetic neutrons. This unique combination of conditions is difficult to simulate and monitor in any test environment outside an actual reactor. In addition, only a limited set of diagnostics is able to monitor PFCs in real time under such conditions, and modelling techniques for a reactor of this scale are computationally prohibitive. As a result, PFC failure may compromise plant availability and trigger prolonged, complex, and costly remote maintenance interventions. In fusion environments, the mean time to repair is typically on the order of several weeks, and any unscheduled outage entails a significant loss of revenue and operational disruption. In this study, we present a similarity-based approach to estimate the remaining useful life of beryllium tiles subjected to steady-state thermal loading by an electron beam. Texture features, including the Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Fast Fourier Transform (FFT), are extracted from high-resolution infrared images to describe the evolution of local pixel-intensity patterns in the hottest region of the material’s surface under thermal loading. The health indicator is constructed from a fusion of these features and is invariant to absolute brightness, eliminating the need for temperature calibration. While demonstrated for steady-state heat-loading conditions, the approach is potentially generalizable to more complex damage sources, as in a real machine scenario.
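
For illustration, the three texture descriptors named above can be extracted from an infrared frame roughly as follows (using scikit-image on a placeholder image; the fusion into a health indicator is the study's contribution and is not reproduced here):

    # Illustrative extraction of the three texture descriptors named above from
    # a placeholder frame (scikit-image); the fusion into a health indicator is
    # the study's contribution and is not reproduced here.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

    frame = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)

    # GLCM: second-order statistics of co-occurring pixel intensities.
    glcm = graycomatrix(frame, distances=[1], angles=[0], levels=256, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]

    # LBP: histogram of local binary patterns (insensitive to absolute brightness).
    lbp = local_binary_pattern(frame, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, density=True)

    # FFT: spectral energy captures periodic patterns on the heated surface.
    spectral_energy = np.abs(np.fft.fftshift(np.fft.fft2(frame))).mean()

    features = np.concatenate([[contrast], lbp_hist, [spectral_energy]])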

Flow-Volume Curves for Effective and Interpretable Artificial Intelligence in Respiratory Medicine

Poster ID

68

Session

1

Authors

Thomas T. Kok, John Morales, Christophe Smeets, David Ruttens, Kristyna Sirka Kacafırkova, Dolores Blanco-Almazan, An Jacobs, Vojkan Mihajlovic, Femke Ongenae, Sofie Van Hoecke

Abstract

This study evaluates whether the use of flow-volume data can improve classification performance and interpretability in Artificial Intelligence (AI) models for respiratory medicine, and examines the impact of these explanations on model understanding. We assessed the classification performance of AI models trained on flow-volume and respiratory airflow data, using a case study of estimating breathing difficulty for COPD patients. Additionally, we evaluated model performance with varying dataset sizes, and conducted user tests with physicians to assess how including explanations affects their classification performance when receiving decision support from the AI model. The results showed that models trained on flow-volume data outperformed those trained on respiratory airflow data when the dataset was sufficiently large, with a minimum of 30 to 35 patients required for estimating ease of breathing. Providing explanations alongside data and model predictions improved the classification performance of physicians in user tests, most notably with the explanations from flow-volume data, despite subjective evaluations rating the explanations below average in usefulness. In conclusion, flow-volume data can offer benefits for classification, conditional on the size of the available dataset. Although the explanations contain relevant information that improves physician performance, further efforts are needed to win physicians’ trust. Overall, the results highlight the potential of flow-volume data in respiratory AI applications.
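
For readers unfamiliar with the representation, a flow-volume curve is obtained by integrating the airflow signal to get volume and plotting flow against that volume; a minimal sketch with an assumed sampling rate and a toy signal (not the study's preprocessing pipeline):

    # Minimal sketch of deriving a flow-volume curve from a raw airflow signal
    # (illustrative toy signal and sampling rate, not the study's pipeline).
    import numpy as np

    fs = 100.0                                   # assumed sampling rate [Hz]
    t = np.arange(0.0, 4.0, 1.0 / fs)
    flow = np.sin(2 * np.pi * 0.25 * t)          # toy single-breath airflow [L/s]
    volume = np.cumsum(flow) / fs                # running integral of flow [L]
    curve = np.stack([volume, flow], axis=1)     # (volume, flow) pairs: the
                                                 # flow-volume representation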

Floor plan

Floor plan FEARS 2025 at St Peter's Abbey