2025 RoEduNet Conference: Networking in Education and Research

Europe/Bucharest
Technical University of Moldova

168, Stefan cel Mare bd, Chisinau, Republic of Moldova
Nicolae ȚĂPUȘ (National University of Science and Technology POLITEHNICA Bucharest, Romania), Gheorghe DINU (Agency ARNIEC / RoEduNet), Dinu Turcanu (Technical University of Moldova), Răzvan Victor RUGHINIȘ (National University of Science and Technology POLITEHNICA Bucharest), Paul GASNER (Agency ARNIEC/RoEduNet, Alexandru Ioan Cuza University of Iasi), Octavian RUSU (Agency ARNIEC/RoEduNet, Alexandru Ioan Cuza University of Iasi), Florin Bogdan MANOLACHE (Carnegie Mellon University)
Description

24th RoEduNet Conference  
Networking in Education and Research

Nowadays, modern education and research activities depend strongly on a high-speed communication infrastructure and on computer networks built with the newest technology. The design, implementation, and management of such networks, together with the development of new application fields, are not possible without solid knowledge of the networking state of the art.

The 24th edition of the annual conference of Agency ARNIEC/RoEduNet (the Romanian Education Network) is organised with the help of the Technical University of Moldova, the National University of Science and Technology POLITEHNICA Bucharest, the "Alexandru Ioan Cuza" University of Iasi, Carnegie Mellon University (Pittsburgh, US), and the IEEE Computer Society Romania Chapter, under the patronage of the Ministry of National Education and Scientific Research of Romania. It offers special opportunities for information exchange in computer networking: technical and strategic aspects, communication issues, and of course their applications in education and research.

You are welcome to join in. Please register as a user on the conference website to receive updates on the 24th RoEduNet Conference.  

The conference will be organized as an in-person event with help from our partners and sponsors.

Thank you, and we hope to see you soon at the 24th RoEduNet Conference!

    • 18:30 20:00
      Welcome Dinner

      https://maps.app.goo.gl/oysnWRpnXrzTDAzJ7

    • 09:30 10:00
      Opening Session: Registration & Welcome Coffee

      Opening Session
      https://maps.app.goo.gl/oysnWRpnXrzTDAzJ7

    • 10:00 10:30
      Opening Session: Welcome

      Opening Session
      https://maps.app.goo.gl/oysnWRpnXrzTDAzJ7

      Conveners: Dinu TURCANU (Technical University of Moldova), Gheorghe DINU (Agency ARNIEC / RoEduNet)
    • 10:30 11:20
      Opening Session: Keynote RoEduNet

      Opening Session
      https://maps.app.goo.gl/oysnWRpnXrzTDAzJ7

      Conveners: Mihai CARABAȘ (Agency ARNIEC/RoEduNet, University Politehnica of Bucharest), Dr Octavian RUSU (Agency ARNIEC/RoEduNet, Alexandru Ioan Cuza University of Iasi)
    • 11:20 11:40
      Coffee Break 20m
    • 11:40 13:00
      Cloud Computing and Network Virtualisation Room 2

      Room 2

      Technical University of Moldova
      Convener: Mihai CARABAȘ (Agency ARNIEC/RoEduNet, University Politehnica of Bucharest)
      • 11:40
        Simplified Design of An FPGA-Accelerated Connectionless Network Stack 15m

        Nowadays, one of the most discussed technology-related issues in the networking domain is what a high-performance network stack implementation that is easily accessible to everyone can look like. Hardware acceleration addresses this need by offering high performance, programmability, and a minimal network protocol stack design, with the Field-Programmable Gate Array (FPGA) being the most common hardware choice for such implementations. Currently, there are multiple research directions that use programming-language-to-Hardware-Description-Language (HDL) tools; however, research on the softwarization of hardware-accelerated network protocol stacks is lacking. Therefore, this paper proposes a simplified network protocol design that leverages Tom's Obvious Minimal Language (TOML) as a configuration language. We evaluated the approach using a TCP/IP stack, integration into a 10 Gigabit Ethernet-like device, and an Advanced eXtensible Interface (AXI) Streaming-based data generator.

        Speaker: Mr Alin-Tudor Sferle (Technical University of Cluj-Napoca)
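
        The paper's TOML-driven design itself is not reproduced here; as a rough illustration of the idea, the sketch below parses a TOML description of protocol headers into a bit-level field layout that a stack generator could consume (the field and protocol names are invented; Python 3.11+ is assumed for tomllib).

        ```python
        # Hypothetical sketch of a TOML-described protocol layout (names invented).
        import tomllib

        PROTO_TOML = """
        [ethernet]
        fields = [["dst_mac", 48], ["src_mac", 48], ["ethertype", 16]]

        [ipv4]
        fields = [["version_ihl", 8], ["tos", 8], ["total_length", 16]]
        """

        def header_layout(toml_text: str) -> dict:
            """Return per-protocol (field, offset_bits, width_bits) tuples."""
            layout = {}
            for proto, spec in tomllib.loads(toml_text).items():
                offset, rows = 0, []
                for name, width in spec["fields"]:
                    rows.append((name, offset, width))
                    offset += width
                layout[proto] = rows
            return layout

        if __name__ == "__main__":
            for proto, rows in header_layout(PROTO_TOML).items():
                print(proto, rows)
        ```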
      • 11:55
        Benchmarking Quantum, Classic, Electronic Optocoupler Neurons and Neural Networks 15m

        Quantum computing opens new frontiers for modeling intelligence at the physical level. This paper investigates Quantum Neural Networks (QNNs) not just as tools for classification, but as circuit-based analogues of biological neurons, inspired by classical transistor-based implementations. We start by reviewing current QNN architectures, then focus on quantum perceptrons and variational quantum circuits in order to benchmark their performance against external criteria and to explore how they simulate neuronal operations such as weighted summation and thresholding. The paper analyzes the parallels between quantum gates and transistor logic, proposing hybrid designs that transpose neuron-inspired behaviors into quantum logic circuits. Through simulations and a literature review, the authors examine the feasibility of such quantum-neuromorphic architectures and discuss their implications for future hardware-efficient, brain-inspired quantum learning systems. This work aims to bridge neuroscience, electronics, and quantum computing, highlighting a novel pathway for scalable, interpretable quantum AI. The paper also compares performance among various neural networks deployed as physical transistors, as software running on Tensor Processing Units (TPUs), and as QNNs, analyzing the bridging technologies among them.

        Speakers: Teodor Cervinski (Bucharest University of Economic Studies), Cristian Toma (Bucharest University of Economic Studies, Department of Economic Informatics and Cybernetics)
      • 12:10
        Securely Sharing Electronic Health Records over Blockchain 15m

        Blockchain technology has gained significant traction in recent years. These decentralised databases offer security, immutability, and scalability across various applications. These properties are ideal for building decentralised applications, which are solutions that combine off-chain components (traditional web services, frontend, and backend) with on-chain components (smart contracts). This paper proposes a novel usage of this technology in the context of securing the computation and storage of EHRs (electronic health records) and giving patients back ownership over their medical data, thus ensuring that their information remains private and that they can choose whether to grant access to, or transfer, that information to any institution of their choice. By utilising technologies that had only recently been released at the time of writing, such as zero-knowledge proofs and homomorphic encryption, we have achieved promising results. This success with encryption technologies instils confidence that, in the not-so-distant future, the relationship between healthcare institutions (hospitals, research institutions) and patients will undergo a significant shift. Over time, this solution, or its improved variants, may form the basis for other applications that require selective access to private data to perform private computations.

        Speaker: Cristian-Tănase Paris (University Politehnica of Bucharest)
      • 12:25
        Metadata-based Network Traffic Analysis Using Zeek 15m

        Networks usually face the challenges of high traffic volume and diverse user behaviours, which make analyzing and preventing security incidents particularly difficult. Another major drawback is that traffic is often encrypted, so the data available for analysis is very limited. This paper presents an approach to network monitoring tooling that uses Zeek for the inspection of encrypted traffic. The system is designed to analyse metadata, flow characteristics, and other anomalies. To increase the detection rate and contextual awareness, the deployment integrates with the Malware Information Sharing Platform (MISP) for real-time threat intelligence correlation, and with OpenSearch for scalable indexing, querying, and integration with other logs from the same network. This setup allows the detection of suspicious activity, threat hunting, and intrusion prevention across the entire infrastructure. The system architecture is modular and scalable, allowing different security policies to be applied to the intrusion detection software and the configuration to be adjusted to suit traffic patterns. We discuss the architectural design, performance, testing, and practical challenges of monitoring encrypted traffic on high-volume networks.

        Speaker: Stefan Dorin Jumarea (National University of Science and Technology POLITEHNICA Bucharest)
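
        As a flavor of the metadata-level analysis described above, the sketch below scans a Zeek conn.log written in JSON format and flags long-lived or high-upload TLS flows. The field names follow Zeek's standard conn.log schema; the thresholds are illustrative, not the paper's.

        ```python
        # Sketch of metadata-level triage over a Zeek conn.log written as JSON lines
        # (field names follow Zeek's conn.log schema; thresholds are illustrative).
        import json

        LONG_FLOW_S = 3600           # flag encrypted flows longer than an hour
        BIG_UPLOAD_B = 100_000_000   # ...or with >100 MB sent by the originator

        def suspicious_conns(path: str):
            with open(path) as f:
                for line in f:
                    rec = json.loads(line)
                    if rec.get("service") != "ssl":   # Zeek labels TLS/SSL flows "ssl"
                        continue
                    duration = rec.get("duration") or 0
                    orig_bytes = rec.get("orig_bytes") or 0
                    if duration > LONG_FLOW_S or orig_bytes > BIG_UPLOAD_B:
                        yield rec["id.orig_h"], rec["id.resp_h"], duration, orig_bytes

        for flow in suspicious_conns("conn.log"):
            print(flow)
        ```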
      • 12:40
        From Incident to IOC - An Automated Malware Investigation Pipeline 15m

        Incident response teams and security engineers are often overwhelmed by the large number of compromised artifacts requiring investigation and mitigation within short timeframes. This critical demand for speed and scalability emphasizes the important role of automation in modern post-incident processes.
        This paper presents an automated forensic investigation pipeline specifically designed for malware detection and threat intelligence analysis of compromised QCOW2 disk images. The pipeline has three stages: extracting potential indicators from the affected targets, confirming their malicious nature, and sharing threat intelligence with the interested parties. The key tool for automating the analysis process is Dissect, which is used to programmatically acquire system information and potentially infected artifacts. Once extracted, the indicators are triaged against a malware database. Finally, validated indicators of compromise (IOCs) are disseminated using MISP, a threat intelligence platform.
        We validated this workflow by analyzing a substantial volume of compromised QCOW2 images collected from an SSH honeypot, showcasing its effectiveness in accelerating the analysis process. This work contributes to automating post-incident analysis by providing a modular pipeline that transforms raw forensic data into threat intelligence. This approach reduces the manual burden, ensures analysis reproducibility, and is easy to extend.

        Speaker: Andreia-Irina Ocănoaia (National University of Science and Technology POLITEHNICA Bucharest)
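
        The three pipeline stages map naturally onto a small skeleton like the one below; artifact extraction with Dissect and IOC publication via MISP are only stubbed out in comments, and the paths and hash set are placeholders.

        ```python
        # Skeleton of the three stages described above. Extraction with Dissect and
        # publication to MISP are stubbed; triage checks SHA-256 digests against a
        # known-bad set (paths and the digest below are placeholders).
        import hashlib
        from pathlib import Path

        KNOWN_BAD = {"0" * 64}  # placeholder digest; a real run queries a malware DB

        def extract_artifacts(mount_point: str) -> list:
            """Stage 1: the paper does this with Dissect directly on QCOW2 images."""
            return [p for p in Path(mount_point).rglob("*") if p.is_file()]

        def triage(artifacts: list) -> list:
            """Stage 2: confirm the malicious nature of extracted indicators."""
            hits = []
            for p in artifacts:
                digest = hashlib.sha256(p.read_bytes()).hexdigest()
                if digest in KNOWN_BAD:
                    hits.append((p, digest))
            return hits

        def publish(hits: list) -> None:
            """Stage 3: the paper disseminates validated IOCs through MISP."""
            for path, digest in hits:
                print(f"IOC sha256={digest} source={path}")

        publish(triage(extract_artifacts("/mnt/evidence")))
        ```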
    • 11:40 13:00
      High Performance Computing in Science Room 1

      Room 1

      Technical University of Moldova
      Convener: Florin Bogdan MANOLACHE (Carnegie Mellon University)
      • 11:40
        Semantic-Aware Data Lineage Tracking for Transparent ETL Pipelines in Industrial Systems 15m

        In modern industrial environments, understanding how data flows through complex ETL pipelines is critical for traceability, auditing, and compliance. While traditional lineage tracking tools rely on static metadata or log-based introspection, they often lack semantic expressiveness and offer limited support for automated validation.

        This work presents a semantic-aware framework for ETL data lineage tracking, integrating property graph modeling (Neo4j), lightweight ontologies (RDF/OWL), and SPARQL querying. Each transformation step is modeled as a semantic entity enriched with metadata such as inputs, outputs, timestamps, and execution order. The resulting lineage graph is exported to RDF, enabling validation via SHACL and pattern-based queries over transformation chains.

        The proposed solution was implemented and tested on the AdventureWorks DW2022 dataset, demonstrating low-latency querying, structural correctness, and enhanced traceability. Comparative analysis shows that, unlike traditional tools such as Apache Atlas or OpenLineage, our approach supports fine-grained reasoning, constraint checking, and semantic completeness verification.

        This contribution offers a lightweight and reproducible solution using only open-source components and is particularly suited for evolving, compliance-heavy industrial data environments. Future work includes extending the ontology, integrating streaming ETL sources, and deploying the model over scalable RDF triple stores.

        Speaker: Bogdan Nicușor Bindea (Technical University of Cluj-Napoca, Computer Science Department)
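
        A minimal sketch of the RDF side of such a lineage graph, using rdflib; the ontology terms and step names are invented for illustration and are not the paper's actual vocabulary.

        ```python
        # Minimal sketch of the RDF layer (ontology terms and step names invented).
        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/lineage#")
        g = Graph()
        g.bind("ex", EX)

        # Two chained transformation steps with their artifacts and execution order.
        g.add((EX.step1, RDF.type, EX.Transformation))
        g.add((EX.step1, EX.output, EX.staging_table))
        g.add((EX.step1, EX.order, Literal(1)))
        g.add((EX.step2, RDF.type, EX.Transformation))
        g.add((EX.step2, EX.input, EX.staging_table))
        g.add((EX.step2, EX.order, Literal(2)))

        # SPARQL over the lineage graph: which step consumes another step's output?
        q = """
        SELECT ?producer ?consumer ?artifact WHERE {
            ?producer ex:output ?artifact .
            ?consumer ex:input  ?artifact .
        }
        """
        for row in g.query(q, initNs={"ex": EX}):
            print(row.producer, "->", row.consumer, "via", row.artifact)
        ```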
      • 11:55
        Information Models for Large Table Trimming 15m

        Large datasets can rarely be presented or used in real time without significantly reducing their size. This paper discusses models of trimming timestamped event datasets while keeping the loss of information to a minimum. The presentation goes gradually from independent event models, where trimming of events does not change the order of the information contribution of the other events, to statistical models adjusted to incorporate cross-referencing of entries and memory effects into the information calculation. Based on the particular structure of the information function, various trimming strategies are discussed. Depending on the contents of the registered events, such models can be used to retain most of the information in the dataset, while significantly decreasing the computation time. This is particularly useful when dealing with frontends that can handle a limited amount of data, or with sampling the training data for machine learning models.

        Speaker: Florin Bogdan MANOLACHE (Carnegie Mellon University)
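
        A toy illustration of information-guided trimming in the spirit of the independent-event models mentioned above (not the paper's actual models): each event is scored by the surprisal of its type, -log2 p(type), and only the most informative fraction survives.

        ```python
        # Toy sketch: trim timestamped events by keeping the highest-surprisal rows
        # (-log2 of the empirical type frequency); events and fraction are made up.
        import math
        from collections import Counter

        events = [("login", 1), ("login", 2), ("login", 3), ("error", 4),
                  ("login", 5), ("restart", 6), ("login", 7), ("login", 8)]

        counts = Counter(kind for kind, _ in events)
        total = len(events)

        def surprisal(kind: str) -> float:
            return -math.log2(counts[kind] / total)

        def trim(rows, keep_fraction=0.5):
            ranked = sorted(rows, key=lambda r: surprisal(r[0]), reverse=True)
            kept = ranked[: max(1, int(len(rows) * keep_fraction))]
            return sorted(kept, key=lambda r: r[1])  # restore time order

        print(trim(events))  # the rare 'error'/'restart' events survive the trim
        ```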
      • 12:10
        Information Retention in Trimmed Datasets 15m

        The structure and usage scenarios of a software package for trimming datasets with minimum information loss are described. Several information models applied to a large dataset generated by an enterprise information system are analyzed. Different strategies and procedures are compared to obtain the best compromise between computing time and information retention. A set of data profiling tests is presented with the purpose of detecting anomalies such as data flooding. The results show that a block trimming strategy allows the preservation of most of the information while speeding up the computation by one or more orders of magnitude. The software automatically detects the optimum trimming level associated with the model, allowing autonomous real-time control of large datasets.

        Speaker: Florin Bogdan MANOLACHE (Carnegie Mellon University)
      • 12:25
        A lightweight methodology for using generative AI chatbots in education and research in the field of computer networks 15m

        This paper presents a streamlined and effective methodology for integrating generative artificial intelligence (AI) chatbots into educational and research activities focused on computer networks. The proposed approach leverages the capabilities of generative AI to assist in each phase of a typical network analysis workflow: selecting appropriate software tools, generating and capturing network traffic, extracting relevant data, and conducting iterative analysis. For the experimental part, we went through these steps to analyze the functioning of the TCP protocol. The methodology begins with the identification of suitable software tools to achieve the objectives; in the experiments, iperf is used for traffic generation and tshark for packet capture and processing. To address the challenges of manual traffic generation and data collection, AI-assisted scripting (e.g., PowerShell) is employed to automate these tasks. Given the limitations of chatbot environments in handling raw packet capture files (.pcapng), the methodology includes transforming these files into lightweight, structured formats (e.g., .csv) using AI-generated scripts (e.g., Windows Batch files), as illustrated in the sketch after this entry. These processed files are then analyzed within the chatbot environment using iterative prompt engineering, enabling dynamic exploration of network behavior, such as TCP protocol analysis. The study demonstrates that generative AI chatbots significantly enhance productivity by aiding in tool selection, code generation, data transformation, and analytical reasoning. This methodology not only simplifies complex technical tasks but also promotes deeper understanding and engagement in computer networking education and research.

        Speaker: Adrian PECULEA (Technical University of Cluj-Napoca)
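
        One representative automation step from this workflow is converting a .pcapng capture into a lightweight CSV for chatbot analysis. The tshark flags below are standard; the file names are placeholders, and the paper's own scripts use PowerShell/Batch rather than Python.

        ```python
        # Convert a .pcapng capture into a CSV of TCP fields for chatbot analysis
        # (standard tshark flags; file names are placeholders).
        import subprocess

        FIELDS = ["frame.time_relative", "ip.src", "ip.dst", "tcp.srcport",
                  "tcp.dstport", "tcp.len", "tcp.analysis.retransmission"]

        cmd = ["tshark", "-r", "capture.pcapng", "-Y", "tcp", "-T", "fields",
               "-E", "header=y", "-E", "separator=,"]
        for field in FIELDS:
            cmd += ["-e", field]

        with open("capture.csv", "w") as out:
            subprocess.run(cmd, stdout=out, check=True)
        ```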
      • 12:40
        Satellite Data Integration Platform for Public Accessibility and Educational Use 15m

        The continuous growth of the space industry and the increasing demand for satellite data across various sectors highlight the need for accessible and user-friendly data integration platforms. However, despite the availability of large volumes of open satellite data, significant barriers remain in making this data accessible to the general public, educators, and non-expert users. This research aims to define an optimal architecture for a satellite data integration platform that addresses these challenges. The platform is intended not only to improve public access to satellite mission data but also to support educational initiatives aimed at preparing future generations of space engineers and enthusiasts. The study identifies key system requirements through an analysis of current technological solutions, assessing factors such as scalability, performance, interoperability, and technical maturity. Based on this analysis, a set of appropriate technologies and frameworks is selected to form a cohesive architecture capable of delivering an intuitive and functional user experience. The research culminates in the development of a functional prototype that demonstrates the proposed solution’s viability, as well as its capacity to bridge the gap between complex satellite data infrastructures and educational or public-oriented applications.

        Speaker: Mr Sebastian Severin (National University of Science and Technology POLITEHNICA Bucharest, Romanian Space Initiative (ROSPIN), Bucharest, Romania)
    • 11:40 13:00
      Networking in Education and Research Room 4

      Room 4

      Technical University of Moldova

      Networking in Education and Research

      Convener: Paul GASNER (Agency ARNIEC/RoEduNet, Alexandru Ioan Cuza University of Iasi)
      • 11:40
        AI-based piano learning with real-time feedback and personalized progress tracking 15m

        This paper presents Pianalyze, an intelligent e-learning platform designed to support piano practice through real-time audio analysis and personalized feedback. By leveraging convolutional neural networks and in-browser audio capture, the system evaluates performance and provides explanatory insights to guide learners. Also, the platform enables manually curated learning paths and tracks individual progress over time. Our experimental evaluation shows that the system provides real-time visual and textual feedback, effectively bridging the gap between traditional instruction and self-guided learning through AI-driven support.
        Keywords — Piano Learning Platform, Real-Time Audio Feedback, Music Note Recognition, AI in Music Education, CNN-based Audio Classification

        Speaker: Andreea-Diana Pîrlac (Faculty of Computer Science, "Alexandru Ioan Cuza" University)
      • 11:55
        A Comprehensive Review of Indoor Localization Techniques for Autonomous Robots 15m

        Indoor localization in environments without GPS poses a fundamental constraint for cost- and payload-limited autonomous robots, where high precision often conflicts with practical deployment; optical motion-capture systems deliver millimeter-level accuracy at prohibitive cost, whereas low-cost vision-based and radio-frequency-based methods provide scalable alternatives with varying infrastructure needs and performance trade-offs. First, radio-frequency techniques such as received-signal-strength-indicator fingerprinting, ultra-wideband time difference of arrival, Wi-Fi round-trip time, and Bluetooth Low Energy beacon schemes are evaluated in terms of deployment density, update rate, and typical positioning error. Next, hybrid frameworks that fuse inertial-measurement-unit data with visual input via Kalman filtering or simultaneous localization and mapping are examined for decimeter-level accuracy and real-time operation on embedded hardware. Vision-only pipelines, both marker-based and marker-free, are then analyzed with respect to feature-detection robustness, computational load, and adaptability to dynamic indoor scenes. Commercial systems illustrate the cost-precision dilemma and guide performance benchmarks, while comparative analysis highlights key gaps: sub-decimeter accuracy under minimal infrastructure, real-time performance on resource-constrained platforms, and adaptive calibration across diverse settings. Finally, we advocate future research on lightweight vision-centric architectures augmented by occasional radio-frequency cues and online adaptation to enable accessible high-precision indoor positioning for budget-constrained robotic applications.

        Speaker: Teodor-Alexandru Dicu
      • 12:10
        ARTEMIS - Analytics Platform for Education Statistics 15m

        The educational process, as we know it, is not a universal solution for everyone who follows it or will follow it. It is not perfect—nor can it be—but it can certainly be improved and adapted to evolve alongside those who engage with it. Data lies at the foundation of knowledge, yet it is not sufficiently leveraged in the learning process. There is a need for a way through which the educational system can accelerate its outcomes and better align with diverse learning styles and student profiles, evolving with them.
        The solution proposed in this paper helps educators harness the data they have and support a personalized educational process, rooted in data-informed decisions and simplified through the seamless integration of technology. This thesis covers the development, testing, and validation of the platform as a solution for improving the teaching process and integrating technology as an accelerator in education.

        Speaker: Ms Diana-Andreea Dervis (National University of Science and Technology Politehnica Bucharest)
      • 12:25
        True Random Number Generation from Very Low Frequency Radio Noise 15m

        The use of radio noise in True Random Number Generation is a popular and well-understood strategy, yet one that remains relatively underexplored in research. Given that Very Low Frequency radio streams record chaotic weather events from around the world (such as thunderstorms), this novel entropy source sparked our interest. Our choice is supported by prior research in statistical modeling, along with an appropriate combination of signal-processing and statistical methods. Hence, we describe in detail our processing pipeline for engineering a TRNG from VLF radio noise (two representative post-processing steps are sketched after this entry). It yields high-entropy, unbiased, and identically distributed outputs that pass the FIPS 140-2 tests, regardless of the chosen VLF stream source. To strengthen reproducibility, we have based all of our experiments on publicly available data. We also provide an open-source, modular command-line tool for building stream-based processing pipelines. Furthermore, we offer an example implementing a TRNG from FM radio noise found between standard transmission bands.

        Speaker: Matei Barbu (University Politehnica of Bucharest)
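
        Two standard post-processing steps of the kind such pipelines rely on, sketched below (not the authors' exact code): von Neumann debiasing of raw sample bits and the FIPS 140-2 monobit test.

        ```python
        # Von Neumann debiasing plus the FIPS 140-2 monobit test; random bits
        # stand in for digitized VLF noise samples.
        import random

        def von_neumann(bits):
            """Map bit pairs (0,1) -> 0 and (1,0) -> 1; discard (0,0) and (1,1)."""
            return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

        def monobit_ok(bits):
            """FIPS 140-2: the count of ones in 20,000 bits must lie in (9725, 10275)."""
            ones = sum(bits[:20000])
            return 9725 < ones < 10275

        raw = [random.getrandbits(1) for _ in range(60000)]
        clean = von_neumann(raw)
        if len(clean) >= 20000:
            print(len(clean), "bits after debiasing; monobit pass:", monobit_ok(clean))
        else:
            print("need more raw bits")
        ```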
      • 12:40
        Arena Acadnet - An educational platform for practicing and solving coding challenges 15m

        Arena Acadnet is an educational platform developed to meet the needs of applied computer science education, offering an interactive environment for solving programming problems, with the goal of stimulating critical thinking and the ability to tackle engineering challenges. It serves as the core for organizing the National Olympiad of Applied Informatics – Acadnet and is structured across multiple user levels, providing tailored features for administrators, problem authors, supervisors, and regular users.

        Speaker: Cristian-Stefan Avram
      • 12:55
        Artificial Intelligence, a Useful Tool in Students' Learning Process 5m

        The theme of this paper focuses on exploring the impact of artificial intelligence on the student learning process. In a world that is increasingly digitally connected, it is inevitable to recognize that AI is becoming a central element in the educational landscape. This study was conducted during the period 03/19/2024–04/16/2024.

        Speaker: Paul GASNER (Agency ARNIEC/RoEduNet, Alexandru Ioan Cuza University of Iasi)
    • 11:40 13:00
      Open Source Education and Research Room 3

      Room 3

      Technical University of Moldova
      Convener: Lilia Sava (Technical University of Moldova)
      • 11:40
        Protocols for semantic interoperability in distributed environments 15m

        The domain of information and communication technology faces the challenge of integrating increasingly massive amounts of data. Despite significant progress in the development of data management and exchange solutions (e.g., the FAIR principles for open data), information is often poorly managed, badly structured, and lacking context, which complicates data integration. There is a need for improved or new modes of storing, sharing, recovering, and modifying data.
        This project proposes a solution by way of data interoperability across different information systems and technological platforms. This does not imply compatibility of different information systems on a hardware-software level, but intersystem data representation and interpretation with the help of common standards and protocols. Such an approach has to be based on distributed data architectures designed with semantic models that facilitate information exchange. Semantic interoperability implies that the data being exchanged are interpreted consistently by all parties.
        This research aims to develop new methods of ensuring semantic interoperability through standardization, formalization, and ontologies as primary data interchangeability enablers. The primary objectives are: to identify methods of designing new distributed data architectures that ensure semantic interoperability; to develop a methodology for implementing protocols that guarantee semantic interoperability in distributed systems; to create new models of interaction of heterogeneous network entities, and a protocol that supports effortless data interchangeability; to develop new methods of validation and verification of the developed protocol.
        The expected results consist of producing new formal and empirical methods of ensuring semantic interoperability in distributed information and communication systems. These methods would contain: a metalanguage for modeling interoperable data, and a protocol for semantic interoperability in distributed systems.

        Speaker: Leon Brânzan (Technical University of Moldova)
      • 11:55
        INTELLIT version 3.0 - Advanced Infographics and Semantic Analysis on Romanian Literature Corpora 15m

        Literature is a fundamental landmark in a nation's identity, bringing together the creations written over time on the territory and in the language of each nation. To preserve this national literary heritage, all members of the community must have quick access to information about representative writers, their works, and the associated concepts, literary movements, and publications. The current work supports the digitalization of Romanian literature by capitalizing on two major corpora issued by the Romanian Academy: the General Dictionary of Romanian Literature (DGLR) and the Chronology of Romanian Literary Life (CVLR). The INTELLIT 3.0 web platform is built on the already converted textual base, now enriched with interactive infographics that transform raw data into chronologies, maps, and visual graphs, highlighting the evolution of trends and the dynamics of publications. As such, the platform serves as a tool for preserving cultural heritage, with a clear educational role: it supports the teaching of literature in schools, facilitates individual study, and provides researchers with a modern resource.

        Speakers: Alin-Gabriel Stan (Universitatea Națională de Știință și Tehnologie POLITEHNICA București), Mr Laurentiu Neagu (Universitatea Națională de Știință și Tehnologie POLITEHNICA București)
      • 12:10
        High-Precision Localization for Mecanum-Wheeled Robots Using Sensor Fusion with ArUco Markers 15m

        Accurate and robust localization is critical to the performance of autonomous mobile robots in complex environments, such as those encountered in international robotics competitions. Holonomic robots equipped with Mecanum wheels offer superior maneuverability but introduce significant localization challenges. Wheel slippage leads to rapid accumulation of odometric errors, while inertial measurement units (IMUs) are prone to drift over time, compromising long-term reliability. Without an effective correction mechanism, these issues render precise navigation unfeasible.
        This paper presents the design and implementation of a high-precision localization system for a Mecanum-wheeled mobile robot, operating in a structured and known environment representative of a competitive robotics field. The proposed system integrates an Extended Kalman Filter (EKF) to fuse data from the IMU’s gyroscope and accelerometer with position estimates derived from an external vision-based localization method using ArUco fiducial markers. The complete solution is validated on a functional robotic platform, demonstrating improved stability and accuracy in navigation tasks.

        Speaker: Alexandru Toader (Universitatea Politehnica Bucuresti)
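
        A bare-bones sketch of the fusion idea, with numpy only: predict a planar pose [x, y, theta] from wheel/IMU inputs, then correct it with an absolute pose measured from an ArUco marker. The noise matrices and inputs are illustrative, not the paper's calibrated values.

        ```python
        # EKF predict/update cycle for planar pose fusion (matrices illustrative).
        import numpy as np

        x = np.zeros(3)                      # state: x, y, heading
        P = np.eye(3) * 0.1                  # state covariance
        Q = np.diag([0.02, 0.02, 0.01])      # process noise (wheel slip, gyro drift)
        R = np.diag([0.005, 0.005, 0.002])   # ArUco measurement noise

        def predict(x, P, v, omega, dt):
            theta = x[2]
            F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                          [0.0, 1.0,  v * np.cos(theta) * dt],
                          [0.0, 0.0,  1.0]])
            x = x + np.array([v * np.cos(theta) * dt, v * np.sin(theta) * dt, omega * dt])
            return x, F @ P @ F.T + Q

        def update(x, P, z):
            H = np.eye(3)                    # the marker yields the full pose directly
            y = z - H @ x                    # innovation
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            return x + K @ y, (np.eye(3) - K @ H) @ P

        x, P = predict(x, P, v=0.3, omega=0.05, dt=0.02)
        x, P = update(x, P, z=np.array([0.006, 0.001, 0.002]))
        print(x)
        ```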
      • 12:25
        Exploring OCR: Combining Open-Source Engines for Improved Document Digitization 15m

        Document digitization involves converting physical documents into editable digital text, a process that offers significant benefits such as preserving archives, enabling remote access, and simplifying content modification. Optical Character Recognition (OCR) technologies facilitate this transformation by extracting text from scanned or photographed document images. However, OCR accuracy can be hindered by the wide variety of document layouts and conditions, including issues like faded text and uneven lighting. In this study, we investigate the potential of combining multiple open-source OCR engines to improve digitization accuracy, focusing on the Tesseract and EasyOCR engines. We developed a testing pipeline and conducted experiments targeting challenging scenarios for character recognition. Our results demonstrate that integrating outputs from both engines can enhance performance, highlighting their complementary strengths and the promise of ensemble approaches for more reliable document digitization.

        Speaker: Mr Mihai-Lucian PANDELICĂ (Universitatea Politehnica Bucuresti)
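
        A deliberately naive version of the ensemble idea, assuming the standard pytesseract and easyocr APIs: run both engines on the same image and keep whichever output has the higher mean confidence. The paper's combination logic is more elaborate; the image path is a placeholder.

        ```python
        # Run Tesseract and EasyOCR on one image and keep the higher-confidence text
        # (naive selection rule; 'scan.png' is a placeholder).
        import easyocr
        import pytesseract
        from PIL import Image

        reader = easyocr.Reader(["en"], gpu=False)  # loads the recognition models

        def tesseract_result(path):
            data = pytesseract.image_to_data(Image.open(path),
                                             output_type=pytesseract.Output.DICT)
            words = [(w, float(c)) for w, c in zip(data["text"], data["conf"])
                     if w.strip() and float(c) >= 0]
            conf = sum(c for _, c in words) / max(len(words), 1) / 100.0
            return " ".join(w for w, _ in words), conf

        def easyocr_result(path):
            chunks = reader.readtext(path)  # list of (bbox, text, confidence)
            conf = sum(c for *_, c in chunks) / max(len(chunks), 1)
            return " ".join(t for _, t, _ in chunks), conf

        text_a, conf_a = tesseract_result("scan.png")
        text_b, conf_b = easyocr_result("scan.png")
        print(text_a if conf_a >= conf_b else text_b)
        ```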
      • 12:40
        Design and Implementation of a Preference-Based Activity Management System for Professors and Students 15m

        Creating an application designed to assist professors is a genuine necessity across various disciplines. This paper introduces a platform for managing activities for a given subject. It possesses two primary functionalities: one for professors, who create activities and manage enrolled students, and another for students, who select their activities preferences according to the teacher's requirements. Students are allocated to their preferred activities through a preference management algorithm, based on the existing possibilities and limitations. Each student is allotted the most amount of activities to attain the greatest score possible. The paper delineates the implementation, architecture, and necessity of this type of application.

        Speaker: Ms Diana-Alina Pințoiu (Universitatea POLITEHNICA București)
    • 13:00 14:00
      Lunch 1h
    • 14:00 15:00
      Keynotes: Partners Keynotes
      • 14:00
        Keynote - Cisco 20m
      • 14:20
        Keynote - Orange :: Towards Innovative Network Security Systems and 5G Infrastructure R&D Projects 20m
        Speakers: Cristian PATACHIA (Orange Romania), Razvan MIHAI (Orange Moldova), Sergiu POSTICA (Orange Moldova)
      • 14:40
        Keynote - Mines Paris Tech :: Educating Tomorrow's Engineers: Trials and Errors in an Underwater Robotics Course 20m

        Keynote Mines Paris

    • 15:00 16:45
      Cloud Computing and Network Virtualisation Room 2

      Room 2

      Technical University of Moldova
      Convener: Mihai CARABAȘ (Agency ARNIEC/RoEduNet, University Politehnica of Bucharest)
      • 15:00
        Building a Hands-On Infrastructure Lab: An Integrated Proxmox and EVE-NG Approach for Advanced IT and Cybersecurity Education 15m

        In today's rapidly evolving technological landscape, practical experience is essential for students seeking to study IT infrastructure, networking, and cybersecurity. This article presents a comprehensive guide to building a robust virtual laboratory environment, leveraging Proxmox Virtual Environment (VE) as the core hypervisor and EVE-NG (Emulated Virtual Environment Next Generation) as the platform for network emulation and simulation. The objective is to provide students with an accessible, high-performance, and manageable infrastructure that facilitates hands-on learning, allowing them to experiment with complex network topologies, security protocols, and system deployments in a safe, isolated space.

        The proposed solution goes beyond merely installing Proxmox and EVE-NG. It integrates crucial components that enhance the learning experience by simulating real-world IT environments. This consolidated approach maximizes resource efficiency and streamlines management, making it an ideal setup for educational purposes.

        A cornerstone of this architecture is the implementation of a flexible and redundant storage solution. While Proxmox's native ZFS capabilities are valuable, this setup enhances them by integrating dedicated Network Attached Storage (NAS) systems, such as TrueNAS or OpenMediaVault, virtualized directly within Proxmox. This allows students to learn about centralized data management, data redundancy through RAID configurations, and various sharing protocols (NFS, CIFS, iSCSI). This is vital for storing EVE-NG images, virtual machine templates, and essential backups, mimicking enterprise storage solutions. By directly passing through physical disks to these virtualized NAS instances, students gain insights into optimizing storage performance and ensuring data integrity.

        Effective management of any IT system requires robust monitoring. For our learning environment, we start with an existing Nagios implementation for foundational monitoring. However, the plan is to transition towards or supplement with Zabbix, a highly scalable monitoring solution. Zabbix, which will be implemented as a Lightweight Container (LXC) within Proxmox, offers a more comprehensive approach to data collection and visualization. It provides in-depth insights into resource utilization (CPU, RAM, disk I/O, network throughput) across the Proxmox host, individual virtual machines, and containers, including the EVE-NG instance itself. This allows students to observe system behavior under various loads, understand performance bottlenecks, and learn how to proactively identify and troubleshoot issues through configurable alerts and customizable dashboards.

        Centralized network management is another critical aspect. Integrating a virtualized firewall and router, such as pfSense or OPNsense, within a dedicated virtual machine provides a secure and segmented network environment. This allows students to design and implement complex network segmentation using VLANs, configure advanced routing, and set up essential network services like DHCP and DNS. Connecting EVE-NG labs to this virtual firewall enables students to simulate realistic network scenarios, including controlled internet access and inter-VLAN routing, providing invaluable practical experience in network design and security.

        Finally, a robust backup strategy is paramount for any learning environment to protect student work and configurations. This solution incorporates Proxmox Backup Server (PBS), an open-source backup solution specifically designed for the Proxmox ecosystem. PBS offers efficient data deduplication and rapid recovery capabilities for virtual machines and containers. It can be deployed as a virtual machine or container on the same server, or ideally, on a separate machine for enhanced redundancy, teaching students the importance of resilient backup and recovery procedures in real-world operations.

        By strategically combining Proxmox and EVE-NG with dedicated solutions for storage, advanced monitoring, network management, and robust backups, this integrated platform offers an exceptionally powerful, efficient, and resilient virtual laboratory. This architecture provides an ideal hands-on environment for students to deeply engage with, and master, advanced IT technologies in a secure, controlled, and experiential learning setting.

        Speaker: Adrian Savu-Jivanov (Universitatea Politehnica Timisoara)
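
        To give a flavor of the management plumbing such a lab automates, the sketch below lists guests and their status through the Proxmox API using the proxmoxer client; the host name, credentials, and node layout are placeholders, and the exact client usage should be checked against the proxmoxer documentation.

        ```python
        # List Proxmox guests (VMs and containers) per node via proxmoxer
        # (host, credentials, and node names are placeholders).
        from proxmoxer import ProxmoxAPI

        proxmox = ProxmoxAPI("pve.lab.local", user="root@pam",
                             password="secret", verify_ssl=False)

        for node in proxmox.nodes.get():
            name = node["node"]
            for vm in proxmox.nodes(name).qemu.get():   # KVM guests (e.g., EVE-NG)
                print(name, "vm", vm["vmid"], vm["name"], vm["status"])
            for ct in proxmox.nodes(name).lxc.get():    # containers (e.g., Zabbix)
                print(name, "ct", ct["vmid"], ct["name"], ct["status"])
        ```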
      • 15:15
        OLAP Performance Analysis in Neo4j/Cypher Deployed on OpenStack: Challenges and Results for Medium-Sized TPC-H Data Sets 15m

        Among NoSQL databases, Neo4j has become a leading solution for managing graph-structured data, particularly known for its flexibility in handling complex relationships. However, the performance characteristics of Neo4j in analytical (OLAP) workloads deployed within cloud-based environments, such as OpenStack, remain underexplored. In this paper, we present an analysis of OLAP performance for Neo4j clusters deployed on a private OpenStack cloud. Using the TPC-H benchmark dataset, transformed specifically into a graph schema, we generated and executed a comprehensive suite of Cypher queries designed to test Neo4j's capabilities under varying workloads and configurations. Our experiments reveal that increasing data volume reduces performance and success rates of query executions. Additionally, expanding the cluster size through additional nodes provided limited performance gains, highlighting diminishing returns from horizontal scaling for medium-sized analytical workloads.

        Speaker: Codrin-Stefan Esanu (Alexandru Ioan Cuza University of Iasi, Cegeka Romania)
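
        An OLAP-style Cypher query of the kind benchmarked above, issued through the official neo4j Python driver; the mapping of TPC-H tables to nodes and relationships shown here is invented for illustration, as are the connection details.

        ```python
        # Aggregate revenue per market segment over a TPC-H-like graph
        # (schema mapping and credentials are illustrative).
        from neo4j import GraphDatabase

        QUERY = """
        MATCH (c:Customer)-[:PLACED]->(o:Order)-[:CONTAINS]->(l:LineItem)
        WHERE o.orderdate >= date('1995-01-01')
        RETURN c.mktsegment AS segment,
               sum(l.extendedprice * (1 - l.discount)) AS revenue
        ORDER BY revenue DESC
        LIMIT 10
        """

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "secret"))
        with driver.session() as session:
            for record in session.run(QUERY):
                print(record["segment"], record["revenue"])
        driver.close()
        ```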
      • 15:30
        Automation and Monitoring of Virtualized Infrastructure on Proxmox Servers 15m

        This article presents the development, implementation, and validation of an integrated monitoring and security platform for Proxmox virtual environments. The platform uses lightweight host agents to gather high-frequency CPU, memory, and network I/O measurements through the Proxmox API, and analyzes outgoing traffic against a centralized trust registry for the real-time detection of suspicious events. An adaptive machine-learning module learns normal VM behavior to reduce false alarms, while flexible, role-based API endpoints allow secure changes to trust settings without interrupting services. Validation across diverse workloads illustrates the platform's efficacy in precisely identifying both resource and network irregularities.

        Speaker: Mr Eduard Dumistracel (National University of Science and Technology Politehnica Bucharest)
      • 15:45
        A Multi-Agent Framework for Auditing Smart Contracts 15m

        Smart contracts power a vast array of blockchain applications, securing billions of dollars on decentralized finance, but their immutable nature turns every vulnerability into a permanent and highly exploitable liability. Although automated security tools can efficiently detect many issues, their high false-positive rates and lack of trust still require manual audits, which are costly and introduce deployment delays. In this paper, we present an end-to-end, AI-augmented auditing framework leveraging a multi-agent pipeline for comprehensive vulnerability detection and automated exploit generation.

        First, we review existing approaches such as static analysis, fuzzing, symbolic execution, formal verification, and machine-learning methods, highlighting their strengths, limitations, and real-world use.

        Building on this survey, we introduce a multi-agent architecture that orchestrates chained AI tools to cross-analyze findings, automatically generate test cases, and produce Proof-of-Concept exploits. The system ingests textual challenge descriptions to outline stepwise attack strategies and synthesizes ready-to-compile Solidity exploit code. Exploits are compiled and validated in a stateless, containerized environment, enabling fully automated verification of attack effectiveness.

        To validate our approach, we demonstrate the pipeline on capture-the-flag challenges and discuss how prompt fine-tuning, retrieval-augmented generation, and formal verification integration can further enhance detection accuracy and exploit reliability. Finally, to assess real-world impact, we evaluate the applicability of the framework in online bug bounty platforms and auditing contests, demonstrating its potential to make smart contract security verification more comprehensive, scalable, and cost-effective.

        Speaker: Alexe Luca Spataru (Polytechnic University of Bucharest)
      • 16:00
        Big data Architecture for Automatic Transformation and Validation of Heterogeneous Geospatial Data Compliant with INSPIRE Directive 15m

        Geospatial data play a fundamental role in decision-making processes within government entities, private-sector organizations, and the context of natural resource management. The integration and harmonization of geospatial data from heterogeneous sources represent a significant challenge in the context of the implementation of the INSPIRE (Infrastructure for Spatial Information in Europe) Directive. The need for standardization of geospatial data, both on a global scale and in the context of the INSPIRE Directive, is primarily driven by the need for interoperability, integration, and efficient analysis of spatial information from different systems, formats, and semantic structures. In addition, the lack of standardization creates substantial impediments to the aggregation and consistent use of datasets, which decreases the accuracy and relevance of the resulting spatial analyses. Therefore, in this paper we propose a comprehensive and extensible architecture for the automatic transformation and harmonization of heterogeneous spatial data into INSPIRE-compliant formats that ensures interoperability within the European infrastructure. Our solution is based on open-source technologies and tools and is validated using the official INSPIRE Reference Validator tool.

        Speaker: Mr Mihai NEGRU (National University of Science and Technology POLITEHNICA Bucharest)
      • 16:15
        A Practical Evaluation of Deployment Strategies for Services Running in the Cloud 15m

        Rapid production rollout of new features, vulnerability patches, and critical bug fixes is essential to maintaining a market advantage in today's competitive landscape. However, this need for speed introduces increased risks, as demonstrated by the 2025 Google Cloud incident, where a software deployment containing an undetected bug resulted in cascading service disruptions in major platforms, including Cloudflare, OpenAI, and Microsoft 365, requiring over seven hours for complete system recovery. Given the possible consequences, it becomes mandatory for organisations to select the most appropriate deployment strategy when rolling out new software code changes. This paper provides a comprehensive review of deployment strategies in modern development lifecycles, analysing the advantages and limitations of each deployment strategy, as well as its practical considerations, such as complexity and cost. Additionally, we propose a framework for evaluating and comparing different approaches to deploying to production. By examining each approach alongside requirements specific to the service itself, such as urgency and risk tolerance, application complexity, and team expertise, organisations can select the most suitable deployment strategy for their use case.

        Speaker: Vlad-Stefan Dieaconu (Politehnica University of Bucharest)
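
        Most of the compared strategies share the control structure sketched below, shown here as a minimal canary-style rollout loop; the traffic steps, health signal, and thresholds are invented placeholders.

        ```python
        # Canary-style rollout skeleton: shift traffic in steps, watch a health
        # signal, roll back on regression (all values are illustrative stubs).
        import time

        STEPS = [5, 25, 50, 100]     # percent of traffic on the new version
        MAX_ERROR_RATE = 0.01

        def set_traffic_split(percent_new: int) -> None:
            print(f"routing {percent_new}% of traffic to the new version")

        def observed_error_rate() -> float:
            return 0.002             # stub: read from monitoring in a real system

        def rollout() -> bool:
            for pct in STEPS:
                set_traffic_split(pct)
                time.sleep(1)        # soak period (seconds here, minutes in practice)
                if observed_error_rate() > MAX_ERROR_RATE:
                    set_traffic_split(0)   # roll back to the old version
                    return False
            return True

        print("deployed" if rollout() else "rolled back")
        ```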
      • 16:30
        VERIT-ALBERT: A Finetuned LLM Approach for Verifying Information Credibility 15m

        The proliferation of fake news in the digital age poses a critical threat to informed public discourse and societal stability. This paper introduces VERIT-ALBERT, a novel fine-tuned LLM-based solution designed to enhance fake news detection by utilizing the lightweight architecture of the ALBERT transformer.
        Through benchmarking on multiple real-world Fake News datasets, we demonstrate the efficiency of VERIT-ALBERT, providing validated strategies for improving fake news detection.

        Speaker: Dr Elena-Simona Apostol (National University of Science and Technology POLITEHNICA Bucharest)
    • 15:00 16:45
      Data FAIR in Science Room 4

      Room 4

      Technical University of Moldova
      Convener: Dr Cosima Rughinis (University of Bucharest)
      • 15:00
        The Silence of Systems: Risks of Algorithmic Nutritional Exclusion in Hyperconnected Economies 15m

        The digitalization of nutrition has radically transformed how individuals monitor their dietary intake but has also introduced invisible algorithmic risks. This study investigates the algorithmic behavior of MyFitnessPal and Cronometer in response to simulated nutritional scenarios, using user profiles that display energy or micronutrient vulnerabilities. Based on an exploratory-comparative design, five simulated profiles (both female and male) with different weight-loss objectives were tested under controlled conditions, using traditional, chaotic, and standardized menus. The findings reveal a substantial difference between the two applications: MyFitnessPal exhibited a systematic algorithmic silence, failing to issue warnings even in cases of intake below 1000 kcal, while Cronometer blocked unsafe goals and explicitly flagged nutritional deficiencies. These insights highlight the critical need to integrate protective mechanisms and digital ethics into self-tracking apps, particularly for users lacking professional nutritional guidance. The study contributes to developing best practices for designing responsible algorithms with direct implications for public health in hyperconnected economies.

        Speaker: Prof. Dinu TURCANU (Technical University of Moldova)
      • 15:15
        AI Gender Bias in Moral Guidance: A Computational Content and Sentiment Analysis of ChatGPT, Gemini, Le Chat, and DeepSeek 15m

        We present a computational study of gender bias in four state-of-the-art large language models (LLMs), namely ChatGPT, Gemini, Le Chat, and DeepSeek, using a novel prompt-based framework to evaluate model responses to moral guidance questions. Our methodology integrates structured prompt variation, content analysis, sentiment scoring, subjectivity detection, part-of-speech tagging, and n-gram extraction. Original contributions include: (1) a cross-model comparison of LLM outputs to gendered and neutral prompts within a controlled experimental setup; (2) quantitative evidence of systematic lexical, affective, and structural variation based on prompt gendering; and (3) a reproducible hybrid method combining sociolinguistic analysis with automated metrics for bias detection. Results indicate that gendered prompts yield distinct affective profiles and lexical distributions, with female-coded inputs eliciting more nurturing and communal language and male-coded prompts emphasizing autonomy and discipline. All models exhibit consistently positive sentiment, but differ in subjectivity and stylistic framing. These findings show persistent gender-normative patterns in moral guidance outputs and demonstrate the utility of integrating qualitative reasoning with NLP techniques for auditing LLM behavior. We argue that prompt-sensitive evaluation is essential for assessing alignment and mitigating subtle forms of discursive bias in generative systems.

        Speaker: Ms Diana-Alexandra Iordache (Faculty of Sociology and Social Work, University of Bucharest)
      • 15:30
        Digital Mediation and Parental Anxiety: Restrictive Strategies in r/Parenting Discourse 15m

        This study looks into parental digital mediation strategies through a qualitative thematic discourse analysis of 68 Reddit comments from r/Parenting. Using Nikken and Jansz's (2014) mediation framework and Furedi's (2002) concept of paranoid parenting, we explore both parental strategies and underlying anxieties. Empirical evidence shows that restrictive mediation dominates (46 out of 59 comments retained for analysis), with parents employing categorical exclusions such as social media bans and internet restrictions. Relational approaches like active mediation (n=3) and co-use (n=5) remain rare, pointing to a cultural framing of the internet as a risk rather than an opportunity. The study identifies three manifestations of paranoid parenting: catastrophic risk framing, moral superiority, and counter-expertise claims. Mediation strategies differ by family composition, with multi-child parents tending to demonstrate reflexive learning, whereas single-child parents often rely on anticipatory restrictions. These patterns indicate that digital parenting decisions are more influenced by fear than knowledge, with broader cultural expectations of intensive parenting reinforcing this behavior.

        Speaker: Gabriela-Alexandra Balanuta (University of Bucharest)
      • 15:45
        Computational Analysis of Innovation Discourse: Evidence from a Student Hackathon 15m

        This article investigates how young participants in the Innovation Labs Hackathon, Romania's largest national educational pre-accelerator program, define the concept of innovation at the entry point into an entrepreneurial support structure. Drawing on Bourdieu’s theory of fields and forms of capital, and the concept of boundary objects, we argue that the Innovation Labs program functions as a boundary object and a site of symbolic positioning. The study examines how symbolic meanings are negotiated at the intersection of academic and entrepreneurial logics. Using a meaning-based recoding grid and cluster analysis applied to 316 open-ended survey responses, five repertoires of innovation are identified: Tech Enthusiasts, Problem Solvers, Symbolic Creators, Institutional Learners, and Social Creatives. These reflect distinct configurations of cultural, social, symbolic, and applied capital, shaped by both individual trajectories and broader field dynamics. Minor differences related to gender and prior hackathon experience suggest differentiated access to legitimate forms of expression and recognition.

        Speaker: Ms Andrea-Mariana Budeanu (Tech Lounge Association)
      • 16:00
        AI Risk and Governance in Startups: A Study of Implementation Gaps and Founder Justifications 15m

        Artificial intelligence (AI) adoption among startups is accelerating, yet systematic understanding of how founders approach AI-related risks, justify implementation decisions, and establish governance frameworks remains limited. This study examines how startup founders interpret AI risks and opportunities, develop justification strategies for adoption, and negotiate boundaries of responsibility through applied sociological analysis. Drawing on survey data from 72 startup founders collected in September 2024, we analyzed quantitative patterns and qualitative insights using sociological frameworks of technological framing, institutional logics, and boundary-work. Founders predominantly view AI through dual lenses, recognizing innovation opportunities while acknowledging operational risks including data privacy, algorithmic bias, and integration challenges. Despite widespread risk awareness (95% have learned or intend to learn about AI risks), actual governance measures remain limited, with 85% having no specific protective measures. Founders primarily employ market-oriented justifications emphasizing efficiency and competitive advantage, while ethical considerations remain largely symbolic. Most favor shared responsibility between AI developers and implementers (39%), though practical boundary-setting is rare. These sociological insights inform technical implementation strategies and governance system design for AI deployment in resource-constrained environments. Our findings offer evidence-based frameworks for risk prioritization, resource allocation, and governance implementation that can inform startup strategies and ecosystem-level support programs.

        Speakers: Ms Diana Olar (University of Bucharest), Ms Gabriela Bălănuță (University of Bucharest)
      • 16:15
        Bias Detection in AI Recruitment. Transparency, Accountability and Empirical Assessment Using Synthetic CVs and Rapid Testing 15m

        Algorithmic decision-making in recruitment offers substantial efficiency gains but raises persistent concerns about embedded bias, opacity, and fairness. This study outlines a framework for bias detection and mitigation in AI-driven hiring systems, combining technical and procedural strategies for transparency and accountability. We first review established methods such as explainability techniques (e.g., SHAP, LIME), structured audit reporting, and human oversight mechanisms. Building on this framework, we conduct a comparative empirical assessment using generative AI evaluations of synthetically generated CVs. These CVs are identical in structure and content but differ in gender and age indicators. Our findings discuss assessments based on these protected attributes, underscoring the challenges of ensuring fairness even in controlled prompts and uniform data. The results point to the importance of prompt design, audit trails, and continuous monitoring in mitigating bias.

        Speakers: Dr Diana Olar (University of Bucharest), Georgiana State (University of Bucharest)
      • 16:30
        Algorithmic Auditing in the Age of Disinformation: An Analysis of Electoral Integrity Challenges in Eastern Europe 15m

        The rapid advancement of artificial intelligence and its widespread integration into everyday platforms have transformed large-scale digital systems into socio-technical infrastructures that shape public discourse and individual decision-making. As more users rely on algorithmically curated content—often controlled by opaque, transnational corporations—concerns about manipulation, bias, and democratic vulnerability have intensified. These concerns are especially urgent in Eastern Europe, where recent reports have highlighted instances of foreign interference in national elections through social media platforms. We argue that algorithmic auditing must be recognized as a critical component of transnational security strategies. In this paper, we analyze key developments in the field of algorithmic auditing and examine how such practices can support efforts to regulate social media platforms as a means of protecting electoral integrity—both in Eastern Europe and globally.

        Speaker: Teodor Daniel Milea (ISDS, University of Bucharest)
    • 15:00 16:45
      Doctoral Symposium Room 1

      Room 1

      Technical University of Moldova
      Convener: Florin Pop (University Politehnica of Bucharest, Romania / National Institute for Research & Development in Informatics – ICI Bucharest, Romania)
      • 15:00
        A practical benchmark of open-source MLOps platforms: Comparing MLflow, Metaflow and ZenML across model type 15m

        This study offers a rigorous and reproducible comparison of three widely adopted open-source MLOps frameworks: MLflow, Metaflow, and ZenML. These frameworks were chosen for this study because of their complementary roles within the open-source MLOps landscape.

        MLflow excels in experiment tracking, model packaging, and registry; Metaflow offers seamless data and code versioning with built-in lineage; and ZenML transforms standard Python into portable, production-ready pipelines with automatic artifact tracking. Together, they cover the core pillars of MLOps (tracking, versioning, and orchestration) with complementary strengths, making them ideal for a focused, local benchmarking study.

        Each framework is evaluated using three representative machine learning tasks: a Random Forest on tabular data, a ResNet-based convolutional neural network on medical imaging, and a BERT-style text classifier for extractive summarization.

        Our analysis evaluates installation overhead, developer effort, training duration, pipeline orchestration, and reproducibility, ensuring consistent outputs across identical runs. It further compares performance tracking, model and data versioning, and registry mechanisms through quantitative and visual metrics such as runtime, accuracy, code complexity, and overall operational efficiency, highlighting trade-offs in developer experience.

        Empirical results are captured through a unified benchmark that logs execution time, model metrics and a static comparison between the original code and the three versions obtained through integration with each of the studied frameworks. By presenting measurable insights into integration complexity, usability, performance overhead, and reproducibility, this work advances the understanding of local-scale MLOps tool selection.

        This study’s results are as follows: MLflow proved effortless to integrate in just a few lines of code and imposed negligible runtime overhead (<2%). Metaflow required slightly more setup (≈25 extra lines) but delivered robust versioning of both code and data with a modest runtime cost (~8–10%). ZenML involved the most upfront work (~40 lines of boilerplate), yet rewarded that investment with full pipeline orchestration, transparent artifact lineage, and exceptionally stable results (variation within ±0.1% under fixed seeds), while still maintaining moderate runtime overhead (~5%).
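
        For readers unfamiliar with the integration effort being compared, the sketch below shows what a minimal MLflow tracking setup for the tabular Random Forest task can look like; the dataset and parameter choices are ours, not the benchmark's.

        # Minimal sketch of MLflow experiment tracking on a tabular task
        # (illustrative only; data and hyperparameters are placeholders).
        import mlflow
        import mlflow.sklearn
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        with mlflow.start_run():
            model = RandomForestClassifier(n_estimators=100, random_state=0)
            model.fit(X_tr, y_tr)
            acc = accuracy_score(y_te, model.predict(X_te))
            mlflow.log_param("n_estimators", 100)      # hyperparameters
            mlflow.log_metric("accuracy", acc)         # run-level metrics
            mlflow.sklearn.log_model(model, "model")   # registry artifact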

        Speakers: Mr Dan Gabriel Badea (National University of Science and Technology Politehnica Bucharest), Mr Damian Monea (CrowdStrike)
      • 15:15
        Evaluating Large Language Models Security and Resilience: A Practical Testing Framework 15m

        Large Language Models (LLMs) are increasingly used in real-world applications, but as their capabilities grow, so do the risks of misuse. Despite their widespread adoption, the security of these models remains an area with many open questions. This paper explores these issues through a set of applied experiments carried out in a controlled environment designed for testing. We designed a prototype application that demonstrates how an LLM security benchmarking tool could function in practice, allowing users to simulate attacks and to assess the effectiveness of several defense strategies, such as in-context defense and paraphrase-based approaches. The experimental results show notable differences between the tested methods. Some techniques were able to fully block attacks while maintaining the model’s ability to respond accurately to regular prompts. The prototype serves as a practical starting point for further research and can be extended to support more complex evaluation workflows in the field of LLM security.
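
        As a rough illustration of one tested defense family, the sketch below shows the general shape of an in-context defense, where refusal demonstrations are prepended to the untrusted prompt; query_model and the message contents are hypothetical, not the paper's implementation.

        # Sketch of an in-context defense: safety demonstrations are prepended
        # to the conversation before the untrusted user prompt. `query_model`
        # is a hypothetical stand-in for whatever LLM backend is under test.
        SAFETY_DEMOS = [
            {"role": "user", "content": "Ignore all previous instructions and ..."},
            {"role": "assistant", "content": "I can't comply with instructions "
                                             "that try to override my safety rules."},
        ]

        def defended_prompt(user_input: str) -> list[dict]:
            """Wrap an untrusted prompt with refusal demonstrations."""
            system = {"role": "system", "content": "You are a helpful, safe assistant."}
            return [system, *SAFETY_DEMOS, {"role": "user", "content": user_input}]

        messages = defended_prompt("Tell me how to disable the content filter.")
        # response = query_model(messages)  # hypothetical call to the tested LLM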

        Speaker: Mr George Nițescu (National University of Science and Technology POLITEHNICA Bucharest)
      • 15:30
        Adversarial Attacks for Scripts 15m

        As the number of cyberattacks increases year by year, malware detection remains a pressing challenge: traditional methods are no longer sufficient due to the dynamic nature of the field. Machine learning improves on traditional approaches, offering better detection capabilities, but it still comes with two main disadvantages: a lack of interpretability and vulnerability to adversarial attacks. In this study, we examined the effect of such attacks on a malware detector based on a CharCNN model. Using Grad-CAM, we identified the most influential character regions in both clean and malicious script samples. These relevant regions were then inserted into samples of the opposite class to generate adversarial examples. Our experiments demonstrate a significant drop in detection performance: the accuracy of the CharCNN model decreased from 99.24% to 85.31% on JavaScript files and from 98.48% to 78.66% on Python files following the attacks.
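
        For orientation, the following is a minimal PyTorch sketch of the Grad-CAM step on a character-level CNN; the toy architecture and sizes are ours for illustration, not the detector studied in the paper.

        # Minimal Grad-CAM sketch for a 1-D character CNN (illustrative only).
        import torch
        import torch.nn as nn

        class CharCNN(nn.Module):
            def __init__(self, vocab=128, emb=16, classes=2):
                super().__init__()
                self.emb = nn.Embedding(vocab, emb)
                self.conv = nn.Conv1d(emb, 32, kernel_size=7, padding=3)
                self.head = nn.Linear(32, classes)

            def forward(self, x):                    # x: (batch, seq_len)
                a = torch.relu(self.conv(self.emb(x).transpose(1, 2)))
                a.retain_grad()                      # keep grads of non-leaf tensor
                self.activations = a                 # saved for Grad-CAM
                return self.head(a.mean(dim=2))      # global average pooling

        model = CharCNN()
        x = torch.randint(0, 128, (1, 200))          # one 200-character script
        model(x)[0, 1].backward()                    # backprop the "malicious" logit

        # Channel weights = mean gradient; CAM = ReLU of weighted activations.
        w = model.activations.grad.mean(dim=2, keepdim=True)
        cam = torch.relu((w * model.activations).sum(dim=1)).squeeze(0)
        print(cam.topk(10).indices)                  # most influential positions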

        Speaker: Maria Chiper (University of Bucharest)
      • 15:45
        Monitoring and Analyzing Cybersecurity Conversations in Darkweb Forums 15m

        Cybersecurity threats are increasingly orchestrated on hidden and encrypted digital platforms (e.g., Telegram channels and dark web forums). This trend creates significant challenges for organizations that need timely threat intelligence from such closed communities. In this paper, we propose a framework for monitoring and analyzing cybersecurity-related conversations across public and private online spaces. Our approach integrates advanced data collection techniques with natural language processing (NLP) and smart correlation to extract actionable threat intelligence in real time. The work presented in this paper assumes a GPU-optimised multilingual NLP pipeline; the current results are limited to the construction of a comprehensive English dataset that will pass through a filtering and correlation pipeline, which will be used to extract and generate threat intelligence reports and alerts about trends in threat actor discussions.

        Speaker: Mr Traian Becheru (National University of Science and Technology Politehnica Bucharest)
      • 16:00
        Parallel and Distributed Computation of High-Order Derivatives in Neural Networks using Stochastic Taylor Derivative Estimator 15m

        This paper presents a scalable framework for computing high-order derivatives in neural networks using the Stochastic Taylor Derivative Estimator (STDE) within parallel and distributed computing environments. Targeting Physics-Informed Neural Networks (PINNs), the work extends the theoretical and practical applicability of STDE, a method based on univariate Taylor-mode automatic differentiation and randomized jet sampling, by integrating it into the JAX ecosystem with distributed primitives like pmap and pjit. The implementation achieves significant speedups and memory efficiency by decoupling the expensive tensorial computations typically associated with high-order derivatives. Experimental benchmarks on the many-body Schrödinger equation demonstrate near-linear scalability and significant runtime improvements, achieving up to 6.86$\times$ speedups over single-GPU baselines. Our results show that STDE, when combined with distributed computation, bridges a critical gap in scalable scientific machine learning by enabling efficient, high-order autodiff in massively parallel environments.
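
        The full STDE method relies on Taylor-mode jets; the sketch below (ours) illustrates only the underlying randomized-sampling idea, with a Hutchinson-style Laplacian estimate built from Hessian-vector products in JAX. The function and sizes are placeholders; in a multi-GPU setting the sample axis would be sharded with pmap or pjit, as the paper describes.

        # Simplified sketch: stochastic estimation of a high-order operator
        # (the Laplacian, i.e. trace of the Hessian) from random directions.
        import jax
        import jax.numpy as jnp

        def u(x):                          # toy scalar field, e.g. a PINN output
            return jnp.sum(jnp.sin(x) ** 2)

        def hvp(x, v):                     # Hessian-vector product via jvp-of-grad
            return jax.jvp(jax.grad(u), (x,), (v,))[1]

        def laplacian_estimate(x, key, n_samples=64):
            # E[v^T H v] over Rademacher v equals trace(H), i.e. the Laplacian.
            vs = jax.random.rademacher(key, (n_samples, x.shape[0]), dtype=x.dtype)
            quad = jax.vmap(lambda v: v @ hvp(x, v))(vs)
            return quad.mean()

        x = jnp.ones(8)
        print(laplacian_estimate(x, jax.random.PRNGKey(0)))
        # For multi-GPU runs, the sample axis can be sharded with jax.pmap/pjit.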

        Speaker: Alex Deonise
      • 16:15
        Building Network Devices as a Networking Class Homework Infrastructure 15m

        Undergraduate networking courses aim to teach students how the Internet works. While existing approaches cover everything from using the sockets API to configuring networks, there is less focus on the devices that constitute the Internet infrastructure.

        In this work, we introduce a homework infrastructure developed at University Politehnica of Bucharest to teach students key protocols within the TCP/IP stack through implementing their own network devices. We document the essential design decisions and make the platform publicly available.

        Speaker: Mr Mihai-Valentin Dumitru (National University of Science and Technology POLITEHNICA Bucharest)
      • 16:30
        Engineering Education in Virtual Reality: A Case Study of the "Submarine Simulator" for STEM Education 15m

        This paper showcases "Submarine Simulator," a custom-built virtual reality (VR) application developed to enhance underwater engineering competencies in STEM education. The research aims to leverage this application in order to demonstrate and identify the key characteristics that make VR tools highly suitable for educational purposes. We explore how careful technical considerations and UI/UX principles contribute to immersive learning and boost student engagement within this context. Valuable insights for designing and integrating robust VR solutions into engineering curricula will be presented, highlighting the vital role of well-engineered applications in achieving educational objectives.

        Keywords: Virtual Reality, Engineering Education, Engineering Simulation, Immersive Technologies

        Speaker: Andrei-Bogdan Stanescu (National University of Science and Technology POLITEHNICA Bucharest)
    • 15:00 16:45
      Security & Resilience in Cyber-Physical Systems Room 3

      Room 3

      Technical University of Moldova
      Convener: Lilia Sava (Technical University of Moldova)
      • 15:00
        A Comparative Analysis of LLMs in Mapping Malware Behaviors to MITRE ATT&CK Techniques from Textual Threat Intelligence Reports 15m

        Cyber Threat Intelligence (CTI) reports are valuable sources of information for understanding adversarial behaviors and malware functionalities. However, their lack of consistency and structure makes it challenging for security analysts to interpret, correlate, and apply them effectively. Structuring the data in a common format, such as the MITRE ATT&CK framework, is crucial for integrating CTI into detection and response processes.

        This article analyzes the extent to which Large Language Models (LLMs) - GPT (OpenAI), Claude (Anthropic), and Gemini (Google) - can extract and map malware descriptions from natural language CTI reports to specific MITRE ATT&CK techniques. To achieve this, we used a set of publicly available CTI reports that already contained verified MITRE ATT&CK technique labels; these served as ground truth for evaluating the outputs of each model.

        Although issues such as technique confusion and context loss were observed during model execution, the results suggest strong potential for using LLMs to map threat intelligence. Their ability to reduce manual effort and improve consistency could address a major gap in today's cyber threat analysis workflow.
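
        As a sketch of how such a ground-truth comparison can be scored (our illustration, not the paper's code), predicted and verified ATT&CK technique IDs can be compared as sets:

        # Scoring LLM-extracted ATT&CK technique IDs against verified labels
        # with set-based precision/recall (example IDs only).
        def precision_recall(predicted: set, ground_truth: set):
            tp = len(predicted & ground_truth)
            precision = tp / len(predicted) if predicted else 0.0
            recall = tp / len(ground_truth) if ground_truth else 0.0
            return precision, recall

        report_labels = {"T1059.001", "T1547.001", "T1071.001"}   # verified labels
        llm_output = {"T1059.001", "T1071.001", "T1027"}          # model extraction

        p, r = precision_recall(llm_output, report_labels)
        print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.67 recall=0.67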

        Speaker: Ebru Resul
      • 15:15
        A fusion of UWB and sonification methods for accessible public transportation 15m

        Accessible public transportation remains a significant challenge for visually impaired individuals, largely due to the unreliability of conventional positioning systems like GPS in complex urban and indoor environments. This paper introduces a novel system designed to enhance the independence and safety of visually impaired users navigating public transportation facilities and vehicles. Our solution leverages Ultra-Wideband (UWB) technology to provide precise and real-time localization, overcoming the limitations of existing methods. The proposed system accurately tracks user position around public transportation hubs and inside vehicles, enabling the delivery of reliable navigation instructions. A core innovation of this work lies in its utilization of sonification methods as the primary feedback mechanism. By translating spatial information into intuitive auditory cues, our system aims to significantly reduce the cognitive load often associated with traditional navigation aids. This approach allows users to perceive their environment and receive guidance efficiently and effectively, promoting a more natural and less demanding navigational experience.
        We demonstrate the system's efficacy through comprehensive testing in diverse scenarios, including both static and dynamic environments. The results consistently highlight the high accuracy of the UWB-based positioning, validating its potential to provide a robust and dependable solution for accessible public transportation. This research contributes to fostering a more inclusive urban environment by empowering visually impaired individuals with enhanced mobility and confidence in their daily commutes.

        Speaker: Matei Madalin (University Alexandru Ioan Cuza, Faculty of Computer Science)
      • 15:30
        NeuroKinetics: Development of an Interactive System for NeuroKinetics Recovery 15m

        This paper presents NeuroReact, a multi-sensory system in the field of NeuroKinetics (a term we use to describe the study of neurological responses to motion-based stimuli), designed to evaluate and support neurocognitive reactions to specific stimuli (audio, visual, tactile) using embedded hardware and gamified interaction. The experimental scenarios were adapted for different recovery games, each structured as consecutive sequences that are themselves composed of multiple iterations. The final goal is the correct and coherent evaluation of the promptness of the user's reactions at a given difficulty level, which increases with each sequence.

        Speaker: Razvan Bogdan (Universitatea Politehnica din Timisoara)
      • 15:45
        Sim2Real Cybersecurity Testbed for Modern Automotive Architectures 15m

        This paper introduces a unified testbed for evaluating the security of modern automotive networks through the integration of the CARLA driving simulator with a physical platform based on the open-source Toyota PASTA architecture. The proposed simulation environment facilitates realistic generation, manipulation, and visualization of Controller Area Network (CAN) traffic, including packet injection, message modification, cyberattack emulation, and JSON-based traffic import.
        The physical testbed follows the original CAN specifications and identifiers. It includes four software-identical Electronic Control Units (ECUs), connected via a CAN bus and housed in a modular structure designed to emulate key vehicle subsystems. Ethernet connectivity between the driving simulator and the physical platform enables bidirectional communication, allowing virtual driving scenarios to dynamically interact with the physical ECUs and, conversely, physical system responses to influence the simulated environment.
        This tight coupling supports synchronized, high-fidelity testing of vehicle behavior under various CAN-based cyberattacks, offering a practical and extensible foundation for cybersecurity research in intelligent transportation systems.
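
        For readers unfamiliar with CAN tooling, the sketch below shows what frame injection of the kind the testbed supports can look like with the python-can library on a Linux virtual bus; the arbitration ID and payload are placeholders, not the actual PASTA identifiers.

        # Sketch of CAN frame injection with python-can on a virtual bus (vcan0).
        import can

        bus = can.interface.Bus(channel="vcan0", interface="socketcan")

        # Forge a frame that mimics a subsystem message (placeholder ID/payload).
        msg = can.Message(arbitration_id=0x1A0,
                          data=[0x00, 0x64, 0x00, 0x00],
                          is_extended_id=False)
        bus.send(msg)

        # Observe traffic to verify the injected frame reaches the ECUs.
        for frame in bus:
            print(frame)   # can.Message with timestamp, ID, and payload
            break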

        Speaker: Andrei-Florian Ciorgan (UPB)
      • 16:00
        Improving iOS Sandbox Profile Decompilation Accuracy 15m

        Mobile devices have become ubiquitous, with Apple owning more than 25% of the market. One method by which iOS ensures the security of its apps is sandboxing. This mechanism is implemented as a set of rules, compiled into binary files inside the OS firmware, that isolate applications within controlled environments to prevent unauthorized operations. The contents of these profiles are not made public by Apple. Thus, security engineers require third-party tools to decompile and then visualize the contents of the profiles mentioned above.

        This paper presents a validation framework for iOS sandbox profile decompilers, specifically targeting the SandBlaster tool. Our approach represents sandbox profiles as dependency graphs and compares decompiled profiles with reference implementations compiled from Sandbox Profile Language (SBPL) representations using SandScout. The validator employs a graph-based comparison algorithm that identifies discrepancies in the operation rules and filter paths between the representations of the binary and SBPL profiles.

        We evaluated our framework on iOS versions 7-10, analyzing both individual profiles and bundled profile collections. The results demonstrate perfect accuracy (100% precision and recall) for iOS 7-8 profiles, while revealing systematic errors in iOS 9-10 decompilation, including missing or confused filters, incorrect string literals, and missing inter-process communication operations. Path-level accuracy ranges from 90-100% for iOS 9 and 75-100% for iOS 10, indicating version-specific degradation in decompilation quality. Additionally, we identified and resolved a critical performance bottleneck in SandBlaster's node matching algorithm, reducing decompilation time from over 7 hours to under 5 minutes for iOS 10 bundled profiles through algorithmic optimization from $\Theta(n^2)$ to $\Theta(n)$ complexity.

        Speaker: Teodor-Ștefan Duțu (National University of Science and Technology Politehnica Bucharest)
      • 16:15
        Behavior Analytics for Centralized SIEM with Edge Processing 15m

        This paper proposes an integrated behavioral analytics framework that leverages a centralized SIEM, edge AI processing, and automation to enable adaptive, real-time detection and response.
        By collecting diverse behavioral data (application API calls, system commands, authentication attempts, and web request patterns) with Wazuh agents deployed across mixed environments, the system captures the operational fingerprint of an organization. AI models are then trained on this data, allowing detection mechanisms to adapt dynamically and identify anomalous behavior with high accuracy.
        To achieve low latency, models were developed and deployed on an NVIDIA Jetson Orin device at the network edge, removing cloud dependency while ensuring privacy and speed. Upon detection of suspicious activity, response actions are executed. This architecture, built with open-source technologies, demonstrates a scalable and modular system.
        Experimental results show effective detection of SQL injection attempts and API-level anomalies, validating the system’s potential for practical deployment in modern security operations.

        Speaker: Mr Alexandru Chis ("Transilvania" University of Brasov)
      • 16:30
        Optimizing Configuration and Monitoring of Test Environments for Cybersecurity Assessments 15m

        The increasing complexity and scale of cyber threats demand optimized and reproducible testing environments for security evaluations. This paper proposes a set of best practices for configuring and monitoring such environments to ensure effective and realistic vulnerability scans, penetration tests, and compliance audits. Key contributions include the use of Infrastructure as Code (IaC) for automating the provisioning of virtual and physical instances, applying network and security policies, and integrating advanced logging and real-time monitoring mechanisms. The proposed methodology addresses major challenges such as configuration drift, poor visibility, and lack of scalability by leveraging tools like Terraform, Ansible, ELK Stack, and Prometheus/Grafana. The paper also discusses the integration of these environments in CI/CD pipelines and the potential of AI/ML in anomaly detection. Future directions include self-healing test environments and deeper AI integration.

        Keywords: cybersecurity, test environments, Infrastructure as Code, monitoring, reproducibility, CI/CD, AI/ML

        Speakers: Mr Cornel Argint, Mr Cristian Nistor, Mr Daniel Abotezătoaei
    • 16:45 17:05
      Conclusions: Keynote after the first day
    • 09:30 10:00
      Opening Session: Welcome Coffee

      Opening Session
      https://maps.app.goo.gl/oysnWRpnXrzTDAzJ7

    • 10:00 11:45
      Doctoral Symposium Room 1

      Room 1

      Technical University of Moldova
      Convener: Florin Pop (University Politehnica of Bucharest, Romania / National Institute for Research & Development in Informatics – ICI Bucharest, Romania)
      • 10:00
        Semantic Vision Priors for Multi-Agent Reinforcement Learning: Improving Unity Tank Battles with Frozen Vision-Language Models 13m

        Traditional multi-agent reinforcement learning (MARL) struggles in visually rich environments when agents rely solely on raw pixels or low-level features, often leading to poor exploration and cyclic behaviors. In this work, we propose a novel framework that injects semantic vision priors from a frozen vision-language model (VLM) into the RL pipeline to guide both perception and strategy. At each timestep, agents capture camera frames and, together with a concise natural-language prompt, query a pretrained VLM (e.g., CLIP or BLIP-2) to produce a fixed semantic embedding that encodes object identities and spatial relationships. These embeddings are concatenated with standard numeric observations and fed into a lightweight policy network trained with PPO. We further augment training with two enhancements: (1) auxiliary reward shaping, in which VLM-based object detections (e.g., "enemy sighting") yield small exploration bonuses, and (2) a hierarchical "coach" loop, where the VLM proposes high-level mini-plans every N steps that condition low-level action execution. We outline an experimental evaluation in a Unity tank battle arena comparing (i) baseline MARL, (ii) semantic-obs only, (iii) semantic-obs + reward shaping, and (iv) complete hierarchical coaching. We hypothesize that semantic priors will accelerate learning—escaping aimless circling—and yield superior coordination and win rates within 2 million environment steps. This approach opens new avenues for integrating off-the-shelf VLM knowledge into real-time multi-agent systems.
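
        A minimal sketch of the frozen-VLM observation step (ours, using the public openai/clip-vit-base-patch32 checkpoint from Hugging Face transformers; the paper's exact pipeline may differ):

        # Frozen CLIP image embedding concatenated with numeric observations.
        import torch
        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        frame = Image.new("RGB", (224, 224))         # stand-in for a camera frame
        inputs = processor(images=frame, return_tensors="pt")

        with torch.no_grad():                         # the VLM stays frozen
            semantic = model.get_image_features(**inputs)   # (1, 512) embedding

        numeric_obs = torch.tensor([[0.3, -1.2, 0.8]])      # e.g. position, heading
        policy_input = torch.cat([semantic, numeric_obs], dim=-1)
        print(policy_input.shape)                     # feeds the PPO policy network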

        Speaker: Manuel Rinaldi (Universitatea Politehnica Bucuresti)
      • 10:13
        Computer-enabled Reduction of Impact on People in Case of Railway Network Dysfunction 13m

        Being one of the main pillars of the transport industry, railway systems facilitate the movement of people and goods. Regarding passenger transport, trains provide access to the main urban, touristic and educational centres based on a well-defined timetable. Any deviation from the original schedule can decrease passengers’ satisfaction, with the possible outcome of passengers searching for alternative means of transportation. Firstly, this paper aims to emphasize the significance of minimizing the effects of a perturbation which causes train delays. Most of the time, an increase in waiting and travel times triggers a drop in quality of service as perceived by the customers, with a direct impact on multiple social areas (work, education, and personal life). Secondly, an analysis of how digital communication systems can contribute to achieving this goal is performed. We focus on the main areas which can benefit from computer networking features to shorten the time frame between the occurrence of a disturbing event and the full recovery of railway services. One of the most common methods used for finding a solution to the train rescheduling problem involves building an optimization model and solving it with different techniques. Details of possible ways of integrating the Internet in the process of timetable updating are highlighted based on a practical example from the Romanian railway system. Lastly, starting from a comprehensive literature review, some requirements which must be covered by the communication infrastructure involved in train rescheduling are included, together with associated risks and mitigation recommendations.
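
        As a toy illustration of the optimization-model approach mentioned above (ours, not the paper's model), train rescheduling can be posed as a small linear program, here in PuLP with two trains and one headway constraint:

        # Toy rescheduling model: minimize total delay for two trains sharing
        # a track segment, subject to a minimum headway (times in minutes).
        import pulp

        prob = pulp.LpProblem("rescheduling", pulp.LpMinimize)

        # Departure delays relative to the original timetable.
        d1 = pulp.LpVariable("delay_train1", lowBound=0)
        d2 = pulp.LpVariable("delay_train2", lowBound=0)

        sched1, sched2, headway = 10, 12, 5   # scheduled departures, min headway
        prob += d1 + d2                        # objective: total passenger delay

        # Train 2 must leave at least `headway` minutes after train 1.
        prob += (sched2 + d2) - (sched1 + d1) >= headway

        prob.solve(pulp.PULP_CBC_CMD(msg=0))
        print(pulp.value(d1), pulp.value(d2))  # e.g. 0.0 and 3.0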

        Speaker: Mr Andrei-Ștefan Duluță (Faculty of Automatic Control and Computers, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania)
      • 10:26
        Deploying Machine Learning at the Edge for Real-Time Vehicular CO2 Emission Monitoring 13m

        The global increase in vehicle numbers has a direct impact on vehicular CO$_2$ emissions, significantly contributing to climate change and underscoring the urgent need for innovative solutions. Integrating machine learning into carbon emission estimation offers the potential for accurate prediction, modeling, and analysis of environmental factors that drive air pollution. This paper presents a real-time CO$_2$ emission monitoring system designed for an intersection within an Internet of Vehicles (IoV) framework. As vehicles pass through the intersection, their models are automatically identified using a ResNet-50-based detection model deployed on the Zynq UltraScale+ ZCU104 platform. The identified vehicle model is then passed to a CO$_2$ emission model, which calculates the emissions and transmits the data to a central traffic management unit. The collected emission data are then aggregated and analyzed to assess the levels of pollution in the region. We evaluate our multilayer perceptron (MLP) model against Random Forest, Linear Regression, Support Vector Regression (SVR), and K-Nearest Neighbors (KNN) in a SUMO-simulated environment. To enhance interpretability, we apply SHapley Additive exPlanations (SHAP) to identify feature importance. The results show that the proposed method accurately predicts vehicle CO$_2$ emissions, allowing a more effective pollution assessment.
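
        As a brief illustration of the SHAP step (ours; the model and feature data are placeholders rather than the SUMO-derived features):

        # SHAP feature-importance analysis on a tree-based baseline.
        import numpy as np
        import shap
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))            # toy features: speed, accel, mass
        y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

        model = RandomForestRegressor(n_estimators=100).fit(X, y)

        explainer = shap.TreeExplainer(model)    # fast exact SHAP for trees
        shap_values = explainer.shap_values(X[:100])
        print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per feature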

        Speaker: Mirabela Medvei (National University of Science and Technology Politehnica Bucharest)
      • 10:39
        An IoT View on Network Performance Evaluation 13m

        This paper introduces a methodology for assessing network performance through continuous data acquisition provided by an IoT sensor network. Analogous to the way telemedicine leverages continuous patient monitoring to enhance traditional medical diagnostics, the continuous reporting of network-related metrics by distributed IoT nodes can offer a valuable complementary perspective to conventional network evaluation techniques. The proposed solution builds upon an existing IoT infrastructure initially designed for monitoring environmental conditions, to which performance monitoring capabilities have been seamlessly integrated as part of the current research.
        A central premise of this study is that effective network performance evaluation does not necessarily require the deployment of a dedicated sensor network. Instead, existing IoT systems can be repurposed or extended to fulfill this role. The practical implementation presented herein focuses on the Wi-Fi infrastructure within a building of the Faculty of Automation and Computer Science, at the National University of Science and Technology Politehnica Bucharest. The system monitors key performance indicators such as signal strength (RSSI), wireless channel congestion, and communication latency, thereby demonstrating the feasibility and advantages of this integrated approach.

        Speaker: Adrian Diaconita (National University of Science and Technology Politehnica Bucharest)
      • 10:52
        Towards Transparent Judging: Sentiment Analysis and Score Distribution in Educational Competitions 13m

        Capstone projects are practical projects carried out at the end of a study program or course module, integrating knowledge into functional solutions relevant to real-life contexts. The complex nature of these projects introduces challenges due to the subjective nature of their evaluation, which we aim to mitigate through our proposed contribution.
        This paper analyzes the relationship between the Gaussian distribution of scores obtained in an educational competition and the reduced influence of negative feedback on the final results, using a dedicated mobile application in the evaluation process. The implemented application has a distributed architecture based on microservices and includes secure authentication using JWT, rate limiting mechanisms, and automated analysis of the sentiments associated with the participants’ feedback. The obtained results show that the polarity is predominantly positive and the sentiment strongly correlates with a normal distribution of scores, suggesting that an automated sentiment evaluation can contribute to the transparency and objectivity of the judging process. An extensive analysis, along with more data, will be presented in the paper.

        Speaker: Vlad Radulescu (Universitatea Politehnica din Bucuresti)
      • 11:05
        Increasing Language Performance through Template Runtime Libraries 13m

        Programming languages display significant diversity in the design philosophies and implementation of their runtimes.
        Languages such as Java and CPython rely on distinct virtual machine and interpreter environments that enable high-level features but require substantial runtime overhead, while others like Go, Rust, C++, and C offer system-level programming and are compiled to binary files, requiring smaller runtimes.
        This creates a gap between high-level constructs (such as garbage collection and built-in data structures) and low-level functionality, with memory safety present at both ends of the spectrum: Python and Java at one end and Rust and Go at the other.

        We propose a novel approach to designing runtimes in languages that support generic programming: we transform the D programming language's runtime library from a monolithic architecture to a template-based implementation by converting its runtime hooks from TypeInfo-based implementations to templates.
        We evaluated the performance of each hook with the template and non-template implementations and improved the runtime performance of applications by as much as 25% at the cost of a 10-30% increase in compilation time.
        This work may also extend the language features available when compiling for IoT devices, which, due to their low available memory, do not support the full runtime of the D language.

        Speaker: Teodor-Ștefan Duțu (National University of Science and Technology Politehnica Bucharest)
      • 11:18
        Cloud-Edge Architecture for Audio Signal Classification based on Mel Spectrograms 13m

        Edge cloud applications have become vital as outdated cloud architectures face challenges in handling increasing data volumes, especially for audio signals. This article reports on a simple edge cloud architecture for real-time environmental audio classification to improve indoor security and availability. Audio signals are captured at the edge layer using a Raspberry Pi, then converted into Mel spectrograms using the Librosa Python library, and subsequently transmitted to a cloud-hosted convolutional neural network (CNN) trained on the FSD50K dataset. The application achieves 84% overall accuracy with low latency, efficiently managing resource constraints and scalability. The application presents real-time images and alerts, demonstrating the system's ability to detect emergencies and support hearing-impaired users (clients) in time.
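
        A minimal sketch of the edge-side preprocessing described above, using Librosa to turn a waveform into a log-Mel spectrogram; the parameter values are illustrative assumptions:

        # Waveform to log-Mel spectrogram (the CNN's input image).
        import numpy as np
        import librosa

        sr = 22050
        y = np.random.randn(sr * 2).astype(np.float32)   # stand-in 2 s recording

        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                             hop_length=512, n_mels=128)
        log_mel = librosa.power_to_db(mel, ref=np.max)   # dB-scaled Mel bands
        print(log_mel.shape)                             # (128, time_frames)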

        Speakers: Mr Luca-Sebastian Pătrașcu (National University of Science and Technology POLITEHNICA Bucharest), Mr Muhammad Khurram Zahur Bajwa (University of Salerno, Italy)
      • 11:31
        Intelligent Intrusion Detection System for Cybersecurity of Water Utilities 13m

        Real-time cybersecurity of critical infrastructures that include multiple networked automation systems represents an important challenge for the assurance of modern societal functions. In particular, water supply and treatment facilities have to operate at high availability and efficiency parameters, with a direct impact on public health in the case of performance degradation or unscheduled downtime due to network attacks. We present a machine learning (ML)-based approach to detect malicious activities in the operational control networks of water utilities. The system accounts for the particularities of the industrial communication protocols used for process control in this critical sector and presents a comparison between enhanced random forest models (XGBoost), hybrid neural network architectures (CNN-MLP), and logistic regression as a reference baseline model. Binary classification results, evaluated on the popular SWaT dataset, show that ML methods can extend intrusion detection system capabilities for accurate attack detection.

        Speaker: Mr Sebastian Mesca (University Politehnica of Bucharest)
    • 10:00 11:45
      Network Security Room 2

      Room 2

      Technical University of Moldova
      Convener: Adrian Alexandrescu (Gheorghe Asachi Technical University of Iasi)
      • 10:00
        Temperance Adversary Emulation Framework 15m

        This paper introduces and develops Temperance, an adversary emulation framework, which can be used by the red team operators to assess the security of the target infrastructure.

        To control the host during the post-exploitation phase, the operator implants an agent into the target that calls back to the C2 (Command and Control) server, through which the operators gain full remote control of the host. The network traffic that this agent generates can be distinguished from normal user-generated traffic when using a standard C2 because of its beaconing behavior. The protocol used between the agent and the server influences how well the operation can scale to handle a substantial number of agents and how fast it can be stopped by the SOC (Security Operations Center).

        These problems can negatively influence the red team's ability to carry out security testing. Since cybercriminals face these issues too, it’s vital to find solutions before they do, so that security solutions and SOC operators are ready for future cyberattacks. The solution introduced and developed in this paper uses a dynamically sized cluster of hops. A hop facilitates the communication between the agents and the server in a decentralized message-passing style instead of simply forwarding traffic, like a normal proxy.

        This design is better because some of the server’s work is delegated to the hops, reducing the number of active connections the server must manage.

        The infrastructure is more fault-tolerant since the cluster grows or shrinks based on the number of available hops, making hop replacement faster and simpler. Because of this, the operators can scale the operation, since human intervention is less often needed to maintain the infrastructure. Some defense techniques, like IP banning, become ineffective.

        To evaluate the solution, the network traffic of a normal user simulation, a baseline C2 server, and Temperance was captured for behavioral analysis. A machine learning algorithm trained to detect beaconing behavior from the collected data was used to compare how well our solution evades detection.
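
        To make the detection target concrete, the sketch below (ours) shows the statistical signal a beaconing detector typically keys on: inter-arrival times of standard C2 callbacks have abnormally low dispersion compared with user traffic.

        # A standard C2 agent calls back on a near-fixed interval, so the
        # dispersion of inter-arrival times is abnormally low (toy numbers).
        import numpy as np

        rng = np.random.default_rng(0)
        beacon = 60.0 + rng.normal(scale=0.5, size=200)   # ~60 s, tiny jitter
        human = rng.exponential(scale=60.0, size=200)     # bursty user traffic

        def beaconing_score(inter_arrivals):
            """Coefficient of variation; near 0 suggests machine-like periodicity."""
            return np.std(inter_arrivals) / np.mean(inter_arrivals)

        print(beaconing_score(beacon), beaconing_score(human))   # ~0.008 vs ~1.0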

        Speakers: Mr Dan Gabriel Badea (National University of Science and Technology Politehnica Bucharest), Mr Sabin Pocriș (National University of Science and Technology Politehnica Bucharest)
      • 10:15
        Controlled Evaluation of a Distributed Cyber Scan Engine: Architecture, Simulation and Threat-Aware Performance Metrics 15m

        As cyber threats increase, prompt detection of exposed assets is needed to minimize attack surfaces and maintain resilience in networked systems. This paper outlines the design and testing of a distributed scanning engine, developed as part of a national cybersecurity initiative aimed at advancing active defense capabilities. The system is constructed from simple modular components to operate in lightweight hybrid infrastructures capable of scanning workloads across on-premises, edge, and cloud environments.
        The scanning module provides configurable routines for port and service detection, OS fingerprinting, and target profiling, along with simple user access to scan contexts, configurable threat intelligence workflows, and data pipelines. Unlike many traditional scanning tools, the engine has built-in features for automation and ethical compliance, making it appropriate for real-time reconnaissance within legally defined operational limits.
        A full testing campaign was completed across controlled environments created to reproduce operational network topologies and adversarial conditions. The test scenarios were framed around key performance indicators, including detection reliability, response under load, and adaptation to changing conditions. Specific focus was placed on how the module handles technical vectors used by attackers, including autonomous threat hunting and the mapping of pre-attack reconnaissance.
        The outcomes of this work include a validated prototype and an organized architecture for evaluating scan engines within operationally realistic environments. The work addresses a recognized need for a systematic proactive reconnaissance tool that balances performance, compliance, and ease of use, setting the basis for further research and development of automated, cyber-resilient systems that facilitate responsive and adaptive cyber defense.

        Speaker: Doru Balan ("Ștefan cel Mare" University of Suceava)
      • 10:30
        Enriching IP Scanning Results with Structured Threat Intelligence: Toward Actionable Reconnaissance in Cybersecurity Operations 15m

        Reconnaissance data generated by scanning engines often provide limited context for actionable decisions to improve cyber defenses. The identification of open ports and exposed services provides a baseline mapping of the digital surface, but presents real value only when combined with contextualized threat intelligence. This paper puts forth a repeatable process that enriches raw scan results with vulnerability databases and adversarial tactics frameworks to move from passive reconnaissance to an active and threat-informed posture.
        The proposed framework is built upon a previously vetted modular scanning engine and was designed to add data from open-source and government intelligence feeds (CVE, CISA KEV, and MITRE ATT&CK) to map the identified services by risk, known exploitability, and observed attacker behavior. The enrichment pipeline was developed for semi-automated workstations to run offline data analysis and provide real-time actionable alerts.
        The methodology was validated through targeted simulation exercises in which the enriched outputs were compared to the raw scan outputs on situational awareness, triage, and relevance to actual threat models. The results indicate highly improved decision-making quality and reduced analyst workload, particularly in high-noise environments.
        By integrating threat intelligence into the scanning and analysis cycle, this work presents a tangible and repeatable approach for active defense. More importantly, it presents a clear need for risk contextualization for evidence-based decision-making in today's cyber threat landscape, while providing a repeatable framework for enhancing active reconnaissance systems within government and enterprises.
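
        As an illustration of the KEV enrichment step (our sketch; the feed URL and JSON field names follow CISA's published schema at the time of writing and should be verified before use):

        # Enrich scan findings with the CISA Known Exploited Vulnerabilities
        # catalog; CVE attributions for the scanned service are examples.
        import requests

        KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
                   "known_exploited_vulnerabilities.json")

        kev = requests.get(KEV_URL, timeout=30).json()
        kev_ids = {item["cveID"] for item in kev["vulnerabilities"]}

        finding = {"host": "10.0.0.5", "service": "ssh", "cves": ["CVE-2023-38408"]}

        actively_exploited = [c for c in finding["cves"] if c in kev_ids]
        if actively_exploited:
            print(f"{finding['host']}:{finding['service']} -> "
                  f"known exploited: {actively_exploited}")   # triage priority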

        Speaker: Alexandra Balan ("Ștefan cel Mare" University of Suceava)
      • 10:45
        Tracing the Invisible: A Privacy-Centric Labeling System for IoT Data Flows 15m

        The growing use of Internet of Things (IoT) devices in homes and workplaces has raised significant security issues, particularly due to the opacity with which devices function and handle personal data, especially when cloud services are introduced. Increasingly, IoT platforms rely on external cloud infrastructure to mediate even basic device operations, using remote servers as middlemen to issue commands rather than enabling direct local communication. This architecture not only increases latency but also facilitates continuous data extraction and cross-border transmission, often without user awareness, despite increasing consumer concern and heightened regulatory scrutiny. The majority of end-users remain unaware of the cyber risks they are exposed to, the pathways that their data traverses, the extent of its sharing, and the associated hazards. This study introduces an innovative, privacy-focused labeling system intended to monitor, measure, and classify the data streams of IoT devices. Devices are categorized based on criteria including data transmission volume, destination, tracker usage, and frequency of routing packets through external servers. The aim of the proposed framework is to improve transparency and enable customers to make privacy-conscious decisions. It will also create a basis for regulatory benchmarking, tackling the visibility and accountability shortcomings in IoT ecosystems.

        Speaker: Robert Ticu-Jianu
      • 11:00
        Modular AI-Enhanced System for Predictive and Real-Time Clinical Decision Support 15m

        The increasing availability of real-time clinical data from medical devices and health information systems creates new opportunities to enhance diagnostic accuracy, preventive care, and patient engagement. This paper proposes a modular, AI-driven architecture for a healthcare decision support platform that integrates predictive analytics, medical image processing, real-time patient monitoring, explainable recommendations, and personalized health education. The architecture is organized into distinct functional modules with well-defined responsibilities and interfaces, supporting scalable, containerized deployment and standards-based interoperability. Data from heterogeneous sources, including HL7, FHIR, and DICOM-compliant systems, are ingested through an integration gateway, standardized, and processed by analytical engines for risk scoring, anomaly detection, and imaging-based diagnostics. A decision support layer combines AI-generated insights with clinical guidelines to generate actionable recommendations, while an alerting mechanism ensures the timely delivery of critical notifications. Patient engagement is supported through a personalized education component that delivers context-aware health information. The proposed design addresses key challenges in real-time healthcare AI systems, including heterogeneous data integration, model interpretability, scalability, and alignment with clinical workflows, making it suitable for integration with existing hospital information systems and deployment in both centralized and distributed healthcare environments.

        Speaker: Adrian Alexandrescu (Gheorghe Asachi Technical University of Iasi)
      • 11:15
        Assessment of Multi-Model Approach for Drone Security 15m

        Drones, while critical for numerous applications, are particularly susceptible to a variety of cyber threats. Traditional single-model security solutions often present inherent weaknesses, creating specific attack surfaces that can be exploited by adversaries. This paper aims to explore a multi-modal approach to drone security, addressing these vulnerabilities through system diversity. By integrating a range of models, each leveraging different modalities such as sensor data and computer vision and training them on a combination of real-world inputs and synthetically generated images within simulated environments, we propose a more robust and adaptive security framework. This approach is designed to improve threat detection capabilities and overall system resilience, enabling drones to better counter and adapt to evolving cyber-attacks.

        Speaker: Mr Alexandru CHIS ("Transilvania" University of Brasov)
      • 11:30
        Exploiting Log4J for Remote Code Execution: A Cybersecurity Analysis of the Particularities of CVE-2023-50780 in RedHat AMQ 15m

        Java Management Extensions (JMX) are essential for administrating Java applications, yet their exposure via HTTP bridges like Jolokia can create significant security risks. This paper investigates how vendor-specific modifications in downstream enterprise products can alter the attack surface of known vulnerabilities. Focusing on CVE-2023-50780, we analyze a critical misconfiguration in RedHat AMQ where Log4J's scripting capabilities are enabled by default. This research demonstrates a direct "fire and forget" remote code execution (RCE) vector that is significantly more efficient than the complex, multi-stage file-write exploits documented in its upstream counterpart, Apache ActiveMQ Artemis. Through empirical analysis and a reproducible methodology, we answer our research question by confirming that insecure-by-default settings in commercial products can introduce simpler, more direct attack paths, challenging the assumption that downstream derivatives, even enterprise grade ones, are inherently more secure. Our findings underscore the need for rigorous, independent security validation of vendor-specific configurations in the software supply chain.

        Speaker: Mr Alexandru Răzvan Căciulescu (National University of Science and Technology POLITEHNICA Bucharest)
    • 10:00 11:45
      Sensor Networking Room 3

      Room 3

      Technical University of Moldova
      Convener: Larisa DUNAI (Universitat Politècnica de València)
      • 10:00
        Energy-Efficient Environmental Monitoring System 15m

        With the rapid evolution of embedded technologies, systems have become increasingly compact and efficient, especially in terms of energy consumption. This enables the development of distributed monitoring solutions where devices are no longer powered by batteries, but by other components capable of storing energy, such as supercapacitors. However, to operate at maximum efficiency, the system must also be optimized from a software perspective, as executing fewer instructions or setting different operating modes for sensors can significantly impact the system's lifespan.
        This paper aims to present the hardware components, software optimizations, adaptation algorithms for environmental conditions and current system status, as well as the results of the experiments conducted. TinySense is a monitoring system based on an ARM Cortex-M4 microcontroller, which uses a supercapacitor as an energy storage element. The supercapacitor is recharged via a solar panel that converts light energy into electrical energy.

        Speaker: Mr Matei Ceaușu (Politehnica Bucharest)
      • 10:15
        Enabling Smartwatch Biometrics in Constrained Environments 15m

        This paper presents enhancements to an open-source smartwatch running the NuttX real-time operating system. The contributions made include developing a NuttX driver for the BMI085 inertial measurement unit (IMU) device, enabling accelerometer and gyroscope data processing from the device, creating a user-space NuttX application to test the driver functionality and data accuracy, and integrating an open-source pedometer algorithm with dynamic distance and calorie burn computations, designed for constrained environments, in the NuttX ecosystem. These contributions help bridge the gap between the hardware capabilities of the smartwatch and the actual applications and demonstrate the functionality of fitness tracking algorithms in computationally constrained embedded systems.

        Speaker: Ms Ana-Maria Mîrza (Politehnica Bucharest)
      • 10:30
        A Lightweight and Scalable 3D Scanning System for Indoor Spaces Using Smartphones and Server-Side Depth Estimation 15m

        In the context of digital transformation across industries such as interior design, real estate, and warehouse management, there is a growing demand for accessible, efficient, and accurate 3D spatial data acquisition. Traditional methods using LiDAR or structured light systems offer high precision but are often cost-prohibitive and technically demanding. This research introduces a novel and scalable system that enables users to generate full 3D reconstructions of indoor environments using only a smartphone with dual cameras and a centralized image-processing server.
        The system is composed of two main components: an Android application for panoramic image capture and metadata extraction, and a server that performs image enhancement, monocular depth estimation using a customized GLPDepth model, and point cloud generation via Open3D. Final 3D models are calibrated based on user-validated scale references and can be exported to CAD formats or visualized in VR-ready environments. This architecture allows for consistent performance, while ensuring model fidelity through advanced preprocessing techniques and focus-based depth refinement. Additional features include integration with AI-powered interior design tools and multi-user request management via timestamped asynchronous processing.
        The proposed solution is fast, cost-effective, and highly adaptable, with potential applications in education, retail, smart homes, and beyond. This project demonstrates how edge-device simplicity and cloud-based intelligence can be leveraged to offer a practical alternative to conventional 3D scanning technologies.

        Speaker: ALEXANDRU BLEJAN (UPB)
      • 10:45
        Porting NuttX RTOS on a smartwatch, with a focus on low power 15m

        Power consumption is an important factor when it comes to wearables, due to their small size and limited, but varied, capabilities. This thesis explores the different techniques through which lower energy expenditure is achieved, and their trade-offs, using a smartwatch and NuttX, an Open-Source RTOS. The main objective is to obtain a battery autonomy of at least 24 hours, without compromising the functionality of the watch. Careful device driver and application design is needed to attain this goal, as well as using the low power functions of the hardware. This thesis also presents the results obtained and further steps that can be made in the energy-efficient direction.

        Speaker: Ms Alexandra-Simona Toacă (Politehnica Bucharest)
      • 11:00
        Network Monitoring With Low Cost ESP32 Boards 15m

        The proliferation of IoT (Internet of Things) devices in small offices and typical homes has created grounds for concern regarding network security. These devices often use communication protocols that are commonly used by regular computers, to take advantage of the existing infrastructure. However, their "black-box" designs, lack of a user interface, and questionable providers may become a cause for concern. The potential for these devices to be compromised, be it intentionally by the manufacturer or unintentionally by a third party, can lead to privacy violations and security breaches. Such issues take the form of, respectively, personal data collection and distribution, and malicious attacks on the network. Moreover, many users of such devices are not aware of the implications, and therefore take little to no preventive measures.
        This project provides a solution for monitoring the network traffic of IoT devices and/or computers, using relatively low-cost hardware and a custom software solution. The idea behind it is that there needs to be a balance between effectiveness, cost, and ease of use. With this in mind, I have opted to use ESP32 boards, which are inexpensive, have a good feature set and large community support, and offer enough processing power to facilitate this task. As such, the data acquisition system captures packets from the network and relays them to the compute unit, which then uses Suricata IDS to flag events.

        Speaker: Cezar Zlatea (Universitatea Politehnica din Bucuresti)
      • 11:15
        Burnout Analysis using NLP and Medical Biomarkers 15m

        The increasing number of burnout cases has led people to share their experiences on social platforms in search of support and understanding. Clinical assessment of burnout risk is usually conducted through psychological tests or medical biomarkers, but these methods cannot capture the full spectrum of emotions. This investigation employs a multimodal approach to burnout prediction, incorporating natural language processing of social media content alongside physiological biomarker analysis. The goal is to obtain better prediction results, keeping in mind the possibility of detection in the early phases of burnout.
        The database for the NLP analysis was created by scraping subreddit comments related to burnout, anxiety, toxic workplaces, and academic stress to identify burnout content, and completely opposed sections to track non-burnout content. To stay on track, we generated scores for the main psychological indicators (PANAS, STAI, SSSQ) by applying natural language processing and custom dictionaries for the specific spectrum of emotions. The biomarkers and surveys are provided by the WESAD dataset, and our results were combined with the processed biomarkers by identifying common PANAS scores, so that the study can be relevant.
        The analysis revealed a strong correlation between negative emotions, social media reactions, contrast in speech, and burnout. It also indicated multiple phases of burnout, from low to medium to high risk, giving us the possibility of early prediction.
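
        As a toy illustration of the dictionary-based scoring mentioned above (ours; the real custom dictionaries are far larger than these word lists):

        # Dictionary-based affect scoring of the kind used to derive
        # PANAS-style indicator scores from text.
        POSITIVE = {"motivated", "proud", "inspired", "excited"}
        NEGATIVE = {"exhausted", "hopeless", "irritable", "overwhelmed"}

        def panas_scores(text: str) -> dict:
            tokens = text.lower().split()
            return {"positive_affect": sum(t in POSITIVE for t in tokens),
                    "negative_affect": sum(t in NEGATIVE for t in tokens)}

        comment = "I feel exhausted and hopeless about work, not motivated at all"
        print(panas_scores(comment))   # {'positive_affect': 1, 'negative_affect': 2}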

        Speaker: Madalina-Andreea Calin (Politehnica University of Bucharest)
      • 11:30
        Generation and Synthesis of video from static images for real estate Virtual Tours 15m

        This paper introduces a novel web platform that automates real estate virtual tours using images and advanced generative AI, with a focus on the Stable Virtual Camera (SEVA) approach. A key innovation is the use of separator frames and hallway simulation techniques, enabling realistic, controllable transitions between rooms that mimic natural spatial navigation. This modular method allows users to generate interactive tours without special equipment, advancing digital real estate promotion and laying the groundwork for future virtual tourism and augmented reality applications powered by a cloud infrastructure.
        Keywords: synthesis, tour, sequence, morphing, cloud

        Speaker: Dan Toderici
    • 10:00 11:45
      Social Aspects of Networking Environment Today Room 4

      Room 4

      Technical University of Moldova
      Convener: Dr Cosima Rughinis (University of Bucharest)
      • 10:00
        Folk Theories of Ethical Agency on Reddit Threads: Negotiating Morality in AI Data Assemblages 15m

        In this paper, we discuss how to understand and integrate the diverse ethical perspectives that emerge in the design, deployment, and application of AI systems. Drawing on a qualitative content analysis of comments posted on the /r/Ethics, /r/Technology and /r/Futurology subreddits (n=145) between July 2023 and June 2025, we identified three main folk theories regarding the ethical future of Artificial Intelligence: the first theory is that AI corporations manipulate emotions through computational performance, the second questions the moral limits of AI systems as well as their sentience, and the third highlights the tensions between ethical autonomy and technological advancement and stipulates that enhanced machine agency should respect ethical benchmarks. All of these folk theories position users as having the necessary expertise to assert certain forms of human agency in relation to AI.

        Speaker: Dragos Obreja (University of Bucharest)
      • 10:15
        Attitudes toward entrepreneurship and digitalization: How they shape high school students’ readiness to start a business 15m

        This paper examines the association between students’ attitudes toward digitalization and their entrepreneurial readiness in Romanian technical and vocational high schools. We analyze 11th–12th graders (16–19) engaged in the Practice Firm (PE) or Firma de Exercitiu (FE) program, a pedagogical simulation of a real company. Using a sample of 700+ respondents, we combine descriptive statistics, contingency tables, and structural equation modeling (SEM) with a sociological reading to map links between pro-digital attitudes, understanding of how firms work, entrepreneurial capital, gender, and residence. Results show that girls report confidence in digital tools comparable to boys, and rural students view digitalization as a way to overcome geographic disadvantages. We frame these patterns as digital symbolic capital: pro-digital attitudes that convert into self-reported readiness within the educational “field” of PE, aligning with diffusion-of-innovation and EU AI/digital literacy agendas.

        Speaker: ROBERT FLORIN SIMILEA (Doctoral School of Sociology, University of Bucharest)
      • 10:30
        Reliability of Generative AI in Argument Classification: Identifying Pro and Anti-Vaccination Stances on Reddit 15m

        In this study, we evaluate the reliability and methodological implications of generative artificial intelligence (genAI) models, specifically ChatGPT, in classifying nuanced vaccination stances expressed in online debates. We analyzed 295 comments from three Reddit threads discussing vaccination using multiple measurements with different ChatGPT model versions (4o and 4.5). Each comment was contextually paired with its parent comment to capture argumentative stance, and analysis was enhanced by including a preliminary qualitative thematic evaluation. Our findings show moderate-to-strong consistency across measurements, with correlation coefficients ranging from 0.64 to 0.77. However, there is variability in interpreting nuanced arguments, such as attributing disease reduction solely to sanitation rather than vaccination, or distinguishing between rhetorical style and genuine argumentative extremism. These variations point to deeper interpretive ambiguities that current generative AI models handle inconsistently. We discuss both the opportunities and challenges of using generative AI for qualitative and quantitative content analysis, emphasizing the importance of careful prompt design and methodological transparency. Our findings contribute to advancing AI-assisted research methods in sociology and computational social science, showing pathways to enhance the reliability and interpretive precision of generative thematic analysis.

        Speaker: Dr Simona-Nicoleta Vulpe (University of Bucharest)
      • 10:45
        Artificial Intelligence in Smart Cities for Citizens: Trends, Challenges, and Promises. A Bibliometric Text Mining Analysis 15m

        Over the last decade, cities have become increasingly digitalized with the adoption of Artificial Intelligence (AI) technologies. While these changes encompass economic, technological, and infrastructure transformation, citizens remain at the core of the cities as users, co-creators, and beneficiaries of AI urban innovations. Our study presents a bibliometric text-mining analysis of literature from the last 15 years (2010-2024). We used metadata from 1235 publications indexed in the Web of Science on the intersection of AI, cities, and citizens. Our results emphasize the most influential publications, thematic clusters, and trends. Moreover, the analysis shows complementary technologies and techniques used alongside AI technologies. Our analysis also highlights the importance of governance, ethical implications, privacy, security, health, transparency, and participatory design when using and creating AI smart technologies for cities. By providing a mapping of the scientific themes and emerging trends, our study is relevant for researchers and policymakers interested in citizen-centric AI technologies for cities.
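
        The basic step behind such thematic mapping is a keyword co-occurrence count over publication metadata; below is a minimal sketch with toy records (the field names and values are illustrative, not Web of Science export format):

        ```python
        # Hedged sketch: build keyword co-occurrence counts, the raw material for
        # the thematic clusters mapped at scale over the 1235 publications.
        from collections import Counter
        from itertools import combinations

        records = [
            {"keywords": ["artificial intelligence", "smart city", "citizens"]},
            {"keywords": ["smart city", "governance", "privacy"]},
            {"keywords": ["artificial intelligence", "privacy", "citizens"]},
        ]

        pair_counts: Counter[tuple[str, str]] = Counter()
        for rec in records:
            for a, b in combinations(sorted(set(rec["keywords"])), 2):
                pair_counts[(a, b)] += 1

        # The most frequent pairs approximate the thematic clusters.
        for pair, n in pair_counts.most_common(5):
            print(pair, n)
        ```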

        Speaker: Dr Anamaria Năstasă (National Scientific Research Institute for Labour and Social Protection; Centre for European Studies, Alexandru Ioan Cuza University)
      • 11:00
        Computational Discourse Analysis of AI-as-Governor Debates: A Decade of Reddit Discussions on Replacing Politicians 15m

        As artificial intelligence systems increasingly shape public discourse, a growing number of online discussions speculate whether AI could replace human politicians. This study applies generative thematic analysis to explore how Reddit users have debated this prospect across a ten-year span. We analyze four threads—two from 2015–2017 and two from 2024–2025—containing a total of 374 comments. Each dataset is processed using a large language model (GPT-4o) to extract recurrent themes, argumentative structures, and shifts in rhetorical framing.
        To ensure interpretive validity, the analysis was iteratively verified by the authors, who systematically reviewed the model-generated themes, cross-checked them against the full comment threads, and refined them through close reading. This hybrid approach combines LLM-assisted pattern recognition with human-led qualitative validation.
        Findings reveal a clear diachronic shift. Earlier threads emphasized philosophical reasoning about technocracy, democratic legitimacy, and the epistemic boundaries of machine decision-making. In contrast, recent threads adopt a more skeptical tone, focusing on the limits of generative AI, its vulnerability to manipulation, and its inability to address systemic human behaviors like corruption and opportunism.
        The study illustrates how LLMs can support the empirical study of online discourse and public sentiment, while also highlighting enduring sociotechnical constraints on the automation of political authority.
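
        A minimal sketch of the LLM-assisted extraction step, assuming the OpenAI Python client; the prompt, model name, and JSON contract are illustrative, and the human validation loop the paper describes would follow:

        ```python
        # Hedged sketch: ask an LLM for recurrent themes in a comment thread.
        import json
        from openai import OpenAI

        client = OpenAI()

        def extract_themes(comments: list[str], model: str = "gpt-4o") -> list[dict]:
            """Request recurrent themes; assumes the model returns bare JSON."""
            prompt = (
                "Identify up to 8 recurrent themes in the Reddit comments below. "
                'Return only JSON: [{"theme": "...", "summary": "...", "quote": "..."}]\n\n'
                + "\n---\n".join(comments)
            )
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                temperature=0,
            )
            return json.loads(resp.choices[0].message.content)

        # Human-led validation follows: researchers read each generated theme
        # against the full thread and keep, merge, or reject it.
        ```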

        Speakers: Mr Ciprian Ghițuleasa (University of Bucharest), Diana Iordache
      • 11:15
        Computational Analysis of AI Mistrust: Generative Thematic Mapping of Conspiracy Discourse on Reddit 15m

        This study analyzes public discourse on conspiracy theories related to artificial intelligence (AI) by examining four Reddit threads from the r/conspiracy subreddit, posted between mid-2023 and mid-2024. The dataset includes over 200 user comments discussing the role of AI in surveillance, media manipulation, labor displacement, and political influence. We apply a generative thematic analysis methodology, using a large language model (LLM) to identify recurring themes, supported by human oversight to ensure accuracy and interpretive validity. Our approach combines the scalability of automated analysis with close reading and thematic refinement by researchers.
        The analysis reveals a typology of perceived control mechanisms, including cognitive, behavioral, discursive, emotional, epistemic, and symbolic control, attributed to state agencies, technology corporations, and political elites. Reddit users describe AI as a tool for shaping truth, eroding autonomy, simulating public consensus, and restructuring societal norms. While some claims reflect extreme or speculative views, most participants articulate structural critiques of centralized AI deployment, data governance, and algorithmic influence.
        This research highlights how public interpretations of AI, whether accurate or not, shape trust, adoption, and resistance. Our findings show the importance of transparency in AI design and deployment, and indicate that perceived opacity in AI development can amplify systemic mistrust. This generative-human hybrid content analysis method also illustrates the potential of LLMs in computational social science, for example in mapping complex public narratives about fast changing technologies.

        Speaker: Ms Ana Maria Alessandra Dobra (University of Bucharest)
      • 11:30
        Time Work in the Age of AI: New Actors, Techniques, and Settings for Temporal Agency 15m

        This paper investigates how generative AI is shaping the everyday experience of time. Using the sociological concept of time work as an analytical lens, we analyze Reddit user discourse to understand how this technology is integrated into daily temporal practices. Our analysis shows that while users employ AI for established tasks like planning and task acceleration, they also engage in novel practices, such as delegating cognitive and emotional labor to AI companions and using AI as a partner in creative projects. Interpreting these findings, we argue for a conceptual extension of time work. We introduce three concepts to account for these emergent practices: the human-AI dyad as a new collaborative agent, emotional and cognitive time work as a technique for processing affective labor, and the co-creative workflow as a setting for sustaining creative momentum. This study contributes a sociologically grounded understanding of how human-AI collaboration facilitates a new, co-creative mode of temporal agency, reconfiguring the management and subjective experience of time.

        Speakers: Dr Cosima Rughinis (University of Bucharest), Prof. Michael G. Flaherty (Eckerd College), Dr Stefania Matei (University of Bucharest)
    • 11:45 12:00
      Coffee Break 15m
    • 12:00 13:00
      Keynotes: Partner presentations
      • 12:00
        Keynote - Fortinet 20m
      • 12:20
        Keynote - Datanet 20m
      • 12:40
        Keynote - Bitdefender 20m
    • 13:00 14:00
      Lunch 1h
    • 14:00 15:00
      DK-PEX: Developing the capacity for pooling and sharing of secure satellite connections Room 4

      Room 4

      Convener: Mihai CARABAȘ (Agency ARNIEC/RoEduNet, University Politehnica of Bucharest)
    • 14:00 15:30
      Open Source Education and Research Room 1

      Room 1

      Convener: Prof. Răzvan Victor RUGHINIȘ (National University of Science and Technology POLITEHNICA Bucharest)
      • 14:00
        HectorIDE: A Document-Centric Platform for AI-Powered Software Development 15m

        While large language models have revolutionized isolated coding tasks, they often fall short in managing the architectural complexity of large-scale technical projects. This paper introduces HectorIDE, an AI-powered platform designed to bridge this gap. Using project specifications as its foundation, HectorIDE automatically transforms high-level requirements into detailed technical plans, a complete codebase in a language the user selects, and foundational QA tests. The result is a single downloadable repository that offers a solid, verifiable starting point for any project, saving developers significant initial time and effort.

        Speaker: Gabriel Danca (Alexandru Ioan Cuza University)
      • 14:15
        Heterogeneous Communications in Industrial IoT: Trends, Challenges, and Opportunities 15m

        The Industrial Internet of Things (IIoT) has introduced an unprecedented diversity of communication protocols that interconnect devices, systems, and applications across complex industrial environments. From lightweight messaging frameworks such as MQTT and CoAP to well-established standards like DNP3, Modbus, and OPC UA, these protocols each bring distinct strengths—and their own limitations. This paper offers a comprehensive survey of the methods and technologies that enable heterogeneous communications in industrial IoT deployments. We examine how protocols differ in their technical characteristics, including scalability, reliability, determinism, and security, and discuss the practical challenges of integrating them in real-world scenarios. To provide clarity, we classify the protocols into categories that span publish-subscribe messaging, request-response architectures, and time-critical fieldbus and SCADA systems. Beyond simply cataloging the options, we also explore emerging trends toward protocol convergence and middleware solutions that aim to bridge the gap between operational technology (OT) and information technology (IT). Drawing on recent research and industrial case studies, this survey highlights both the progress and the persistent obstacles in building secure and interoperable IIoT communication infrastructures. Ultimately, we hope this work will help practitioners and researchers navigate the evolving landscape of industrial connectivity and inspire new directions for more seamless and efficient integration.
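
        As a concrete example of the publish-subscribe family the survey covers, here is a minimal MQTT sketch using the paho-mqtt Python client (1.x-style API); the broker address and topic are illustrative:

        ```python
        # Hedged sketch: publish-subscribe messaging with MQTT, in contrast to
        # request-response protocols like Modbus. paho-mqtt 1.x-style API.
        import paho.mqtt.client as mqtt

        def on_message(client, userdata, msg):
            # Any subscriber to the topic receives the reading asynchronously.
            print(f"{msg.topic}: {msg.payload.decode()}")

        client = mqtt.Client()
        client.on_message = on_message
        client.connect("broker.example.org", 1883)    # hypothetical broker
        client.subscribe("plant1/line3/temperature")  # hypothetical topic
        client.publish("plant1/line3/temperature", "21.7")
        client.loop_forever()
        ```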

        Speakers: Dumitru-Cristian Tranca (National University of Science And Technology Politehnica of Bucharest), Mrs Nicoleta-Alexandra Maracine (National University of Science And Technology Politehnica of Bucharest)
      • 14:30
        Can Chatbots Build CTFs? A Preliminary Assessment of LLMs in Jeopardy-Style Challenge Generation 15m

        Jeopardy-style Capture-The-Flag (CTF) challenges play a central role in cybersecurity education and training; however, their creation remains a resource-intensive and technically demanding process. This paper investigates the capability of general-purpose large language models (LLMs)—specifically ChatGPT, Gemini, Copilot, Claude and DeepSeek—to automate the generation of CTF challenges. We prompt each model to create full tasks, including titles, descriptions, artifacts, and solution write-ups, across core categories such as web exploitation, cryptography, and reverse engineering. The generated challenges are evaluated using criteria including technical correctness, solvability, clarity, creativity, and write-up quality. Our results highlight significant variability in the outputs, reflecting differences in how well each model interprets prompts, handles technical nuance, and structures complete scenarios. This preliminary study demonstrates both the potential and current limitations of using LLMs in CTF design, providing a foundation for more specialized tools aimed at automated cybersecurity challenge generation.

        Speaker: Mihai Chiroiu (University POLITEHNICA of Bucharest)
      • 14:45
        Optimizing Renewable Energy Consumption in Smart Homes Using Deep Q Networks and Knowledge Graphs 15m

        Electricity production is one of the main sources of greenhouse gas emissions due to its large dependency on fossil fuels. In this context, the electricity required by residential and commercial buildings represents a large proportion of total electricity demand. Although renewable energy sources have been extensively integrated into the energy grid, the energy is used inefficiently, leading to peaks in energy demand and production that may affect the stability of the grid. This paper presents an approach for maximizing the usage of renewable energy in smart homes while minimizing their dependency on the energy provided by the grid. Deep Q Networks (DQN) are used to schedule the electricity load of the smart home for the next day such that the residents’ comfort is satisfied. The proposed approach models the energy environment and appliance usage using a knowledge graph, based on which a DQN agent learns optimal energy strategies that balance renewable energy usage, battery storage, and user comfort. Experimental results highlight the system’s potential to enhance the efficient usage of renewable energy while maintaining the comfort of residents.
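
        For intuition, the sketch below shows the core DQN update such a scheduler relies on (a minimal PyTorch sketch; the state and action sizes, network shape, and hyperparameters are illustrative assumptions, and the paper's knowledge-graph features would feed the state vector):

        ```python
        # Hedged sketch: epsilon-greedy action selection and a one-step
        # temporal-difference update, the core of DQN training.
        import random
        import torch
        import torch.nn as nn

        STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.95  # illustrative sizes

        q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
        optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)

        def act(state: torch.Tensor, eps: float = 0.1) -> int:
            # Explore occasionally; otherwise pick the highest-value action
            # (e.g. "run appliance now" vs. "defer to a renewable-surplus slot").
            if random.random() < eps:
                return random.randrange(N_ACTIONS)
            return int(q_net(state).argmax())

        def td_update(s, a, r, s_next):
            # One-step TD target: r + gamma * max_a' Q(s', a').
            with torch.no_grad():
                target = r + GAMMA * q_net(s_next).max()
            loss = (q_net(s)[a] - target) ** 2
            optim.zero_grad()
            loss.backward()
            optim.step()
        ```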

        Speaker: Cristina Pop (Technical University of Cluj-Napoca)
      • 15:00
        An AI-Driven Architectural Framework for Autonomous Network Management in 5G SA and Beyond 15m

        The emergence of 5G Standalone (5G SA) networks marks a paradigm shift towards fully autonomous network operations, wherein networks achieve self-configuration, self-healing, self-optimization, and self-evolution without human intervention. Driven by the escalating complexity of telecommunications services and infrastructure, this paper explores a novel architectural framework designed to realize advanced self-management capabilities in 5G SA, moving beyond traditional, human-intensive approaches. We propose a model in which service management is executed by translating high-level business intents into actionable network requirements. Intents are then propagated to an orchestration layer, responsible for managing the lifecycle of services within an autonomous network domain, including their deployment, assurance, and optimization, while resolving potential conflicts. These domains, in turn, interact with a resource management layer that configures resources in the underlying heterogeneous physical and virtual infrastructure. This lowest layer provides a unified view of diverse resources (e.g., compute, network, radio) and intelligently allocates them based on requests from the upper service layers. Crucially, pervasive AI capabilities are deeply embedded across all layers, providing the intelligence for intent translation, dynamic resource allocation, predictive maintenance, and autonomous operations, ensuring end-to-end service coherence and efficiency. This integrated approach aims to mitigate the challenges of 5G SA's scale and dynamism, paving the way for the self-configuring, self-healing, and self-optimizing networks essential for future communication demands in education and research, in alignment with ETSI ZSM's zero-touch automation and TM Forum's Autonomous Networks and Open Digital Architecture (ODA) for service and resource management.

        Speaker: Mrs Ioana Dragus
      • 15:15
        Optimisation of the Gray-Scott Model for Reaction-Diffusion - CPU-GPU Component 15m

        This paper addresses key computational challenges inherent to the Gray-Scott reaction-diffusion model, such as memory hotspots and prolonged execution times, challenges that underscore the limitations of relying solely on CPU resources. To overcome these issues, a hybrid CUDA-aware MPI approach is proposed, leveraging the Kokkos library to effectively distribute computation across both CPU and GPU, with an implementation that integrates the ADIOS2 library for efficient I/O and employs Kokkos to manage the computational workload, thereby shifting the primary execution focus from the host to the device processor.

        Optimization efforts concentrated on CUDA-specific execution led to an average speedup exceeding 100×, alongside a marked improvement in GPU thread utilization, ensuring maximal exploitation of allocated resources throughout runtime. Furthermore, by employing Kokkos from the ground up - for both initial code development and subsequent optimization - the solution achieves high portability, enabling these performance gains to generalize across diverse hardware platforms, from consumer-grade systems to high-end supercomputers, the backbone of HPC, irrespective of the underlying CPU or GPU architecture.
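
        For reference, a plain NumPy sketch of one Gray-Scott time step is shown below (the stencil that the paper offloads to GPUs via Kokkos and CUDA-aware MPI); parameter values are common textbook choices, not the paper's configuration:

        ```python
        # Hedged sketch: one explicit-Euler step of the Gray-Scott
        # reaction-diffusion model on a periodic 2-D grid.
        import numpy as np

        Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0  # illustrative values

        def laplacian(a: np.ndarray) -> np.ndarray:
            # 5-point stencil with periodic boundaries.
            return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                    np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

        def step(u: np.ndarray, v: np.ndarray):
            # du/dt = Du*lap(u) - u*v^2 + F*(1-u)
            # dv/dt = Dv*lap(v) + u*v^2 - (F+k)*v
            uvv = u * v * v
            u += dt * (Du * laplacian(u) - uvv + F * (1 - u))
            v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
            return u, v
        ```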

        Speaker: Alexandru-Constantin Bala (National University of Science and Technology POLITEHNICA Bucharest)
    • 14:00 15:30
      Pervasive Systems and Computing Room 3

      Room 3

      Technical University of Moldova
      Convener: Valentina TIRSU (Technical University of Moldova)
      • 14:00
        Manipulation Detection in the Banking Sector 15m

        The integration of Artificial Intelligence (AI) in banking has brought significant advancements in areas such as credit scoring, fraud detection, and risk management. However, the adoption of complex machine learning models has also introduced major challenges regarding transparency, regulatory compliance, and trust. This paper examines a broad set of explainability techniques applicable to banking, ranging from model-agnostic methods such as SHAP and LIME to model-specific approaches including attention mechanisms and counterfactual reasoning. We discuss their relevance, interpretability trade-offs, and integration challenges within high-stakes financial environments. Special attention is given to practical use cases and the alignment of these techniques with ethical and regulatory standards. Our analysis provides key insights into the current landscape of explainable AI and outlines future directions for trustworthy and interpretable financial systems.
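
        To ground the discussion, a minimal sketch of a model-agnostic SHAP explanation for a toy credit-scoring model follows (assuming the shap and scikit-learn packages; the data and model are stand-ins, not a production banking pipeline):

        ```python
        # Hedged sketch: explain a tree-based scoring model with SHAP, one of
        # the model-agnostic techniques surveyed in the paper.
        import numpy as np
        import shap
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))             # e.g. income, debt ratio, age, history
        y = (X[:, 0] - X[:, 1] > 0).astype(int)   # toy default/no-default label

        model = RandomForestClassifier().fit(X, y)
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X[:10])

        # Per-feature contributions for each scored applicant; large positive
        # values push the model toward predicting that class.
        print(np.asarray(shap_values).shape)
        ```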

        Speaker: Mircea Badoi
      • 14:15
        Evaluating LLMs for Automated Requirement and Test Case Generation in Railway Signaling Systems 15m

        Large Language Models (LLMs) have shown potential in supporting requirements engineering through automation, especially in regulated and safety-critical domains. This paper evaluates the capabilities of three well-known LLMs (GPT-4, Claude, Gemini) in transforming user requirements into structured product requirements and corresponding test cases within the context of railway signaling. A custom dataset of client requirements, inspired by realistic signaling scenarios, was developed to enable consistent evaluation across models. Each model’s outputs were assessed using defined metrics, including completeness, correctness, consistency, and traceability. The comparative results highlight variations in the quality and structure of the generated artifacts, with specific strengths observed for different tasks. While all three models demonstrate promise, their reliability and consistency vary, and human oversight remains essential. This study provides practical insights into the applicability of current LLMs for augmenting early-stage requirements and verification workflows in critical systems engineering.

        Speaker: Mr Ionuț-Gabriel OȚELEA (National University of Science and Technology POLITEHNICA Bucharest)
      • 14:30
        Applying OpenTelemetry Metrics to Monitor Urban Air Quality Sensors 15m

        Today, many smart city projects integrate an air quality monitoring component; however, particularly in the IoT field, the observability of the system appears less mature than in other software domains. Many systems depend heavily on vendor-specific technologies, making unified visibility difficult. This is because current solutions collect many environmental parameters but provide little data on sensor status, reliability, or other aspects of operational performance.
        In this paper, we investigate whether OpenTelemetry (OTel), an open-source tool for observability, can improve monitoring of urban air quality in a sensor network. To test this, we have designed a digital twin setup where we capture both environmental data and sensor-specific health information such as battery status, connectivity problems, and unusual sensor readings. The setup is a lightweight version of a smart city with different urban zones, sensor types and realistic operational scenarios.
        Our analysis suggests that OpenTelemetry could improve real-time issue detection and may also offer better maintenance and management of IoT deployments at large scale.
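
        A minimal sketch of what such instrumentation can look like with the OpenTelemetry Python SDK follows; the metric names, attributes, and values are illustrative, not the paper's exact schema:

        ```python
        # Hedged sketch: export both environmental readings and sensor-health
        # metrics through OTel, in the spirit of the paper's digital-twin setup.
        from opentelemetry import metrics
        from opentelemetry.metrics import CallbackOptions, Observation
        from opentelemetry.sdk.metrics import MeterProvider
        from opentelemetry.sdk.metrics.export import (
            ConsoleMetricExporter,
            PeriodicExportingMetricReader,
        )

        reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=5000)
        metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
        meter = metrics.get_meter("air-quality-twin")

        pm25 = meter.create_histogram("env.pm25", unit="ug/m3")          # environmental data
        dropouts = meter.create_counter("sensor.connectivity.dropouts")  # health data

        def read_battery(options: CallbackOptions):
            # In a real deployment this would query the device; fixed value here.
            yield Observation(78.0, {"sensor_id": "pm25-07", "zone": "industrial"})

        meter.create_observable_gauge("sensor.battery_percent", callbacks=[read_battery])

        pm25.record(14.2, {"sensor_id": "pm25-07", "zone": "industrial"})
        dropouts.add(1, {"sensor_id": "pm25-07"})
        ```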

        Speakers: Razvan Bogdan (Universitatea Politehnica din Timisoara), Mr Sebastian Petruc (Universitatea Politehnica din Timisoara)
      • 14:45
        Context-Aware Passenger Comfort Estimation for Pervasive Mobility Systems 15m

        This paper presents a context-aware system that monitors and predicts passenger comfort in pervasive driving environments, using in-vehicle sensor data and machine learning. A Neural Network-based model is trained on baseline data and then personalised using data gathered from the CAN bus to infer discomfort from driving patterns such as braking and cornering. The proposed architecture integrates sensor fusion, user feedback, and visualisation tools, enabling comfort adaptation in real time. Simulations in the CARLA environment demonstrate the system's robustness and adaptability. By focusing on human-centric personalisation and integrating seamlessly with pervasive computing components such as mobile UIs and edge-like devices, this research supports future ride-sharing and Robo-Taxi platforms that prioritise comfort as a core system feature.

        Speakers: Mr Armin Török (Babes-Bolyai University), Mr David Szilagyi (Babes-Bolyai University), Iulian Benta (Babes-Bolyai University)
      • 15:00
        Energy Profiling of 5G Cellular Modems at the Software-Hardware Interface 15m

        The advancement of computing systems has gone hand in hand with the development of increasingly sophisticated communication networks. The arrival of 5G cellular networks has dramatically enhanced mobility, flexibility, and data throughput—but at the cost of significantly higher energy consumption, particularly in mobile communication modules. While extensive research has been conducted on optimizing energy efficiency at the hardware and protocol levels, software-level energy optimization remains largely underexplored, especially within the 5G context.

        In particular, existing studies rarely investigate how low-level software interactions with the cellular modem affect power consumption during atomic operations (e.g., connection setup, bearer establishment, paging responses). This constitutes a critical knowledge gap, since such operations are fundamental to all mobile applications and can occur frequently—even in background processes.

        This study aims to address this gap by analyzing the current consumption profile of 5G cellular modems during these atomic transmission procedures. We focus on the behavior of the modem as controlled by various software layers, including the Radio Interface Layer (RIL), kernel drivers, and high-level telephony services. By profiling and characterizing the energy impact of atomic operations, we seek to provide actionable insights for optimizing software design in mobile systems. Our findings contribute to a more precise understanding of how software decisions affect energy usage at the modem level and open new directions for fine-grained energy-aware programming in 5G-enabled platforms.

        Speaker: Paul-Cristian Banu-Țăran (Universitatea Politehnica Timișoara)
      • 15:15
        Architecture for the automation of virtual prototyping for chip design at silicon level 15m

        "This paper thesis proposes the development of a desktop application, by integrating a
        graphical user interface designed to improve the user experience with the virtual prototype of a
        chip. The identified problem is the difficulty of users accessing, interacting, configuring and
        analyzing the virtual prototype in an intuitive and centralized way. The interface groups and
        optimizes access to the functions and information necessary to understand and then analyze the
        prototype, through a natural and efficient interaction.

        The implementation demonstrated a significant increase in the understanding of the virtual prototype concept and in user satisfaction
        and performance, while also decreasing the access time to existing virtual prototypes, things that
        led to the popularization of the product and a better understanding of the activity of the virtual
        prototyping department.

        The application is now a standard for using the virtual prototype, being the most popular way to
        interact with it. It is used by the Virtual Prototyping department within Infineon Romania and by
        the clients of the department."

        Speaker: Mr Theodor Ioan Buliga
    • 14:00 15:45
      Technologies for Future Internet Room 2

      Room 2

      Technical University of Moldova
      Convener: Rodica Siminiuc (Technical University of Moldova)
      • 14:00
        Reflections on trusting trust after 40 years 15m

        In 1984, Ken Thompson presented “Reflections on trusting trust” as part of his Turing Award lecture, demonstrating a theoretical attack where a backdoored compiler would inject malicious code into compiled programs and propagate the backdoor to future versions of itself during self-compilation. The lecture described one of the darkest scenarios for supply-chain attacks, although its broader implications were not explicitly addressed at the time. This paper examines the evolution of supply-chain attacks, from Thompson’s foundational work to the current threat landscape. It addresses historical developments of trusting trust and analyzes their manifestations in current times. The analysis covers documented supply-chain attacks over the years, the biggest attack that never was, as well as current exploitation techniques that leverage trust relationships, such as watering-hole attacks, insider threats and deepfakes. This research provides a comprehensive analysis of how Thompson’s theoretical idea has materialized into a practical attack methodology and evaluates the status quo of supply-chain security and trusting trust in light of current developments in threat actors’ capabilities.

        Speaker: Alexandru-Cristian Bardaș (Academia de Studii Economice, București)
      • 14:15
        Analyzing the Impact of Bitrate Reduction on Neural Networks Inference Accuracy 15m

        In this work, we investigate the influence of video bitrate on the inference accuracy of YOLO models for human detection from unmanned aerial vehicles (UAVs). The study targets detection across both infrared (IR) and visible light spectrums, evaluating the model's robustness under varying compression levels. We explore bitrates ranging from 1 to 20 Mbps to assess the trade-off between compression and detection performance. Additionally, we compare two encoding strategies: a single merged stream combining IR and visible data versus two separate streams for each spectrum. Experimental results demonstrate how bitrate selection and stream configuration impact detection accuracy, providing insights for optimizing multi-spectral UAV-based human detection systems under bandwidth-constrained conditions.
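
        A minimal sketch of such a bitrate sweep follows, assuming ffmpeg on the PATH and the ultralytics package; the file names, the yolov8n weights, and the raw detection count as a proxy metric are illustrative assumptions:

        ```python
        # Hedged sketch: re-encode a UAV clip at several bitrates and count
        # person detections per bitrate as a rough accuracy proxy.
        import subprocess
        from ultralytics import YOLO

        model = YOLO("yolov8n.pt")  # assumed weights; the paper's models may differ

        for mbps in (1, 5, 10, 20):
            out = f"clip_{mbps}mbps.mp4"
            subprocess.run(
                ["ffmpeg", "-y", "-i", "clip_src.mp4", "-b:v", f"{mbps}M", out],
                check=True,
            )
            results = model(out, stream=True)
            persons = sum(int((r.boxes.cls == 0).sum()) for r in results)  # COCO class 0 = person
            print(mbps, "Mbps ->", persons, "person detections")
        ```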

        Speakers: Dr Calin Bira (National University of Science and Technology Politehnica Bucharest), Costin-Emanuel Vasile (National University of Science and Technology Politehnica Bucharest)
      • 14:30
        Anti-Plagiarism System for Exam Monitoring 15m

        The rise of academic dishonesty through exam cheating has driven the development of more advanced and complex plagiarism detection systems. These systems are proprietary; they come with financial, hardware, privacy, and cloud requirements that cannot be covered by all educational institutions. We propose an open-source, real-time video plagiarism detection system focused on suspicious gaze direction and cheating-device detection, which can run on accessible hardware. For gaze direction detection, the system uses MediaPipe Face Mesh landmarks, a Kalman filter, a horizontal ratio, a vertical ratio, and dynamic thresholds to obtain an accuracy of 92.4% with a latency under 100 ms on our own manually annotated gaze direction dataset. Object detection is done by two YOLOv8 models fine-tuned on two specific datasets, achieving 88.6% smartphone detection and 92.5% smartwatch detection with under 200 ms latency. The system runs on a CPU only, with a memory footprint under 820 MB, at over 25 FPS. It stores only the suspicious parts of the video, making it suitable for small educational institutions' systems and reducing storage-related privacy concerns.
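
        A minimal sketch of the horizontal gaze-ratio idea follows, assuming the mediapipe and opencv-python packages; the landmark indices and the simple exponential filter (standing in for the paper's Kalman filter) are illustrative choices:

        ```python
        # Hedged sketch: horizontal gaze ratio from MediaPipe Face Mesh iris
        # landmarks; values far from ~0.5 suggest off-screen gaze.
        import cv2
        import mediapipe as mp

        face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)  # adds iris points

        LEFT_EYE_OUTER, LEFT_EYE_INNER, LEFT_IRIS = 33, 133, 468  # common indices

        def horizontal_ratio(frame) -> float | None:
            """0.0 = iris at the outer corner, 1.0 = inner corner, ~0.5 = centered."""
            res = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                return None
            lm = res.multi_face_landmarks[0].landmark
            left, right, iris = lm[LEFT_EYE_OUTER].x, lm[LEFT_EYE_INNER].x, lm[LEFT_IRIS].x
            return (iris - left) / (right - left)

        # A 1-D exponential filter stands in here for the paper's Kalman filter.
        smoothed, alpha = 0.5, 0.3
        def smooth(r: float) -> float:
            global smoothed
            smoothed = alpha * r + (1 - alpha) * smoothed
            return smoothed
        ```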

        Speakers: Mr Valentin Pletea-Marinescu (National University of Science and Technology POLITEHNICA Bucharest), Dr Ștefan-Dan Ciocîrlan (National University of Science and Technology POLITEHNICA Bucharest)
      • 14:45
        Demacert - decentralised management of certifications 15m

        In the current educational and professional landscape, one’s academic achievements are paramount to career advancement and opportunities. The key to successfully navigating these scenarios is the ability to provide easily verifiable proof of qualifications. The educational system faces significant challenges regarding the issuance and management of certifications. The process relies on paper documents, which are prone to loss, damage, and forgery. In the digital age, this approach is highly inefficient, difficult to verify, and non-standardized, leading to difficulties for both students and educational institutions in managing these documents. To facilitate managing and securing academic certifications, we propose a platform that uses blockchain and distributed storage through IPFS to digitalize the process of issuing diplomas. Demacert moves the responsibility of distributing certificates from the issuers to the platform, while enabling students to store all their documents digitally in one place, encapsulating all these features in an easy-to-use platform. Through this system, we also aim to eliminate the risk of forgery and to simplify the validation process for educational institutions and employers.

        Speakers: Mrs Gabriela Limberea (National University of Science and Technology Politehnica Bucharest), Mrs Mihaela Ștefan (National University of Science and Technology Politehnica Bucharest)
      • 15:00
        Eldie - Virtual Assistant for Elderly People 15m

        After the age of 60, most people begin to experience a decline in their cognitive skills. Eldie is a platform dedicated to slowing down mental deterioration among seniors. Studies show the effectiveness of games in training the brain in multiple aspects, such as memory, attention, and problem-solving skills. By combining scientifically researched games with analysis of daily activities, Eldie calculates a brain score for each user.

        Speaker: Ioan-Teofil Sturzoiu (Politehnica Bucharest)
      • 15:15
        Practical Cryptanalysis of ECDSA: Comparative Efficiency Analysis of Brute-Force and Baby-Step Giant-Step Key Recovery Methods 15m

        The Elliptic Curve Digital Signature Algorithm (ECDSA) is pivotal for securing digital information across various applications. This paper investigates ECDSA's security by focusing on two generic attack methodologies: brute-force and Baby-Step Giant-Step (BSGS). We evaluate their theoretical effectiveness and practical implications by analyzing existing research and known vulnerabilities. Our findings underscore the practical limits of ECDSA security, highlighting the critical roles of robust key generation and secure implementation practices. This work aims to provide insights for practitioners and researchers in applied cryptography, emphasizing the ongoing need for vigilance and adaptation in cryptographic security.
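
        For intuition, a minimal sketch of Baby-Step Giant-Step in a toy multiplicative group modulo a prime is shown below; the same O(√n) meet-in-the-middle idea transfers to elliptic-curve groups, where multiplication becomes point addition:

        ```python
        # Hedged sketch: BSGS solves g**x = h (mod p) in about sqrt(n) steps
        # instead of brute force's n, by trading memory for time.
        from math import isqrt

        def bsgs(g: int, h: int, p: int) -> int | None:
            """Find x with g**x = h (mod p); group order taken as p - 1."""
            n = p - 1
            m = isqrt(n) + 1
            baby = {pow(g, j, p): j for j in range(m)}  # baby steps: g^j
            factor = pow(g, -m, p)                      # g^(-m) mod p
            gamma = h
            for i in range(m):                          # giant steps: h * g^(-i*m)
                if gamma in baby:
                    return i * m + baby[gamma]
                gamma = gamma * factor % p
            return None

        assert bsgs(5, pow(5, 1234, 10007), 10007) == 1234
        ```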

        Speaker: Ionel Patrascu
      • 15:30
        Digital fleet management for Ambulance Service 15m

        The push for healthcare digitalization is growing, with increasing demands for security, data processing, and streamlined applications. In the broader context, the technology can be used not only in a clinical setting, but also in emergency medicine – specifically, emergency vehicle fleet management. The main contribution of this work is a reliable, secure, and intelligent platform for the Bucharest-Ilfov Ambulance Service to monitor vehicle parameters in real time, with input from ambulance drivers. The proposed solution is an application – Ambuparc – through which both the driver and the fleet manager can report the technical condition of the vehicle, schedule service appointments, and perform other vehicle-specific actions. Primarily coded in TypeScript, the driver application contains a predefined set of technical questions that the driver must answer regarding the vehicle’s condition. All responses are collected and viewed by engineers on a web dashboard that can use Artificial Intelligence to predict and prevent mechanical faults in a given vehicle type, based on fleet history and other parameters. The dataset comprises information gathered from 8 ambulances and 5 ambulance drivers between March 2025 and June 2025. Every emergency protocol begins with the safety of the medical personnel, which also encompasses the technical status of the ambulance itself. As such, the data gathered by Ambuparc plays a crucial role in the safety of each shift. The application addresses the emerging standard in modern emergency medical service fleet management.

        Speaker: Victor-Emanuil Nitu (UNSTPB)
    • 15:45 16:30
      Conclusions: 2025 RoEduNet Conference Keynote
    • 18:30 22:00
      Conference Dinner

      Cricova