Research proposals

PhD selection is performed twice a year through an open process. Further information is available at http://dottorato.polito.it/en/admission.

Grants from Politecnico di Torino and other bodies are available.

In the following, we report the list of topics for new PhD students in Computer and Control Engineering (Cycle XXXIX). The list will be updated regularly as new proposals become available.

If you are interested in one of the topics, please contact the related Proposer.


Group website: https://elite.polito.it/
Summary of the proposal: Nowadays, tech companies adopt dark design patterns, e.g., content autoplay and infinite scroll, that manipulate users into spending time and attention on web and mobile applications. As a result, concerns about technology overuse and addiction have gained momentum. This PhD proposal investigates the design, implementation, and evaluation of user interfaces that promote people’s digital wellbeing by respecting and preserving users’ time and attention. Possible outcomes include intelligent and effective mitigation strategies for contemporary dark patterns, as well as the development of intelligent tools that can proactively improve the interaction between users and digital services.
Topics: Human-Computer Interaction, Digital Wellbeing, User Interfaces
Research objectives and methods: In the contemporary attention economy, tech companies adopt design patterns, e.g., content autoplay and infinite scroll, that exploit people's psychological vulnerabilities and manipulate users into spending time and attention on digital services. These “attention-capture” dark patterns often lead people to lose their sense of control and time over technology use and to later experience regret. As a result, concerns about technology overuse and addiction have gained momentum, and researchers, public media, and some tech companies have started highlighting the need to design user interfaces that better align with people's digital wellbeing, a term that refers to the impact of digital technologies on people's lives.

The main research objective of this PhD proposal is to explore the relationships between dark patterns and people’s digital wellbeing, thus promoting the development of user interfaces that positively impact people’s digital wellbeing, rather than diminishing it. The PhD student will study, design, develop, and evaluate proper models and novel technical solutions (e.g., tools and frameworks) in this domain, starting from the relevant scientific literature and performing studies involving users. In particular, possible areas of investigation are:
a) Development of mitigation strategies that can limit the drawbacks and negative impacts brought by attention-capture dark patterns (ACDPs). A possible example is the usage of appropriate nudges. Nudges are changes in the design architecture of a system that target users' cognitive biases. One of their main goals is to allow users to know the underlying system better and make deliberate choices, i.e., they can be seen as a way to make users exercise their own agency. In this proposal, nudges could promote awareness of the negative consequences of ACDPs on people's digital wellbeing. Examples include informative splash screens listing all the dark patterns exploited by a given mobile app, and widgets highlighting when a pattern is operating, e.g., during intensive scrolling.
b) Development of alternative design patterns that respect and preserve the user's attention. These patterns promote users' awareness by design and support reflection while offering the same functionality as ACDPs. Examples include recommender systems that take into account a user's past problematic behaviors, e.g., avoiding viral suggestions for users who have already shown signs of addictive behavior, or social networks' newsfeeds listing friends' posts in chronological order only, e.g., through pagination.
c) Development of intelligent tools to proactively improve the interaction between users and digital services like web and mobile applications. Through the usage of artificial intelligence and machine learning models, these tools may detect when the user is trapped by a dark pattern, e.g., during intensive scrolling, and may automatically apply a suitable mitigation strategy or even modify the design of the user interface.
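The scrolling case mentioned above can be made concrete with a simple heuristic: flag a session as "intensive scrolling" when too many scroll events fall within a sliding time window. The Python sketch below is purely illustrative (the window size and threshold are invented parameters, not values from the proposal); a real tool would consume browser or mobile accessibility events and could feed richer features into a learned model.

```python
from collections import deque

def is_intensive_scrolling(timestamps, window=10.0, threshold=30):
    """Flag 'intensive scrolling' when more than `threshold` scroll
    events occur within any `window`-second sliding interval.

    `timestamps` is an ascending list of scroll-event times (seconds).
    """
    recent = deque()
    for t in timestamps:
        recent.append(t)
        # Drop events that fell out of the sliding window.
        while recent and t - recent[0] > window:
            recent.popleft()
        if len(recent) > threshold:
            return True
    return False

# A calm session: one scroll every 2 seconds -> not flagged.
calm = [i * 2.0 for i in range(40)]
# A doomscrolling burst: 50 events within 5 seconds -> flagged.
burst = [i * 0.1 for i in range(50)]
```

When the detector fires, a mitigation strategy such as a nudging overlay could be triggered in the user interface.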

The proposal will adopt a human-centered approach and will build upon the existing scientific literature from different interdisciplinary domains, mainly Human-Computer Interaction. The work plan will be organized according to the following four, partially overlapping, phases:
• Phase 1 (months 0-6): literature review about digital wellbeing and attention-capture dark patterns; focus groups and interviews with designers, practitioners, and end users; definition and development of a set of use cases and of promising mitigation strategies and alternative patterns to be adopted.
• Phase 2 (months 6-24): research, definition, and experimentation of mitigation strategies and alternative design patterns for attention-capture dark patterns, starting from the outcome of the previous phase. Here, the focus will be on the most commonly used devices, i.e., the smartphone and the PC, with the design, implementation, and evaluation of one or more mobile applications and browser extensions.
• Phase 3 (months 12-36): research, definition, and experimentation of intelligent tools that make use of artificial intelligence and machine learning models to proactively apply the mitigation strategies and alternative design patterns identified in the previous phase.
• Phase 4 (months 24-36): extension and possible generalization of the previous phases to include additional devices; evaluation in real settings over long periods of time to assess to what extent the proposed solutions can address the negative impact of attention-capture dark patterns and increase any positive outcome on our digital wellbeing.

It is expected that the results of this research will be published in some of the top conferences in the Human-Computer Interaction field (e.g., ACM CHI, ACM CSCW, and ACM IUI). One or more journal publications are expected in a subset of the following international journals: ACM Transactions on Computer-Human Interaction, ACM Transactions on the Web, ACM Transactions on Interactive Intelligent Systems, IEEE Transactions on Human-Machine Systems, and International Journal of Human-Computer Studies.
Required skills: A candidate interested in the proposal should ideally:
• be able to critically analyze and evaluate existing research, as well as gather and interpret data from various sources;
• be able to effectively communicate research findings through writing and presenting, both to specialist and non-specialist audiences;
• have a solid foundation in computer science/engineering and possess the relevant technical skills;
• have a good understanding of HCI research methods;
• be aware of ethical considerations related to the investigated topics, particularly when conducting research on human subjects.

Group website: https://dbdmg.polito.it
https://www.eurecom.fr/en/researc...
Summary of the proposal: Representation learning aims at leveraging Deep Learning models to build vector representations of data suitable for accomplishing complex tasks. Established methods tailored for textual data, images, and time series have already been proposed. The goal of the PhD program is to advance the state of the art by exploring their application to tabular data. The twofold aim is to leverage the meta-information provided by the structure and investigate the use of contrastive learning approaches.
Topics: Natural Language Processing, Representation Learning, Tabular data
Research objectives and methods: RESEARCH OBJECTIVES
The PhD program will focus on representation learning for tabular data. The goal is to design and implement Deep Learning approaches, mainly self-supervised, to build vector representations of structured data that incorporate structure-related information and are suitable for tackling complex NLP tasks. We envision two main directions:
- benchmarking such representations with a suite of tests (in the style of CheckList [1] for language models in NLP); some preliminary ideas were tested in [2];
- applying contrastive learning techniques, already established for images and time series [3], to tabular data.

[1] Ribeiro et al. Beyond Accuracy: Behavioral Testing of NLP models with CheckList
[2] Cappuzzo et al. Creating Embeddings of Heterogeneous Relational Datasets for Data Integration Tasks. SIGMOD 2020
[3] Elijah Cole, Xuan Yang, Kimberly Wilber, Oisin Mac Aodha, Serge Belongie; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 14755-14764
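As a rough illustration of the second direction: contrastive learning needs positive pairs, and a common recipe for tabular data (e.g., SCARF-style random feature corruption) builds a positive "view" of a row by replacing a random subset of its cells with values sampled from the same columns of other rows, so that the structure of the table drives the augmentation. The Python sketch below is a minimal, hypothetical version; the function names and the toy table are invented for illustration.

```python
import random

def corrupt(row, columns, table, p=0.3, rng=None):
    """Build a positive 'view' of `row` for contrastive learning by
    replacing each cell, with probability `p`, with a value drawn from
    the same column of a randomly chosen row (SCARF-style corruption)."""
    rng = rng or random.Random()
    view = dict(row)
    for col in columns:
        if rng.random() < p:
            donor = rng.choice(table)
            view[col] = donor[col]
    return view

rng = random.Random(0)
table = [{"city": c, "pop": n} for c, n in
         [("Turin", 850_000), ("Milan", 1_370_000), ("Rome", 2_870_000)]]
anchor = table[0]
# (anchor, positive) form a positive pair; views of other rows act as negatives.
positive = corrupt(anchor, ["city", "pop"], table, p=0.5, rng=rng)
```

An encoder trained with an InfoNCE-style loss on such pairs would then produce the structure-aware row embeddings targeted by the proposal.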

OUTLINE OF THE RESEARCH PLAN
The PhD student will first elaborate on the preliminary ideas described in [2] by proposing suitable tests and robust empirical evaluations. Then, she will explore the potential of state-of-the-art self- and semi-supervised learning architectures for analyzing tabular data to solve challenging tasks (e.g., clustering [4], summarization [5]). Finally, she will specifically address the problem of designing contrastive representations by generating positive/negative pairs based on structure-related information.

[4] Colomba L., Cagliero L., Garza P. Density-based Clustering by Means of Bridge Point Identification. IEEE TKDE, early access, pp. 1-14. DOI: 10.1109/TKDE.2022.3232315
[5] La Quatra M., Cagliero L. BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization. Future Internet. 2023; 15(1):15. https://doi.org/10.3390/fi15010015

LIST OF POSSIBLE PUBLICATION VENUES
Top-ranked conferences (e.g., EMNLP, ACL, SIGIR, SIGMOD, VLDB)
Top-quality journals (e.g., TACL, IEEE TKDE, ACM TKDD, ACM TOIS, IEEE TPAMI)

ONGOING COLLABORATIONS RELATED TO NLP
Luca Cagliero has active collaborations on NLP-related topics with
(1) H-FARM Innovation (https://innovation.h-farm.com/it/),
(2) R&D Unit – Radiotelevisione Italiana
(3) Commissione nazionale per le società e la Borsa (CONSOB)

Paolo Papotti has two ongoing projects on NLP-related topics:
(1) PhD (CIFRE) on mining rules from a large textual corpus with SAP Labs France
(2) ANR Attention project on fact-checking textual claims with heterogeneous data
Required skills: The PhD candidate is expected to have skills in
- Data Science fundamentals
- Python programming
- Deep Learning
Basic knowledge of Natural Language Processing techniques is also advisable but not required.

Group website: eda.polito.it
Summary of the proposal: The excellent accuracy of deep learning models comes at the cost of high complexity. Optimizing these models by reducing their latency, memory, and energy costs is paramount to enable their execution at the edge. Going in this direction, this project explores:
1. Neural Architecture Search (NAS) to optimize the network topology;
2. Self-supervised learning to increase the accuracy of small models without requiring enormous datasets;
3. Continual learning to cope with data and concept drift.
Topics: Deep Learning; TinyML; Edge Computing
Research objectives and methods: Deep learning models are often deployed on edge devices for various tasks, ranging from biosignal processing to industrial asset monitoring. However, these models can have high computational complexity, which is an obstacle to their deployment on low-power edge devices with limited resources in terms of memory, energy, and processing power. In order to achieve the required accuracy at the minimum latency, it is crucial to optimize deep learning models under these constraints. The candidate will explore three main directions to maintain/improve accuracy while reducing the complexity of deep learning models for real edge applications. The three techniques will be developed in parallel and in synergy with each other, and the work will be done in collaboration with EDA group researchers already working on those topics.

The first investigated approach will be Neural Architecture Search (NAS). These techniques automate the process of designing neural network architectures by using optimization algorithms to search through the space of possible architectures and select the ones that perform the best on a given task. Using NAS makes it possible to find network topologies that are simultaneously accurate and efficient in terms of latency and energy consumption, making them more suitable for deployment on edge devices.
The candidate's goal will be to explore different NAS algorithms and evaluate their performance on various edge-relevant tasks. This could involve implementing and comparing different NAS algorithms, as well as experimenting with different search spaces and optimization objectives.
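As a minimal illustration of the search loop such algorithms share, the hedged Python sketch below runs random search over a tiny hypothetical space, with mock accuracy and latency proxies standing in for real (proxy-)training and on-device measurements. Everything here (the search space, the cost model, the budget) is invented for illustration; actual NAS work would use gradient-based or evolutionary strategies and hardware-in-the-loop cost models.

```python
import random

# Hypothetical search space: network depth, channel width, kernel size.
SEARCH_SPACE = {
    "depth":  [2, 4, 6, 8],
    "width":  [16, 32, 64],
    "kernel": [3, 5, 7],
}

def cost(arch):
    """Stand-in latency/size proxy: grows with every knob."""
    return arch["depth"] * arch["width"] * arch["kernel"] ** 2

def accuracy(arch):
    """Mock accuracy with diminishing returns for bigger models.
    In practice this would be a (proxy) training/validation run."""
    return 1.0 - 1.0 / (1 + arch["depth"] * arch["width"])

def random_search(n_trials, latency_budget, rng):
    """Sample architectures; keep the most accurate one under budget."""
    best, best_acc = None, -1.0
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        if cost(arch) <= latency_budget and accuracy(arch) > best_acc:
            best, best_acc = arch, accuracy(arch)
    return best

rng = random.Random(42)
best = random_search(200, latency_budget=4000, rng=rng)
```

The same skeleton extends to multi-objective search by replacing the budget check with a Pareto-style comparison of accuracy against cost.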

An orthogonal approach to optimizing deep learning models for edge devices is to use Self-Supervised Learning (SSL) techniques. SSL is a type of machine learning in which the model is trained on unlabeled data, with supervision signals generated from some known structure or process. By using SSL, it is possible to increase the accuracy of small models without requiring enormous labeled datasets, which is particularly useful since, unlike “mainstream” Computer Vision and Natural Language Processing applications, many edge-relevant tasks cannot rely on huge publicly available datasets.
The goal of this second activity will therefore be to improve the accuracy of already optimized models, e.g., those found by the NAS algorithms described above. The candidate will apply existing SSL techniques to new tasks, possibly fusing them with NAS.
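A core ingredient of contrastive SSL pipelines is generating two stochastic "views" of the same unlabeled sample. For edge-relevant 1-D signals such as biosignals, typical augmentations include random amplitude scaling and jitter. The fragment below is an illustrative Python sketch with made-up parameters, not a prescription from the proposal; a real pipeline would feed such pairs to an encoder trained with a contrastive loss.

```python
import random

def augment(signal, rng, jitter=0.05, scale_range=(0.8, 1.2)):
    """One stochastic 'view' of a 1-D signal for contrastive SSL:
    random amplitude scaling plus additive Gaussian jitter."""
    scale = rng.uniform(*scale_range)
    return [scale * x + rng.gauss(0.0, jitter) for x in signal]

def contrastive_pair(signal, rng):
    """Two independent augmentations of the same signal form a
    positive pair; views of other signals act as negatives."""
    return augment(signal, rng), augment(signal, rng)

rng = random.Random(0)
ecg = [0.0, 0.1, 0.9, 0.2, 0.0, -0.1]   # toy heartbeat-like trace
v1, v2 = contrastive_pair(ecg, rng)
```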

The final topic considered in this thesis will be Continual Learning (CL), a set of methods that enable a model to adapt to new data over time without forgetting what it has learned previously. This is important because the data distribution in edge-relevant tasks can often change over time due to factors such as changes in the environment or in the monitored asset. By using CL techniques, it is possible to cope with this drift and maintain the accuracy of the model over time. The final goal of this activity will be to deploy an end-to-end working application that exploits previously optimized architectures and uses CL to maintain high accuracy on a target task over time.
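One widely used family of CL methods is rehearsal: a small replay memory of past examples is mixed into training on new data so that earlier tasks are not forgotten. A minimal sketch of such a buffer follows, using reservoir sampling so that every example seen so far has equal probability of being retained; the capacity and the toy task stream are invented for illustration, and this is only one of the CL strategies the candidate could explore.

```python
import random

class ReservoirBuffer:
    """Fixed-size replay memory for rehearsal-based continual learning.
    Reservoir sampling keeps a uniform sample of the stream seen so far,
    so old tasks stay represented without storing everything."""

    def __init__(self, capacity, rng):
        self.capacity = capacity
        self.rng = rng
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Keep the new example with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a rehearsal minibatch from the stored examples."""
        return self.rng.sample(self.data, min(k, len(self.data)))

rng = random.Random(7)
buf = ReservoirBuffer(capacity=100, rng=rng)
for task in range(3):                 # three sequential "tasks"
    for i in range(1000):
        buf.add((task, i))
```

After the stream, the buffer holds a roughly balanced mixture of all three tasks, which is what keeps rehearsal effective under drift.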

The work plan of the project will be structured as follows:

Months 1-9: Conduct a literature review on: i) NAS algorithms, ii) SSL methods, and iii) CL methods, with a focus on edge devices and applications. In particular, the candidate will consider the standard MLPerf Tiny Inference Benchmarks, plus some additional benchmarks related to EDA group projects, focusing on industrial applications, energy management, and biosignal processing.

Months 9-18: Implement and test state-of-the-art NAS, SSL and CL algorithms on the relevant benchmarks described above.

Months 18-36: Deploy and evaluate new NAS algorithms, comparing them with the state-of-the-art in terms of accuracy, latency, and energy consumption of the resulting models. Try to fuse SSL approaches within the developed NASes. Lastly, apply CL techniques on model architectures optimized with NAS + SSL.

Overall, these three approaches - NAS, SSL and CL - offer promising ways to automate the optimization and deployment of deep learning models at the edge and increase their robustness in real-world scenarios. By combining these techniques, the candidate will try to maximize the accuracy of the models while minimizing their resource requirements in terms of latency, energy consumption, and memory occupation, enabling new tasks to be solved at the edge of the network without resorting to a centralized approach, or improving the capabilities of existing ones.

Possible publication venues for this thesis include:
- IEEE Transactions on CAD
- IEEE Transactions on Computers
- IEEE Journal on Internet of Things
- IEEE Transactions on Emerging Topics in Computing
- IEEE Transactions on Sustainable Computing
- IEEE Transactions on Neural Networks and Learning Systems
- ACM Transactions on Embedded Computing Systems
- ACM Transactions on Design Automation of Electronic Systems
- ACM Transactions on IoT

The EDA Group has many active industrial collaborations and funded projects on these topics, including:
- TRISTAN (ECSEL-JU 2023)
- ISOLDE (ECSEL-JU 2023)
- HiFiDELITY (ECSEL-JU 2023)
- StorAIge (ECSEL-JU 2021)
- Etc.
Required skills: 1. Experience with neural network models: The project involves designing and evaluating different neural network architectures, so a good understanding of how different types of layers and connections can impact the performance of a model would be important.
2. Familiarity with programming languages: The project will involve implementing and testing various algorithms and techniques, so experience with programming languages such as Python and C is needed.
3. Familiarity with embedded systems and computer architectures: Understanding the constraints and limitations of edge devices is important in order to optimize the deep learning models for these platforms. Experience with working on projects involving edge devices would be useful.

Group website: -
Summary of the proposal: Vehicles equipped with Automated Driving (AD) systems are a promising evolution of current vehicle technology and of ADAS systems. They pave the way to a sustainable mobility future with enhanced road safety, efficient traffic flow, and reduced fuel consumption. The research project focuses on the development of an innovative encapsulated control architecture that implements the AD functions needed to reach high driving automation in low-complexity driving scenarios, such as smart cities, according to SAE level L4+. Intelligence Augmentation (IA) principles are employed to integrate Artificial Intelligence (AI) perception of unpredictable events and vehicle control. The core approach relies on the use of Artificial Potential Fields (APF) combined with Model Predictive Control (MPC) methods to generate an optimal and safe path, given by the minimum-energy trajectory over time.
Topics: Automated driving systems, Model Predictive Control, Intelligence Augmentation
Research objectives and methods: This proposal refers to an industrial collaboration with the STELLANTIS group and Centro Ricerche Fiat.
Research objectives
The main goal is to provide an innovative methodology and solution for high-level AD systems that allows the Operational Design Domain (ODD) to be enlarged progressively, leading to the realization of higher levels of AD (L4+). The initial focus will mainly be on low-speed AD functions, with a methodology that is extendable to highway scenarios as well.
Several strongly interrelated research objectives (RO) should be pursued to achieve this goal:
• RO1 – Hybrid and flexible architecture, capable of accommodating various sensory configurations, on the one hand, and integrating Infrastructure info on the other; it will have to be designed with a robust approach to obtain an AD of L4+, employing AI developments (perception, scene understanding) and robust control techniques (motion planning and control), to provide relevant content and functionality.
• RO1a - Hypothesis of different degrees of responsibility / autonomy
• RO1b - How to model automotive eco-system contexts using a multimodal sensing approach (on-board sensors, environmental data, off-board information, etc.) to provide relevant content and functionality
• RO1c - Correct identification of the subtasks of the function to be solved with AI or model-based techniques and, moreover, their proper semantic integration from the perspective of Intelligence Augmentation
• RO2 - An AD functional architecture that allows the design of scalable solutions (different configurations of vehicle sensors and infrastructure info, remote-controlled vs. full ego approach)
• RO3 - Flexibility: an architecture suitable for so-called low-complexity environments (valet parking, reserved areas in a smart city context), but also extendable to highway scenarios.
• RO4 - Benchmark and evaluate the proposed solution with alternative/other Stellantis solutions.

The above objectives open a broad multidisciplinary research landscape that touches core aspects of new mobility models and systems for Industry 4.0 applications and the automotive sector. Such a framework is considered very important for the incremental process of ODD augmentation towards high-level AD systems, and for the safety and robustness benefits the APF&MPC approach can bring to the operation of AVs, through the in-depth synthesis and analysis of these issues, root-cause identification, and countermeasure development.
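To make the APF idea mentioned above concrete: the classic formulation combines a quadratic attractive potential towards the goal with a repulsive potential that activates within a range d0 of each obstacle, and the planner descends the resulting field; in this project, MPC would then shape and track the resulting trajectory. The Python fragment below is a toy 2-D sketch with invented gains and geometry, not the project's actual controller.

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0,
             d0=5.0, step=0.05):
    """One gradient-descent step on a classic potential field:
    quadratic attraction to `goal`, repulsion active within distance
    `d0` of each obstacle. Returns the updated position."""
    # Gradient of the attractive potential 0.5 * k_att * |pos - goal|^2.
    gx = k_att * (pos[0] - goal[0])
    gy = k_att * (pos[1] - goal[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            # Gradient of 0.5 * k_rep * (1/d - 1/d0)^2 pushes away.
            coef = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            gx -= coef * dx
            gy -= coef * dy
    return (pos[0] - step * gx, pos[1] - step * gy)

pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.5)]              # slightly off the straight line
path = [pos]
for _ in range(400):
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
```

The descent deflects the path around the obstacle while converging to the goal; a known limitation of plain APF is local minima, which is one reason to combine it with MPC as the proposal suggests.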

Outline of the research work plan
In the first year, the candidate will carry out an overall study and evaluation of the (hierarchical) functional architecture that encapsulates a structured pipeline of separate functional blocks. The focus will mainly be on the global planner, global-in-the-loop, and local planner concepts. A first feasibility evaluation, considering different sensor configurations and infrastructure information, will be performed as well.
In the second year, the candidate will focus on the motion planning design based on the so-called one- and two-step design approaches for the trajectory planning function. Semantic interactions between planning and control modules will be included as well. The work will address the specificities of the LSM context (AVP, smart city), but will also explore the possibility of extending the method to a highway context.
In the third year, the candidate will be dedicated mainly to validating the performance of the proposed solution, based on APF/MPC for motion planning design and integrated in the overall AD architecture (AVP in the defined ODD with specialized infrastructure areas); it will be implemented and run in the Matlab / Simulink / SCANeR simulation platform. A feasibility evaluation for a demo vehicle implementation will also be carried out; from the beginning, two plausible sensor-set configurations of the vehicle will be considered (full ego and infrastructure-info based).
In more detail, the workstreams (WS), enablers, and timing are:
• WS 1: Overall study and evaluation of the (hierarchical) functional architecture that encapsulates a structured pipeline of separate functional blocks: global planner, global-in-the-loop, and local planner (T1 = T0 + 6 months)
• WS 2: First feasibility evaluation considering different sensor configurations and infrastructure information. Situation awareness and APF construction based on lane change / obstacle avoidance, path keeping, and ACC (T2 = T1 + 6 months)
• WS 3: Evaluation of one and two step design approaches of the trajectory planning function. Semantic interactions between planning and control modules. Specificities in LSM and highway context (T3 = T2 + 9 months)
• WS 4: Validation of the motion planning on the Matlab / Simulink / SCANeR simulation platform with the new function in the defined ODD with specialized infrastructure areas. Feasibility evaluation for the demo implementation (T4 = T3 + 9 months)
• WS 5: Writing of the PhD thesis (T5 = T4 + 6 months)

During the 2nd and 3rd years, the candidate will perform extensive testing at Stellantis premises. The central part of the activity will be carried out at the CRF facilities in Turin, and remotely with the AEES/SCIC team and facilities (Vélizy-Villacoublay, Paris). The PhD candidate will also be involved in a period abroad at STELLANTIS group facilities in France and Germany.

List of possible venues for publications
The proposed solutions will be disseminated through original papers submitted to relevant journals and conferences, such as IEEE Transactions on Intelligent Vehicles, IEEE Transactions on Intelligent Transportation Systems, IEEE/ASME Transactions on Mechatronics, IEEE Transactions on Control Systems Technology, Mechatronics, Control Engineering Practice, the IEEE Conference on Decision and Control, the American Control Conference, and the IEEE Intelligent Vehicles Symposium.
Required skills: The candidate should have a solid background in the field of Automatic Control, including advanced methodologies such as Model Predictive Control. Furthermore, deep expertise in MATLAB programming and the Simulink environment is required. Finally, basics of mechanical modeling of ground vehicles and of automated driving tasks are needed as additional specific knowledge.

Group website: http://grains.polito.it/index.php
Summary of the proposal: Synthetic renderings are widely used in a variety of applications, including computer graphics, virtual reality, and product design. Comparing synthetic renderings can be used to identify changes or differences between renderings, evaluate the performance of different rendering algorithms or techniques, and compare the quality of different versions of a rendering. The goal of this research is to design and implement machine learning techniques to automate this process and improve the efficiency and accuracy of synthetic render comparison.
Topics: Machine Learning; Image Processing; Semantic Classification
Research objectives and methods: Synthetic renderings, also known as computer-generated imagery (CGI) or computer graphics, are widely used in a variety of applications, including computer graphics, virtual reality, product design, marketing and advertising, and architecture. Synthetic renderings can be generated manually by a human artist or automatically using computational techniques. There are different techniques and methods to create renderings, simulating light and material properties, camera settings, and other variables to create a highly detailed image that corresponds to a realistic scene or object. Nowadays, synthetic datasets are widely used to train convolutional neural networks (CNNs) for image or video analysis tasks in a variety of applications, including object detection, image segmentation, and image generation. There are several advantages to using synthetic datasets to train CNNs: automatically obtaining labeled data and annotations, and controlling and generating the data depending on the needs of the application to improve the performance of the model, its accuracy (scalable dataset size), and its generalization and robustness (scalable dataset variety).

Synthetic rendering comparison refers to the process of evaluating and comparing the differences between two or more synthetic visual contents. This can be done manually by a human evaluator, or automatically using computational techniques. Automatic comparison of synthetic renderings allows for the rapid and objective comparison of different renderings and can be used to identify changes or differences between renderings, evaluate the performance of different rendering algorithms or techniques, and compare the quality of different versions of a rendering. There are several techniques that have been proposed for the automatic evaluation of synthetic renderings, including supervised learning, unsupervised learning, deep learning, and transfer learning techniques. These techniques can be used to predict the similarity or dissimilarity between synthetic renderings, classify renderings as similar or different, and measure the degree of difference between renderings.
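Among the simplest automatic comparison measures are pixel-wise metrics such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR), which learned similarity models are typically benchmarked against. The sketch below is a minimal Python illustration on toy grayscale "renderings"; real pipelines would operate on full images and complement these metrics with perceptual or learned measures.

```python
import math

def mse(img_a, img_b):
    """Pixel-wise mean squared error between two grayscale images,
    given as equally sized 2-D lists of values in [0, 255]."""
    h, w = len(img_a), len(img_a[0])
    total = sum((img_a[y][x] - img_b[y][x]) ** 2
                for y in range(h) for x in range(w))
    return total / (h * w)

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar.
    Returns infinity for identical images."""
    err = mse(img_a, img_b)
    return math.inf if err == 0 else 10 * math.log10(peak ** 2 / err)

# Toy 2x2 grayscale 'renderings' of two versions of the same scene.
render_v1 = [[10, 20], [30, 40]]
render_v2 = [[12, 20], [30, 44]]
```

Thresholding such a score already gives a crude "similar / different" classifier; the research aims to replace it with learned, semantically aware comparisons.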

Research objectives:
- To review and analyze current state-of-the-art machine learning techniques for the automatic comparison and generation of synthetic renders. This will involve a comprehensive literature review of existing approaches, including supervised and unsupervised learning, reinforcement learning, and transfer learning.
- To identify the challenges and limitations of these techniques through a detailed analysis of the strengths and weaknesses of existing approaches, as well as an exploration of the unique challenges associated with using synthetic datasets to train CNNs.
- To propose and evaluate novel approaches that address these challenges and improve the efficiency and accuracy of synthetic render comparison. This will involve the generation of a synthetic dataset of images, choosing and/or developing a model and framework that will be used to train and run the system, as well as training, validating, and deploying the model.
- To demonstrate the practicality and effectiveness of the proposed techniques through simulations and experiments. This will involve the implementation and testing of the proposed techniques in realistic simulation environments, as well as in real-world scenarios, to evaluate their performance and effectiveness.

Research work plan:
1st year
- Conduct a comprehensive literature review of both existing machine learning techniques for the automatic comparison of synthetic renders and methods for generating synthetic rendering.
- Identify and analyze the challenges and limitations of these techniques.
- Improve programming, problem-solving, and data analysis skills pertaining to machine learning techniques, image processing, and automatic synthetic renders generation.
- Develop strong communication skills to communicate research findings and ideas effectively to a range of audiences.
2nd year
- Develop and propose novel approaches that address existing challenges and improve the efficiency and accuracy of synthetic render comparison.
3rd year
- Implement and test the proposed techniques in simulation and real-world scenarios.
- Analyze and evaluate the results of the simulations and experiments.

Possible venues for publications:
- IEEE Transactions on Visualization and Computer Graphics (Q1, IF 5.226)
- ACM Transactions on Graphics (Q1, IF 7.403)
- International Journal of Computer Vision (Q1, IF 13.369)
- Computer Graphics Forum (Q1, IF 2.363)
- IEEE Computer Graphics and Applications (Q2, IF 1.909)
- ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques
- EUROGRAPHICS Annual Conference of the European Association for Computer Graphics
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Required skills: The candidate should have a strong background in computer science. Experience in computer vision techniques, data science, and machine learning is a plus. The candidate should also have excellent problem-solving and analytical skills; strong communication and writing skills are a plus. The candidate should be self-motivated and able to work independently, as well as collaboratively with a team.

Group website: http://grains.polito.it/index.php
Summary of the proposal: Human-robot collaboration is a rapidly growing field that has the potential to revolutionize manufacturing, healthcare, and other industries by enhancing productivity and efficiency. However, for human-robot collaboration to be truly effective, robots must be able to learn and adapt to changing environments and tasks, as well as to the preferences and needs of their human collaborators. The goal of this research is to design and implement robot learning algorithms for robotic arm manipulators that enable effective human-robot collaboration in dynamic environments.
Topics: Human-Robot Collaboration; Robot Learning; Human-Robot Interaction
Research objectives and methods: The proposed research aims to advance the field of human-robot collaboration by developing and demonstrating innovative robot learning techniques that enable effective collaboration in dynamic environments.

A collaborative robotic arm manipulator is a type of robot that is designed to work alongside humans in a shared workspace. These robots are typically used in manufacturing, where they are capable of performing tasks such as picking, placing, and manipulating objects. Collaborative robotic arm manipulators are equipped with sensors and safety features that allow them to operate safely in close proximity to humans and are designed to be easy to use and adaptable to a wide range of tasks. They are used in a wide range of fields, including manufacturing, healthcare, retail, agriculture, construction, and service industries. Overall, the use of collaborative robotic arms is growing in a wide range of fields, as they are capable of performing tasks quickly, accurately, and safely, and can improve efficiency and reduce labor costs.

There are several techniques commonly used for programming and operating collaborative robotic arm manipulators:
- Lead-through programming involves physically guiding the robot through the desired motion path, while the robot records the movements and stores them as a program.
- Point-and-click programming involves using a graphical user interface (GUI) to specify the desired motion path by clicking on points on a screen.
- Scripting involves writing a program to specify the desired motion path and other actions.
- Hardware-in-the-loop (HIL) simulation involves using a simulation environment to test and verify the robot's motion and performance before running it on the actual hardware.

A novel emerging technique is learning by imitation, which refers to the process of a robot learning to perform a task by observing and copying the actions of a human or another robot. There are several approaches to robot learning by imitation, including:
- Behavioural cloning: in this approach, the robot is trained to mimic the actions of a human or another robot by learning from a dataset of demonstrations. The robot is provided with input data (e.g., sensor readings) and the corresponding output actions (e.g., joint angles) and uses this data to learn a mapping from input to output.
- Learning from demonstrations: In this approach, the robot is provided with a set of demonstrations and learns a model of the task by inferring the underlying rules or principles from the demonstrations.
- Human-robot interaction: In this approach, the robot learns by interacting with a human in a collaborative manner. The human provides guidance and feedback to the robot as it performs the task, allowing the robot to learn and improve over time.
Overall, robot learning by imitation is a powerful tool for enabling robots to learn new tasks and adapt to changing environments. It allows robots to learn from human expertise and experience and can reduce the need for explicit programming of tasks.
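As a minimal, hypothetical illustration of behavioural cloning (not a specific algorithm from this proposal): a linear policy for a single joint is fit by least squares from recorded state-action demonstrations and then queried on an unseen state.

```python
# Behavioural-cloning sketch for a hypothetical 1-DoF arm: learn a linear
# policy mapping a sensed state (e.g., position error) to a joint command
# from recorded human demonstrations.

def fit_linear_policy(states, actions):
    """Least-squares fit of action = w * state + b."""
    n = len(states)
    mean_s = sum(states) / n
    mean_a = sum(actions) / n
    cov = sum((s - mean_s) * (a - mean_a) for s, a in zip(states, actions))
    var = sum((s - mean_s) ** 2 for s in states)
    w = cov / var
    b = mean_a - w * mean_s
    return lambda s: w * s + b

# Demonstrations (illustrative): the operator commands roughly
# action = state + 0.1 at each recorded state.
demo_states  = [0.0, 0.5, 1.0, 1.5, 2.0]
demo_actions = [0.1, 0.6, 1.1, 1.6, 2.1]

policy = fit_linear_policy(demo_states, demo_actions)
print(round(policy(0.75), 3))  # → 0.85: the cloned policy generalizes between demos
```

In practice the mapping is learned with far richer models (e.g., neural networks over sensor streams), but the supervised input-to-output structure is the same.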

The goal is to research and develop novel learning by imitation techniques and to contribute to the development of more effective and efficient human-robot collaboration systems.

Active industrial collaboration is ongoing with Comau pertaining to human-robot collaboration systems and applications.

Research objectives:
- To review and analyze current state-of-the-art robot learning techniques for human-robot collaboration. This will involve a comprehensive literature review of existing approaches, including supervised and unsupervised learning, reinforcement learning, and transfer learning.
- To identify the challenges and limitations of these techniques in dynamic environments. This will involve a detailed analysis of the strengths and weaknesses of existing approaches, as well as an exploration of the unique challenges posed by dynamic environments such as manufacturing and healthcare settings.
- To propose and evaluate novel robot learning approaches that address these challenges and enhance human-robot collaboration. This will involve the development of new robot learning algorithms and approaches that are specifically designed to enable effective human-robot collaboration in dynamic environments.
- To demonstrate the practicality and effectiveness of the proposed techniques through simulations and experiments. This will involve the implementation and testing of the proposed techniques in realistic simulation environments, as well as in real-world scenarios to evaluate their performance and effectiveness.

Research work plan:
1st year
- Conduct a comprehensive literature review of existing learning by imitation techniques for human-robot collaboration with robotic arm manipulators.
- Identify and analyze the challenges and limitations of these techniques in dynamic environments.
- Improve programming, problem-solving, and data analysis skills pertaining to robot learning techniques and human-robot interaction.
- Develop strong communication skills to present research findings and ideas effectively to a range of audiences.
2nd year
- Develop and propose novel learning approaches (based on the learning by imitation technique) that address existing challenges and enhance human-robot collaboration.
3rd year
- Implement and test the proposed techniques in simulation and real-world scenarios.
- Analyze and evaluate the results of the simulations and experiments.

Possible venues for publications:
- IEEE Transactions on Robotics (Q1, IF 6.835)
- IEEE Transactions on Automation Science and Engineering (Q1, IF 6.636)
- International Journal of Human-Computer Interaction (Q1, IF 4.920)
- ACM Transactions on Human-Robot Interaction (Q2, IF 4.69)
- Frontiers in Robotics and AI (Q2, IF 3.47)
- IEEE International Conference on Human-Robot Interaction (HRI)
- International Conference on Robotics and Automation (ICRA)
Required skills: The candidate should have a strong background in computer science. Experience in data science and machine learning is a plus. The candidate should also have excellent problem-solving and analytical skills; strong communication and writing skills are also valued. The candidate should be self-motivated and able to work independently, as well as collaboratively within a team.

Group website: www.smilies.polito.it
Summary of the proposal: Modern SoCs include many dedicated hardware components that serve as optimized computing architectures for major technological trends and challenges, such as the Internet of Things, Artificial Intelligence, and Autonomous Driving. Given that they include multi-core microprocessors, memories, DSPs, and many other components, and usually run complex software, the dependability and security of SoCs remain an open challenge. The purpose of the Ph.D. is to investigate methodologies for ensuring dependability and security in RISC-V-based systems.
Topics: Dependability, Security, RISC-V
Research objectives and methods: - Research Objectives
o Since the quest for dependable and secure systems is a cross-layer problem, this research seeks analysis methodologies that, starting from data collectible from the SoC and from the application(s) it will run, can assess deviations at the earliest stages of execution. The main objective is to limit (or avoid altogether) any fault-injection activity needed to produce those data. The methodologies may include Artificial Intelligence techniques, statistical models, and more. In return, we expect to identify strategies to directly improve the RISC-V platform with facilities that ease early detection by providing extra data or reducing the need for fault injection. We plan to implement hardware modifications directly, if necessary; the same goes for any software feature, such as specific compilers.

- Outline
o Year 1 – Study of the monitoring capabilities of RISC-V architectures: During the first year, the Ph.D. student will explore the state of the art in early detection of dependability and security threats using HW/SW co-monitoring. The exploration will include the RISC-V support for hardware performance counters and the ecosystem of tools for their usage, i.e., microarchitectural simulators. This exploration will require ensuring that such tools fully support the RISC-V specifications. At the same time, the student will become acquainted with the most critical dependability and security threats in the literature.
o Year 2 – Machine learning models for early detection: the second year will see the student applying this knowledge to developing machine learning models for early detection. This task will require evaluating different data collection strategies (i.e., temporal-based versus overall evaluation) and ML/AI models. The assessment will be conducted against a set of dependability and security threats, which might require specific implementations to target the RISC-V architecture. The analysis will also point out limitations in the HW/SW co-monitoring capabilities, leading to RISC-V updates that add monitoring of missing events.
o Year 3 – Full anomaly detection support: in the last year of the Ph.D., the anomaly detection for security and dependability threats will be mature enough to support extensive benchmarking. The task will target the many application fields where RISC-V architectures will be employed, such as automotive or cloud systems. It will also allow fine-tuning the ML/AI strategies to reduce the overhead of model updates in the presence of new threats.
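A toy sketch of the kind of HPC-based early detection described in the outline (illustrative only: counter names and thresholds are invented, and real models would be richer than a per-counter z-score):

```python
# Flag anomalous program behaviour from hardware-performance-counter (HPC)
# samples using a simple per-counter z-score baseline learned from
# fault-free runs. Counter names are hypothetical, not tied to a core.
import statistics

def fit_baseline(samples):
    """samples: list of dicts {counter_name: value} from fault-free runs."""
    baseline = {}
    for c in samples[0]:
        values = [s[c] for s in samples]
        baseline[c] = (statistics.mean(values), statistics.stdev(values))
    return baseline

def is_anomalous(sample, baseline, z_threshold=3.0):
    """Flag the sample if any counter deviates beyond z_threshold sigmas."""
    for c, (mu, sigma) in baseline.items():
        if sigma > 0 and abs(sample[c] - mu) / sigma > z_threshold:
            return True
    return False

# Fault-free training runs (illustrative numbers).
clean = [{"cycles": 1000 + i, "cache_misses": 50 + i % 3} for i in range(20)]
model = fit_baseline(clean)
print(is_anomalous({"cycles": 1010, "cache_misses": 51}, model))   # → False
print(is_anomalous({"cycles": 1010, "cache_misses": 400}, model))  # → True
```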

- Publications
o We expect the candidate to submit their work to several high-level conferences (e.g., DATE, DAC, DFT, ESWEEK, MICRO) and top-ranking journals (e.g., IEEE Transactions on Computers, IEEE Transactions on VLSI). We foresee a minimum of two conference publications per year and at least two journal publications, spanning the second and third years.
This Ph.D. will be carried out within the framework of the VITAMIN-V project, an EU Horizon 2020-funded project, under the leadership of Prof. Stefano Di Carlo.
Required skills: The candidate should have a deep knowledge of modern computer architectures and master programming languages, including C/C++ and Python. An initial understanding of RISC-V architectures is desirable.

Group website: www.smilies.polito.it
Summary of the proposal: The growing need to transfer massive amounts of data among multitudes of interconnected devices, e.g., self-driving vehicles, IoT, or Industry 4.0, has led to a quest for low-power and secure approaches to processing data locally. Neuromorphic computing, a brain-inspired approach, addresses this need by radically changing information processing. The Ph.D. aims to develop models for the digital twins of neuromorphic accelerators. The digital twins will support the design of next-generation low-power and secure edge-computing systems, exploiting novel photonic neural networks coupled with RISC-V-compliant interfaces for smooth adoption and programmability.
Topics: Neuromorphic computing, Neural networks, RISC-V
Research objectives and methods: - Research Objectives
o The Ph.D. aims to develop models to support the design of modern neural network accelerators based on the integration of silicon photonics, novel PCMs, and Q-switched III-V lasers. Such accelerators will be low-power and secure. Still, the current development phase of neuromorphic chips requires support from a system-level simulation platform for PCM-based photonic low-power accelerators. Models developed in the framework of this Ph.D. will provide both functional capabilities and structural data, such as power consumption estimates, to pave the way for early device design and benchmarking before the physical chip is built.

- Outline
o Year 1 – Study of neuromorphic-enabled Neural Network architectures: During the first year, the Ph.D. student will explore the state of the art in neural network architectures that already have a physical counterpart based on neuromorphic materials. The exploration will include the RISC-V support for AI accelerators and the ecosystem of tools for their integration, i.e., microarchitectural simulators. This exploration phase could already produce preliminary NN models that closely emulate the behavior of neuromorphic materials, including properties such as power consumption.
o Year 2 – Preliminary digital twin of neuromorphic-enabled accelerators: following the knowledge base built in the previous year, the student will work on building a neuromorphic-enabled accelerator, deployable on an FPGA board. The task will require the development of digital-like models of the neuromorphic components and simulation tools to support the offline learning phase, leaving the FPGA component for inference operations. The full implementation will target the complete integration with the RISC-V environment.
o Year 3 – Full digital twin of the neuromorphic-enabled accelerator: in the last year of the Ph.D., the student will bring the digital twin of the neuromorphic-enabled accelerator to maturity by exploring the feasibility of supporting online learning. Indeed, neuromorphic-based accelerators are known to support self-learning capabilities, enabling their use in many new applications. At the same time, the activities from the previous year will undergo further benchmarking, targeting RISC-V compliance as well as the fine-tuning of the models of the physical properties of the neuromorphic materials, such as power consumption.

- Publications
o We expect the candidate to submit their work to several high-level conferences (e.g., DATE, DAC, DFT, ESWEEK, MICRO) and top-ranking journals (e.g., IEEE Transactions on Computers, IEEE Transactions on Neural Networks and Learning Systems). We foresee a minimum of two conference publications per year and at least two journal publications, spanning the second and third years.
This Ph.D. will be carried out within the framework of the NEUROPULS project, an EU Horizon 2020-funded project, under the leadership of Prof. Alessandro Savino.
Required skills: The candidate should have a deep knowledge of modern computer architectures and master programming languages, including C/C++, Python, and Rust. An initial understanding of RISC-V architectures is desirable.

Group website: http://www.netgroup.polito.it
Summary of the proposal: Next-Generation (NextG) networks are expected to support advanced and critical services, incorporating computation, coordination, communication, and intelligent decision making. The aim of the project is to design and implement novel mechanisms, based on supervised and unsupervised (distributed) learning within software-defined networks, to serve the needs of data-driven edge infrastructure management decisions. Moreover, the design of novel protocols, along with newly defined in-band network telemetry mechanisms, can favor the deployment of learning capabilities and increase the accuracy of network decisions.
Topics: Network Management – Machine Learning – Autonomous networks
Research objectives and methods: Two research questions (RQ) guide the proposed work:

RQ1: How can we design and implement on local and larger-scale testbeds effective transport and routing network protocols that integrate the network stack at different scopes using recent advances in supervised and unsupervised learning?

RQ2: To scale the use of machine learning-based solutions in network management, what are the most efficient distributed machine learning architectures that can be implemented at the network edge layer?

The final target of the research work is to answer these questions, also by evaluating the proposed solutions on small-scale network emulators or large-scale virtual network testbeds, using a few applications, including virtual and augmented reality, precision agriculture, and haptic wearables. In essence, the main goals are to provide innovation in network monitoring, network adaptation, and network resilience, using centralized and distributed learning integrated with edge computing infrastructures. Both vertical and horizontal integration will be considered. By vertical integration, we mean considering learning problems that integrate states across network hardware and software, as well as states across the network stack at different scopes. For example, the candidate will design data-driven algorithms for congestion control problems to address the tussle between in-network and end-to-end congestion notifications. By horizontal integration, we mean using states from local (e.g., physical layer) and wide-area (e.g., transport layer) scopes as input for the learning-based algorithms. The data needed by these algorithms are carried to the learning actor by means of newly defined in-band network telemetry mechanisms. Aside from supporting resiliency through vertical integration, solutions must offer resiliency across a wide (horizontal) range of network operations: from the close edge, i.e., near the device, to the far edge, with the design of secure data-centric resource allocation (federated) algorithms.

The research activity will be organized in three phases:
Phase 1 (1st year): the candidate will analyze the state-of-the-art solutions for network management, with particular emphasis on knowledge-based network automation techniques. The candidate will then define detailed guidelines for the development of architectures and protocols that are suitable for automatic operation and configuration of NextG networks, with particular reference to edge infrastructures. Specific use cases will also be defined during this phase (e.g., in virtual reality). Such use cases will help identify ad-hoc requirements and will include peculiarities of specific environments. With these use cases in mind, the candidate will also design and implement novel solutions to deal with the partial availability of data within distributed edge infrastructures. Results of this work will likely lead to conference publications.

Phase 2 (2nd year): the candidate will consolidate the approaches proposed in the previous year, focusing on the design and implementation of mechanisms for vertical and horizontal integration of supervised and unsupervised learning with network virtualization. Network, and computational resources will be considered for the definition of proper allocation algorithms. All solutions will be implemented and tested. Results will be published, targeting at least one journal publication.

Phase 3 (3rd year): the consolidation and the experimentation of the proposed approach will be completed. Particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. Major importance will be given to the quality offered to the service, with specific emphasis on the minimization of latencies in order to enable a real-time network automation for critical environments (e.g., telehealth systems, precision agriculture, or haptic wearables). Further conference and journal publications are expected.

The research activity is in collaboration with Saint Louis University, MO, USA, also in the context of the NSF grant #2201536 “Integration-Small: A Software-Defined Edge Infrastructure Testbed for Full-stack Data-Driven Wireless Network Applications”. Furthermore, it is related to active collaborations with Futurewei Inc. and Tiesse SpA, both interested in the covered topics.

The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking and machine learning (e.g. IEEE INFOCOM, ICML, ACM/IEEE Transactions on Networking, or IEEE Transactions on Network and Service Management) and cloud/fog computing (e.g. IEEE/ACM SEC, IEEE ICFEC, IEEE Transactions on Cloud Computing), as well as in publications related to the specific areas that could benefit from the proposed solutions (e.g., IEEE Transactions on Industrial Informatics, IEEE Transactions on Vehicular Technology).
Required skills: The ideal candidate has good knowledge and experience in networking and machine learning, or at least in one of the two topics. Availability for spending periods abroad (mainly but not only at Saint Louis University) is also important for a profitable development of the research topic.

Group website: http://www.netgroup.polito.it
Summary of the proposal: In recent years, we have experienced increasing interest in the adoption of networked systems in specific critical scenarios, with particular involvement of the network edge, where these applications usually run. The aim is to automate the orchestration and management of these systems, so that they can rapidly adapt to changing situations and consequently guarantee the required quality. The approach is promising, but also challenging. For example, data are not always available or obtainable, and even when they are, retraining machine learning models is slow and expensive.
Topics: Network management – Machine learning – Autonomous networks
Research objectives and methods: The main objective of the proposed research is to design and implement new network management architectures that integrate machine learning and network virtualization technologies for the orchestration of edge infrastructures. The design will holistically consider inference techniques for distributed resource discovery, algorithmic solutions for machine learning-driven service mapping, and distributed resource allocation decisions for (sustainable) training across wireless and wired network application processes. In particular, the proposed solutions will integrate learning theories and network virtualization in novel ways, focusing on a few mechanisms that edge network providers run to create, deploy, and maintain valuable services: 1) inference techniques in network telemetry for resource discovery, 2) machine learning techniques for distributed network and service mapping, to optimize computation placement and transmission parameters, and 3) allocation, to bind virtual resources to the physical edge infrastructure in support of ultra-low latency networked applications. First, the candidate will study novel solutions based on inference techniques to design network discovery protocols that can reconstruct states of a (wireless) edge network with only aggregate measurement information. Second, novel network protocols and architectures will be designed to solve network and service mapping problems with partially known network states, using unsupervised learning. Third, novel resource allocation mechanisms will be designed and implemented using distributed consensus protocols with theoretical guarantees to properly manage distributed latency-sensitive applications. A further objective will be the evaluation of the developed solutions in significant use-case scenarios. For example, offloading of computational tasks from Internet of Things devices to the edge will be considered.
In such a scenario, proper management of the network edge resources is key to guaranteeing the required quality, e.g., in terms of task completion time. Other use cases might be related to telehealth or other industrial applications, where low and bounded latencies are a must for an effective service.

The research activity will be organized in three phases:

Phase 1 (1st year): the candidate will analyze the state-of-the-art solutions for network management, with particular emphasis on knowledge-based network automation techniques. The candidate will then define detailed guidelines for the development of architectures and protocols that are suitable for discovery, mapping, and allocation within the IoT-Edge-Cloud continuum. Specific use cases will also be defined during this phase (e.g., in telehealth). Such use cases will help identify ad-hoc requirements and will include peculiarities of specific environments. With these use cases in mind, the candidate will design and implement novel solutions to deal with the partial availability of data within distributed edge infrastructures. Results of this work will likely lead to conference publications.

Phase 2 (2nd year): the candidate will consolidate the approaches proposed in the previous year, considering both the network and service mapping problem and the resource allocation task. This will lead (likely as part of the third year activity) to the definition of a comprehensive and automatic network and service management framework for distributed edge infrastructures. Solutions will be implemented and tested. Results will be published, targeting at least one journal publication.

Phase 3 (3rd year): the consolidation and experimentation of the proposed approach will be completed. Particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. Major importance will be given to the quality offered to the service, with specific emphasis on the minimization of latencies in order to enable real-time network automation for critical environments (e.g., industrial networks or telehealth systems). Further conference and journal publications are expected.

The research activity is in collaboration with Saint Louis University, MO, USA, also in the context of the NSF grant #2201536 “Integration-Small: A Software-Defined Edge Infrastructure Testbed for Full-stack Data-Driven Wireless Network Applications”. Furthermore, it is related to active collaborations with Futurewei Inc. and Tiesse SpA, both interested in the covered topics.

The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking and machine learning (e.g. IEEE INFOCOM, ICML, ACM/IEEE Transactions on Networking, or IEEE Transactions on Network and Service Management) and cloud/fog computing (e.g. IEEE/ACM SEC, IEEE ICFEC, IEEE Transactions on Cloud Computing), as well as in publications related to the specific areas that could benefit from the proposed solutions (e.g., IEEE Transactions on Industrial Informatics, IEEE Transactions on Vehicular Technology).
Required skills: The ideal candidate has good knowledge and experience in networking and machine learning, or at least in one of the two topics. Availability for spending periods abroad (mainly but not only at Saint Louis University) is also important for a profitable development of the research topic.

Group website: http://netgroup.polito.it
http://netgroup.polito.it/post/prof...
Summary of the proposal: The research will focus on enhancing existing sensing technology in vehicles to extend the perception of the surrounding environment. The goal is the development of a collaborative perception approach that leverages multiple sources in order to have a clearer representation of the environment in which autonomous vehicles will travel.
Topics: COLLABORATIVE PERCEPTION, AUTOMATED DRIVING, VEHICULAR COMMUNICATION
Research objectives and methods: This scholarship refers to an industrial collaboration with Stellantis.

Environment perception constitutes a foundational block for Automated Driving Systems (ADS). Despite significant advances in sensor technology in recent years, the perception capability of local sensors is ultimately bounded in range and field of view (FOV) by their physical constraints. In addition, occluding objects in urban traffic environments, such as buildings, trees, and other road users, pose perception challenges. There are also robustness-related concerns (sensor degradation in adverse weather conditions, sensor interference, hardware malfunction and failure). Enhancing such capabilities is imperative to overcome the difficulties of complex environments such as urban scenarios. Occluding obstacles and the sudden appearance and disappearance of detected objects and people are just a few of the challenges that traditional tracking algorithms may face in an urban context and that hinder their performance. Furthermore, approaches that deal with data association and merging are still physically limited by the point of view of the ego vehicle. A new approach is therefore needed to allow all traffic participants and the infrastructure to share information on objects (Vulnerable Road Users (VRUs), obstacles, other vehicles) detected by their object-tracking sensors. Ultimately, collaborative environment perception will affect the Operational Design Domain (ODD), i.e., the description of the specific operating conditions in which the ADS is designed to operate properly.
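As a toy illustration of the information-sharing idea above (hypothetical code, not the project's pipeline): detections from the ego vehicle and a nearby peer, assumed to be already expressed in a common global frame, are associated by distance gating and merged, so that objects occluded for the ego vehicle still enter its environment model.

```python
# Fuse object detections shared by two viewpoints into one environment
# model. Detections are (x, y) positions in a shared global frame;
# detections within `gate` metres of each other are merged by averaging.
import math

def fuse_detections(dets_a, dets_b, gate=1.0):
    """dets_*: lists of (x, y) positions in a shared global frame."""
    fused, used_b = [], set()
    for ax, ay in dets_a:
        match = None
        for j, (bx, by) in enumerate(dets_b):
            if j not in used_b and math.hypot(ax - bx, ay - by) <= gate:
                match = j
                break
        if match is None:
            fused.append((ax, ay))                        # seen only by A
        else:
            bx, by = dets_b[match]
            used_b.add(match)
            fused.append(((ax + bx) / 2, (ay + by) / 2))  # seen by both
    # Objects occluded for A but visible to B extend A's field of view.
    fused += [d for j, d in enumerate(dets_b) if j not in used_b]
    return fused

ego   = [(10.0, 5.0)]                  # ego vehicle sees one object
other = [(10.4, 5.2), (30.0, 2.0)]     # peer also sees an occluded object
print(fuse_detections(ego, other))     # merged object plus the occluded one
```

A real system would additionally handle coordinate transformation, time synchronization, measurement uncertainty, and track identity, which is exactly the gap this research addresses.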

The research work plan will be articulated as follows:
First year:
• Definition of a collaborative perception approach to merge different perspectives coming from different sensors and nearby vehicles to enhance the reliability of the environment perception of automated vehicles in complex scenarios.
• Application of the collaborative perception approach first to some specific scenarios, analyzing in particular important issues such as full sensor coverage, precision of measured attributes, real-time operation, time synchronization, etc.
Second year:
• Creation of a simulation framework that incorporates the entire collective perception pipeline, enabling the comprehensive study of sensor-based perception, Intelligent Road-Side Units (IRSU), and connected ADS platforms. It is also expected to provide detailed insight into end-to-end delays and the aging of information within the environmental model of automated vehicles.
• Extension of the perception approach from conventional detection (complex roadside sensor tech, vehicle is a passive object) to collaborative detection (where the vehicle is an active part of the process).
Third Year:
• Application to ADS of Level 4 and above, increasing progressively the ODD, specifically:
• ODD extension of Ego Vehicle equipped with ADS of level L4 in complex urban scenarios (occlusions, appearing/disappearing, intersections, etc.)
• Cooperative driving mobility in “reserved” areas (limited traffic zones)
• “Robo-taxis” operated in an Autonomous Mobility on Demand service

Publication venues:
Conferences:
IEEE Infocom, IEEE CCNC, ACM Mobicom, ACM Mobihoc

Journals:
IEEE Transactions on Mobile Computing
IEEE Transactions on Networks and Service Management
IEEE Communication Magazine
Required skills: Required: Programming skills in C/C++/Python. Desirable:
- Knowledge of Data management and ML/AI techniques.
- Familiarity with concepts of simulation and mobile/vehicular communication.

Group website: -
Summary of the proposal: The most challenging current demand of the agricultural sector is the production of sufficient and safe food for a growing population without over-exploiting natural resources. This challenge is placed in a difficult context of unstable climate conditions, with competition for land, water, energy, and in an increasingly urbanized world. The research activity aims to increase the competitiveness of the agri-food system in terms of safety, quality, sustainability, and added value of food products.
Topics: Wireless sensor network
Image analysis
Precision agriculture
Research objectives and methods: The research activity of the PhD candidate will investigate devices and techniques for monitoring the agricultural produce in a holistic vision, with the aim of limiting environmental pollution, preventing the misuse of pesticides and fertilizers, reducing water and energy request, and increasing net profit.

A first activity concerns the development of a low-cost proximity monitoring system. Off-the-shelf sensors will be selected to measure the most meaningful parameters, such as temperature and humidity of both air and soil, light conditions, soil pH, and the concentration of NPK (nitrogen, phosphorus, and potassium) in the soil. The adoption of low-cost sensors will make possible a pervasive distribution in the environments to be monitored. All the gathered data will be associated with GPS coordinates and the date and time of the measurement. The measurements will be repeated several times at different points in the crop; at the end of each sampling session, the measurements will be synchronized to a server to keep track of them over time. The integration of sensing, computing, and communication functionalities within small-size devices will be a key element in increasing the pervasiveness and robustness of the network. Possibly, the integration of the sensor network with drone-based systems will be investigated.
A closely related subsequent activity concerns the analysis of the data collected by the sensor network; several goals are set, as detailed in the following. Different calibration strategies will be evaluated: reference values provided by other sensors will be used to determine the most effective calibration strategies and when calibration needs to be repeated in order to ensure precise measurements. The correlation of the collected data with operating and environmental conditions (e.g., measurement range, microclimatic characteristics) will be analyzed in order to assess the variability of the measurements, both in time and space. In particular, understanding spatial variability may lead to the development of models for data spatialization. Finally, the benefits of sensor redundancy, in terms of data availability, reliability, network performance, and maintainability, will be investigated.
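One of the calibration strategies mentioned above can be sketched as a least-squares linear correction of a low-cost sensor against a trusted reference sensor (illustrative numbers, hypothetical sensor):

```python
# Fit a linear correction, reference = gain * raw + offset, for a
# low-cost sensor using co-located readings from a reference sensor.

def fit_calibration(raw, reference):
    """Least-squares fit of reference = gain * raw + offset."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_f = sum(reference) / n
    cov = sum((x - mean_r) * (y - mean_f) for x, y in zip(raw, reference))
    var = sum((x - mean_r) ** 2 for x in raw)
    gain = cov / var
    offset = mean_f - gain * mean_r
    return lambda x: gain * x + offset

# Hypothetical low-cost soil-temperature sensor reading high, with both
# an offset and a slope error relative to the reference instrument.
raw_readings = [10.5, 15.75, 21.0, 26.25]
reference    = [8.0, 13.0, 18.0, 23.0]
calibrate = fit_calibration(raw_readings, reference)
print(round(calibrate(18.9), 2))  # → 16.0
```

Comparing residuals of the fitted correction over time is one simple way to decide when calibration needs to be repeated.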

A complementary research activity will focus on optical remote sensing. Non-destructive analysis techniques based on UV-Vis-NIR spectroscopy will be adapted to allow continuous monitoring of many critical aspects of the production. In particular, new procedures will be developed to correlate the absorption of light radiation, measured with spectroscopic techniques, with the chemical and physical properties of soil, crops, and horticultural produce. Computer graphics techniques will be studied for developing new protocols for computing vegetation and soil indices (e.g., NDVI, GNDVI, SAVI, RE). Images will be taken by cameras at different wavelengths, ranging from 1 to 14 microns. Algorithms for pattern analysis and recognition will be developed for the automatic identification of specific parts of the plant, such as leaves or stem, and for the detailed analysis of its state of health, with the goal of correlating the images of leaves to the growth and onset of specific diseases.
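The vegetation indices mentioned above are simple per-pixel band combinations; for instance, NDVI and SAVI can be computed from near-infrared (NIR) and red reflectances as follows (reflectance values below are invented):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L=0.5 is a common choice for
    intermediate vegetation cover."""
    return (nir - red) * (1 + L) / (nir + red + L)

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# so its NDVI is much higher than that of bare soil
ndvi_canopy = ndvi(nir=0.50, red=0.08)
ndvi_soil = ndvi(nir=0.30, red=0.25)
```

Mapping such per-pixel values over the whole multispectral image is what produces the vegetation maps used to monitor growth and stress.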

The PhD research activities can be grouped into three consecutive phases, each one roughly corresponding to one year of the PhD career. Initially, the PhD candidate will improve his/her background by attending PhD courses and surveying relevant literature. After this initial training, the student is expected to select and evaluate the most promising solutions for monitoring agricultural produce. The second phase regards experimental activities in the field aimed at the development of monitoring systems and techniques, such as the integration and deployment of the sensor network, the evaluation of effective calibration strategies, the acquisition of multispectral images, the computation of vegetation and soil indices, and the integration with drone-based systems. Finally, the collected data will be analyzed during the third phase with different goals: assessment of the measurement variability according to the operating conditions (e.g., measurement range, microclimatic characteristics, etc.), influence of sensor redundancy on the network performance, modeling of the spatial distribution of data, relationship between sensor measurements and vegetation indices, etc.

The research will be carried out as part of the activities of the National Research Centre for Agricultural Technologies (Agritech).

Some expected target publications are:
- IEEE Transactions on AgriFood Electronics
- ACM Transactions on Sensor Networks
- IEEE Transactions on Image Processing
- Information Processing in Agriculture (Elsevier)
- Computers and Electronics in Agriculture (Elsevier)
Required skills: As the research activity regards the design, development, and evaluation of digital technologies for next-generation agriculture in a holistic vision, the PhD candidate is required to have multidisciplinary skills, e.g., distributed computing, embedded systems, computer networks, security, computer graphics, programming, and database management.

Group website: https://media.polito.it/
Summary of the proposal: The recent integration of real-time audio functionalities in web browsers opens up new possibilities for interactive and distributed audio systems: e.g., anyone with a browser on a mobile device can connect with an interactive installation and let the sound mediate the interaction with it. This proposal aims at improving such interactive audio performances by means of automatic techniques that could compensate both for technical issues and for sonic issues related to the information conveyed by the sound, its meaning, and its emotional content.
Topics: audio processing, web development, multimedia communications
Research objectives and methods: # Research objectives.

Many sonic interaction works have been recently proposed for web-based applications. However, almost all of them are designed to interact with a single user in an acoustically isolated environment, e.g., through headphones.

The main challenges, when dealing with multiple users and devices that share the same physical space and software application, are coordination and synchronization. In fact, different devices exhibit different behaviors because of their hardware and software capabilities. Even in a human scenario, e.g., an orchestra, a conductor is required to provide a means for synchronization, and a tuning session is required to set up each instrument before the performance.

Starting from the Internet Media Group's ongoing work on web audio and streaming technologies, the candidate will derive a systematic approach by analyzing and taking advantage of the synchronization and coordination techniques discussed in the literature for different scenarios. The final aim is to realize a framework that will improve collective sonic interaction when web applications are used.

Audio functionalities for web applications and mobile devices are relatively new, and further studies are needed to improve their usability, stability, and performance. To this aim, we plan to focus on technical issues in order to improve the automatic setup of the devices that, like the instruments in an orchestra, should be tuned and synchronized. For example, it is essential to equalize their sound levels, identify their positions, know their latencies, and be able to synchronize them on the beats.
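One standard building block for such automatic setup is NTP-style clock offset estimation, which a web client can perform by exchanging four timestamps with a server over a data channel. The arithmetic is sketched here in Python for clarity (the real framework would run in JavaScript; the scenario values are invented):

```python
def clock_offset(t0, t1, t2, t3):
    """Estimate clock offset and round-trip delay from one exchange.

    t0: client send time, t1: server receive time,
    t2: server reply time, t3: client receive time,
    each in its own local clock (e.g., milliseconds).
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2   # how far the server clock is ahead
    delay = (t3 - t0) - (t2 - t1)          # network round-trip time
    return offset, delay

# Invented scenario: server clock 30 ms ahead, symmetric 10 ms latency.
# Client sends at 0; server receives at 40 and replies at 42 (its clock);
# the client receives the reply at 22 (its clock).
offset, delay = clock_offset(0, 40, 42, 22)
```

Averaging the estimate over several exchanges, keeping those with the smallest delay, is the usual way to reduce the effect of network jitter.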

Such objectives will be achieved by using both theoretical and practical approaches. The resulting insight will then be validated in practical cases by analyzing the performance of the system with simulations and real experiments. In this regard, the research will be carried out in close cooperation with the Turin Music Conservatory, so as to supplement our experience in sonic production and interaction.

# Outline of the research work plan

In the first year, the PhD candidate will familiarize with the Web Audio and WebRTC APIs for audio processing in the web browser, as well as with the characteristics of the existing applications for sonic interaction. This activity will address the creation of a JavaScript framework that could allow multiple devices to coordinate and interact together in real time through a web application, along with the definition of a set of practical use cases. Such activity, culminating in the implementation, analysis, and comparison of different synchronization techniques in a web environment, is expected to lead to conference publications.

In the second year, building on the knowledge already present in the research group and on the candidate's background in Internet networking technologies, new experiments for automatic tuning and synchronization of the devices will be developed, simulated, and tested to demonstrate their performance and their ability to improve the coordination of sound events and the interaction through the devices. The actual production of new sound works will be crucial for this assessment. In this context, the potential advantages of such techniques will be systematically analyzed. These results are expected to yield at least a journal publication.

In the third year, the activity will be expanded to study new sonic interaction experiences. In this context, this novel approach could unfold new possibilities in the design of interfaces for musical expression and in the composition of multisource electro-acoustic music. Such proposals will target journal publications.

Throughout the whole PhD program, the Electronic Music Studio and School of the Music Conservatory of Turin will be involved in the research activity, specifically focusing on its practice-based aspects and the production of new interactive sound works. For this reason, the candidate is not required to be able to play a musical instrument or to know music theory.

# List of possible venues for publications

Possible targets for research publications (well known to the proposer) include IEEE Transactions on Multimedia, ACM Transactions on Multimedia Computing Communications and Applications, Elsevier Multimedia Tools and Applications, Computer Music Journal, Journal of the Audio Engineering Society, various international conferences (Web Audio Conference, New Interfaces for Musical Expression Conference, International Computer Music Conference, International Symposium on Computer Music Multidisciplinary Research, Audio Mostly Conference, Sound and Music Computing Conference, IEEE Intl. Conf. Multimedia and Expo, AES Conference, ACM WWW, ACM Audio Mostly, ACM SIGGRAPH).
Required skills: The candidate is expected to have a good background in computer networking and Web development. A reasonable knowledge of operating system programming and software development in the Unix/Linux environment is appreciated.

Group website: www.cad.polito.it
Summary of the proposal: The PhD program focuses on the development of low-cost solutions for health monitoring. We aim at developing algorithms compatible with commercial off-the-shelf wearable devices (e.g., smartwatches) to identify the Sleep Apnea (SA) pathology. Most affected persons are unaware that they suffer from SA; hence, they are at risk of more serious diseases and of fatigue-related accidents. A non-invasive, low-cost solution that can seamlessly monitor people's sleep and automatically detect SA could significantly increase the quality of life of a vast portion of the world's population.
Topics: Internet of Things, Digital Health Care, Sleep Analysis
Research objectives and methods: Sleep Apnea is a potentially serious sleep disorder in which breathing repeatedly stops and starts, whose most evident side effects are loud snoring and tiredness after a full night's sleep.
The main types of sleep apnea are:
• Obstructive sleep apnea (OSA), which is the more common form that occurs when throat muscles relax and block the flow of air into the lungs;
• Central sleep apnea (CSA), which occurs when the brain doesn’t send proper signals to the muscles that control breathing;
• Treatment-emergent central sleep apnea, also known as complex sleep apnea, which happens when someone has OSA - diagnosed with a sleep study - that converts to CSA when receiving therapy for OSA.

Approximately 1 billion people worldwide between the ages of 30 and 69 years are estimated to have the most common type of sleep-disordered breathing, obstructive sleep apnea (OSA). Complications of OSA can include:
• Daytime fatigue. The repeated awakenings associated with sleep apnea make typical, restorative sleep impossible, in turn making severe daytime drowsiness, fatigue and irritability likely. People with sleep apnea have an increased risk of motor vehicle and workplace accidents.
• High blood pressure or heart problems. Sudden drops in blood oxygen levels that occur during OSA increase blood pressure and strain the cardiovascular system.
• Type 2 diabetes. Having sleep apnea increases the risk of developing insulin resistance and type 2 diabetes.
• Liver problems. People with sleep apnea are more likely to have irregular results on liver function tests, and their livers are more likely to show signs of scarring, known as nonalcoholic fatty liver disease.

Despite the important social impact of OSA, approximately 80%-90% of OSA cases remain undiagnosed. Thanks to the development of novel wearable sensing technologies, such as those found in modern smartwatches, it is now possible to collect an enormous amount of data with a quality comparable to that coming from expensive and invasive medical equipment.
In this research we intend to leverage the sensing technology offered by low-cost commercial off-the-shelf smartwatches to develop a system able to identify OSA. The research program, which will be developed in collaboration with the start-up company Sleep Advice Technologies and the J-Medical sleep medicine doctors, will consist of the following activities:
Year 1
Literature review, analysis of existing sleep databases, design of dedicated sleep recording experiments, data collection, and data analysis to identify a ground truth for a representative subset of the population (healthy males and males under beta-blocker prescription). This activity will be performed in cooperation with sleep medicine doctors who will analyze the collected data and will provide sleep scoring and SA identification, if any. Commercially available wearable devices will be used to collect data (e.g., Garmin smartwatches, WearOS-based smartwatches) as well as medical-grade polysomnographic devices.
Year 2
Development of an algorithm for automatic OSA identification based on physiological measurements such as oxygen saturation (SpO2), respiration rate (RR), and heart rate variability (HRV). Three steps are foreseen: an algorithm suitable for healthy patients will be first developed by analyzing the available data set; then the algorithm will be extended to consider patients under beta-blocker prescription; finally, the algorithm will be adapted to cope with data affected by noise. Algorithms will be first developed starting from high-quality data provided by polysomnographic devices, and then they will be adapted to data coming from wearable devices.
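As an illustration of the kind of processing involved in the first step, here is a toy detector that counts oxygen-desaturation events in an SpO2 trace. The thresholds and data are invented; clinical scoring follows AASM rules, with per-event baselines, and is considerably more involved:

```python
def count_desaturations(spo2, baseline=96.0, drop=3.0, min_len=3):
    """Count runs of at least `min_len` consecutive samples lying at
    least `drop` percentage points below `baseline` (toy criterion)."""
    events, run = 0, 0
    for s in spo2:
        if s <= baseline - drop:
            run += 1
        else:
            if run >= min_len:
                events += 1
            run = 0
    if run >= min_len:           # the trace may end inside an event
        events += 1
    return events

# Invented one-night excerpt (one sample every few seconds): two dips
trace = [96, 96, 95, 92, 91, 92, 96, 96, 93, 92, 92, 95, 96]
```

Dividing such an event count by the hours of sleep yields an oxygen desaturation index, one of the physiological signals an OSA classifier can build on.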
Year 3
In this year an experimental validation of the OSA algorithm will be performed. A new data set will be collected on a representative subset of the population and a blind-study evaluation will be performed: the collected data set will be independently assessed by the sleep medicine doctors, and by using the developed algorithms. The results produced by the two approaches will then be compared.

The research program will be performed in collaboration with Sleep Advice Technologies Srl, and J-Medical.
The expected outcomes are:
• International patents
• Journal publications such as IEEE Transactions.
Required skills: MATLAB, Python, or C/C++ programming skills

Group website: http://media.polito.it
Summary of the proposal: Machine learning (ML) has significantly changed the way many optimization tasks are addressed. Here the focus is on optimizing the media compression and communication scenario by trying to predict users' quality of experience. Key objectives of this proposal are the creation of tools to analyze and exploit large-scale datasets using ML in order to identify the media characteristics and features that most influence perceptual quality. Such new knowledge will be fundamental to improve existing measures and algorithms.
Topics: Media Quality Evaluation, Machine Learning, Large Scale Datasets
Research objectives and methods: In recent years, machine learning (ML) has been successfully employed to develop video quality estimation algorithms (see, e.g., the Netflix VMAF proposal) to be integrated in media quality optimization frameworks. However, despite these improvements, no technique can currently be considered reliable, partly because the inner workings of ML models cannot be easily and fully understood, especially when they are based on "black box" neural network models.

We aim to improve the situation by developing more reliable and explainable quality prediction models. Starting from the Internet Media Group's ongoing work on modeling the behavior of single human subjects in media quality experiments, the candidate will derive a systematic approach by employing several subjectively annotated datasets (i.e., with quality scores given by human subjects). With such an approach we expect to be able to identify meaningful media quality features useful to develop new reliable and explainable quality prediction models.

However, to identify and improve such features by using ML models, it is important to also include large-scale datasets that are not subjectively annotated. To efficiently deal with this large amount of data, it is necessary to develop a framework comprising a set of tools that allows both the subjective scores (given by human subjects) and the objective scores to be processed in an efficient and integrated manner, since currently every dataset has its own characteristics, quality scale, way of describing distortions, etc., which make integration difficult. Such a framework, which we will make publicly available for research purposes, will constitute the basis for reproducible research, which is increasingly important for ML techniques. The framework will allow existing quality prediction algorithms to be systematically investigated, finding strengths and weaknesses, as well as identifying the most challenging content on which newer developments can be based.

Such objectives will be achieved by using both theoretical and practical approaches. The resulting insight will then be validated in practical cases by analyzing the performance of the system with simulations and experiments with industry-grade signals, leveraging cooperation with companies to facilitate the migration of the developed algorithms and technologies into prototypes that can then be effectively tested in real industrial media processing pipelines.
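Investigating the strengths and weaknesses of an existing quality metric typically starts from its correlation with the subjective scores. The two standard figures of merit, PLCC and SROCC, can be sketched in plain Python as follows (the scores below are invented):

```python
def pearson(x, y):
    """Pearson linear correlation coefficient (PLCC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation (SROCC): Pearson on the ranks.
    For simplicity this sketch assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Invented example: objective metric scores vs. mean opinion scores (MOS)
metric = [30.1, 35.4, 38.2, 41.0, 45.7]
mos = [1.8, 2.9, 3.1, 3.8, 4.6]
```

SROCC captures monotonic agreement regardless of scale, which is why it is usually reported alongside PLCC when comparing quality metrics across datasets with different quality scales.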

The workplan of the activities is detailed in the following.

In the first year, the PhD candidate will first familiarize with the recently proposed ML and AI-based techniques for media quality optimization, as well as with the characteristics of the datasets publicly available for research purposes. In parallel, a framework will be created to efficiently process the large sets of data (especially for the video case) with potentially complex measures that might need retraining, fine-tuning, or other computationally complex optimizations. We expect to make this framework publicly available, also to address the research reproducibility issues that are of growing interest in the ML community. This initial investigation and these activities are expected to lead to conference publications.

In the second year, building on the framework and the theoretical knowledge already present in the research group, new media quality indicators for specific quality features will be developed, simulated, and tested to demonstrate their performance and, in particular, their ability to identify the root causes of the quality scores for several existing quality prediction algorithms, thus partly explaining their inner workings in a more understandable form. In this context, potential shortcomings of such algorithms will be systematically identified. These results are expected to yield one or more journal publications.

In the third year, the activity will be expanded to propose improvements that can mitigate the identified shortcomings, as well as to create proposals for quality prediction algorithms based on the previously identified robust features. Such proposals will target journal publications.

Possible targets for research publications, well known to the proposer, include IEEE Transactions on Multimedia, Elsevier Signal Processing: Image Communication, ACM Transactions on Multimedia Computing Communications and Applications, Elsevier Multimedia Tools and Applications, various IEEE/ACM international conferences (IEEE ICME, IEEE MMSP, QoMEX, ACM MM, ACM MMSys).

The proposer is actively collaborating with the Video Quality Experts Group (VQEG), an international group of experts from academia and industry that aims to develop new standards in the context of video quality. In particular, the proposer is co-chair of the JEG-Hybrid project, which is very interested in the activity previously described.
Required skills: The PhD candidate is expected to have:
- strong analytical skills;
- some background on ML systems;
- good English writing and communication skills;
- reasonably good ability to work with large quantities of data on remote server systems, in particular by automating the procedures with scripts, pipelines, etc.

Group website: eda.polito.it
Summary of the proposal: How to efficiently execute compute- and data-intensive applications on advanced hardware platforms, which are typically parallel and heterogeneous, is one of the fundamental problems in today’s computer engineering world, whose solution is provided by modern compiler infrastructures.

In this thesis, the candidate will have the chance to study and improve such infrastructures, focusing in particular on compilers for deep learning models, which represent a key application for these technologies, due to their pervasiveness in modern applications, and to their high complexity and peculiar features from a computational standpoint.
Topics: Deep Learning; Compilers; Heterogeneous Computing
Research objectives and methods: The candidate will have the chance to work on two main technologies: Tensor Virtual Machine (TVM) and Multi-Level Intermediate Representation (MLIR). The former is an open-source deep learning compiler and runtime that aims to accelerate the deployment of machine learning models on a variety of hardware targets, including CPUs, GPUs, and accelerators. MLIR is a novel approach to building reusable compiler infrastructures, whose scope goes beyond machine learning and extends to any kind of domain-specific computing.

The candidate will work on applying and extending TVM and MLIR to support the deployment of complex deep learning models on constrained edge devices based on the open-source RISC-V instruction set architecture, ranging from simple microcontroller units (MCUs) to heterogeneous multi-accelerator systems.

Detailed objectives:
1. To review the existing literature on the execution of deep neural networks (DNNs) on low-power heterogeneous edge devices and to identify potential optimization techniques that can be ported inside TVM or implemented as a dialect optimization step in MLIR. In particular, a set of different optimizations will be developed, which can be either "general-purpose", i.e., applied to every hardware target, "specific", i.e., applied to a single target, or "tunable", i.e., applied to every target but with target-specific parameters. These will include network graph rewriting optimizations (layer fusion or replacement), individual computational kernel optimizations (loop reordering, tiling, and fusion), and memory management optimizations (DMA, double buffering, etc.). Frameworks for implementing these optimizations, such as polyhedral compilation, will be studied and customized for the specific objectives.
2. To study a set of edge-relevant benchmarks and hardware platforms for evaluating the effectiveness of the developed techniques. Examples of benchmarks include the TinyML Perf Suite, which lists four tasks and DNNs and is considered an industrial standard in this field, as well as custom benchmarks related to the EDA group’s projects, ranging from industrial assets monitoring to energy management, and biosignal processing. The target hardware platforms will include Diana, a System-on-Chip (SoC) developed by KU Leuven that contains a main RISC-V control unit and two DNN hardware accelerators, one 16x16 digital systolic accelerator, and one 1152x512 analog-in-memory computing accelerator; GAP8 and GAP9 by GreenWaves Technologies, characterized by a RISC-V control unit and a small parallel cluster of 8/9 additional RISC-V cores respectively, with a dedicated Level 1 scratchpad memory; GAP9 also includes a programmable DNN-specific digital accelerator.
3. To implement and evaluate the selected optimization techniques on the selected DNNs, benchmarks and low-power hardware platforms.
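To give a flavor of the kernel-level transformations named in objective 1, the sketch below contrasts a loop-tiled matrix multiplication with the naive triple loop. Tiling reorders the iteration space so the working set of each block can fit in a scratchpad or L1 memory; this is illustrative Python, whereas a compiler such as TVM or MLIR would emit the same structure as C or target intrinsics:

```python
def matmul_naive(A, B):
    """Reference triple-loop matrix multiply."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def matmul_tiled(A, B, tile=2):
    """Loop-tiled matrix multiply: same arithmetic, reordered so each
    tile of A, B, and C is reused while it is hot in fast memory."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # one tile: operands small enough for local memory
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = C[i][j]
                        for p in range(k0, min(k0 + tile, k)):
                            acc += A[i][p] * B[p][j]
                        C[i][j] = acc
    return C
```

The tile size is exactly the kind of "tunable" parameter mentioned above: the transformation applies to every target, but the best value depends on each platform's memory hierarchy.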

Outline of Work:
1. Familiarization with the target applications and hardware platforms. The three aforementioned target platforms will be configured and tested until the candidate can easily program them. The candidate will also familiarize with the relative Software Development Kits (SDKs) and compilation toolchains. Next, the candidate will study the target applications and DNN models, replicating state-of-the-art results on each of them.
2. Review of the existing literature on the compilation of DNNs for low-power platforms and identification of potential optimization techniques that can be implemented using TVM and MLIR to improve the efficiency of models deployed on the selected hardware targets. This will involve a thorough review of relevant papers, articles, and other sources to identify the most promising optimization techniques.
3. Implementation of the selected optimization techniques in TVM and/or MLIR.
4. Measurement of the performance, energy efficiency or memory occupation improvements obtained through the developed techniques, and analysis of the results to determine the most effective optimization set of steps for each platform.
5. Possible development of new target-specific optimizations.

The ultimate goal of this research is to improve the efficiency of DNN execution on low power devices, enabling their wider use in a variety of applications.

Possible publication venues for this thesis include:
- IEEE Transactions on CAD
- IEEE Transactions on Computers
- IEEE Internet of Things Journal
- IEEE Transactions on Emerging Topics in Computing
- IEEE Transactions on Parallel and Distributed Systems
- ACM Transactions on Embedded Computing Systems
- ACM Transactions on Design Automation of Electronic Systems
- ACM Transactions on IoT
- ACM Transactions on Architecture and Code Optimization

The EDA Group has many active industrial collaborations and funded projects on these topics, including:
- TRISTAN (ECSEL-JU 2023)
- ISOLDE (ECSEL-JU 2023)
- HiFiDELITY (ECSEL-JU 2023)
- StorAIge (ECSEL-JU 2021)
- Etc.
Required skills: 1. Familiarity with programming languages: The project will involve implementing and testing various algorithms and techniques, so experience with programming languages such as Python and C is needed.
2. Familiarity with embedded systems and computer architectures: Understanding the constraints and limitations of edge devices is important in order to optimize the deep learning models for these platforms. Experience with working on projects involving edge devices would be useful.
3. Familiarity with compilers and compiler optimizations is a nice-to-have, but not a hard requirement, since these concepts will be studied during the first period of the thesis.

Group website: https://eda.polito.it/
Summary of the proposal: Artificial Intelligence is driving a revolution in many important sectors of society. Deep learning networks, and especially supervised ones such as Convolutional Neural Networks, remain the go-to approach for many important tasks. Nonetheless, training these models typically requires massive amounts of good-quality annotated data, which makes them impractical in many real-world applications. This PhD program seeks answers to such problems, targeting important use-cases in today's society.
Topics: Artificial Intelligence, Self-supervised learning, Data-limited applications
Research objectives and methods: The main goal of this PhD program is the investigation of robust AI-based decision making in data-limited situations. This includes three possible scenarios, which are typical of many important real-world applications:
1) the training data is difficult to obtain, or it is available in limited quantity.
2) obtaining the training data is not difficult. Nonetheless, it is either difficult or economically impractical to have human experts labelling the data.
3) the training data/annotations are available, but the quality of such data is very poor.

Possible solutions involve different approaches, ranging from classic transfer learning and domain adaptation techniques, to data augmentation with generative modelling, to semi- and self-supervised learning approaches, where the access to real data of the target application is either minimized or avoided altogether. In addition, the use of probabilistic approaches (e.g., Bayesian inference) can help to properly quantify the uncertainty level both at training and at inference time, making the decision process more robust to noisy data and/or inconsistent annotations. This research proposal aims to investigate and advance the state of the art in such areas. The outline can be divided into 3 consecutive phases, one per each year of the program.
- In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start experimenting with the available state-of-the-art techniques. A seminal conference publication is expected at the end of the year.
- In the second year, the candidate will select and address some relevant use-cases, well representing the three data-limited scenarios mentioned before. Stemming from the supervisors' collaborations and current research activity, these use-cases may involve Industry 4.0 applications (for example, smart manufacturing and industrial 3D printing) as well as biomedicine and digital pathology. There is some scope to shape the specific focus of such use-cases according to the interests and background of the prospective student, as well as to those of the various collaborators that could be involved in the project activity: research centers such as the Inter-departmental Center for Additive Manufacturing in PoliTO and the National Institute for Research in Digital Science and Technology (INRIA, France), as well as industries such as Prima Industrie, Stellantis, Avio Aero, etc.

At the end of the second year, the candidate is expected to target at least one paper in a well-reputed conference in the field of applied AI, and possibly another publication in a Q1 journal of the Computer Science sector (e.g., Pattern Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, etc.).
- In the third year, the candidate will consolidate the models and approaches that were investigated in the second year, and possibly integrate them into a standalone architecture. The candidate will also finalize this work into at least another major journal publication, as well as into a PhD thesis to defend at the end of the program.
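As a minimal illustration of the semi-/self-supervised direction mentioned above, the sketch below runs a toy self-training (pseudo-labeling) loop on a nearest-centroid classifier. The model, labels, and 2-D points are all invented for illustration and stand in for a deep network and real features:

```python
def centroid(points):
    """Mean point of a list of equal-length tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def nearest(x, centroids):
    """Label of the centroid closest to x (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lab])))

def self_train(labeled, unlabeled, rounds=3):
    """Toy self-training: fit centroids on the labeled data, pseudo-label
    the unlabeled pool with the current model, refit, and repeat.
    (A real pipeline would keep only high-confidence pseudo-labels.)"""
    data = {lab: list(pts) for lab, pts in labeled.items()}
    for _ in range(rounds):
        cents = {lab: centroid(pts) for lab, pts in data.items()}
        data = {lab: list(labeled[lab]) for lab in labeled}
        for x in unlabeled:
            data[nearest(x, cents)].append(x)
    return {lab: centroid(pts) for lab, pts in data.items()}

# Tiny labeled set plus an unlabeled pool (values invented)
labeled = {"ok": [(0.0, 0.0), (1.0, 0.0)],
           "defect": [(5.0, 5.0), (6.0, 5.0)]}
unlabeled = [(0.5, 0.4), (5.5, 4.8), (0.2, 0.1)]
model = self_train(labeled, unlabeled)
```

The same loop structure, with a confidence threshold on the pseudo-labels, is what scales this idea to the scenario where annotation, rather than data collection, is the bottleneck.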
Required skills: The ideal candidate to this PhD program has:
- positive attitude to research activity and working in team
- solid programming skills
- solid basics of linear algebra, probability, and statistics
- good communication and problem-solving skills
- some prior experience in the design and development of machine learning and deep learning architectures.

Group website: https://eda.polito.it/
Summary of the proposal: The smart factory is an integral part of Industry 4.0, where the tremendous amount of heterogeneous data generated by manufacturing processes unleashes an increased potential for Artificial Intelligence applications, with a considerably accelerating impact on automation, the working environment, and productivity. The challenge addressed by this PhD program is to successfully deploy such potential in the new generation of manufacturing systems (e.g., Additive Manufacturing).
Topics: Smart Manufacturing, Industry 4.0, Additive Manufacturing
Research objectives and methods: The main goal of this PhD program is the investigation, design, and deployment of state-of-the-art Artificial Intelligence approaches in the context of the smart factory, with special regard to new-generation manufacturing systems such as Additive Manufacturing. These tasks include:
- quality assurance and inspection of manufactured products via heterogeneous sensor data (e.g., images from visible-range or IR cameras, time series, etc.)
- process monitoring and forecasting
- anomaly detection
- failure prediction and maintenance planning support
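For the anomaly-detection task, even a simple rolling z-score (control-chart style) rule illustrates the shape of the problem; the sensor trace and thresholds below are invented:

```python
from collections import deque

def zscore_alarms(series, window=5, threshold=3.0):
    """Indices of samples more than `threshold` standard deviations away
    from the mean of the preceding `window` samples."""
    buf = deque(maxlen=window)
    alarms = []
    for i, x in enumerate(series):
        if len(buf) == window:
            mean = sum(buf) / window
            std = (sum((v - mean) ** 2 for v in buf) / window) ** 0.5
            if std > 0 and abs(x - mean) / std > threshold:
                alarms.append(i)
        buf.append(x)
    return alarms

# Invented melt-pool temperature trace with a single spike at index 6
trace = [1500, 1502, 1498, 1501, 1499, 1500, 1650, 1501, 1500]
```

In practice such statistical baselines mainly serve as benchmarks against which the learned models addressing these tasks are compared.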

While the Artificial Intelligence technologies able to address such tasks may already exist and be successfully consolidated in other real-world applications, the specific domain of manufacturing systems poses severe challenges to the effective deployment of these techniques. Among others:
- the complexity and immaturity of the process
- the lack of effective infrastructures for data collection, integration, and annotation
- the necessity to handle heterogeneous and noisy data from different types of sensors/machines
- the lack of annotated datasets for training supervised models
- the lack of standardized quality measures and benchmarks

This PhD program seeks solutions to the aforementioned challenges, with specific focus on new-generation manufacturing systems, such as, but not limited to, Additive Manufacturing (AM). AM includes many innovative 3D printing processes, which are rapidly revolutionizing manufacturing towards a higher digitalization of the process and a higher flexibility of production. While AM is a perfect candidate for the deployment of Artificial Intelligence, because it involves a fully digitalized process from design to product finishing, to date it is still a very complex and immature technology, with tremendous room for improvement in terms of production time and product defectiveness. Specific use cases in this regard will stem from the supervisors’ collaborations with the Inter-departmental Center for Additive Manufacturing at PoliTO, as well as with several major industrial partners such as Prima Industrie, Stellantis, Avio Aero, etc.

The outline of the PhD program can be divided into 3 consecutive phases, one for each year of the program.
- In the first year, the candidate will acquire the necessary background by attending PhD courses and surveying the relevant literature, and will start experimenting with state-of-the-art techniques on the available datasets, either from public sources or from past projects of the supervisors. A seminal conference publication is expected at the end of the year.
- In the second year, the candidate will select and address some relevant use-cases, with real data from the industrial partners, and will seek solutions to the technological and computational challenges posed by the specific industrial application.
At the end of the second year, the candidate is expected to target at least a second conference paper in a well-reputed industry-oriented conference (e.g. ETFA), and possibly another publication in a Q1 journal of the Computer Science sector (e.g. IEEE Transactions on Industrial Informatics, Expert Systems with Applications, etc).
- In the third year, the candidate will consolidate the models and approaches that were investigated in the second year, and possibly integrate them into a standalone framework. The candidate will also finalize this work into at least another major journal publication, as well as into a PhD thesis to defend at the end of the program.
Required skills: The ideal candidate for this PhD program has:
- a positive attitude toward research and teamwork
- solid programming skills
- solid basics of linear algebra, probability, and statistics
- good communication and problem-solving skills
- some prior experience in the design and development of machine learning and deep learning architectures.
- some prior knowledge/experience of manufacturing processes is a plus, but not a requirement.

Group website: https://eda.polito.it/
Summary of the proposal: This Ph.D. research proposal aims at studying novel software solutions based on Machine Learning (ML) techniques to estimate the State-of-Health (SoH) of batteries in Electric Vehicles (EVs) in near-real-time. This research area has gained strong interest in recent years as the number of EVs is constantly rising. Knowing the SoH can unlock different possible strategies: i) to reuse EV batteries in other contexts, e.g., as stationary energy storage systems in Smart Grids, or ii) to recycle them.
Topics: Machine Learning, Battery’s State-of-Health, Electric Vehicles
Research objectives and methods: In recent years, the number of Electric Vehicles (EVs) has increased significantly, and it is expected to grow further in the upcoming years. Due to the use of high-value materials, there is a strong economic, environmental and political interest in implementing solutions to recycle EV batteries, for example by reusing them in stationary applications as useful energy storage systems in Smart Grids. To achieve this, novel tools are needed to estimate the battery State-of-Health (SoH), i.e., a measurement of the remaining battery capacity, in near-real-time. Currently, the SoH is determined by bench discharge tests that take several hours, making this process time-consuming and expensive.

The objective of this Ph.D. proposal consists of the design and development of models based on Machine Learning (ML) techniques that will exploit both synthetic and real-world datasets. The synthetic dataset is needed to train and test a generic ML model suitable for any EV, independently of a specific brand and/or model. The real-world dataset, obtained by monitoring real EVs, is instead needed to fine-tune the ML models, for example by applying transfer learning techniques, progressively customizing them to the specific brand and model of the real-world EV to monitor.

During the three years of the Ph.D., the research activity will be divided into four phases:
1. Study and analysis of both state-of-the-art solutions and datasets of real-world EV monitoring.
2. Design and development of a realistic simulator of an EV fleet to generate the synthetic, realistic dataset. Starting from both datasheet information of different EVs (in terms of brand and model) and information provided by the Italian National Institute of Statistics (ISTAT), the simulator will reproduce different routes in terms of length, altitude and travel speed, impacting battery wear differently and thus making the resulting dataset realistic and heterogeneous.
3. Design and development of ML-based models trained and tested with the synthetic dataset to estimate the SoH of EV’s batteries.
4. Apply transfer learning techniques to the ML-based models (from phase 3) to fine-tune them by exploiting the datasets of real-world EV monitoring (identified in phase 1).
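As a toy numerical illustration of phases 3 and 4, the sketch below pre-trains a linear SoH regressor on a synthetic fleet dataset and then fine-tunes it with gradient steps on a small "real-world" sample from a single vehicle model. The features, degradation law, and coefficients are invented for illustration only; a full study would use richer models and actual monitoring data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Phase 3: generic SoH model trained on a synthetic fleet dataset.
# Hypothetical features: normalized cycle count, depth-of-discharge, temperature.
X_syn = rng.uniform(0, 1, (500, 3))
w_true = np.array([-0.30, -0.15, -0.10])                 # invented degradation law
y_syn = 1.0 + X_syn @ w_true + rng.normal(0, 0.01, 500)  # SoH as fraction of nominal capacity
A_syn = np.c_[X_syn, np.ones(500)]
w = np.linalg.lstsq(A_syn, y_syn, rcond=None)[0]

# Phase 4: fine-tuning on a small dataset from one monitored EV model,
# assumed here to degrade ~30% faster than the synthetic average.
X_real = rng.uniform(0, 1, (30, 3))
y_real = 1.0 + X_real @ (w_true * 1.3) + rng.normal(0, 0.01, 30)
A_real = np.c_[X_real, np.ones(30)]

rmse_before = np.sqrt(np.mean((A_real @ w - y_real) ** 2))
for _ in range(2000):                 # full-batch gradient descent on the real data
    w -= 0.05 * A_real.T @ (A_real @ w - y_real) / len(y_real)
rmse_after = np.sqrt(np.mean((A_real @ w - y_real) ** 2))
```

Starting the fine-tuning from the pre-trained weights, rather than from scratch, is what lets the small real-world sample suffice: only the deviation from the generic model has to be learned.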

Possible international scientific journals and conferences:
• IEEE Transactions on Smart Grid,
• IEEE Transactions on Vehicular Technology,
• IEEE Transactions on Industrial Informatics,
• IEEE Transactions on Industry Applications,
• Engineering Applications of Artificial Intelligence,
• Expert Systems with Applications,
• ACM e-Energy
• IEEE EEEIC international conference
• IEEE SEST international conference
• IEEE COMPSAC international conference
Required skills: • Programming and Object-Oriented Programming (preferably in Python),
• Knowledge of Machine Learning and Neural Networks
• Knowledge of frameworks to develop models based on Machine Learning and Neural Networks
• Knowledge of development of Internet of Things Applications

Group website: eda.polito.it
Summary of the proposal: The RISC-V architecture defines both a set of instructions (ISA) and the essential components of a microprocessor implementing it. The RISC-V ecosystem is currently sparse. The goal of this PhD is to integrate existing open-source RISC-V simulators and architecture exploration tools within a unified framework for system-level modelling and simulation of extra-functional properties, using open-source languages and tools. Target properties include power, temperature, reliability and aging.
Topics: RISC-V, SystemC, system level simulation
Research objectives and methods: The RISC-V architecture defines both the set of instructions (ISA) that tell a microprocessor how to execute the elementary actions, and the essential components of that microprocessor. Its open development model has caught the interest of engineers and scientists around the world, and it has the potential to change the computing landscape in a profound manner. The RISC-V ecosystem is currently sparse, without competing alternatives for each of the building blocks required for a SoC. In this context, one important dimension is the extension of SoC frameworks with the evaluation of extra-functional properties like power, temperature, reliability and aging. Simulating such aspects (and their mutual influence) simultaneously with the RISC-V functional simulation would strengthen the design of RISC-V systems by ensuring their correct operation. Modeling and monitoring these properties in the various domains in isolation is not a new problem. How to manage the complex interactions of these properties, however, is still an open problem, because these quantities are interdependent in complex ways. Power consumption affects thermal and aging patterns, while temperature affects power consumption (in particular, static power in digital logic) and is an essential parameter in any reliability or aging model. Accurately tracking the mutual influence among extra-functional properties must be done at runtime, by extending the functional RISC-V Virtual Platforms (VPs) with support for extra-functional aspects.

The goal of this PhD is thus to integrate existing open-source RISC-V simulators, VPs and architecture exploration tools within a unified framework for system-level modelling and simulation of extra-functional properties such as power consumption, thermal behaviour and reliability. The framework will be supported by standard specification languages (IP-XACT or others) and based on SystemC-AMS, an open-source language naturally supporting heterogeneous SoCs covering a wide range of domains.
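The power-temperature interdependence described above can be captured, in its simplest form, as a fixed-point loop between a leakage model and a thermal model. The sketch below illustrates the idea with invented coefficients (a linearized leakage law and a single thermal resistance); an actual SystemC-AMS framework would solve the same coupled loop at runtime inside the virtual platform.

```python
# Hypothetical coefficients, for illustration only.
P_DYN = 1.2          # dynamic power (W), temperature-independent
K_LEAK = 0.01        # leakage sensitivity (W/degC)
R_TH = 20.0          # thermal resistance (degC/W)
T_AMB = 25.0         # ambient temperature (degC)

def static_power(temp_c):
    # Linearized leakage model: static power grows with temperature.
    return 0.2 + K_LEAK * (temp_c - T_AMB)

def solve_power_thermal(tol=1e-6, max_iter=100):
    """Fixed-point iteration of the power <-> temperature loop:
    power depends on temperature, temperature depends on power."""
    temp = T_AMB
    for _ in range(max_iter):
        power = P_DYN + static_power(temp)
        new_temp = T_AMB + R_TH * power
        if abs(new_temp - temp) < tol:
            return power, new_temp
        temp = new_temp
    return power, temp

power, temp = solve_power_thermal()
```

Here the loop converges because the feedback gain R_TH * K_LEAK is well below one; at higher leakage sensitivities the same loop models thermal runaway, which is exactly the kind of interaction a coupled simulation must expose.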

The objectives of the Ph.D. plan are the following:
- Studying RISC-V simulation frameworks and VPs to identify the most suitable ones for the integration in the extra-functional framework;
- Developing the competences required for extra-functional simulation, e.g., power, thermal, and reliability estimation;
- Implementing the extra-functional simulation framework in SystemC-AMS, by focusing on one property at a time, to gradually increase the level of complexity;
- Understanding what open source languages can support the framework to automate the generation of the extra-functional simulation;
- Applying the proposed solution to industrial applications, e.g., robotic applications, to prove its applicability and feasibility.

Outline of the research plan:
- 1st year: developing the competences required for extra-functional simulation; studying RISC-V VPs suitable for extension; extension of one VP with power simulation;
- 2nd year: extension of the framework to the thermal and reliability evaluation for the RISC-V VP; extension to a different VP; support of the framework with automatic code generation;
- 3rd year: strengthening the framework to support complex case studies at industrial level; further development of the framework.

Possible venues for publications:
- IEEE Transactions on Computers
- IEEE Transactions on Computer Aided Design
- IEEE Transactions on Industrial Informatics
- ACM Transactions on Embedded Computing Systems
- ACM Transactions on Design Automation of Electronic Systems
- Design Automation Conference
- Design, Automation and Test in Europe Conference

Funded projects and collaborations:
- TRISTAN (Together for RISc-V Technology and ApplicatioNs) KDT ED project
- COSEDA, Infineon, ST Microelectronics
- Università di Bologna
Required skills: MS degree in computer engineering or electronics engineering.
Good skills in computer programming.
Technical background in electronic design, modeling, simulation and optimization.
Basic knowledge of SystemC simulation is a plus.

Group website: http://www.cad.polito.it/
Summary of the proposal: Neural Networks (NNs) are increasingly used in many application domains where safety is crucial (e.g., automotive, aerospace and robotics). In order to match the timing requirements while taking into account the other parameters (such as hardware cost and power consumption), in most cases the NN execution relies on hardware accelerators. Possible hardware faults affecting the accelerator can severely impact the produced results, especially considering the advanced semiconductor technologies used for its manufacturing. The goal of the proposed research activity is to face the challenging task of understanding which faults are the most critical (i.e., those which can modify the output of the NN), identifying solutions that trade off result accuracy against the computational effort required to simulate the effects of the faults, and tracing their effects up to the application level. Once the most critical elements in the accelerator are identified, effective hardening solutions can be devised, acting both at the hardware and at the software level.
Topics: Reliability, Safety, AI
Research objectives and methods: AI accelerators are increasingly adopted in safety-critical applications (e.g., in the automotive, aerospace and robotics domains), where the probability of failures must be lower than well-defined (and extremely low) thresholds. This goal is particularly challenging, since AI accelerators are extremely advanced devices, built with highly sophisticated (and hence less mature) semiconductor technologies. On the other hand, since these applications are often based on Artificial Intelligence (AI) algorithms, they benefit from the intrinsic robustness of such algorithms, at least with respect to some faults. Unfortunately, given the complexity of these algorithms and of the underlying architectures, an extensive analysis to understand which faults/modules are particularly critical is still missing. The planned research activities aim first at exploring the effects of faults affecting the hardware of an accelerator supporting the NN execution. Experiments will study the effects of the considered faults on the results produced by the NN. This study will mainly be performed resorting to fault injection experiments. In order to keep the computational effort reasonable, different solutions will be considered, combining simulation- and emulation-based fault injection with multi-level approaches. The trade-off between the accuracy of the results and the required computational effort will also be evaluated. Based on the gathered results, hardening solutions acting on the hardware and/or the software will be devised, aimed at improving the resilience of the whole application with respect to faults, and thus matching the safety requirements of the target applications.
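As a minimal illustration of a simulation-based fault injection campaign, the sketch below flips single bits in the float32 weights of a toy fully-connected layer (standing in for one accelerator tile) and counts how many injected faults alter the output with respect to the golden, fault-free run. Layer size, fault model (single bit flip per run), and tolerance are illustrative assumptions, not the proposal's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def flip_bit(value, bit):
    """Flip one bit of a float32 value (single-event-upset model)."""
    as_int = np.float32(value).view(np.uint32)
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

# Tiny fully-connected ReLU layer as a stand-in for an accelerator tile.
W = rng.normal(0, 0.5, (4, 8)).astype(np.float32)
x = rng.normal(0, 1, 8).astype(np.float32)
golden = np.maximum(W @ x, 0)              # fault-free (golden) output

# Campaign: one injected bit flip per run, compared against the golden run.
critical = 0
runs = 200
for _ in range(runs):
    Wf = W.copy()
    i, j = rng.integers(4), rng.integers(8)
    bit = rng.integers(32)
    Wf[i, j] = flip_bit(Wf[i, j], bit)
    out = np.maximum(Wf @ x, 0)
    if not np.allclose(out, golden, atol=1e-3):
        critical += 1
```

Typically only a minority of flips are critical: low-order mantissa flips are numerically negligible and the ReLU masks others, while sign and exponent flips tend to corrupt the output. Identifying which bits and modules dominate the critical count is precisely what guides selective hardening.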

The proposed plan of activities is organized in the following phases:
- phase 1: the student will first study the state of the art and the literature in the area of NNs, their implementation on different platforms (including hardware accelerators) and their applications. At the same time, the student will become familiar with open-source accelerator models (e.g., NVDLA) and existing fault injection environments (e.g., NVbitFI). Suitable case studies will also be identified, whose reliability and safety could be analyzed with respect to faults affecting the underlying hardware.
- phase 2: suitable solutions to analyze the impact of faults on the considered accelerator will be devised and prototypical environments implementing them will be put in place.
- phase 3: based on the results of a set of fault injection campaigns performed to assess the reliability and safety of the selected case studies, a detailed analysis leading to the identification of the most critical faults/components will be carried out.
- phase 4: suitable hardening solutions will be proposed and evaluated.

Phases 2 to 4 will include dissemination activities, based on writing papers and presenting them at conferences.

We also plan for a strong cooperation with the researchers of other universities and research centers working in the area, such as the University of Trento, the University of California at Irvine (US), the Federal University of Rio Grande do Sul (Brazil), NVIDIA.
Required skills: The candidate should have basic skills in
- digital design
- computing architectures
- Artificial Intelligence.

Group website: https://eda.polito.it/
Summary of the proposal: An automated system driving a robot must handle many problems simultaneously. For example, the system must locate its position, identify practicable pathways and surrounding objects, determine strategies for interacting with the environment, and prevent accidents. Generally, a System of Systems (SoS) aiming to deal with this variety of issues is composed of a wide variety of modules, each specialized in solving a reduced number of problems.

This project will investigate the definition of a framework for the digital design and testing of data fusion algorithms, wherein distributed sensing technologies deployed along industrial production lines are integrated through the fog computing paradigm.
Topics: AIoT, Industry 4.0, Neuromorphic Computing
Research objectives and methods: Research objectives
In automation use cases, information extracted by different SoS modules is merged on the path from the raw sensor to the actuator. The pipeline often follows an intuitive order: first, the information from the sensors is processed; then, this information is elaborated by AI-based algorithms; finally, the extracted knowledge is used to elaborate a control strategy for the actuator. The developed framework should support the automation of each step of the engineering process for creating optimized Artificial Intelligence of Things (AIoT) sensor solutions. Its key features are the following:
1. Manage onboard sensor data collection and labelling, allowing developers to build datasets from each available sensing system for any given use case.
2. Select and customize AI algorithms for generating efficient inference and data fusion models compatible with on-edge execution.
3. Define optimization strategies and tools for identifying model parameters by exploiting the labelled data acquired by the onboard sensing systems.
4. Validate model accuracy on the target edge device.
The framework will be evaluated on selected sensing technologies and analytics tasks involved in relevant industrial use cases in the automation field. In this project, the candidate will target emerging HW technologies such as FPGA, GPU, Neuromorphic platforms, and parallel architectures, to implement new computational paradigms optimizing computation on the edge. Neuromorphic technology will be strongly integrated into the supported use cases.
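To make the idea of "efficient inference models compatible with on-edge execution" concrete, the following sketch shows symmetric per-tensor post-training quantization, a standard technique for shrinking a model so it fits constrained edge devices (weights drop from 32 to 8 bits). The layer values are synthetic and the scheme is a deliberately simplified variant, not the framework's actual tooling.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = np.abs(weights).max() / 127.0   # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for accuracy validation.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(3)
W = rng.normal(0, 0.1, (16, 16)).astype(np.float32)   # synthetic layer weights
q, scale = quantize_int8(W)
max_err = np.abs(dequantize(q, scale) - W).max()      # bounded by scale / 2
```

Validating that `max_err` (and, more importantly, end-to-end accuracy) stays acceptable on the target device corresponds to feature 4 in the list above.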

Outline of the research work plan
1st year. The candidate will study state-of-the-art frameworks aimed at designing AIoT solutions to be deployed on the edge. Moreover, the candidate will acquire experience with Neuromorphic HW technologies and embedded systems in the industrial context. He/She will contribute to the definition of the framework requirements, technologies, and solutions for the development of AI applications to be deployed on edge devices.
2nd year. The candidate will develop an integrated methodological approach running on a fog computing platform for modelling applications and systems, according to the experience obtained during the first year of research in a multi-scenario analysis. He/She will develop the basic structure of a user-friendly framework supporting the realization of AI applications to be deployed on edge devices; the framework will also support the benchmarking procedure. Then, he/she will select software libraries supporting the most promising models identified in the TinyML domain for the specific industrial use cases. For this last point, the candidate will explore models from Imitation, Continuous, Federated, and Deep learning technologies, as well as bio-inspired neuromorphic models.
3rd year. The candidate will apply the proposed approach to different complex systems, enabling greater generalisation of the methodology to different domains. The candidate will define relevant Key Performance Indicators (KPI) for demonstrating the advantages of using the developed framework, compared to the use case’s baseline implementation approach.

The research activities will be carried out in collaboration with the Fluently project partners and the EBRAINS-Italy partners.

List of possible venues for publications
The main outcome of the project will be disseminated in three international conference papers and at least one publication in a journal of the AIoT and neuromorphic fields. Moreover, the candidate will disseminate the major results in the EBRAINS-Italy meetings and events. Possible conference and journal targets include:
• IEEE/ACM International Conferences (e.g., DAC, DATE, AICAS, NICE, ISLPED, GLSVLSI, PATMOS, ISCAS, VLSI-SoC);
• IEEE/ACM Journals (e.g., TCAD, TETC, TVLSI, TCAS-I, TCAS-II, TCOMP), MDPI Journals (e.g., Electronics).
The PhD position is funded by the EBRAINS-Italy project PNRR CUP B51E22000150006
Required skills: MS degree in computer engineering, electronics engineering or physics of complex systems. Excellent skills in computer programming, computer architecture, embedded systems, and IoT applications. Technical background in electronic design, modelling, simulation and optimization.

Group website: https://eda.polito.it/
Summary of the proposal: Nowadays, the process of mapping tasks onto the hardware units available on heterogeneous systems (multi-core, CPU, GPU, FPGA, Neuromorphic HW) is challenging for a software developer. Bioinformatics teams deal with huge amounts of data which must be efficiently analysed in order to extract useful knowledge; however, SW tools are frequently not optimised to fully exploit the features of the available HW.

This project will investigate methods and techniques for designing parallel optimized tools fully exploiting the heterogeneous HW architectures currently available in a modern fog computing system.
Topics: Data Analytics, Bioinformatics, Heterogeneous computing
Research objectives and methods: Research objectives
Many research teams are engaged in designing novel solutions for enhancing compilers and programming models with the aim of exploiting heterogeneous architectures in the domain of edge and fog computing. Such solutions also strive to support automatic resource allocation and optimization procedures. At the same time, the adoption of heterogeneous systems for the analysis of sequential data streams is increasing in industrial and biomedical applications. However, the cost and complexity of software development and the energy footprint of the created solutions are still not well balanced.

The objectives of the PhD plan are the following:
1. Develop the competence to analyze available data from product documentation, experiments and scientific reports, for extracting features of complex components and systems.
2. Analyze the state-of-the-art of compiler technology for heterogeneous HW architectures in the data analysis and bioinformatics fields of applications.
3. Develop a general (machine learning-based) approach for partitioning a sequential data stream application, described in a high-level programming language, into elementary computation tiles (kernels).
4. Design a reliable methodology for placing the elementary kernels on the devices available on the target heterogeneous systems, together with the generation of the inter-task communication interfaces.
5. Design proof-of-concept experiments demonstrating that the developed partitioning and allocation methodology succeeds in better exploiting the resources of heterogeneous HW, by reducing the execution time and/or the power consumption of an application.
6. Provide a framework for configuring heterogeneous embedded systems in a semi-automatic way, to further facilitate the optimized porting of applications on the emerging heterogeneous systems.
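Objectives 3 and 4 above can be illustrated with a deliberately simplified placement heuristic: given per-device execution-time estimates for each kernel of a sequential pipeline, greedily choose the device that minimizes kernel time plus a fixed inter-device transfer penalty. All kernel names and numbers below are hypothetical profiling figures, not measurements; a real methodology would also model memory, bandwidth, and energy.

```python
# Hypothetical per-device execution-time estimates (ms) for each kernel,
# e.g. obtained from profiling; names and values are illustrative only.
cost = {
    "filter": {"cpu": 4.0, "gpu": 1.0, "fpga": 2.0},
    "align":  {"cpu": 9.0, "gpu": 2.5, "fpga": 3.0},
    "count":  {"cpu": 2.0, "gpu": 3.0, "fpga": 1.5},
    "report": {"cpu": 1.0, "gpu": 4.0, "fpga": 5.0},
}
TRANSFER = 0.5   # fixed penalty (ms) when consecutive kernels change device

def place_pipeline(kernels, devices):
    """Greedy placement: kernel by kernel, pick the device minimizing
    execution time plus the inter-device transfer penalty.
    (The first kernel always pays the penalty: initial data movement.)"""
    placement, prev, total = [], None, 0.0
    for k in kernels:
        best = min(devices,
                   key=lambda d: cost[k][d] + (TRANSFER if d != prev else 0))
        total += cost[k][best] + (TRANSFER if best != prev else 0)
        placement.append(best)
        prev = best
    return placement, total

pipeline = ["filter", "align", "count", "report"]
placement, total = place_pipeline(pipeline, ["cpu", "gpu", "fpga"])
```

The greedy heuristic is only a baseline: the proposal's machine-learning-based partitioning would instead learn cost estimates and explore placements globally rather than kernel by kernel.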

The research activities mentioned above will focus on three primary areas of application:
• Medical and bioinformatics data stream analysis;
• Video surveillance and object recognition;
• Smart energy system.

The candidate will design optimized bioinformatics tools to accelerate and optimize data analytics tasks, with a particular focus on the analysis of RNA molecules and the simulation of brain neural structures. In the project, the candidate will target several emerging HW technologies such as FPGA, GPU, Neuromorphic platforms, and parallel architectures. Neuromorphic technology will be strongly integrated into the supported use cases.

Outline of the research work plan
1st year. The candidate will study state-of-the-art frameworks for the design of Bioinformatics solutions to be deployed on the heterogeneous fog-based system. Moreover, the candidate will acquire experience with Neuromorphic HW technologies and embedded systems adopted in the relevant use cases. He/She will contribute to the specification of the tools that will be implemented in order to fully exploit the HW technologies available in the heterogeneous platform.
2nd year. The candidate will develop an integrated methodological approach for designing tools implementing parallel applications running on a fog computing platform.
The candidate will integrate the technologies identified in the first year for designing optimized data analysis and bioinformatics tools supporting the benchmarking procedure.
3rd year. The candidate will test the designed tools on relevant use cases, selected in collaboration with scientific and industrial partners. Moreover, he/she will define relevant Key Performance Indicators (KPI) for demonstrating the advantages of implementing tools which fully exploit the HW heterogeneity offered by the fog computing systems.

The research activities will be carried out in collaboration with the Fluently project partners and the EBRAINS-Italy partners.

List of possible venues for publications
The main outcome of the project will be disseminated in three international conference papers and at least one publication in a journal of the bioinformatics and neuromorphic fields. Moreover, the candidate will disseminate the major results in the EBRAINS-Italy meetings and events. Possible conference and journal targets include:
• IEEE/ACM International Conferences (e.g., DAC, DATE, AICAS, NICE, ISLPED, GLSVLSI, PATMOS, ISCAS, VLSI-SoC);
• IEEE/ACM Journals (e.g., TCAD, TETC, TVLSI, TCAS-I, TCAS-II, TCOMP), MDPI Journals (e.g., Electronics).
Required skills: MS degree in computer engineering, electronics engineering or physics of complex systems. Excellent skills in computer programming, computer architecture, embedded systems, and IoT applications. Technical background in electronic design, modelling, simulation and optimization.

Group website: https://eda.polito.it/
Summary of the proposal: Computer scientists have developed several heterogeneous HW platforms for supporting Spiking Neural Network (SNN) simulations. However, the tools currently supporting the engineering process of interoperable applications and simulations in this field still lack many useful features required to spread a new neuromorphic-based computational paradigm.

The candidate will be involved in the two activities funded by the EBRAINS-Italy Project:
1) Definition and design of a platform facility for the development of neuromorphic simulations and AI applications on heterogeneous digital/neuromorphic computing systems, and 2) design of a framework supporting developers in the end-to-end engineering process of SNN simulations executed on neuromorphic devices.
Topics: Neuromorphic Computing, IoT, Data Analytics
Research objectives and methods: Research objectives
Although neuromorphic HW architectures were originally intended for brain simulations, they are also of interest in areas such as IoT edge devices, high-performance computing, and robotics. Neuromorphic platforms have been shown to offer better scalability than traditional multi-core architectures and are well-suited for problems that require massive parallelism, which neuromorphic HW is inherently optimized for. Moreover, these brain-inspired technologies have been identified by the scientific community as particularly appropriate for low-power and adaptive applications required to analyse data in real time.

The objectives of the PhD plan are the following:
1. Develop the competence to analyse available data from product documentation and features experimentally extracted from complex components and systems.
2. Evaluate the potential of an SNN, efficiently simulated on neuromorphic platforms, when customized at the abstraction level of a flow graph and used for implementing a general-purpose algorithm.
3. Contribute to the design and development of a platform for prototyping Neuromorphic solutions following all engineering process phases, from the definition of specifications to the HW procurement and installation of Server nodes, Neuromorphic HW, and sensors available on the market.
4. Present a general approach for generating simplified neuromorphic models, implementing basic kernels, that users can directly use to implement their algorithms. The abstraction level of the models will depend on the availability of software libraries supporting the neuromorphic target hardware.
5. Using the prototyping platform, design proof-of-concept applications by combining a set of neuromorphic models, which will provide outputs with a limited (i.e., acceptable) error with respect to versions running on standard systems. Such applications should also reduce execution time and power consumption.
6. Contribute to the development of a framework for generating and connecting neuromorphic models in a semi-automatic way, to further facilitate the modelling process and the exploration of new neuromorphic-based computational paradigms.
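As background for the model-generation objectives above, the elementary kernel of any SNN is the spiking neuron itself. The sketch below simulates a leaky integrate-and-fire (LIF) neuron with Euler integration; the threshold, time constant, and input are illustrative parameters. On neuromorphic hardware this update rule is what the substrate executes natively, in parallel, for very many neurons.

```python
def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0,
                 tau=20.0, dt=1.0):
    """Leaky integrate-and-fire neuron: integrate the membrane potential,
    emit a spike and reset whenever the threshold is crossed."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        # Euler step of the membrane equation dv/dt = (-v + I) / tau
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant supra-threshold input produces a regular spike train.
spikes = lif_simulate([1.5] * 200)
```

Composing many such units into connected populations, and abstracting them into reusable flow-graph kernels, is precisely what the envisioned framework should semi-automate.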

The above research activities will focus on the implementation of algorithms in three main areas of application:
- Simulations of models developed by the EBRAINS neuroscience community.
- Real-time data analysis from IoT and Industrial applications.
- Medical and biological data stream analysis and pattern matching.

Outline of the research work plan
1st year. The candidate will study state-of-the-art neuromorphic frameworks aimed at deploying simulations on different neuromorphic HW technologies. He/She will contribute to the development of a framework for generating and connecting neuromorphic models in a semi-automatic way, facilitating the modelling process and the exploration of new neuromorphic-based computational paradigms. Within the first year, the candidate will be involved in the design and development of the Heterogeneous Neuromorphic Computing Platform (HNCP).
2nd year. The candidate will develop an integrated methodological approach for modelling applications and systems, applying the experiences obtained during the first year of research in a multi-scenario analysis. He/She will develop the basic structure of a user-friendly neuromorphic computing framework providing access and validation for the HNCP prototype. The candidate will contribute to the definition of two Modelling, Simulation, and Analysis (MSA) use cases, reproducing the needs of three expected user profiles: Neuroscientist, Bioinformatician, and Data scientist/engineer.
3rd year. The candidate will apply the proposed approach to different complex systems, enabling greater generalization of the methodology to different domains: for instance, by analyzing future investments in the field of neuromorphic compilers powering the new generation of neuromorphic hardware, soon available on the market alongside general-purpose computing units. Moreover, the candidate will contribute to the integration of the HNCP in the EBRAINS service ecosystem.
The research activities will be carried out in collaboration with the Human Brain Project partners (The University of Manchester) and the EBRAINS-Italy partners.

List of possible venues for publications
The main outcome of the project will be disseminated in three international conference papers and at least one publication in a journal of the AIoT and neuromorphic fields. Moreover, the candidate will disseminate the major results in the EBRAINS-Italy meetings and events.
Possible conference and journal targets include:
• IEEE/ACM International Conferences (e.g., DAC, DATE, AICAS, NICE, ISLPED, GLSVLSI, PATMOS, ISCAS, VLSI-SoC);
• IEEE/ACM Journals (e.g., TCAD, TETC, TVLSI, TCAS-I, TCAS-II, TCOMP), MDPI Journals (e.g., Electronics).

The PhD position is funded by the EBRAINS-Italy project PNRR CUP B51E22000150006
Required skills: MS degree in computer engineering, electronics engineering or physics of complex systems. Excellent skills in computer programming, computer architecture, embedded systems, and IoT applications. Technical background in electronic design, modelling, simulation and optimization.

Group website: www.eda.polito.it
Summary of the proposal: A smart citizen-centric energy system is at the center of the energy transition. Energy communities will enable citizens to participate actively in local energy markets by exploiting new digital tools. Citizens will need to understand how to interact with smart energy systems, novel digital tools and local energy markets. Thus, new complex socio-techno-economic interactions will take place in such intelligent systems, and these interactions need to be analyzed and simulated to evaluate possible future impacts.
Topics: Energy Communities, Multi Agent Systems simulations, Co-simulation
Research objectives and methods: The diffusion of distributed (renewable) energy sources poses new challenges in the underlying energy infrastructure, e.g., distribution and transmission networks and/or micro (private) electric grids. The optimal, efficient and safe management and dispatch of electricity flows among different actors (i.e., prosumers) is key to supporting the diffusion of the distributed energy sources paradigm. The goal of the project is to explore different corporate structures and billing and sharing mechanisms inside energy communities. For instance, the use of smart energy contracts based on Distributed Ledger Technology (blockchain) for energy management in local energy communities will be studied. A testbed comprising physical hardware (e.g., smart meters) connected in the loop with a simulated energy community environment (e.g., a building or a cluster of buildings) exploiting different RES and energy storage technologies will be developed and tested during the three-year program. Hence, the research will focus on the development of agents capable of describing:
1. the final customer/prosumer, with their beliefs, desires, intentions, and opinions;
2. the local energy market, where prosumers can trade their energy and/or flexibility;
3. the local system operator, which has to ensure grid reliability.

All the software entities will be coupled with external simulators of grids and energy sources in a plug-and-play fashion. Hence, the overall framework has to be able to work in a co-simulation environment, with the possibility of performing hardware-in-the-loop experiments. The final outcome of this research will be an agent-based modelling tool that can be exploited for:
• Planning the evolution of future smart multi-energy systems by taking into account the operational phase
• Evaluating the effect of different policies and the related customer satisfaction
• Evaluating the diffusion of technologies and/or energy policies under different regulatory scenarios
• Evaluating new business models for energy communities and aggregators
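To make the agent roles above concrete, the toy sketch below clears a local energy market at the mid price between the retail and feed-in tariffs, one of many possible sharing mechanisms the tool could implement. All names, tariffs, and the mid-price rule are illustrative assumptions, not deliverables of the proposal:

```python
class Prosumer:
    """Toy agent: a stand-in for a full BDI (beliefs-desires-intentions) model."""
    def __init__(self, name, generation_kwh, demand_kwh):
        self.name = name
        self.net = generation_kwh - demand_kwh  # >0: seller, <0: buyer

def clear_local_market(agents, retail_price=0.30, feed_in_price=0.10):
    """Match local surplus with local demand at the mid price; residual energy
    is exchanged with the grid at retail/feed-in tariffs. Returns each agent's
    bill (negative values are revenues)."""
    local_price = (retail_price + feed_in_price) / 2
    surplus = sum(a.net for a in agents if a.net > 0)
    deficit = -sum(a.net for a in agents if a.net < 0)
    traded = min(surplus, deficit)  # energy exchanged inside the community
    bills = {}
    for a in agents:
        if a.net >= 0:
            share = traded * a.net / surplus if surplus else 0.0
            bills[a.name] = -(share * local_price + (a.net - share) * feed_in_price)
        else:
            share = traded * (-a.net) / deficit if deficit else 0.0
            bills[a.name] = share * local_price + (-a.net - share) * retail_price
    return bills

agents = [Prosumer("pv_house", 8.0, 3.0), Prosumer("flat", 0.0, 4.0)]
bills = clear_local_market(agents)
print(bills)  # the PV owner earns revenue; the flat pays less than retail
```

Swapping `clear_local_market` for a blockchain-backed smart-contract settlement, or replacing the fixed tariffs with agents' learned bidding strategies, is exactly the kind of variation the proposed platform should make pluggable.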

During the 1st year, the candidate will study state-of-the-art agent-based modelling tools in order to identify the best available solution for large-scale smart energy system simulation in distributed environments. Furthermore, the candidate will review the state of the art in prosumer/aggregator/market modelling in order to identify the challenges and possible innovations. Moreover, the candidate will review possible corporate structures and billing and sharing mechanisms of energy communities. Finally, he/she will start the design of the overall platform, starting from the requirements identification and definition.
During the 2nd year, the candidate will complete the design phase and start the implementation of the agents' intelligence. Furthermore, he/she will start to integrate the agents and the simulators in order to create the first beta version of the tool.
During the 3rd year, the candidate will finalize the overall platform and test it in different case studies and scenarios in order to show the effects of the different corporate structures and billing and sharing mechanisms in energy communities.

Possible international scientific journals and conferences:
• IEEE Transactions on Smart Grid,
• IEEE Transactions on Evolutionary Computation,
• IEEE Transactions on Control of Network Systems,
• Environmental Modelling & Software,
• JASSS,
• ACM e-Energy,
• IEEE EEEIC internat. conf.
• IEEE SEST internat. conf.
• IEEE Compsac internat. conf.
Required skills: • Programming and Object-Oriented Programming (preferably in Python),
• Frameworks for Multi-Agent Systems Development (preferred),
• Development in web environments (e.g. REST web services),
• Computer Networks

Group website: www.eda.polito.it
Summary of the proposal: The emerging concept of smart grid is strictly connected to heterogeneous and interlinked aspects, from energy systems to cyber-infrastructures and active prosumers. In this context, Electric Vehicles (EVs) can play a crucial role in demand-side management. However, the connection of EVs with the rest of the grid and the city must be tested, evaluated and planned, and cannot be left to chance. For this purpose, novel co-simulation techniques with hardware-in-the-loop capabilities must be designed and developed.
Topics: Vehicle to Grid, Co-simulation platforms
Research objectives and methods: This research aims at developing a novel distributed infrastructure to model and co-simulate different V2X connection scenarios by combining different technologies (both hardware and software) in a plug-and-play fashion and analysing heterogeneous information, often in real-time. The final purpose consists of simulating the implications of V2X connectivity for future energy systems and cities. Thus, the resulting infrastructure will integrate into a distributed environment heterogeneous i) data sources, ii) cyber-physical systems, i.e. Internet-of-Things devices, to retrieve/send information in real-time, iii) models of the different components of EVs, iv) energy system models, v) real-time simulators, and vi) third-party services to retrieve real-time data. This infrastructure will follow modern software design patterns (e.g. microservices), and each single component will adopt novel communication paradigms, such as publish/subscribe. This will ease the integration of “modules” and the links between them to create holistic simulation scenarios. The infrastructure will also enable both Hardware-in-the-Loop (HIL) and Software-in-the-Loop configurations to perform real-time simulations. Furthermore, the solution should be able to scale the simulation from micro-scale (e.g. a single EV) up to macro-scale (e.g. urban or regional scale) and span different time scales, from micro-seconds up to years. In a nutshell, the co-simulation platform will offer simulation as a service that can be used by different stakeholders to build and analyse new scenarios for short- and long-term activities and for testing and managing the operational status of smart energy systems.

Hence the research will focus on the development of a distributed co-simulation platform capable of:
• Interconnecting and synchronizing digital real-time simulators, even located remotely
• Integrating Hardware in the loop in the co-simulation process
• Easing the integration of simulation modules by automating the code generation needed to build new simulation scenarios in a plug-and-play fashion
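The publish/subscribe coupling of modules can be sketched with a minimal in-process broker; in the actual platform this would be a real middleware (e.g., an MQTT broker) and the simulators would be full models or hardware in the loop, so every class and number below is a simplified assumption:

```python
from collections import defaultdict

class Bus:
    """Minimal in-process publish/subscribe broker (stand-in for e.g. MQTT)."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)
    def publish(self, topic, msg):
        for cb in self.subs[topic]:
            cb(msg)

class EVSim:
    """Toy EV charging model: publishes its power demand at every step."""
    def __init__(self, bus):
        self.bus = bus
        self.soc = 0.2  # state of charge
    def step(self, t):
        power_kw = 7.0 if self.soc < 0.8 else 0.0
        self.soc = min(1.0, self.soc + power_kw * 0.01)
        self.bus.publish("ev/power", {"t": t, "kw": power_kw})

class GridSim:
    """Toy grid model: accumulates whatever load it hears on the bus."""
    def __init__(self, bus):
        self.load = []
        bus.subscribe("ev/power", lambda msg: self.load.append(msg["kw"]))

bus = Bus()
ev, grid = EVSim(bus), GridSim(bus)
for t in range(5):  # a master loop keeps the simulators in lockstep
    ev.step(t)
print(grid.load)
```

Because the grid model only knows the topic, not the EV model itself, either side can be swapped for a hardware device or a remote real-time simulator without touching the other, which is the plug-and-play property the proposal targets.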

The outcome of this research will be a distributed co-simulation platform that can be exploited for:

• Planning the evolution of future V2X connectivity
• Evaluating the effect of different policies and related customer satisfaction
• Evaluating the performances of hardware components in a realistic test-bench

During the 1st year, the candidate will study state-of-the-art co-simulation platforms to identify the best available solution for V2X connectivity simulation in distributed environments. Furthermore, the candidate will review the state of the art in hardware-in-the-loop integration and in the automatic composability of scenario code, identifying challenges and possible solutions. Finally, he/she will start the design of the overall platform, starting from the requirements identification and definition.
During the 2nd year, the candidate will end the design phase and start the implementation of the co-simulation platform, including HIL features to be integrated with software simulators to create the first beta version of the tool. Furthermore, the candidate will start developing software solutions to ease the integration of simulation modules by automating the code generation.
During the 3rd year, the candidate will finalize the overall platform and test it in different case studies and scenarios to show all the capabilities of the platform in terms of automatic scenario composition and integration of HIL.

Possible international scientific journals and conferences:
• IEEE Transactions on Smart Grid,
• IEEE Transactions on Evolutionary Computation,
• IEEE Transactions on Control of Network Systems,
• Environmental Modelling & Software,
• JASSS,
• ACM e-Energy,
• IEEE EEEIC internat. conf.
• IEEE SEST internat. conf.
• IEEE Compsac internat. conf.
Required skills: • Programming and Object-Oriented Programming (preferably in Python),
• Frameworks for Multi-Agent Systems Development (preferred),
• Development in web environments (e.g. REST web services),
• Computer Networks

Group website: www.eda.polito.it
Summary of the proposal: The development of novel ICT solutions in smart grids has opened new opportunities to foster novel services for energy management and energy saving in all end-use sectors, with particular emphasis on Electric Vehicle to X (V2X) connectivity. The new generation of distribution grids will open the possibility to provide new services for both citizens and energy providers, like demand flexibility. Thus, there will be a strong interaction among transportation, traffic trends and energy distribution systems. For this reason, new simulation tools are needed to evaluate the impact of Electric Vehicles on the grid by considering citizens' behaviors.
Topics: Vehicle to Grid, Multi Agent Systems Simulations, Electric Vehicles
Research objectives and methods: This research aims at developing novel simulation tools for smart city/smart grid scenarios that exploit the Agent-Based Modelling (ABM) approach to evaluate novel strategies to manage V2X connectivity together with traffic simulation. The candidate will develop an ABM simulator that will provide a realistic virtual city where different scenarios will be executed. The ABM should be based on real data, demand profiles and traffic patterns. Furthermore, the simulation framework should be flexible and extendable so that i) it can be improved with new data from the field; ii) it can be interfaced with other simulation layers (i.e. physical grid simulators, communication simulators); iii) it can interact with external tools executing real policies (such as energy aggregation). This simulator will be a useful tool to analyse how V2X connectivity and the associated services impact both social behaviours and traffic. It will also help the understanding of the impact of new actors and companies (e.g., sharing companies) on both the marketplace and society, again by analysing the social behaviours and the traffic conditions. In a nutshell, the ABM simulator will simulate both traffic variations and the possible advantages of V2X connectivity strategies in a smart grid context. The ABM simulator will be designed and developed to span different spatial-temporal resolutions. All the software entities will be coupled with external simulators of grids and energy sources in a plug-and-play fashion, ready to be integrated with external simulators and platforms. This will enhance the resulting ABM framework, also unlocking hardware-in-the-loop features.
The outcome of this research will be an agent-based modelling tool that can be exploited for:
• Simulating V2X connectivity considering traffic conditions
• Evaluating the effect of different policies and related customer satisfaction
• Evaluating the diffusion and acceptance of demand flexibility strategies
• Evaluating new business models for future companies and services

During the 1st year, the candidate will study state-of-the-art agent-based modelling tools to identify the best available solution for large-scale traffic simulation in distributed environments. Furthermore, the candidate will review the state of the art of V2X connectivity to identify the challenges and possible innovations. Moreover, the candidate will review Artificial Intelligence algorithms for simulating traffic conditions and variations and for estimating EV flexibility and users' preferences. Finally, he/she will start the design of the overall ABM framework and algorithms, starting with the requirements identification and definition.
During the 2nd year, the candidate will complete the design phase, start the implementation of the agents' intelligence, and test the first version of the proposed solution.
During the 3rd year, the candidate will finalize the overall ABM framework and AI algorithms and test them in different case studies and scenarios to assess the impact of V2X connection strategies and novel business models.

Possible international scientific journals and conferences:
• IEEE Transactions on Smart Grid,
• IEEE Transactions on Evolutionary Computation,
• IEEE Transactions on Control of Network Systems,
• Environmental Modelling & Software,
• JASSS,
• ACM e-Energy,
• IEEE EEEIC internat. conf.
• IEEE SEST internat. conf.
• IEEE Compsac internat. conf.
Required skills: • Programming and Object-Oriented Programming (preferably in Python),
• Frameworks for Multi-Agent Systems Development (preferred)
• Development in web environments (e.g. REST web services),
• Computer Networks

Group website: www.eda.polito.it
Summary of the proposal: This research proposal aims at designing and developing a novel ICT solution to enhance the quality of life of people with frailty due to neurodegenerative diseases. By combining the Internet of Things (IoT), Machine Learning (ML) and Virtual and Augmented Reality (VAR), it aims at i) offering patients remote cultural activities using VAR technologies, ii) enabling remote monitoring and actuation of smart environments, and iii) supporting medical staff with an advanced remote monitoring system.
Topics: Virtual and Augmented Reality, Internet of Things, Machine Learning
Research objectives and methods: This research project aims at developing Digital Twin models based on the Internet of Things (IoT), Machine Learning (ML) and Virtual and Augmented Reality (VAR) to enhance the quality of life of people with frailty due to neurodegenerative diseases such as Amyotrophic Lateral Sclerosis, Multiple Sclerosis, Senile Dementia, Parkinson's and Alzheimer's, making them more autonomous. The final solution aims at enabling i) accurate remote monitoring and ii) support for medical staff. These diseases are highly relevant in today's societies. As an example, dementia is increasingly widespread in the population and has been defined, according to the World Health Organization and Alzheimer's Disease International report, as a world public health priority: "In 2010, 35.6 million people were affected by dementia, with an estimated doubling by 2030 and tripling by 2050, with 7.7 million new cases per year (1 every 4 seconds) and with an average survival, after diagnosis, of 4-8 years. The estimated cost is $604 billion per year, with an increasing and continuing challenge to health systems. All countries must include dementia in their public health programs. Programs and coordination are needed at the international, national, regional and local levels and among all interested parties." The objectives are:
• improving the remote monitoring of people with frailty through new generation IoT devices;
• supporting hospital medical staff in activities with patients, both in person and remotely, to improve personalized service and reduce hospitalization costs;
• promoting the interoperability of data relating to hospital infrastructures and IoT devices through the creation of a Digital Twin model of the infrastructure itself;
• improving the quality of life of patients by offering remote cultural activities using VAR technologies to:
• attend cultural sites (museums, theaters, etc.) in an augmented or virtual manner in relation to the progress of the disease;
• take advantage of the environments of knowledge, being and feeling based on cultural heritage, aimed at setting up customizable qualification programs for care in the ward and at home, differentiated by pathology, severity and age.
The methodology adopted will be iterative and will include the following logical steps: (i) the definition of the end users, who will be heterogeneous and will range from patients to medical health personnel, including also technical personnel of cultural heritage; (ii) the identification and analysis of requirements for the development of a distributed software platform; (iii) the integration of bio-medical and environmental IoT devices into the software platform; (iv) the definition of the static contents of the Digital Twin necessary for the creation of the digital model of hospital, housing and cultural infrastructures; (v) the analysis and modeling of the information necessary for the remote monitoring of vital parameters, for visits to cultural heritage sites and for the enabling programs, which will be managed as dynamic data within the Digital Twin; (vi) the development of ML algorithms for the real-time personalization of the activities that users can perform and of the multimedia contents that can be used with the VAR tools.

During the 1st year, the candidate will study the state-of-the-art of existing digital twin, BIM, IoT and VAR solutions for remote monitoring of patients. Then, he/she will start the design of digital twin models.
During the 2nd year, the candidate will end the design phase and will start the implementation of digital twin models to create the first beta version of the software tools.
During the 3rd year, the candidate will finalize the aforementioned solutions and will test them in a real-world hospital.

Possible international scientific journals and conferences:
• IEEE Internet of Things Journal,
• IEEE Transactions on Biomedical Engineering,
• IEEE Journal of Biomedical and Health Informatics,
• IEEE Transactions on Neural Systems and Rehabilitation Engineering,
• Journal of Biomedical Engineering, Elsevier,
• Computer Methods and Programs in Biomedicine, Elsevier,
• Engineering Applications of Artificial Intelligence,
• Expert Systems with Applications,
• IEEE EEEIC internat. conf.
• IEEE SEST internat. conf.
• IEEE Compsac internat. conf.
Required skills: • Programming and Object-Oriented Programming (preferably in Python),
• Development in web environments (e.g. REST web services),
• Computer Networks

Group website: https://netgroup.polito.it
Project website: h...
Summary of the proposal: This project proposes to aggregate the huge number of traditional computing/storage devices available in modern environments (such as desktop/laptop computers, embedded devices, etc.), which run mostly underutilized, into an “opportunistic” datacenter, hence replacing the current micro-datacenters at the edge of the network, with consequent potential savings in energy and CAPEX. This would transform all the current computing hosts into datacenter nodes, including the operating system software. The current Ph.D. proposal aims at investigating the problems that may arise in the above scenario, such as defining a set of algorithms that allow orchestrating jobs on an “opportunistic” datacenter, as well as building a proof of concept showing the above system in action.
Topics: Cloud computing, edge computing
Research objectives and methods: Cloud-native technologies are increasingly deployed at the edge of the network, usually through tiny datacenters made of a few servers that maintain the main characteristics (powerful CPUs, high-speed network) of the well-known cloud datacenters. However, most current domestic and enterprise environments host a huge number of traditional computing/storage devices, such as desktop/laptop computers, embedded devices, and more, which run mostly underutilized.
This project proposes to aggregate the above available hardware into an “opportunistic” datacenter, hence replacing the current micro-datacenters at the edge of the network, with consequent potential savings in energy and CAPEX. This would transform all the current computing hosts into datacenter nodes, including the operating system software.
The current Ph.D. proposal aims at investigating the problems that may arise in the above scenario, such as defining a set of algorithms that allow orchestrating jobs on an “opportunistic” datacenter, as well as building a proof of concept showing the above system in action.

The objectives of the present research are the following:
• Evaluate the economic potential impact (in terms of hardware expenditure, i.e., Capital Expenditures - CAPEX, and energy savings, i.e., Operating Expenses - OPEX) of such a scenario, in order to validate its economic sustainability and the impact in terms of energy consumption.
• Extend existing operating systems (e.g., Linux) with lightweight distributed processing/storage capabilities, in order to allow current devices to host “foreign” applications (in case of availability of resources), or to borrow resources from other machines and delegate the execution of some of their tasks to the remote device.
• Define the algorithms for job orchestration on the “opportunistic” datacenter, which may differ considerably from the traditional orchestration algorithms (limited network bandwidth between nodes; highly different node capabilities in terms of CPU/RAM/etc; reliability considerations; necessity to leave free resources to the desktop owner, etc).
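As a toy illustration of what such an orchestration algorithm must weigh (limited bandwidth, heterogeneous capabilities, reliability, and headroom reserved for the device owner), the sketch below filters and scores candidate nodes; the weights and node attributes are illustrative assumptions, not the algorithm to be developed:

```python
def schedule(job, nodes, headroom=0.25):
    """Pick the best node for a job while keeping a fraction of each node's
    resources free for its owner. Returns the chosen node's name, or None."""
    feasible = []
    for n in nodes:
        cpu_free = n["cpu"] * (1 - headroom) - n["cpu_used"]
        ram_free = n["ram"] * (1 - headroom) - n["ram_used"]
        if cpu_free < job["cpu"] or ram_free < job["ram"]:
            continue  # filter step: not enough spare capacity
        # score step: favor reliable, well-connected, lightly loaded nodes
        score = (2.0 * n["availability"]      # likelihood the node stays online
                 + 1.0 * n["mbps"] / 1000     # network bandwidth to the node
                 + 0.5 * cpu_free / n["cpu"]) # remaining compute slack
        feasible.append((score, n["name"]))
    return max(feasible)[1] if feasible else None

nodes = [
    {"name": "desktop-1", "cpu": 8, "cpu_used": 6, "ram": 16, "ram_used": 4,
     "availability": 0.9, "mbps": 1000},
    {"name": "laptop-2", "cpu": 4, "cpu_used": 0, "ram": 8, "ram_used": 1,
     "availability": 0.6, "mbps": 100},
]
chosen = schedule({"cpu": 2, "ram": 2}, nodes)
print(chosen)  # desktop-1 is busy, so the job lands on the idle laptop
```

Unlike a classic datacenter scheduler, the headroom term models the fact that the desktop owner, not the orchestrator, has priority on the machine, which is one of the constraints that make this setting different from traditional orchestration.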

The research activity is part of the Horizon Europe FLUIDOS project (https://www.fluidos.eu/) and it is related to current active collaborations with Aruba S.p.A. (https://www.aruba.it/) and Tiesse (http://www.tiesse.com/).

The research activity will be organized in three phases:
• Phase 1 (Y1): Economic and energy impact of opportunistic datacenters. This would include real-world measurements in different environment conditions (e.g., University lab; domestic environment; factory) about computing characteristics and energy consumption and the creation of a model to assess potential savings (economic/energy).
• Phase 2 (Y2): Job orchestration on opportunistic datacenters. This would include real-world measurements of the features required for distributed orchestration algorithms (CPU/memory/storage consumption; device availability; network characteristics), and the definition of a scheduling model that achieves the foreseen objectives, evaluated with simulations.
• Phase 3 (Y3): Experimenting with opportunistic datacenters. This would include the creation of a proof of concept of the defined orchestration algorithm, executed on real platforms, with real-world measurements of the behavior of the above algorithm in a specific use-case (e.g., University computing lab, factory with many data acquisition devices, etc.)

Expected target conferences are the following:
Top conferences:
• USENIX Symposium on Operating Systems Design and Implementation (OSDI)
• USENIX Symposium on Networked Systems Design and Implementation (NSDI)
• International Conference on Computer Communications (INFOCOM)
• ACM European Conference on Computer Systems (EuroSys)
• ACM Symposium on Principles of Distributed Computing (PODC)
• ACM Symposium on Operating Systems Principles (SOSP)

Journals:
• IEEE/ACM Transactions on Networking
• IEEE Transactions on Computers
• ACM Transactions on Computer Systems (TOCS)
• IEEE Transactions on Cloud Computing

Magazines:
• IEEE Computer
Required skills: The ideal candidate has good knowledge and experience in cloud computing and networking. Availability for spending periods abroad would be preferred for a more profitable investigation of the research topic.

Group website: https://www.polito.it/cgvg
Summary of the proposal: Vision and Language models (VLMs) are novel AI approaches that can understand and generate both visual and textual information, and are used for a variety of tasks such as generation, retrieval, and classification. This project will focus on improving the limited generalization capabilities, unbalanced cross-modal retrieval, and lack of interpretability of current VLMs through different approaches (such as transfer learning, unsupervised domain adaptation, and explainable AI techniques).
Topics: Machine Learning, Vision and Language models, domain adaptation
Research objectives and methods: Research Objectives
Vision and Language models (VLMs) are AI models able to understand and generate both visual and textual information. These models are designed to understand the relationship between visual and linguistic data, and can be used to perform a wide range of tasks: generation tasks (such as visual question answering, visual captioning, text-to-image generation), retrieval tasks (i.e., retrieving images based on a textual description, or navigating an environment based on textual information), and classification tasks. Depending on the task at hand, different architectures have been proposed, but their common characteristic is that, to learn how to solve the task they are designed for, they need to be trained on very large datasets containing pairs of images/videos and associated text (which depends on the task at hand). VLMs are a rapidly evolving field, with ongoing research focused on improving their performance and applicability in real-world applications. In particular, the project will tackle the following issues.

Limited generalization capabilities. One of the main limitations of current VLMs is their lack of generalization: they tend to perform well on the specific tasks and datasets they were trained on, but their performance degrades when applied to new or unseen data. This is an issue in particular when these models must be adapted to domains where training data are scarce, or where differences between the classes to be recognized are subtle. For instance, in insurance applications, segmenting the image region where a specific type of damage is present can be challenging due to the similarities between the damage classes and their severity levels. To address this problem, the research will investigate novel transfer learning and unsupervised domain adaptation (UDA) techniques (i.e., where only unlabeled target samples are available), focusing on challenging settings like zero-shot, one-shot and few-shot UDA.

Unbalanced cross-modal retrieval. Cross-modal retrieval is the task of retrieving relevant assets from one modality (e.g., image) given a query from another modality (e.g., text). We distinguish between unimodal search spaces, which contain only assets of one modality to be retrieved (e.g., only images), and multimodal search spaces, which contain assets from all modalities. The main issue in this context is that VLMs show robust performance in unimodal search spaces, while a gap arises in multimodal search spaces, affecting the efficacy of retrieving images and text related to the same search query altogether. The project will investigate different approaches to soften this problem of unbalanced retrieval.
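The modality gap can be illustrated with hand-crafted stand-ins for joint-embedding vectors (in a real VLM these would come from the image and text encoders); the per-modality score normalization used below is just one assumed mitigation among those the project could investigate:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

# Hand-crafted stand-ins for joint-embedding vectors: the text items sit in a
# slightly shifted region of the space, mimicking the modality gap.
gallery = {
    ("image", "dog_photo"): [0.9, 0.1, 0.0],
    ("image", "cat_photo"): [0.1, 0.9, 0.0],
    ("text", "a dog"):      [0.8, 0.1, 0.3],
    ("text", "a cat"):      [0.1, 0.8, 0.3],
}

def retrieve(query_vec, per_modality_norm=False):
    """Rank a multimodal search space by cosine similarity. With
    per_modality_norm, scores are mean-centered within each modality
    to soften the modality gap."""
    scores = {k: cosine(query_vec, v) for k, v in gallery.items()}
    if per_modality_norm:
        for modality in ("image", "text"):
            keys = [k for k in scores if k[0] == modality]
            mean = sum(scores[k] for k in keys) / len(keys)
            for k in keys:
                scores[k] -= mean
    return sorted(scores, key=scores.get, reverse=True)

query = [0.85, 0.15, 0.25]  # a query embedded near the text region
print(retrieve(query)[0])                          # raw scores favor text
print(retrieve(query, per_modality_norm=True)[0])  # centering recovers the image
```

With raw similarities, items of the query's own modality dominate the ranking; mean-centering per modality is one simple way to rebalance a multimodal search space, of the kind the project would study and refine.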

Lack of interpretability. Current VLMs are often considered "black box" models, which makes it difficult to understand how they make their decisions. This can be a problem for applications where trust and transparency are important, such as the medical, legal or insurance domains. Therefore, the project will investigate different approaches to address this problem, such as techniques to visualize the internal representations and decision-making process of the model. Another possibility to extract insights from the model is eXplainable AI (XAI), which includes methods like saliency maps, layer-wise relevance propagation, and counterfactual analysis.
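As a minimal sketch of one such technique, the code below computes an occlusion-style saliency map: each input feature is replaced by a baseline value and the drop in the model's score is recorded. The linear "model" and its weights are illustrative stand-ins for a trained network:

```python
def occlusion_saliency(model, x, baseline=0.0):
    """Saliency of each feature = score drop when that feature is occluded."""
    base_score = model(x)
    saliency = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # hide one feature at a time
        saliency.append(base_score - model(occluded))
    return saliency

# Toy stand-in for a VLM matching score: a fixed weighted sum, not a network.
weights = [0.7, 0.1, 0.2]
def model(features):
    return sum(w * v for w, v in zip(weights, features))

scores = occlusion_saliency(model, [1.0, 1.0, 1.0])
print(scores)  # the first feature contributes most to the score
```

On images the same idea is applied patch-wise, producing a heat map of the regions that drive the model's decision; gradient-based saliency and layer-wise relevance propagation refine this basic perturbation idea.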

Work plan
Phase 1 (months 0-6): review of the current state-of-the-art on VLMs, to highlight current issues and possible solutions.
Phase 2 (months 6-24): development of novel UDA approaches for VLMs and XAI techniques for improving their explainability. Proposed solutions will be assessed as well on a private dataset of images and textual reports on home damage assessment provided by Intesa San Paolo Assicurazioni.
Phase 3 (months 18-30): development of approaches improving current cross-modal retrieval approaches.
Phase 4 (months 24-36): extension and generalization of the previous phases to include additional contexts and use cases.

Possible publication venues
International peer-reviewed journals in the fields related to the current proposal, such as: IEEE Transactions on Image Processing, IEEE Transactions on Pattern Analysis and Machine Intelligence, Pattern Recognition, Computer Vision and Image Understanding, International Journal of Computer Vision. Top-tier international conferences, such as CVPR, ICCV, ECCV, NeurIPS, ICPR, WSDM, ACM Web Conference, CIKM, ISWC

Collaborations
The proposal is strictly related to the DARE (Developing AI for Risk management in the insurancE industry) project, funded by Fondazione Compagnia di San Paolo on the Artificial Intelligence call 2022. The project focuses on the application of AI in the risk management and assessment of "non-motor" claims (in particular, claims related to home accidents). The project is also part of an ongoing collaboration with LINKS Foundation on the topic of VLMs.
Required skills: Soft skills: autonomy, creativity, proactivity, team working attitudes and good communication skills.
Hard skills: computer science degree (or similar) with a provable background in machine learning. The candidate should have a good knowledge of the main programming languages (python in particular) and software design paradigms.

Group website: https://www.polito.it/cgvg
Summary of the proposal: ECAs are virtual characters capable of simulating a human-like conversation using natural language processing and multimedia communications. ECAs can be used in a plethora of applications, from education to training, healthcare, virtual assistants and virtual companions. In many of these fields, improving ECAs' effectiveness and realism requires making them capable of expressing believable emotions and leveraging the analysis of human affect to create a strong empathic bond with the end-users.
Topics: Embodied Conversation Agent, affective computing, mixed reality
Research objectives and methods: Research Objectives
Recent advances in Machine Learning and real-time graphics computing have generated a growing interest in Embodied Conversational Agents (ECAs), i.e., virtual characters capable of simulating a human-like face-to-face conversation using natural language processing (NLP) and multimedia communicative behaviors that include verbal and non-verbal cues. The availability of increasingly powerful and connected sensors allows ECAs to access contextual information and interact autonomously with humans and the environment. The possibility of leveraging a virtual body and voice for interaction should be complemented by social-emotional behaviors expressed by the ECA. In other words, ECAs should have a personality, emotions, and intentions, and should enhance their social interaction with the user by simulating and triggering empathy. With these capabilities, ECAs have the potential to play an increasingly important role in a plethora of applications, ranging from educational and training environments to health and medical care, virtual assistants in industry, and virtual companions in games. Despite this, improving the effectiveness of ECAs requires substantial research contributions. First, ECAs should be made simple to design and implement. Indeed, ECAs require taking into account different elements (NLP, context sensing, emotion modelling, affective computing, 3D animations) that involve specific technological and technical skills. Thus, the first outcome of the project will be the development of E2CA, a simple framework that supports the rapid design and deployment of emotion-aware ECAs and facilitates the introduction of novel features expanding current ECA capabilities. Then, ECAs should be equipped with the capability of fully expressing and conveying (believable) emotions and leveraging the (fine-grained) analysis of human affect to create a robust empathic bond with the end-users.
To this end, the project aims at improving the quality and effectiveness of the end-user experience with ECAs by tackling the following issues.
• A substantial portion of our communication consists of nonverbal cues and behaviors (postures, facial expressions, gestures). However, some of these cues are currently little used in this context or are difficult to convey in a realistic and believable manner.
• A similar comment can be made for paralinguistic factors (voice tone, loudness, inflection, and pitch), which provide information about the actual emotional state of the other peer in the conversation and allow the ECA's response to be modulated accordingly. Current text-to-speech libraries, however, only partially manage such paralinguistic factors.
• As for the management of voice interaction between the user and the ECA, defining the set of intents and the training set of utterances for their recognition is complicated and tedious; this calls for automatic approaches that harvest and mine textual corpora related to the specific scenario addressed.
Finally, the project foresees the development of advanced Machine Learning approaches that exploit multimedia information to capture users' affect and actions, and use this information to improve E2CA's empathic and emotional response.
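As a concrete illustration of the intent-recognition problem mentioned above, the following is a minimal, deliberately simple baseline: intents are matched by average bag-of-words cosine similarity against a handful of training utterances. The intent names and utterances are hypothetical examples, not part of the project; a real system would mine them from a scenario-specific corpus and use neural models.

```python
import math
from collections import Counter

# Hypothetical intents and training utterances (illustrative only);
# the proposal envisions mining these automatically from a domain corpus.
INTENTS = {
    "greet": ["hello there", "good morning doctor", "hi how are you"],
    "report_symptom": ["my head hurts", "i feel pain in my chest", "i have a headache"],
}

def bow(text):
    # Bag-of-words vector represented as a token-count dictionary.
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify_intent(utterance):
    # Return the intent whose training utterances are, on average, closest.
    scores = {
        intent: sum(cosine(bow(utterance), bow(u)) for u in examples) / len(examples)
        for intent, examples in INTENTS.items()
    }
    return max(scores, key=scores.get)
```

The tedium the proposal points at is visible even here: every new intent needs hand-written utterances, which motivates the automatic corpus-mining approaches envisioned by the project.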

Work plan
In the first year, the project will start with a systematic review of the current state of the art of ECAs. The outcome of this work will serve to analyze the current limitations, identify promising approaches, and inform the design and development of the solutions to address the problems described in the research proposal. The candidate will also start designing, developing, and assessing a framework that allows the rapid prototyping and deployment of ECAs in different contexts.
In the following years, the candidate will develop solutions addressing the above-mentioned objectives. These solutions will aim at extending the capabilities offered by this framework and, ultimately, the features that ECAs can exploit to improve the quality and effectiveness of the interaction with the end-users. The proposed techniques will be applied to relevant use-cases using VR and AR applications and will be validated through user studies involving panels of volunteers.

Possible publication venues
IEEE Transactions on Affective Computing, IEEE Transactions on Visualization and Computer Graphics, ACM Transactions on Graphics, Pattern Recognition, ACM Transactions on Computer-Human Interaction, and similar. Top-tier conferences related to the project topics.

Collaborations
This project involves two main collaborations with the Department of Psychology and Neuroscience of the University of Turin. The first is with the ContExp Lab (Prof. Elisa Carlino), investigating the use of E2CA as a virtual physician in placebo treatments; the second is with the Vision and Affective Neuroscience Lab (Prof. Olga Del Monte), developing virtual environments for the treatment of social anxiety.
Required skills: Soft skills: autonomy, creativity, proactivity, teamwork attitude, and good communication skills.
Hard skills: a computer science degree (or similar) with a strong and provable background in real-time computer graphics. In particular, the candidate should have experience in designing and developing AR/VR applications and a good knowledge of the main programming languages (C#, Python) and software design paradigms.

Group website: http://grains.polito.it/index.php
Summary of the proposal: The main goal is to provide an innovative methodology and solution for vehicle owner recognition in emergency access conditions that can operate “in the wild”, i.e., that can quickly and accurately detect, recognize, and verify the owner’s identity in the presence of harsh conditions (highly variable lighting, presence of other subjects, impact of possibly constrained vision systems on image quality, etc.). Attention will be focused on face and body pose. Synthetic datasets are expected to be exploited in the training of the devised models for data quality, scalability, ease of use, and privacy compliance purposes. The activities will be carried out in collaboration with CRF and STELLANTIS.
Topics: AI for human-vehicle interaction, Face verification/recognition, Human pose estimation
Research objectives and methods: Face recognition is one of the key components of future intelligent vehicle applications, such as determining whether a person is authorized to operate the vehicle. The challenge is to build a fast and accurate system that can detect, recognize, and verify the driver’s identity in the presence of the constraints introduced by the car environment, under different lighting and other external conditions. Furthermore, if the problem of recognizing the owner of a vehicle moves outside the car itself, that is, if the system must recognize the owner in emergency access conditions, the problem becomes even tougher. Considering the challenges related to such an “in the wild” recognition, no complete face recognition system tailored to the car environment has been reported yet. The above considerations can easily be extended to human pose estimation, which, together with face verification/recognition, can play a key role in next-generation human-vehicle interaction. Hence, the main goal of this research will be the design and implementation of novel machine/deep learning algorithms supporting face and body pose recognition in the mentioned conditions. Based on the described context, the work plan for the research will be organized as follows:
- Development of a semi-automatic computer graphics pipeline to simulate different images of the person considered as the vehicle owner; extensive domain randomization will be used to train models robust to variations in terms of, e.g., skin color diversity, aging, facial changes, etc.
- Design of a semi-automatic computer graphics pipeline to simulate different types of scenarios in the proximity of the vehicle; domain randomization will be further employed to train models robust to variations in terms of, e.g., illumination, presence of other human and non-human subjects, etc.
- Release of the created datasets containing the annotated images, which allow suitable algorithms to (learn how to) differentiate subtle intra-class variations (including different setups of the same face and person pose model) from deformations occurring due to various reasons.
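The domain randomization mentioned in the steps above can be sketched as sampling a randomized scene configuration per synthetic image. The parameter names and ranges below are assumptions for illustration, not the project's actual pipeline; the real system would feed such configurations to a computer graphics renderer.

```python
import random

# Illustrative domain-randomization sketch: each synthetic training image gets
# a randomly sampled scene configuration, so the trained model becomes robust
# to lighting, subject, and pose variation. All parameter names/ranges are
# hypothetical assumptions.
def sample_randomization(rng):
    return {
        "sun_elevation_deg": rng.uniform(0.0, 90.0),   # time-of-day lighting
        "light_intensity": rng.uniform(0.2, 1.5),      # under-/over-exposure
        "skin_tone_index": rng.randrange(10),          # skin color diversity
        "subject_age_years": rng.randrange(18, 90),    # aging variation
        "n_bystanders": rng.randrange(0, 5),           # other subjects near the car
        "head_yaw_deg": rng.uniform(-60.0, 60.0),      # face/body pose variation
    }

def build_dataset_configs(n_images, seed=0):
    # Seeded RNG so the synthetic dataset can be regenerated reproducibly.
    rng = random.Random(seed)
    return [sample_randomization(rng) for _ in range(n_images)]
```

The key design point is that the randomization ranges, not real-world data, define the variability the model sees, which is what makes synthetic datasets attractive for privacy compliance.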

In the 1st year, the candidate will carry out an overall evaluation of the main research goal, review the state of the art in the field, and define an appropriate application as a study case for emergency access, setting up the functional architecture of the algorithm. A first feasibility evaluation, considering different sensor configurations and infrastructure information, will also be performed. In the 2nd year, the candidate will focus on deep learning methods for face detection and recognition and human pose estimation, tracking the latest research trends and analyzing the characteristics of the methods devised so far for the considered use case. Synthetic data generation will also be part of the main activity to be carried out, and the created datasets will be employed for training. In the 3rd year, the candidate will develop and implement the whole emergency access algorithm in the cockpit, including model training. The possibility of using experimental data from a demo car will also be considered. Finally, the candidate will work on the validation of the proposed solution's performance.
During the 2nd and 3rd years, the candidate will perform a significant part of the testing at the STELLANTIS premises. The central part of the research will be carried out at the CRF facilities in Turin and remotely with the AEES/SCIC team and facilities (Vélizy-Villacoublay, Paris). The candidate will also spend a period abroad at a STELLANTIS group facility (probably in France).

The research activity will aim to disseminate the proposed solutions in the primary publication venues and the automotive industry, encouraging cross-fertilization of research results and new collaborations with other research actors. In particular, publications will target conferences (e.g., CVPR, ECCV, ICPR) and journals in the areas of machine/deep learning, computer vision and vehicular technologies. Target journals include, e.g., IEEE Transactions on Pattern Analysis and Machine Intelligence, International Journal of Computer Vision, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Visualization and Computer Graphics, Pattern Recognition, Computer Vision and Image Understanding, Journal of Machine Learning Research, IEEE Transactions on Vehicular Technology.
Required skills: Programming skills in C/C++/Python.
Knowledge of machine/deep learning and computer vision techniques and frameworks.

Group website: http://netgroup.polito.it
Summary of the proposal: In next-generation Cloud Environments, security requirements should be enforced automatically, rather than manually as happens today: administrators should only specify some intents, and the system should automatically enforce them in the best way. Today, this kind of automation has been reached for resource scaling, but not for security policies. This research aims to fill this gap by defining novel, highly automated, intent-based approaches to enforce security in a formally correct and optimized way.
Topics: Cybersecurity, Cloud Computing, Security Automation and Optimization
Research objectives and methods: The main objective of the proposed research is to improve the state of the art of security automation and optimization in cloud-based systems, focusing especially on the automated implementation of access control and isolation policies, as well as of security principles like least privilege and zero trust. Although some of the tools available today (e.g., service mesh platforms like Istio) partially support these activities, they still have serious limitations, especially because they leave much of the work and responsibility to the human user, who must write several policies and configurations manually, with the risk of introducing errors that undermine security. The candidate will pursue intent-based, highly automated approaches that limit human intervention as much as possible, thus reducing the risk of introducing human errors and speeding up reconfigurations. This last aspect is very important because cloud-based systems are highly dynamic. Moreover, in case security attacks or policy violations are detected at runtime, the system should be able to recover rapidly by promptly reconfiguring its security policies. Another feature that the candidate will pursue in the proposed solution is a formal approach, capable of providing formal correctness by construction. In this way, high confidence in correctness is achieved without the need for a posteriori formal verification of the solution. Finally, the proposed approach will pursue optimization, by selecting the best solution among the many possible ones. In this work, the candidate will exploit the results and the expertise recently achieved by the proposer’s research group in the related field of network security automation. Although there are big differences between the two application fields, there are also some similarities, and the group’s underlying expertise in formal methods will be fundamental in the candidate’s research work. 
If successful, this research work can have a high impact because, by improving automation, it can simplify and improve the quality of reconfigurations in cloud environments, which are nowadays crucial for our society.
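To make the intent-refinement idea concrete, here is a minimal sketch (not the proposed approach itself): a high-level reachability intent is compiled into an explicit deny-by-default allow-list, so that every flow not covered by an intent is forbidden, in line with the least-privilege principle. Service names are hypothetical; a real refinement engine would emit platform-specific artifacts such as Istio authorization policies or Kubernetes network policies.

```python
# Toy intent-refinement sketch: administrator-level intents (source may reach
# destination) are expanded into a complete, deny-by-default rule set.
# Service names are illustrative assumptions.
SERVICES = ["frontend", "api", "db", "metrics"]

def refine(intents):
    # intents: list of (source, destination) pairs the administrator allows.
    allowed = set(intents)
    rules = []
    for src in SERVICES:
        for dst in SERVICES:
            if src != dst:
                action = "ALLOW" if (src, dst) in allowed else "DENY"
                rules.append((src, dst, action))
    return rules

# Two intents expand into an explicit rule for every ordered service pair.
rules = refine([("frontend", "api"), ("api", "db")])
```

Even this toy version shows why automation matters: two intents already expand into twelve explicit rules, and the real problem additionally involves optimization and formal correctness guarantees.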

The research activity will be organized in three phases:
Phase 1 (1st year): The candidate will analyze and identify the main issues and limitations of recent technologies in the Cloud Computing environment (e.g., Kubernetes) and of approaches/frameworks related to service communication and management (e.g., Istio). The candidate will also study the state-of-the-art literature on security automation and optimization in Cloud Computing environments, with particular attention to formal approaches for modeling, refining, and verifying security. Subsequently, with the tutor's guidance, the candidate will start identifying and defining new approaches for the automatic and optimized enforcement, via intents, of access control and service isolation in Cloud systems. At the end of this phase, some preliminary results are expected to be published. During the first year, the candidate will also acquire the background necessary for the research, by attending courses and through personal study.

Phase 2 (2nd year): The candidate will consolidate the proposed approaches, will fully implement them, and will conduct experiments with them, e.g., to study their correctness, generality and performance. In this year, particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. The results of this consolidated work will also be submitted for publication, aiming at least at a journal publication.

Phase 3 (3rd year): based on the results achieved in the previous phase, the proposed approach will be further refined, to improve its scalability, performance, and applicability (e.g., different security policies and strategies will be considered), and the related dissemination activity will be completed.

The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking and cloud (e.g., INFOCOM, IEEE/ACM Transactions on Networking, IEEE Transactions on Cloud Computing, IEEE Transactions on Network and Service Management, NetSoft), cybersecurity (e.g., IEEE S&P, ACM CCS, NDSS, ESORICS, IFIP SEC, DSN, ACM Transactions on Information and System Security, IEEE Transactions on Dependable and Secure Computing), and applications (e.g., IEEE Transactions on Industrial Informatics or IEEE Transactions on Vehicular Technology).
Required skills: In order to successfully develop the proposed activity, the candidate should have a good background in cybersecurity (especially in network security) and Cloud systems, and good programming skills. Some knowledge of formal methods can be useful, but it is not required: the candidate can acquire this knowledge and related skills as part of the PhD Program, by exploiting specialized courses.

Group website: https://elite.polito.it
Summary of the proposal: Media artists can use smartphones and IoT-enabled devices as material for creative exploration. However, some of them do not code and have little interest in learning. Those who do code recognize that programming artworks involves characteristics that differ from traditional coding.

This Ph.D. proposal aims to extend our comprehension of artists' needs for creative coding. In doing so, it investigates the design, implementation, and evaluation of toolkits that effectively support them in realizing code-based artworks across multiple devices and media.
Topics: Human-Computer Interaction, Creative Coding, Multimedia
Research objectives and methods: The recent availability of smartphones, AR/VR headsets, IoT-enabled devices, and microcontroller kits creates new opportunities for creative explorations for media artists and designers. The field of “creative coding” emphasizes the goal of expression rather than function, and creative coders combine computational skills with creative insight. In some cases, artists and designers are interested in creative coding but lack the knowledge or programming skills to benefit from the offered possibilities.

The main research objective of this Ph.D. proposal is to extend our comprehension of the needs of media artists and designers for creative coding across multiple devices and media. To reach this objective, the Ph.D. student will study, design, develop, and evaluate proper models and novel technical solutions (e.g., toolkits and tools) for supporting creative coders. The proposal envisions focusing on both creative coders and end-user programmers. The work will start from the needs of the stakeholders (i.e., artists and designers), complemented by the existing literature. Using a participatory approach, the Ph.D. student will keep the stakeholders involved in the various phases of the work.
In particular, the Ph.D. research activity will focus on the following:
1) Study of the creative coding field, stakeholders’ needs and current tools, and HCI techniques able to support the identification of suitable requirements and the creation of technical solutions to effectively support creative exploration and coding.
2) Creation of a theoretical framework to satisfy the identified needs and requirements, able to adapt to different media, devices, and skills. For instance, it can include end-user personalization as a way to allow end-users to create code-based artifacts and AI techniques to support the creation of programs.
3) Development of a toolkit and related tools to experiment with the theoretical framework’s facets. The creation and evaluation of the tools will serve as the validation for the framework.

The work plan will be organized according to the following four phases, partially overlapped:
• Phase 1 (months 0-6): literature review about creative coders and coding; focus groups and interviews with designers and media artists of various skills; definitions and development of a set of use cases and promising strategies to be adopted.
• Phase 2 (months 6-18): research, definition, and experimentation of an initial version of the theoretical framework and a first toolkit for creative coders, starting from the outcome of the previous phase. In this phase, the focus will be on the most common target devices, i.e., the smartphone and the PC, with the design, implementation, and evaluation of suitable tools.
• Phase 3 (months 12-24): research, definition, and experimentation of a second toolkit (or an evolution of the previous one) for novice creative coders and end-users. Such a toolkit will use artificial intelligence and machine learning to help during the coding process. This phase also includes the design, implementation, and evaluation of suitable tools, and the extension of the framework.
• Phase 4 (months 24-36): extension and generalization of the previous phases to include additional target devices and consolidate the theoretical framework; evaluation of the toolkit and the tools in real settings with a large number of artists.

It is expected that the results of this research will be published in some of the top conferences in the Human-Computer Interaction field (e.g., ACM CHI, ACM CSCW, ACM C&C, and ACM IUI). One or more journal publications are expected on a subset of the following international journals: ACM Transactions on Computer-Human Interaction, ACM Transactions on Interactive Intelligent Systems, IEEE Transactions on Human-Machine Systems, and International Journal of Human Computer Studies.
Required skills: A candidate interested in the proposal should ideally:
• be able to critically analyze and evaluate existing research, as well as gather and interpret data from various sources;
• be able to effectively communicate research findings through writing and presenting, both to specialist and non-specialist audiences;
• have a solid foundation in computer science/engineering and possess the relevant technical skills;
• have a good understanding of HCI research methods, especially around needfinding;
• be aware of ethical considerations related to the investigated topics, particularly when conducting research on human subjects.

Group website: https://dbdmg.polito.it/dbdmg_web/
Summary of the proposal: The PhD student will develop a multidisciplinary study across the research areas of Deep Learning, Natural Language Understanding, and Linguistics, with the goal of promoting and ensuring equality and inclusion in communication. Linguistic and discursive criteria for modeling diversity in a community and its intersectionality will be defined so as to fully represent them in communication. Deep-learning methods will be developed to identify non-inclusive text snippets and suggest inclusive text reformulations. Novel strategies for training deep-learning models will be coupled with human-in-the-loop analytics strategies to ensure fair, privacy-friendly, and responsible data processing and to create fair and unbiased models.
Topics: Natural language processing, Inclusive communication, Deep Learning
Research objectives and methods: This study aims to overcome the discriminatory use of language within a text through Deep Learning methods, so as to disseminate a correct use of language that reflects the diversity of our society. To achieve this ambitious goal, a number of different but highly interrelated research objectives (RO) will be pursued.

RO1. Automatically process raw input text, identify discriminatory text segments, and suggest alternative inclusive reformulations, that is:
++ RO1a. How to define what linguistic "bias" means, e.g., which linguistic expressions are likely to be correlated with non-inclusive communication.
++ RO1b. Whether Deep Learning techniques are able to override the bias in the input text and produce an appropriate reformulation of the text.
++ RO1c. To what extent large, general, multilingual collections (e.g., Wikipedia) are suitable for learning pre-trained models that can be conveniently tuned to specific tasks (e.g., text reformulation and generation).

RO2. Improve engine learning capabilities by storing and leveraging end-user feedback.

RO3. Define an interactive tool to effectively explore the capabilities of the proposed automatic text rewrite tool.

RO4. Benchmark and evaluate the proposed system tailored to formal communication used mainly in academia and public administration, adapted for two Romance languages (i.e., Italian and French), which are particularly prone to non-inclusive wording.

The above objectives open a broad multidisciplinary research landscape that touches core aspects of both linguistic research and NLP research. The study will promote the application of a Deep Learning-based methodology for processing raw input text, identifying discriminatory text snippets within it, and generating alternative, more inclusive reformulations. To further improve the adaptability of the system, the engine will continuously learn from user feedback and interactions and use the acquired linguistic expertise to improve the ability of the Deep Learning methods to correctly handle the paraphrasing task and improve system performance.
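For RO1, a deliberately simple rule-based baseline helps clarify the detection/reformulation pipeline the deep models must outperform: flag lexicon terms and propose neutral substitutes. The toy lexicon below is an assumption in English for readability; the project targets learned models and Italian/French, where gendered morphology makes the task far harder than word substitution.

```python
import re

# Toy rule-based baseline for RO1 (detection + reformulation). The lexicon is
# a hypothetical illustration; the proposal envisions deep-learning models
# trained on linguistic criteria, not a fixed word list.
LEXICON = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "mankind": "humankind",
}

def suggest_reformulation(text):
    # Collect (position, term, suggestion) triples for every flagged span.
    suggestions = []
    for term, neutral in LEXICON.items():
        for m in re.finditer(r"\b%s\b" % term, text, flags=re.IGNORECASE):
            suggestions.append((m.start(), term, neutral))
    # Apply all substitutions to produce the reformulated text.
    rewritten = text
    for term, neutral in LEXICON.items():
        rewritten = re.sub(r"\b%s\b" % term, neutral, rewritten, flags=re.IGNORECASE)
    return rewritten, suggestions
```

The limits of such a baseline (no context sensitivity, no agreement repair in Romance languages, no handling of discourse-level bias) are precisely what motivates RO1a-RO1c.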

During Year 1, the candidate will study state-of-the-art deep learning algorithms for natural language processing, as well as linguistic and discursive criteria able to model the diversity of our society. The candidate will define new linguistic and discursive criteria based on French discourse analysis (Moirand 2020) and customized for communication typologies (e.g., legal documents, academic texts). In addition, the candidate will propose solutions for three tasks: (1) data modeling, (2) inclusive language classification, and (3) reformulation of inclusive language.

In Year 2, the candidate will evaluate Deep Learning models using intrinsic and extrinsic quality metrics computed from a test set. However, the inherent complexity of the text classification and revision processes requires new strategies, specifically modeling human judgments, to quantify the soundness and completeness of the results obtained. To this aim, the candidate will propose new strategies for capturing and analyzing human preferences to propose alternative texts to different users based on the application scenarios and the users' knowledge and preferences.

In Year 3, the candidate will design a visual interface that allows end users and linguistics experts to easily interact with and use the proposed engine and provide feedback and suggestions to improve the capability of the proposed solution.

During the 2nd and 3rd year, the candidate will evaluate the proposed solutions in two Romance languages, Italian and French. All the research activity will be aimed at disseminating the proposed solutions not only in the primary venues (conferences and journals), but also towards language industries and society, by promoting cross-domain cross-fertilization of research results and new collaborations with other research actors.

Possible publication venues include any of the following journals/conferences:
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
Information sciences (Elsevier)
Machine Learning (Springer)
International Conference on Machine Learning
Required skills: Strong background in data science and deep learning.

Group website: https://smartdata.polito.it/
Summary of the proposal: Social media platforms (such as Facebook and LinkedIn) and instant messaging applications (such as Telegram and WhatsApp) connect people through groups and channels, creating time-evolving networks and graphs. These networks carry information on users' behavior and on the reuse of media content. The research aims to shed light on these complex ecosystems, discover interesting patterns (e.g., polarization), and highlight possible misuse, from cybersecurity threats (e.g., phishing) to misinformation.
Topics: Social networks, Cybersecurity, Data science
Research objectives and methods: Research objectives:
The expected outcome of the research would be a deeper understanding of the dynamics of social media and instant messaging networks, including identifying key patterns, behaviors, and cyber-threats.

The research will follow the following steps:
- Data collection, involving web crawlers to gather data from social media and novel solutions to automatically crawl instant messaging platforms.
- Network creation and study of the characteristics of these networks, both in a single snapshot and as they change over time.
- Understanding how users interact with and within these networks, and what behavior patterns can be observed.
- Highlighting potential anomalies and security threats on these platforms, and how they can be detected and addressed using automatic approaches.
- Understanding how these platforms can be used to spread misinformation.
- Informing strategies for countering cyber threats and misinformation.
The analysis will be performed with machine learning techniques, graph analysis, and natural language processing tools.
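The network-creation and behavior-analysis steps above can be sketched as follows: observed interactions are turned into per-snapshot graphs, and per-user degree is compared across snapshots to spot behavioral change. The data and user names are fabricated for illustration; real analyses would use graph libraries and far richer features.

```python
from collections import defaultdict

# Sketch of time-evolving interaction graphs: each snapshot is an undirected
# adjacency structure built from observed (user, user) interactions.
def build_snapshot(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def degree_change(snap_a, snap_b):
    # Per-user degree difference between two snapshots; a large jump can flag
    # a behavioral change worth investigating (e.g., a suddenly hyperactive account).
    users = set(snap_a) | set(snap_b)
    return {u: len(snap_b.get(u, ())) - len(snap_a.get(u, ())) for u in users}

# Fabricated example: "alice" becomes much more connected in the second week.
week1 = build_snapshot([("alice", "bob"), ("bob", "carol")])
week2 = build_snapshot([("alice", "bob"), ("alice", "carol"), ("alice", "dave")])
```

The same snapshot representation extends naturally to weighted edges (message counts) and to content features extracted with NLP, as planned in the research.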

Outline of the research work plan
1st year
- Study of the state-of-the-art of social network and instant messaging data-driven data collection and analysis
- Design and deployment of a data collection process with automatic and scalable crawlers for the web, social networks, and instant messaging platforms.
- The data will be stored, organized, and analyzed using big data techniques (such as PySpark).

2nd year
- Adaptation and extension of existing solutions to study the networks
- Propose and develop innovative solutions to the problems of cyber threats and misinformation

3rd year
- Tune the developed techniques and highlight possible strategies to counteract the various threats
- Application of the strategies to new data for validation.

List of possible venues for publications
- Elsevier Online Social Networks and Media
- ACM Transactions on the Web
- Springer Social Network Analysis and Mining
- IEEE/ACM ASONAM
- ACM WebSci
Required skills: - programming skills (Python, big data)
- machine learning
- data science for network analysis
- basics of parsing and scraping/crawling web pages

Group website: https://smartdata.polito.it/
Summary of the proposal: Darknets and honeypots are network sensors that collect data about possible cyberattacks. They receive unsolicited network traffic and let security analysts observe new attacks and their evolution. Typically deployed on a single network, they suffer from limited visibility. This research proposal aims to: (i) create a portable system to simplify the deployment and control of sensors in a distributed data-collection infrastructure; (ii) characterize the traffic the distributed sensors collect; (iii) leverage machine learning tools to automatically detect patterns and anomalies in each network; (iv) leverage federated learning methodologies to obtain a general overview of malicious activities from the global platform.
Topics: Network security, machine learning, cybersecurity
Research objectives and methods: Research objectives

Darknets are passive sensors collecting unsolicited traffic in unused address spaces. They provide insight into unsolicited and potentially malicious activities, such as the rise of botnets and DDoS attacks. Honeypots complement the visibility on such traffic. They are active sensors designed to respond to traffic mimicking real systems. Typically, such sensors are limited to specific areas, such as individual campus or corporate networks, providing a limited view from the perspective of a single network. This limitation reduces the ability to obtain general knowledge and representations of malicious activity that may change their behavior when targeting different victims, e.g., residential, institutional, or commercial. To overcome this limitation and extract general knowledge and representations of the malicious activities, this project aims at:
1) To design and implement portable darknets/honeypots that can be easily deployed on any network, including commercial routers and single personal computers. The goal is to create a distributed infrastructure of darknets/honeypots that can be easily installed on heterogeneous networks, e.g., home, office, or hospital networks. The infrastructure will collect and share data to observe the malicious activities targeting different sites. Since sensitive data may be collected, the infrastructure will be designed to anonymize the data so as to protect user privacy.
2) To characterize the data collected using data analysis techniques. The goal is to understand how malicious activity differs depending on the type of sites receiving it.
3) To investigate the usage of advanced machine learning methodologies to automatically process the traffic collected by the distributed infrastructure. First, we will consider the creation of local models capable of detecting patterns and anomalies separately in each deployment.
4) To adopt federated learning methodologies to create a single unified machine learning model that provides a general overview of malicious activity at the global scale. The goal is to integrate the information produced by each local model and site and to create a common, overarching model able to automatically detect malicious activities at scale.
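Objective 4 can be illustrated with the core step of federated averaging: each sensor site keeps its raw traffic local and shares only model parameters, which a coordinator averages, weighted by the number of local training samples. This is a minimal sketch under the assumption that weights are flat vectors; real deployments operate on neural-network parameter tensors and add secure aggregation.

```python
# Minimal federated-averaging sketch (FedAvg-style aggregation). Raw traffic
# never leaves a site; only the parameter vectors below are shared.
def federated_average(local_models):
    # local_models: list of (weights, n_samples) pairs, one per sensor site.
    # Sites with more local training data contribute proportionally more.
    total = sum(n for _, n in local_models)
    dim = len(local_models[0][0])
    global_weights = [
        sum(w[i] * n for w, n in local_models) / total
        for i in range(dim)
    ]
    return global_weights
```

In the envisioned infrastructure, this aggregation would run periodically, fusing the per-deployment anomaly detectors of objective 3 into the global model of objective 4 while preserving the privacy constraints of objective 1.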

Outline of the research work plan

The research work plan for the three-year PhD program is as follows:
First year: After studying the state of the art, the candidate will design and implement a platform, e.g., using containers or virtual machines, to create portable darknets/honeypots, comparing the advantages and limitations of different solutions. The candidate will also investigate how to collect data while maintaining privacy and will deploy darknets and honeypots in heterogeneous networks. At the end of the first year, we expect the candidate to deploy the distributed system in collaboration with partners of the SERICS project funded by the PNRR.

Second year: The candidate will then develop data analysis techniques to automatically process and understand the malicious activity on each network. The candidate will compare the traffic observed across different deployments to examine how malicious activity differs on different sites. Finally, the candidate will start exploring machine learning and artificial intelligence models to automate the analysis of the data.

Third year: In the final year, the candidate will focus on machine learning methods. In particular, the candidate will refine the machine learning methods defined in the last part of the second year. Then, the candidate will investigate and design federated learning methods to fuse local models into a single model that provides a general representation of malicious activity.

List of possible venues for publications
The proposers expect to publish in top-ranking conferences and journals. In detail, we expect to target conferences including the ACM Internet Measurement Conference (IMC), the ACM Conference on emerging Networking EXperiments and Technologies (CoNEXT), the Network Traffic Measurement and Analysis Conference (TMA), the ACM Conference on Computer and Communications Security (CCS), the IEEE Symposium on Security and Privacy (Oakland), and USENIX Security, and journals such as IEEE/ACM Transactions on Networking, IEEE Transactions on Information Forensics and Security, IEEE Transactions on Network and Service Management, and IEEE Transactions on Dependable and Secure Computing.

Projects and collaborations
The research will be carried out in the context of the SERICS project, funded by the PNRR under the PE7 “Partenariato esteso per la Cybersecurity”, and in collaboration with the Ermes Cybersecurity company.
Required skills: The candidate must have excellent knowledge of computer networks and good knowledge of machine learning methodologies. The candidate shall also have knowledge of Big Data platforms, such as Hadoop and Spark.

Group website: https://sites.google.com/site/sophiefosson/home
Summary of the proposal: System identification/learning consists of building mathematical models of dynamical systems from input-output data. In many engineering applications, models are time-varying due to either internal or external changes, such as environmental actions, degradation, faults, and cyber-physical attacks. The identification of time-varying systems is a tracking problem: the estimate of the model must be periodically updated by an online algorithm. The main goal of this PhD activity is to develop and analyze online learning algorithms for time-varying systems, with particular interest in problems where a smoothly time-varying profile is combined with possible sudden changes.
Topics: Optimization algorithms, online estimation, system identification
Research objectives and methods: Research objectives

The objectives of this PhD project are both methodological and applications-oriented. We summarize them as follows.

1) Methodological objectives: development and analysis of online learning algorithms to track time-varying systems with good accuracy and responsiveness to sudden changes. Starting from the literature on online convex optimization, identification of time-varying systems, and hybrid systems, we expect to develop original algorithms that improve the state of the art. The developed algorithms should be supported by theoretical results that guarantee good performance, in terms of solution accuracy and numerical complexity, as well as practical feasibility.

2) Applications-oriented objectives: the developed algorithms will be implemented and tested on real-world case studies, in the fields of automotive and cyber-physical systems.
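To make the tracking objective concrete, one classical baseline such an algorithm would compete with is exponentially weighted recursive least squares, here combined with a simple residual-based reset to react to abrupt changes. Everything below (the forgetting factor, the reset heuristic and its threshold, the toy system) is an illustrative sketch, not the method the project commits to.

```python
import numpy as np

def rls_track(phis, ys, n, lam=0.95, reset_thresh=2.0):
    """Exponentially weighted recursive least squares: tracks slowly varying
    parameters; a large one-step residual (a crude heuristic for an abrupt
    change) triggers a covariance reset so the estimator reacts quickly."""
    theta = np.zeros(n)
    P = np.eye(n) * 100.0                       # large initial uncertainty
    history = []
    for phi, y in zip(phis, ys):
        e = y - phi @ theta                     # one-step prediction residual
        if abs(e) > reset_thresh:               # suspected sudden change
            P = np.eye(n) * 100.0               # discount past data quickly
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * e
        P = (P - np.outer(k, phi @ P)) / lam    # update with forgetting factor
        history.append(theta.copy())
    return np.array(history)

# toy system: parameters drift-free, then an abrupt jump at t = 100
rng = np.random.default_rng(1)
true = np.array([1.0, -0.5])
phis, ys = [], []
for t in range(200):
    if t == 100:
        true = np.array([3.0, 2.0])             # sudden change
    phi = rng.normal(size=2)
    phis.append(phi)
    ys.append(phi @ true + 0.01 * rng.normal())
est = rls_track(phis, ys, n=2)
```

The tension visible here (a small forgetting factor reacts fast but is noisy; a large one is accurate but sluggish) is exactly the accuracy-versus-responsiveness trade-off the methodological objective targets.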

Outline of the research work plan

M1-M6: Study of the literature on online convex optimization and identification of time-varying systems, with particular reference to systems subject to abrupt changes. Development of a general mathematical formulation of the problem, accounting for different possible a priori assumptions on the system's time evolution and on the noise affecting the measured data. Decomposition of the general problem into a sequence of simplified sub-problems.
Milestone 1:
Report on the results available in the literature; theoretical formulation of the problem and analysis of the main difficulties/critical aspects/open problems. The results of this theoretical analysis will be the subject of a first contribution to be submitted to an international conference.

M7-M12: Development and analysis of novel techniques that improve the performance of the state-of-the-art methods for the simplified sub-problems.
Milestone 2:
Results obtained in this stage of the project are expected to be the core of a paper to be submitted to an international journal.

M13-M24: Development and analysis of novel techniques that improve the performance of the state-of-the-art methods, for the most general version of the problem.
Milestone 3:
Results obtained in this stage of the project are expected to be the core of both a conference contribution and a paper to be submitted to an international journal.

M25-M36: Analysis and formulation of suitable strategies for the practical implementation of the proposed techniques/algorithms in the presence of limited computational resources.
Milestone 4:
Application of the developed methods and algorithms to real-world problems.

List of possible venues for publications
Journals:
IEEE Transactions on Automatic Control, Automatica, IEEE Transactions on Neural Networks and Learning Systems

Conferences:
IEEE Conference on Decision and Control (CDC), IFAC Symposium on System Identification (SYSID), IFAC World Congress, International Conference on Machine Learning (ICML), Conference on Neural Information Processing Systems (NeurIPS)
Required skills: The candidate should have a solid background and interest in linear algebra, convex optimization and dynamical system theory. In particular, she/he should be familiar with the main iterative algorithms for optimization problems, both in terms of mathematical aspects and practical implementation. Some background in machine learning would be an asset. Programming skills in MATLAB/Python are required.

Group website: https://elite.polito.it
Summary of the proposal: Human-Centered AI (HCAI) emerged as a novel conceptual framework for reconsidering the centrality of humans while keeping the benefits of artificial intelligence systems. To do so, the framework builds on the idea that a system can simultaneously exhibit high levels of automation and high levels of human control.

The Ph.D. proposal aims at designing, developing, and evaluating concrete HCAI systems to support users of IoT-enabled environments. Also, it aims at extending the understanding of the HCAI framework’s principles and providing valuable lessons for different fields.
Topics: Human-Computer Interaction, Artificial Intelligence, Internet of Things
Research objectives and methods: Artificial Intelligence (AI) systems are widespread in many aspects of society. Machine Learning has enabled the development of algorithms able to learn automatically from data without human intervention. While this brings many advantages in decision processes and productivity, it also presents drawbacks, such as disregarding end-user perspectives and needs.

Human-Centered AI (HCAI) is a novel conceptual framework [1] that builds on the idea that a system can simultaneously exhibit high levels of automation and high levels of human control.

The Ph.D. proposal extends the research on HCAI to smart environments, i.e., AI-powered environments equipped with Internet-of-Things devices (e.g., sensors, actuators, mobile interfaces, service robotics). In such environments, AI systems typically automate the activities that people perform; users, however, want to remain in control. This generates a conflict that could be tackled by adopting the HCAI framework. This proposal aims at designing, developing, and evaluating concrete HCAI systems to support users of IoT-enabled environments. It also aims at extending the understanding of the HCAI framework's principles and providing valuable lessons for different fields.

The main research objective is to investigate solutions for designing and developing HCAI systems in smart IoT-enabled environments. A particular focus will be on how the adoption of the HCAI framework can bring tangible benefits to users and to the smart environments research field, while extending the research on HCAI.

The research activities will mainly build on the following characteristics of the HCAI framework:
- High levels of human control and high levels of automation are possible: design decisions should give users a clear understanding of the AI system state and its choices, guided by human-centered concerns, e.g., the consequences and reversibility of errors. Well-designed automation preserves human control where appropriate, thus increasing performance and enabling creative improvements.
- AI systems should shift from emulating and replacing humans to empowering and “augmenting” people, as people are different from computers. Intelligent system designs that take advantage of unique computer features are more likely to increase performance. Similarly, designs that recognize the unique capabilities of humans will have advantages such as encouraging innovative use and supporting continuous improvement.

In particular, the Ph.D. research activity will focus on:
1) Study of AI algorithms and models, distributed architectures, and HCI techniques able to support the identification of suitable use cases for building effective and realistic HCAI systems.
2) Enhancement of the HCAI framework to include end-user personalization, e.g., as a way to recover from errors or to guide the system choices.
3) Development of strategies for dealing with de-skilling effects. Such effects may undermine the human skills needed when automation fails, and make it difficult for users to remain aware when some of their actions become less frequent.
Such goals will require advancement both in interfaces and interaction modalities, and in AI algorithms and their integration into user-facing smart environments.

The work plan will be organized according to the following four phases, partially overlapping.
Phase 1 (months 0-6): literature review about HCAI and smart environments; study of AI algorithms and models, IoT devices and smart appliances, as well as the related communication and programming practices and standards.
Phase 2 (months 6-18): based on the results of the previous phase, definition and development of a set of use cases and interesting contexts to be adopted for building user-facing smart environments. Initial data collection for validating use cases, and possible applications of end-user personalization strategies.
Phase 3 (months 12-24): research, definition, and experimentation of HCAI systems in the defined smart environments and use cases, starting from the outcome of the previous phase. Such solutions will imply the design, implementation, and evaluation of distributed and intelligent systems, able to take into account users’ preferences, capabilities of a set of connected devices, as well as AI methods and algorithms.
Phase 4 (months 24-36): extension and possible generalization of the previous phase to include additional contexts and use cases. Evaluation in real settings of one or more of the realized systems over a significant amount of time.

For each of the previously mentioned phases, at least one conference or journal publication is expected. Suitable venues might include:
ACM CHI, ACM IUI, ACM Ubicomp, IEEE Internet of Things Journal, ACM Transactions on Internet of Things, IEEE Pervasive Computing, IEEE Transactions on Human-Machine Systems, ACM Transactions on Computer-Human Interaction.
Required skills: A candidate interested in the proposal should ideally:
• be able to critically analyze and evaluate existing research, as well as gather and interpret data from various sources;
• be able to effectively communicate research findings through writing and presenting, both to specialist and non-specialist audiences;
• have a solid foundation in computer science/engineering and possess the relevant technical skills;
• have a good understanding of AI and/or HCI research methods;
• be aware of ethical considerations related to the investigated topics, particularly when conducting research on human subjects.

Group website: dbdmg.polito.it
Summary of the proposal: Urban Data Science entails the acquisition, integration, and analysis of big and heterogeneous data collections generated by a diversity of sources in urban spaces. It plays a key role in achieving a smart and sustainable city. However, data analytics on urban data collections is still a daunting task, because they are generally too big and heterogeneous to be processed with currently available machine learning techniques. Therefore, today's urban data raise many challenges that constitute a new interdisciplinary field of data science research.
Topics: Urban intelligence, data science, machine learning
Research objectives and methods: The PhD student will work on the study, design, and development of proper data models and novel solutions for the acquisition, integration, storage, management, and analysis of big volumes of heterogeneous urban data.

The research activity involves multidisciplinary knowledge and skills including databases, machine learning techniques, and advanced programming. Different case studies in urban scenarios, such as urban mobility, citizen-centric contexts, and the healthy city, will be considered to conduct the research activity.

The objectives of the research activity consist in identifying the peculiar characteristics and challenges of each considered application domain and devising novel solutions for the management and analysis of urban data in each domain. Multiple urban scenarios will be considered with the aim of exploring the different facets of urban data and evaluating how the proposed solutions perform on different data collections.

More in detail, the following challenges will be addressed during the PhD:
- Suitable data fusion techniques and data representation paradigms should be devised to integrate the heterogeneous collected data into a unified representation describing all facets of the targeted domain. For example, since urban data are often collected with different spatial and temporal granularities, suitable data fusion techniques should be devised to support a spatio-temporal alignment of collected data.
- Adoption of proper data models. The storage of heterogeneous urban data collections requires the use of alternative data representations to the relational model such as NoSQL databases (e.g., MongoDB), also able to manage geo-referenced data.
- Design and development of algorithms for big data analytics. Huge volumes of data demand the definition of novel data analytics strategies, also exploiting recent analysis paradigms and cloud-based platforms. Moreover, urban data is usually characterized by spatio-temporal coordinates describing when and where the data was acquired, which entails the design of suitable data analytics methods.
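As a minimal illustration of the spatio-temporal alignment mentioned above, the pandas sketch below resamples two hypothetical urban streams, collected at different temporal granularities, to a common hourly grid before fusing them. The stream names, columns, and values are invented for illustration.

```python
import pandas as pd
import numpy as np

# Two hypothetical urban streams with different temporal granularities:
# air-quality readings every 10 minutes, traffic counts every hour.
air = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=18, freq="10min"),
    "pm10": np.random.default_rng(0).uniform(10, 40, 18),
})
traffic = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=3, freq="h"),
    "vehicles": [120, 340, 210],
})

# Align both streams to a common hourly grid: aggregate the finer
# stream, then join the two on the shared timestamp index.
air_hourly = air.set_index("ts").resample("h").mean()
fused = air_hourly.join(traffic.set_index("ts"), how="inner")
```

An inner join keeps only the hours covered by both streams; spatial alignment (e.g., snapping sensors to city zones at different granularities) would follow the same aggregate-then-join pattern.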

The research activity will be organized as follows.
1st Year. The PhD student will review the recent literature on urban computing to identify the up-to-date research directions and the most relevant open issues in the urban scenario. Based on the outcome of this preliminary explorative analysis, an application domain, such as urban mobility, will be selected as a first reference case study. The selected domain will be investigated to (i) identify the open research issues, (ii) identify the most relevant data analysis perspectives for gaining useful insights, and (iii) assess the main data analysis issues. The student will perform an exploratory evaluation of state-of-the-art technologies and methods in the considered domain, and she/he will present a preliminary proposal of techniques to optimize these approaches.

2nd and 3rd Year. Based on the results of the 1st year activity, the PhD student will design and develop a suitable data analysis framework including innovative analytics solutions to efficiently extract useful knowledge in the considered domain, aimed at overcoming weaknesses of state-of-the-art methods.

Moreover, during the 2nd and 3rd year, the student will progressively consider a larger spectrum of application domains in the urban scenario. The student will evaluate if and how her/his proposed solutions can be applied to the new considered domains as well as she/he will propose novel analytics solutions.

During the PhD, the student will have the opportunity to cooperate in the development of solutions applied to research projects on smart cities. The student will also complete her/his background by attending relevant courses, and will participate in conferences, presenting the results of her/his research activity.

The results of the research activity can be published in conferences and journals belonging to the areas of machine learning and applied data science, such as
IEEE T-ITS (Trans. on Intelligent Transport Systems),
Expert Systems With Applications (Elsevier), Information Sciences (Elsevier), ACM TIST (Trans. on Intelligent Systems and Technology), ACM SAC, IEEE ICDE.
Required skills: The candidate should have good programming skills, and competencies in data modelling and techniques for data analysis.

Group website: https://dbdmg.polito.it/dbdmg_web/
Summary of the proposal: The wide adoption of machine learning (ML) models requires both establishing trust in a model's decisions and providing an ML pipeline robust to errors and attacks. Safety and robustness come into play along the entire pipeline of ML-based systems and in a wide range of application domains (e.g., automotive, healthcare).

The main goal of this research is the study of methods for the deployment of safe ML pipelines in real-life settings. It addresses the many facets of safety, ranging from explaining the different steps of the ML pipeline to detecting concept drift that may raise safety issues.
Topics: Machine learning, Explainable AI, Robust AI
Research objectives and methods: Ensuring the safety of ML-based pipelines is fundamental to allow their acceptance in a wide range of critical application domains. Different techniques are usually needed to account for different data types (e.g., images, structured data, time series). All the steps in ML-based development pipelines should be addressed: requirement definition, data preparation and model selection, training, evaluation and testing, and monitoring.

The research activity will consider industrial domains (e.g., critical infrastructures, aerospace, automotive, manufacturing) in which safety plays a key role. The algorithms and methods will target different data types, possibly considered jointly in multimodal applications. The following different facets of trustworthy AI will be addressed.

Model understanding. The research work will address the local analysis of individual predictions. These techniques will allow the inspection of the local behavior of different classifiers and the analysis of the knowledge they exploit for their predictions. The final aim is to support human-in-the-loop inspection of the reasons behind model predictions.
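One concrete (and deliberately simplified) way to obtain such local explanations is a perturbation-based linear surrogate in the spirit of LIME. Everything below, the `local_explanation` helper, the kernel width, and the toy black box, is an illustrative assumption, not the method the proposal commits to.

```python
import numpy as np

def local_explanation(predict_fn, x, n_samples=500, scale=0.3, seed=0):
    """Fit a weighted linear surrogate to a black-box model around one
    instance x: samples are drawn near x, weighted by proximity, and the
    surrogate's coefficients read as local feature importances."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=scale, size=(n_samples, len(x)))  # perturbations
    y = predict_fn(Z)                                          # black-box outputs
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale**2)) # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])                # add an intercept
    sw = np.sqrt(w)                                            # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                           # drop the intercept

# illustrative black box: locally, only feature 0 matters
black_box = lambda Z: 2.0 * Z[:, 0] + 0.0 * Z[:, 1]
imp = local_explanation(black_box, np.array([1.0, 1.0]))
```

For this toy black box, the surrogate attributes the prediction almost entirely to feature 0, which is exactly the kind of evidence a human-in-the-loop reviewer would inspect.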

Model trust and robustness. Insights into how machine learning models make their decisions allow evaluating whether a model can be trusted. Methods to evaluate the reliability of different models will be proposed. When the outcome is negative, techniques will be studied to suggest model enhancements that cope with wrong behaviors and improve trustworthiness. Robustness is the ability of an ML algorithm or pipeline to cope with errors during execution or with erroneous inputs. Criteria to evaluate model robustness and resiliency will be studied.

Model debugging and improvement. The evaluation of classification models generally focuses on their overall performance, which is estimated over all the available test data. An interesting research line is the exploration of differences in model behavior across data subsets, allowing the identification of problematic subsets that may cause anomalous behaviors in the ML model.
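The subset-oriented debugging described above can be sketched as a slice-wise accuracy report; the data and the day/night slicing criterion below are invented purely for illustration.

```python
import numpy as np

def accuracy_by_slice(y_true, y_pred, slice_labels):
    """Break overall accuracy down by data subset: slices whose accuracy
    falls far below the global figure are candidates for debugging."""
    overall = float(np.mean(y_true == y_pred))
    report = {}
    for s in np.unique(slice_labels):
        mask = slice_labels == s
        report[s] = float(np.mean(y_true[mask] == y_pred[mask]))
    return overall, report

# toy example: a model that systematically fails on the "night" slice
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 1])
slices = np.array(["day", "day", "day", "day",
                   "night", "night", "night", "night"])
overall, per_slice = accuracy_by_slice(y_true, y_pred, slices)
# overall accuracy is 0.5, yet the "day" slice is perfect and the
# "night" slice is entirely wrong: the aggregate figure hides the problem
```

The point of the example is exactly the one made in the paragraph above: an aggregate metric can mask a subset on which the model misbehaves.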

YEAR 1: state-of-the-art survey of safe AI methods; performance analysis and preliminary proposals of improvements over state-of-the-art algorithms; exploratory analysis of novel, creative solutions for trustworthy AI; assessment of the main explanation issues in 1-2 specific industrial case studies.
YEAR 2: new algorithm design and development, experimental evaluation on a subset of application domains; deployment of algorithms in selected industrial contexts.
YEAR 3: algorithm improvements, both in design and development, experimental evaluation in new application domains.
During the second-third year, the candidate will have the opportunity to spend a period of time abroad in a leading research center.

Any of the following journals
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
IEEE TNNLS (Trans. On Neural Networks and Learning Systems)
ACM TOIS (Trans. on Information Systems)
Information Sciences (Elsevier)
Expert Systems with Applications (Elsevier)
Machine Learning with Applications (Elsevier)
Engineering Applications of Artificial Intelligence (Elsevier)
Journal of Big Data (Springer)

IEEE/ACM International Conferences
Required skills: Strong background in data science and related topics such as machine learning, deep learning, artificial intelligence, and data management.

Group website: https://dbdmg.polito.it/
https://smartdata.polito.it/
Summary of the proposal: Machine learning approaches extract information from data with generalized optimization methods. However, besides the knowledge brought by the data, extra a-priori knowledge of the modeled phenomena is often available. Hence an inductive bias can be introduced from domain knowledge and physical constraints, as proposed by the emerging field of Theory-Guided Data Science. Within this broad field, the candidate will explore solutions exploiting the relational structure among data, represented by means of Graph Network approaches.
Topics: Data science, machine learning, graph mining
Research objectives and methods: Research Objectives

The research aims at defining new methodologies for semantics embedding, proposing novel algorithms and data structures, exploring applications, investigating limitations, and advancing solutions based on different emerging Theory-Guided Data Science approaches. The final goal is to improve machine learning model performance by reducing the learning space thanks to the exploitation of existing domain knowledge in addition to the (often limited) available training data, pushing towards more unsupervised and semantically richer models.
To this aim, the main research objective is to exploit the Graph Network frameworks in deep-learning architectures by addressing the following issues:
- Improving state-of-the-art strategies of organizing and extracting information from structured data.
- Overcoming the limitation of Graph Network models in training very deep architectures, which causes a loss in the expressive power of the solutions.
- Advancing the state-of-the-art solutions to dynamic graphs, which can change nodes and mutual connections over time. Dynamic Networks can successfully learn the behavior of evolving systems.
- Experimentally evaluating the novel techniques in large-scale systems, such as supply chains, social networks, collaborative smart-working platforms, etc. Currently, for most graph-embedding algorithms, scalability is difficult to achieve since each node has a peculiar neighborhood organization.
- Applying the proposed algorithms to natively graph-unstructured data, such as texts, images, audio, etc.
- Developing techniques to design ensemble graph architectures to capture domain-knowledge relationships and physical constraints.
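For concreteness, a single Graph Network message-passing layer with mean aggregation might look as follows in NumPy; the layer design, random weights, and toy graph are illustrative only, one of many possible update rules in this family.

```python
import numpy as np

def message_passing_layer(H, A, W_self, W_neigh):
    """One message-passing layer: each node updates its embedding from its
    own features and the mean of its neighbors' features."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # isolated nodes keep their own state
    neigh_mean = (A @ H) / deg               # aggregate messages from neighbors
    return np.maximum(0.0, H @ W_self + neigh_mean @ W_neigh)   # ReLU update

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],                  # adjacency of a 4-node toy graph
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(4, 8))                  # initial node features
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(8, 16)) * 0.1
H1 = message_passing_layer(H, A, W1, W2)     # updated embeddings, shape (4, 16)
```

Stacking k such layers lets information travel k hops, which is precisely where the depth and dynamic-graph limitations listed above come into play.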

Outline:

1st year. The candidate will explore the state-of-the-art techniques for dealing with both structured and unstructured data, to integrate domain-knowledge strategies into network model architectures. Applications to physical phenomena, images, and text, taken from real-world networks such as social platforms and supply chains, will be considered.
2nd year. The candidate will define innovative solutions to overcome the limitations described in the research objectives, by experimenting the proposed techniques on the identified real-world problems. The development and the experimental phase will be conducted on public, synthetic, and possibly real-world datasets. New challenges and limitations are expected to be identified in this phase.
During the 3rd year, the candidate will extend the research by widening the experimental evaluation to more complex phenomena able to better leverage the domain-knowledge provided by the Graph Networks. The candidate will perform optimizations on the designed algorithms, establishing limitations of the developed solutions and possible improvements in new application fields.

Target publications

IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TOIS (Trans. on Information Systems)
ACM TOIT (Trans. on Internet Technology)
ACM TIST (Trans. on Intelligent Systems and Technology)
IEEE TPAMI (Trans. on Pattern Analysis and Machine Intelligence)
Information Sciences (Elsevier)
Expert Systems with Applications (Elsevier)
Engineering Applications of Artificial Intelligence (Elsevier)
Journal of Big Data (Springer)
ACM Transactions on Spatial Algorithms and Systems (TSAS)
IEEE Transactions on Big Data (TBD)
Big Data Research
IEEE Transactions on Emerging Topics in Computing (TETC)
Required skills: • Knowledge of basic computer science concepts.
• Programming skills in Python
• Undergraduate experience with data mining and machine learning techniques
• Knowledge of English, both written and spoken.
• Capability of presenting the results of the work, both written (scientific writing and slide presentations) and oral.
• Entrepreneurship, autonomous working, goal oriented.
• Flexibility and curiosity for different activities, from programming to teaching to presenting to writing.
• Capability of guiding undergraduate students for thesis projects.

Group website: www.sysbio.polito.it
Summary of the proposal: Motor impairments are among the most disabling symptoms of Parkinson’s disease that adversely affect quality of life, resulting in limited autonomy, independence, and safety. Recent studies have demonstrated the benefits of physiotherapy and rehabilitation programs specifically targeted to the needs of Parkinsonian patients in supporting drug treatments and improving motor control and coordination and motor-cognitive integration. However, due to the expected increase in patients in the coming years, traditional rehabilitation pathways in healthcare facilities could become unsustainable. New strategies are needed, in which technologies play a key role in enabling more frequent, comprehensive, and out-of-hospital follow-up.
Topics: Parkinson’s Disease, Exergames, Artificial Intelligence
Research objectives and methods: This proposal concerns a vision-based solution using instrumentation such as the Azure Kinect DK sensor to implement an integrated approach for the remote assessment, monitoring, and rehabilitation of Parkinsonian patients. It will exploit non-invasive 3D tracking of body movements to objectively and automatically characterize both standard evaluative motor tasks and virtual exergames. The final objective is to develop a system that can be used by patients at home and that provides both individually tailored rehabilitation exercises and data to monitor the disease evolution.

The main innovation lies in the integration of evaluative and rehabilitative aspects, which will be used as a closed loop to design new protocols for the remote management of patients tailored to their actual conditions.

In this context, the candidate activity will address:
- Body tracking techniques based on non-invasive single and multi-camera systems (2D and 3D RGB-Depth cameras), possibly combined with wearable inertial measurement units.
- Kinematic analysis of the body movements of neurological subjects performing clinical tests and rehabilitation exercises.
- Feature selection algorithms to evaluate the disease progression via proper clinical assessment scales.
- AI algorithms for motion analysis and body tracking based on multidimensional data streams.
- Adaptive exergame implementation.
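As a hint of the kinematic analysis involved, the sketch below derives two common descriptors (mean speed and a jerk-based smoothness index) from a 3D joint trajectory such as the wrist positions returned by a body-tracking sensor. The 30 Hz rate, the particular smoothness formula, and the toy trajectory are all illustrative assumptions, not the features the project will necessarily adopt.

```python
import numpy as np

def movement_features(positions, fs=30.0):
    """Two basic kinematic descriptors from a 3D joint trajectory sampled
    at fs Hz: mean speed, and a log dimensionless jerk (LDJ) smoothness
    index, where values closer to zero indicate smoother movement."""
    dt = 1.0 / fs
    vel = np.gradient(positions, dt, axis=0)        # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    duration = len(positions) * dt
    peak = max(float(np.max(speed)), 1e-9)
    jerk_integral = float(np.sum(np.sum(jerk**2, axis=1)) * dt)
    ldj = -np.log(duration**3 / peak**2 * jerk_integral)
    return float(np.mean(speed)), float(ldj)

# toy trajectory: a smooth 2-second reach along x, 60 frames at 30 fps
t = np.linspace(0.0, 2.0, 60)
traj = np.stack([np.sin(np.pi * t / 4.0),
                 np.zeros_like(t), np.zeros_like(t)], axis=1)
mean_speed, smoothness = movement_features(traj)
```

Features of this kind, computed per exercise repetition, are the raw material for the feature selection and disease-progression modeling steps listed above.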

The system will be tested by patients with the cooperation of CNR-IEIIT, the “Dipartimento di Neuroscienze Rita Levi Montalcini” (DNLM), Università di Torino, and the “Ospedale Auxologico Italiano”, Piancavallo (VB). DAUIN and DNLM are both involved in the PNRR Initiative “Ecosistema NODES – Nord Ovest Digitale e Sostenibile”, and the proposal will benefit from this cooperation.

ACTIVITIES: YEAR 1.
Task 0: analysis of the state of the art on PD rehabilitation needs and strategies. Definition of the rehabilitation protocol set-up. Arrangement of the tasks to be administered to the patients (in strict cooperation with the clinical staff). Definition of data to be collected during rehabilitation tasks and their usage to implement patients’ follow-up.
Task 1: preliminary implementation of exergames in virtual reality both for training and as a support for motor condition assessment.
Task 2: Experimental tests (with proper approval of ethics committee) involving a limited number of parkinsonian subjects and healthy controls, in hospital. Preliminary results on the system’s ability to quantify specific and statistically significant features of motor performance and monitor changes as the disease progresses over time. The patients will be selected and invited by the clinical staff, based on proper inclusion and exclusion criteria. This task will be continued in successive years.

YEAR 2.
Task 3: Refinement of exergames involving more rehabilitation tasks. Implementation of AI algorithms that enable adaptive modification of the exergame, depending on the patient’s physical and mental status.
Task 4: Implementation, validation and testing of rehabilitation efficacy and disease progression markers on a larger number of patients (in hospital and/or outpatient structures).

YEAR 3.
Task 5: Final refinement of the rehabilitation and monitoring tool and its testing at home on a subset of patients. Validation and testing. Analysis of patients’ acceptance via proper questionnaires.
Task 6: critical analysis of the results, proposal of integration of the algorithms with heterogeneous data present in the patient’s clinical records.

N.B.: Due to the complexity and novelty of this topic, the task description is necessarily preliminary and may be subject to modifications depending on the early results obtained.
We plan to have at least two journal papers published per year.
Target journals:
IEEE Transactions on Biomedical Engineering
IEEE Journal on Biomedical and Health Informatics
IEEE Access
IEEE Journal of Translational Engineering in Health and Medicine
MDPI Sensors
Frontiers in Neurology

Cooperations:
CNR-IEIIT
Department of Neuroscience “Rita Levi Montalcini”, and Molinette Hospital, Neurology Department
Ospedale Auxologico Italiano, Piancavallo (VB)
Ospedale Molinette, Torino

Projects: PNRR Ecosistema NODES Nord-Ovest Digitale e Sostenibile
PNRR Salute Complementare “Health Digital Driven Diagnostics, prognostics and therapeutics for sustainable Health care”
Required skills: Preferred skills:
- Expertise in the fields of Motion Analysis and Biomechanics, motion capture systems, Signal Processing, Image and Video Analysis, Statistics and Machine Learning (e.g. feature selection and ranking, supervised and unsupervised learning)
- Good knowledge of C, Python, Matlab, Simulink programming languages;
- Experience in the use of commercial RGB-Depth cameras (e.g., Azure Kinect, Intel RealSense, Orbbec) and their specific firmware
- Good relational abilities and knowledge of the Italian language, to effectively manage interactions with patients during the evaluation campaigns
- Knowledge of neurological evaluation scales (e.g., UPDRS, MESUPES and Berg Balance Scale)

Group website: https://grains.polito.it/
https://vr.polito.it
Summary of the proposal: Road and air mobility will undergo dramatic changes in the next few years, which will require intense research activity in different domains. This PhD proposal specifically focuses on the use of Human-Machine Interaction (HMI), eXtended Reality (XR), Digital Twins (DTs) and simulation to support this revolution. HMI aspects related to safety, reliability and usability of autonomous and tele-operated platforms will be investigated. Moreover, XR-based simulations integrated with DTs of real vehicles/aircraft and infrastructures will be exploited to virtually recreate the scenarios the end-users of future mobility services will be exposed to before they are implemented, and to perform the required investigations on user experience.
Topics: Virtual Reality, Augmented Reality, Road and Air Mobility
Research objectives and methods: Connected and autonomous vehicles, as well as manned and unmanned aircraft, promise to revolutionize the concept of mobility. In order to move, on the one side, from current Advanced Driver-Assistance Systems (ADAS) to fully self-driving vehicles and, on the other side, to design future infrastructures supporting passenger or cargo-carrying air transportation services, a number of research challenges will have to be addressed. Many of these challenges will pertain to the user experience, which will be key to the success of next-generation, integrated mobility platforms. In this context, this PhD proposal aims to investigate this scenario by focusing on a number of key dimensions. A main focus will be on Human-Machine Interaction (HMI): to fully benefit from autonomous driving and flying systems, humans, both drivers and passengers, will need to trust their safety, reliability and helpfulness, which could be enhanced with properly designed interaction methods, whereas easy-to-use interfaces will be required for remotely controlling fleets/swarms of tele-operated vehicles/aircraft. Simulation systems will also be essential to virtually recreate representative road and air mobility scenarios, including both manned and unmanned platforms and the interaction with them. In particular, the focus of this PhD proposal will be on the use of eXtended Reality (XR)-based simulations. For instance, human-in-the-loop, Virtual Reality (VR)-based simulation platforms capable of faithfully recreating the experience of using these vehicles/aircraft as either drivers/pilots or passengers in an immersive way could be considered.
Similarly, Augmented Reality (AR) could be leveraged, e.g., to study possible concepts of forthcoming interfaces, meant either for in-vehicle use (targeting occupants) or for out-of-vehicle use (for communicating, e.g., with pedestrians, potential passengers, and consignees of shipped goods, or to deliver effective visualizations supporting road/air traffic management operators). The simulation could also encompass the creation of Digital Twins (DTs) of relevant scenarios, targeting humans (e.g., intelligent agents simulating the users of the mobility systems), machines (vehicles/aircraft and mobility infrastructures, like smart roads, vertiports, etc.) and physical phenomena (weather, traffic conditions, etc.).

During the first year, the PhD student will review the state of the art regarding XR-based simulation of mobility platforms, with a focus on immersive experiences and considering all the possible dimensions of user experience with the vehicle/aircraft and related road/air infrastructures. A conference publication is expected to be produced based on the results of the review. Afterwards, he or she will start the design and realization of an integrated simulation platform, building on previous developments by the GRAphics and INtelligent Systems (GRAINS) group and on hardware available at the VR@POLITO lab. The simulation platform will leverage off-the-shelf inertial hardware capable of mimicking the behavior of a real vehicle/aircraft, useful, e.g., to study the users’ psycho-physical response to it. VR will be used to immerse the users in realistic situations, whose appearance and complexity could be configured using suitable authoring tools. The users will be allowed to experience the simulation from within the vehicle/aircraft (as drivers/pilots or passengers) or from the outside (as road users, as potential customers of the transport/freight service, as operators supervising it, etc.). They will also be allowed to see their body reconstructed, and to interact with the simulated platform/infrastructure and its functionalities. AR technology could be exploited to simulate HMI paradigms, e.g., based on Head-Up Displays (HUDs), to enhance users’ awareness of the surrounding context, the vehicle/aircraft status and its intentions. A prototype of the simulation platform is expected to be available during the second year. The student will start using this prototype to tackle open problems in the field and will be expected to publish at least one more conference paper on the outcomes of these activities. During the first two years, the student will complete his or her background in AR, VR, HMI and simulation by attending relevant courses.
During the third year, the work of the student will build on a set of simulation scenarios related to one or more applications (selected among those mentioned above and/or related to challenges proposed by companies/institutions the GRAINS group is collaborating with, also in the context of funded initiatives) with the aim of studying the applicability of the devised platform to real problems and advancing the state of the art in the field. The results obtained will be reported in further conference papers and in at least one high-quality journal publication.

Regarding collaborations, the activities concerning air mobility will be carried out in the context of Spoke 1 of the Sustainable Mobility Center (Centro Nazionale per la Mobilità Sostenibile – CNMS), a research project funded by the Italian National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza – PNRR), together with other universities and with industry (Leonardo, Poste Italiane, Accenture, Teoresi, etc.). The activities regarding road mobility could involve collaborations with Stellantis / Centro Ricerche Fiat and Reply.

The target publications will cover the fields of XR, HMI, simulation and mobility systems. International journals could include, e.g.:
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Emerging Topics in Computing
IEEE Computer Graphics and Applications
ACM Transactions on Computer-Human Interaction
IEEE Transactions on Vehicular Technology
Required skills: Knowledge, skills and competences related to computer graphics / computer animation and to VR/AR applications

Group website: https://nexa.polito.it
Summary of the proposal: There is a growing concern about the issue of bias in online systems. Institutions require higher accountability and transparency of platforms. However, as these software systems have become increasingly complex and interconnected, understanding whether they systematically (dis)advantage certain categories of people has become extremely difficult. This PhD proposal aims at researching and developing novel and scalable techniques and tools for performing black-box bias audits of online software systems.
Topics: Algorithm audit, Software and data bias, Algorithmic and data justice
Research objectives and methods: RESEARCH OBJECTIVES
The PhD research has the following three objectives.
O1. Composing a comprehensive test-suite for black-box bias audits: a comprehensive test-suite will be composed from existing tools as well as novel methodologies developed ad hoc, to be able to audit a large variety of categories of online software systems (e.g., insurance, online advertising, etc.).
O2. Metrics and indicators for assessing discrimination: robust metrics and indicators are fundamental for assessing the risk of discrimination in online services. The metrics will be retrieved from the algorithmic fairness literature. New metrics will be proposed to address identified gaps.
O3. Quantitative analysis of bias in existing online software systems: building on the achievements of the previous objectives, experiments will be designed to simulate real-world scenarios with synthetic data. Quantitative analysis will be performed to measure and quantify the discriminatory outcomes.
The techniques and tools developed during the research will empower civil society and institutions to assess the risk of discrimination in online services. The results of the rigorous experiments will also provide objective insights into the discriminatory behavior exhibited by online systems. Overall, the research will contribute to building a safer and more trusted digital environment.
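To illustrate the kind of measures Objective O2 refers to, here is a minimal sketch of two standard metrics from the algorithmic fairness literature; the function names and the example data are illustrative, not part of the proposal:

```python
# Two common group-fairness metrics: demographic parity difference
# and the disparate impact ratio ("four-fifths rule").

def positive_rate(outcomes):
    """Fraction of favorable (1) decisions observed for a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

def disparate_impact_ratio(protected, reference):
    """Ratio of favorable-outcome rates; values below 0.8 are often
    flagged under the 'four-fifths rule'."""
    return positive_rate(protected) / positive_rate(reference)

# Example: binary decisions (1 = favorable) observed for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 70% favorable
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% favorable

print(round(demographic_parity_difference(group_a, group_b), 2))  # 0.4
print(round(disparate_impact_ratio(group_b, group_a), 2))         # 0.43 -> flagged
```

In a real audit these rates would come from observed or probed system outputs rather than hand-written lists.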

OUTLINE OF THE WORKPLAN
The following activities will be conducted to pursue the above-mentioned objectives.
A1) Literature review on existing black-box auditing techniques and tools and on metrics and indicators for assessing bias and discrimination. Research gaps and key challenges identified will guide the next phases. During this phase, the student will also explore the factors that contribute to bias, such as dataset characteristics and design choices. The student will investigate how biases in software systems can disadvantage or advantage certain social groups and identify which online software systems are most prone to bias. This activity will contribute to the achievement of Objective 1 and Objective 2.
A2) Development of a test suite for bias audits: novel techniques and tools, as well as novel metrics and indicators for assessing bias and discrimination, will be developed to specifically address the research gaps and challenges identified in A1. Existing implemented methodologies will also be part of the test suite (e.g., tools and scripts for regression analysis, threshold tests, mediation analysis, difference-in-differences experiments, outcome tests). This activity contributes to the achievement of Objective 1 and Objective 2.
A3) Design and execution of experiments that simulate real-world scenarios to verify and quantify discriminatory behavior. The experimentation includes the selection of online systems to be tested, the development of test cases, the production of suitable synthetic test data and the application of quantitative analyses to measure the extent and impact of bias. When feasible, comparison of results across similar services will be made. The activity contributes to the achievement of Objective 3.
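The black-box experiments of A3 can be sketched as a paired-profile audit, where matched synthetic profiles differ only in a protected attribute; the mock pricing system below is a hypothetical stand-in for a real service, deliberately biased so the audit has something to detect:

```python
import random

def mock_pricing_system(profile):
    """Stand-in for the black-box service under audit (illustrative)."""
    base = 100 + profile["risk_score"] * 2
    if profile["group"] == "B":
        base += 15  # hidden surcharge the audit should surface
    return base

def paired_audit(system, n_pairs=200, seed=42):
    """Query the system with matched profile pairs and return the mean
    outcome gap attributable to the protected attribute."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n_pairs):
        risk = rng.randint(0, 50)          # shared, non-protected feature
        quote_a = system({"group": "A", "risk_score": risk})
        quote_b = system({"group": "B", "risk_score": risk})
        gaps.append(quote_b - quote_a)     # pairwise difference
    return sum(gaps) / len(gaps)

mean_gap = paired_audit(mock_pricing_system)
print(f"mean price gap between matched profiles: {mean_gap:.2f}")  # 15.00
```

Because the paired profiles are identical except for the protected attribute, a nonzero mean gap isolates the attribute's effect without needing access to the system's internals.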

LIST OF POSSIBLE VENUES FOR PUBLICATION
Journals:
- IEEE Transactions on Software Engineering
- Empirical Software Engineering
- ACM Transactions on Information Systems
- European Journal of Information Systems
- Journal of Systems and Software
- Software X
- Journal of Responsible Technology

Conferences:
- ACM Conference on Fairness, Accountability, and Transparency
- AAAI/ACM Conference on AI, Ethics, and Society
- International Conference on Internet Technologies & Society
- EPIA Conference on Artificial Intelligence
- ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
- International Conference on Frontiers of Artificial Intelligence, Ethics, and Multidisciplinary Application
- International Conference on Software Engineering

Workshops:
- International Workshop on Data science for equality, inclusion and well-being challenges
- International Workshop on Equitable Data & Technology
- Workshop on Bias and Fairness in AI
Required skills: The candidate should have:
- Basic knowledge on software testing concepts, techniques, and methodologies.
- Research aptitude and curiosity to cross disciplinary boundaries.
- Good knowledge of statistical methods for analyzing experimental data.
- Proficiency in data analysis techniques and tools.
- Strong programming skills.
- Basic knowledge of the problem of algorithm bias.

The candidate should also possess good communication and presentation skills.

Group website: https://www.dauin.polito.it/it/la_ricerca/gruppi_di_ricerca/torse...
Summary of the proposal: Recent years have seen massive progress in software network complexity, flexibility, and manageability. However, this progress has only marginally been used to increase the security of these networks: attacks may remain undiscovered for months, and they are mainly caused by human errors. The Ph.D. proposal has a high-level research objective: investigating and exploiting software networks' full potential to mitigate cybersecurity risks automatically. To this aim, the research will provide defensive tools with more intelligence and a higher level of automation.
Topics: Security of software networks, Reaction to security incidents, Automatic enforcement of security policies
Research objectives and methods: Nowadays, security defenders are always one or more steps behind the attackers. When vulnerabilities are found, patches follow only days later, and anti-virus signature updates come only after new malware is discovered. Intrusion Prevention Systems provide simple reactions triggered by simplistic conditions, often considered ineffective by large companies. Moreover, companies face risks of misconfiguration whenever security policies or network layouts need an update. Statistics are clear: attacks are discovered with unacceptable delays and are, in most cases, caused by human errors. The solution is also clear: providing defensive tools with more intelligence and a higher level of automation.
This Ph.D. proposal aims to use these features for security purposes, i.e., to develop AI-based systems able to perform policy refinement (configuring the network and security controls starting from high-level security requirements) and policy reaction (responding to incidents and mitigating risks). Coupling this with an understanding of the features of security controls and software networks will make it possible to build more resilient information systems that discover and react to attacks faster and more effectively.
The initial phases of the Ph.D. will be devoted to formalizing the framework models needed to reach the most ambitious research objectives. During the first year, the candidate will improve the model of security controls' capabilities and define the formal model of the software networks' reconfiguration abilities. The most relevant families of security controls will be analyzed, starting from filtering (up to layer seven) and channel protection. The candidate will contribute to a journal publication that extends an existing conference publication. The work on software network modelling will start with analysing the features of Kubernetes technology. It will also identify strategies to use pods and clusters to define policy enforcement units that merge security controls with complementary features for protecting network parts, which will be used for refinement purposes. The results of this task will be first submitted to a conference and then extended to a journal publication.
More attention will be devoted to the refinement and reaction models from the second year. The candidate will study the possibility of building refinement models that use advanced logic (forward and abductive reasoning) to represent decision-making processes. AI (Artificial Intelligence) and machine learning techniques will be investigated to learn from overridden decisions and manual corrections made by humans, for fine-tuning security decisions. The candidate will also perform research towards a framework for abstractly representing reaction strategies to security events. Every strategy requires adaptations to be enforced in each context; the research will investigate how to characterize and implement this adaptation and what the proper level of abstraction for strategies is. The effectiveness of these models will be evaluated on relevant scenarios like corporate networks, ISPs, automotive, and Industrial Control Systems, also coming from two EC-funded European Projects. The candidate will be guided to evaluate and decide the best venues to publish the results of the research.
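The forward-reasoning side of policy refinement can be illustrated with a toy forward-chaining sketch; the rule format, the rules themselves, and the derived control names are illustrative assumptions, not the models the candidate will build:

```python
# Forward chaining: repeatedly apply refinement rules until a fixed
# point, turning high-level security requirements into concrete
# control configurations.

RULES = [
    # (premise facts, derived fact)
    ({"protect", "web-traffic"}, "require-tls"),
    ({"require-tls"}, "deploy-tls-terminator"),
    ({"isolate", "db-tier"}, "firewall-rule:deny-all-to-db"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"protect", "web-traffic", "isolate", "db-tier"}, RULES)
print(sorted(derived))
```

Abductive reasoning would run in the opposite direction, hypothesizing which high-level requirements could explain an observed configuration or incident.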
Moreover, to increase the impact of the research and address existing gaps, the candidate will investigate how to standardize the information used to model, with the proper level of detail, the scenarios requiring reactions, as well as the reaction and threat intelligence data.
One or two 3-6 months internship periods are expected in an external institution. The objective is to acquire competencies that may emerge as needed. Research collaborations are ongoing with EU academia and with leading companies in the EU. We expect at least two publications on top-level cybersecurity conferences and symposia (e.g., ACM CCS, IEEE S&P) or top conferences about software networks (e.g., IEEE NetSoft).
The models of the security controls and software networks' capabilities models will be submitted to top-tier journals in the cybersecurity, networking, and modelling scope (e.g., IEEE /ACM Transactions on Networking, IEEE Transactions on Network and Service Management, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Dependable and Secure Computing).
We also expect results for at least one journal article about the automatic enforcement and empirical assessment of software protections. In addition to the journals reported above, if the innovation of the results deserves it, IEEE Transactions on Emerging Topics in Computing will also be targeted.
Required skills: The candidate needs to have a solid background in cybersecurity (risk management), defensive controls (e.g., firewall technologies and VPNs), monitoring controls (e.g., IDS/IPS and threat intelligence) and incident response. Moreover, he should also possess a background in software network technologies (SDN, NFV, Kubernetes) and cloud computing. Having skills in formal modelling and logical systems is a plus.

Group website: https://media.polito.it
Summary of the proposal: The growing demand for interactive web services has also led to the need for interactive video applications, capable of accommodating a much larger audience than videoconferencing tools, but with almost the same strict requirements on end-to-end latency. This proposal aims to define and study new media coding and transmission techniques that will exploit new HTTP/3 features, such as QUIC and WebTransport, to improve the scalability and reduce the latency of current streaming solutions.
Topics: Streaming, Multimedia, HTTP/3
Research objectives and methods: Research objectives
The term "Interactive Live Video Streaming" (IVS) has been defined for one-/few-to-many streaming services that can allow end-to-end latency below one second, at scale and at low cost, thus enabling some sort of interaction between one or more "presenters" and the audience. IVS is particularly useful in scenarios such as i) interactive video chat rooms, ii) instant feedback from video viewers (such as polling or voting), iii) promotional elements synchronized with a live stream. The market demand for IVS aligns with the ongoing deployment of HTTP/3, which is expected to obsolete the long-standing TCP transport protocol by means of the new QUIC protocol (which is UDP-based and implements congestion control algorithms in user space). Although QUIC has been employed for data transfer in HTTP since 2012, and it is now experimentally supported by a good number of servers and browsers, it is still in the early stages of adoption for media delivery. Ensuring an optimal balance between network efficiency and user satisfaction poses several challenges for the deployment of multimedia services using the new protocol. For instance, one challenge is how to exploit both reliable streams and unreliable datagrams in the transmission protocol according to the characteristics of the different media elements. Additionally, managing quality adaptation without overloading the server, and ensuring effective caching by relay servers even with stringent delay requirements, are also critical issues that need to be addressed. To this aim, we plan to start from the implementation of an experimental HTTP/3 client-server application for ultra-low latency media delivery that will allow us to test and simulate different proposals and alternative solutions, and then compare their relative benefits and costs. The research will address the challenges of customizing media coding, packetization, forward error control, resource prioritization, and adaptivity in the new scenario.
Such objectives will be pursued using both theoretical and practical approaches. The resulting insights will then be validated in practical cases by analyzing the system's performance with simulations and actual experiments. In this regard, the research will be carried out in cooperation with a media company, so that the experiments will be validated in an industrially relevant environment.
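One of the challenges above, mapping media elements to QUIC's reliable streams or unreliable datagrams, can be sketched as a simple delivery policy; the element types and the policy itself are illustrative assumptions, not part of the proposal:

```python
# Choose a QUIC delivery mode per media element: loss-tolerant,
# delay-critical payloads go over unreliable datagrams; payloads
# that break decoding if lost go over reliable streams.

def transport_for(element):
    kind = element["kind"]
    if kind in ("init-segment", "audio", "caption"):
        return "reliable-stream"       # loss here stalls or breaks playback
    if kind == "video-frame" and not element.get("keyframe", False):
        return "datagram"              # a lost delta frame is survivable
    return "reliable-stream"           # keyframes anchor the decode chain

elements = [
    {"kind": "init-segment"},
    {"kind": "video-frame", "keyframe": True},
    {"kind": "video-frame", "keyframe": False},
    {"kind": "audio"},
]
for e in elements:
    print(e["kind"], "->", transport_for(e))
```

A real implementation would make this decision inside the streaming stack (e.g., on top of WebTransport), possibly adapting the policy to measured loss and delay.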
Outline of the research work plan
During the project's first year, the doctoral candidate will explore and gain practical experience with server and browser-related software (or libraries) for QUIC and WebTransport. Specifically, the candidate will test and investigate open-source software implementations geared toward low-delay multimedia streaming. This activity will address the creation of an experimental framework for client-server streaming of multimedia content over HTTP/3. This implementation will act as the foundation for testing, analyzing, and comparing different cutting-edge protocols under various practical scenarios, such as network conditions and media bitrate, among others. The expected outcome of this initial investigation is to produce a conference publication to present the research framework to the community and to facilitate subsequent engagement with the international research groups working on the topic.
In the second year, building on the knowledge already present in the research group and on the candidate's background, new experiments for i) bitrate adaptability to time-varying network conditions, ii) quality/delay trade-offs, and iii) scalability support by means of relay nodes or CDNs will be implemented, simulated, and tested in the laboratory to analyze their performance and their ability to provide a significant reduction in end-to-end latency with respect to non-HTTP/3-based solutions. These results are expected to yield further conference publications and potentially a journal publication with one or more theoretical performance models of the tested systems.
In the third year, the activity will be expanded with the contribution of a media company in order to unfold new possibilities in supporting the scalability of the ultra-low delay streaming protocol via relay nodes or CDNs. The candidate will provide assistance to the company in the experimental deployment of the new solution in an industrially relevant environment. The proposed techniques will aim to produce advancements that will be targeted towards a journal publication reflecting the results that can be achieved in industrially relevant contexts.
List of possible venues for publications
Possible targets for research publications (well known to the proposer) include IEEE Transactions on Multimedia, IEEE Internet Computing, ACM Transactions on Multimedia Computing Communications and Applications, Elsevier Multimedia Tools and Applications, various international conferences (IEEE ICME, IEEE INFOCOM, IEEE ICC, ACM WWW, ACM Multimedia, ACM MMSys, Packet Video).
Required skills: The candidate is expected to have a good background in multimedia coding/streaming, computer networking, and web development. A reasonable knowledge of network programming and software development in the Unix/Linux environment is appreciated.

Group website: Research Group: https://www.dauin.polito.it/research/research_gro...
Summary of the proposal: One of the main challenges in the future society will be the introduction of fully automated and connected transportation. In this process, a key role is played by safety-related connectivity, which is at its initial stage from a consumer perspective but has decades of research in its background and different wireless technologies being debated. This PhD proposal will focus on what is called the "Day-3+" of cooperative connected and automated vehicles (CCAVs) and, more specifically, on advanced manoeuvre coordination. The activity will consider one of the most challenging use cases for CCAVs, which is the management of road intersections, with the objective of overcoming the need for traffic lights and improving the capacity of the road infrastructure without reducing road safety. The focus of the activity will be on the design of communication protocols exploiting a wide range of wireless technologies and leveraging different sensor information. The goal is to enhance the reliability of the communication as a function of aspects like the size and frequency of the generated cooperative messages, the interference from the other vehicles, and the inhomogeneity of the scenario, therefore addressing the design of new wireless access techniques, including both fully distributed and network-aided communications. The activity will also include the implementation of a proof of concept (PoC) before the end of the PhD activity.
Topics: Connected Vehicles, Automated Vehicles, Road Safety
Research objectives and methods: The focus of the activity will be on the design of communication protocols exploiting a wide range of wireless technologies and leveraging different sensor information, aiming at enhancing the reliability of the communication in challenging road scenarios requiring coordination among connected vehicles. The main goals of the activity will be the following:
- Goal 1: identification of application scenarios and requirements. To identify and classify scenarios (e.g., from small to large intersections) and to recognize the main issues and requirements from the point of view of communications.
- Goal 2: design of protocols for intersection management. To design new protocols for safe and seamless management of intersections, considering the bandwidth and reliability limitations of the wireless communication systems that are used and the different users of the intersection: light/heavy vehicles and vulnerable road users (bikes, pedestrians and new mobility vehicles).
- Goal 3: data sharing strategies. To develop efficient data sharing strategies aimed at maximizing the context awareness and the accuracy of trajectory prediction, also in the presence of heterogeneous sensing equipment on the vehicles, exploiting the concept of Value-of-Information.
- Goal 4: wireless communications for intersection management. To investigate new protocols and access techniques, also including hybrid technologies for both centralized and decentralized networks.
- Goal 5: design of federated MEC servers for intersection management. To design new MEC-based solutions to cope with complex scenarios.
- Goal 6: PoC implementation: To implement a PoC to test and validate the concept on field, with reference to an application scenario identified during the activity.
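As a toy illustration of the intersection-management idea in Goal 2, a first-come-first-served slot reservation scheme can be sketched as follows; the conflict model, path names, and slot granularity are simplifying assumptions, not the protocols to be designed:

```python
# Signal-free intersection management: each vehicle requests a time
# slot for its crossing path; conflicting paths may not share a slot.

CONFLICTS = {  # ordered pairs of paths that cross inside the intersection
    ("N-S", "E-W"), ("E-W", "N-S"),
}

class IntersectionManager:
    def __init__(self):
        self.schedule = {}  # slot index -> set of paths granted in that slot

    def request(self, path, earliest_slot):
        """Grant the first slot >= earliest_slot free of conflicting paths."""
        slot = earliest_slot
        while any((path, other) in CONFLICTS
                  for other in self.schedule.get(slot, set())):
            slot += 1
        self.schedule.setdefault(slot, set()).add(path)
        return slot

mgr = IntersectionManager()
print(mgr.request("N-S", 0))  # 0: intersection is empty
print(mgr.request("N-S", 0))  # 0: same direction, no conflict
print(mgr.request("E-W", 0))  # 1: conflicts with N-S in slot 0, delayed
```

The research challenge targeted by Goals 2-5 is doing this reliably over lossy, bandwidth-limited wireless links, with heterogeneous road users and possibly without any central coordinator.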
The activity will be articulated as follows:
The first six months (M6) will be devoted to establishing the state of the art in the field, looking at developments set forth by the main standardization bodies (ETSI, 3GPP) and other authorities and consortia (C2CC, 5GAA, NHTSA...). Then, in the following 6 months (M12), the PhD candidate will narrow down the focus to one or more of the research objectives outlined above, placing them into the context of the Day-3+ roadmap while starting the development of tools (simulators, emulators, prototypes) for the study of the research objectives.
During the second year, until M24, the PhD candidate will consolidate the work on the development of tools, contributing to existing research projects at the national and European level, while preparing at least one scientific publication for a major conference or journal.
In the final year, until M36, the candidate will assess the work, review the objectives, quantify the Key Performance Indicators of the research and the benchmarks, leading to the writing of more scientific papers and of the PhD thesis.
The targeted publication venues are:
Conferences:
IEEE Infocom, IEEE VNC, ACM Mobicom, ACM Mobihoc
Journals:
IEEE Transactions on Mobile Computing
IEEE Transactions on Network and Service Management
IEEE Communications Magazine
Funded projects related to the PhD proposal:
- MaaS4it – Progetto “ToMove” - PNRR PE14 Spoke 5 – focus project “MoVeOver”
Required skills: Required:
- Programming skills in C/C++/Python.
- Familiarity with concepts of simulation and mobile/vehicular communication.
Desirable:
- Knowledge of Data management and ML/AI techniques.

Group website: http://grains.polito.it/index.php
Summary of the proposal: Creatives in the animation and film industries constantly explore new and innovative tools and methods to enhance their creative process, especially in pre-production. As realistic, real-time rendering techniques have emerged in recent years, 3D game engines and modeling and animation tools have been exploited to support storyboarding and movie prototyping. The goal of this research is to design and implement innovative solutions based on machine learning techniques and extended reality technologies to automate, improve and extend the storyboarding process.
Topics: Extended Reality; Machine Learning; 3D Storyboarding
Research objectives and methods: Creatives in the animation and film industries constantly explore new and innovative tools and methods to enhance their creative process, especially in pre-production. Traditional approaches rely on hand-drawn storyboards and physical mockups, whereas information technology introduced sketch-based and picture-based 2D drawing applications. As 3D animation became popular, 3D artists were employed to recreate the storyboard drawings in 3D. This approach is also helpful in cinematic production to pre-visualize the story before the video shoot phase. However, the conversion accuracy from 2D images to a 3D scene can only be judged once the 3D artists complete their work. Drawing objects at the correct scale without absolute references (for cinematic productions) or 3D references (for CGI productions) is another possible source of error. Performing the storyboarding process directly in 3D can resolve these issues and provide an interactive pre-visualization of the storyboard.

As realistic, real-time rendering techniques have emerged in recent years, 3D modeling and animation tools and 3D game engines, as well as machine learning techniques and extended reality interfaces, have been researched and explored to support storyboarding and movie prototyping: developing tools to generate storyboards automatically; exploiting 3D to innovate the shooting process of CGI by assisting filmmakers in camera composition; investigating novel ways to automatically pose 3D models from 2D sketches, scripts, or dedicated staging languages such as the Prose Storyboard Language (PSL); developing semi-automated cinematography toolsets capable of procedurally generating cutscenes.

The goal of this research is to design and implement innovative solutions based on machine learning techniques and extended reality technologies to automate, improve and extend the storyboarding process. The activities will comprehend: automatic 3D reconstruction of real environments; recognition of the user actions for extended reality applications; automatic analysis and creation of 3D storyboards.

Research work plan:
1st year
- Conduct a comprehensive literature review of existing machine learning techniques for 3D reconstruction, text analysis, and image processing. Moreover, existing 3D storyboard approaches will be considered.
- Identify and analyze the challenges and limitations of these techniques.
- Improve programming, problem-solving, and data analysis skills pertaining to machine learning techniques, image processing, and extended reality interface creation.
- Develop strong communication skills to effectively communicate research findings and ideas to a range of audiences.
- Write and submit a systematic literature review/survey to a relevant conference/journal.
2nd year
- Develop and propose novel approaches that address existing challenges and improve the storyboarding process.
- Write and submit publications to relevant conferences and journals.
3rd year
- Implement and test the proposed techniques in simulation and real-world scenarios.
- Analyze and evaluate the results of the simulations and experiments.
- Write and submit publications to relevant conferences and journals.

Possible venues for publications:
- IEEE Transactions on Visualization and Computer Graphics (Q1, IF 5.226)
- ACM Transactions on Graphics (Q1, IF 7.403)
- International Journal of Computer Vision (Q1, IF 13.369)
- Computer Graphics Forum (Q1, IF 2.363)
- IEEE Computer Graphics and Applications (Q2, IF 1.909)
- ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques
- EUROGRAPHICS Annual Conference of the European Association for Computer Graphics
- Entertainment Computing (Q2, IF 2.072)
- IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Required skills: The candidate should have a strong background in computer science. Experience in computer vision techniques, data science, and machine learning is a plus. The candidate should also have excellent problem-solving and analytical skills, whereas solid communication and writing skills are a plus. The candidate should be self-motivated and able to work independently and collaboratively with a team.

Group website: http://www.cad.polito.it/
Summary of the proposal: Artificial Intelligence (AI) is playing an increasingly important role in many industrial sectors. Techniques ascribable to this large framework have long been used in the Computer-Aided Design (CAD) field: probabilistic methods for the analysis of failures or the classification of processes; evolutionary algorithms for the generation of tests; bio-inspired techniques, such as simulated annealing, for the optimization of parameters or the definition of surrogate models. The proposal focuses on the use and development of algorithms specifically tailored to the needs and peculiarities of CAD industries.
Topics: CAD, Artificial Intelligence, Machine Learning
Research objectives and methods: The recent popularity of the term “Machine Learning” has renewed interest in many automatic processes; moreover, the publicized successes of (deep) neural networks have softened the bias against other non-explicable, black-box approaches, such as evolutionary algorithms or the use of complex kernels in linear models. The goal of the research is twofold: from an academic point of view, tweaking existing methodologies, as well as developing new ones, specifically able to tackle CAD problems; from an industrial point of view, creating a highly qualified expert able to bring scientific know-how into a company, while also understanding practical needs, such as how data are selected and possibly collected. The need to team experts from industry with more mathematically minded researchers is apparent: frequently, a great knowledge of the practicalities is not accompanied by an adequate understanding of the statistical models used for analysis and prediction.

The research will consider techniques less able to process large amounts of information, but perhaps more able to exploit all the problem-specific knowledge available. It will almost certainly include bio-inspired techniques for generating, optimizing, and minimizing test programs, and statistical methods for analyzing and predicting the outcome of industrial processes (e.g., predicting the maximum operating frequency of a programmable unit based on the frequencies measured by some ring oscillators; detecting dangerous elements in a circuit; predicting catastrophic events). The activity is also likely to exploit (deep) neural networks; however, developing novel, creative results in this area is not a priority. Rather, the research shall face problems related to dimensionality reduction, feature extraction, and prototype identification/creation.

From a practical standpoint, the activity would start by analyzing a current practical need, namely “predictive maintenance”. A significant amount of data is currently collected by many industries, although in a rather disorganized way. The student would start by analyzing the practical problems of data collection, storage, and transmission while, at the same time, practicing the principles of data profiling, classification, and regression (all topics currently considered part of “machine learning”). The analysis of sequences to predict a final event, or rather to identify a trigger, is an open research topic with implications far beyond CAD. Unfortunately, unlike popular ML scenarios, the availability of data is a significant limitation here, a situation sometimes labeled “small data”.

Then the research shall focus on the study of surrogate measures, that is, the use of measures that can be easily and inexpensively gathered as a proxy for others that are more industrially relevant but expensive. In this regard, Riccardo Cantoro and Squillero are working with a semiconductor manufacturer on using in-situ sensor values as a proxy for the prediction of operating frequency, and they have jointly supervised master's students.
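As an illustration of the surrogate-measure idea, a minimal sketch in Python with scikit-learn: a cheap set of in-situ readings stands in for an expensive measurement via a regression model. The synthetic sensor data, the linear relationship, and the Ridge model are illustrative assumptions, not the actual industrial setup.

```python
# Sketch: learning a surrogate model that predicts an expensive measurement
# (e.g., maximum operating frequency) from cheap in-situ sensor readings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_devices, n_sensors = 500, 8
# Synthetic ring-oscillator readings (one row per device).
X = rng.normal(loc=1.0, scale=0.1, size=(n_devices, n_sensors))
# Assume the true Fmax is an unknown linear mix of the sensors plus noise.
w_true = rng.uniform(0.5, 2.0, size=n_sensors)
y = X @ w_true + rng.normal(scale=0.05, size=n_devices)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2: {r2_score(y_te, surrogate.predict(X_te)):.3f}")
```

In the "small data" regime mentioned above, a heavily regularized linear model like this is often preferable to a deep network.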

The work could then proceed by tackling problems related to “dimensionality reduction”, useful to limit the number of input data of the model, and “feature selection”, essential when each single feature is the result of a costly measurement. At the same time, the research is likely to help the introduction of more advanced optimization techniques in everyday tasks.

Expected target publications:
Top journals with impact factors
- ASOC – Applied Soft Computing
- TEC – IEEE Transactions on Evolutionary Computation
- TC – IEEE Transactions on Computers
Top conferences
- ITC – International Test Conference
- DATE – Design, Automation and Test in Europe Conference
- GECCO – Genetic and Evolutionary Computation Conference
- CEC/WCCI – World Congress on Computational Intelligence
- PPSN - Parallel Problem Solving From Nature

Notes:
- The CAD Group has a long record of successful applications of intelligent systems in several different domains. For the specific activities, the list of possibly involved companies includes: SPEA, Infineon (through the Ph.D. student Niccolò Bellarmino), STMicroelectronics, and Comau (through the Ph.D. student Eliana Giovannitti).
- The proposer is collaborating with Infineon on the subjects listed in the proposal: two contracts have been signed, and a third extension is currently under discussion; a joint paper has been published at ITC, another has been submitted, and others are in preparation.
- The proposer is collaborating with SPEA under the umbrella contract “Colibri”. Such a contract is likely to be renewed on precisely the topics listed in the proposal.
Required skills:
- Proficiency in Python (including deep understanding of object-oriented principles and design patterns).
- Proficiency in using libraries such as NumPy, Pandas, and SciPy for data analysis and manipulation.
Preferred
- Knowledge of Electronic CAD.
- Knowledge of Scikit-Learn.
- Knowledge of Neural Networks and PyTorch.

Group website: http://www.cad.polito.it/
Summary of the proposal: Soft Computing in general, and evolutionary computation (EC) in particular, is experiencing a peculiar moment. On the one hand, fewer and fewer scientific papers focus on EC as their main topic; on the other hand, traditional EC techniques are routinely exploited in practical activities that are filed under different labels. Divergence of character, or, more precisely, the lack of it, is widely recognized as the single most impairing problem in the field of EC. While divergence of character is a cornerstone of natural evolution, in EC all candidate solutions eventually crowd the very same areas of the search space; such a “lack of speciation” was already pointed out in the seminal work of Holland back in 1975. It is usually labeled with the oxymoron “premature convergence” to stress the tendency of an algorithm to converge toward a point where it was not supposed to converge in the first place. The research activity would tackle “diversity promotion”, that is, either “increasing” or “preserving” diversity in an EC population, from both a practical and a theoretical point of view. It will also include the related problems of defining and measuring diversity.
Topics: Evolutionary Computation, Soft Computing
Research objectives and methods: The first phase of the project shall consist of an extensive experimental study of existing diversity preservation methods across various global optimization problems. MicroGP, a general-purpose EA, will be used to study the influence of various methodologies and modifications on the population dynamics. Solutions that do not require the analysis of the internal structure of the individual (e.g., Cellular EAs, Deterministic Crowding, Hierarchical Fair Competition, Island Models, or Segregation) shall be considered. This study should allow the development of a possibly new, effective methodology able to generalize and coalesce most of the cited techniques.

During the first year, the candidate will take a course in Artificial Intelligence and all Ph.D. courses of the educational path on Data Science. Additionally, the candidate is required to improve their knowledge of Python.

Starting from the second year, the research activity shall include Turing-complete program generation. The candidate will move to MicroGP v4, the new Python version of the toolkit, currently under active development. That would also ease the comparison with existing state-of-the-art toolkits, such as inspyred and deap. The candidate will try to replicate the work of the first year on much more difficult genotype-level methodologies, such as Clearing, Diversifiers, Fitness Sharing, Restricted Tournament Selection, Sequential Niching, Standard Crowding, the Tarpeian Method, and Two-level Diversity Selection.
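Among the genotype-level methodologies listed above, fitness sharing is perhaps the easiest to sketch: an individual's raw fitness is divided by its niche count, penalizing solutions that crowd the same region of the search space. A minimal NumPy sketch follows; the Euclidean distance metric and the sigma/alpha values are illustrative assumptions.

```python
# Sketch of fitness sharing: an individual's fitness is scaled down by the
# number of similar individuals in its niche.
import numpy as np

def shared_fitness(fitness, population, sigma=0.5, alpha=1.0):
    """Scale raw fitness by the niche count of each individual."""
    pop = np.asarray(population, dtype=float)
    # Pairwise Euclidean distances between genotypes.
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    # Sharing function: 1 at distance 0, decaying to 0 at distance sigma.
    sh = np.where(d < sigma, 1.0 - (d / sigma) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)  # includes self (distance 0 -> sh = 1)
    return np.asarray(fitness, dtype=float) / niche_count

# Two individuals crowding the same niche get penalized; a loner does not.
pop = [[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]]
fit = [1.0, 1.0, 1.0]
print(shared_fitness(fit, pop))
```

The same niche-count machinery underlies Clearing and Sequential Niching, which differ mainly in how the penalty is applied.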

At some point, probably toward the end of the second year, the new methodologies will be integrated into the Grammatical Evolution framework developed at the Machine Learning Lab of the University of Trieste. GE allows a sharp distinction between phenotype, genotype, and fitness, creating an unprecedented test bench (Squillero is already collaborating with prof. Medvet on these topics; see “Multi-level diversity promotion strategies for Grammar-guided Genetic Programming”, Applied Soft Computing, 2019).

A remarkable goal of this research would be to link phenotype-level methodologies to genotype measures.

Target Publications

Journals with impact factors
- ASOC - Applied Soft Computing
- ECJ - Evolutionary Computation Journal
- GPem - Genetic Programming and Evolvable Machines
- Informatics and Computer Science Intelligent Systems Applications
- IS - Information Sciences
- NC - Natural Computing
- TCIAIG - IEEE Transactions on Computational Intelligence and AI in Games
- TEC - IEEE Transactions on Evolutionary Computation
Top conferences
- ACM GECCO - Genetic and Evolutionary Computation Conference
- IEEE CEC/WCCI - World Congress on Computational Intelligence
- PPSN - Parallel Problem Solving From Nature
Note:
The CAD Group has a long record of successful applications of evolutionary algorithms in several different domains. For instance, the ongoing collaboration with STMicroelectronics on the test and validation of programmable devices exploits evolutionary algorithms and would benefit from the research. Squillero is also in contact with industries that actively consider exploiting evolutionary machine learning to enhance their biological models, for instance, KRD (Czech Republic), Teregroup (Italy), and BioVal Process (France).
Required skills:
- Proficiency in Python (including deep understanding of object-oriented principles and design patterns).
Preferred
- Experience with metaheuristics
- Experience with optimization algorithms

Group website: https://cad.polito.it/
Proposer: https://www.polito.it/personale?p=ernesto.sanchez
https://staff.polito.it/ernesto.sanchez/index.html
Topics: Reliability, Security, Autonomous vehicles.
Research objectives and methods: Artificial Intelligence (AI) based applications, and in particular unmanned vehicles (UVs), have been a subject of great interest in recent years. Their complexity, stemming from the interaction of hardware and software, is continuously growing. In this context, an emerging set of problems regards the verification, testing, reliability, and security of such applications, particularly of the computational elements involved in artificial intelligence computations. During this project, the Ph.D. candidate will study, from both hardware and software perspectives, how to improve the reliability and security of unmanned vehicles based on AI solutions.
The Ph.D. proposal aims at studying the current design, verification, and testing methodologies that try to guarantee a correct implementation of AI-based systems in UVs, with particular interest in the available solutions to increase system reliability and security. During the initial phase, a set of benchmarks providing suitable case studies for the following research steps will be defined. Different types of AI-based systems in UVs will be analyzed: the first implements the AI algorithm on open-source devices or commercial off-the-shelf (COTS) components, such as systems that embed high-performance processor cores; alternatively, the system application can be based on hardware accelerators that exploit, for example, FPGA implementations. From the reliability point of view, there is a lack of metrics able to correctly assess how reliable an AI-based system is; a study and proposal of appropriate metrics is therefore also required. In fact, it will be necessary to gather the most suitable metrics or define a set of fault models oriented to better identify device vulnerabilities at development time.

An additional effort to consider the system security of AI algorithms running on embedded systems is also required. Regarding security, the lack of metrics and experimental demonstrators makes it important to fill this gap by providing some indications about the main security criticalities and how to mitigate them in UVs based on embedded systems. Finally, mitigation strategies based on self-test, error recovery, and early detection mechanisms will be developed for the autonomous systems studied. The final goal is to equip the AI hardware with self-test mechanisms to detect hardware errors and possible threat intrusions, thanks to the implementation of fault-tolerant and security-oriented mechanisms, increasing the reliability and security of the AI algorithm while maintaining the system accuracy.
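As a toy illustration of the kind of reliability assessment discussed above, a single-bit-flip fault-injection campaign on the weights of a minimal classifier might look as follows. The network, data, and fault model are illustrative assumptions, not the actual systems under study.

```python
# Sketch of a weight bit-flip fault-injection campaign to estimate how a
# hardware fault model affects a (toy) neural classifier's outputs.
import numpy as np

rng = np.random.default_rng(0)

def forward(W, x):
    return np.argmax(W @ x)  # minimal one-layer "classifier"

def flip_bit(value, bit):
    """Flip one bit of a float32 weight, mimicking a single-event upset."""
    as_int = np.float32(value).view(np.uint32)
    return (as_int ^ np.uint32(1 << int(bit))).view(np.float32)

W = rng.normal(size=(3, 8)).astype(np.float32)
X = rng.normal(size=(200, 8)).astype(np.float32)
golden = [forward(W, x) for x in X]  # fault-free reference run

mismatches = 0
n_injections = 100
for _ in range(n_injections):
    Wf = W.copy()
    i, j = rng.integers(3), rng.integers(8)
    Wf[i, j] = flip_bit(Wf[i, j], rng.integers(32))  # single-bit fault
    faulty = [forward(Wf, x) for x in X]
    mismatches += any(f != g for f, g in zip(faulty, golden))

print(f"critical faults: {mismatches}/{n_injections}")
```

The fraction of injections that change at least one classification is a crude reliability metric; the metrics to be defined in the proposal would refine exactly this kind of measurement.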

Proposed work plan
The work plan is divided into three years as follows:

First year: 1. Study and identification of the most important works on design and verification of AI solutions used in unmanned vehicles.
2. Study and identification of the most relevant works related to security issues for AI solutions used in unmanned vehicles.
3. Design and implementation of the cases of study resorting to hardware accelerators, Open-Source devices, and COTS based on high performance processor cores.
Second year: 4. Fault model definition and experimentation mainly resorting to the implemented cases of study.
5. Metrics definition for the reliability assessment of UVs.
6. Metrics definition for the security assessment of UVs.
Third year: 7. Mitigation strategies proposal.

In a few words: the first three steps will provide the Ph.D. candidate with the appropriate background to perform the following activities. Steps 4 to 7 are particularly interesting from the research point of view, allowing the student to write papers and present them at international conferences related to the different research areas faced during the Ph.D. During these research phases, the student will have the possibility to cooperate with international companies such as NVIDIA and STMicroelectronics, and with foreign research groups in Lyon, Montpellier, and other universities.

Target publications:
The main conferences where the Ph.D. student will possibly publish her/his works are:
DATE: IEEE - Design, Automation & Test in Europe Conference
ETS: IEEE European Test Symposium
ITC: IEEE International Test Conference
VTS: IEEE VLSI Test Symposium
IOLTS: IEEE International Symposium on On-Line Testing and Robust System Design
MTV: IEEE International workshop on Microprocessor/SoC Test, Security & Verification
Additionally, the project research may be published in relevant international journals, such as: TCAD, TVLSI, ToC, IEEE Transactions on Vehicular Technology, IEEE Transactions on Reliability.

Industries possibly involved in the proposal:
NVIDIA, STMicroelectronics

Group website: https://dbdmg.polito.it/dbdmg_web/
Summary of the proposal: Innovative machine learning solutions are needed to enable new mobility applications by enriching vehicles with cognitive capabilities to make contextual decisions tailored to users' needs. The research challenge is to characterize relevant vehicle contexts through a multimodal sensing approach (onboard sensors, environmental data, offboard information, user profile data, and external and internal vehicle context) to provide data-driven, personalized, and safe services to drivers and passengers.
Topics: Machine learning, Vehicle contexts, Data-driven models
Research objectives and methods: This research will investigate innovative data analysis services for in-vehicle context awareness through machine learning methods. Several distinct but strongly interrelated research objectives (RO) should be pursued to achieve this goal.
RO1. Automatically process raw data collected from a vehicle or fleet, together with information describing the external environment, to augment the cognitive capabilities of vehicles in support of contextual decision making, i.e.:
++ RO1a. How can novel data fusion techniques be defined to provide a global view of vehicles?
++ RO1b. How can automotive contexts be modeled using a multimodal sensing approach (on-board sensors, environmental data, off-board information, user profile data, etc.) to provide relevant content and functionality to the driver and passengers?
++ RO1c. Can machine learning algorithms overcome data cleaning issues and provide opportunistic and specific results to support in-vehicle context-aware applications?
++ RO1d. To what extent are in-vehicle data collections suitable for learning pre-trained models that can be conveniently tuned to specific tasks (e.g., context modeling and categorization, predictive maintenance based on vehicle activities and driving situations).
RO2. Improve learning capabilities by leveraging end-user feedback.
RO3. Study a conversational interface to explore the capabilities of proposed machine learning tools for in-vehicle context awareness and user feedback.
RO4. Benchmark and evaluate the proposed system with different vehicles/fleets.

The above objectives open a broad multidisciplinary research landscape that touches on core aspects of machine learning research for Industry 4.0 applications and the automotive sector. The study will advance the application of a machine learning methodology for processing raw input text, categorizing application context, modeling vehicle contexts, and supporting decision-making.
A further focus is continuously learning from user feedback and interactions to improve the system's adaptability using new data to enhance the ability of the proposed data-driven approach to improve overall performance.
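As a toy illustration of RO1 (multimodal sensing for context modeling), per-modality features can be normalized and concatenated before a single classifier. The modalities, features, and the "context" label below are illustrative assumptions, not the project's actual data.

```python
# Sketch of feature-level multimodal fusion for vehicle-context
# classification: normalize each modality separately, fuse, then classify.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 400
onboard = rng.normal(size=(n, 5))      # e.g., speed, rpm, steering angle
environment = rng.normal(size=(n, 3))  # e.g., temperature, rain, light
# Synthetic context label (say, highway vs. urban) driven by both modalities.
y = (onboard[:, 0] + environment[:, 1] > 0).astype(int)

# Normalize each modality separately, then fuse by concatenation.
fused = np.hstack([
    StandardScaler().fit_transform(onboard),
    StandardScaler().fit_transform(environment),
])
clf = RandomForestClassifier(random_state=0).fit(fused[:300], y[:300])
acc = clf.score(fused[300:], y[300:])
print(f"held-out context accuracy: {acc:.2f}")
```

More elaborate data fusion (RO1a) would replace the plain concatenation with learned joint representations, but the pipeline shape stays the same.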
Required skills: Strong background in data science and deep learning.

Group website: https://www.dauin.polito.it/it/la_ricerca/gruppi_di_ricerca/grain...
Summary of the proposal: Quantum Computing (QC) is a fairly new research field. The Ph.D. candidate will be required to work on cybersecurity issues from an interdisciplinary point of view. In particular, quantum and post-quantum cryptography algorithms will be considered, but the focus will be on all applications of QC for security, including Quantum Key Distribution (QKD). Since many QC-based algorithms are related to security (e.g., Shor's), such algorithms will also be considered during the research. This broad spectrum is important because QC-related security applications are fast changing.
Topics: Quantum computing, Cybersecurity, QKD
Research objectives and methods: Quantum computing (QC), being a totally new paradigm, is going to be a challenge for engineers, who will not only have to re-implement classical algorithms in a quantum way, but also explore uncharted paths in the new way of representing and processing information. In the last five years, QC companies and research institutes have come up with different software stacks, appealing to a wide spectrum of possible users, from machine learning to optimization to material simulation and, of course, to quantum cybersecurity. These companies are trying to provide programmers with pseudo-standard APIs like the ones already available for conventional computers. Since there are many different technologies, APIs are quite different from one technology to another. From the point of view of quantum-based cybersecurity, it is possible to use such APIs in an interesting way, even if different technologies can require different approaches. The research objectives will therefore be related to the development of new algorithms and techniques in quantum and post-quantum cryptography, also considering all the related algorithms, such as Shor's. This work will be developed during the three years, following the usual Ph.D. program:
- first year, improvement of the basic knowledge, attendance of most of the required courses, submission of at least one conference paper
- second year, design and implementation of new algorithms and submission of conference papers and at least one journal
- third year, finalization of the work, with at least a selected journal publication.
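To illustrate why Shor's algorithm is so relevant to cybersecurity, the classical reduction from period finding to factoring can be sketched in a few lines. The quantum speedup lies entirely in finding the period, which is brute-forced here; N and a are toy assumptions, not cryptographic sizes.

```python
# Sketch of the classical post-processing at the heart of Shor's algorithm:
# once the period r of a^x mod N is known, the factors of N follow from gcds.
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N (brute-force stand-in for the
    quantum period-finding step)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7          # toy RSA-style modulus and a coprime base
r = order(a, N)
assert r % 2 == 0     # even period needed for the reduction to work
p = gcd(a ** (r // 2) - 1, N)
q = gcd(a ** (r // 2) + 1, N)
print(f"period {r}: {N} = {p} * {q}")
```

Classically, `order` takes exponential time in the bit length of N; a quantum computer finds r in polynomial time, which is why RSA-style schemes motivate the post-quantum work above.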

Possible venues for publication will be, if possible, journals and conferences related to QC, from IEEE and ACM. An example could be the IEEE Quantum Computing Engineering (QCE) conference.
The scholarship, funded by Comitato ICT, is not explicitly tied to a specific project, but it is expected that the candidate will be involved in two European projects, both on quantum-based cybersecurity and in particular on Quantum Key Distribution (QKD).
The work will also be done together with Fondazione Links, with whom there is already a strong collaboration on several projects. The two European projects just cited will be in collaboration with Fondazione Links too.
Required skills: The ideal candidate should have an interest in Quantum Computing and Cybersecurity.
The candidate should also have a good background in programming skills, mainly in Python and a knowledge of classic Cybersecurity. A background in Quantum mechanics is also very useful.

Group website: https://dbdmg.polito.it/
https://smartdata.polito.it/
Summary of the proposal: As the demand for novel distributed machine learning models operating at the edge continues to grow, so does the call for cloud continuum frameworks and technologies that combine edge and fog computing to support machine learning. In this broad context, the candidate will explore innovative solutions achieved by combining the benefits of edge-based machine learning models with the cloud continuum scenario, in a wide range of application contexts ranging from sustainable manufacturing to smart cities and watersheds, from healthcare to smart manufacturing, smart grids, and the Internet of Energy.
Topics: Machine Learning, Data Science, Edge Computing
Research objectives and methods: Research objectives

This research aims to define new methods for improving machine learning applications in cloud computing contexts. In contrast to traditional machine learning models, which are trained in the cloud and can leverage the virtually unlimited storage and computational resources offered by scalable data centers, the research will investigate the limitations of, experimentally evaluate, and improve the state of the art in machine learning models based on distributed and federated learning techniques. Applications that are delay sensitive or generate large amounts of distributed time-series data can benefit from the proposed paradigm: the computational power provided by devices at the edge and by intermediate nodes between the edge and the central cloud (fog computing) can be used to provide cloud-continuum machine learning models.
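As an illustration of the federated learning techniques mentioned above, a minimal NumPy sketch of federated averaging (FedAvg), where only model weights, never raw data, leave the edge clients. The linear model and synthetic client shards are illustrative assumptions.

```python
# Sketch of FedAvg: each edge node trains on its local shard, and a server
# aggregates the resulting weights by averaging, round after round.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])

def local_update(w, X, y, lr=0.1, epochs=20):
    """A few steps of local gradient descent on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three edge clients, each holding its own private data shard.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ w_true + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):  # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # Server aggregates by averaging (shards are equal-sized here).
    w_global = np.mean(local_ws, axis=0)

print("global model:", np.round(w_global, 2))
```

Split learning and gossip learning change who aggregates and what travels (activations, peer-to-peer weights), but this round-based structure is the shared core.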

Innovative cloud continuum machine learning solutions will be applied using existing cloud-to-edge frameworks, while also following current EU research directions that aim to create alternatives to established hyperscalers by building an EU-based sovereign edge platform (e.g., SovereignEdge.eu, EUCloudEdgeIoT.eu, FluidOS, etc.).

The proposed research can be useful in many scenarios: Time Series Data Modeling and Energy Management at different scales, from watersheds (e.g., PNRR project NODES) to smart cities, from large buildings to complex vehicles (e.g., airplanes and cruise ships), from smart manufacturing to distributed sensors in healthcare, in smart power grids, and IoT networks where devices have limited resources and are very sensitive to environmental conditions, data speed, network connectivity, and power consumption.
To this end, several research topics will be addressed, such as:
- Edge AI and machine learning for next generation computing systems.
- Benefits and challenges of cloud and edge computing through comparative experimental analysis of state-of-the-art applications and real-world scenarios.
- Lightweight AI models with better efficiency for devices with limited computational and energy resources.
- Distributed and decentralized learning techniques in network monitoring and orchestration techniques.
- Mitigation and prevention of security breaches in Edge ML, using AI monitoring tools.


Outline of the research work plan

1st year. The candidate will explore the state of the art of distributed machine learning techniques, such as federated learning, split learning, and gossip learning, in the context of an edge computing environment. He/she will look for gaps and emerging trends in AI models in the cloud continuum and test applications of existing paradigms in a real-world setting.

2nd year. The candidate will design and develop novel solutions to overcome limitations and constraints by testing proposed methods on highlighted real-world challenges. Public, artificial, and possibly real data sets will be used for the development and testing phases. New limitations and constraints are expected to be discovered during this phase.

In the 3rd year, the candidate will advance the research by extending the experimental evaluation to more complex scenarios that can better benefit from the proposed cloud-continuum machine learning solutions. To identify shortcomings and possible further advances in new application areas, the candidate will optimize the proposed models.

List of possible venues for publications.
Journal of Grid Computing (Springer)
Future Generation Computer Systems (Elsevier)
IEEE TKDE (Trans. on Knowledge and Data Engineering)
IEEE TCC (Trans. on Cloud Computing)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TOIS (Trans. on Information Systems)
ACM TOIT (Trans. on Internet Technology)
ACM TOIST (Trans. on Intelligent Systems and Technology)
Information sciences (Elsevier)
Expert systems with Applications (Elsevier)
Internet of Things (Elsevier)
Journal of Big Data (Springer)
IEEE TBD (Trans. on Big Data)
Big Data Research
IEEE TETC (Trans. on Emerging Topics in Computing)
IEEE Internet of Things Journal
Journal of Network and Computer Applications (Academic Press)
Required skills:
- Knowledge of basic computer science concepts.
- Knowledge of the main cloud computing topics.
- Programming skills in C-family and Python languages.
- Undergraduate experience with data mining and machine learning techniques.
- Knowledge of English, both written and spoken.
- Capability of presenting the results of the work, both written (scientific writing and slide presentations) and oral.
- Entrepreneurship, autonomous working, goal orientation.
- Flexibility and curiosity for different activities, from programming to teaching to presenting to writing.
- Capability of guiding undergraduate students in thesis projects.

Group website: https://dbdmg.polito.it/
Summary of the proposal: Spatio-Temporal data are continuously increasing (time series collected from IoT sensors, satellite images, and textual documents such as tweets). Although Spatio-Temporal data have been extensively studied, the current data analytics approaches do not manage heterogeneous data effectively. Most of them focus on one data type at a time. Innovative ML approaches based on latent spaces, designed to leverage the heterogeneity of information conveyed by data sources, are the primary goal of this proposal.
Topics: Spatio-Temporal Data, Heterogeneous Data, Data Science
Research objectives and methods: The main objective of this research activity is to design data-driven techniques and machine learning algorithms to analyze heterogeneous Spatio-Temporal data (e.g., time series collected from IoT sensors, satellite images, and textual documents such as tweets). Both descriptive and predictive problems will be considered.

The main issues that will be addressed are as follows.

Heterogeneity. Several sources, characterized by different data types, are available. Each data source represents a facet of the analyzed phenomena and provides additional insights only if adequately integrated with the other sources. Innovative integration techniques based, for instance, on latent spaces will be studied to address this issue. Properly integrating heterogeneous data sources permits analyzing all facets of the phenomena of interest without losing information.

Scalability. Spatio-Temporal data are frequently big (e.g., vast collections of remote sensing data). Hence, big data solutions are needed to process and analyze them, mainly when historical data are analyzed.

Timeliness. Timeliness is crucial in several domains (e.g., emergency management). Real-time and incremental machine learning algorithms must be designed and implemented.

The work plan for the three years is organized as follows.
1st year. Analysis of the state-of-the-art algorithms and data analytics frameworks for heterogeneous Spatio-Temporal data. Based on the pros and cons of the current solutions, a preliminary common data representation based on latent spaces will be studied and designed to integrate heterogeneous data effectively. Based on the proposed data representation, novel algorithms will be designed, developed, and validated on historical data related to specific domains (e.g., emergency management).
2nd year. Common representations of heterogeneous Spatio-Temporal data will be further analyzed and proposed, focusing on incremental and scalable algorithms.
3rd year. The timeliness facet will be considered during the last year. Specifically, the focus will be on near real-time Spatio-Temporal data analysis based on incremental ML algorithms.

The outcomes of the research activity are expected to be published at IEEE/ACM International Conferences and in any of the following journals:
- ACM Transactions on Spatial Algorithms and Systems
- IEEE Transactions on Knowledge and Data Engineering
- IEEE Transactions on Big Data
- IEEE Transactions on Emerging Topics in Computing
- ACM Transactions on Knowledge Discovery in Data
- Information Sciences (Elsevier)
- Expert Systems with Applications (Elsevier)
- Machine Learning with Applications (Elsevier)
Required skills: Strong background in data science fundamentals and machine learning algorithms, including embeddings-based data models. Strong programming skills. Knowledge of big data frameworks such as Spark is advisable but not required.

Group website: https://security.polito.it
Summary of the proposal: Applications that run outside the direct control of their owner (e.g. in the cloud or on a smartphone) incur risks to the confidentiality of their data and operations. This calls for a robust execution environment, where applications of different stakeholders cannot interfere with each other (e.g. to access cryptographic keys or sensitive information). The research objective is the design and test of an environment that provides this segregation level and can prove its correct operation. This will typically take the form of a TEE (Trusted Execution Environment) based on a suitable hardware root-of-trust.
Topics: Cybersecurity
Research objectives and methods: Modern ICT infrastructures go beyond traditional boundaries: computing and storage are available not only at the core but – with edge and fog computing, personal devices, and IoT – there are several distributed components that contribute to data processing and storage. Similarly, networks have evolved into intelligent infrastructures able to perform several tasks. SDN (Software Defined Networking) permits intelligent packet processing based on an external supervisor (the controller and the SDN applications), while NFV (Network Function Virtualization) implements on demand the processing (firewall, VPN, …) that once required dedicated appliances.

In this scenario, when an application is run on a node outside the direct control of its owner, there are risks about the confidentiality of the data and of the operations. For example, consider applications run in the cloud or on a smartphone. This calls for a robust execution environment, where applications belonging to different stakeholders are kept segregated so that they cannot interfere with each other (e.g. to access cryptographic keys or sensitive information).

The overall objective of this research is the design and test of an environment that provides a high segregation level between different applications and can prove its correct operation to a third party. This will typically take the form of a TEE (Trusted Execution Environment) based on a suitable hardware root-of-trust.

The specific objectives of the research activity are:
1. Identify, design, and implement appropriate software elements to support a secure and trusted execution environment.
2. Adapt an existing open-source trusted execution framework to the components designed (e.g., Keystone could be a suitable target, being open-source and modular).
3. Implement a system with the hardware and software components identified, develop suitable applications, and test its ability to provide security features in various domains, such as an IoT gateway or a multi-tenant NFV node.
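
To illustrate the measured-boot principle that underlies such hardware roots of trust, the sketch below emulates a PCR-style hash chain in pure Python. The component names and the initial register value are illustrative; no specific TEE's behavior is being reproduced.

```python
import hashlib

def extend(register, component):
    """PCR-style extend: new = H(old || H(component)). Order matters,
    so a verifier can detect a tampered or reordered boot stage."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

boot_chain = [b"firmware-v1", b"bootloader-v3", b"enclave-app-v2"]

m = b"\x00" * 32                      # measurement register, initially zero
for stage in boot_chain:
    m = extend(m, stage)

# The verifier recomputes the chain from known-good component images.
golden = b"\x00" * 32
for stage in boot_chain:
    golden = extend(golden, stage)
print(m == golden)                    # True: platform state matches expectations

tampered = b"\x00" * 32
for stage in [b"firmware-v1", b"evil-bootloader", b"enclave-app-v2"]:
    tampered = extend(tampered, stage)
print(tampered == golden)             # False: the substitution is detected
```

In a real attestation flow the final register value would additionally be signed by a hardware-held key so the proof can be checked remotely.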


The first year will be spent studying the existing paradigms for trusted execution, such as Intel TXT and SGX, the Trusted Computing platform, and the ARM TrustZone. The PhD student will also analyse modern security paradigms applied to software infrastructures. During this year, the student should also follow most of the mandatory courses for the PhD and submit at least one conference paper.
During the second year, the PhD student will design a custom approach for trusted execution in an open hardware platform (e.g. RISC-V) enriched with security components implemented in FPGA. The application domain should be oriented to modern infrastructures, for personal/edge/fog devices that support lightweight virtualisation technologies. At the end of the second year, the student should have started preparing a journal publication on the topic and submit at least another conference paper.
Finally, the third year will be devoted to the implementation and evaluation of the proposed solution, compared with the existing ones. At the end of this final year, a publication in a high-impact journal shall be achieved.


Possible target publications: IEEE Security and Privacy, Springer International Journal of Information Security, Elsevier Computers and Security, Future Generation Computer Systems.

This research is part of the H2020 project SPIRS (Secure Platform for ICT systems Rooted at the Silicon manufacturing process). https://www.spirs-project.eu/
Required skills: Cybersecurity (mandatory)
Trusted computing (preferred)
Hardware design and related firmware (preferred)

Group website: https://security.polito.it
Summary of the proposal: Modern ICT applications are highly distributed and networked. Given the intrinsic insecurity of the networks, inter-node communications must face several security problems: protection of the data being transmitted, authentication of the peer, and trust in the application executed by the peer. The research objective is the design and test of secure and trusted network channels, to be used in both lightweight and standard computing systems (e.g. IoT as well as cloud).
Topics: Cybersecurity, Network security, Trusted computing
Research objectives and methods: Modern ICT applications are highly distributed (e.g. cloud, edge, IoT, and personal devices) and heavily rely on network communications. However, networks are inherently insecure, and this generates various threats.

In this scenario, when an application on a node communicates with another one on a different node, several security problems arise, such as protection of the data being transmitted, authentication of the peer, trust in the application executed by the peer, and proof of transit.

The overall objective of this research is the design and test of secure and trusted network channels to be used in both lightweight and standard computing systems (e.g. IoT as well as cloud).

The specific objectives of the research activity are:
1. Identify, design, and implement appropriate software elements to support the creation of secure and trusted network channels.
2. Extend existing open-source systems (e.g. Linux and Keystone could be suitable targets) to support the designed secure and trusted network channels.
3. Implement a system with the hardware and software components needed to demonstrate the feasibility and performance of the designed secure and trusted network channels.
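
A central design issue for such channels is binding the attestation evidence to the specific connection, so that a valid quote cannot be relayed from a different channel. The stdlib-only sketch below illustrates the idea; the exporter value and the symmetric key are hypothetical placeholders (a real system would use an asymmetric signature by a hardware-held key and a certificate chain, not a shared HMAC key).

```python
import hashlib, hmac, os

# Hypothetical protocol state standing in for a real handshake.
tls_exporter = os.urandom(32)     # per-channel secret (RFC 5705-style exporter)
platform_measurement = hashlib.sha256(b"enclave-app-v2").digest()
attestation_key = b"device-root-of-trust-key"   # placeholder for a hardware key

# The prover binds its attestation to THIS channel by authenticating the
# measurement together with the channel's exporter value.
quote = hmac.new(attestation_key, platform_measurement + tls_exporter,
                 hashlib.sha256).digest()

# The verifier holds the same exporter from its end of the channel.
expected = hmac.new(attestation_key, platform_measurement + tls_exporter,
                    hashlib.sha256).digest()
print(hmac.compare_digest(quote, expected))   # True: quote belongs here

# A quote produced for a different channel does not verify: relay caught.
other = hmac.new(attestation_key, platform_measurement + os.urandom(32),
                 hashlib.sha256).digest()
print(hmac.compare_digest(other, expected))   # False
```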


The first year will be spent studying the existing paradigms for secure network channels (TLS, IPsec, …) and trusted execution (Intel TXT and SGX, the Trusted Computing platform, and the ARM TrustZone). The PhD student will also analyse modern security paradigms applied to software infrastructures. During this year, the student should also follow most of the mandatory courses for the PhD and submit at least one conference paper.
During the second year, the PhD student will design a custom approach for secure channels in a trusted execution environment, enriched with hardware root-of-trust. The application domain should be oriented to modern infrastructures, for personal/edge/fog devices that support lightweight virtualisation technologies. At the end of the second year, the student should have started preparing a journal publication on the topic and submit at least another conference paper.
Finally, the third year will be devoted to the implementation and evaluation of the proposed solution, compared with the existing ones. At the end of this final year, a publication in a high-impact journal shall be achieved.


Possible target publications: IEEE Security and Privacy, Springer International Journal of Information Security, Elsevier Computers and Security, Future Generation Computer Systems.

This research is part of the H2020 project SPIRS (Secure Platform for ICT systems Rooted at the Silicon manufacturing process). https://www.spirs-project.eu/
Required skills: Cybersecurity (mandatory)
Network security (mandatory)
Trusted computing (preferred)
Hardware design and related firmware (preferred)

Group website: https://security.polito.it
Summary of the proposal: Cybersecurity is typically based on mathematical cryptographic algorithms (e.g. RSA, ECDSA, ECDH) that are threatened by the advent of quantum computing.
The purpose of this research is to evolve various security components to quantum-resistant versions. This may include secure network channels (e.g. TLS, IPsec), digital signatures, secure boot, and remote attestation.
The final objective is the design and test of quantum-resistant versions of various security solutions in an open-source environment (e.g. Linux, Keystone).
Topics: Cybersecurity, Quantum computing
Research objectives and methods: Hard security is typically based on mathematical cryptographic algorithms that support symmetric and asymmetric encryption, key exchange, digital signatures, and hash computation.

Several of these algorithms (e.g. RSA, ECDSA, ECDH) are threatened by the advent of quantum computing. NIST and other bodies have thus selected new quantum-resistant algorithms and advocated their fast adoption in current security solutions. However, this is not a simple change, as there are several intertwined aspects to be considered, such as hardware support, key lengths, and X.509 certificates.

The purpose of this work is to evolve various security components of modern ICT infrastructures to quantum-resistant versions. This may include secure network channels (e.g. TLS, IPsec), digital signatures, secure boot, and remote attestation.

The overall objective is the design and test of quantum-resistant versions of several security solutions in an open-source environment (e.g. Linux, Keystone).

The specific objectives of this research activity are:
1. Identify security components threatened by quantum computing and review proposed standards to make them quantum-resistant.
2. Extend existing open-source systems and components (e.g. Linux, Keystone, openSSL, mbedTLS could be suitable targets) to support the proposed quantum-resistant solutions.
3. Implement a system with the hardware and software components needed to demonstrate the feasibility and performance of the improved quantum-resistant elements.
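
One widely proposed transition pattern this work could build on is hybrid key establishment, where a classical and a post-quantum shared secret are combined so that the session stays secure as long as either algorithm holds. The sketch below uses a minimal stdlib-only HKDF (RFC 5869); the two secrets are random placeholders standing in for real ECDH and post-quantum KEM outputs.

```python
import hashlib, hmac, os

def hkdf(salt, ikm, info, length=32):
    """Minimal HKDF (RFC 5869, extract-then-expand) with SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

# Placeholders: in a real hybrid handshake these come from ECDH and from
# a post-quantum KEM (e.g. ML-KEM), respectively.
ss_classical = os.urandom(32)
ss_pq = os.urandom(32)

# Concatenating both secrets into the key schedule means breaking the
# session requires breaking BOTH algorithms.
session_key = hkdf(b"hybrid-salt", ss_classical + ss_pq,
                   b"hybrid key schedule")
print(len(session_key))   # a 32-byte key usable by the record layer
```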

The first year will be spent studying the existing security paradigms and how they are affected by quantum computing. The PhD student will also analyse the proposed post-quantum algorithms and evaluate their performance and hardware requirements. During this year, the student should also follow most of the mandatory courses for the PhD and submit at least one conference paper.
During the second year, the PhD student will design a custom approach for quantum-resistant secure channels and trusted execution environment, possibly enriched with specialized hardware elements. At the end of the second year, the student should have started preparing a journal publication on the topic and submit at least another conference paper.
Finally, the third year will be devoted to the implementation and evaluation of the proposed solution, compared with the existing ones. At the end of this final year, a publication in a high-impact journal shall be achieved.

Possible target publications: IEEE Security and Privacy, Springer International Journal of Information Security, Elsevier Computers and Security, Future Generation Computer Systems.

This research is part of the Horizon Europe QUBIP project (Quantum-oriented Update to Browsers and Infrastructures for the PQ Transition)

Note: website not yet available, as the project will start in September 2023.
Required skills: Cybersecurity (mandatory)
Network security (mandatory)
Trusted computing (preferred)
Basics of quantum computing (preferred)

Group website: www.cad.polito.it
Summary of the proposal: Nowadays, no design architectures are available for RISC-V on FPGAs or ASICs that offer both high performance and radiation hardening. This research proposal targets the realization of a multi-core architecture based on RISC-V for space, exploring the reliability issues, the architectural benefits/trade-offs, and the computational modifications and effort required by computer-aided design tools to support this implementation.
Topics: FPGA, Reconfigurability, Reliability.
Research objectives and methods: This research proposal aims to target the implementation stages of a multi-core architecture based on RISC-V for space, considering the design, development, and final implementation and test. The main objectives of the research proposal are:
- Design, Mapping and Implementation exploration: The RISC-V architecture module sizes will be tuned with respect to the available area of state-of-the-art rad-hard FPGAs. The main component that will require manipulation and re-design is the RISC-V core. The memory modules are fundamental for the data storage and transfer of the multi-core computational workload. The RISC-V will be designed with an I/O interface in order to support the communication with external modules through a specific architectural bus (e.g., AXI or AMBA bus). This objective is expected to be achieved during the first 18 months of the PhD program.
- Validation and Performance exploration: The RISC-V multi-core architecture will be validated by developing a monitoring system. A host PC software platform will be developed to generate, transmit, and receive the test patterns that are transmitted to the control unit and towards the RISC-Vs. The purpose of the platform is to test and validate the proposed architectures' performance and their robustness against hardware failures that plague the computational RISC-V multi-core. For this purpose, a specific software compiler that will translate the high-level algorithm model to the specific Instruction Set Architecture (ISA) of the RISC-V will be released. This objective is expected to be achieved during the first 24 months of the PhD program.
- Reliability and Radiation Effects analysis and mitigation: The RISC-V multicore architecture will be implemented on commercial FPGAs and radiation-hardened FPGA devices available within the research group. The reliability and radiation effects will be evaluated using previously developed tools oriented to evaluate Single Event Effects (SEEs), such as transient effects and soft errors, as well as overall reliability. Furthermore, the developed architectures will be evaluated regarding the application error probability. The analysis will generate a list of resource sensitivities related to the implemented design, produced in relation to the release of the CAD implementation tools. The tools will provide two outcomes: the former is the Cumulative Application Error Probability for the RISC-V circuit and for all its modules; the latter is the generation of the placement and routing constraints of the architectural modules implemented on radiation-hardened devices. This objective is expected to be achieved by month 30 of the PhD program.
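
As a minimal illustration of the kind of SEE mitigation such an analysis would drive, the sketch below shows the classic triple modular redundancy (TMR) majority voter, which masks a single-event upset in any one of three redundant module outputs; the bit patterns are invented for the example.

```python
def tmr_vote(a, b, c):
    """Bitwise majority voter over three redundant module outputs:
    any single bit flip in one replica is outvoted by the other two."""
    return (a & b) | (a & c) | (b & c)

correct = 0b10110010                   # the intended module output
upset = correct ^ 0b00001000           # one replica hit by a single-bit SEU

voted = tmr_vote(correct, correct, upset)
print(voted == correct)                # True: the upset is masked
```

The same majority logic, replicated per flip-flop or per module and combined with periodic scrubbing, is the standard building block of rad-hard FPGA designs.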


The research projects correlated to the present proposals in the last years are:
- [active 2023] “TERRAC: Towards Exascale Reconfigurable and Rad-hard Accelerated Computing in space” in cooperation with the European Space Agency (ESA)
- [active 2021] “EuFRATE: European FPGA Radiation-hardened Architecture for Telecommunications and Embedded computing in space” in cooperation with Argotec and the European Space Agency (ESA)
- [active 2020] “The HERA Cubesat program” in cooperation with DIMEAS, Tyvak International and the European Space Agency (ESA)
- [closed 2020] “VEGAS: Validation of European high capacity rad-hard FPGA and software tools”, EU H2020-Compet Project
The research program will target the following conference venues:
- IEEE Design and Test in Europe (DATE)
- IEEE Dependable Network and System (DSN)
- IEEE International Test Conference (ITC)
- IEEE International conference on Field Programmable Logic (FPL)
- IEEE RADECS
- IEEE NSREC
The research program will target the following journals:
- IEEE Transactions on Circuits and Systems-I
- IEEE Transactions on Computers
- IEEE Transactions on Reliability
- ACM Transactions on Reconfigurable Technology and Systems
Required skills: The research program encourages the following skills: good knowledge of the VHDL language and of computer architectures.

Group website: http://grains.polito.it/
https://vr.polito.it/
Summary of the proposal: The umbrella term Extended Reality (XR) encompasses several media like Virtual Reality (VR), Augmented Reality (AR), Augmented Virtuality (AV), and Mixed Reality (MR), which are revolutionizing many sectors of society, from entertainment to manufacturing, from medicine to cultural heritage. The delivery of effective XR experiences relies on the ability to render high-quality, multimodal content that can either replace the real world or overlay it. The goal of this research is to tackle some of the major open issues pertaining to the reconstruction and simulation of both physical and fictional environments, objects, and phenomena, with the aim of offering users the possibility to interact with them in a seamless way to carry out the intended tasks. Different problems deriving from concrete application scenarios will be considered to challenge the devised solutions and validate the outcomes of the research.
Topics: eXtended Reality, simulation, human-machine interaction
Research objectives and methods: Nowadays, the concept of XR is used to broadly refer to a set of technologies that span the so-called “reality–virtuality continuum” and support the creation of experiences where the real and the virtual world are either exchanged or interleaved. Although XR technologies are not new, thanks to the availability of ever more powerful and affordable devices they have recently gained renewed momentum. Nevertheless, the creation of effective XR experiences is still characterized by many open problems.

On the one side, there is the reconstruction and the simulation of the required assets. For example, to use XR for supporting the study of the user experience with a new product, service, or infrastructure, e.g., in the context of urban air or road mobility, it is necessary to build a realistic scenario with streets, buildings, etc., and recreate the behavior of driving or flying vehicles and pedestrians, as well as interactions among them. Similarly, to use XR to let users engage in or observe critical situations like, e.g., a fire, a flood, or another disaster, for training or decision-making purposes, a physically accurate, interactive reproduction of fire, smoke, water, etc. would be needed. The realism of the reconstruction is related to the ability to translate real-world data into digital data and to the capability of the simulation engine to reproduce physical phenomena in the best way possible. On the other side, the effectiveness of XR experiences depends on the level of interaction that can be guaranteed to the users. Indeed, the digital environment needs to stimulate the users’ senses in a suitable way, but at the same time users need to be given the possibility to act on the environment in a natural and transparent way. To this aim, proper XR interfaces need to be designed with the goal of maximizing both the users’ levels of immersion and presence, as well as task success.

The research objective of the PhD student will be to tackle some of the open challenges in the depicted context like, e.g., the modeling of physical phenomena and their integration in interactive simulations, the implementation of believable virtual environments capable of replacing real experiences, as well as the design of natural and intuitive interfaces between the users and the simulated/reconstructed scenarios.
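
As a toy illustration of the fixed-timestep update loops at the heart of such interactive physical simulations, the sketch below integrates a bouncing particle with semi-implicit Euler; all constants (timestep, restitution, drop height) are illustrative.

```python
# Semi-implicit Euler integration of a bouncing particle: the kind of
# fixed-timestep loop that drives interactive physics in XR engines.
DT, GRAVITY, RESTITUTION = 1 / 60, -9.81, 0.8   # 60 Hz step, energy loss on bounce

pos, vel = 2.0, 0.0           # drop from 2 m, at rest
for _ in range(600):          # simulate 10 seconds
    vel += GRAVITY * DT       # update velocity first (semi-implicit)
    pos += vel * DT
    if pos < 0.0:             # ground contact: reflect and damp velocity
        pos, vel = 0.0, -vel * RESTITUTION

print(0.0 <= pos < 2.0)       # True: damping keeps the particle below its start
```

Real engines add collision geometry, constraints, and substepping, but the deterministic fixed-DT structure is the same, which is what makes the simulation reproducible and interactive at display rate.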

During the first year, the PhD student will review the state of the art in terms of techniques/approaches developed/proposed to deal with the issues mentioned above concerning XR-based simulation, 3D reconstruction, and human-machine interaction (HMI). The student will start investigating known problems in these domains and devising solutions that will be experimented with in specific application scenarios among those tackled by the GRAphics and Intelligent Systems (GRAINS) group and the VR@POLITO lab. A conference publication is expected to be produced based on the results of these activities. In the second year, the student will start designing and building a methodology capable of supporting more than one of the above issues in a holistic way. The student is expected to publish at least another conference paper on the outcomes of these activities. During the first two years, the student will complete his or her background in topics relevant for the proposal by attending suitable courses. During the third year, the student will devise protocols/develop metrics to test the efficacy of the solutions proposed to the identified problems and will run experiments with relevant stakeholders to prove the validity of the work done and advance the state of the art in the field. Results obtained will be reported in other conference works plus, at least, a high-quality journal publication.

The target publications will cover the fields of XR and HMI. International journals could include, e.g.:

IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Human-Machine Systems
IEEE Computer Graphics and Applications
ACM Transactions on Computer-Human Interaction

Relevant international conferences could include ACM CHI, IEEE VR, IEEE ISMAR, ACM SIGGRAPH, Eurographics, etc.

Topics addressed in the proposal are strongly related to those tackled by the “Centro Nazionale Mobilità Sostenibile” (“National Center for Sustainable Mobility”) funded by the National Recovery and Resilience Plan (PNRR) and, in particular, by the “Air mobility” project, whose goal is to develop virtual simulations and HMI solutions for advanced air mobility. Moreover, it is linked to the activities being developed in collaboration with LINKS Foundation, with the Italian Air Force – 3rd Wing Villafranca di Verona, with Stellantis-CRF, and with the Civil Protection of Regione Piemonte regarding the simulation of critical or hazardous situations related, e.g., to CBRN risks, forest fires, industrial manufacturing, etc.
Required skills: Knowledge, skills and competences related to computer graphics, simulation and, specifically, XR technologies.

Group website: eda.polito.it
Summary of the proposal: The thesis will explore modern design automation techniques aimed at improving productivity in the design of mixed-signal system-on-chip platforms, such as those used in smart sensors for the Artificial Intelligence of Things (AIoT). ST Microelectronics, a world leader in this field, will indicate the most relevant research problems, giving the candidate the unique opportunity to integrate their work with industrial design flows and products.
Topics: Electronic Design Automation; Machine Learning; Hardware Design;
Research objectives and methods: The thesis, co-funded by ST Microelectronics S.r.l., will explore modern design automation techniques for mixed-signal system-on-chip platforms, such as those used in smart sensor devices for the Artificial Intelligence of Things (AIoT). These systems are increasingly complex and include many heterogeneous digital and analog components. Therefore, enhancing productivity in their design process requires automation throughout the software and hardware stack, from the application-level down to the device-level.

On the hardware design side, the candidate will explore the use of both rule-based and data-driven graph algorithms for analyzing mixed-signal circuits, e.g., quantifying the amount of novelty in each design iteration, or finding graph topologies that require specific constraints during place and route.
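
As a minimal example of a rule-based graph metric in this spirit, the sketch below models two design iterations as netlist edge sets and scores the "novelty" of the second. The circuit names are invented, and the metric is a deliberately crude stand-in for the graph algorithms and learned models the thesis would develop.

```python
# Two design iterations as netlist edge sets (component, net); the names
# are hypothetical, purely to illustrate a rule-based graph comparison.
rev1 = {("opamp1", "net_in"), ("opamp1", "net_fb"), ("r1", "net_fb"),
        ("r1", "net_out"), ("c1", "net_out")}
# Second iteration: add a decoupling cap, drop one connection.
rev2 = (rev1 | {("c2", "net_in"), ("c2", "net_gnd")}) - {("c1", "net_out")}

def novelty(old, new):
    """Share of the new netlist's connectivity absent from the old one:
    a crude proxy for how much of the iteration must be re-verified."""
    return len(new - old) / len(new)

print(round(novelty(rev1, rev2), 2))  # → 0.33: two of six edges are new
```

Treating the netlist as a graph in this way is the entry point for richer analyses (subgraph matching for constraint-sensitive topologies, or Graph Neural Networks for learned predictions).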

For what concerns software, advanced compilation and mapping algorithms will be explored to optimize the execution of relevant applications, such as neural network inference, on the general purpose and domain-specific digital hardware blocks present on the system.

The work plan of the project will be structured as follows:

Months 1-9: Conduct an extensive study and literature review on the state of the art in i) mixed-signal design, ii) Electronic Design Automation (EDA) techniques, and iii) advanced low-level compilation and mapping for heterogeneous systems, to become familiar with the context of the thesis. The candidate will have the chance to learn the differences, in terms of algorithms and level of maturity, between digital and analog EDA, and the unique issues associated with deploying complex software on these highly constrained platforms.

Months 9-24: Implement and test new state-of-the-art EDA techniques (HW level) and software-optimization techniques (SW level) for mixed-signal systems, focusing on increasingly complex benchmark circuits and applications, respectively. Specifically, the candidate will apply both classic graph-based algorithms and machine learning techniques (e.g., Graph Neural Networks) to the analysis and optimization of mixed-signal circuits. In parallel, advanced software compilation and optimization tools will be developed to map complex applications (e.g., neural network inference) on the digital compute blocks of the device.

Months 24-36: Deploy and evaluate the newly developed techniques on highly complex systems, comparable to those put on the market by ST Microelectronics. This phase will constitute the final benchmark for the scalability and effectiveness of the developed methods.

The candidate will also have the unique opportunity to test the developed techniques on custom ASIC testchips and/or FPGA-based prototyping platforms.

Possible publication venues for this thesis include:
- IEEE Transactions on CAD
- IEEE Transactions on Circuits and Systems (I and II)
- IEEE Transactions on Computers
- IEEE Transactions on Emerging Topics in Computing
- ACM Transactions on Embedded Computing Systems
- ACM Transactions on Design Automation of Electronic Systems
- Design Automation and Test in Europe (DATE) and Design Automation Conference (DAC)
- etc.

On the topics of this thesis, the EDA Group at Politecnico di Torino coordinates a Marie Curie Research and Innovation Staff Exchange (RISE) project called AMBEATion, in collaboration with two world-leading companies in microelectronic design (ST Micro) and EDA (Synopsys) and two other universities (University of Prague and University of Catania). Furthermore, the group has several other industrial collaborations and funded projects on these topics, including:
- TRISTAN (ECSEL-JU 2023)
- ISOLDE (ECSEL-JU 2023)
- StorAIge (ECSEL-JU 2021)
- Etc.
Required skills: 1. Good programming experience in languages such as Python and C is needed (must have).
2. At least a basic knowledge of digital and analog circuits is preferable.
3. A basic understanding of (digital) EDA is also a plus. Otherwise, this understanding will have to be acquired during the initial study and literature review phase.
4. A basic understanding of embedded systems and computer architectures is necessary for the software optimization part of the work.

Group website: eda.polito.it
Summary of the proposal: The goal of this thesis, co-funded by ST Microelectronics, is to develop innovative solutions for the integration of complex software algorithms on silicon. The work will focus in particular on deploying Machine Learning algorithms on industrial and medical smart sensors, whose digital components are ultra-tightly constrained in terms of area occupation (hence memory space) and power consumption, leveraging the flexible and extensible RISC-V Instruction Set Architecture.
Topics: RISC-V; Machine Learning; Compilers;
Research objectives and methods: The main objective of this thesis is to investigate and develop innovative solutions for the integration of algorithms on silicon through interpreters, using hardware-software co-design techniques. The candidate will focus on developing solutions based on the RISC-V Instruction Set Architecture (ISA), enhanced with custom extensions. This will require a comprehensive understanding of the RISC-V ISA, including the design and implementation of custom instructions that can be integrated into an existing processor pipeline.

The ultimate goal of this research is to reduce the area of specific smart sensor applications in the industrial and medical domains, currently on the market, such as step-counters, heart rate monitors, infrared vision sensors, etc.

The main direction that will be pursued to achieve this objective will be the hardware acceleration of critical portions of the involved computation, thereby trading off the addition of new logic with a reduction in the required program memory. In fact, by replacing entire sections of code with a single accelerator invocation, the total area of the system can be reduced, while also possibly reducing peak power consumption and improving energy efficiency.

The candidate will develop a full hardware-software co-design flow to identify the critical sections of an application, accelerate them, and then provide the required software infrastructure (ISA extension, compiler) to support the newly designed accelerators. Throughout this process, the candidate will work closely with ST Microelectronics, a world leader in the design of intelligent platforms. The company will direct the candidate towards the most relevant research problems, giving them a unique opportunity to integrate their work within state-of-the-art industrial design flows and possibly even contribute to the development of new products.

This collaboration will provide the candidate with access to cutting-edge hardware and software tools, allowing them to explore the latest developments in the field of smart sensor technology. In summary, this thesis will provide a unique opportunity for the candidate to develop innovative solutions for the integration of algorithms on silicon, while also gaining hands-on experience in hardware-software co-design and microprocessor design.

The work plan of the project will be structured as follows:

Months 1-9: Literature review and background study, focusing on i) the RISC-V ISA and its peculiarities; ii) Hardware-software co-design and RISC-V ISA extensions; iii) Compilation toolchains, and iv) Techniques for Machine Learning deployment on constrained systems (quantization, pruning, NAS, etc). Furthermore, the candidate will also become familiar with the smart sensor applications identified by ST Microelectronics as main benchmarks of interest for the rest of the work.

Months 9-24: Implementation of hardware extensions and software deployment toolchains to reduce area on the target smart sensor applications based on machine learning. In this phase, the candidate will intervene on the hardware and software sides of the system separately, first developing an accelerator/ISA extension (e.g., for low-precision or sparse computation) and then adding compiler support for it, to enable its usage starting from a high-level specification (e.g., in Python) of the target machine learning application, transparently to the developer.
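
As a concrete illustration of the low-precision direction mentioned above, the sketch below applies symmetric per-tensor int8 quantization, the kind of transform whose integer arithmetic a custom instruction or accelerator could then execute natively. Dimensions and data are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: weights shrink to a quarter
    of their float32 size, cutting constant storage on the sensor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(4, 8)).astype(np.float32)
q, scale = quantize_int8(w)

# Integer matmul with a single rescale at the end: the inner loop is pure
# int arithmetic, which is what a custom RISC-V extension would accelerate.
x = np.ones(8, dtype=np.float32)
y_fp = w @ x
y_q = (q.astype(np.int32) @ x.astype(np.int32)) * scale
err = np.max(np.abs(y_fp - y_q))
print(err <= 4 * scale + 1e-6)  # True: per-weight error is at most scale/2
```

The compiler side of the flow would then emit the extended instruction for this integer kernel automatically, starting from the high-level model, transparently to the developer.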

Months 24-36: Connection of the HW and SW sides of the work into a unique, automated, HW/SW co-design tool, in which the architecture and configuration of the developed extensions/accelerators are co-optimized with the characteristics of the deployed ML model “in the loop”.

Possible publication venues for this thesis include:
- IEEE Transactions on CAD
- IEEE Transactions on Computers
- IEEE Internet of Things Journal
- IEEE Transactions on Circuits and Systems (I and II)
- IEEE Transactions on Parallel and Distributed Systems
- ACM Transactions on Embedded Computing Systems
- ACM Transactions on Design Automation of Electronic Systems
- ACM Transactions on Architecture and Code Optimization
- Conferences such as DAC, DATE, MLSys, and others.

The EDA Group has many active industrial collaborations and funded projects on these topics, many of which involve ST Microelectronics, including:
- TRISTAN (ECSEL-JU 2023)
- ISOLDE (ECSEL-JU 2023)
- StorAIge (ECSEL-JU 2021)
- Etc.
Required skills: 1. Good programming skills in Python and C.
2. Familiarity with embedded systems and computer architectures.
3. Familiarity with compilers and compiler optimizations is a nice-to-have, but not a hard requirement, since these concepts will be studied during the first period of the thesis.
4. Basics of hardware design in Verilog/VHDL are nice-to-have since the candidate will develop accelerators and ISA extensions. However, the group internally has skills on this, and can support the candidate.

Group website: eda.polito.it
Summary of the proposal: Large Language Models are the pinnacle of innovation in Artificial Intelligence. The thesis, co-funded by the Huawei Zurich Research Center (ZRC), will focus on improving the efficiency of these models (mainly in terms of latency) when they are executed on the advanced accelerators designed by Huawei (Ascend series). This will be achieved through a combination of optimizations, ranging from kernel selection to sparsity exploitation.
Topics: Deep Learning; Optimization; Hardware Acceleration;
Research objectives and methods: Large Language Models (LLMs) like GPT-3 and, more recently, ChatGPT and GPT-4 are Artificial Intelligence models that have been all over the news. This has spurred another large wave of research into training such models and into building the infrastructure needed to train ever bigger ones.

This thesis will focus on what’s to come next: the efficient use (inference) of these models. We will explore design automation and optimization techniques for inference of DNN-based language models, particularly focusing on latency optimization. We will address it from multiple angles:

1) Currently, deployment relies on fixed compute kernels, each implementing mostly an individual compute step of these DNNs, some compute-bound and some memory-bound; we will explore automatic ways to fuse multiple compute steps for an optimal trade-off.

2) LLMs’ auto-regressive structure strongly limits parallelizability and thus creates high latency; we will explore speculative methods for the auto-regressive dependencies to overcome this limitation.

3) Many of the operations, particularly around the attention compute step, lead to many almost-zero values; we will explore how to benefit from this property while keeping the workload efficiently mappable to tensor cores.

Huawei, a leading infrastructure provider for deep learning with their Ascend product, will direct the work from their European Research Institute towards the key challenges to be tackled, giving the candidate the opportunity to access the company's large compute infrastructure, and possibly to see the output of the research applied in the field on real, large-scale problems.

The work plan of the project will be structured as follows:

Months 1-9: Literature review and background study, focusing on i) the Transformer deep learning model, which constitutes the base architecture of most LLMs; ii) Existing efficient transformer implementations for highly-parallel, accelerator-rich compute platforms; iii) Generic software optimization techniques for deep learning, including sparsity-based ones (pruning, sparse MoE, adaptive inference), precision-reduction ones (quantization), and low-level software ones (kernel selection, layer fusion, memory tiling, etc). Identification of a set of target models to use as benchmark for the following part of the work.

Months 9-18: Implementation of “exact” strategies based on low-level kernel selection, fusion, memory access optimizations, etc. These solutions will be implemented first because they will serve as building blocks for the following phases, and will allow the candidate to become familiar with the internals of LLMs’ Transformer architectures.
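The idea behind kernel fusion can be sketched in plain Python (an illustrative toy, far removed from real accelerator kernels): two memory-bound element-wise kernels are merged into a single pass, so the intermediate tensor is never written back to memory.

```python
def scale_kernel(x, a):           # kernel 1: element-wise scale (memory-bound)
    return [a * v for v in x]

def bias_relu_kernel(x, b):       # kernel 2: bias + ReLU (memory-bound)
    return [max(0.0, v + b) for v in x]

def fused_scale_bias_relu(x, a, b):
    # One pass over the data: the intermediate list of the two-kernel
    # version is never materialized, cutting memory traffic roughly in half.
    return [max(0.0, a * v + b) for v in x]

x = [-1.0, 0.5, 2.0]
unfused = bias_relu_kernel(scale_kernel(x, 2.0), -0.5)
fused = fused_scale_bias_relu(x, 2.0, -0.5)
assert fused == unfused
print(fused)
```

On real hardware the same principle is applied at the level of tiled tensor kernels, where the saved traffic is to off-chip memory rather than to a Python list.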

Months 18-27: Implementation of speculative optimizations aimed at improving the parallelization of autoregressive decoding in LLMs. This second set of optimizations will be built on top of the work done in the previous part, since the speculative LLMs will leverage the optimized kernels developed in Months 9-18.
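A minimal sketch of greedy speculative decoding follows, with toy `target_next`/`draft_next` functions standing in for real models (all names and the toy "models" are hypothetical): a cheap draft proposes k tokens, the target verifies them, and the verified prefix plus one guaranteed target token is kept.

```python
def speculative_decode(target_next, draft_next, prompt, k, max_new):
    """Greedy speculative decoding sketch. The output matches plain greedy
    decoding with target_next, but several tokens can be accepted per step."""
    seq = list(prompt)
    new = 0
    while new < max_new:
        proposal = []
        for _ in range(k):                      # cheap draft: k sequential steps
            proposal.append(draft_next(seq + proposal))
        accepted = 0
        for i in range(k):  # target checks each position (batched in a real system)
            if target_next(seq + proposal[:i]) == proposal[i]:
                accepted += 1
            else:
                break
        seq += proposal[:accepted]
        seq.append(target_next(seq))            # one guaranteed-correct token
        new += accepted + 1
    return seq

target_next = lambda s: (s[-1] + 1) % 10                       # toy "model"
draft_next = lambda s: 7 if s[-1] == 3 else (s[-1] + 1) % 10   # sometimes wrong
print(speculative_decode(target_next, draft_next, [0], k=3, max_new=8))
```

When the draft agrees with the target, up to k+1 tokens are produced per (parallelizable) target pass, which is the source of the latency reduction.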

Months 27-36: Implementation of optimizations aimed at exploiting the natural sparsity of LLMs to further reduce their inference latency. The candidate will also consider the possibility of forcibly increasing sparsity (e.g., by on-the-fly activation pruning) to increase the effectiveness of this strategy.
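The attention sparsity mentioned above can be illustrated with a toy single row of attention weights (illustrative Python, hypothetical numbers): near-zero weights are dropped and the survivors renormalized, so only the kept entries need a compute lane.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def sparse_attention_row(scores, values, threshold=1e-2):
    """Drop near-zero attention weights and renormalize the survivors,
    so only the kept (index, weight) pairs contribute to the weighted sum."""
    w = softmax(scores)
    kept = [(i, wi) for i, wi in enumerate(w) if wi >= threshold]
    z = sum(wi for _, wi in kept)
    out = sum(wi / z * values[i] for i, wi in kept)
    return out, len(kept)

scores = [8.0, 7.5, -4.0, -5.0, 0.1]   # one attention row: two dominant keys
values = [1.0, 2.0, 3.0, 4.0, 5.0]
dense = sum(wi * v for wi, v in zip(softmax(scores), values))
sparse, nnz = sparse_attention_row(scores, values)
print(nnz, "weights kept out of", len(scores),
      "| deviation from dense:", round(abs(dense - sparse), 4))
```

The research challenge is keeping such irregular sparsity efficiently mappable to tensor cores, which this scalar sketch deliberately ignores.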

Possible publication venues for this thesis include:
- IEEE Transactions on Computers
- IEEE Transactions on CAD
- IEEE Trans. on Pattern Analysis and Machine Intelligence
- ACM Transactions on Design Automation of Electronic Systems
- ACM Transactions on Computer Systems
- IEEE Journal on Emerging and Selected Topics in Circuits and Systems
- Journal of Machine Learning Research
- Elsevier AI
- Conferences such as NeurIPS, AAAI, ICML, ICLR, MLSys, ACL, DAC, EuroSys, ASPLOS, ISCA, DATE, HiPEAC.

The EDA Group has many active industrial collaborations and funded projects on these topics, including:
- TRISTAN (ECSEL-JU 2023)
- ISOLDE (ECSEL-JU 2023)
- StorAIge (ECSEL-JU 2021)
- etc.
Required skills: 1. Familiarity with programming languages: The project will involve implementing and testing various algorithms and techniques, so experience with programming languages such as Python and C is needed.
2. Familiarity with computer architectures (processing cores, memory hierarchies) and parallel programming models.
3. Understanding of Deep Learning and Artificial Intelligence models, although a specific experience on Transformers and LLMs is not required, as it will be acquired during the first phase of the thesis.

Group website: https://cad.polito.it
Summary of the proposal: This research focuses on the importance of reliable and fault-tolerant electronic devices in embedded and HPC systems used in various fields such as mobility, medical devices, and infrastructure. To improve the testing techniques for these circuits, the research proposes innovative approaches that cover not only the digital part but also the analog and mixed-signal interconnections. The study addresses the challenges posed by the delay and internal defects in logic gates, and the effects of aging and external factors on device performance. The research utilizes tools and technologies provided by STMicroelectronics to develop field testing algorithms and methodologies.
Topics: Computer-Aided Design, Reliability, Testing
Research objectives and methods: Research objectives
The massive use of electronic devices in embedded and HPC systems places great emphasis on the reliability and hardware fault tolerance that such devices offer. Defects affecting the hardware are increasingly hard to detect due to the complexity of electronic systems and emerging technologies, and this creates serious criticalities during the system’s operational life: since faults can manifest themselves as wrong data, they may severely affect the software making use of the faulty hardware. There is therefore a need to improve the effectiveness of state-of-the-art evaluation methods, as well as test strategies to identify and mitigate those faults.

The objectives of this research are summarized in the following.
- Develop a fault grading framework for defect-oriented fault models for digital circuits (such as path-delay or cell-aware faults), as dealing with those faults is only partially supported by commercial EDA tools. A fault grading process is of paramount importance for understanding the effectiveness of state-of-the-art test methods.
- Assess the effectiveness of state-of-the-art test methods in detecting defect-oriented faults.
- Analyze the impact of defect-oriented faults on the final system’s reliability, considering microprocessor-based systems as reference, and including the interconnection between analog and digital logic.
- Contribute to the development of new algorithms for defect-oriented fault testing of microprocessor-based systems, in the form of software-based self-test or by making use of special design-for-test features.
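The core fault-grading loop, i.e., inject each fault from a list, simulate the test patterns, and count the faults whose output diverges from the good machine, can be sketched on a toy netlist. The sketch below is purely illustrative Python, using classic single stuck-at faults on a two-gate circuit as a stand-in for the richer cell-aware and path-delay models targeted here:

```python
def simulate(a, b, d, fault=None):
    """Two-gate netlist: c = AND(a, b); y = OR(c, d). A fault is a
    (net, stuck_value) pair that overrides the value on that net."""
    sa = lambda net, v: fault[1] if fault and fault[0] == net else v
    a, b, d = sa("a", a), sa("b", b), sa("d", d)
    c = sa("c", a & b)
    return sa("y", c | d)

def fault_coverage(patterns):
    """Fraction of all single stuck-at faults detected by the pattern set."""
    faults = [(net, v) for net in "abcdy" for v in (0, 1)]
    detected = {f for f in faults for p in patterns
                if simulate(*p, fault=f) != simulate(*p)}
    return len(detected) / len(faults)

print("coverage, 2 patterns:", fault_coverage([(1, 1, 0), (0, 0, 1)]))
print("coverage, 4 patterns:",
      fault_coverage([(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]))
```

A real fault-grading framework performs the same comparison at the scale of full gate-level netlists and defect-oriented fault lists extracted from the technology library.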


Outline of possible research plan

First year: The candidate will begin by conducting a comprehensive literature review of defect-oriented test methods for digital circuits and developing a suitable fault grading framework. They will also review previous work by other PhD students in the research group on modelling cell-aware faults on open and proprietary technology libraries from STMicroelectronics. The candidate will then create a set of test cases to assess the effectiveness of existing state-of-the-art test methods for detecting defect-oriented faults. They will use the new framework to compare and evaluate these test methods.

Second year: In the second year, the candidate will focus on designing and implementing new test algorithms to improve the fault coverage of defect-oriented faults on microprocessor-based systems. They will build upon existing software-based self-test techniques developed for less complex fault models and investigate the propagation of errors through interconnections between analog and digital logic. The candidate will also develop proper mitigation methods to improve the system's reliability. They will validate the effectiveness of the new algorithms on a set of benchmarks and compare them with existing state-of-the-art test methods.

Third year: In the final year, the candidate will conduct simulations and experiments to analyze the impact of defect-oriented faults on the reliability of microprocessor-based systems. They will also investigate existing design-for-test features and design ad-hoc hardware to further improve the system's reliability. The candidate will conduct experiments on real-world systems provided by STMicroelectronics to validate the effectiveness of the proposed techniques.

List of possible venues for publications


The candidate will prepare and submit papers to top-tier conferences and journals in the field of electronic systems, embedded systems, and fault tolerance.
Possible venues for publications could include:
- IEEE Transactions on Computers
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- IEEE Transactions on Very Large Scale Integration (VLSI) Systems
- International Conference on Computer-Aided Design (ICCAD)
- International Test Conference (ITC)
- IEEE European Test Symposium (ETS)
- IEEE International Symposium on Circuits and Systems (ISCAS)
- Design, Automation and Test in Europe Conference (DATE)

Projects
The research is consistent with the themes of the National Centers on Sustainable Mobility and HPC, as well as with those of the Extended Partnership on Artificial Intelligence, in which members of the CAD group participate.
Required skills: - Solid background in digital circuits and microprocessor-based systems design and testing
- Experience with fault modeling and testing techniques for digital circuits, such as stuck-at faults, transition faults, and path-delay faults.
- Knowledge of EDA tools, particularly for fault simulation.

Group website: https://cad.polito.it
Summary of the proposal: The research addresses the pressing need for dependable electronic systems in safety-critical domains, with a specific focus on sustainable mobility. The objective is to develop innovative hardware and software methodologies to qualify electronic systems against stringent reliability and safety requirements. The work will involve the development of suitable hardening techniques on the hardware, software safety mechanisms, and a comprehensive assessment methodology supported by EDA partners.
Topics: Safety, Dependability, Sustainable mobility
Research objectives and methods: Research objectives
The novelty of this research lies in its focus on sustainable mobility, which is an emerging area of research with great potential for real-world impact. The work is expected to significantly improve the reliability and safety of electronic systems, thereby enhancing the performance of safety-critical applications. The research team's expertise in electronic design automation (EDA) will be leveraged to develop robust methodologies that are both practical and effective. Furthermore, this research is aligned with the goals of the National Centers on Sustainable Mobility and HPC, as well as the Extended Partnership on Artificial Intelligence, which further emphasizes its significance in advancing the state-of-the-art in this field.

The objectives of this research are summarized as follows:
- Identify a suitable hardware platform for sustainable mobility applications with special emphasis given to RISC-V based systems.
- Identify suitable software for mobility applications to be used as a representative benchmark for the qualification activities.
- Assess dependability figures on the identified hardware/software infrastructure to identify critical parts of the design that require hardening.
- Develop innovative hardening solutions to improve the reliability of critical areas in the design.
- Focus on sustainable mobility as an emerging area of research with great potential for real-world impact.
- Establish a comprehensive assessment methodology in collaboration with EDA partners.

Outline of possible research plan

First year:
The candidate will start by conducting a thorough literature review on dependable electronic systems and sustainable mobility, aiming to identify the most recent and relevant research works. They will then select a suitable hardware platform for sustainable mobility applications, taking into account a variety of RISC-V based systems publicly available, and using IP cores from industrial partners (e.g., Synopsys). Furthermore, they will identify suitable software for mobility applications, including AI applications, and leverage publicly available benchmarks developed for other domains such as automotive and space. The candidate will develop a preliminary assessment methodology for the identified hardware and software infrastructure, which will be refined and improved in the following years.

Second year:
The candidate will focus on identifying the critical parts of the design that require hardening and implementing initial solutions to enhance the overall system reliability. They will perform dependability analysis on the identified hardware/software infrastructure, aiming to improve the quality of the developed assessment framework. The candidate will explore various hardening techniques, including redundancy, error-correcting codes, and fault-tolerant architectures, and select the most suitable ones to enhance the system reliability and safety.
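One of the hardening techniques listed above, redundancy in the form of triple modular redundancy (TMR), can be illustrated with a minimal sketch (toy Python, hypothetical values):

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote over the outputs of three redundant modules."""
    return (a & b) | (a & c) | (b & c)

def bit_flip(value, bit):
    return value ^ (1 << bit)   # model a single-bit fault in one replica

golden = 0b1011
# A fault confined to one replica is masked by the voter...
assert tmr_vote(golden, golden, bit_flip(golden, 2)) == golden
# ...while the same fault in two replicas defeats simple TMR.
assert tmr_vote(golden, bit_flip(golden, 2), bit_flip(golden, 2)) != golden
print("single-replica faults are masked by the TMR voter")
```

The dependability analysis foreseen in this phase quantifies exactly this kind of trade-off: which faults a given mechanism masks, and at what area and power cost.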

Third year:
The candidate will develop innovative hardening solutions to improve the reliability of critical areas in the design, while ensuring the availability of safety mechanisms in the event of a fault. They will develop a comprehensive assessment methodology in collaboration with EDA partners, leveraging their expertise in electronic design automation to refine and optimize the assessment process. The proposed methodologies will be extensively evaluated through simulations and testing, and the candidate will collaborate with industry partners to validate their effectiveness on real-world applications.

List of possible venues for publications
The candidate will prepare and submit papers to top-tier conferences and journals in the field of electronic systems, embedded systems, and fault tolerance.

Possible venues for publications could include:
- IEEE Transactions on Computers
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- IEEE Transactions on Very Large Scale Integration (VLSI) Systems
- International Conference on Computer-Aided Design (ICCAD)
- International Test Conference (ITC)
- IEEE European Test Symposium (ETS)
- Design, Automation and Test in Europe Conference (DATE)
- RISC-V Summit


Projects
The research is consistent with the themes of the National Centers on Sustainable Mobility and HPC, as well as with those of the Extended Partnership on Artificial Intelligence, in which members of the CAD group participate.
The research will be supported by industrial partners involved in active collaborations. Synopsys is involved in research activities on functional safety and reliability, and provides licensed tools, IP cores, and support. Infineon is also involved in the frame of research contracts on electronic system dependability.
Required skills: - Background in digital design and verification.
- Solid foundations on microelectronic system and embedded system programming
- Experience with fault modeling and testing techniques for digital circuits, such as stuck-at faults, transition faults, and path-delay faults.
- Knowledge of EDA tools, particularly for fault simulation.

Group website: https://www.dauin.polito.it/it/la_ricerca/gruppi_di_ricerca/sic_s...
Summary of the proposal: The aim of the proposed research activity is to develop a novel multi-objective approach to jointly optimize the following tasks (commonly addressed as two completely separated problems):

- Trip optimization: minimization of the trip duration based on the selected destination and the online information provided by the navigation system
- Optimal energy and thermal management of the electric propulsion system: maximization of the vehicle range and speed of recharge on the basis of the online information provided by Vehicle-to-everything (V2X) communications.
Topics: Battery electric vehicles, Optimal predictive control strategy, Optimal energy management
Research objectives and methods: Research objectives

The objectives of the PhD project are both methodological and applications-oriented. We summarize them as follows.

1) Methodological objectives:
the considered multi-objective optimization problem will be formulated in terms of optimal predictive control. In particular, a multi-loop model predictive control (MPC) scheme will be developed: the inner loop will optimize the behavior of the low-level devices for thermal and energy management (battery, powertrain, cooling systems, etc.) over a relatively short prediction horizon (5-30 min); the outer loop will optimally plan the vehicle trip (in terms of time duration, energy consumption and driver comfort) over the whole trip. The low-level inner loop will be driven by the optimal long-term reference system behavior computed as the solution of the high-level long-term optimization. Machine learning (ML) algorithms will be exploited in the long-term prediction in order to suitably account for all the information provided by the V2X communication network.
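As a highly simplified illustration of the receding-horizon idea behind the inner loop, the toy sketch below tracks a hand-made reference that stands in for the outer-loop trip plan (all names, dynamics, and numbers are hypothetical; a real MPC would use a proper vehicle model and solver):

```python
from itertools import product

def inner_mpc(x, ref, step, horizon, actions):
    """Inner-loop receding-horizon controller (toy): enumerate short action
    sequences for a first-order model x[t+1] = x[t] + u[t] and apply only
    the first move of the best sequence."""
    best_u, best_cost = actions[0], float("inf")
    for seq in product(actions, repeat=horizon):
        xx, cost = x, 0.0
        for i, u in enumerate(seq):
            xx += u
            target = ref[min(step + 1 + i, len(ref) - 1)]
            cost += (xx - target) ** 2 + 0.01 * u * u   # tracking + control effort
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

# Outer loop stands in for the long-term planner: a state-of-charge ramp
# over the whole (toy) trip, which the inner loop must track.
ref = [1.0 - 0.1 * t for t in range(11)]
x, traj = 1.0, [1.0]
for t in range(10):
    u = inner_mpc(x, ref, step=t, horizon=3, actions=(-0.2, -0.1, 0.0, 0.1))
    x += u
    traj.append(x)
print([round(v, 2) for v in traj])
```

The multi-loop structure of the proposal follows the same pattern, with the outer loop re-planning the reference online from V2X information instead of fixing it in advance.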

2) Applications-oriented objectives: the problem of simplifying the obtained optimal algorithms/control structures will be studied in order to make them actually implementable on a real production vehicle. Particular attention will be devoted to the trade-off between computational complexity reduction and performance degradation. The obtained algorithms will be implemented and tested on an accurate model of the system and/or on a real prototype vehicle (should one be made available by Stellantis/CRF before the end of the project).

Outline of the research work plan

M1-M6:
- Study of the literature on trip optimization and optimal energy and thermal management for battery electric vehicles.
- Development of an accurate mathematical model of the whole system to be controlled/optimized

M7-M18: General mathematical formulation of the problem. Design of the low-level optimal feedback control structure.
Results obtained in this stage of the project are expected to be the core of both a conference contribution and a paper to be submitted to an international journal.

M19-M31: Mathematical formulation of the long-term optimal prediction/planning problem. Design of the high-level control loop.
Results obtained in this stage of the project are expected to be the core of both a conference contribution and a paper to be submitted to an international journal.

M32-M36: Simulated/experimental tests and final performance assessment

List of possible venues for publications
Journals:
IEEE Transactions on Control System Technology
IEEE Transactions on Intelligent Transportation Systems


Conferences:
American Control Conference
SAE World Congress
IEEE Conference on Control Technology and Application (CCTA)

Industrial Partner
The research activity will be conducted in close collaboration with STELLANTIS / CRF, which is co-funding the Ph.D. scholarship.
Required skills: Strong background in:

- Fundamental results on System theory and Automatic Control
- System identification
- Model predictive control

At least basic notions of machine learning algorithms / artificial neural network structures and training

Group website: https://www.ieiit.cnr.it/research/laboratories/communication-net...
Summary of the proposal: Next-generation communication networks, including the wireless ones (Wi-Fi 7+, WSN, 5G/6G), define specific mechanisms for satisfying the demanding requirements of industrial applications, in terms of reliability, latency, and power consumption. The research activities foreseen by this proposal aim to study how to improve the behavior and performance of these networks, by primarily acting on protocol enhancements, network optimization, and, possibly, artificial intelligence algorithms.
Topics: Industrial Networks, Wireless networks, Machine learning
Research objectives and methods: Wireless technologies are gradually replacing wired ones in many scenarios, including those characterized by specific and typically demanding performance requirements, like industrial environments. Over limited areas, new-generation IEEE 802.11 (Wi-Fi 6/7) and IEEE 802.15.4 wireless sensor (and actuator) networks (WSN/WSAN based on 6TiSCH) are often valid alternatives to 5G/6G. The main goals of Wi-Fi are short latency and high throughput, while WSNs are primarily aimed at extremely low power consumption. Clearly, the ongoing research on wired technologies, and in particular the advent of, e.g., time-sensitive networking (TSN) and single-pair Ethernet (SPE), cannot be neglected, since modern communication infrastructures include both wireless and wired segments.

A promising research direction related to the above networks aims at lowering latency and/or power consumption while improving, at the same time, reliability and determinism, to meet the tight constraints of industrial systems. The goal of this research proposal is to analyze the most recent trends in industrial networks and to propose protocol enhancements and advanced techniques for improving one or more of their features in the considered scenarios. Although the research activity of the PhD student is expected to mainly focus on wireless communication technologies, it could embrace wired technologies whenever this is deemed appropriate. In fact, many concepts and mechanisms about reliability and determinism can be applied, with relatively small changes, to both kinds of transmission media, potentially leading to valuable cross-fertilization opportunities.

A relevant option in the context of this research proposal is to exploit artificial intelligence and/or machine learning algorithms to improve communication quality. For example, the ability to accurately predict the quality of transmission channels makes it possible to (re)configure communication parameters automatically and proactively, so as to further enhance behavior and performance.
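A deliberately simple sketch of such a predictor follows: an exponentially weighted moving average of frame delivery outcomes stands in for the ML models envisaged in the proposal, and all channel names and loss parameters are hypothetical.

```python
import random

def ewma_predictor(alpha=0.3):
    """Per-channel EWMA of delivery outcomes (1 = delivered, 0 = lost)."""
    est = {}
    def update(ch, delivered):
        est[ch] = alpha * delivered + (1 - alpha) * est.get(ch, 0.5)
    def best():
        return max(est, key=est.get)
    return update, best

random.seed(1)
update, best = ewma_predictor()
# Channel "A" degrades over time, "B" stays good: the predictor should
# proactively steer traffic towards "B" before "A" fails completely.
for t in range(200):
    p_a = max(0.0, 0.95 - 0.01 * t)   # hypothetical decaying link quality
    update("A", 1 if random.random() < p_a else 0)
    update("B", 1 if random.random() < 0.9 else 0)
print("preferred channel after degradation:", best())
```

A learned predictor would replace the EWMA with a model of channel quality over time, but the surrounding reconfiguration loop stays the same.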

A possible research plan is as follows:
1) Identification of one or more relevant wireless (or wired) technologies, on which attention will be focused. Very likely, IEEE 802.11ax/be, also known as Wi-Fi 6/7, will be initially included, as well as TSCH-based IPv6 WSNs.
2) Identification of possible improvements, with a potential beneficial impact on performance. For example, multi-link operation (MLO) and TSN frame replication and elimination (FRER) can be adopted to enable seamless redundancy in hybrid wired-wireless networks, by sending every frame for which a reliable transmission service is required on two separate paths. Doing so lowers latency and improves determinism, but also impacts on power consumption. Finding a suitable trade-off is an interesting aspect that deserves to be analyzed.
3) Demonstration of the feasibility of the proposed approaches and assessment of their effectiveness, through proofs-of-concept, simulation, or mathematical proofs. For example, discrete event simulators like NS3 could be employed to assess parts of the protocols.
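The latency benefit of the seamless redundancy mentioned in point 2 can be sketched with a small Monte Carlo experiment (an illustrative link model with made-up loss and timing parameters, not a faithful MLO/FRER simulation):

```python
import random

random.seed(42)

def link_latency(loss, base, jitter):
    """Hypothetical link: lost frames are retransmitted once after a timeout."""
    lat = base + random.uniform(0, jitter)
    if random.random() < loss:
        lat += 5.0 + base + random.uniform(0, jitter)   # retry after timeout
    return lat

single, dual = [], []
for _ in range(10_000):
    l1 = link_latency(loss=0.1, base=1.0, jitter=1.0)
    l2 = link_latency(loss=0.1, base=1.0, jitter=1.0)
    single.append(l1)
    dual.append(min(l1, l2))   # duplicate elimination keeps the first copy

p99 = lambda xs: sorted(xs)[int(0.99 * len(xs))]
print(f"p99 latency - single link: {p99(single):.2f}, dual link: {p99(dual):.2f}")
```

Sending every frame twice roughly squares the residual loss probability, which is why the tail latency improves; the price is the doubled airtime and power consumption discussed above.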


For the whole duration of the PhD course, points 1), 2) and 3) will be periodically revised based on both the output of the previous research activities and the innovations about the involved technologies coming from the relevant scientific communities and standardization bodies. For instance, to improve performance further, the option to apply reactive duplication avoidance to MLO, by aborting frame transmission on all links as soon as an acknowledgement arrives on one of them, could be analyzed at a later stage. Additionally, machine learning could be envisaged to decide on which channel the first copy of any frame should be sent, when deferral techniques are exploited for saving bandwidth and lowering power consumption.


The above research plan foresees that achievements are tracked, and that the original sequence of activities can be iterated, with changes, one or more times during the PhD course.
The current version of the plan is detailed enough and represents a concrete proposal to drive the research activities on next-generation (wireless) industrial networks from the very beginning.
However, due to the huge amount of research effort devoted worldwide to the subject (for example, the task group related to Wi-Fi 8 has recently been formed), this plan should be considered only a possible, though promising, starting point. What must be avoided is limiting a priori the innovation outcomes that the candidate can potentially achieve because of an overly rigid workplan.

Possible venues for publications are all the Conferences and Journals of the Industrial Electronic Society of the IEEE (IEEE Transactions on Industrial Informatics, IEEE Open Journal of the Industrial Electronics Society, IEEE Transactions on Industrial Cyber-Physical Systems, etc.), other IEEE journals (IEEE Transactions on Wireless Communications, IEEE Internet of Things Journal, etc.), broader-spectrum Journals like IEEE Access, plus Journals from other Publishers like Ad Hoc Networks and Computers in Industry (Elsevier).

The PhD student will be involved in the activities of the RESTART PNRR Project carried out by partner CNR-IEIIT, for which a Research Scholarship is going to be granted. The related activities are completely aligned with those described in the above workplan.
Required skills: Detailed knowledge of the Python language and the TensorFlow machine learning library is required, as well as skills in machine learning techniques based on neural networks. Basic knowledge of wireless communication networks is also required. Expertise in managing big data and datasets, in software design and code optimization, and in the implementation of language translators and compilers is deemed valuable.

Group website: https://www.ieiit.cnr.it/research/laboratories/communication-netw...
Summary of the proposal: Industry 4.0/5.0 and the Industrial Internet of Things are the main drivers that will shape the evolution of communication for automation. Similar requirements are found in other application fields, like multimedia streaming, online user interaction, and environmental sensing. Broad-scope research activities, not rigidly tied to any specific communication technologies, are the key for devising holistic solutions able to meet application constraints in modern heterogeneous scenarios, and are at the core of this proposal.
Topics: Wireless networks, Industry 5.0, Machine learning
Research objectives and methods: Unlike the past decades, when the primary goal of transmission technologies was typically raw speed, next-generation industrial, office, and home networks will likely consist of a potentially large number of connected nodes (up to a few hundred in some cases), whose interactions must satisfy a variety of requirements that can be expressed by suitable key performance indicators (KPIs), e.g., latency, jitter, reliability, availability, and power consumption. Properly supporting communication among entities in distributed applications, in such a way that their constraints, specified through said KPIs, are always met (or, at least, in the vast majority of cases) can be a real challenge, especially when wireless transmissions are involved.

As a matter of fact, any specific wireless technology is likely unable to satisfy all requirements alone, and this is particularly true for those solutions that operate on a single communication channel. Consequently, future communication systems will include a plurality of coexisting wireless networks characterized by different capabilities, features, and cost. For example, IEEE 802.11 (Wi-Fi 6/7/8) is targeted at high throughput and low latency over local areas, IEEE 802.15.4 can be exploited by multi-hop ultra-low-power wireless sensor (and actuator) networks (WSN/WSAN), 5G/6G enable high throughput and low latency over wide (geographic) areas, LoRa/LoRaWAN support long-range low-power data transmission, Bluetooth Low Energy (BLE) suits short-range ultra-low-power connections, and so on. In heterogeneous contexts, issues like, e.g., mutual interference, either within the same network or between different technologies, and the mobility of nodes pose significant challenges, which make it difficult to comply with the required KPIs.

A point to be carefully considered is how the different networks should be interconnected so that KPIs can be ensured for end-to-end communication. This aspect is particularly relevant when communication paths include both wired and wireless segments, as happens in the real world. For the wired ones, Time-Sensitive Networking (TSN) is expected to become the standard solution for satisfying the needs of time-aware applications. However, when mobility is involved, paths are partially wireless, and constraints (about, e.g., latency and reliability) have to be met on air as well. Defining harmonized strategies to achieve these goals is one of the main challenges of communication for automation in the near future. In particular, this research proposal seeks the definition of intelligent networks, which exploit advanced access mechanisms, scheduling/reservation techniques, and possibly algorithms based on artificial intelligence and machine learning to react proactively to any changes in the surrounding environment, either to improve one or more KPIs or, at least, to preserve a certain quality of service in spite of adverse conditions.

As a possible research plan, after the selection of one or more relevant wireless communication technologies among those listed above, the Ph.D. candidate must investigate (and possibly propose) some improvements, assessing at the same time how KPIs are affected. As said before, the research activities will be possibly extended to include selected wired technologies (e.g., TSN), which share many aspects with their wireless counterparts. For example, the use of cross-technology redundancy techniques may be envisaged where the MAC layer of the wireless segments of the network is enhanced in such a way to foresee the automatic selection of the best transmission channel, transmission rate, and even transmission technology. To this purpose, machine learning (ML) and/or reinforcement learning (RL) could be exploited. All the proposed techniques will be validated through simulation, mathematical models, and/or by implementing demonstrators.
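The "automatic selection of the best transmission channel" mentioned above can be sketched as a toy epsilon-greedy bandit, a minimal stand-in for the ML/RL techniques envisaged here (all delivery probabilities and channel names are hypothetical):

```python
import random

def epsilon_greedy_channel(env, channels, rounds, eps=0.1):
    """Toy RL-style channel selector: with probability eps explore a random
    channel, otherwise exploit the one with the best observed delivery rate."""
    wins = {c: 0 for c in channels}
    tries = {c: 0 for c in channels}
    delivered = 0
    for _ in range(rounds):
        if random.random() < eps or not any(tries.values()):
            ch = random.choice(channels)
        else:
            ch = max(channels, key=lambda c: wins[c] / tries[c] if tries[c] else 0.0)
        ok = env(ch)                 # 1 if the frame was delivered, else 0
        tries[ch] += 1
        wins[ch] += ok
        delivered += ok
    return delivered / rounds

random.seed(7)
quality = {"ch1": 0.60, "ch2": 0.95, "ch3": 0.75}   # hypothetical link qualities
rate = epsilon_greedy_channel(lambda c: 1 if random.random() < quality[c] else 0,
                              list(quality), rounds=5000)
print(f"overall delivery rate: {rate:.2f}")
```

A real protocol would additionally track non-stationary channel conditions and account for the cost of switching, but the explore/exploit trade-off is the same.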

To ensure flexibility, no specific constraints are given at this stage for the workplan. Instead, three basic aspects are defined for the research activities to be performed:
1) Application scenario: distributed automation systems are chosen because they are quite demanding and pose significant challenges, characterized by a well-defined and agreed-upon set of KPIs. Extensions to similar contexts, like multimedia streaming and online interaction, are possible if deemed relevant.
2) Communication infrastructure: heterogeneous networks are considered that potentially include both wireless and wired parts. For the former, several different technologies that do not require any subscription will be analyzed. Extensions to 5G/6G are possible if deemed relevant.
3) As far as possible, both single technologies and holistic solutions will be studied (the latter are often left out in the specialized literature). Improvements in KPIs will also be assessed for end-to-end communication.


Possible venues for publications are:
- all the Conferences and Journals of the Industrial Electronic Society of the IEEE, e.g., IEEE Transactions on Industrial Informatics, IEEE Open Journal of the Industrial Electronics Society, IEEE Transactions on Industrial Cyber-Physical Systems, etc.
- other IEEE journals, e.g., IEEE Transactions on Wireless Communications, IEEE Internet of Things Journal, etc.
- broader-spectrum Journals like IEEE Access
- Journals from other Publishers like Ad Hoc Networks and Computers in Industry (Elsevier)
- more generally, all the Journals or Conferences about communication for automation.


The Ph.D. student will be involved in the activities of the RESTART PNRR Project carried out by partner CNR-IEIIT, for which a Research Scholarship is going to be granted. The related activities are completely aligned with those described in the above workplan.
Required skills: The Ph.D. candidate must have a high profile, preferably with an outstanding academic record (MS degree and exam marks), proven autonomy, and the ability to think out of the box. A detailed knowledge of the Python language is required, as well as some skills in formal languages and translators. Prior experience with industrial applications and their implementation is appreciated. Basic knowledge of wireless and wired communication systems is also required.

Group website: https://softeng.polito.it
Summary of the proposal: Software Engineering (SE) is undergoing a significant transformation with the integration of Artificial Intelligence (AI) techniques into the software development lifecycle. Conversely, SE practices hold potential benefits for the AI lifecycle. This PhD proposal aims at advancing AI techniques within software engineering practices and improving the quality and reliability of AI pipelines with the adoption of customized SE techniques, e.g. MLOps.
Topics: Artificial Intelligence for Software Engineering, Software Engineering for Artificial Intelligence
Research objectives and methods: RESEARCH OBJECTIVES


1. Build a comprehensive framework for the existing and potential synergies between SE and AI. The objective is to explore, document and organize in a conceptual framework the complementary aspects of SE and AI, deeply analyzing and identifying the areas within SE that can benefit from the effective application of AI techniques and vice versa.

2. Develop Customized SE Techniques for AI Lifecycle.
This objective focuses on developing tailored SE techniques to improve the AI lifecycle along a set of selected dimensions, such as reliability, robustness, and interpretability of AI systems, and according to specific stages (e.g., dataset collection and management, feature engineering, model training, model deployment). The high-level goal is to ensure AI systems align with established SE principles and standards.

3. AI-Driven Approaches for Software Quality Assurance
This objective focuses on the creation of novel AI techniques specifically tailored for software quality assurance of SE processes (e.g., requirements elicitation and verification, code review, software testing). The general goal is to fortify the reliability and robustness of software systems through AI-powered methodologies.

OUTLINE OF THE WORKPLAN

1) Literature review and framework development. Conduct an extensive review of the existing literature on Artificial Intelligence for Software Engineering (AI4SE) and Software Engineering for Artificial Intelligence (SE4AI). Identify current proposals and implementations, research gaps, and potential areas for further investigation. The findings will be organized within a comprehensive conceptual framework that considers both the specific phases of software development and the stages of a typical AI pipeline. This will take place mainly during the 1st year.

2) Development/customization and evaluation of SE Techniques for AI improvement. Building on top of the previous activity’s findings, the candidate shall select a set of SE techniques to be specifically tailored for the AI lifecycle. Examples are design methodologies for requirements elicitation, testing strategies, maintenance techniques and tools. Experimental evaluation should validate the improvements in terms of more robust and reliable AI systems, comparing the outcomes achieved with and without SE techniques. Comparison with existing benchmarks is also possible. This will take place mainly during the 2nd and 3rd years.

3) Development/customization and evaluation of AI-driven approaches for software quality assurance. Based on the findings of the first activity, the candidate will develop or customize AI techniques to address the identified SE quality assurance challenges, either in terms of process or artifact quality. This activity may involve using one or more of the following techniques: ML algorithms, neural networks, natural language processing, image recognition/classification techniques. Experimental evaluation should validate the improvements in the involved aspects of the software development lifecycle, comparing the performance and effectiveness of the novel AI-driven approach with existing SE methods. This should make it possible to highlight the advantages, disadvantages, and potential improvements offered by the AI-driven approaches. This will take place mainly during the 2nd and 3rd years.

LIST OF POSSIBLE VENUES FOR PUBLICATION
Journals:
- IEEE Transactions on Software Engineering
- Empirical Software Engineering
- ACM Transactions on Information Systems
- European Journal of Information Systems
- Journal of Systems and Software


Conferences:
- ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
- International Conference on Software Engineering
- International Conference on Product-Focused Software Process Improvement
- International Conference on Evaluation and Assessment in Software Engineering (EASE)
Required skills: The candidate should exhibit a good mix of:
- Solid foundation in SE principles, methodologies, and practices. He/she should be familiar with the software development lifecycle and software quality assurance techniques.
- Good understanding of AI techniques. He/she should be proficient in the use of ML and AI techniques, with evidence of past project experiences. He/she should also be able to communicate the research in an effective way.

Group website: https://www.dauin.polito.it/it/la_ricerca/gruppi_di_ricerca/grain...
Summary of the proposal: Recent automobile developments, including autonomous driving systems, are based on several sensors such as cameras, radars, and others. These sensors are used not only for managing driving, but also for improving road illumination, both for human and autonomous drivers. This proposal addresses more sophisticated types of automotive lighting, in collaboration with Italdesign S.p.A. In fact, automotive lighting is getting more and more complex, providing several functions besides low beam and high beam. The purpose of the work will be to design new automatic systems for managing illumination; computer vision algorithms and image processing methods will be used together with optical design, in co-working with Italdesign.
Topics: Computer Vision, Lighting, Image processing
Research objectives and methods: Automobile evolution requires increasingly automatic systems for driving and traffic detection, for example of other cars, bicycles, or other vehicles. Therefore, lighting systems are also evolving fast. In particular, future vehicles’ headlamps will move towards several independently driven light sources, up to several thousand different sources, each of them driven by means of a technology similar to the one used in digital micromirror projectors. The final purpose is to develop a headlamp able to automatically move the light onto obstacles like pedestrians or bicycles that suddenly appear on the road. In order to find where to move the light, all the sensors available in the car can be used, mainly cameras and radars. The aim of this PhD activity is the development of the already existing system up to a higher complexity level that allows measurements of matrix high-beam functionalities as a function of different simulated road and car configurations. The proposal puts together the competences of Dipartimento di Automatica e Informatica and the strong industrial knowledge of Italdesign S.p.A. Therefore, experimental activities will be performed, also in the foreign sites of the company, mainly in Germany. This work will be developed during the three years, following the usual Ph.D. program:
- first year, improvement of the basic knowledge about lighting systems, attendance of most of the required courses, also on applied optics, submission of at least one conference paper
- second year, design and implementation of new algorithms for testing headlamp optical functionalities and submission of conference papers and at least one journal
- third year, finalization of the work, with at least a selected journal publication.
Possible venues for publication will be, if possible, journals and conferences related to computer vision and optics, from IEEE, ACM, and SPIE. An example could be the IEEE Transactions on Image Processing. The scholarship, funded at 50% by Italdesign S.p.A, follows DM 117 (2 March 2023). A period of six months abroad will be spent during the PhD, and a period of at least six months in Italdesign will be mandatory too. The work will therefore be done in close collaboration with Italdesign Giugiaro S.p.A, with whom there is already a collaboration.
Required skills: The ideal candidate should have an interest in optics, computer vision, and image processing.
The candidate should also have a good background in programming, mainly in Python. Good teamwork skills will be very important, since the work will require integration with the company's activities.

Group website: https://netgroup.polito.it
Summary of the proposal: This project aims at investigating the problem of efficient software-based network services in medium-large datacenters, in particular with respect to (a) novel kernel-based paradigms (e.g., eBPF) and kernel-bypass frameworks (e.g., DPDK), and (b) their potential acceleration with special-purpose hardware (e.g., SmartNICs). Furthermore, the integration of special-purpose hardware in future generations of server-type processors will be investigated, with the aim of making the aforementioned services faster and more energy efficient.
Topics: Cloud computing, software networking
Research objectives and methods: Research objectives
The research goal of improved efficiency of software-oriented data centers is expected to be achieved by pursuing the following objectives:
- Evaluate the possible performance improvements when leveraging multiple technologies for packet processing, such as DPDK, AF_XDP, and eBPF/XDP.
- Assess the possible efficiency improvements when running network processing software on SmartNICs.
- Evaluate the network functions currently required by modern datacenter orchestrators, looking for potential optimization opportunities.
- Explore novel parallelization approaches that allow scaling horizontally (multiple CPU cores exploited in parallel) and vertically (the software split across multiple modules executed in sequence on different servers).
- All the above objectives will require the design of dedicated processing algorithms, as well as proper tools for validation. All the above studies will be validated by means of a set of network services already running in current datacenters, executed in realistic conditions.

Outline of the research work plan
- First year: the Ph.D. student will review the state of the art regarding software-based network processing, with a focus on technologies whose support was recently introduced in the Linux operating system. A conference publication is expected to be produced based on the results of the review. Afterward, the candidate will explore the performance of technologies such as DPDK, AF_XDP, and eBPF/XDP, providing the ground for possible optimizations.
- Second year: the Ph.D. student will develop a performance prediction model for DPDK, AF_XDP, and eBPF/XDP software (which is expected to be published in a paper); then he/she will leverage the above model to determine the best technology to be used in each running condition, based on static (e.g., software configuration) and dynamic (actual traffic to be processed) data. This is expected to originate a third paper.
- Third year: the Ph.D. student will pursue two directions: first, the analysis of the possible speed-up provided by SmartNICs when applied to the field of investigation; second, the extension of the achieved results to a small datacenter scenario, in order to leverage the power of cloud-native functions. The above activities are expected to be published in two separate papers.
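As a toy illustration of how such a prediction model could drive technology selection, the snippet below encodes purely invented per-packet costs for DPDK, AF_XDP, and eBPF/XDP and picks the cheapest technology able to sustain the offered packet rate. All coefficients are placeholders for the measurements the model would actually be built from, not real performance figures.

```python
# Illustrative-only cost model: the coefficients below are placeholders,
# not measured figures for DPDK, AF_XDP, or eBPF/XDP.
PROFILES = {
    "DPDK":     {"per_pkt_us": 0.10, "busy_poll": True},   # kernel-bypass, polls a core
    "AF_XDP":   {"per_pkt_us": 0.30, "busy_poll": False},
    "eBPF/XDP": {"per_pkt_us": 0.15, "busy_poll": False},
}

def pick_technology(pkt_rate_pps, spare_cores):
    """Choose the cheapest technology that can sustain the offered load.
    A busy-polling technology is only eligible if a core can be dedicated."""
    best, best_cost = None, float("inf")
    for name, p in PROFILES.items():
        if p["busy_poll"] and spare_cores < 1:
            continue  # cannot dedicate a polling core
        capacity_pps = 1e6 / p["per_pkt_us"]  # packets/s one core can process
        if pkt_rate_pps > capacity_pps:
            continue  # would drop packets
        # cost = fraction of a core consumed (busy polling always burns one)
        cost = 1.0 if p["busy_poll"] else pkt_rate_pps * p["per_pkt_us"] / 1e6
        if cost < best_cost:
            best, best_cost = name, cost
    return best
```

The real model would replace the static coefficients with predictions learned from the static and dynamic data mentioned in the workplan.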

Expected target publications
Top conferences:
- USENIX Symposium on Operating Systems Design and Implementation (OSDI)
- USENIX Symposium on Networked Systems Design and Implementation (NSDI)
- International Conference on Computer Communications (INFOCOM)
- IEEE conference on Network Softwarization (Netsoft)

Journals:
- IEEE/ACM Transactions on Networking
- IEEE Transactions on Computers
- ACM Transactions on Computer Systems (TOCS)

Magazines:
- IEEE Computer, IEEE Networks
Required skills: The ideal candidate has good knowledge and experience in cloud computing and networking. Availability for spending periods abroad would be preferred for a more profitable investigation of the research topic.

Group website: https://netgroup.polito.it
Summary of the proposal: This project aims at investigating the challenges behind the “Software-Defined Vehicle”, in particular with respect to (1) the evolution of the automobile towards cloud-native technologies, (2) the management of software processes and services with real-time and mission critical requirements, and (3) the integration of the vehicle into the “computing continuum.”
Topics: Cloud/edge computing, computing continuum, mission-critical cloud computing services
Research objectives and methods: Research objectives
The research goal of the “Software-Defined Vehicle” is expected to be achieved by pursuing some of the following objectives:

- Local orchestration algorithms with support for multiple computing devices available on a future car, with support for safety and real-time constrained applications.
- Resource offloading algorithms suitable in case of short-lived/unreliable connections between devices, some of which may be energy constrained.
- Distributed processing algorithms, enabled by the presence of computing/sensing resources nearby (e.g., other cars with their own set of sensors), whose results are shared among the different cars participating in a swarm.
- Decentralized orchestration algorithms, enabling autonomous cars to carry out their tasks by making the best use of the surrounding environment, e.g., leveraging edge resources or swarms of vehicles when available, but being able to operate also in harsh (and resource-limited) environments if needed.
- Algorithms for load balancing / seamless switching between edge-based, swarm-based, or local services for the sake of resiliency and energy efficiency.
- Resource sharing models supporting multiple administrative domains (e.g., car manufacturer, city council, telco operator).

Outline of the research work plan
A possible outline of the research plan is the following:
- First year: the Ph.D. student will review the state of the art regarding the software-defined vehicle, including the current status of the standardization activities as well as emerging open-source initiatives. A conference publication is expected to be produced based on the results of the review. Afterward, the candidate will explore the characteristics of resource-constrained orchestration algorithms with support for real-time services, providing the ground for possible optimizations, which is expected to originate another initial workshop paper.
- Second year: the Ph.D. student will develop a performance prediction model for services running in different locations (on board of the vehicle, at the edge of the network, in the cloud), which is expected to be published in a paper. Then he/she will leverage the above model to determine the best option to be used in each running condition, based on static (e.g., software requirements) and dynamic (status of the infrastructure) data. This is expected to originate a fourth paper.
- Third year: the Ph.D. student will integrate the previous findings in two directions: (1) optimize the location of each running service, including real-time ones, over the entire computing continuum, and (2) predict future infrastructure conditions and software requirements in order to possibly optimize, in advance, the running software. The above activities are expected to be published in two separate papers.
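A minimal sketch of the placement decision described in the workplan, with invented latency figures: each candidate location is scored by execution time plus round-trip time, discarded if unreachable or if it misses the deadline, and the vehicle itself acts as fallback when links drop. All names and numbers are illustrative assumptions, not measured values.

```python
# Hypothetical figures: execution times and RTTs per location are placeholders.
LOCATIONS = {
    "vehicle": {"exec_ms": 40.0, "rtt_ms": 0.0,  "reachable": True},
    "edge":    {"exec_ms": 8.0,  "rtt_ms": 10.0, "reachable": True},
    "cloud":   {"exec_ms": 2.0,  "rtt_ms": 60.0, "reachable": True},
}

def place_service(deadline_ms, locations=LOCATIONS):
    """Pick the location with the lowest end-to-end latency that still
    meets the deadline; return None when no placement is feasible."""
    candidates = []
    for name, p in locations.items():
        if not p["reachable"]:
            continue
        latency = p["exec_ms"] + p["rtt_ms"]
        if latency <= deadline_ms:
            candidates.append((latency, name))
    if not candidates:
        return None  # deadline cannot be met anywhere
    return min(candidates)[1]
```

With connectivity lost to edge and cloud, the same function naturally falls back to on-board execution, which is the resiliency behavior the objectives call for.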

Expected target publications
Top conferences:
- USENIX Symposium on Operating Systems Design and Implementation (OSDI)
- USENIX Symposium on Networked Systems Design and Implementation (NSDI)
- IEEE Vehicular Technology Conference
- International Conference on Computer Communications (INFOCOM)

Journals:
- IEEE Transactions on Cloud Computing
- IEEE Transactions on Vehicular Technology
- IEEE Transactions on Computers
- ACM Transactions on Computer Systems (TOCS)
- IEEE/ACM Transactions on Networking

Magazines:
- IEEE Vehicular Technology Magazine, IEEE Computer, IEEE Networks
Required skills: The ideal candidate has good knowledge and experience in cloud computing and networking. Availability for spending periods abroad would be preferred for a more profitable investigation of the research topic.

Group website: www.sysbio.polito.it
Summary of the proposal: The proposal is in the context of eHealth: the use of innovative technologies for non-invasive vital sign monitoring of subjects affected by chronic diseases and/or elderly, frail people during their activities of daily living. The potential of wearable sensors operating in optical spectrometry will be analyzed for the estimation of several parameters: heart rate / heart rate fluctuations; breathing rate; diastolic and systolic blood pressure; serum concentration of several substances (e.g., lactate, alcohol). Motion artifact removal will be addressed in order to enable dynamic measurements.
Topics: Optical spectrometry, Data processing, Artificial Intelligence
Research objectives and methods: This research proposal regards the use of optical spectrometry to provide innovative estimation of vital signs and parameters, in view of the design of a health monitoring system able to operate in a home environment.

The first objective of this research is to study the application of optical multispectral sensors to achieve a reliable estimation of:
- heart rate and heart rate variability;
- breathing rate;
- diastolic and systolic blood pressure.
To this end, experimental instrumentation available at STMicroelectronics, Agrate Brianza, will be made available. The Ph.D. candidate will be involved in the experimental set-up and the data collection on healthy subjects. Artificial Intelligence (AI) algorithms will be identified, trained, validated, and tested on these data (along with proper libraries already present in STMicroelectronics), using standard ECG and sphygmomanometer data as ground truth.
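As a deliberately simple baseline for this first objective (well below the AI algorithms envisioned here), the dominant spectral peak of a photoplethysmographic (PPG) signal already yields a heart-rate estimate. The band limits below are common physiological assumptions, not values taken from the proposal.

```python
import numpy as np

def estimate_heart_rate(ppg, fs):
    """Estimate heart rate (bpm) as the dominant spectral peak of a
    detrended PPG signal, searched in the physiological 0.7-3.5 Hz band."""
    x = np.asarray(ppg, dtype=float)
    x = x - x.mean()                         # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)   # roughly 42-210 bpm
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```

Approaches of this kind break down under motion artifacts, which is precisely why the proposal foresees artifact removal and learned models on top of such baselines.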

The second objective of this research is to set up a reliable procedure to measure different substances and/or markers in the patient blood using a multispectral optical approach. Again, experimental instrumentation available at STMicroelectronics, Agrate Brianza, will be made available. The Ph.D. candidate will be involved in the experimental set-up and the data collection on healthy subjects. The optical absorption characteristics of the substances whose estimation is clinically relevant (lactate, glucose and alcohol, but possibly also others) will be matched to the spectral characteristics of the available instrumentation.
The measured data will be analyzed using advanced Artificial Intelligence algorithms in order to achieve a reliable estimation of their serum concentrations.

The last objective of this study is related to the use of proper wearable sensors developed at STMicroelectronics using MEMS technology, for the long-lasting extraction and computation of ECG/EEG bio-potentials. This could complete the setup of an eHealth platform monitoring vital signs and parameters.

The expected results are:
- design and evaluation of AI algorithm(s) for vital sign estimation in healthy subjects;
- identification of substances of clinical interest that can be reliably measured using multispectral cameras, and preliminary AI algorithms;
- Validation and testing of MEMS sensors for ECG/EEG measurements.

The main innovation lies in the use of multispectral optical cameras for the estimation of vital signs, and in the integration with other wearable sensors to achieve a complete eHealth platform.

In this context, the candidate activity will address:

ACTIVITIES: YEAR 1.
Task 0: state of the art of heart/breathing rate and blood pressure estimation using optical multispectral cameras. Attention will be paid to motion artifact removal in order to enable dynamic measurements.
Task 1: Preliminary design of AI algorithms for heart/breathing rate and blood pressure estimation using optical multispectral cameras.
Task 2: Measurement campaign in healthy subjects (organized in STMicroelectronics Labs, Agrate Brianza).
Task 3: AI algorithms validation and testing.
Task 4: Preliminary analysis of the possible concentration estimation of some substances of interest (e.g., alcohol, lactate, glucose).

YEAR 2.
Task 5: Measurement campaign (continued).
Task 6: AI algorithm implementation for the evaluation of substance concentration.
Task 7: Design of algorithms for bio-potential estimation using wearable MEMS sensors.

YEAR 3.
Task 8: Final refinement of the algorithm. Validation and testing.
Task 9: Possible experimentation on patients, to be defined and submitted to the competent Ethics Committee.
Task 10: Critical analysis of the results, proposal of integration of the different algorithms in a prototype eHealth platform.

N.B.: Due to the complexity and novelty of this topic, the Task description is necessarily preliminary and may be subject to modifications depending on the obtained early results.
We plan to have at least two journal papers published per year.
Target journals:
IEEE Transactions on Biomedical Engineering
IEEE Journal on Biomedical and Health Informatics
IEEE Journal of Translational Engineering in Health and Medicine
MDPI Sensors
Frontiers in Neurology

Cooperations:
The Ph.D. position is co-funded by STMicroelectronics, Agrate Brianza, in the context of DM 117/2023. The Ph.D. candidate will spend a period of at least six months in the STM Labs, Agrate Brianza. Moreover, an agreement is being established for a period to be spent at Uppsala University to carry out joint research on these topics.
Required skills:
- Expertise in the fields of Signal Processing, Image and Video Analysis, Statistics and Machine Learning.
- Basics in spectroscopy analysis.
- Basics in digital electronics.
- Good knowledge in data acquisition and managing.
- Good knowledge of C, Python, Matlab, Simulink programming languages.
- Good problem-solving, relational, and teamwork attitude.

Group website: https://cad.polito.it
Summary of the proposal: Autonomous electric vehicles, as well as the communication infrastructure making use of the 5G network, pose severe challenges to the Public Administration in terms of new regulations and procedures for safety and security. This research aims at developing software and hardware to ease such transition, specifically targeting fault-tolerant solutions, Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PES), and cybersecurity for automotive.
Topics: Safety, Dependability, Autonomous driving
Research objectives and methods: Research objectives
Autonomous electric vehicles and 5G networks are expected to play a prominent role in the future. The quality of safety-critical systems will influence the management of IoT infrastructure and connected cars by the Public Administration or private bodies. Releasing and verifying V2X safety and security regulations will be of utmost importance, and procedures will need to be implemented to verify compliance with existing and upcoming standards. The work in this proposal is expected to significantly improve the safety and security of electronic systems, thus facilitating the work of the Public Administration in developing and certifying safe and secure V2X communications. The research team's expertise in electronic design automation (EDA) will be leveraged to develop robust methodologies that are both practical and effective. Furthermore, this research is aligned with the goals of the National Centers on Sustainable Mobility and HPC, as well as the Extended Partnership on Artificial Intelligence, which further emphasizes its significance in advancing the state-of-the-art in this field.

The objectives of this research are summarized as follows:
- Identify a suitable hardware platform for V2X applications and suitable software to be used as a representative benchmark for the qualification activities.
- Assess dependability figures on the identified hardware/software infrastructure to identify failure modes, critical parts of the design that require hardening, and suitable safety mechanisms.
- Develop an innovative qualification flow to allow the Public Administration and private bodies to apply safety and security regulations on the IoT infrastructure and connected cars.

Outline of possible research plan
First year:
- Conduct a thorough literature review on dependable and secure electronic systems and V2X communication systems, as well as available safety and security standards, to identify the most recent and relevant research works.
- Select a suitable hardware platform for V2X applications, considering a variety of RISC-V based systems publicly available and using IP cores from industrial partners when available.
- Identify suitable software for mobility and IoT applications.
- Develop a preliminary assessment methodology for the identified hardware and software infrastructure and define suitable failure modes.

Second year:
- Qualitatively and quantitatively assess the weaknesses of the identified V2X infrastructure. The work will include fault injection experiments as well as approaches based on formal methods.
- Identify the critical parts of the design that require hardening (hardware or software) and devise new safety mechanisms and security solutions.
- Cooperate with the Public Administration to identify the necessary actions to focus on when defining the developed hardening solutions, as well as how to integrate them in a system-level qualification flow.
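Fault-injection campaigns of the kind planned for the second year are normally run on RTL or instruction-set simulators; the following is only a toy, self-contained illustration of the principle (single bit-flip fault model plus outcome classification against a golden run). The workload, names, and classification are invented for the example.

```python
import random

def flip_bit(word, bit, width=32):
    """Inject a single-bit fault (soft-error model) into a register value."""
    return word ^ (1 << (bit % width))

def injection_campaign(golden_fn, inputs, n_faults, classify, seed=0):
    """Run a toy statistical fault-injection campaign: corrupt one input
    word per run and classify the outcome against the golden result."""
    rng = random.Random(seed)
    outcomes = {"masked": 0, "sdc": 0}  # sdc = silent data corruption
    golden = golden_fn(inputs)
    for _ in range(n_faults):
        faulty = list(inputs)
        idx = rng.randrange(len(faulty))
        faulty[idx] = flip_bit(faulty[idx], rng.randrange(32))
        result = golden_fn(faulty)
        outcomes["masked" if classify(result, golden) else "sdc"] += 1
    return outcomes
```

A real campaign on the selected V2X platform would additionally distinguish crashes, hangs, and faults detected by safety mechanisms, and would inject into micro-architectural state rather than program inputs.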

Third year:
- Develop a comprehensive assessment methodology tailored to the needs of the Public Administration in collaboration with EDA partners that cooperate with the research group, leveraging their expertise in electronic design automation to refine and optimize the assessment process.
- Evaluate the proposed methodologies extensively through simulations and testing.
- Collaborate with industry partners to validate their effectiveness on real-world applications.

List of possible venues for publications:
The candidate will prepare and submit papers to top-tier conferences and journals in the field of electronic systems, embedded systems, and fault tolerance. Possible venues for publications could include:
- IEEE Transactions on Computers
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- IEEE Transactions on Very Large Scale Integration (VLSI) Systems
- International Conference on Computer-Aided Design (ICCAD)
- International Test Conference (ITC)
- IEEE European Test Symposium (ETS)
- Design, Automation and Test in Europe Conference (DATE)
- RISC-V Summit

Projects
The research is consistent with the themes of the National Centers on Sustainable Mobility and HPC, as well as with those of the Extended Partnership on Artificial Intelligence, in which members of the CAD group participate.
The research will be supported by industrial partners involved in active collaborations. Synopsys is involved in research activities on functional safety and reliability, and provides licensed tools, IP cores, and support. Infineon is also involved in the frame of research contracts on electronic system dependability.
Required skills: - Background in digital design and verification.
- Solid foundations on microelectronic system and embedded system programming
- Experience with fault modeling and testing techniques for digital circuits, such as stuck-at faults, transition faults, and path-delay faults.
- Knowledge of EDA tools, particularly for fault simulation.

Group website: https://cad.polito.it
https://www.synopsys.com
Summary of the proposal: Functional Safety (FuSa) is a concept applied in safety-critical domains for years. As new hardware and software paradigms are emerging (e.g., due to ADAS or high-performance applications for space), current FuSa workflows will soon become obsolete. This research aims to develop innovative FuSa methodologies by introducing new solutions for analyzing Soft Errors and their impact on safety, new safety mechanisms, and automatic verification methods to integrate into next-generation EDA tools.
Topics: Functional Safety, Embedded systems, Reliability
Research objectives and methods: New hardware and software paradigms and emerging technologies will pose severe challenges to the current Functional Safety (FuSa) workflows. Some examples:
- In cyber-physical systems and IoT devices, it can be challenging to ensure safety in the presence of cyber-attacks.
- Artificial intelligence is being used in safety-critical systems, and can introduce new risks, such as the possibility of unintended consequences.
- In Edge computing, ensuring safety in the presence of limited resources and connectivity can be difficult.
As these technologies continue to evolve, developing new methods for ensuring the safety of next-generation safety-critical systems is essential.

Research objectives
The research will focus on advancing state-of-the-art Functional Safety, defining failure modes, safety mechanisms, and evaluation methods, specifically emphasizing emerging hardware and software paradigms.

Summarizing, the objectives of the research will be:
- Analyzing the current functional safety standards, such as ISO 26262 and IEC 61508, and state-of-the-art EDA tools for FuSa and reliability evaluation, identifying their weaknesses when dealing with the next-generation safety-critical systems.
- Developing innovative soft error analysis methodologies based on available and new safety mechanisms and verifying them formally or empirically.
- Prototyping an overall qualification flow for safety-critical systems to guide the development of next-generation EDA tools for FuSa insertion and assessment.
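As a toy illustration of the statistical soft-error analysis that such methodologies automate inside EDA flows, the sketch below (hypothetical names, Python instead of an RTL fault simulator) injects single-bit upsets into a trivial combinational function and measures how often the fault propagates to the output:

```python
import random

def circuit(a, b):
    # Device under analysis: bitwise AND, which exhibits logical masking
    # (a flipped input bit only propagates if the other operand's bit is 1).
    return a & b

def inject_bit_flip(value, bit):
    # Model a soft error as a single-event upset: flip one bit.
    return value ^ (1 << bit)

def fault_campaign(a, b, trials=1000):
    # Statistical fault-injection campaign: compare faulty runs
    # against the golden (fault-free) response.
    golden = circuit(a, b)
    failures = 0
    for _ in range(trials):
        bit = random.randrange(8)          # random 8-bit injection site
        if circuit(inject_bit_flip(a, bit), b) != golden:
            failures += 1
    return failures / trials               # empirical failure probability

print(fault_campaign(0xF0, 0x0F))
```

In this example roughly half of the injected upsets are logically masked, which is exactly the kind of derating effect a FuSa analysis must quantify.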

This research is aligned with the goals of the National Centers on Sustainable Mobility and HPC, as well as the Extended Partnership on Artificial Intelligence, which further emphasizes its significance in advancing the state-of-the-art in this field.

Outline of possible research plan
First year:
The candidate will study the main functional safety standards used in various domains. They will also identify suitable hardware and software to use in their experiments. In this first year, it will be essential to learn about the state of the art on FuSa and dependability in general (including the overlap with cybersecurity in specific domains). Moreover, the candidate will conduct experiments using commercial EDA tools on the identified platform.

Second year:
The candidate will develop new techniques to enhance the traditional soft error analysis workflow supported by available EDA tools. As the objective is to address the limitations of those tools, the candidate will expose scalability and technology-related issues. In cooperation with Synopsys, the candidate will work on prototyping innovative methodologies for the insertion of proper safety mechanisms, their verification, and the evaluation of their impact on reliability metrics.

Third year:
The candidate will automate the developed methodology and validate the overall qualification flow on industrial test cases, possibly from Synopsys’ partners or open-source projects. As the research group is involved in activities with RISC-V communities, such as OpenHW, the candidate will be involved in open-source projects on the safety and security of RISC-V, where they can further test the developed framework.

List of possible venues for publications
Different venues for publications will be considered, which could include:
- IEEE ToC, TVLSI, TCAD
- ICCAD, ITC, ETS, DATE, RISC-V Summit, SNUG
Required skills: - Background in CPU architectures.
- Digital Design knowledge, from the RTL to the Physical implementation stage.
- Deep knowledge of testing, fault assessment and modeling strategies.
- Good knowledge of industrial EDA tools and the ability to rapidly learn new ones.

Group website: https://cad.polito.it
Summary of the proposal: Quantum Key Distribution (QKD) is a leading Quantum Communication (QC) technology, since it allows the realization of a secrecy protocol that is safe and resilient to external attacks. Recently, QKD transmitters have been successfully tested on CubeSats, enabling satellite-based QC.
The final goal of the project is to develop and build an innovative architectural prototype, suitable for space satellites, integrating resilient reconfiguration capabilities and validated through radiation beam campaigns.
Topics: FPGA, Reconfigurability, Reliability, Clusters
Research objectives and methods: Cluster computation nodes based on radiation-hardened self-reconfigurable Field Programmable Gate Arrays (FPGAs) will provide the optimal computational platform, without introducing performance overhead (e.g., in data rate, throughput, or memory interface access rate), when applying Radiation-Hardening Assurance (RHA) techniques at the cluster level.
Simulation of radiation particle interaction with matter, based on fully integrated particle-physics Monte Carlo tools, will support a prototype implementation compliant with the Low Earth Orbit (LEO) and Geostationary Earth Orbit (GEO) radiation environments, which is necessary to improve the physical geometry of the cluster embedded within a satellite. Moreover, it will allow measuring criticality and sensitive positions and studying the fine topology of the cluster.
A QKD communication data acquisition system and online data analysis will provide the monitoring systems, integrated with the REASE computational architecture, needed to obtain measurement feedback during radiation beam testing with protons, heavy ions, and ultra-energetic heavy ions.
A dedicated Quantum Communication application that continuously sustains the transmission and analysis of new random data will be developed, implemented, and tested to provide the necessary feedback for a robust, real-time quantum communication testbed.
The REASE project will be supported by PRIN experts in robust design techniques for reconfigurable architectures, quantum key distribution algorithms and their FPGA implementation, radiation-effects modeling on the satellite physical structure, and radiation beam testing campaigns. The proposed project will build on the outstanding and promising results achieved by the REASE proponents in previous and ongoing projects.
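As an example of the kind of cluster-level RHA technique involved, Triple Modular Redundancy (TMR) masks a single-module upset by majority voting. The bitwise sketch below is purely illustrative (in Python rather than HDL; the actual REASE mechanisms may differ):

```python
def tmr_vote(a, b, c):
    # Bitwise majority voter: each output bit follows at least two
    # of the three redundant module outputs.
    return (a & b) | (a & c) | (b & c)

# A single-event upset in one replica is outvoted by the other two.
golden = 0b1011
upset = golden ^ 0b0100           # one replica suffers a bit flip
print(bin(tmr_vote(golden, upset, golden)))  # 0b1011: the fault is masked
```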
The research program will target the following conference venues:
- IEEE Design and Test in Europe (DATE)
- IEEE Dependable Network and System (DSN)
- IEEE International Test Conference (ITC)
- IEEE International conference on Field Programmable Logic (FPL)
- IEEE RADECS
- IEEE NSREC
The research program will target the following journals:
- IEEE Transactions on Circuits and Systems-I
- IEEE Transactions on Computers
- IEEE Transactions on Reliability
- ACM Transactions on Reconfigurable Technology and Systems
Required skills: The following skills are encouraged:
- good knowledge of the VHDL language and computer architectures
- an enthusiastic approach to the hardware design of computing architectures and systems
- good knowledge of CAD tools for FPGAs and ASICs.

Group website: https://softeng.polito.it
Summary of the proposal: The PRIN project EndGame focuses on End-to-End (E2E) testing of web and mobile apps. EndGame aims to support the generation of relevant E2E test scenarios by leveraging gamification - i.e., the application of game-like mechanics to activities of a different nature - as well as to address the quality of the test harness by revealing and removing issues commonly found in test code, e.g., fragility, dependencies, and flakiness. The effectiveness and practical relevance of the solutions will also be assessed in two real-world case studies.
Topics: Software Engineering, Software Testing, Gamification
Research objectives and methods: The proposal is related to the “EndGame - Improving End-to-End Testing of Web and Mobile Apps through Gamification” PRIN project. EndGame focuses on End-to-End (E2E) testing of web and mobile (W&M) apps; E2E testing refers to the validation of a complex system in its context. E2E testing practice is still often manual, and thus it is tedious, error-prone, and yields poor cost-effectiveness. EndGame aims to support the generation of relevant E2E test scenarios by leveraging a powerful approach: gamification, i.e., the application of game-like mechanics to activities of a different nature. A gamified approach to E2E testing will enable testers to challenge each other in the hunt for hidden faults and vulnerabilities. EndGame will also address the quality of the test harness, in particular the test suites, by providing novel gamified approaches to reveal and remove issues commonly found in test code - e.g., fragility, dependencies, and flakiness - so as to ensure the resilience of the tests themselves.
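One common way to reveal flakiness, which a gamified harness could build upon, is rerun-based detection: a test whose verdict changes across identical reruns is flagged as flaky. A minimal sketch (hypothetical test functions, not the EndGame tooling):

```python
import random

def detect_flaky(test_fn, reruns=10):
    # Rerun-based flakiness detection: run the same test several times
    # and flag it if the verdict is not always identical.
    outcomes = {test_fn() for _ in range(reruns)}
    return len(outcomes) > 1

def stable_test():
    return 2 + 2 == 4          # deterministic: never flaky

def flaky_test():
    # Hypothetical timing-dependent assertion, modeled with randomness.
    return random.random() > 0.3

random.seed(1)
print(detect_flaky(stable_test))  # False
print(detect_flaky(flaky_test))   # True: the verdict varies across reruns
```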

RESEARCH OBJECTIVES
1. Develop a Gamified Solution for Exploratory UI Testing. Build a proof-of-concept tool to explore the UI in order to build test cases. The gamification enhancement will make the task more challenging and motivate the user to produce higher-quality test scripts.
2. Assess the Available Gamified Solutions for Exploratory UI Testing. Conduct an empirical assessment of the available gamified solutions to evaluate the practical benefits that can be gained through the gamification approach in general and through specific gamification mechanics.

OUTLINE OF THE WORKPLAN
The research activities will be carried out using a combination of two methodological approaches:
- an engineering approach thread devoted to the development of novel solutions to introduce gamification elements in E2E testing.
- a scientific approach thread focused on the empirical quantitative assessment of the developed solutions, in order to provide feedback to the engineering activities, and to build a knowledge base of empirical evidence.

LIST OF POSSIBLE VENUES FOR PUBLICATION
Journals:
- IEEE Transactions on Software Engineering
- Empirical Software Engineering
- ACM Transactions on Information Systems
- Journal of Systems and Software

Conferences:
- ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
- International Conference on Software Engineering
- International Conference on Product-Focused Software Process Improvement
- International Conference on Evaluation and Assessment in Software Engineering (EASE)
Required skills: The candidate should exhibit a good mix of:
- a solid foundation in SE principles, methodologies, and practices, with familiarity with the software development lifecycle and software quality assurance techniques;
- a good understanding of mobile and web development, which is important to understand and put into practice the instrumentation needed to implement the gamification tools.
The candidate should also be able to communicate research results effectively.

Group website: https://elite.polito.it/
Summary of the proposal: All educational institutions are facing the challenge of digital technologies in teaching and learning processes. The domain of "Educational Technologies" (EdTech) is flourishing, but innovation is just starting: research is needed to assess, develop, and deploy technical solutions suited to the needs of the learning process, in a teacher- and student-centered way. Emerging technologies such as Artificial Intelligence, physical computing, development environments, etc., will be explored in the domain of CS and other STEM disciplines.
Topics: EdTech, HCI, Physical computing
Research objectives and methods: The research activity will focus on the advancement and application of EdTech approaches and solutions, especially from the point of view of the innovative technologies involved in such solutions. Technologies supporting Digital Education will be exploited to design and test new approaches for managing the teaching and learning processes, both in in-person and online settings. The involved technologies will range from web and mobile applications, to the adoption of Artificial Intelligence techniques, to the integration of physical computing and Maker Spaces in the education processes, as well as advancements in development environments.
The research will focus on the innovation of the solutions, primarily from the technical point of view, but also from the pedagogical side (for which a collaboration with the TLlab is envisaged). The research aims to publish results both in technical venues (according to the adopted technologies) and in educational journals (preferably, IEEE Transactions on Education or ACM Transactions on Computing Education).
The first year of the research will be focused on getting acquainted with the state of the art in this sector, by studying the literature and contacting the main national and international players. In the second and third years, specific technologies will be conceived, designed, and tested, possibly on a real population of learners at the secondary, tertiary, and/or professional level. Particular focus will be given to the inclusion properties of the proposed solutions (covering persons with disabilities or impairments, gender issues, and other forms of digital divide) and to usability and user-centered design involving all interested parties. EdTech solutions must be customizable and programmable by teachers; therefore, new development methods (low-code, intelligent assistants, simulated environments, ...) will be researched to allow non-computer professionals to develop their own training experiences, both on digital platforms and in physical computing. The impact on the Digital Wellbeing of the students will also be considered, in collaboration with the ongoing active topics in our research group.
The work will also be included in two Erasmus+ projects, namely "AccessCoVe" and "Critical Making", as well as in the forthcoming "Digital Education Hub" partnership. It will involve collaboration and co-funding with the Links Foundation, and other partners in the projects.
Required skills: The candidate should possess a strong background in Computer Science or Computer Engineering and good mastery of programming technologies, especially in the fields of web and mobile applications. The candidate should master a sound software development process and apply methodologies from the Human-Computer Interaction and Software Engineering domains. Previous experience in the field of EdTech is optional but desirable.

Group website: https://security.polito.it
Summary of the proposal: In modern ICT infrastructures, computing and storage are no longer located only in the core: with fog computing, personal devices, and the IoT, several distributed components concur to data processing. Similarly, networks are no longer based only on hardware appliances but have evolved into intelligent elements able to perform several tasks, thanks to SDN (Software Defined Networking) and NFV (Network Function Virtualization). This requires a new security paradigm: zero-trust security, where no component can be trusted a priori, but each must demonstrate its identity and integrity before being accepted in the infrastructure, as well as periodically during operation. The proposed research will deal with various aspects of zero-trust security, from electronic identity (of users and devices) to the verification of component integrity and the monitoring that the intended security properties are established and maintained during system operation. The final objective is the design and test of a coherent zero-trust architecture for modern ICT infrastructures.
Topics: Cybersecurity, Softwarized networks (SDN, NFV), Edge and cloud computing
Research objectives and methods: The proposed research aims to go beyond the state of the art with respect to the base technologies that support the zero-trust security paradigm in next-generation networks. First, each node has to prove its trustworthiness. This is often achieved by providing an unforgeable proof of its software status (binaries, memory, and configuration). While the basics are covered by the Trusted Computing paradigm (through the remote attestation procedure), there are several aspects to be investigated, such as run-time attestation, deep attestation, attestation in virtualized environments, fast attestation, and root-of-trust hooks for low-cost devices. Second, in order to avoid human errors, the system must be configured as automatically as possible, and its operation must be continuously compared with the expected behaviour. For the networking part, AI techniques and "intents" are currently the hot topic to express the desired behaviour and perform automatic configuration. We plan to extend these techniques to also cover various security aspects (confidentiality, integrity, availability, and privacy) and to use them not only for configuration but also for monitoring system behaviour. Last but not least, strong identity verification of all the nodes is needed to implement access control and to store reliable and undeniable evidence of the performed actions (thus providing support for security audit and even forensic analysis). The strongest identity verification is provided by asymmetric cryptography and PKI, but this has limits in terms of performance (speed and required hardware capability) and trust (the hierarchical model is not always acceptable in various environments). Here we will investigate alternative identity solutions, based on fast key distribution, low-power cryptography, and delegated identity (e.g., based on a proxy, reputation, or agreement, possibly backed by an appropriate type of blockchain).
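The remote attestation idea can be sketched minimally as follows: the prover measures its software state, binds the measurement to a verifier-supplied nonce for freshness, and the verifier compares it against a known-good ("golden") value. This is a symmetric-key toy assuming a shared key; real Trusted Computing protects the key with a hardware root of trust (e.g., a TPM) and uses asymmetric quotes:

```python
import hashlib, hmac, os

# Shared attestation key; in a real deployment this would be protected
# by a hardware root of trust, which this sketch deliberately omits.
ATTESTATION_KEY = b"demo-key"

# Known-good measurement of the expected firmware (illustrative value).
GOLDEN_MEASUREMENT = hashlib.sha256(b"firmware-v1.0").hexdigest()

def prover_quote(firmware_image, nonce):
    # The prover measures its software state and binds the result
    # to the verifier's nonce to guarantee freshness.
    measurement = hashlib.sha256(firmware_image).hexdigest()
    mac = hmac.new(ATTESTATION_KEY, nonce + measurement.encode(),
                   hashlib.sha256).hexdigest()
    return measurement, mac

def verifier_check(measurement, mac, nonce):
    # Recompute the MAC and compare the measurement to the golden value.
    expected = hmac.new(ATTESTATION_KEY, nonce + measurement.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected) and measurement == GOLDEN_MEASUREMENT

nonce = os.urandom(16)
m, mac = prover_quote(b"firmware-v1.0", nonce)
print(verifier_check(m, mac, nonce))  # True: node is in the expected state
```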

The first year will be devoted to gaining insight into the technologies at the foundation of zero-trust security: electronic identity (for users and devices) based on PKI and blockchain, AI and intent-based networking for configuration, and trusted computing for integrity verification. The second year will be used to explore design alternatives, achieving a balance between centralized and distributed approaches (i.e., core, edge, and clients). This will also include considering various alternatives for implementing critical functions, in hardware or software (e.g., a TPM chip versus firmware-based solutions to implement a TEE, Trusted Execution Environment), reactive versus proactive protection, and closed or open ecosystems (e.g., with respect to PKI and blockchain solutions). Finally, the third year will be devoted to the experimental evaluation of the proposed architectures. This will possibly take place in the frame of the EU-funded project iTrust6G, which deals with security and trust in 6G networks.

Possible target publications: IEEE Security and Privacy, Springer International Journal of Information Security, Elsevier Computers and Security, Elsevier Computer networks, Future Generation Computer Systems.

This research is part of the Horizon Europe iTrust6G project (security and trust in 6G networks).

Note: the project website is not yet available, as the project will start in January 2024.
Required skills: Cybersecurity (mandatory)
Network security (mandatory)
Trusted computing (preferred)

Group website: http://grains.polito.it/
https://vr.polito.it/
Summary of the proposal: Extended Reality (XR) technologies are emerging as game-changing tools, facilitating immersive experiences able to transcend traditional online communications. The challenge in this rapidly evolving landscape is to reproduce authentic social interactions and realistic stimulations, enhancing collaboration in shared virtual environments. This proposal aims to address open issues in this field, such as creating realistic virtual worlds and optimizing XR technologies for sensory experiences and socialization.
Topics: eXtended Reality, shared virtual environments, social interaction
Research objectives and methods: Extended Reality (XR) technologies are rapidly evolving and, by spreading in the consumer market, promise to transform the way we communicate, entertain, work, study, etc. An essential prerequisite for this evolution is the ability to reproduce authentic and engaging social interactions, along with realistic sensory stimulations, so as to elevate the quality of the user experience in shared virtual environments, today succinctly referred to as the metaverse.
The creation of such experiences, however, poses several challenges from a research perspective, which can greatly affect their applicability and utility.
For instance, using XR for spatial visualization in, e.g., collaborative decision-making, requires ensuring visual and temporal coherence for meaningful decision support, addressing visual interference to distinguish crucial information, and guaranteeing accurate registration of the digital contents with the real world. At the same time, it requires managing “egocentric” viewpoints to guarantee the users’ freedom of movement and mitigating the possible data overload for clarity. A first research objective will be to investigate means to optimize the use of XR in these contexts, in terms of techniques, metaphors, gamification elements, and devices for human-human and human-machine interaction, to study their advantages and disadvantages when applied to real-life use cases. This activity may also lead to the draft of specific guidelines for designing experiences of this kind.
A significant challenge affecting communication in collaborative XR experiences is the poor quality of the sensory stimuli currently available. For example, poor haptic feedback and spatial representation can impede the realistic recreation of collaborative interactions. Hence, a second objective will be the design of novel methods to elevate the fidelity of sensory stimulation, convey emotions, and enhance digital relationships in XR through existing technology.
A further potential advantage of multi-user experiences lies in the possibility of designing them with a multi-role structure. The coexistence of users simultaneously embodying different roles in shared scenarios (e.g., serious games) can allow users to acquire knowledge in a different way, potentially enhancing engagement and problem-solving skills. A third objective will therefore be to assess the advantages of such multi-role experiences, in terms of effectiveness, compared to single-role implementations.
Group dynamics and collaboration patterns are also highly relevant, since group performance is, in fact, determined by how well the workload is distributed among members and how effectively they coordinate to execute the task. Therefore, a fourth objective could be to study how collaboration occurs at individuals’ level and group dynamics, measuring collaboration performance and competition within a group and between groups.
Finally, it is known that the use of XR technologies entails an additional workload compared to the equivalent experience in the real world. However, it is not easy to determine whether this disadvantage is linked to implementation choices or inherent to the technology itself. Therefore, a last objective will be devoted to measuring the impact in terms of workload of various technologies and techniques when applied to representative use cases, with the aim to isolate the contribution of each element and identify the most suitable configurations.
During the first year, the PhD student will review the state of the art in terms of techniques/approaches developed/proposed to deal with the issues mentioned above concerning social interaction, sensory engagement, and virtual collaboration in shared XR environments. He or she will start addressing open problems in these domains by focusing, e.g., on one of the identified objectives, and devising solutions that will be tested in specific application scenarios among those tackled by the GRAphics and Intelligent Systems (GRAINS) group and the VR@POLITO lab, aimed at supporting communication, learning, and working activities for citizens and professionals. A conference publication is expected to be produced based on the results of these activities. During the first two years, the student will complete his or her background in topics relevant to the proposal by attending suitable courses. During the third year, the student will devise protocols and develop metrics to test the efficacy of the solutions proposed for the identified problems and will run experiments with relevant stakeholders to prove the validity of the work done, advancing the state of the art in the field. The results obtained will be reported in further conference papers plus, at least, one high-quality journal publication.
The target publications will cover the fields of XR and HMI. International journals could include, e.g., IEEE TVCG, IEEE THMS, IEEE CG&As, ACM TOCHI, etc. Relevant international conferences could include ACM CHI, IEEE VR, IEEE ISMAR, Eurographics, etc.
Required skills: Knowledge, skills and competences related to computer graphics, multi-player game development and, specifically, XR technologies.

Group website: https://dbdmg.polito.it/dbdmg_web/
http://grains.polito.it
Summary of the proposal: Efficient and timely plant disease detection is vital for guaranteeing food security and climate change adaptation in developing nations. Despite progress in AI, challenges persist due to limited, biased datasets and disparities between lab and open-field conditions. Practical constraints demand efficient solutions capable of running on low-resource computational platforms. This PhD aims to craft a computationally efficient solution for disease severity assessment, adapting advanced Neural Architecture Search to smaller datasets, and creating adaptable models for agricultural sustainability in developing countries.
Topics: Prediction and forecasting, Computer vision, Smart agriculture
Research objectives and methods: State of the art and research gaps
Early detection of plant diseases is crucial in averting farm devastation and mitigating risks to food security. Naked-eye detection is considerably complex: even experts, including plant pathologists and farmers, face significant hurdles in distinguishing similar diseases, identifying their causes, and discerning the various disease stages in plants. To address this issue, researchers have delved into computer vision integrated with machine learning, yielding promising outcomes. Notably, current disease phenotyping methods based on RGB images address two different problem formulations: (i) disease detection, defined as the absence or presence of a disease, and (ii) disease quantification or severity prediction, defined as the extent to which an individual leaf has been affected. While disease detection has been extensively tackled in the literature, considerably less effort has been devoted to disease quantification and severity prediction.
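The severity-prediction formulation can be made concrete as a pixel-level ratio. The toy sketch below (hand-written binary masks, not a learned model) computes severity as the fraction of leaf pixels covered by lesions:

```python
def severity_from_masks(leaf_mask, lesion_mask):
    # Disease severity as the fraction of the leaf area covered by
    # lesions (pixel-level disease quantification).
    leaf_pixels = sum(p for row in leaf_mask for p in row)
    lesion_pixels = sum(l and d
                        for lrow, drow in zip(leaf_mask, lesion_mask)
                        for l, d in zip(lrow, drow))
    return lesion_pixels / leaf_pixels if leaf_pixels else 0.0

# Toy 4x4 binary masks: 12 leaf pixels, 3 of them lesioned.
leaf = [[0, 1, 1, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 1, 1, 0]]
lesion = [[0, 0, 0, 0],
          [0, 1, 1, 1],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
print(severity_from_masks(leaf, lesion))  # 0.25
```

In practice the two masks would come from a segmentation model, and the challenge addressed by this proposal is obtaining them accurately and cheaply in open-field conditions.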
Despite achieving promising results in laboratory studies, there are several hurdles that prevent the widespread application of computer vision techniques for plant disease management. These challenges primarily revolve around dataset issues, complexities in image backgrounds, practical deployment concerns, and model optimization limitations.
Firstly, dataset insufficiency poses a significant hurdle. While publicly available datasets exist, the lack of severity annotated datasets crucial for plant disease severity research remains evident. Annotating images for severity is a tedious process, demanding both efficiency and accuracy. Addressing this, automating annotation with advanced software and employing semi-supervised training methods emerge as potential solutions, yet the accuracy challenge persists, both in manual and software-driven annotation.
Another substantial challenge is dataset imbalance and bias, impacting model generalizability and potentially leading to overestimated performance estimates. Complexities arising from diverse image backgrounds, especially in natural environments, pose additional obstacles. These complexities include misclassification due to resemblances between ground stains and disease symptoms, reflections in natural lighting, and the simultaneous occurrence of multiple diseases on a single leaf. Publicly available datasets are affected by dataset bias, in terms of background, acquisition modalities and geographical distribution, besides being often acquired in laboratory conditions.
Practical deployment of such models demands considerations of computing resources and model size optimization. Cloud infrastructures are needed to run large models, but are expensive to operate, require high-bandwidth connectivity and may pose security concerns. Solutions operating on low-resource, mobile devices would facilitate adoption at large, but require the design of ad-hoc networks that can balance requirements in terms of accuracy, fine-grained categorization and computational requirements.

Research objectives
The present PhD proposal aims at designing a mature, robust, and frugal deep learning-based solution for the automated assessment of plant disease severity, specifically designed to meet the needs and requirements of developing countries. The research objectives of this proposal are three-fold:
- RQ1: Enhance the accuracy and efficiency of plant disease severity prediction by leveraging advanced methodologies like Neural Architecture Search (NAS). The focus here is not only to improve prediction accuracy but also to minimize the associated computational costs by leveraging NAS and Hyper-Parameter Optimization (HPO). This objective aims to streamline the prediction process while maintaining or improving model performance.
- RQ2: Design innovative methodologies for NAS and HPO on real-life, small datasets. This objective aims to introduce novel methodologies to improve HPO and NAS on small-scale datasets, e.g., by taking into account the uncertainty induced by the smaller scale of the dataset, or by leveraging multiple data sources - both labeled and unlabeled - in the optimization process.
- RQ3: Contribute to the creation of robust and adaptable models designed explicitly for agricultural applications. This involves, on the one hand, identifying the sources of potential bias in dataset acquisition and, on the other, exploiting this domain knowledge to design more robust datasets and/or trained models, enhancing the models' resilience to diverse datasets and their ability to adapt effectively to varying conditions within agricultural environments. The objective is to develop models that demonstrate adaptability and reliability in different agricultural scenarios.
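To make the HPO/NAS objectives concrete, a minimal random-search loop is sketched below. The objective function is a toy stand-in for training a model and measuring validation error; NAS extends the same loop by also sampling architectural choices (depth, operators, etc.):

```python
import random

def validation_error(config):
    # Hypothetical stand-in for training a model with the given
    # hyper-parameters and measuring its validation error.
    return (config["lr"] - 0.01) ** 2 + 0.001 * config["width"]

def random_search_hpo(n_trials=50, seed=0):
    # Minimal random-search HPO: sample configurations, keep the best.
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_trials):
        cfg = {"lr": 10 ** rng.uniform(-4, -1),    # log-uniform learning rate
               "width": rng.choice([16, 32, 64])}  # architectural knob (NAS-like)
        err = validation_error(cfg)
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

cfg, err = random_search_hpo()
print(cfg, err)
```

More sophisticated strategies (Bayesian optimization, successive halving, weight-sharing NAS) replace the random sampling, but the evaluate-and-select structure is the same, and on small datasets each evaluation becomes noisy, which motivates RQ2.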

Research plan
The activities will be carried out in three phases, roughly corresponding to the three years.
In phase 1, the PhD candidate will start with a survey of the relevant literature, identify existing challenges and limitations of models that predict plant disease severity, and clearly define the specific problems that the study aims to address within the context of advanced NAS. The candidate will also investigate available datasets, delve into potential sources of dataset bias, and acquire relevant datasets while minimizing sources of biases. A baseline implementation is expected within the first year.
In Phase 2, the goal is to integrate hyper-parameter optimization algorithms into Neural Architecture Search (NAS) for optimizing plant disease severity prediction models, experimenting with diverse strategies for improved efficiency, and ensuring compatibility with low-resource computational infrastructures.
Phase 3 focuses on rigorously evaluating the developed model using tailored datasets, comparing its performance with existing methods, documenting and analyzing results, and emphasizing advancements through NAS integration.
Results will be disseminated at conferences and international journals targeting both foundational and applied data science and computer vision. Conferences include International Conference on Computer Vision (ICCV), European Conference on Computer Vision (ECCV), European Conference on Machine Learning and Knowledge Discovery (ECML-PKDD) and IEEE International Conference on Data Science and Advanced Analytics (DSAA). Targeted journals include IEEE Transactions on Image Processing, IEEE Transactions on Neural Networks and Learning Systems, IEEE Internet of Things Journal, Pattern Recognition, Expert Systems with Applications, Machine Vision and Applications. The PhD proposal is funded by the ENI AWARD 2023 - Debut in Research: Young Talents from Africa.
Required skills: Good knowledge of machine learning and deep learning principles
Proficiency in DL frameworks (e.g., TensorFlow, PyTorch)
Familiarity with image processing techniques
Excellent data analysis and interpretation skills