Research proposals

PhD selection is performed twice a year through an open process. Further information is available at http://dottorato.polito.it/en/admission.

Grants from Politecnico di Torino and other bodies are available.

In the following, we report the list of topics for new PhD students in Computer and Control Engineering. This list will be updated regularly as new proposals become available.

If you are interested in one of the topics, please contact the corresponding Proposer.


- Media Quality Optimization using Machine Learning on Large Scale Datasets
- Web-based distributed real-time sonic interaction
- Digital Wellbeing in the Internet of Things
- Towards a formal definition of software protection
- Responsible, trustworthy, and human-centric innovation in Artificial Intelligence: from principles to actionable industrial practices
- Algorithms, architectures and technologies for ubiquitous applications
- Dependable Computing in the heterogeneous era: Fault Tolerance in Operating Systems
- Computational Intelligence for Computer-Aided Design
- Promoting Diversity in Evolutionary Algorithms
- Scalable architectures for neural-symbolic scene interpretation and generation
- New techniques for the Reliability Evaluation of Neural Networks running on GPUs...
- Quantum Machine Learning Applications and Algorithms
- Modality-Agnostic Deep Learning
- Intermittent Computing for Batteryless Systems
- When the cloud meets the edge: Reusing your available resources with opportunist...
- Human-Centered AI in the Internet of Things
- Analysis, search, development, and reuse of smart contracts
- Blockchain and Industry
- GPU-based Parallel Techniques for the Evaluation of Reliability Measures of Larg...
- Cybersecurity in the Quantum Era
- Advancing UI Testing Techniques
- Beyond 5G (B5G) and the road to 6G Networks
- Distributed learning in support of applications for Vulnerable Road Users
- Manufacturing and In-Field Testing Techniques for Automotive oriented Systems-on...
- Architectures and Algorithms for Management of Resilient Edge Infrastructures vi...
- Urban data science
- Studying human interaction with autonomous vehicles in simulated experiences usi...
- Graph network models for Data Science
- Extended reality for telepresence and digital twinning
- Online learning of time-varying systems
- Explainable AI (XAI) for spoken language understanding
- Explaining AI (XAI) models for sequential data
- Autonomous systems' reliability and security
- Local energy markets in citizen-centred energy communities
- Automatic configuration techniques for improving the exploitation of emerging ed...
- Design of a framework for supporting the development of new computational paradi...
- Co-simulation platform for real-time analysis of smart energy communities
- Simulation and Modelling of V2X connectivity with traffic simulation
- Co-simulation platform for real-time analysis of V2X connectivity
- Digital Twin development for Industrial Cyber Physical Systems
- Blockchain Technology for Intelligent Vehicles
- Blockchain Technology for European Data Spaces
- Early markers to predict the evolution from Mild Cognitive Impairment to Overt D...
- Trusted Execution in a networked environment


Detailed descriptions

Title: Media Quality Optimization using Machine Learning on Large Scale Datasets
Proposer: Enrico Masala
Group website: http://media.polito.it
Summary of the proposal: An ever-increasing amount of data is available in digital format, a large part in the form of multimedia content, i.e., audio, images, and video. The perceptual quality of such data varies widely, depending on the type of content (professional or user-generated), equipment used, bandwidth, storage space, etc. However, users’ quality of experience when dealing with multimedia content strongly depends on the perceived quality; therefore, a number of algorithms and techniques have been proposed in the past for quality prediction. Recently, machine learning has contributed significantly to this effort (e.g., Netflix’s VMAF proposal). However, despite the improvements, no technique can currently be considered truly reliable, partly because the inner workings of machine learning (ML) models cannot be easily understood. The activity proposed here will focus on optimizing the media compression and communication scenario. Particular attention will be devoted to the key objectives of this proposal, i.e., the creation of tools for a systematic, large-scale approach to the problem and the identification of media characteristics and features that can explain the typical black-box ML models. The final aim is to find the characteristics that most influence perceptual quality in order to improve existing measures and algorithms on the basis of such new knowledge.
Research objectives and methods: A few ML-based techniques have recently been proposed for media quality optimization; however, almost all of them work as a black box without giving hints about what could be changed to improve performance. Starting from our ongoing work on modeling the behavior of single human subjects in media quality experiments, a systematic approach will be followed, employing several subjectively annotated databases to obtain improved media quality features that, combined, yield reliable and explainable quality predictions. Moreover, large-scale unannotated datasets will also be considered. To this aim, it is necessary to develop a framework comprising a set of tools that allows researchers to process subjective scores (given by human subjects) as well as objective ones in an efficient and integrated manner, since currently every dataset has its own characteristics, quality scale, way of describing distortions, etc., which make integration difficult. Such a framework, which we will make publicly available for research purposes, will constitute the basis for reproducible research, which is increasingly important for ML techniques. The framework will make it possible to systematically investigate existing quality prediction algorithms, finding strengths and weaknesses, as well as to identify the most challenging content on which newer developments can be based. The large set of data obtained by such a systematic investigation is expected to facilitate the identification of a set of features that can be considered candidates to explain the reasons behind subjective quality scores. Such objectives will be achieved by using both theoretical and practical approaches. The resulting insight will then be validated in practical cases by analyzing the performance of the system with simulations and experiments with industry-grade signals, leveraging the ongoing cooperation with companies to facilitate the migration of the developed algorithms and technologies into prototypes that can then be effectively tested in real industrial media processing pipelines.
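To make the integration problem concrete, the following minimal Python sketch (hypothetical data: dataset names, rating scales, and scores are invented, and linear rescaling is only one of several possible mappings) brings MOS values expressed on different native scales onto a common 0-100 range and computes the Spearman rank correlation commonly used to benchmark objective quality predictors:

import numpy as np
from scipy.stats import spearmanr

def rescale_mos(mos, lo, hi):
    """Linearly map raw MOS from its native [lo, hi] scale to [0, 100]."""
    mos = np.asarray(mos, dtype=float)
    return 100.0 * (mos - lo) / (hi - lo)

# Two toy datasets rated on different scales (e.g., ACR 1-5 vs. 0-100).
mos_a = rescale_mos([1.8, 3.2, 4.5, 2.9], lo=1, hi=5)
mos_b = rescale_mos([22.0, 64.0, 91.0, 47.0], lo=0, hi=100)

subjective = np.concatenate([mos_a, mos_b])
# Invented objective-metric scores for the same eight sequences.
objective = np.array([30.0, 55.0, 88.0, 49.0, 25.0, 60.0, 93.0, 50.0])

# Rank correlation (SROCC) is a standard figure of merit for quality predictors.
rho, _ = spearmanr(subjective, objective)
print(f"SROCC: {rho:.3f}")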
Outline of work plan: In the first year, the PhD candidate will first become familiar with the recently proposed ML- and AI-based techniques for media quality optimization, as well as with the characteristics of the datasets publicly available for research purposes. In parallel, a framework will be created to efficiently process the large sets of data (especially for the video case) with potentially complex measures that might need retraining, fine-tuning, or other computationally complex optimizations. This framework is expected to be made publicly available, also to address the research reproducibility issues that are of growing interest in the ML community. These initial investigations and activities are expected to lead to conference publications. In the second year, building on the framework and the theoretical knowledge already present in the research group, new media quality indicators for specific quality features will be developed, simulated, and tested to demonstrate their performance and, in particular, their ability to identify the root causes of the quality scores for several existing quality prediction algorithms, thus partly explaining their inner workings in a more understandable form. In this context, potential shortcomings of such algorithms will be systematically identified. These results are expected to yield one or more journal publications. In the third year, the activity will be expanded to propose improvements that can mitigate the identified shortcomings, as well as to create proposals for quality prediction algorithms based on the previously identified robust features. Such proposals will target journal publications.
Expected target publications: Possible targets for research publications (well known to the proposer) include:
- IEEE Transactions on Multimedia
- Elsevier Signal Processing: Image Communication
- ACM Transactions on Multimedia Computing Communications and Applications
- Elsevier Multimedia Tools and Applications
- various IEEE/ACM international conferences (IEEE ICME, MMSP, QoMEX, ACM MM, ACM MMSys)
Current funded projects of the proposer related to the proposal: PIC4SeR Interdepartmental Center for Service Robotics (1 PhD scholarship - XXXV cycle - already in place).
Collaboration with the Video Quality Experts Group (VQEG).
Possibly involved industries/companies:

Title: Web-based distributed real-time sonic interaction
Proposer: Antonio Servetti
Group website: http://media.polito.it/
Summary of the proposal: The recent integration of real-time audio functionalities in web browsers, such as the Web Audio API and WebRTC, has opened up new possibilities for web-based interactive and distributed audio systems. Anyone, given an HTTP URL and a web application, can experience sonic interaction even through a mobile device. For example, a performer can control the speakers of spectators’ smartphones during a performance, or a visitor can connect with an installation and let sound mediate the interaction with it. However, the disparities across mobile device capabilities, e.g., audio latency, sound intensity, CPU speed, relative position, significantly hinder the quality of any performance, and long preliminary device setup sessions are needed to “tune” the devices. The proposed activity will address, both theoretically and practically, the problems of managing audio interaction in a mobile web application, considering both technical issues such as device coordination and synchronization, and sonic issues related to the information conveyed by the sound, its meaning, and its emotional content. The main technical aim is to provide guidance for improving such interactive audio performances by means of automatic techniques that can compensate for the differences in an inhomogeneous set of smartphones, as well as to provide support for improved device coordination.
Research objectives and methods: Many sonic interaction works have recently been proposed for web-based applications. However, almost all of them are designed to interact with a single user in an acoustically isolated environment, e.g., through headphones. The main challenges, when dealing with multiple users and devices that share the same physical space and software application, are coordination and synchronization. In fact, different devices exhibit different behaviors because of their hardware and software capabilities. Even in a human scenario, i.e., an orchestra, a conductor is required to provide a means of synchronization, and a tuning session is required to set up each instrument before the performance. Starting from our ongoing work on web audio and streaming technologies, a systematic approach will be followed by analyzing and taking advantage of synchronization and coordination techniques discussed in the literature for different scenarios. The final aim is to realize a framework that will improve collective sonic interaction when web applications are used. Audio functionalities for web applications and mobile devices are relatively new, and further studies are needed to improve their usability, stability, and performance. To this aim, we plan to focus on technical issues in order to improve the automatic setup of the devices which, like the instruments in an orchestra, should be tuned and synchronized. For example, it is essential to make their sound levels uniform, identify their positions, know their latencies, and be able to synchronize them on the beats. Such objectives will be achieved by using both theoretical and practical approaches. The resulting insight will then be validated in practical cases by analyzing the performance of the system with simulations and real experiments. In this regard, the research will be carried out in close cooperation with the Turin Music Conservatory, so as to supplement our experience in sonic production and interaction.
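Device synchronization is one of the technical building blocks mentioned above. As a hedged illustration of the underlying idea, the following Python sketch shows NTP-style clock-offset estimation from four timestamps; the numbers are invented, and in a real web deployment the timestamps would be exchanged between client and server (e.g., over WebSockets or a WebRTC data channel):

def clock_offset(t0, t1, t2, t3):
    """t0: client send, t1: server receive, t2: server send, t3: client receive.
    Returns (estimated offset, round-trip delay), assuming a symmetric path."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# A device can then schedule a beat at server time T by firing at local time T - offset.
off, rtt = clock_offset(t0=100.0, t1=105.2, t2=105.3, t3=100.7)
print(f"offset ~ {off:.2f} ms, RTT ~ {rtt:.2f} ms")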
Outline of work plan: In the first year, the PhD candidate will become familiar with the Web Audio and WebRTC APIs for audio processing in the web browser, as well as with the characteristics of the existing applications for sonic interaction. This activity will address the creation of a JavaScript framework that allows multiple devices to coordinate and interact together in real time through a web application, along with the definition of a set of practical use cases. Such activity, culminating in the implementation, analysis, and comparison of different synchronization techniques in a web environment, is expected to lead to conference publications. In the second year, building on the knowledge already present in the research group and on the candidate's background in Internet networking technologies, new experiments for automatic tuning and synchronization of the devices will be developed, simulated, and tested to demonstrate their performance and their ability to improve the coordination of sound events and the interaction through the devices. The actual production of new sound works will be crucial for this assessment. In this context, potential advantages of such techniques will be systematically analyzed. These results are expected to yield at least a journal publication. In the third year, the activity will be expanded to study new sonic interaction experiences. In this context, this novel approach could unfold new possibilities in the design of interfaces for musical expression and in the composition of multisource electro-acoustic music. Such proposals will target journal publications. Throughout the whole PhD program, the Electronic Music Studio and School of the Music Conservatory of Turin will be involved in the research activity, specifically focusing on its practice-based aspects and the production of new interactive sound works. For this reason, the candidate is not required to be able to play a musical instrument or to know music theory.
Expected target publications: Possible targets for research publications (well known to the proposer) include IEEE Transactions on Multimedia, ACM Transactions on Multimedia Computing Communications and Applications, Elsevier Multimedia Tools and Applications, Computer Music Journal, Journal of the Audio Engineering Society, various international conferences (Web Audio Conference, New Interfaces for Musical Expression Conference, International Computer Music Conference, International Symposium on Computer Music Multidisciplinary Research, Audio Mostly Conference, Sound and Music Computing Conference, IEEE Intl. Conf. Multimedia and Expo, AES Conference, ACM WWW, ACM Audio Mostly, ACM SIGGRAPH)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: Turin Music Conservatory "G. Verdi" (prof. Andrea Agostini)

Title: Digital Wellbeing in the Internet of Things
Proposer: Luigi De Russis
Group website: https://elite.polito.it
Summary of the proposal: While people derive several benefits from using a plethora of “smart” devices every day, the last few years have seen a growing amount of attention on the negative aspects of overusing technology. This attention has resulted in the so-called “digital wellbeing”, a term that refers to the impact of digital technologies on people’s lives, i.e., “what it means to live a life that is good for a human being in an information society”. Models and tools related to the digital wellbeing context, however, often consider a single technological source at a time, mainly the smartphone. Targeting a single source is not sufficient to capture all the nuances of people’s digital wellbeing: in today’s multi-device world, indeed, people use more than one device at a time, and more effort should be put into evaluating multi-device and cross-device interaction to enhance digital wellbeing. This Ph.D. proposal investigates digital wellbeing in the Internet of Things (IoT), with the aim of providing insights, strategies, tools, and interfaces to better cope with digital wellbeing in our multi-device and connected environments.
Research objectives and methods: The main research objective is to explore novel and effective ways to cope with digital wellbeing in IoT ecosystems, where people may use multiple devices at a time for different purposes. The Ph.D. student, in particular, will work on the study, design, development, and evaluation of proper models and novel technical solutions for this domain (e.g., tools and frameworks), starting from the relevant scientific literature and performing studies directly involving users in multi-device settings.
Possible areas of investigation for digital wellbeing solutions include:
a) Data integration. Tools should be able to make sense of data coming from different devices to provide users with a high-level overview of their technological habits. The variety of IoT devices we have today, in particular, opens up new possibilities: a tool could exploit cameras and smartphone sensors in a sensible and privacy-preserving way to enrich usage data with contextual information, e.g., to figure out what a person is looking at. Understanding what the user is currently doing with their devices would be extremely important, as the underlying task is a discriminant factor to differentiate between positive and negative multi-device experiences.
b) Cross-device adaptation. Tools should adapt to the characteristics of the target device, to allow users to control different device-specific behaviors, from managing smartphone notifications to avoiding excessive zapping on the smart TV. Data integration could allow the activation of interventions according to the current activity, e.g., to block smartphone distractions while working at the PC (a toy example of such a rule is sketched after this list).
c) Agent-based support. Contemporary digital wellbeing solutions primarily focus on lock-out mechanisms to reduce device use. While those mechanisms are the shortest path to avoiding unwanted behaviors, they have also proved not to be effective. Instead of blocking “bad” behaviors, a novel solution could be used as learning support, e.g., by suggesting desirable alternatives to scaffold new habits.
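As a toy illustration of point (b) above, the following Python sketch shows a minimal rule table mapping a detected activity to cross-device interventions; every device name, activity, and action here is hypothetical, and a real system would infer the activity from the data-integration layer of point (a):

RULES = [
    # (detected activity, target device, intervention)
    ("working_at_pc", "smartphone", "mute_notifications"),
    ("watching_tv",   "smartphone", "enable_grayscale"),
    ("sleeping",      "smart_tv",   "power_off"),
]

def interventions_for(activity):
    """Return the (device, action) pairs triggered by the detected activity."""
    return [(dev, act) for rule, dev, act in RULES if rule == activity]

for device, action in interventions_for("working_at_pc"):
    print(f"{device}: {action}")  # -> smartphone: mute_notifications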
Outline of work plan: The work plan will be organized according to the following four phases, partially overlapped.
Phase 1 (months 0-6): literature review about digital wellbeing in various contexts (e.g., home, work, …); study and knowledge of IoT devices and smart appliances, as well as related communication and programming practices and standards.
Phase 2 (months 6-18): based on the results of the previous phase, definition and development of a set of use cases, interesting contexts, and promising strategies to be adopted. Initial data collection for validating the identified strategies in some contexts and use cases.
Phase 3 (months 12-24): research, definition, and experimentation of multi-device technical solutions for digital wellbeing, starting from the outcome of the previous phase. Such solutions will likely imply the design, implementation, and evaluation of distributed and intelligent systems, able to take into account both users’ preferences and capabilities of a set of connected devices.
Phase 4 (months 24-36): extension and possible generalization of the previous phase to include additional contexts and use cases. Evaluation in real settings over long periods of time to assess to which extent the proposed solutions are able to address the negative impact and increase any positive outcome on our digital wellbeing.
Expected target publications: It is expected that the results of this research will be published in some of the top conferences in the Human-Computer Interaction field (ACM CHI, ACM CSCW, ACM Ubicomp). One or more journal publications are expected on a subset of the following international journals: ACM Transactions on Computer-Human Interaction, ACM Transactions on Cyber-Physical Systems, International Journal of Human Computer Studies, IEEE Internet of Things Journal, ACM Transactions on Internet of Things, IEEE Transactions on Human-Machine System.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Towards a formal definition of software protection
Proposer: Cataldo Basile
Group website: https://security.polito.it/
Summary of the proposal: Recent years have seen an increase in Man-at-the-End (MATE) attacks against software applications, both in number and severity. MATE attackers have complete control over the devices where they attack software (white-box model). They may use simulators, debuggers, disassemblers, and all kinds of static analysis, dynamic analysis, and tampering tools. The software contains sensitive assets that demand protection. For example, cryptographic keys and intellectual property (e.g., algorithms) need to be kept confidential, and critical functions (e.g., license checks or processes used in Industrial Control Systems) must be protected from tampering. However, software protection is still in its infancy as a research field. Protections such as obfuscation, software and remote attestation, anti-debugging, software diversity, and anti-emulation do not aim to prevent MATE attacks completely. They aim at mitigating the risk that assets are compromised in a given time frame. Hence, they delay potential attackers and lower the attacks' expected return on investment (ROI). Fuzzy concepts and techniques dominate the design and evaluation of software protections. Security-through-obscurity is omnipresent in industry, protection tools and consultancy are expensive and opaque, there is no commonly accepted method for evaluating protection effectiveness, and we lack any form of standardization. The Ph.D. proposal has a high-level research objective: to progress towards a formal definition of software protection as a mitigation phase in a software risk analysis process.
Research objectives and methods: The main research objective is to be the first research group to find a precise formulation of software protection as a risk analysis process. In this risk analysis context, this objective translates into investigating formal models that can be used to perform the automatic (or semi-automatic) protection of software assets, and into their validation through empirical experiments. This research will progress towards a decision support system that helps developers (with limited or no knowledge of software protection techniques) to defend their assets. To this purpose, we expect to build predictive models to categorize, assess, and objectively estimate threats and mitigations. The Ph.D. candidate will improve the existing definitions of potency, the metric that determines the effectiveness of software protections against (human) attackers. The objective is to develop predictive models that can reliably estimate the effectiveness of protections on code fragments before applying them. To this purpose, one research objective is modelling the complexity of reverse engineering code, i.e., of acquiring the understanding needed to mount attacks. The validation of this model will be performed with empirical experiments designed to determine the complexity of comprehension and tampering tasks and how these are affected by the presence of software protections. Furthermore, the candidate will investigate how to estimate the stealth of software protections, the property that measures how visible the protected assets are to attackers. A related research objective is improving the existing models to optimize the stealth of protections. The candidate will also investigate methods to measure resilience, i.e., the resistance of software protections against attack tasks that remove them via automated tools and approaches. To this purpose, the investigation will consider state-of-the-art advanced AI methods, e.g., Automatic Exploit Generation.
Outline of work plan: The initial phases of the Ph.D. will be devoted to the formalization of the risk analysis framework of software protection, to contextualize all the research objectives. The student will start from potency. The most challenging task is building predictive models that estimate the effectiveness of protections before they are actually applied. Measuring ex post is a very time- and resource-consuming task that may easily take minutes, rendering an optimized selection of the protections impossible. The objective of the first year is a formalization of potency and an initial potency estimation model, which will be validated and improved during the last two years. Together with protection-specific parameters, several co-factors will be considered for this purpose. Objective metrics computed on the code both before and after the protection (e.g., LOCs, Halstead, cyclomatic complexity, already used with limited results) will be complemented with metrics based on the semantics of the code to protect. For the correct formulation of potency, the Ph.D. candidate will develop and use models of human comprehension when reverse engineering binaries. These models will consider how attackers symbiotically interact with analysis tools that produce artifacts and abstractions like CFGs/DDGs, traces, and disassembled and decompiled code. Then, the student will extend the initial modelling to tampering tasks. These models are one of the objectives of the second year. Furthermore, the candidate will participate in the design of experiments to empirically assess the impact of protections against attack tasks. Experimental data will also serve as validation of the models. In parallel, models of stealth will be formalized using machine learning methods defined to identify protected assets and classify the protections used. Models of resilience will be based on the effectiveness of Automatic Exploit Generation, a family of techniques used to automatically determine and exploit vulnerabilities in applications, which will be improved to perform the risk assessment of software assets. The objective of the third year is to design and implement the empirical experiments and analyse the collected data. Moreover, stealth and resilience are expected to be formalized. One or two 3-6 month internship periods are expected in an external institution, with the objective of acquiring competencies that may emerge as needed. Research collaborations are ongoing with EU and US academia and with leading software protection firms based in the EU.
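For concreteness, the sketch below illustrates the classical ex-post reading of potency, in the spirit of Collberg et al.'s definition potency = E(P')/E(P) - 1, where E is a complexity metric and P' the protected program; the predictive models targeted by this proposal would instead estimate this quantity before applying the protection. The metric, code fragments, and keyword list are deliberately crude stand-ins:

BRANCH_KEYWORDS = ("if", "for", "while", "case", "&&", "||")

def rough_complexity(source: str) -> int:
    """Very rough cyclomatic-complexity proxy: 1 + number of branch tokens."""
    return 1 + sum(source.count(k) for k in BRANCH_KEYWORDS)

def potency(original: str, protected: str) -> float:
    """Relative complexity increase introduced by a protection."""
    return rough_complexity(protected) / rough_complexity(original) - 1.0

plain = "if (x) { f(); } else { g(); }"
obfuscated = "if (x && p1) { f(); } else if (!x || p2) { g(); } while (0) {}"
print(f"potency ~ {potency(plain, obfuscated):.2f}")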
Expected target publications: We expect at least two publications in top-level cybersecurity conferences and symposia (e.g., ACM CCS, IEEE S&P, IEEE EuroS&P). Results about software protection metrics (potency, stealth, resilience) will be submitted to top-tier journals in scope (e.g., ACM Transactions on Software Engineering, ACM Transactions on Privacy and Security, IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Emerging Topics in Computing). We expect results for at least one paper about the empirical assessment of software protections to be submitted to journals dealing with empirical methods (e.g., Empirical Software Engineering, Springer).
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Responsible, trustworthy, and human-centric innovation in Artificial Intelligence: from principles to actionable industrial practices
Proposer: Antonio Vetrò, Juan Carlos De Martin
Group website: https://nexa.polito.it/
Summary of the proposal: The Ph.D. proposal focuses on the problems of accountability and reliability of AI applications that automate decisions on relevant aspects of human lives, such as mortgage authorizations, social benefits allocation, personalized insurance tariff definition, etc. A large corpus of research has shown that AI systems that automate these kinds of decisions often unfairly discriminate against certain individuals or groups of individuals in favor of others, on grounds that are unreasonable or inappropriate. The goal of the Ph.D. work is to translate the emergent principles and guidelines of responsible, trustworthy, and human-centric Artificial Intelligence, elaborated by a large number of institutions in recent years, into techniques and actionable industrial practices. The Ph.D. work will be done in close collaboration with the company ClearboxAI, which is developing the “AI control room”, a platform for developing more robust, explainable, and monitorable AI models. ClearboxAI will share know-how and access to its technological assets with the Ph.D. candidate, enabling research experiments within real industrial settings and scenarios. As a consequence, it is expected that the Ph.D. candidate will also focus on technology transfer activities, helping to translate the latest developments in the fields of XAI and AI fairness into industrial services.
Research objectives and methods: The proposal has two main objectives, namely O1 and O2.
O1) Research, development, and test of techniques for bias and data quality improvement.
The Ph.D. candidate will research, implement, and test techniques that can help detect bias and data quality issues in training data and mitigate them by experimenting with a variety of techniques, to find the most suitable solutions depending on the data, algorithm, and application context. To reach this goal, the Ph.D. candidate will research and experiment with the most suitable measures of data bias and data quality and with intervention mechanisms to improve them, with a special focus on pre-processing techniques (e.g., data rebalancing algorithms, dataset augmentation methods).
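As a minimal illustration of the kind of pre-processing checks targeted by O1 (the labels and protected attribute below are invented toy data, and the two measures shown are only a starting point), the following Python sketch computes a class imbalance ratio and a statistical parity gap on raw training labels:

import numpy as np

labels = np.array([1, 0, 0, 0, 1, 0, 0, 1])                 # toy outcomes
group = np.array(["f", "m", "m", "f", "m", "f", "m", "f"])  # toy protected attribute

# Imbalance ratio: majority class size over minority class size (1.0 = balanced).
counts = np.bincount(labels)
imbalance_ratio = counts.max() / counts.min()

# Statistical parity difference on the raw labels: gap in positive rates.
rate_f = labels[group == "f"].mean()
rate_m = labels[group == "m"].mean()
parity_gap = abs(rate_f - rate_m)

print(f"imbalance ratio: {imbalance_ratio:.2f}, parity gap: {parity_gap:.2f}")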
O2) Research, development, and test of model enrichment methods to facilitate human comprehension, intervention and overall agency in AI model development and monitoring.
While generating explanation-based assessments for black-box AI models is not very complicated, presenting that information to humans in a way that enables them to make positive interventions in the datasets and models is an open research issue. In addition, most of the available approaches to implementing eXplainable Artificial Intelligence (XAI) focus on technical solutions usable only by experts able to manipulate the algorithms, but not by other relevant stakeholders (including end users). The Ph.D. candidate will research, implement, and test best practices to provide augmented cues not only to model builders but also to laypeople. In this context, a promising alternative to the most beaten routes for XAI is represented by Knowledge Graphs, because they are natively developed to support explanations intelligible to humans: therefore, the Ph.D. candidate will research how to integrate symbolic systems into the typical AI pipeline for the purpose of facilitating human comprehension, intervention, and overall agency.
Outline of work plan: High-level research work plan for O1:
a) Identification of data quality and balance measures (created ad-hoc and searched in the literature, also from other disciplines), and tests on available datasets, on their mutations and on synthetic datasets.
b) Experiments on the propagation of bias and quality issues to the output of classification/prediction tasks, with different algorithms and datasets.
c) Correlational analyses of bias and quality measures with classification output measures, with special attention on the fairness-accuracy tradeoff.
d) Identification and test of mitigation techniques, through re-iteration of steps b) and c).
High-level research work plan for O2:
a) Research, development (and/or reuse and adaptation) and testing of a knowledge matching mechanism to bind input features of selected classification/prediction algorithms to classes of an ontology and entities of a knowledge graph (KG). The process could be partially or fully automated. The activity may include the building of an ontology and of a KG in case they are not yet available for the scenario being tested.
b) Research, development (and/or reuse and adaptation), and testing of an interactive application for producing explanations of algorithm outputs, by leveraging the manipulation of symbols from the previously developed KG to produce i) transparent inferences of new information and ii) high-level explanations. The interaction with the user should take the form of questions asked by users in natural language (also using predefined questions) that are translated into SPARQL queries run over the knowledge graph. The visual output should be evaluated in terms of usability, level of comprehension, and the possibility to configure changes in the model and/or in the data.
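As a toy illustration of step b), the following Python sketch answers one pre-translated user question with a SPARQL query over a miniature KG; the ontology, IRIs, and question are invented for this example, and rdflib is only one possible library choice:

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:income      ex:isFeatureOf ex:loan_model ; ex:influences ex:decision .
ex:postal_code ex:isFeatureOf ex:loan_model ; ex:proxyFor   ex:ethnicity .
""", format="turtle")

# User question: "Which input features may act as proxies for a protected attribute?"
query = """
SELECT ?feature ?attribute WHERE {
    ?feature <http://example.org/proxyFor> ?attribute .
}
"""
for feature, attribute in g.query(query):
    print(f"{feature} may act as a proxy for {attribute}")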
Expected target publications: ACM Journal of Data and Information Quality
ACM Transactions on Information Systems
Expert Systems with Applications
Engineering Applications of Artificial Intelligence
Journal of Systems and Software
Software X
Empirical Software Engineering Journal
Semantic Web Journal
Information Fusion
Journal of Responsible Technology
Current funded projects of the proposer related to the proposal: “Strategia digitale europea nella gestione e regolamentazione dei dati e considerazioni di confronto con gli USA” (“European digital strategy in data management and regulation, with comparative considerations on the USA”; 2021, funded by TIM)
Possibly involved industries/companies: Clearbox AI (https://clearbox.ai/) is a startup incubated in I3P, winner of the National Innovation Award (PNI 2019) in the ICT category and of the EU Seal of Excellence.

Title: Algorithms, architectures and technologies for ubiquitous applications
Proposer: Renato Ferrero
Group website: http://www.cad.polito.it/
Summary of the proposal: Ubiquitous computing is an innovative paradigm of human-computer interaction, which aims at the integration of technology into everyday objects to share and process information. It envisions the accessibility of computing and communication services anytime and anywhere, as well as calm technology, which demands minimal attention from the user when interacting with the system. The current technology trend is moving, on the one hand, towards smart devices, which offer mobile, complex, and personalized functionalities, and, on the other, towards embedded systems that disappear into the physical world and perform simple, specific tasks. It follows that several issues must be addressed in the development of ubiquitous applications: architecture design for interlinking the ubiquitous components (smart devices and embedded systems); integration of different technologies (e.g., sensor networks for monitoring physical conditions, RFID networks for tagging and annotating information, actuators for controlling the environment); and development of algorithms for smart and autonomous functionalities. The research activity of the PhD candidate regards the design, development, and evaluation of ubiquitous applications, so he/she is required to possess multidisciplinary skills (e.g., distributed computing, computer networks, advanced programming).
Research objectives and methods: The research objectives concern the identification of requirements and the investigation of solutions for designing and developing ubiquitous applications. In more detail, the following objectives will be pursued:
1) to develop distributed architectures able to support both local and remote services, thus increasing the number of functions offered and avoiding duplication of code. Resource availability and access (storage, applications, data for end-user) will be based on cloud computing and fog computing.
2) to develop context-aware applications, which can identify the most useful services for the user, instead of proposing the full list of available functionalities. This can limit the user's effort in interfacing with the system and can reduce the computational requirements. The involved technology consists of wireless sensor networks, which provide useful information about the physical environment (e.g., location, time, temperature, light…), and RFID networks, for context-based queries, item location and tracking, and automated access to physical places or virtual services (a minimal example of context-based service selection is sketched after this list).
3) to enhance the autonomy of the ubiquitous system. For example, algorithms for power savings will increase the lifetime of mobile components, thus reducing the need of human maintenance.
4) to develop "smart" systems for proactive behavior and adaptability in dynamic contexts. The research will focus on modeling the physical environment, as well as human behavior. Limitations are due to the dynamicity of the environment, its incomplete observability, the impossibility to completely determine the user actions and goals. Therefore, algorithms for handling the incompleteness of the system and the non-deterministic user behavior will be designed.
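As anticipated in objective 2, the following minimal Python sketch illustrates context-based service selection; the services and context attributes are invented for illustration:

SERVICES = [
    {"name": "print_document", "location": "office", "hours": range(8, 19)},
    {"name": "light_control",  "location": "home",   "hours": range(0, 24)},
    {"name": "meeting_room",   "location": "office", "hours": range(9, 18)},
]

def relevant_services(location, hour):
    """Offer only the services that make sense in the sensed context."""
    return [s["name"] for s in SERVICES
            if s["location"] == location and hour in s["hours"]]

print(relevant_services(location="office", hour=10))  # ['print_document', 'meeting_room']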
Outline of work plan: The PhD research activities are organized in two consecutive phases. In the first phase, the PhD candidate will improve his/her background by attending PhD courses and by surveying relevant literature; then he/she will apply the learnt concepts to tackle specific issues in the implementation of ubiquitous systems. In particular, the first research topics will regard applications already developed by the research group, such as air pollution monitoring and thermal monitoring in smart buildings: the PhD candidate may be in charge of enhancing the ubiquity of applications developed in this context. As already existing applications will be considered in the first phase, the research objectives can be considered independent from each other. In this way, it is possible to anticipate the expected outcome (personal scientific contribution, research papers) by focusing on one research objective at a time, since there is no need to master all concepts in advance. In the second phase, when the training is completed and the PhD candidate has a full vision of the matter, he/she will be able to evaluate the ubiquity of existing solutions, in particular with respect to the automation of human tasks and the access to information anywhere and at any time. He/she will be able to propose technological improvements and/or design new solutions for solving problems within the research group's expertise. The PhD candidate will be involved in research projects aiming at designing and implementing new ubiquitous applications. He/she will be required to exploit his/her competence to analyze and solve real problems and, finally, to evaluate the performance of the proposed solutions.
Expected target publications: - IEEE Pervasive Computing
- IEEE Internet of Things Journal
- ACM International joint Conference on Pervasive and Ubiquitous Computing (UbiComp)
- IEEE International Conference on Pervasive Computing and Communications (PerCom)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Dependable Computing in the heterogeneous era: Fault Tolerance in Operating Systems
Proposer: Alessandro Savino, Maurizio Rebaudengo
Group website: www.smilies.polito.it
www.cad.polito.it
Summary of the proposal: Nowadays, the computing continuum is growing in computing capabilities and in its range of application fields, placing Cyber-Physical Systems (CPSs) in many heterogeneous scenarios, ranging from agriculture to biology, including autonomous systems and the IoT. This variety of scenarios comes with new challenges, because the dependability constraints defined by each field may vary widely, while the system's dependability is always crucial. For this reason, modern systems may require a middle layer that, by exploiting the highest level of fault tolerance, can enhance any application with solid dependability. The proposed Ph.D. will focus on methodologies and tools to assess the reliability of Operating Systems (including Real-Time OSs) and target the design of hardening techniques tailored to the specific requirements imposed by each application field. The investigation will include exploiting all hardware features in modern microprocessors and systems to track the system's behavior and support real-time adaptations.
Rsearch objectives and methods: The research objectives envisioned for this Ph.D. include:
1. To design and build fault injection strategies and tools to evaluate OS resilience, resorting to all modern facilities, such as cloud-aware computation, architectural models, etc. All feasible solutions should deliver detailed data about every part of the OS, including the evaluation of all system parameters, and support highly scalable solutions to ensure the feasibility of the analysis.
2. To develop selective hardening techniques to endow the OS with both improved resiliency and higher filtering capabilities, reducing the number of errors reaching the application layer. The goal is to introduce techniques that improve power consumption and execution time with respect to the classical voting approaches on replicated executions by being OS-aware rather than generic. Those techniques will cover the full system stack: hardware and software. When leveraging hardware, they will take advantage of already available hardware features, such as Performance Monitor Counters (PMCs), and target new hardware features in modern microprocessors, like RISC-V.
3. To propose real-time online adaptation/reconfiguration techniques to react dynamically to non-masked faults and to enable preemptive behavior against foreseeable future issues. The previous methods will be investigated in a dynamic scenario, identifying fault detection techniques that, once triggered, activate countermeasures at run time, carefully balancing the impact of fault-tolerance methods against other system parameters such as power consumption and execution time. To such an extent, covering hardware as well as software techniques will pave the way to a higher masking capability.
Outline of work plan: Year 1: Dependability analysis of Hardware and Software Platforms
Reliability assessment is a crucial requirement to appropriately intervene within the OS and effectively harden its most sensitive parts. Thus, the Ph.D. student will spend the first year studying and designing solutions to assess the system's reliability from an OS standpoint. The study comprises a thorough evaluation of state-of-the-art solutions, either to define the comparison baseline or to identify work with potential for improvement. After this initial phase, the student will develop injectors (either based on architectural simulators, i.e., gem5, or purely software-based) able to properly inject all kinds of faults, e.g., permanent, transient, and intermittent, to better investigate both the HW contribution to OS faults and the OS's masking capabilities. These tools will need to run on docker-like virtualization environments to allow effective scaling of the fault injection campaigns. Moreover, all solutions will have to target different hardware architectures and be able to customize those architectures when required by the definition of new hardware features.
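As a minimal sketch of the injector idea (the "CPU state" and workload below are trivial stand-ins, not a gem5 model), the following Python fragment flips one architectural-state bit per run and classifies each outcome against a golden execution:

import copy
import random

def workload(state):
    """Toy program: accumulate two registers into r0."""
    state["r0"] = (state["r1"] + state["r2"]) & 0xFFFFFFFF
    return state["r0"]

def inject_transient(state, rng):
    """Flip one random bit in one random register (single-event upset)."""
    faulty = copy.deepcopy(state)
    reg = rng.choice(list(faulty))
    faulty[reg] ^= 1 << rng.randrange(32)
    return faulty

rng = random.Random(42)
golden_state = {"r0": 0, "r1": 7, "r2": 9}
golden = workload(copy.deepcopy(golden_state))

outcomes = {"masked": 0, "sdc": 0}
for _ in range(1000):
    result = workload(inject_transient(golden_state, rng))
    outcomes["masked" if result == golden else "sdc"] += 1
print(outcomes)  # rough masking statistics for this (trivial) workload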
Year 2: Hardening the OS
Detailed data collected by fault injection campaigns will require an in-depth analysis capable of pointing out the OS's unreliable components, paving the way to hardening those components while reducing the overheads introduced by the fault-tolerance techniques. These techniques will be designed using more selective and OS-aware solutions, like specific code protection, data reduction, and replacing portions of the OS with intrinsically resilient methods such as approximate computing, aiming at reducing the need for the classical duplication/triplication approaches. For these reasons, each tool deployed in the previous year will support the extraction of power consumption models and other parameters, e.g., timing profiles, to link the hardening solutions with them.
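For reference, the following Python sketch shows the classical majority-voting baseline on replicated executions that the selective, OS-aware techniques above aim to outperform in cost (illustrative code only; a real implementation would replicate at the process or hardware level):

from collections import Counter

def tmr(fn, *args):
    """Run fn three times and vote; None signals an unresolvable mismatch."""
    results = [fn(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    return value if votes >= 2 else None

print(tmr(lambda a, b: a + b, 2, 3))  # 5 (all replicas agree)

Selective hardening would instead replicate only the OS components that the fault injection data flag as critical.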
Year 3: Reconfigurable OS
Modern CPSs include many hardware facilities for online monitoring of system behavior. Those facilities may have predictive capabilities that, coupled with a reconfiguration capability of the OS, may deliver real-time adaptation. The Ph.D. student will target the online monitoring and fault detection tasks and develop reconfiguration methods that actively react, at the thread level, to the dependability threats detected at the hardware level, linking their employment with power consumption and execution time. This online task might leverage new features to be studied and introduced at the design and model level by the candidate. Moreover, at this point, to evaluate the flexibility of the run-time adaptation capabilities, the work will target specific applications representative of the most common embedded systems scenarios.
Expected target publications: We expect the candidate to submit their work to several high-level conferences (e.g., DATE, DAC, DFT, ESWEEK, MICRO, etc.) and top-ranking journals (e.g., IEEE Transactions on Computers, IEEE Transactions on Reliability). We foresee a minimum of two conference publications per year, and at least two journal publications spanning the second and third years.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Computational Intelligence for Computer-Aided Design
Proposer: Giovanni Squillero
Group website: http://www.cad.polito.it/
Summary of the proposal: Computational Intelligence (CI) is playing an increasingly important role in many industrial sectors. Techniques ascribable to this large framework have long been used in the Computer-Aided Design field: probabilistic methods for the analysis of failures or the classification of processes; evolutionary algorithms for the generation of tests; bio-inspired techniques, such as simulated annealing, for the optimization of parameters or the definition of surrogate models. The recent fortune of the term “Machine Learning” has renewed interest in many automatic processes; moreover, the publicized successes of (deep) neural networks have softened the bias against other non-explainable black-box approaches, such as Evolutionary Algorithms, or the use of complex kernels in linear models. The proposal focuses on the use and development of algorithms ascribed to the broad framework of “Artificial Intelligence”, yet specifically tailored to the needs and peculiarities of the CAD industry.
Research objectives and methods: The goal of the research is twofold: from an academic point of view, tweaking existing methodologies, as well as developing new ones, specifically able to tackle CAD problems; from an industrial point of view, creating a highly qualified expert able to bring scientific know-how into a company, while also understanding practical needs, such as how data are selected and possibly collected. The need to team experts from industry with more mathematically minded researchers is apparent: frequently, a great knowledge of the practicalities is not accompanied by an adequate understanding of the statistical models used for analysis and prediction. The research will consider techniques less able to process large amounts of information, but perhaps more able to exploit all the problem-specific knowledge available. It will almost certainly include bio-inspired techniques for generating, optimizing, and minimizing test programs, and statistical methods for analyzing and predicting the outcome of industrial processes (e.g., predicting the maximum operating frequency of a programmable unit based on the frequencies measured by some ring oscillators; detecting dangerous elements in a circuit; predicting catastrophic events). The activity is also likely to exploit (deep) neural networks; however, developing novel, creative results in this area is not a priority. On the contrary, the research shall face problems related to dimensionality reduction, feature extraction, and prototype identification/creation.
Outline of work plan: The research would start by analyzing a current practical need, namely “predictive maintenance”. A significant amount of data is currently collected by many industries, although in a rather disorganized way. The student would start by analyzing the practical problems of data collection, storage, and transmission while, at the same time, practicing the principles of data profiling, classification, and regression (all topics that are currently considered part of “machine learning”). The analysis of sequences to predict the final event, or rather to identify a trigger, is an open research topic, with implications far beyond CAD. Unfortunately, unlike popular ML scenarios, the availability of data is a significant limitation, a situation sometimes labeled “small data”. Then the research shall focus on the study of surrogate measures, that is, the use of measures that can be easily and inexpensively gathered as a proxy for others that are more industrially relevant but expensive. In this regard, Riccardo Cantoro and Squillero are working with a semiconductor manufacturer on using in-situ sensor values as a proxy for the prediction of operating frequency, and they have jointly supervised master's students. The work could then proceed by tackling problems related to “dimensionality reduction”, useful to limit the number of input data of the model, and “feature selection”, essential when each single feature is the result of a costly measurement. At the same time, the research is likely to help the introduction of more advanced optimization techniques in everyday tasks.
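As a toy illustration of the surrogate-measure idea (synthetic data throughout; this is not the manufacturer's actual setup), the following Python sketch predicts an expensive target, the maximum operating frequency, from cheap ring-oscillator readings:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
ring_osc = rng.normal(100.0, 5.0, size=(200, 8))                  # 8 cheap sensor readings
fmax = ring_osc @ rng.uniform(1, 3, 8) + rng.normal(0, 10, 200)   # costly target

model = LinearRegression().fit(ring_osc[:150], fmax[:150])
print(f"held-out R^2: {model.score(ring_osc[150:], fmax[150:]):.3f}")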
Expected target publications: Top journals with impact factors
• ASOC – Applied Soft Computing
• TEC – IEEE Transactions on Evolutionary Computation
• TC – IEEE Transactions on Computers
Top conferences
• ITC – International Test Conference
• DATE – Design, Automation and Test in Europe Conference
• GECCO – Genetic and Evolutionary Computation Conference
• CEC/WCCI – World Congress on Computational Intelligence
• PPSN - Parallel Problem Solving From Nature
Current funded projects of the proposer related to the proposal: • The proposer is collaborating with Infineon on the subjects listed in the proposal: two contracts have been signed, and a third extension is currently under discussion; a joint paper has been published at ITC, another has been submitted, and others are in preparation.
• The proposer collaborated with SPEA under the umbrella contract “Colibri”. Such a contract is likely to be renewed on precisely the topics listed in the proposal.
Possibly involved industries/companies: The CAD Group has a long record of successful applications of intelligent systems in several different domains. For the specific activities, the list of possibly involved companies includes: SPEA, Infineon (through the Ph.D. student Niccolò Bellarmino), ST Microelectronics, and Comau (through the Ph.D. student Eliana Giovannitti)

Title: Promoting Diversity in Evolutionary Algorithms
Proposer: Giovanni Squillero
Group website: http://www.cad.polito.it/
Summary of the proposal: Computational intelligence (CI) in general, and evolutionary computation (EC) in particular, is experiencing a peculiar moment. On the one hand, fewer and fewer scientific papers focus on EC as their main topic; on the other hand, traditional EC techniques are routinely exploited in practical activities that are filed under different labels. Divergence of character, or, more precisely, the lack of it, is widely recognized as the single most impairing problem in the field of EC. While divergence of character is a cornerstone of natural evolution, in EC all candidate solutions eventually crowd the very same areas of the search space; such a “lack of speciation” was pointed out in Holland's seminal work back in 1975. It is usually labeled with the oxymoron “premature convergence” to stress the tendency of an algorithm to converge toward a point where it was not supposed to converge in the first place. The research activity would tackle “diversity promotion”, that is, either “increasing” or “preserving” diversity in an EC population, from both a practical and a theoretical point of view. It will also include the related problems of defining and measuring diversity.
Research objectives and methods: This research started about a decade ago with “A novel methodology for diversity preservation in evolutionary algorithms” (G. Squillero, A. Tonda; GECCO Companion; 2008) and has grown more active with every passing year. The proposer organized workshops on diversity promotion in 2016 and 2017; moreover, he presented tutorials on the subject at IEEE CEC 2014, PPSN 2016, and ACM GECCO 2018. Recently, together with Dirk Sudholt, a scholar devoted to the theoretical aspects of EAs, the research has incorporated rigorous runtime analyses; joint tutorials on “Theory and Practice of Population Diversity in Evolutionary Computation” have been accepted at ACM GECCO 2020 and ACM GECCO 2022 (to be presented in July). The primary research objective will be to choose a possible definition of diversity, and to analyze and develop well-known, highly effective, general-purpose methodologies able to promote it. That is, methodologies like the “island model”, which are not linked to a specific implementation nor to a specific paradigm but are able to modify the whole evolutionary process. Then the study would examine why and how the divergence of character works in nature, and find analogies in the artificial environment. Such ideas would first be tested on reduced environments and problems used by scholars. Later, they will be generalized to wider scenarios, broadening their applicability for practitioners. Rigorous runtime analyses would be used to assess the proposed methodologies. As “premature convergence” is probably the single most impairing problem in the industrial application of EC, any methodology able to ease it would have a tremendous impact. To this end, the proposed line of research is generic and deliberately un-focused, so as not to limit the applicability of the solutions. However, the research will explicitly consider domains where the proposer has some experience. Namely:
• CAD Applications, mostly related to the generation of Turing-complete assembly programs for test and validation of microprocessors.
• Evolutionary Machine Learning, that is mostly EC techniques used to complement traditional ML approaches.
• Computational Intelligence and games
Outline of work plan: The first phase of the project shall consist of an extensive experimental study of existing diversity preservation methods across various global optimization problems. MicroGP, a general-purpose EA, will be used to study the influence of various methodologies and modifications on the population dynamics. Solutions that do not require the analysis of the internal structure of the individual (e.g., Cellular EAs, Deterministic Crowding, Hierarchical Fair Competition, Island Models, or Segregation) shall be considered. This study should allow the development of a possibly new, effective methodology able to generalize and coalesce most of the cited techniques.
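As a minimal illustration of the diversity machinery under study (fitness sharing is one of the genotype-level methods mentioned later in this plan; the parameters and the toy OneMax fitness are invented, and an actual MicroGP integration would look different), the following Python sketch computes a simple genotype-level diversity measure and shared fitness:

import itertools

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def mean_pairwise_distance(population):
    pairs = list(itertools.combinations(population, 2))
    return sum(hamming(a, b) for a, b in pairs) / len(pairs)

def shared_fitness(ind, population, raw_fitness, sigma=2):
    """Divide raw fitness by the niche count within radius sigma."""
    niche = sum(max(0.0, 1 - hamming(ind, other) / sigma) for other in population)
    return raw_fitness(ind) / niche

pop = ["0000", "0001", "1111", "1110"]
raw = lambda ind: ind.count("1")  # toy OneMax fitness
print(f"diversity: {mean_pairwise_distance(pop):.2f}")
print([round(shared_fitness(i, pop, raw), 2) for i in pop])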
During the first year, the candidate will take a course in Artificial Intelligence, and all Ph.D. courses of the educational path on Data Science. Additionally, the candidate is required to learn the Python language.
Starting from the second year, the research activity shall include Turing-complete program generation. The candidate will move to MicroGP v4, the new, Python version of the toolkit under active development. That would also ease the comparison with existing state-of-the-art toolkits, such as inspyred and deap. The candidate will try to replicate the work of the first year on much more difficult genotype-level methodologies, such as Clearing, Diversifiers, Fitness Sharing, Restricted Tournament Selection, Sequential Niching, Standard Crowding, Tarpeian Method, and Two-level Diversity Selection.
At some point, probably toward the end of the second year, the new methodologies will be integrated into the Grammatical Evolution framework developed at the Machine Learning Lab of the University of Trieste. GE allows a sharp distinction between phenotype, genotype, and fitness, creating an unprecedented test bench (Squillero is already collaborating with prof. Medvet on these topics; see “Multi-level diversity promotion strategies for Grammar-guided Genetic Programming”, Applied Soft Computing, 2019).
A remarkable goal of this research would be to link phenotype-level methodologies to genotype measures.
Expected target publications: Journals with impact factors
• ASOC - Applied Soft Computing
• ECJ - Evolutionary Computation Journal
• GPem - Genetic Programming and Evolvable Machines
• Informatics and Computer Science Intelligent Systems Applications
• IS - Information Sciences
• NC - Natural Computing
• TCIAIG - IEEE Transactions on Computational Intelligence and AI in Games
• TEC - IEEE Transactions on Evolutionary Computation
Top conferences
• ACM GECCO - Genetic and Evolutionary Computation Conference
• IEEE CEC/WCCI - World Congress on Computational Intelligence
• PPSN - Parallel Problem Solving From Nature
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: The CAD Group has a long record of successful applications of evolutionary algorithms in several different domains. For instance, the on-going collaboration with STMicroelectronics on the test and validation of programmable devices exploits evolutionary algorithms and would benefit from the research. Squillero is also in contact with industries that actively consider exploiting evolutionary machine learning to enhance their biological models, for instance, KRD (Czech Republic), Teregroup (Italy), and BioVal Process (France).

Title: Scalable architectures for neural-symbolic scene interpretation and generation
Proposer: Fabrizio Lamberti, Lia Morra
Group website: http://grains.polito.it
Summary of the proposal: Deep neural networks (DNNs), despite achieving unprecedented success in most computer vision tasks, lack desirable properties including top-down control, transparency, and generalization to unseen data. Neural-symbolic techniques combine DNNs for (sub-symbolic) representation learning with (symbolic) Knowledge Representation and Reasoning techniques. To achieve this, the Knowledge Base (KB) is grounded (i.e., mapped) in a tensor space. These techniques are especially suited to visual tasks that combine efficient pattern recognition with higher cognitive capabilities, such as scene graph generation (SGG), event detection in videos, and procedural generation of 3D scenes. Besides the ability to explicitly represent and manipulate abstract concepts in the KB, neural-symbolic architectures can encode prior knowledge, e.g., logical axioms, which improves learning from small-scale, biased, incomplete, or conflicting reference standards, as is often the case in SGG. Despite their potential, neural-symbolic architectures have reached limited diffusion so far, due to issues related to scalability and the lack of demonstrated applications. The objective of this Ph.D. project is thus to improve the scalability and ease of training of neural-symbolic architectures and to investigate their application to large-scale benchmarks and realistic tasks, including image classification, scene graph generation, and the procedural generation of 3D and image content.
Research objectives and methods: Neuro-symbolic techniques have already been proposed for semantic image interpretation [1,2,3]. The neural-symbolic component is typically placed on top of representations learnt by DNNs: for instance, SGG architectures include an object detector, followed by a visual relationship detection module. Preliminary work has proven the feasibility of jointly training the neural-symbolic and representation learning components [3]. Yet, research remains limited on how to effectively unify the two components for various tasks and on how to find the best trade-off between modularity and end-to-end training (in terms of efficiency and accuracy). Additionally, scaling to large datasets remains challenging. For instance, many methods rely on a predefined KB which includes declarative and prior knowledge in the form of logical axioms. As only the groundings are learnt from data, a new KB must be manually defined for each dataset. To progress towards more scalable and widely applicable techniques, the following objectives are identified:
Obj 1) Develop techniques and tools to design general purpose KBs that can be shared among multiple tasks or datasets.
Obj 2) Design and validate neural-symbolic architectures for semantic image interpretation.
Obj 3) Design and validate generative models that generate an image from a structured latent model (e.g., a scene graph): solving the inverse problem of mapping the scene graph back to the image domain is an important stepping stone towards semi-supervised learning. Besides, it has several important practical applications, e.g., the automatic generation of computer graphics content.
Obj 4) Investigate the feasibility of learning the KB in an unsupervised or semi-supervised fashion from raw data.
[1] I. Donadello et al., Compensating Supervision Incompleteness with Prior Knowledge in Semantic Image Interpretation, IJCNN, 2019
[2] S. Badreddine et al., Logic Tensor Networks, Artificial Intelligence, 2021
[3] F. Manigrasso et al., Faster-LTN: a neuro-symbolic, end-to-end object detection architecture, ICANN, 2021
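As an illustration of how a Knowledge Base can be grounded in a tensor space, here is a minimal numpy sketch in the spirit of Logic Tensor Networks [2]: the truth degree of the axiom "forall x: Cat(x) -> Animal(x)" is computed from differentiable groundings of its predicates. The logistic predicates and the Reichenbach fuzzy implication are illustrative stand-ins for real DNN backbones.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))   # tensor embeddings of 100 objects

def predicate(weights, bias=0.0):
    # A predicate is grounded as a function returning truth degrees in [0, 1].
    return lambda e: 1.0 / (1.0 + np.exp(-(e @ weights + bias)))

cat = predicate(rng.normal(size=8))
animal = predicate(rng.normal(size=8))

# Reichenbach fuzzy implication: I(a, b) = 1 - a + a * b
implication = 1.0 - cat(x) + cat(x) * animal(x)

# Universal quantification approximated by the mean truth degree;
# (1 - satisfaction) can serve as a loss to train the predicate groundings.
axiom_satisfaction = implication.mean()
print(axiom_satisfaction)
```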
Outline of work plan: Phase 1: the Ph.D. candidate will start with a survey of the relevant literature. Additionally, he/she will strengthen core competencies in deep learning, statistical relational reasoning, and representation learning. The candidate will investigate neural-symbolic architectures for visual relationship detection, which leverage a KB including prior knowledge (Obj. 2). Different techniques will be compared to compile the KB semi-automatically from existing sources, e.g., WordNet or ConceptNet (Obj. 1). These sources, however, are often noisy or incomplete, or may include irrelevant information (e.g., some properties may not be visible). Thus, the KB will need to be cleaned, or an appropriate attention mechanism will be included to select the relevant factors. An alternative or complementary direction is to define general-purpose KBs, e.g., relative to intuitive physical properties, essential for encoding spatial and visual relationships. End-to-end architectures in which all modules are jointly trained will be designed and compared in terms of effectiveness and efficiency, including the cost of training (Obj. 2), on large-scale datasets.
Phase 2: the candidate will tackle the complementary problem of procedural generation of visual content from a structured latent model, e.g., a scene graph (Obj. 3). Architectures will be trained and tested on well-defined domains, such as indoor scene generation, leveraging public or synthetic datasets.
Phase 3: finally, the candidate will evaluate the feasibility of constructing a KB that is compatible with raw observations under minimal supervision (Obj. 4). Since neural-symbolic techniques make it possible to represent KBs as neural networks, a possible strategy is to employ neural architecture search techniques. Since this is a very challenging task for which very limited literature exists, it will be tackled starting from toy problems.
At least one publication is envisioned for each phase. The candidate will have the opportunity to apply these techniques in the context of projects and research contracts carried out by the GRAINS group.
Expected target publications: Publications will target conferences (e.g., CVPR, ECCV, ICPR) and journals in the areas of machine learning and computer vision. Target journals include, e.g., IEEE Transactions on Pattern Analysis and Machine Intelligence, International Journal of Computer Vision, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Visualization and Computer Graphics, Pattern Recognition, Computer Vision and Image Understanding, Journal of Machine Learning Research.
Current funded projects of the proposer related to the proposal: Topics addressed in the proposal are related to those tackled in the following projects and contracts managed by the proposer:
- FACETS - Face Aesthetics in Contemporary Society (ERC), in which the proposers cooperate with experts in semiotics from the University of Turin to analyze social media images through DNNs
- AIBIBANK – Imaging BioBanking for Artificial Intelligence, in which the proposers are developing novel deep neural networks for mammography screening and triage. In particular, architectures characterized by different inductive biases are compared in terms of performance and interpretability.
Possibly involved industries/companies:

Title: New techniques for the Reliability Evaluation of Neural Networks running on GPUs
Proposer: Matteo Sonza Reorda
Group website: http://www.cad.polito.it
Summary of the proposal: Neural Networks (NNs) are increasingly used in many application domains where safety is crucial (e.g., automotive and robotics). In order to match the timing requirements, in most cases the NN is executed on a GPU architecture. Possible faults affecting the hardware of the GPU executing the NN can severely impact the produced results. Unfortunately, we still lack a full understanding of which faults are the most critical (i.e., those that can modify the output of the NN), and of which modules in a GPU are the most sensitive. The goal of the proposed research activity is to fill this gap, trading off result accuracy against the computational effort required to simulate the effects of the faults that may affect the hardware, tracing their effects up to the application level.
Research objectives and methods: GPUs are increasingly adopted in safety-critical applications (e.g., in the automotive and robotics domains), where the probability of failures must be lower than well-defined (and extremely low) thresholds. This goal is particularly challenging, since GPUs are extremely advanced devices, built with highly sophisticated (and hence less mature) semiconductor technologies. On the other side, since these applications are often based on Artificial Intelligence (AI) algorithms, they benefit from the intrinsic robustness of such algorithms, at least with respect to some faults. Unfortunately, given the complexity of these algorithms and of the underlying architectures, an extensive analysis to understand which faults/modules are particularly critical is still missing. The planned research activities aim first at exploring the effects of faults affecting the hardware of the GPU implementing the NN. Experiments will study the effects of the considered faults on the results produced by the NN. This study will mainly be performed resorting to fault injection experiments. In order to keep the computational effort reasonable, different solutions will be considered, combining simulation- and emulation-based fault injection with multi-level approaches. The trade-off between the accuracy of the results and the required computational effort will also be evaluated.
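As a simplified software analogue of such campaigns, the sketch below injects single-bit flips into the float32 parameters of a toy computation and counts the output-changing faults; a real campaign would instead target GPU microarchitectural models (e.g., FlexGripPlus) or injection tools (e.g., NVbitFI), so everything here is illustrative.

```python
import random
import struct
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32 value, emulating a single-bit fault in a
    memory element holding a network parameter."""
    as_int = struct.unpack('<I', struct.pack('<f', float(value)))[0]
    return struct.unpack('<f', struct.pack('<I', as_int ^ (1 << bit)))[0]

def inject_fault(weights):
    faulty = weights.copy()
    idx = tuple(random.randrange(s) for s in weights.shape)
    faulty[idx] = flip_bit(faulty[idx], random.randrange(32))
    return faulty

# Toy campaign: compare golden vs. faulty outputs to classify each fault
# as masked/tolerable (below threshold) or critical (output-changing).
w = np.random.rand(16, 16).astype(np.float32)
golden = w.sum()
critical = sum(abs(inject_fault(w).sum() - golden) > 1e-3
               for _ in range(1000))
print(f"{critical}/1000 injections visibly changed the (toy) output")
```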
Outline of work plan: The proposed plan of activities is organized in the following phases:
- phase 1: the student will first study the state of the art and the literature in the area of NNs, their implementation on GPUs and their applications. At the same time, the student will become familiar with open source GPU models (e.g., FlexGripPlus) and simulation environments (e.g., NVbitFI). Suitable cases of study will also be identified, whose reliability and safety could be analyzed with respect to faults affecting the underlying hardware.
- phase 2: suitable solutions to analyze the impact of faults on GPUs will be devised and prototypical environments implementing them will be put in place.
- phase 3: based on the results of a set of fault injection campaigns performed to assess the reliability and safety of the selected cases of study, a detailed analysis leading to the identification of the most critical faults/components will be carried out.
Phases 2 and 3 will also include dissemination activities, based on writing papers and presenting them at conferences. We also plan a strong cooperation with the researchers of the Federal University of Rio Grande do Sul (Brazil), who have special expertise in reliability evaluation, and with NVIDIA engineers.
Expected target publications: Papers at the main international conferences related to test, reliability and safety (e.g., ETS, ATS, VTS, IOLTS, ITC, DSN). Papers on the main IEEE journals related to design, test and reliability (e.g., IEEE Design and Test, IEEE Transactions on VLSI, IEEE Transactions on CAD, IEEE Transactions on Reliability, IEEE Transactions on Computers).
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: NVIDIA

Title: Quantum Machine Learning Applications and Algorithms
Proposer: Bartolomeo Montrucchio
Group website: http://grains.polito.it
Summary of the proposal: Quantum Computing (QC) is a relatively new research field, mainly confined to physics departments until recent years. In 2016, IBM made the first quantum computer publicly accessible through the cloud, setting a milestone in this field and opening the topic to the computer science domain. The main target of QC engineers will be the analysis and development of new algorithms, as well as of new technologies for building such quantum computers. Along with IBM, several other companies and startups are developing their own development frameworks, boosting interest in several fields. The Ph.D. candidate will be required to adopt a strongly interdisciplinary approach to work on machine learning on quantum computers, since quantum mechanics must be considered together with specific computer engineering techniques such as, just as examples, artificial intelligence and computer security.
Research objectives and methods: Being a totally new paradigm, quantum computing is going to be a challenge for engineers, who will not only have to re-implement classical algorithms in a quantum way, but also explore uncharted paths in this new way of representing and processing information. In the last three years, QC companies and research institutes have come up with different software stacks, appealing to a wide spectrum of possible users, from Machine Learning to Optimization to Material simulation. These companies are trying to provide programmers with pseudo-standard APIs like the ones already available for conventional computers. The analysis and possibly the development of new algorithms will therefore be the final research objective of this activity. Since the QC landscape is in continuous evolution thanks to hardware improvements, the Ph.D. student should be able to quickly adapt to these changes, carefully studying and comparing APIs for a given application domain. Therefore, the final purpose of the work will be to understand how to apply this quite new mindset in the computer engineering environment. Industrial applications will receive particular attention, since industries will be the first to be involved in the QC revolution. In particular, problems like resource allocation and task scheduling could be analyzed, as well as novel Image Processing techniques with applications in the medical or manufacturing domain. Also, fault tolerance in algorithms will be considered, since the fault model of quantum computers is quite different from that of standard electronic computers. Since QC evolves very quickly, the most interesting research problems will be identified together, among those cited, at the beginning of the activity. Knowledge of Python and quantum mechanics is useful, but not mandatory.
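To give a flavour of the kind of APIs mentioned above, here is a minimal sketch using IBM's Qiskit (one example framework among many) to prepare and inspect an entangled Bell state.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Prepare the Bell state (|00> + |11>)/sqrt(2).
qc = QuantumCircuit(2)
qc.h(0)      # Hadamard puts qubit 0 into superposition
qc.cx(0, 1)  # CNOT entangles qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}
```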
Outline of work plan: The work plan is structured in the three years of the Ph.D. program:
1- In the first year, the Ph.D. student should improve his/her knowledge of quantum computing and technology, particularly since quantum mechanics and quantum computing are typically not covered in the previous curriculum; he/she should also follow most of the required courses at Politecnico during the first year. At least one or two conference papers will be submitted during the first year, presented by the Ph.D. student him/herself and based on a preliminary study of algorithms running on the envisioned platforms.
2- In the second year, the work will focus both on designing and implementing new algorithms and on preparing a first journal paper, together with another conference paper. Interdisciplinary aspects will also be considered. Teaching credits will also be finalized.
3- In the third year, the work will be completed with at least one publication in a selected journal, summarizing the results of the implementation of the algorithms on the platforms and technologies selected as the most promising. Participation in the preparation of proposals for funded projects will also be taken into consideration.
Expected target publications: The target publications will be the main conferences and journals related to quantum computing. Since at the moment there are only a few of them in the computer engineering field, the choice will be made by selecting, where possible, those linked to IEEE and ACM, which have already started publishing specifically on QC. It is important to note that interdisciplinary aspects will be considered fundamental, since QC is now very useful for solving problems coming from many research fields.
Current funded projects of the proposer related to the proposal: In the last three years, a funded collaboration with TIM has been carried out on these topics, in close collaboration with Fondazione Links.
Possibly involved industries/companies: Fondazione Links

Title: Modality-Agnostic Deep Learning
Proposer: Luca Cagliero
Group website: https://dbdmg.polito.it
Summary of the proposal: MultiModal Deep Learning (MMDL) aims at gaining insights into various combinations of media types such as text, audio, video, physiological signals, facial expressions, and body gestures. Within this research field, a relevant effort has been devoted to extending Deep Natural Language Processing models such as Transformers to handle not only raw textual documents but also additional data sources (e.g., images, audio speech). As a drawback, most of the existing MMDL techniques are tailored to specific combinations of input/output data types. Modality-Agnostic Deep Learning (MADL) focuses on relaxing these dependences, thus making the devised solutions portable to different scenarios, domains, and tasks. The Ph.D. candidate will study, design, and develop new MADL techniques and will explore their applicability to various real-world contexts. During the Ph.D., the candidate will investigate the extension of state-of-the-art Deep Learning architectures designed for MMDL towards additional input and output types, address the problem of domain adaptation, and seek new, innovative solutions to the MADL problem.
Research objectives and methods: State-of-the-art Deep Learning architectures extract meaning from very large quantities of data by leveraging their inherent data structure. The data-driven inference process based on Neural Networks relies on simplifying assumptions, called inductive biases, which reflect the underlying data characteristics and, indirectly, their modality of acquisition. MultiModal Deep Learning (MMDL) approaches aim at blending data of multiple data types to overcome the inherent architectural and performance limitations of traditional single-modal solutions. However, their scope is often limited to specific combinations of data types and to particular application contexts. Modality-Agnostic Deep Learning (MADL) focuses on relaxing the dependence of MMDL models on the input/output data types. The purpose is to make the devised solutions more flexible and easily portable to different scenarios, domains, and tasks. Currently, MADL techniques are challenged by:
- The limited number of existing DL architectures: few attempts to solve the MADL problem have been presented in literature. Hence, the number of unexplored directions is potentially large.
- Applications to a rather small number of tasks, domains, and applications: compared to the rapidly increasing number of data combinations, tasks, and application domains already explored by the MMDL community, the use of MADL is still quite preliminary.
- The complexity of domain shift, which requires both cross-modal content adaptation at the source level and the effective handling of unseen domains at the target level.
The goal of this Ph.D. research proposal is to deeply explore the extension of existing MMDL techniques towards input/output data type independence, with particular attention paid to the extension of state-of-the-art Deep Natural Language Processing architectures such as Transformers and Seq2Seq models. It also aims at investigating novel approaches to MADL and domain shift, and their application to real-world scenarios, domains, and tasks (e.g., aspect-based sentiment analysis, summarization).
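To give a concrete flavour of the modality-agnostic idea, the following minimal numpy sketch shows the latent cross-attention mechanism underlying Perceiver-style models (cited in the work plan below); all dimensions and random weights are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def latent_cross_attention(inputs, latents, d_k=32, seed=0):
    """A fixed-size latent array attends over a flat input array of any
    modality (text tokens, image patches, audio frames): only the input
    pre-processing changes across modalities, not the architecture."""
    rng = np.random.default_rng(seed)
    w_q = rng.normal(size=(latents.shape[-1], d_k))
    w_k = rng.normal(size=(inputs.shape[-1], d_k))
    w_v = rng.normal(size=(inputs.shape[-1], d_k))
    q, k, v = latents @ w_q, inputs @ w_k, inputs @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k))
    return attn @ v  # cost is linear in the input length

image_patches = np.random.rand(4096, 64)  # e.g., flattened image patches
latents = np.random.rand(128, 96)         # fixed latent bottleneck
print(latent_cross_attention(image_patches, latents).shape)  # (128, 32)
```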
Outline of work plan: PHASE I (1st year)
Step 1: Study of the state-of-the-art MMDL solutions. Overview of the related tasks and benchmarks.
Step 2: Retrieval of additional input data sources (with different modalities). Envisioning of new use cases that would require the integration of additional input data sources or the adaptive generation of new output types.
Step 3: Design and testing of the extended MMDL solutions. Cross-modal content adaptation at the source level.
PHASE II (2nd year):
Step 1: Study of state-of-the-art MADL solutions such as Perceiver (https://arxiv.org/abs/2103.03206) and PerceiverIO (https://arxiv.org/abs/2107.14795). Overview of the related tasks and benchmarks.
Step 2: Design of extensions/variants of the existing solutions. Study of unsupervised cross-domain adaptation methods.
Step 3: Application of the MADL models to new, challenging scenarios (e.g., news content curation, learning analytics).
PHASE III (3rd year):
Step 1: Study, design, and development of new, original MADL architectures.
Step 2: Proposal of new benchmarks for testing the performance of MADL solutions.
During all three years, the candidate will have the opportunity to attend top-quality conferences, to collaborate with researchers from different countries, and to participate in NLP research challenges such as the Financial Narrative Processing Workshop (http://wp.lancs.ac.uk/cfie/shared-tasks/) and the TREC Podcast Track (https://trecpodcasts.github.io/).
Expected target publications: Conferences: ACM SIGIR, EMNLP, ACL, COLING, IEEE ICDM
Journals: ACM TOIS, ACM TIST, IEEE/ACM TALP, IEEE TKDE, ACM TKDD, Elsevier ESWA, IEEE TETC, Springer DMKD
Current funded projects of the proposer related to the proposal: - PRIN Italian Project Bando 2017 "Entrepreneurs As Scientists: When and How Start-ups Benefit from A Scientific Approach to Decision Making". The activity will be focused on the analysis of questionnaires, reviews, and technical reports related to training activities.
Possibly involved industries/companies:

Title: Intermittent Computing for Batteryless Systems
Proposer: Massimo Poncino
Group website: eda.polito.it
Summary of the proposal: The envisioned IoT scenario, in which billions or trillions of tiny devices compute, sense, and learn while being small, cheap, and operating for the entire lifetime of the things they monitor, implies a batteryless future. Batteries wear out, are expensive and bulky, and replacing or recycling the number of batteries needed to power billions of sensors is neither feasible nor environmentally responsible. A novel paradigm called intermittent computing is now emerging, in which these nodes are powered solely by environmental sources. The devices have little or no energy storage, so that energy harvesting becomes essential, supply voltages, which are normally stable and clean, become intermittent, and power failures become common events. When they occur, an appropriate mechanism to save the current execution state on a non-volatile memory must be set up. This mode of operation has implications in every phase of their design: architectures, hardware components, software implementation, and communication. This research project focuses on the exploration of all the aspects above in order to deploy sophisticated intermittent computing solutions.
Research objectives and methods: The research objective is to develop a set of techniques that address the various issues involved in intermittent systems.
- Development of software/hardware modules for the prediction of environmental quantities. Having even a rough long-term estimation of the amount of available energy can help to drive decisions on future backups. Notice that the period (alternation of on-off intervals) of these systems tends to be small (in the order of ms), whereas the environmental quantities have much larger time constants (many seconds), so their prediction will serve as a long-term estimate.
- Intermittent-aware software. This will encompass either (1) code instrumentation to support state recovery after a power-off (a minimal checkpointing sketch is given at the end of this section), or (2) the re-design of algorithms that can provide approximate or partial outputs when interrupted (anytime algorithms). The latter option is consistent with the nature of typical IoT applications, in which it is possible to return a partial result without needing the termination of the full program.
- Time vs. quality models of anytime algorithms. Anytime algorithms exhibit a monotonically increasing quality of the results (e.g., the accuracy of a classification) over time. It is essential to pre-characterize an anytime algorithm by providing a quality curve that can be used to estimate the quality of the output.
- Energy and functional simulators for the whole system. This is essential for a quick prototyping and what-if analysis of the various parameters involved in the different techniques.
While there are some research works addressing some of these issues, they mostly focus on the correctness of the execution rather than on energy efficiency. Secondly, most works rely only on abstract models of the architecture, of the computation, and of the environmental sources, and lack validation. For this reason, one strength of our research will be the deployment on an actual hardware platform, which will allow us to provide an accurate assessment of the proposed techniques.
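As a minimal illustration of the state-recovery mechanism mentioned above, the sketch below checkpoints the execution state of a long-running task to non-volatile storage so that it can resume after a power failure; the file-based stand-in for non-volatile memory and all names are assumptions made for illustration.

```python
import json
import os

CHECKPOINT = "state.json"  # stands in for non-volatile memory (e.g., FRAM)

def load_state():
    """Resume from the last checkpoint after a power failure, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "acc": 0}

def save_state(state):
    # Write-then-rename makes the checkpoint atomic: a power failure during
    # the write leaves the previous consistent checkpoint intact.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

state = load_state()
while state["i"] < 1_000_000:     # long-running sensing/compute task
    state["acc"] += state["i"]
    state["i"] += 1
    if state["i"] % 10_000 == 0:  # checkpoint interval: a knob trading
        save_state(state)         # progress loss for energy spent on backups
```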
Outline of work plan: 1st year. The candidate will study state-of-the-art techniques, in particular: i) existing architectures for intermittent systems, ii) existing strategies for the energy management of these systems, both in hardware and software, iii) existing simulation platforms, and iv) existing commercial platforms. Before the end of the 1st year, a target system will be identified that is both available as a real platform and supported by a simulator. Moreover, the target open research problems will be identified.
2nd year. Based on the outcomes of the first year, specific techniques will be evaluated on the target platform, first by simulating them and then by porting the most promising ones to the physical platform.
3rd year. The methodology and the algorithms developed in the previous years will be validated to prove their robustness and scalability when applied to realistic workloads executed on the platform.
Expected target publications: ● IEEE Transactions on CAD
● IEEE Transactions on Computers
● IEEE Journal on Internet of Things
● IEEE Transactions on Circuits and Systems (I and II)
● IEEE Design and Test of Computers
● IEEE Sensors Journal
● ACM Transactions on Embedded Computing Systems
● ACM Transactions on Design Automation of Electronic Systems
● ACM Transactions on Internet of Things
Current funded projects of the proposer related to the proposal: ● End-to-end digitalised production testbeds (EIT X-KIC)
● AMBEATion (H2020, MSCA-RISE)
● MADEIN4 (H2020, ECSEL)
● DTWIN (Regional Funds)
Possibly involved industries/companies: ● STMicroelectronics
● Reply

Title: When the cloud meets the edge: Reusing your available resources with opportunistic datacenters
Proposer: Fulvio Risso
Group website: http://netgroup.polito.it
Summary of the proposal: Cloud-native technologies are increasingly deployed at the edge of the network, usually through tiny datacenters made of a few servers that maintain the main characteristics (powerful CPUs, high-speed network) of the well-known cloud datacenters. However, in most domestic or enterprise environments a huge number of traditional computing/storage devices are available, such as desktop/laptop computers, embedded devices, and more, which run mostly underutilized. This project proposes to aggregate the above available hardware into an “opportunistic” datacenter, hence replacing the current micro-datacenters at the edge of the network, with consequent potential savings in energy and CAPEX. This would transform all the current computing hosts, including their operating system software, into datacenter nodes. Furthermore, in order to further leverage the above infrastructure, this project would consider evolving the current software into a cloud-native approach, in which a computing device can borrow resources from a neighbor node, with potential additional advantages. The current Ph.D. proposal aims at investigating the problems that may arise in the above scenario, such as defining a set of algorithms that allow orchestrating jobs on an “opportunistic” datacenter, as well as building a proof of concept showing the above system in action.
Research objectives and methods: The objectives of the present research are the following.
• Evaluate the economic potential impact (in terms of hardware expenditure, i.e., Capital Expenditures - CAPEX, and energy savings, i.e., Operating Expenses - OPEX) of such a scenario, in order to validate the economic sustainability and the impact in terms of energy consumption.
• Extend existing operating systems (e.g., Linux) with lightweight distributed processing/storage capabilities, in order to allow current devices to host “foreign” applications (when resources are available), or to borrow resources from other machines and delegate the execution of some of their tasks to the remote device.
• Define the algorithms for job orchestration on the “opportunistic” datacenter, which may differ considerably from traditional orchestration algorithms (limited network bandwidth between nodes; highly different node capabilities in terms of CPU/RAM/etc.; reliability considerations; the necessity to leave free resources to the desktop owner; etc.). A minimal scheduling sketch is given at the end of this section.
• Define a strategy to transform traditional applications (e.g., desktop applications) into cloud-native software, which can work on a cloud-native infrastructure as well, along the lines of the “Kubernetes on Desktop” project, while preserving their “desktop” look and feel. This would be required for running the current applications on the cloud-native platform defined above.
Finally, a use case will be defined in order to validate the above findings in more realistic conditions. One possible choice is a university lab with many desktop computers, complemented by possible “friendly users” among the students, who contribute with their laptops.
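As a minimal illustration of the orchestration problem referenced above, the sketch below greedily places jobs on heterogeneous nodes while leaving a share of each node's resources reserved for its owner; node names, numbers, and the heuristic itself are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu: float            # total cores
    mem: float            # total GiB
    owner_reserve: float  # fraction always kept free for the desktop owner

def free(node, used):
    return (node.cpu * (1 - node.owner_reserve) - used[node.name][0],
            node.mem * (1 - node.owner_reserve) - used[node.name][1])

def schedule(jobs, nodes):
    """Greedy best-fit: place each job on the feasible node with the most
    spare CPU, never touching the owner's reserved share."""
    used = {n.name: [0.0, 0.0] for n in nodes}
    placement = {}
    for job, (cpu, mem) in jobs.items():
        feasible = [n for n in nodes
                    if free(n, used)[0] >= cpu and free(n, used)[1] >= mem]
        if not feasible:
            placement[job] = None  # pending: no opportunistic capacity left
            continue
        best = max(feasible, key=lambda n: free(n, used)[0])
        used[best.name][0] += cpu
        used[best.name][1] += mem
        placement[job] = best.name
    return placement

nodes = [Node("lab-desktop-1", 8, 16, 0.5), Node("laptop-7", 4, 8, 0.7)]
print(schedule({"web": (2, 2), "batch": (3, 4), "db": (4, 8)}, nodes))
```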
Outline of work plan: The proposed research plan, which covers a subset of the possible objectives listed in the previous section, is structured as follows (in months):
• [1-10] Economic and energy impact of opportunistic datacenters
o Real-world measurements in different environment conditions (e.g., University lab; domestic environment; factory) about computing characteristics and energy consumption
o Creation of a model to assess potential savings (economic/energy)
o Paper writing
• [11-26] Job orchestration on opportunistic datacenters
o State of the art of job scheduling on distributed infrastructures (e.g., including edge computing)
o Real-world measurements of the features required for distributed orchestration algorithms (CPU/memory/storage consumption; device availability; network characteristics)
o Definition of a scheduling model that achieves the foreseen objectives, evaluated with simulations
o Paper writing
• [27-34] Experimenting with opportunistic datacenters
o Proof of concept of the defined orchestration algorithm on real platforms
o Real-world measurements of the behavior of the above algorithm in a single use-case (e.g., University lab)
o Paper writing
• [30-32] (in parallel) Transforming traditional applications into cloud-native services
o This task aims at addressing the problem of using realistic applications in the real-world validation scenario presented above. This task is delayed in the hope that the technological evolution will bring to life a new set of cloud-native desktop applications, hence making this task unnecessary.
• [35-36] Writing Ph.D. dissertation.
Expected target publications: Top conferences:
• USENIX Symposium on Operating Systems Design and Implementation (OSDI)
• USENIX Symposium on Networked Systems Design and Implementation (NSDI)
• International Conference on Computer Communications (INFOCOM)
• ACM European Conference on Computer Systems (EuroSys)
• ACM Symposium on Principles of Distributed Computing (PODC)
• ACM Symposium on Operating Systems Principles (SOSP)
Journals:
• IEEE/ACM Transactions on Networking
• IEEE Transactions on Computers
• ACM Transactions on Computer Systems (TOCS)
Magazines:
• IEEE Computer
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: Italdesign Giugiaro (IDG)

Title: Human-Centered AI in the Internet of Things
Proposer: Luigi De Russis, Fulvio Corno
Group website: https://elite.polito.it
Summary of the proposal: Artificial Intelligence (AI) systems are widespread in many aspects of society. Machine Learning has enabled the development of algorithms able to automatically learn from data without human intervention. While this leads to many advantages in decision processes and productivity, it also presents drawbacks, such as disregarding end-user perspectives and needs. Human-Centered AI (HCAI) emerged as a novel conceptual framework [1] for reconsidering the centrality of humans while keeping the benefits of AI systems. To do so, the framework builds on the idea that a system can simultaneously exhibit high levels of automation and high levels of human control. The Ph.D. proposal extends the research on HCAI to smart environments, i.e., AI-powered environments equipped with Internet-of-Things devices (including sensors, actuators, mobile interfaces, service robotics, etc.). In such environments, AI systems typically automate the activities that people perform; users, however, want to remain in control. This generates a conflict that could be tackled by adopting the HCAI framework. This proposal aims at designing, developing, and evaluating concrete HCAI systems to support users of IoT-enabled environments. It also aims at extending the understanding of the HCAI framework’s principles and providing valuable lessons for different fields.
[1] Ben Shneiderman (2020) Human-Centered Artificial Intelligence: Reliable, Safe and Trustworthy, International Journal of Human-Computer Interaction, 36:6, 495-504
Research objectives and methods: The main research objective is to investigate solutions for designing and developing HCAI systems in smart, IoT-enabled environments. A particular focus will be on how the adoption of the HCAI framework can bring tangible benefits to users and to the smart environment research field, while extending the research on HCAI. The research activities will mainly build on the following characteristics of the HCAI framework:
- High levels of human control and high levels of automation are possible: design decisions should give users a clear understanding of the AI system state and its choices, guided by human-centered concerns, e.g., the consequences and reversibility of errors. Well-designed automation preserves human control where appropriate, thus increasing performance and enabling creative improvements.
- AI systems should shift from emulating and replacing humans to empowering and “augmenting” people, as people are different from computers. Intelligent system designs that take advantage of unique computer features are more likely to increase performance. Similarly, designs that recognize the unique capabilities of humans will have advantages such as encouraging innovative use and supporting continuous improvement.
In particular, the Ph.D. research activity will focus on:
1) Study of AI algorithms and models, distributed architectures, and HCI techniques able to support the identification of suitable use cases for building effective and realistic HCAI systems.
2) Enhancement of the HCAI framework to include end-user personalization, e.g., as a way to recover from errors or to guide the system choices.
3) Development of strategies for dealing with de-skilling effects. Such effects may undermine the human skills that are needed when automation fails, and make it difficult for users to remain aware when some of their actions become less frequent.
Such goals will require advancement both in interfaces and interaction modalities, and in AI algorithms and their integration into user-facing smart environments.
Outline of work plan: The work plan will be organized according to the following four phases, partially overlapping.
Phase 1 (months 0-6): literature review about HCAI and smart environments; study and knowledge of AI algorithms and models, IoT devices and smart appliances, as well as related communication and programming practices and standards.
Phase 2 (months 6-18): based on the results of the previous phase, definitions and development of a set of use cases and interesting contexts to be adopted for building user-facing smart environments. Initial data collection for validating use cases, and possible applications of end-user personalization strategies.
Phase 3 (months 12-24): research, definition, and experimentation of HCAI systems in the defined smart environments and use cases, starting from the outcome of the previous phase. Such solutions will imply the design, implementation, and evaluation of distributed and intelligent systems, able to take into account users’ preferences, capabilities of a set of connected devices, as well as AI methods and algorithms.
Phase 4 (months 24-36): extension and possible generalization of the previous phase to include additional contexts and use cases. Evaluation in real settings of one or more of the realized systems over a significant amount of time.
Expected target publications: For each of the previously mentioned phases, at least one conference or journal publication is expected. Suitable venues might include: ACM CHI, ACM IUI, ACM Ubicomp, IEEE Internet of Things Journal, ACM Transactions on Internet of Things, IEEE Pervasive Computing, IEEE Transactions on Human-Machine Systems, ACM Transactions on Computer-Human Interaction.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Analysis, search, development, and reuse of smart contracts
Proposer: Valentina Gatteschi, Fabrizio Lamberti
Group website: grains.polito.it
Summary of the proposal: In recent years, Blockchain technology has received increasing interest from researchers. The global blockchain market is expected to grow from $5.8 billion in 2021 to more than $390 billion by 2028. A key advantage of blockchain is that it can run smart contracts, (small) programs that are automatically executed when some conditions occur. Once a smart contract's code is stored on the blockchain, everyone can inspect it and it can no longer be modified. Despite the statistics, a considerable portion of the wider public still does not trust blockchain technology and smart contracts, mainly due to their complexity. In fact, understanding the behavior of a smart contract requires some (good) programming skills. Furthermore, a comprehensive, easily browsable repository for retrieving smart contracts’ code, which would ease the development of new smart contracts, is missing. This proposal aims at addressing the above limitations by investigating new techniques and proposing novel approaches for the analysis/search/development/reuse of smart contracts. The achieved results could be relevant both for developers, who could find/reuse already developed smart contracts, and for people without technical skills, who could understand the behavior of (or even code) a smart contract thanks to visualization and semantic technologies, among others.
Research objectives and methods: The activities carried out in this Ph.D. program will aim at investigating existing approaches, and at devising and testing novel ones, for:
a) the analysis of smart contracts’ source code/OPCODE: the source code/OPCODE of smart contracts deployed on the blockchain mainnet/testnet will be analyzed by considering different aspects, e.g., the quality of the written code, the behavior of the smart contract, the amount of triggered transactions, its context/objective, etc. The outcome of this activity will be a methodology (or a tool) that could support developers and non-skilled people at a macro- and a micro-level. At a macro-level, it could help them to understand how smart contracts are developed/used (or have been developed/used in the past), or to detect trends. At a micro-level, it could support them in quickly understanding the behavior of a smart contract. The analysis will be performed on the Ethereum blockchain, and possibly on other blockchains, to detect, for example, whether a given blockchain is preferred by a given sector or for some specific tasks. Visual Analytics tools could also be exploited/developed to support the analysis (a minimal bytecode-retrieval sketch is given after this list).
b) the search of already developed smart contracts: a framework to categorize smart contracts analyzed in the previous phase will be designed. A methodology (or a tool) to search and retrieve the code of previously deployed smart contracts that could fulfill a given need will be devised. Semantics and Visual Analytics will be also used to support the search.
c) the development/reuse of smart contracts’ code: existing approaches for visual and natural language programming will be investigated to study their applicability to smart contracts. The result of this phase will be a methodology (or a tool) to support both programmers and non-skilled people in coding smart contracts from scratch or from existing ones (e.g., retrieved using the search engine devised in the previous phase).
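As a minimal starting point for objective (a), the sketch below retrieves a deployed contract's runtime bytecode with web3.py (assuming its v6-style snake_case API); the RPC endpoint and the contract address are placeholders that must be replaced with real ones before running.

```python
from web3 import Web3

# Placeholder RPC endpoint and contract address (illustrative only).
w3 = Web3(Web3.HTTPProvider("https://example-ethereum-node.invalid"))
address = Web3.to_checksum_address("0x" + "00" * 20)

# The deployed runtime bytecode; disassembling it into OPCODEs is the
# starting point of the macro- and micro-level analyses described above.
bytecode = w3.eth.get_code(address)
print(len(bytecode), "bytes of runtime code")
```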
Outline of work plan: The research work plan of the three-year Ph.D. program is the following:
- First year: the candidate will perform an analysis of the state-of-the-art on available methodologies/tools for analysis, search and development/reuse of code, with a particular focus on smart contracts. The candidate will also deepen his/her competences on visual analytics, semantic technologies and natural language programming, among others. After the analysis, the candidate will identify the advantages, the disadvantages and the limitations of existing solutions and will define approaches to overcome them.
- Second year: during the year, the candidate will design and develop methodologies and tools for analyzing smart contracts’ code, and will design a framework to categorize and search them.
- Third year: the third year will be devoted to the design and development of methodologies and tools enabling smart contracts reuse and coding, as well as to testing the developed tools.
The ideal candidate should have a Master Degree in Computer Science (or similar disciplines) as well as the following characteristics:
- Good programming skills in commonly used programming languages (e.g., Python, Java, C, Node.js, PHP)
- Basic/Good programming skills in blockchain-related programming languages (e.g., Solidity)
- Basic/Good knowledge of existing blockchain frameworks
- Basic/Good knowledge of machine learning techniques.
Expected target publications: IEEE Transactions on Services Computing
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Knowledge and Data Engineering
IEEE Access
Future Generation Computer Systems
IEEE International Conference on Decentralized Applications and Infrastructures
IEEE International Conference on Blockchain and Cryptocurrency
IEEE International Conference on Blockchain
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Blockchain and Industry
Proposer: Valentina Gatteschi
Group website: http://grains.polito.it/
Summary of the proposal: Researchers and industry have recently started to investigate the potential of Blockchain technology. The market for this technology is expected to grow from $5.8 billion in 2021 to more than $390 billion by 2028. The advantage of blockchain is that transactions recorded on it are irreversible, transparent, and can be performed without relying on an intermediary. At the present time, no winning blockchain framework has been identified. Instead, an increasing number of blockchain frameworks have been developed, which vary in terms of performance, consensus algorithms, openness, and adoption, among other aspects. Consequently, for a company, choosing the most suitable blockchain framework, and designing the architecture, for a use case is a complex task. In fact, when this technology is employed in real contexts, some issues emerge, which are related not only to trade-offs between privacy and transparency, or between decentralization and scalability, but also to the integration with existing systems. This proposal aims at addressing the above limitations by investigating, analyzing, and developing instruments and methodologies that could simplify the design and development of blockchain-based applications.
Research objectives and methods: The activities carried out in this Ph.D. programme will aim at investigating existing approaches, and at devising and testing novel ones, for:
a) the selection of the most suitable blockchain framework(s) for a use case, based on several constraints, e.g., the amount of privacy required, the performance needed, the characteristics of existing systems, the need to exploit smart contracts or to rely on oracles, the need to rely on more than one blockchain, etc. The outcome of this phase will be a methodology or a tool for the selection of the needed building blocks and frameworks (a toy weighted-criteria sketch is given at the end of this section).
b) the interface with existing (legacy) systems, e.g., in terms of the type and amount of data that should be stored on the blockchain, or the mechanisms that enable information stored on existing systems and on the blockchain to coexist. The outcome of this phase will be a methodology, a tool, or some guidelines to ease the integration of a blockchain-based product with the existing systems already available in the industrial environment.
c) the usability of blockchain-based systems and their acceptance by employees. The outcome of this phase will be a methodology, a tool, or some guidelines to hide the complexity that characterizes the interaction with the blockchain, while still satisfying security requirements.
In order to test the approaches devised during the Ph.D., real use cases could be used.
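As a toy illustration of the selection methodology of point (a), the sketch below ranks candidate frameworks with a weighted-criteria score; frameworks, criteria, scores, and weights are all illustrative placeholders, not an actual evaluation.

```python
# Toy weighted-criteria selection of a blockchain framework.
# All scores (1-5) and weights are illustrative placeholders.
weights = {"privacy": 0.4, "throughput": 0.3,
           "smart_contracts": 0.2, "ecosystem": 0.1}

frameworks = {
    "public-chain-A":     {"privacy": 2, "throughput": 2,
                           "smart_contracts": 5, "ecosystem": 5},
    "consortium-chain-B": {"privacy": 5, "throughput": 4,
                           "smart_contracts": 3, "ecosystem": 3},
}

def score(criteria):
    return sum(weights[c] * s for c, s in criteria.items())

for name in sorted(frameworks, key=lambda f: score(frameworks[f]), reverse=True):
    print(f"{name}: {score(frameworks[name]):.2f}")
```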
Outline of work plan: The research work plan of the three-year Ph.D. programme is the following:
- First year: the candidate will perform an analysis of the state-of-the-art on existing methodologies/tools for the selection of a blockchain framework. The candidate will also analyze existing blockchain frameworks and will compare them. He/she will also research and analyze existing use cases, in order to a) identify how blockchain technology has been integrated with existing systems, b) possibly identify trends (e.g., related to the sector, to the characteristics of the use case, etc.).
- Second year: the candidate will propose a methodology and/or develop a tool for the selection of a blockchain framework and will test the methodology/tool on several use cases. He/she will also particularly focus on the integration of a blockchain-based system with existing programs already used in an industrial context. Furthermore, he/she will also focus on the integration of different blockchain frameworks (e.g., a public blockchain for transparency, a private/consortium one for recording information related to the supply chain).
- Third year: the candidate will refine what he/she proposed during the second year, possibly by performing additional tests on new use cases, and will also work on instruments to increase the usability of blockchain-based systems.
Expected target publications: IEEE Transactions on Services Computing
IEEE Transactions on Knowledge and Data Engineering
IEEE Access
Future Generation Computer Systems
IEEE International Conference on Decentralized Applications and Infrastructures
IEEE International Conference on Blockchain and Cryptocurrency
IEEE International Conference on Blockchain
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: GPU-based Parallel Techniques for the Evaluation of Reliability Measures of Large System-on-Chip devices
Proposer: Stefano QUER, Paolo BERNARDI
Group website: http://fmgroup.polito.it/quer/
http://www.cad.polito.it/
Summary of the proposal: With the introduction of general-purpose computing on Graphics Processing Units (GPUs), many-core processing has become a reality. GPUs can concurrently execute millions of threads in a single-instruction-multiple-data fashion. This proposal investigates parallel accelerated multi- and many-core applications in the Electronic Design Automation (EDA) area, specifically within circuit simulation and fault simulation, aiming at the collection of Reliability Measures. In simulation and fault simulation scenarios, recent trends in Systems-on-Chip (SoC) circuit size, complexity, and heterogeneity highlight a growing demand for strategies that can abate computational time. For example, traditional algorithms may be inapplicable for measuring delay fault coverage in designs of multi-million gates. When timing-accurate simulation approaches are used, all current techniques quickly become inapplicable for large designs, due to high computational requirements. Even though GPU devices can achieve massive computational throughput, their architecture often poses many restrictions when mapping new algorithms onto massively parallel processing. The use of acceleration engines and the development of innovative techniques to perform a quick evaluation of the Reliability Measures of very large, complex, and heterogeneous nanometric designs is the objective of this proposal. The expectation is to build a framework that can mitigate the computational time required to perform several kinds of evaluations of reliability metrics.
Research objectives and methods: The macro-objectives that the proposers want to highlight are the following.
Simulation and Fault Simulation Speed-up
Given a circuit, if two gates are neither in the input nor in the output cone of each other, they are mutually data-independent. In this case, the order of their evaluation does not matter. We plan to use the computing power of parallel architectures to divide the design into partitions of data-independent gates and to manage them concurrently. Unfortunately, this process entails many problems. First of all, the number of gates belonging to each partition limits the amount of parallelism, and this number is irregular. Moreover, the amount of waveform storage available during the simulation is constrained on GPUs and overflows may occur during the evaluation. Furthermore, groups of independent faults should be simulated in parallel. Unfortunately, finding independent faults is equivalent to repeatedly finding a maximum clique in a graph. In turn, this problem is NP-complete and its solution requires ad-hoc platform-dependent heuristics.
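As a minimal illustration of the partitioning idea described above, the sketch below levelizes a small combinational netlist (a DAG) so that all gates within a level are mutually data-independent and could be evaluated by one parallel GPU launch; the netlist format is illustrative.

```python
from collections import defaultdict, deque

def levelize(gates):
    """Partition a combinational netlist into levels of mutually
    data-independent gates: every gate depends only on earlier levels,
    so each level can be evaluated concurrently."""
    indegree = {g: len(fanin) for g, fanin in gates.items()}
    successors = defaultdict(list)
    for g, fanin in gates.items():
        for net in fanin:
            if net in gates:
                successors[net].append(g)
            else:
                indegree[g] -= 1  # primary input: always available
    frontier = deque(g for g, d in indegree.items() if d == 0)
    levels = []
    while frontier:
        level, nxt = list(frontier), deque()
        for g in level:
            for s in successors[g]:
                indegree[s] -= 1
                if indegree[s] == 0:
                    nxt.append(s)
        levels.append(level)
        frontier = nxt
    return levels

# Gate -> list of fanin nets ('a' and 'b' are primary inputs).
netlist = {"g1": ["a", "b"], "g2": ["a"], "g3": ["g1", "g2"], "g4": ["g2"]}
print(levelize(netlist))  # [['g1', 'g2'], ['g3', 'g4']]
```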
Reliability Measures Evaluation
In simulation and fault simulation, data flow tracking has become a very important reliability measure in the context of automotive system-level testing. Dynamic taint analysis marks problematic data fields as tainted, and then tracks their propagation through the code. One of the main performance penalties comes from the strict coupling of the program execution and the data flow tracking logic. To improve performance in this domain, we plan to decouple data flow tracking from program execution, by introducing the real silicon device in the exploration loop. We also intend to extract information from the chip, and to parallelize and pipeline the data flow tracking methods. The idea is to use threads to form multiple pipeline stages working in parallel. To reduce overheads, we will also push forward data compression and buffering thread pools.
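As a minimal illustration of the taint-propagation idea (reduced here to a tiny shadow-state class rather than a silicon-in-the-loop pipeline), the sketch below marks a data field as tainted and follows it through arithmetic; all names are illustrative.

```python
class Tainted:
    """Each value carries a taint flag that propagates through operations,
    so problematic data fields can be followed as they flow through the
    computation."""
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        o_value = other.value if isinstance(other, Tainted) else other
        o_taint = other.tainted if isinstance(other, Tainted) else False
        return Tainted(self.value + o_value, self.tainted or o_taint)

    __radd__ = __add__

sensor = Tainted(42, tainted=True)  # field marked as problematic
offset = Tainted(7)
result = sensor + offset + 1
print(result.value, result.tainted)  # 50 True: taint reached the output
```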
Outline of work plan: The work plan is structured over the three years of the Ph.D. program.
Year 1
The Ph.D. student will improve his/her knowledge of writing parallel/concurrent software with advanced languages, such as C++, Python, or CUDA. The student will study the portability of standard algorithms on parallel architectures, and he/she will deepen his/her knowledge on testing and software flow tracking. Detailed activities include:
• Identification of an adequate set of circuit benchmarks (large and SoC oriented).
• Realization of a sequential simulator prototype.
• Analysis of the bottlenecks and conceptualization of new techniques to overcome them, by:
- Exploiting parallelization.
- Dealing with approximated and estimated methods.
He/she will also follow most of the teaching activities required by the Ph.D. curriculum. During this preliminary research activity, we will mainly target IEEE or ACM conference papers.
Year 2
The work will concentrate on the implementations of an initial version of a parallel simulator and a fault simulator. On the set of identified circuit benchmarks, the student will:
• Perform comparisons with traditional methods, in terms of cost-benefit trade-off.
• Conceive reliability measurement techniques that could overcome the limits of exact strategies by introducing real silicon in automated flows, for example:
- To elaborate the functional flow by managing traces that come from real devices.
- To use debug features to inject faults directly in the chip.
The research will start to target journal papers. Credits for the teaching activity will be finalized. The work will mainly target software written on HPC and desktop architectures.
Year 3
The activity carried out during the second year will be consolidated, targeting more specific algorithms and optimizations. The long-term plan should include, but is not limited to:
• Optimization and evolution of the simulation and fault simulation environments to be used for real circuit netlists.
• Finalization of techniques to measure reliability metrics.
Expected target publications: Proceedings of the IEEE (Q1)
Computer (Q1)
Transactions on Computers (Q1)
IEEE Access (Q1)
Transactions on Computer-Aided Design of Integrated Circuits and Systems (Q2)
Design and Test (Q2)
International Journal of Parallel Programming (Q3)
Journal of Electronic Testing (Q3)
MDPI Electronics (Q3)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: STMicroelectronics

Title: Cybersecurity in the Quantum Era
Proposer: Antonio Lioy
Group website: security.polito.it
Summary of the proposal: Computer engineering research interest in Quantum Computing and Quantum Communication has grown significantly in recent years. Quantum computing will enhance classical computation in many areas such as Finance, AI, Chemistry, and Physics. The impact of this new paradigm on cybersecurity will be significant: the advent of Quantum Computing will endanger current public-key cryptography. Quantum communication provides approaches to enable communication between quantum and classical computers, leveraging quantum phenomena such as entanglement. One of the ultimate goals of this field is to build a Quantum Internet alongside the current classical one, to implement protocols otherwise impossible in a classical scenario. In the short to medium term, Quantum Communication and, in particular, Quantum Cryptography offer technologies that could overcome the issues coming with the quantum advent. One example is Quantum Key Distribution (QKD). Academia is actively researching this topic, and some companies (e.g., ID Quantique, MagiQ, Toshiba) already commercialise special-purpose devices implementing QKD protocols. Integrating QKD and other Quantum Cryptography-based technologies with security protocols and modern infrastructures is a challenging task, requiring efforts from security experts in addition to physicists.
Research objectives and methods: Exploring and understanding the main principles behind both Quantum Computing and Quantum Communication is the starting point of the research activity. From a security perspective, knowing the main threats posed by Quantum Computing, such as Shor's algorithm and its use to break public-key cryptographic schemes, is essential. The main goals of the research activity are the following.
1. Analysing Post-Quantum Cryptography (PQC) and Quantum Cryptography (QC) algorithms and protocols. As a result of the quantum advent, two main strategies have been proposed to mitigate the related threats: Post-Quantum and Quantum Cryptography. The analysis of both approaches is required to enhance current security solutions with the most suitable strategy for specific domains.
2. Simulating Quantum Cryptography protocols. Special-purpose quantum devices are expensive and require access to extensive network infrastructures for testing. An approach to boost the design and testing of quantum algorithms and protocols is the simulation of their quantum aspects. Several frameworks are available for this purpose (e.g. SimulaQron, NetSquid, Qiskit) and could be used to test protocols in the scope of Quantum Cryptography.
3. Integrating PQC and Quantum Cryptography with common state-of-the-art security protocols (e.g. TLS, IPsec, SSH) and modern software infrastructures in diverse domains such as Cloud-, Fog-, and Edge-computing, as well as in Network Functions Virtualisation (NFV).
4. Analysing the attacks against PQC and Quantum Cryptography, and the possible countermeasures. PQC suffers from a plethora of side-channel attacks (e.g., timing, fault, cold-boot attacks). Quantum Cryptography and, in particular, QKD leaves room for different kinds of attacks: individual, collective, and coherent (e.g., intercept-resend, sketched below). In addition, the non-idealities introduced by the implementation of quantum devices lead to specific "quantum attacks" (e.g. PNS, Time-shift attack). Because of this, classical techniques could be used to enhance the security of those systems (e.g. Privacy Amplification).
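As an illustration of goals 2 and 4, the following sketch simulates an intercept-resend attack against the BB84 QKD protocol in plain Python/NumPy, with no quantum framework involved; all names and the key length are illustrative. The eavesdropper measures each qubit in a random basis and resends it, which raises the error rate on the sifted key to roughly 25% and thus reveals the attack.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000

    # Alice encodes random bits in random bases (0 = rectilinear, 1 = diagonal).
    alice_bits = rng.integers(0, 2, n)
    alice_bases = rng.integers(0, 2, n)

    # Eve intercepts, measures in random bases, and resends her results.
    eve_bases = rng.integers(0, 2, n)
    eve_bits = np.where(eve_bases == alice_bases, alice_bits, rng.integers(0, 2, n))

    # Bob measures the resent qubits in random bases.
    bob_bases = rng.integers(0, 2, n)
    bob_bits = np.where(bob_bases == eve_bases, eve_bits, rng.integers(0, 2, n))

    # Sifting: keep positions where Alice and Bob used the same basis.
    keep = alice_bases == bob_bases
    qber = np.mean(alice_bits[keep] != bob_bits[keep])
    print(f"QBER under intercept-resend: {qber:.2%}")   # ~25%, exposing Eve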
Outline of work plan: The first year will be spent studying Quantum Computing, Quantum Communication, and PQC principles, algorithms, and technologies. The PhD student will also analyse modern security paradigms applied to software infrastructures. During this year, the student should also follow most of the mandatory courses for the PhD and submit at least one conference paper. During the second year, the PhD student will analyse a domain-specific application of Quantum Cryptography, PQC, or even a hybrid approach. The application domain should be oriented to modern infrastructures that heavily rely on virtualisation technologies. This analysis will lead to the design of a specific security solution involving at least one of the aforementioned paradigms. At the end of the second year, the student should have started preparing a journal publication on the topic and should submit at least another conference paper. Finally, the third year will be devoted to the evaluation of the proposed solution. In the case of Quantum Cryptography, this could be achieved by leveraging simulation platforms and, if possible, actual use-case scenarios utilising quantum devices. This evaluation and experimental phase could be complemented by cooperation with other departments inside POLITO, also leveraging some ongoing projects in which an experimental facility and tests on physical devices are expected. A promising project for this purpose is a collaboration with TIM, which focuses on exploring Quantum Communication technologies and experimenting with QKD protocols both at the simulation level and with physical devices. At the end of this final year, a publication in a high-impact journal shall be achieved.
Expected target publications: IEEE Security, Springer Security and Privacy, Elsevier Computers and Security, Future Generation Computer Systems, IEEE Transactions on Quantum Engineering
Current funded projects of the proposer related to the proposal: A project in collaboration with TIM focusing on Quantum Communication and, in particular, on QKD simulation.
Possibly involved industries/companies: TIM and Telefonica have expressed interest in this activity, although there is not yet any direct formal involvement.

Title: Advancing UI Testing Techniques
Proposer: Marco Torchiano, Luca Ardito
Group website: http://softeng.polito.it/
Summary of the proposal: Testing the User Interfaces (UIs) of modern types of applications consists in writing test cases that exercise the UI and allow performing end-to-end testing of the whole software in an automated way. Despite the availability of several mature frameworks for many different domains, UI tests see limited adoption. This is mainly due to the significant effort needed to write test cases, and to the inherent fragility and maintenance difficulty of test suites. Additionally, most testing techniques focus on the verification of non-graphical application data, rather than on the verification of the actual appearance of the software under test as it is shown to the user. The usage of image recognition techniques to verify the visual appearance of GUIs is still limited in both research and practice. Finally, modern ways of conveying user interactions (e.g., conversational, textual, and vocal interfaces) are also scarcely tested with the aid of automated tooling. The main objective of the proposed research plan is to devise and integrate new methodologies to perform effective, reproducible, and robust testing through modern user interfaces.
Research objectives and methods: O1. Identification of test fragilities
This step is about defining techniques to detect the patterns that cause test fragility. The development of a comprehensive taxonomy (O1.1) is the prerequisite. Based on such a taxonomy, the automatic detection of fragilities (O1.2) can then be developed. A tool that can work as an IDE plug-in represents the main outcome.
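As a sketch of what the automatic detection of fragilities (O1.2) could look like, the hypothetical checker below flags a few fragility-inducing patterns in Selenium-style test code; the pattern list stands in for the taxonomy of O1.1 and is purely illustrative, not the actual tool.

    import re

    # Hypothetical fragility patterns: absolute XPath locators, index-based
    # locator steps, and fixed-delay synchronisation are known to break easily.
    FRAGILE = [
        (r'find_element\(By\.XPATH,\s*"/html', "absolute XPath locator"),
        (r'\[\d+\]', "index-based locator step"),
        (r'time\.sleep\(', "fixed-delay synchronisation"),
    ]

    def detect_fragilities(test_source):
        hits = []
        for lineno, line in enumerate(test_source.splitlines(), 1):
            for pattern, label in FRAGILE:
                if re.search(pattern, line):
                    hits.append((lineno, label, line.strip()))
        return hits

    sample = 'driver.find_element(By.XPATH, "/html/body/div[2]/button").click()\ntime.sleep(5)'
    for hit in detect_fragilities(sample):
        print(hit)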
O2. Definition of novel testing techniques
The identification of fragility-inducing patterns also represents the basis for a gap analysis of existing techniques (O2.1). Novel techniques should then be defined (O2.2), typically by leveraging the relative strengths of existing ones and by introducing new approaches, e.g., computer vision to generate accurate tests of the actual graphical appearance of the GUI, or virtualization environments for the execution of test suites in a continuous integration/continuous delivery context.
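A minimal sketch of a vision-based test oracle in the spirit of O2.2, assuming screenshots are captured elsewhere: the assertion fails when more than a given share of pixels differs from an approved baseline image. The function name, per-channel tolerance, and threshold are illustrative choices, not part of the proposal.

    import numpy as np
    from PIL import Image

    def visual_assert(baseline_path, actual_path, threshold=0.02):
        # Fail when more than `threshold` of the pixels differ from the baseline.
        base = np.asarray(Image.open(baseline_path).convert("RGB"), dtype=np.int16)
        act = np.asarray(Image.open(actual_path).convert("RGB"), dtype=np.int16)
        assert base.shape == act.shape, "screenshots have different sizes"
        changed = np.abs(base - act).max(axis=-1) > 16   # per-pixel channel test
        diff_ratio = changed.mean()
        assert diff_ratio <= threshold, f"{diff_ratio:.1%} of pixels changed"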
O3. Definition of test cases through modern user interfaces
Many software artifacts nowadays are provided with conversational (e.g., textual or vocal) interfaces. These interfaces provide a different paradigm of human-machine interaction that requires the definition of new testing methodologies, metrics, and practices (O3.1) and the implementation of tools capable of enforcing them (O3.2).
Outline of work plan: The main activities conducted in the three years of the PhD studies are:
Task 1.1: Development of a fragility detection tool (M1-M5)
Task 1.2: Empirical assessment of effectiveness of fragility detection (M3-M6)
Task 1.3: Identification of the fragility mitigation techniques (M7-M9)
Task 1.4: Development of a fragility removal tool (M8-M12)
Task 1.5: Empirical assessment of fragility removal (M12-M14)
Task 2.1: Analysis of limitation of existing tools and techniques (M14-M15)
Task 2.2: Definition of a novel (integrated) testing approach (M15-M18)
Task 2.3: Development of a CI/CD tool for the execution of test suites (M18-M19)
Task 2.4: Integration of advanced computer vision-enabled techniques for GUI testing (M20-M23)
Task 2.5: Implementation of a continuous integration pipeline for GUI testing (M23-M24)
Task 3.1: Review of existing methodologies to evaluate conversational interfaces (M25-M28)
Task 3.2: Definition of a metric framework for conversational interface evaluation (M28-M32)
Task 3.3: Implementation of a prototype tool for the evaluation of conversational interfaces (M30-M36)
The three work units will follow similar methodological approaches where:
- an initial identification of the issues is conducted by means of observational methods (case studies, surveys, case control studies)
- a solution is proposed
- a proof-of-concept implementation is developed
- an empirical assessment is conducted typically using controlled experiments.
Expected target publications: The target for the PhD research includes a set of conferences in the general area of software engineering (ICSE, ESEM, EASE, ASE, ICSME) as well as in the specific area of testing (ICST, ISSTA). More mature results will be published in software engineering journals, the main ones being: IEEE Transactions on Software Engineering, ACM TOSEM, Empirical Software Engineering, Journal of Systems and Software, Information and Software Technology.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: Several companies expressed concerns about mobile UI testing; among them, three showed particular interest in the specific goals of this PhD proposal: Reale Mutua Assicurazioni, Intesa SanPaolo, Engineering

Title: Beyond 5G (B5G) and the road to 6G Networks
Proposer: Claudio Casetti
Group website: http://www.dauin.polito.it/research/research_groups/netgroup_comp...
Summary of the proposal: 5G is expected to pave the way for the digitalisation and transformation of key industry sectors like transportation, logistics, commerce, production, health, smart cities, and public administration. While 5G has enabled us to consume digital media anywhere, anytime, the technology of the future should enable us to embed ourselves in entire virtual or digital worlds. In the world of 2030, human intelligence will be augmented by being tightly coupled and seamlessly intertwined with the network and digital technologies. B5G/6G shall aggregate multiple types of resources, including communication, data and AI processing, that optimally connect at different scales, ranging from, e.g., in-body, intra-machine, indoor, and data-centre networks to wide area networks. Their integration results in an enormous digital ecosystem that grows ever more capable, intelligent, complex, and heterogeneous, eventually creating a single network of networks. The PhD will focus on investigating the potential of B5G/6G communications and their impact on people's lives, industrial development, and societal advancement.
Research objectives and methods: The PhD candidate will focus the research on one or more of the following directions:
• Novel zero-energy devices, with the challenge of making the network and devices smart enough to detect them without increasing EMF and energy consumption.
• Coexistence and cooperation of (non-3GPP and 3GPP) networks to be deployed through network densification using ubiquitous miniaturised, heterogeneous base stations with new MAC protocols for spatially multiplexed transmissions.
• Mechanisms and interfaces for intent-based direct wireless connectivity to be studied based on terminal trajectories or novel sensor-based intent detection for, e.g., Human-Machine Interfaces.
• Flexible resource planning by studying and characterizing the prediction of communication needs in combination with trajectory, resource and spectrum planning through system simulations of, e.g., Industry 4.0 settings.
• Resource planning and balancing of D2D and D2I resource assignments aided by AI algorithms.
Outline of work plan: The first six months (M1-M6) will be devoted to the establishment of the state of the art in the field, looking at developments highlighted by the 6G Flagship consortium. In the following six months (M7-M12), the PhD candidate will narrow down the focus to one or more of the research objectives outlined above, placing them in the context of the 6G roadmap.
During the second year, until M24, the PhD candidate will work on the design and early development of tools (simulators, emulators, prototypes) for the study of the research objectives.
In the final year, until M36, the candidate will assess the work, review the objectives, quantify the Key Performance Indicators of the research and the benchmarks, leading to the finalization of the PhD thesis.
Throughout the three years, the PhD candidate is expected to actively pursue the publication of scientific results at conferences and in journals.
Expected target publications: Conferences:
IEEE Infocom, IEEE CCNC, ACM Mobicom, ACM Mobihoc
Journals:
IEEE Transactions on Mobile Computing
IEEE Transactions on Network and Service Management
IEEE Communications Magazine
Current funded projects of the proposer related to the proposal: Hexa-X
Possibly involved industries/companies:

Title: Distributed learning in support of applications for Vulnerable Road Users
Proposer: Claudio Casetti
Group website: http://www.dauin.polito.it/research/research_groups/netgroup_comp...
Summary of the proposal: In an urban context, vulnerable road users (VRUs) include pedestrians, cyclists, and e-scooter riders. In contrast to “traditional” vehicular safety systems, the devices used by VRUs often have energy constraints and limited storage and computing resources. Here, we consider VRUs as the target of applications and services leveraging machine learning (ML) models for decision making (e.g., for image classification purposes and the identification of obstacles/hazards). The goal is to envision an ecosystem where user applications and services can be created dynamically by users, exploiting local information and local resources. Several new research challenges arise. First, to realize an efficient mobile communication system, it is essential to fully exploit the resources (networking, computing, storage) as well as the data available at the user devices. Second, it is vital to develop sustainable ML approaches, which optimize the trade-off between decision quality and system resources, most notably energy consumption. This can be achieved through edge intelligence (EI), broadly defined as a form of system intelligence at the network edge, cooperatively built by various actors through wireless communications and ML techniques. This approach can be further empowered by distributed learning, which enables the training of different parts of a deep neural network (DNN) at different entities in a privacy-preserving fashion.
Research objectives and methods: The main goal is to lay down the theoretical and algorithmic foundations for creating a new network ecosystem to fully leverage local data, resources, and connectivity and, in so doing, support ubiquitous ML-based applications and services. Besides an analytical performance evaluation, the project aims at assessing the effectiveness of the developed solutions and methodologies through simulation in urban intersection scenarios populated by connected, autonomous vehicles and VRUs, as well as through a proof-of-concept testbed. In so doing, safety services for the ever-growing VRU category, which have scarcely been addressed so far, will be tackled, and relevant data sets for a use case with high societal impact will be generated and made publicly available. A further objective is to envision new paradigms enabling ubiquitous ML while exploiting the available resources. To this end, among the possible ML approaches, the focus will be on deep neural networks, which are used for a wide range of supervised, as well as unsupervised, learning tasks. The most demanding operation concerning DNNs is the training of such models, which can then be used for decision-making by entities. Whenever the computing and energy demand of the learning task is cumbersome compared to the capabilities of the nodes at which decision-making should take place, other nodes should help to do the job. Once trained, the model can be delivered to the decision makers for inference, which is indeed a much lighter task. Thus, the main questions that need to be answered are: (i) Given the DNN architecture for the VRU application at hand, what is the distributed DNN structure that best fits the resource availability, i.e., how can the DNN structure be split into micro learning tasks in an optimal manner given an instance of the VRU app? (ii) Which nodes should provide the data necessary for training? (iii) Which (micro) task should be executed by which node, hence which nodes should exchange the computed model parameters?
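As a concrete illustration of question (i), the following PyTorch sketch splits a toy DNN at a "cut layer" between a VRU device and an edge node, in the style of split learning: the device sends activations forward, and the edge node returns the gradient of the cut layer so each side updates only its own parameters. Layer sizes and data are placeholders.

    import torch
    import torch.nn as nn

    device_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # on the VRU device
    edge_net = nn.Sequential(nn.Linear(32, 2))                 # on the edge node
    opt_d = torch.optim.SGD(device_net.parameters(), lr=0.1)
    opt_e = torch.optim.SGD(edge_net.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(8, 16)             # a batch of local sensor features
    y = torch.randint(0, 2, (8,))      # labels (e.g., hazard / no hazard)

    h = device_net(x)                  # device-side forward pass
    h_sent = h.detach().requires_grad_()   # "transmitted" cut-layer activations

    out = edge_net(h_sent)             # edge-side forward, loss, and backward
    loss = loss_fn(out, y)
    opt_e.zero_grad()
    loss.backward()
    opt_e.step()

    opt_d.zero_grad()                  # the device finishes backprop with the
    h.backward(h_sent.grad)            # gradient received from the edge node
    opt_d.step()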
Outline of work plan: The first six months (M0-M6) will be devoted to the establishment of the state of the art in the field. In the next six months (M7-M12), the PhD candidate will formulate a detailed research plan for the following two years, including the development of theoretical models, algorithms, protocols, and methods that create a synergy among the different system components with diverse resources and data. Further, the candidate will look at distributed ML approaches to optimally match such tasks with the capabilities of the distributed network resources available to VRUs. Finally, the candidate will explore new mechanisms for cooperatively exploiting the information collected by VRUs at the different points of the network system towards an efficient ML-driven decision process.
During the second year, until M24, the PhD candidate will thus work on the design and early development of tools (simulators, emulators, prototypes) for the study of the above research objectives.
In the final year, until M36, the candidate will assess the work, review the objectives, quantify the Key Performance Indicators of the research and the benchmarks, leading to the writing of scientific papers and of the PhD thesis.
Expected target publications: Conferences:
IEEE Infocom, IEEE CCNC, ACM Mobicom, ACM Mobihoc
Journals:
IEEE Transactions on Mobile Computing
IEEE Transactions on Network and Service Management
IEEE Communications Magazine
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Manufacturing and In-Field Testing Techniques for Automotive oriented Systems-on-Chip
Proposer: Paolo Bernardi (co-proposer Riccardo Cantoro)
Group website: cad.polito.it
Summary of the proposal: Nowadays, the number of integrated circuits included in critical environments such as the automotive field is continuously growing. For this reason, semiconductor manufacturers have to guarantee the reliability of the released components for the entire life cycle, which can be as long as 10-15 years. Quality is achieved by a combination of good manufacturing test practices and in-field detection techniques. The activities planned for this proposal include efforts towards manufacturing and in-field testing techniques for automotive-oriented Systems-on-Chip (SoCs):
- Study and development of innovative Fault Simulation strategies that could manage the complexity of current SoC designs
- Implementation of Silicon Diagnosis techniques based on ad-hoc test equipment development.
- The design of test strategies aimed at supporting in-field error detection and the collection of malfunctioning information, which are demanded by recent standards such as ISO 26262 and researched by the leading companies of the automotive market.
The project will enable a PhD student to work on a very timely subject, supported by companies currently collaborating with the research group. The expectation is to actively contribute to the reliable design of next-generation chips. The potential outcome of the project relates both to industrial advances and to high-quality publications.
Research objectives and methods: The PhD student will pursue objectives in the broader research field of automotive reliability and testing. A key enabling factor for this work is the availability of a setup that includes both netlists to be simulated and real silicon chips with development boards to use effectively.
Innovative Fault Simulation strategies
In this research field, the PhD student will look into the following directions:
• Setup of a low-cost test scenario based on low-cost equipment and computational infrastructure.
1. Design and implementation of a tester able to effectively drive the test and diagnosis procedure
2. Logic diagnosis based on the collected results on a set of failing devices
3. Provide information to Failure Analysis labs for a faster identification of the root cause of a malfunctioning
• Conception of techniques that exploit the real silicon to accelerate simulation and fault simulation processes.
1. Enabling fast prototyping of functional procedures to be graded via logic simulation
2. Injecting faults through the design for testability features
Silicon Diagnosis techniques
This research subject is supported by the availability of some failing devices identified as faulty during the manufacturing tests. Activities will be the following:
• Reproduction of the faulty behavior by exploiting several methods
1. Direct stimulation of the scan chain and other Design for Testability mechanisms like Built-In Self-Test
2. Execution of accurately generated functional programs
• Usage and development of EDA tools to
1. Perform a logic investigation of the location and nature of faults affecting the population of faulty chips
2. Refine test patterns to achieve a faster detection
In-field error detection and malfunctioning information collection
Development of methods for achieving high coverage of defects appearing during mission operation, as demanded by ISO 26262 (a minimal self-test sketch follows):
1. Key-on and runtime execution of Software-Based Self-Test procedures
2. Diagnosis traces for faster failure analysis of field returns.
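To fix ideas, the following hypothetical Python sketch illustrates the signature-based principle behind a Software-Based Self-Test step: a deterministic routine exercises a functional unit and compacts the results into a signature compared against a golden value; on silicon, a mismatch flags a defect. The operation and compaction scheme are invented for illustration only.

    def sbst_signature(ops=1000):
        sig = 0xFFFF
        for i in range(ops):
            result = (i * 2654435761) & 0xFFFFFFFF              # exercise the multiplier
            sig = ((sig << 1) ^ result ^ (sig >> 15)) & 0xFFFF  # MISR-like compaction
        return sig

    GOLDEN = sbst_signature()           # recorded once on a fault-free device
    assert sbst_signature() == GOLDEN   # key-on/runtime check: mismatch = error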
Outline of work plan: The work plan for the PhD student recalls the objectives outlined in the previous section. The order is not fixed and may vary according to the advancement during the PhD program.
1st year
1. Development of a Low-Cost Tester based on multi-processor and programmable architecture
2. Investigation of real failing devices through signature collection and netlist-level inference
3. Use of Design for Testability features for fault injection on real silicon devices.
2nd year
1. Development of diagnosis strategies to quickly provide feedback on failures appearing during the device lifetime or causing field returns
2. Logic diagnosis of failing devices, with the intention of providing a fault location and fault model hypothesis to failure analysts
3rd year
1. Implementation of advanced test and reliability techniques based on novel design for testability
2. Optimization of the diagnostic capability by exploring various stimulation methods.
Expected target publications: IEEE Transactions on Computers, IEEE Transactions on Emerging Topics in Computing, IEEE Design and Test, IEEE Access; DATE, ITC, ETS conferences
Current funded projects of the proposer related to the proposal: In this topic, the proposer is involved in research programs funded by STMicroelectronics
Possibly involved industries/companies: STMicroelectronics

Title: Architectures and Algorithms for Management of Resilient Edge Infrastructures via Centralized and Distributed Learning
Proposer: Guido Marchetto
Group website:
Summary of the proposal: Next-Generation (NextG) networks are expected to support advanced and critical services, incorporating computation, coordination, communication, and intelligent decision making. To secure, adapt, and propel autonomy and longevity in such complex NextG systems, we need to constantly monitor, assess, react to, explain, and adapt to our networks’ observed (mis)behaviors and disruptions. Today’s growing interest in applying machine learning (ML) and artificial intelligence (AI) methods to understanding and developing such network systems amplifies the need for data to support research and development. However, the application of data-driven ML techniques to Internet infrastructure research brings many challenges: each network is unique, dynamic, and typically not instrumented for scientific measurement. Furthermore, available data are characterized by anomalies and misbehaviors that complicate the creation of training data sets, and are usually proprietary. In this project, we propose to design and implement novel mechanisms using supervised and unsupervised (distributed) learning, within software-defined networks and software-defined radio environments, to serve the needs of data-driven edge infrastructure management decisions.
Research objectives and methods: Three research questions (RQs) guide the proposed work:
RQ1: How can we design and implement on local and larger-scale wireless testbeds effective transport and routing network protocols that integrate the network stack at different scopes using recent advances in supervised and unsupervised learning?
RQ2: To scale the use of machine learning-based solutions in network management, what are the most efficient distributed machine learning architectures that can be implemented at the network edge layer, combining the best of federated learning and split learning?
RQ3: How can we design cross-layer distributed learning protocols that use real-time deep learning within and outside the physical layer to create self-adaptive wireless networks?
The final target of the research work is to answer these questions, also by evaluating the proposed solutions on small-scale network emulators or large-scale virtual network testbeds, using a few applications, including virtual and augmented reality, precision agriculture, or haptic wearables. In essence, the main goals are to provide innovation in network monitoring, network adaptation, and network resilience, using centralized and distributed learning integrated with edge computing infrastructures. Both vertical and horizontal integration will be considered. By vertical integration, we mean considering learning problems that integrate states across network hardware and software, as well as states across the network stack at different scopes. For example, we will design data-driven algorithms for congestion control problems to address the tussle between in-network and end-to-end congestion notifications. By horizontal integration, we mean using states from local (e.g., physical layer) and wide-area (e.g., transport layer) scopes as input for the learning-based algorithms. Aside from supporting resiliency through vertical integration, solutions must offer resiliency across a wide (horizontal) range of network operations: from the close edge, i.e., near the device, to the far edge, with the design of secure, data-centric (federated) resource allocation algorithms.
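As a minimal sketch of the federated side of RQ2, the function below implements the standard FedAvg aggregation rule, weighting each client's parameters by its local dataset size; the clients' weights are plain NumPy arrays and all names are illustrative.

    import numpy as np

    def fed_avg(client_weights, client_sizes):
        # Average layer-by-layer, weighting each client by its data share.
        total = sum(client_sizes)
        return [
            sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in range(len(client_weights[0]))
        ]

    w_a = [np.ones((2, 2)), np.zeros(2)]    # client A's layer parameters
    w_b = [np.zeros((2, 2)), np.ones(2)]    # client B's layer parameters
    avg = fed_avg([w_a, w_b], client_sizes=[300, 100])   # A weighted 0.75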
Outline of work plan: Phase 1 (1st year): the candidate will analyze state-of-the-art solutions for network management, with particular emphasis on knowledge-based network automation techniques. The candidate will then define detailed guidelines for the development of architectures and protocols that are suitable for the automatic operation and configuration of NextG networks, with particular reference to edge infrastructures. Specific use cases will also be defined during this phase (e.g., in virtual reality). Such use cases will help identify ad-hoc requirements and will include the peculiarities of specific environments. With these use cases in mind, the candidate will also design and implement novel solutions to deal with the partial availability of data within distributed edge infrastructures. This work is expected to lead to conference publications.
Phase 2 (2nd year): the candidate will consolidate the approaches proposed in the previous year, focusing on the design and implementation of mechanisms for vertical and horizontal integration of supervised and unsupervised learning with network virtualization. Network, spectrum, and computational resources will be considered for the definition of proper allocation algorithms. All solutions will be implemented and tested. Results will be published, targeting at least one journal publication.
Phase 3 (3rd year): the consolidation and the experimentation of the proposed approach will be completed. Particular emphasis will be given to the identified use cases, properly tuning the developed solutions to real scenarios. Major importance will be given to the quality offered to the service, with specific emphasis to the minimization of latencies in order to enable a real-time network automation for critical environments (e.g., telehealth systems, precision agriculture, or haptic wearables). Further conference and journal publications are expected.
Expected target publications: The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking and machine learning (e.g. IEEE INFOCOM, ICML, ACM/IEEE Transactions on Networking, or IEEE Transactions on Network and Service Management) and cloud/fog computing (e.g. IEEE/ACM SEC, IEEE ICFEC, IEEE Transactions on Cloud Computing), as well as in publications related to the specific areas that could benefit from the proposed solutions (e.g., IEEE Transactions on Industrial Informatics, IEEE Journal of Biomedical and Health Informatics, or IEEE Transactions on Vehicular Technology)
Current funded projects of the proposer related to the proposal: Unrestricted grant from Futurewei Inc. Possible (currently under definition) research contract with Tiesse SpA. H2020 DESIRE project
Possibly involved industries/companies: Futurewei, Tiesse

Title: Urban data science
Proposer: Silvia Chiusano
Group website: dbdmg.polito.it
Summary of the proposal: In the urban ecosystem, a multitude of strongly intertwined systems coexists, ranging from people's social interactions to transport systems. While each of these urban facets already represents a complex system in itself, their interconnection is definitely a challenging scenario. Urban Data Science entails the acquisition, integration, and analysis of big and heterogeneous data collections generated by a diversity of sources in urban spaces to profile the different facets and issues of the urban environment. It addresses the development of the competences and technical infrastructure required to study and address urban challenges from a data-driven perspective. It can unearth a rich spectrum of knowledge valuable for urban data creation, integration, enrichment, analysis, and exploration. Thus, urban data science plays a key role in achieving a smart and sustainable city. However, data analytics on urban data collections is still a daunting task, because such collections are generally too big and heterogeneous to be processed with the machine learning techniques currently available. Thus, today's urban data give rise to many challenges that constitute a new interdisciplinary field of data science research.
Research objectives and methods: The PhD student will work on the study, design, and development of proper data models and novel solutions for the acquisition, integration, storage, management, and analysis of big volumes of heterogeneous urban data. The research activity involves multidisciplinary knowledge and skills, including databases, machine learning techniques, and advanced programming. Different case studies in urban scenarios, such as urban mobility, citizen-centric contexts, and the healthy city, will be considered to conduct the research activity. The objectives of the research activity consist in identifying the peculiar characteristics and challenges of each considered application domain and devising novel solutions for the management and analysis of urban data in each domain. Several urban scenarios will be considered with the aim of exploring the different facets of urban data and evaluating how the proposed solutions perform on different data collections. More in detail, the following challenges will be addressed during the PhD:
- Suitable data fusion techniques and data representation paradigms should be devised to integrate the heterogeneous collected data into a unified representation describing all facets of the targeted domain. For example, since urban data are often collected with different spatial and temporal granularities, suitable data fusion techniques should be devised to support a spatio-temporal alignment of the collected data (see the sketch after this list).
- Adoption of proper data models. The storage of heterogeneous urban data collections requires data representations alternative to the relational model, such as NoSQL databases (e.g., MongoDB), that are also able to manage geo-referenced data.
- Design and development of algorithms for big data analytics. Huge volumes of data demand the definition of novel data analytics strategies, also exploiting recent analysis paradigms and cloud-based platforms. Moreover, urban data are usually characterized by spatio-temporal coordinates describing when and where the data have been acquired, which entails the design of suitable data analytics methods.
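As a minimal sketch of the spatio-temporal alignment mentioned in the first challenge, assuming two hypothetical urban feeds sampled at different rates, pandas can resample one source onto a common time grid and join the two by nearest timestamp:

    import pandas as pd

    # Hypothetical feeds: air-quality samples every 3 min, traffic counts every 5 min.
    aq = pd.DataFrame({"time": pd.date_range("2023-01-01", periods=20, freq="3min"),
                       "pm10": range(20)})
    traffic = pd.DataFrame({"time": pd.date_range("2023-01-01", periods=12, freq="5min"),
                            "vehicles": range(12)})

    # Temporal fusion: bring both sources onto a common 5-minute grid.
    aq_5min = aq.set_index("time").resample("5min").mean().reset_index()
    fused = pd.merge_asof(traffic.sort_values("time"), aq_5min,
                          on="time", tolerance=pd.Timedelta("5min"))
    print(fused.head())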
Outline of work plan: 1st Year. The PhD student will review the recent literature on urban computing to identify up-to-date research directions and the most relevant open issues in the urban scenario. Based on the outcome of this preliminary explorative analysis, an application domain, such as urban mobility, will be selected as a first reference case study. The selected domain will be investigated to (i) identify the open research issues, (ii) identify the most relevant data analysis perspectives for gaining useful insights, and (iii) assess the main data analysis issues. The student will perform an exploratory evaluation of state-of-the-art technologies and methods in the considered domain, and will present a preliminary proposal for optimizing these approaches.
2nd and 3rd Year. Based on the results of the 1st year activity, the PhD student will design and develop a suitable data analysis framework including innovative analytics solutions to efficiently extract useful knowledge in the considered domain, aimed at overcoming weaknesses of state-of-the-art methods.
Moreover, during the 2nd and 3rd year, the student will progressively consider a larger spectrum of application domains in the urban scenario. The student will evaluate if and how the proposed solutions can be applied to the newly considered domains, and will propose novel analytics solutions.
During the PhD, the student will have the opportunity to cooperate in the development of solutions applied to research projects on smart cities. The student will also complete his/her background by attending relevant courses. The student will participate in conferences, presenting the results of his/her research activity.
Expected target publications: Any of the following journals
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TIST (Trans. on Intelligent Systems and Technology)
IEEE T-ITS (Trans. on Intelligent Transportation Systems)
Expert Systems With Applications (Elsevier)
Information Sciences (Elsevier)
IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: S[m2]ART (Smart Metro Quadro), Smart City and Communities call; funding body: MIUR (Italian Ministry of Education, University and Research)
Possibly involved industries/companies:

Title: Studying human interaction with autonomous vehicles in simulated experiences using eXtended Reality
Proposer: Fabrizio Lamberti
Group website: http://grains.polito.it/
Summary of the proposal: Connected and autonomous vehicles promise to revolutionize the concept of mobility in the next few years. In order to pass from current Advanced Driver-Assistance Systems (ADAS) to fully self-driving vehicles, an intense research activity is needed, encompassing different domains. Indeed, human-machine interaction (HMI) will play a key role in this scenario since, in order to fully benefit from autonomous driving (AD) systems, humans, both drivers/passengers and other road users, will need to trust their safety, reliability, and helpfulness. Another primary role will be played by simulation systems, since it will be essential to virtually recreate the scenarios that the users of next-generation vehicles and mobility infrastructures will be exposed to before they are physically implemented. This proposal will focus on the use of eXtended Reality (XR) technologies to support research in the field. The envisaged activities could consider, for instance, the design and development of human-in-the-loop, Virtual Reality (VR)-based simulation platforms capable of faithfully recreating the experience of riding such vehicles in an immersive way. Augmented Reality (AR) could be leveraged, in turn, to study possible concepts of forthcoming interfaces, meant both for in-vehicle use (targeting occupants) and for out-of-vehicle use (for communicating, e.g., with pedestrians and other drivers).
Research objectives and methods: Simulators are indeed commonplace in the car design process. However, because of their high costs, their use has long been restricted to big players, i.e., to car makers. Today, the growing availability of off-the-shelf technologies for creating virtual experiences potentially makes simulation environments accessible to the whole production chain, enabling their use in the realization of next-generation vehicles, including autonomous ones. To reach this goal, a number of challenges have to be faced. In fact, software tools for vehicle simulation are generally not yet capable of supporting vehicles spanning the whole spectrum of autonomy levels, nor of reproducing all the possible situations a human is expected to experience in such rides. Moreover, commercial software tools are meant to be used with expensive, hence rare, physical hardware, e.g., recreating the movement of a simulated vehicle body. Furthermore, most of the available solutions are not capable of fully immersing the users in quickly changing simulated environments that may be needed, e.g., to study different concepts of the vehicle interior, of its interfaces with the occupants, of the experience of external users, etc. The goal of this research proposal is to tackle the above limitations by working on the design of a fully-fledged simulation platform for autonomous vehicles based on consumer-level XR technologies. The platform will be experimented with in different contexts regarding, e.g., the design and validation of new HMI solutions for autonomous vehicles, the study of user experience in autonomous rides (in terms of trust, motion sickness, or the possibility to carry out secondary tasks), the training of future users of these vehicles, etc. Simulated experiences could also be used to generate data that are essential for training the algorithms controlling the vehicle intelligence itself, e.g., to capture the human driving style and transfer it to the machine for acceptability purposes.
Outline of work plan: During the first year, the PhD student will review the state of the art regarding XR-based vehicle simulation, with a focus on immersive experiences and considering all the possible dimensions of user experience with the vehicle. A conference publication is expected to be produced based on the results of the review. Afterwards, he or she will start the design and realization of the simulation platform, building on previous developments by the GRAphics and Intelligent Systems (GRAINS) group and on hardware available at the VR@POLITO lab. The platform will leverage off-the-shelf inertial hardware capable of mimicking the behavior of a real vehicle, useful, e.g., to study the users’ psycho-physical response to it. VR will be used to immerse the users in realistic situations, whose appearance and complexity could be configured using suitable authoring tools. The users will be allowed to live experiences from within the vehicle (as drivers or passengers) or from the outside (as road users). They will also be allowed to see their body reconstructed, and to interact with the simulated vehicle and its functionalities. AR technology could be exploited to simulate new HMI paradigms, e.g., based on Head-Up Displays (HUDs), to enhance users’ awareness of the surrounding context, the vehicle’s status, and its intentions. A prototype of the platform is expected to be available during the second year. The student will start using this prototype to tackle open problems in the field, and will be expected to publish at least another conference paper on the outcomes of these activities. During the first two years, the student will complete his or her background in AR, VR, HMI, and simulation by attending relevant courses. During the third year, the work of the student will build on a set of simulation scenarios related to one or more applications (selected among those mentioned above and/or related to challenges proposed by companies/institutions the GRAINS group is collaborating with), with the aim of studying the applicability of the devised platform to real problems and advancing the state of the art in the field. The results obtained will be reported in other conference works plus, at least, a high-quality journal publication.
Expected target publications: The target publications will cover the fields of XR, HMI and simulation. International journals could include, e.g.:
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Emerging Topics in Computing
IEEE Computer Graphics and Applications
ACM Transactions on Computer-Human Interaction
IEEE Transactions on Vehicular Technology
Relevant international conferences could include ACM CHI, IEEE VR, ACM SIGGRAPH, Eurographics, IEEE Intelligent Vehicles Symposium, etc.
Current funded projects of the proposer related to the proposal: Topics addressed in the proposal are strongly related to those tackled in funded projects managed by the proposer. In particular, it is worth mentioning the currently running project funded by Centro Ricerche Fiat – FCA Group titled “Trustable and comfortable driving on autonomous vehicles” on VR vehicle simulation, that follows two previous projects on machine and deep learning applied to autonomous vehicles. Other running projects to be mentioned are E2DRIVER on the use of VR in the automotive sector (funded by the European Commission in the H2020 programme), PITEM RISK FOR and PITEM RISK ACT on the use of VR for training in the context of emergency situations (funded by Regione Piemonte and ALCOTRA), and VR4CBRN, on the use of VR for CBRN operations (funded by LINKS Foundation, in collaboration with the Italian Air Force).
Possibly involved industries/companies:

Title: Graph network models for Data Science
Proposer: Daniele Apiletti
Group website: https://dbdmg.polito.it
Summary of the proposal: Machine learning approaches extract information from data with generalized optimization methods. However, besides the knowledge brought by the data, extra a-priori knowledge of the modeled phenomena is often available. Hence, an inductive bias can be introduced from domain knowledge and physical constraints, as proposed by the emerging field of Theory-Guided Data Science. Within this broad field, the candidate will explore solutions exploiting the relational structure among data, represented by means of Graph Network approaches. The relational structure is present in many real-life settings, both in physical conditions, such as among actors in supply chains or users in social networks, and in logical processes performed by humans, such as industrial procedures and logistics or transport activities. The structure of the data can be exploited to directly build the network graph itself, incorporating hierarchies and relationships among the different elements. Analogous approaches can be exploited for logical processes where domain experts separate the overall procedure into connected subtasks for the decision-making process. Hence, a graph-like structure can be crafted to design an ensemble architecture consisting of different building blocks, each connecting a network node representing a sub-problem with the corresponding domain-driven knowledge and constraints. The candidate will explore such approaches to design and evaluate innovative learning strategies able to blend domain-expert behaviors, a-priori knowledge, and physical or theoretical constraints with traditional data-driven training.
Research objectives and methods: The research aims at defining new methodologies for semantics embedding, proposing novel algorithms and data structures, exploring applications, investigating limitations, and advancing solutions based on different emerging Theory-Guided Data Science approaches. The final goal is to contribute to improving machine learning model performance by reducing the learning space thanks to the exploitation of existing domain knowledge in addition to the (often limited) available training data, pushing towards more unsupervised and semantically richer models. To this aim, the main research objective is to exploit Graph Network frameworks in deep-learning architectures by addressing the following issues (a minimal message-passing sketch follows the list):
- Improving state-of-the-art strategies for organizing and extracting information from structured data.
- Overcoming the limitation of Graph Network models in training very deep architectures, which causes a loss in the expressive power of the solutions.
- Extending state-of-the-art solutions to dynamic graphs, whose nodes and mutual connections can change over time. Dynamic networks can successfully learn the behavior of evolving systems.
- Experimentally evaluating the novel techniques in large-scale systems, such as supply chains, social networks, collaborative smart-working platforms, etc. Currently, for most graph-embedding algorithms, scalability is difficult to handle since each node has a peculiar neighborhood organization.
- Applying the proposed algorithms to natively graph-unstructured data, such as texts, images, audio, etc.
- Developing techniques to design ensemble graph architectures to capture domain-knowledge relationships and physical constraints.
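The following NumPy sketch shows a single message-passing step, the basic building block of the Graph Network approaches named above; the sum aggregation, ReLU update, random weights, and toy graph are illustrative choices only.

    import numpy as np

    def message_passing_step(h, edges, W_msg, W_upd):
        # Aggregate per-edge messages (sum), then update node states (ReLU).
        agg = np.zeros_like(h)
        for src, dst in edges:              # messages flow src -> dst
            agg[dst] += h[src] @ W_msg
        return np.maximum(0.0, h @ W_upd + agg)

    h = np.random.randn(3, 4)               # toy graph: 3 nodes, 4 features each
    edges = [(0, 1), (1, 2)]                # chain 0 -> 1 -> 2
    W_msg = np.random.randn(4, 4) * 0.1
    W_upd = np.random.randn(4, 4) * 0.1
    h = message_passing_step(h, edges, W_msg, W_upd)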
Outline of work plan: 1st year. The candidate will explore state-of-the-art techniques for dealing with both structured and unstructured data, to integrate domain-knowledge strategies into network model architectures. Applications to physical phenomena, images, and text, taken from real-world networks such as social platforms and supply chains, will be considered.
2nd year. The candidate will define innovative solutions to overcome the limitations described in the research objectives, experimenting with the proposed techniques on the identified real-world problems. The development and the experimental phase will be conducted on public, synthetic, and possibly real-world datasets. New challenges and limitations are expected to be identified in this phase.
During the 3rd year, the candidate will extend the research by widening the experimental evaluation to more complex phenomena able to better leverage the domain-knowledge provided by the Graph Networks. The candidate will perform optimizations on the designed algorithms, establishing limitations of the developed solutions and possible improvements in new application fields.
Expected target publications: IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TOIS (Trans. on Information Systems)
ACM TOIT (Trans. on Internet Technology)
ACM TIST (Trans. on Intelligent Systems and Technology)
IEEE TPAMI (Trans. on Pattern Analysis and Machine Intelligence)
Information Sciences (Elsevier)
Expert Systems with Applications (Elsevier)
Engineering Applications of Artificial Intelligence (Elsevier)
Journal of Big Data (Springer)
ACM Transactions on Spatial Algorithms and Systems (TSAS)
IEEE Transactions on Big Data (TBD)
Big Data Research
IEEE Transactions on Emerging Topics in Computing (TETC)
Current funded projects of the proposer related to the proposal: Research contract “Data Science and Machine Learning techniques for supply chains”.
Research contract “Machine Learning and Data Science techniques for clinical trials and PLF”.
Possibly involved industries/companies: XelionTech, FBK, Istituto Mario Negri.

Title: Extended reality for telepresence and digital twinning
Proposer: Fabrizio Lamberti, Alberto Cannavò
Group website: http://grains.polito.it/
Summary of the proposal: Telepresence and digital twinning are two closely connected technologies that can largely contribute to improving process efficiency in many domains, while supporting sustainable development strategies. Telepresence is meant to let people feel present in a given place by having just their virtual representation transferred there. By cutting unnecessary travel, it can help reduce emissions, while also enabling the time-effective interactions necessary in many work, study, and leisure activities. Digital twinning, in turn, implies the creation of digital replicas of things, or even people, which can then be used to let relevant stakeholders gather knowledge, make decisions, and implement actions by working on digital, rather than physical, entities. Application fields of these concepts are manifold. They can, e.g., allow visitors to explore a museum from a distance; let a medical doctor perform robotic surgery remotely; or help to study an industrial plant, a smart-city environment, or another critical infrastructure, predict/identify possible issues, and evaluate possible corrective actions before actually intervening on them. The aim of this proposal is to contribute to research in this field by focusing on a key enabling technology behind telepresence and digital twinning, i.e., eXtended Reality (XR), and devising methods for making the experiences with the virtual replicas ever more compelling and helpful, thanks to high levels of realism, interaction, immersion, and presence.
Research objectives and methods: The research will start by exploring ways to create the XR experiences mentioned above. Telepresence and digital twinning require the generation of 3D assets and the implementation of control/interaction logics that are often very specific to the particular application scenario. Unfortunately, existing authoring tools are not flexible enough, forcing developers to “reinvent the wheel” almost every time. The first objective will consist in identifying a methodology, supported by a proper development environment, to ease the creation of such experiences through, e.g., the adoption of new user interfaces and the introduction of novel automatic generative processes powered by artificial intelligence paradigms. Still considering the reconstruction of the environment, a second objective will consider the (real-time) transfer of the information required to interact with local physical assets from remote, by operating on their digital copies. Aspects concerning, among others, sensor capabilities, network dependability, and the intelligibility/usability of virtual replicas will need to be addressed. A third objective will be to investigate means for boosting the level of realism of the digital contents that populate virtual environments (VEs). Ways to leverage offline and online digital reconstructions obtained by combining 3D modeling, photogrammetry, laser scanning, motion capture, etc., will be explored. Particular attention will be devoted to the reconstruction of humans, as their role will become increasingly relevant, especially in collaborative VEs and social XR experiences. A fourth, and last, objective will focus on making user interaction (both synchronous and asynchronous) in VEs as faithful as possible. In order to achieve a complete sense of immersion and presence, every stimulus and sensation provided by the real world should possibly be recreated. Technology, however, is still not able to stimulate all the users’ senses in a suitable way. Thus, the research will investigate ways to enhance the fidelity of reproducing sight, hearing, and touch with currently available hardware.
Outline of work plan: During the first year, the PhD student will review the state of the art in terms of techniques/approaches developed/proposed to deal with the issues mentioned above concerning telepresence and digital twinning with Virtual Reality (VR) and Augmented Reality (AR) technologies. He or she will start addressing open problems in these domains by focusing, e.g., on one of the identified objectives, and devising solutions that will be experimented with in specific application scenarios among those tackled by the GRAphics and Intelligent Systems (GRAINS) group and the VR@POLITO lab. A conference publication is expected to be produced based on the results of these activities. In the second year, the student will start designing and building a methodology (and a platform) capable of supporting more than one of the above objectives in a holistic way, still focusing on a limited set of application scenarios (chosen from, e.g., industrial training, emergency management, cultural heritage, health and medicine, etc.). The student is expected to publish at least another conference paper on the outcomes of these activities. During the first two years, the student will complete his or her background in AR, VR, and concepts relevant to the proposal by attending relevant courses. During the third year, the student will devise protocols and develop metrics to test the efficacy of the solutions proposed for the identified problems, and will run experiments with relevant stakeholders to prove the validity of the work done and advance the state of the art in the field. The results obtained will be reported in other conference works plus, at least, a high-quality journal publication.
Expected target publications: The target publications will cover the fields of XR, HMI and distributed systems. International journals could include, e.g.:
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Emerging Topics in Computing
IEEE Transactions on Sustainable Computing
IEEE Computer Graphics and Applications
ACM Transactions on Computer-Human Interaction
Relevant international conferences could include ACM CHI, IEEE VR, IEEE ISMAR, ACM SIGGRAPH, Eurographics, etc.
Current funded projects of the proposer related to the proposal: Topics addressed in the proposal are strongly related to those tackled in the following projects (co)managed by the proposer:
- E2DRIVER (European project, H2020), on the use of VR for training in industrial settings.
- PITEM RISK FOR and PITEM RISK ACT (European projects, ALCOTRA), on the use of VR for training in emergency situations in trans-national scenarios.
- Research grant from Centro Ricerche Fiat (FCA Group) on the use of VR for simulating autonomous vehicles.
- Research grant from LINKS Foundation on the creation of VR tools for training operators onto risks concerning national/international defense.
- Research grant from Fondazione TIM on the exploration of resources in public libraries with XR technologies.
Activities will be carried out in the context of the VR@POLITO initiative and its laboratory (hub) at the Department of Control and Computer Engineering (DAUIN).
Possibly involved industries/companies:

Title: Online learning of time-varying systems
Proposer: Sophie Fosson, Diego Regruto
Group website: https://sic.polito.it/
Summary of the proposal: System identification, or system learning, consists in estimating the mathematical model of a dynamical system from measured input-output data. Although most of the system identification algorithms available in the literature focus on the time-invariant case, in many interesting engineering applications the model is intrinsically time-varying, due either to internal changes of the system dynamics or to the effects of external actions (e.g., physical degradation, faults, and adversarial attacks). The identification of time-varying systems turns into a tracking problem: the estimate of the model must be periodically updated, based on periodic measurements, possibly over an infinite time horizon. Promptness and low complexity are among the fundamental features an online algorithm needs to perform effective tracking. The main goal of this PhD activity is to develop and analyze online learning algorithms for time-varying systems. More precisely, we are interested in problems where a smoothly time-varying profile is associated with possible sudden changes, due to the interaction with the external environment. This is particularly relevant in the context of cyber-physical systems, where the system dynamics can abruptly change when the system is subjected to attacks.
Research objectives and methods: The objectives of this PhD project are both methodological and application-oriented. We summarize them as follows.
1) Methodological objectives: development and analysis of online learning algorithms to track time-varying systems with good accuracy and responsiveness to sudden changes. Starting from the literature on online convex optimization, identification of time-varying systems, and identification of hybrid systems, we expect to develop original algorithms that improve the state of the art. The developed algorithms should be supported by theoretical results that guarantee their performance and practical feasibility (a minimal tracking example follows this list).
2) Applications-oriented objectives: the developed algorithms will be implemented and tested on real-world case studies in the fields of automotive and cyber-physical systems.
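A minimal illustration of the kind of online algorithm targeted by objective 1) is recursive least squares (RLS) with a forgetting factor, sketched below in Python. The first-order ARX system, the noise level, the forgetting factor and the abrupt change at t = 500 are hypothetical choices made only for this example, not part of the proposal.

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.98):
        # Recursive least squares with forgetting factor lam (< 1 discounts old data).
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)          # gain vector
        theta = theta + k * (y - phi @ theta)  # update with the a-priori prediction error
        P = (P - np.outer(k, Pphi)) / lam      # covariance update
        return theta, P

    # Track a first-order ARX system whose dynamics change abruptly at t = 500.
    rng = np.random.default_rng(0)
    a, b = 0.8, 0.5
    theta, P = np.zeros(2), 1e3 * np.eye(2)
    y_prev, u_prev = 0.0, 0.0
    for t in range(1000):
        if t == 500:
            a = -0.5                           # sudden change (e.g., fault or attack)
        u = rng.normal()
        y = a * y_prev + b * u_prev + 0.05 * rng.normal()
        theta, P = rls_step(theta, P, np.array([y_prev, u_prev]), y)
        y_prev, u_prev = y, u
    print("estimated [a, b] after the change:", theta)  # close to [-0.5, 0.5]

The single scalar lam trades steady-state accuracy against responsiveness to abrupt changes, which is precisely the tension the proposed algorithms are expected to resolve more effectively than this baseline.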
Outline of work plan: The research workplan is articulated as follows:
M1-M6: Study of the literature on online convex optimization and identification of time-varying systems, with particular reference to systems subjected to abrupt changes. Development of a general mathematical formulation of the problem accounting for different possible a priori assumptions on the system's time evolution and on the noise affecting the measured data. Decomposition of the general problem into a sequence of simplified sub-problems.
Milestone 1:
Report on the results available in the literature; theoretical formulation of the problem and analysis of the main difficulties/critical aspects/open problems. Results obtained from this theoretical analysis of the problem will be the subject of a first contribution to be submitted to an international conference.
M7-M12: Development and analysis of novel techniques that improve the performance of the state-of-the-art methods for the simplified sub-problems.
Milestone 2:
Results obtained in this stage of the project are expected to be the core of a paper to be submitted to an international journal.
M13-M24: Development and analysis of novel techniques that improve the performance of the state-of-the-art methods, for the most general version of the problem.
Milestone 3:
Results obtained in this stage of the project are expected to be the core of both a conference contribution and a paper to be submitted to an international journal.
M25-M36: Analysis and formulation of suitable strategies for the practical implementation of the proposed techniques/algorithms in the presence of limited computational resources.
Milestone 4:
Application of the developed methods and algorithms to real-world problems.
Expected target publications: Journals:
IEEE Transactions on Automatic Control, Automatica, IEEE Transactions on Neural Networks and Learning Systems
Conferences:
IEEE Conference on Decision and Control (CDC), IFAC Symposium on System Identification (SYSID), IFAC World Congress, International Conference on Machine Learning (ICML), Conference on Neural Information Processing Systems (NeurIPS)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Explainable AI (XAI) for spoken language understanding
Proposer: Elena Baralis
Group website: http://dbdmg.polito.it/
Summary of the proposal: Machine learning models and automated decision-making procedures are becoming more and more pervasive, so a growing interest is arising in the careful understanding of their behavior. The focus of this research activity is on models for spoken language understanding.
Spoken language understanding (SLU) systems infer semantic information from user utterances. Traditionally, SLU systems rely on two models: the automatic speech recognition model and the Natural Language Understanding model. The former processes speech signals and translates the utterance to text, while the latter processes the text to derive the target internal representation. This pipeline makes it possible to clearly separate the two processes and validate the corresponding models separately. In contrast, end-to-end (E2E) models directly rely on spoken signals to infer the utterance semantics.
E2E SLU models pose new challenges for explainable AI research. These models do not reveal the reasons behind their predictions, and their results are hence hard to interpret. Moreover, the combination of both acoustic and semantic information into a single model makes the identification and understanding of errors more difficult.
Research objectives and methods: End-to-end (E2E) Spoken Language Understanding (SLU) models perform the spoken language understanding task as a complex black-box process, without the need for distinct automatic speech recognition and natural language understanding steps. The semantics of the utterance is inferred without exposing any intermediate step, such as an explicit transcription of the utterance. Hence, explaining model errors and understanding the reasons for the model's performance becomes a difficult task. Investigating the presence of data subgroups that behave in problematic ways is central to model understanding, as well as to studying model fairness and debugging AI pipelines. Typically, the overall performance of an AI model reveals how well the model performs on average on the whole dataset. However, it does not reveal the problems that may affect particular portions of the data. The research activity will address the explanation of an E2E SLU model by identifying and characterizing data subgroups for which model performance shows an anomalous behavior (e.g., the False Positive Rate is higher than average). Critical subgroups may be identified by exploiting the notion of pattern. Patterns are conjunctions of attribute-value pairs and are intrinsically interpretable. The research activity will consider model-agnostic techniques, because they do not rely on knowledge of the inner workings of any classification paradigm. Since pattern mining techniques do not rely on any specific language knowledge, they may provide an effective tool to address different spoken languages.
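As a rough, illustrative sketch of the pattern-based subgroup analysis described above (not the method the research will necessarily adopt), the following Python code enumerates conjunctions of attribute-value pairs and ranks them by the divergence of their false positive rate from the global one. The dataframe columns (label, pred) and the metadata attributes in the usage comment are hypothetical.

    import pandas as pd
    from itertools import combinations

    def fpr(df):
        # False positive rate: fraction of negative samples predicted positive.
        neg = df[df["label"] == 0]
        return (neg["pred"] == 1).mean() if len(neg) else float("nan")

    def anomalous_subgroups(df, attrs, min_support=30, max_len=2):
        # Enumerate patterns (conjunctions of attribute=value pairs) and rank
        # them by how far their FPR diverges from the dataset-level FPR.
        global_fpr = fpr(df)
        rows = []
        for k in range(1, max_len + 1):
            for cols in combinations(attrs, k):
                for vals, sub in df.groupby(list(cols)):
                    vals = vals if isinstance(vals, tuple) else (vals,)
                    sub_fpr = fpr(sub)
                    if len(sub) < min_support or sub_fpr != sub_fpr:  # skip small/undefined
                        continue
                    rows.append((dict(zip(cols, vals)), len(sub), sub_fpr - global_fpr))
        return sorted(rows, key=lambda r: abs(r[2]), reverse=True)

    # Hypothetical usage on SLU predictions with speaker/utterance metadata:
    # report = anomalous_subgroups(preds_df, ["accent", "length_bucket", "noise_level"])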
Outline of work plan: PHASE I (1st year)
During the first year, the following activities will be performed: a state-of-the-art survey of XAI methods for (a) natural language understanding models and (b) speech recognition models; performance analysis of state-of-the-art ML and XAI approaches; assessment of the main approaches on 2-3 public speech datasets. Preliminary proposals of improvements over state-of-the-art XAI algorithms will be studied, focusing on exploratory analysis of novel, creative solutions for XAI.
PHASE II (2nd year)
During the second year, the collection of relevant language features will drive the definition of new algorithms for bias detection and XAI. The algorithms will be experimentally evaluated on public speech datasets. An industrial validation of the developed prototypes is envisioned.
PHASE III (3rd year)
During the last year of the PhD, the focus will be on the consolidation of the developed algorithms, and on the exploration of different approaches to explanation, driven by the knowledge of (parts of) the internal classification process.
During the second-third year, the candidate will have the opportunity to spend a period of time abroad in a leading research center.
Expected target publications: Any of the following journals
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TDS (Trans. on Data Science)
ACM TOIS (Trans. on Information Systems)
ACM/IEEE TASLP (Trans. on Audio, Speech, and Language Processing)
Information Sciences (Elsevier)
Expert Systems with Applications (Elsevier)
Machine Learning with Applications (Elsevier)
Engineering Applications of Artificial Intelligence (Elsevier)
Journal of Big Data (Springer)
IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: Research grant with Amazon.
Possibly involved industries/companies:Amazon Research.

Title: Explaining AI (XAI) models for sequential data.
Proposer: Elena Baralis
Group website: http://dbdmg.polito.it/
Summary of the proposal: Machine learning models are increasingly adopted to assist human experts in decision-making. Especially in critical tasks, understanding the reasons behind model predictions is essential for trusting the model itself. Investigating model behavior can provide actionable insights. For example, experts can detect wrong model behaviors and actively work on model debugging and improvement. Unfortunately, most high-performance ML models lack interpretability. Time series data allow an effective representation of many interesting phenomena, e.g., sensor readings in application domains ranging from predictive maintenance to automated factory floor management. Current state-of-the-art techniques for time series prediction models are based on deep learning (e.g., RNNs, but also CNNs). These techniques provide so-called black-box models, i.e., models that do not expose the motivations for their predictions. The main goal of this research activity is the study of methods that allow human-in-the-loop inspection of the reasons behind classifier predictions for time series data. Explanations can help data scientists and domain experts understand and interactively investigate individual decisions made by black-box models.
Research objectives and methods: Exploring and understanding the motivations behind black-box model predictions is becoming essential in many different applications. Different techniques are usually needed to account for different data types (e.g., images, structured data, time series). The research activity will consider industrial domains (e.g., the construction and spatial domains) in which the availability of understandable explanations is particularly relevant for explaining anomalous behaviors. The explanation algorithms will target both structured data and time series. The following facets of XAI (Explainable AI) will be addressed.
Model understanding. The research work will address local analysis of individual predictions. These techniques will allow the inspection of the local behavior of different classifiers and the analysis of the knowledge different classifiers are exploiting for their prediction. The final aim is to support human-in-the-loop inspection of the reasons behind model predictions.
Model trust. Insights into how machine learning models arrive at their decision allow evaluating if the model may be trusted. Methods to evaluate the reliability of different models will be proposed. In case of negative outcomes, techniques to suggest enhancements of the model to cope with wrong behaviors and improve the trustworthiness of the model will be studied.
Model debugging and improvement. The evaluation of classification models generally focuses on their overall performance, which is estimated over all the available test data. An interesting research line is the exploration of differences in the model behavior, which may characterize different data subsets, thus allowing the identification of potential sources of bias in the data.
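One concrete, model-agnostic way to obtain the local explanations targeted under "Model understanding" is occlusion: mask a sliding window of the time series and record how much the prediction score drops. The Python sketch below assumes a generic scoring callable and is an illustrative baseline only, not the method the research will necessarily adopt.

    import numpy as np

    def occlusion_saliency(score, x, window=10, baseline=0.0):
        # Local explanation for a time-series classifier: mask a sliding window
        # and record the score drop; large drops mark the time steps the model
        # relies on for this individual prediction.
        base = score(x[None, :])[0]
        saliency = np.zeros(len(x))
        for start in range(len(x) - window + 1):
            x_masked = x.copy()
            x_masked[start:start + window] = baseline
            saliency[start:start + window] += (base - score(x_masked[None, :])[0]) / window
        return saliency

    # Hypothetical usage with any classifier exposing class probabilities:
    # score = lambda batch: clf.predict_proba(batch)[:, 1]
    # s = occlusion_saliency(score, series)   # series: 1-D numpy array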
Outline of work plan: YEAR 1: state-of-the-art survey of algorithms and XAI for both time series and structured data; performance analysis and preliminary proposals of improvements over state-of-the-art algorithms; exploratory analysis of novel, creative solutions for XAI; assessment of the main explanation issues in 1-2 specific industrial case studies.
YEAR 2: new algorithm design and development, experimental evaluation on a subset of application domains; deployment of algorithms in selected industrial contexts.
YEAR 3: algorithm improvements, both in design and development, and experimental evaluation in new application domains.
During the second-third year, the candidate will have the opportunity to spend a period of time abroad in a leading research center.
Expected target publications: Any of the following journals
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TDS (Trans. on Data Science)
ACM TOIS (Trans. on Information Systems)
Information Sciences (Elsevier)
Expert Systems with Applications (Elsevier)
Machine Learning with Applications (Elsevier)
Engineering Applications of Artificial Intelligence (Elsevier)
Journal of Big Data (Springer)
IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Autonomous systems' reliability and security
Proposer: Ernesto Sanchez
Group website: http://www.cad.polito.it/
Summary of the proposal: Autonomous Systems (A.Sys.) have attracted great interest in recent years. Autonomous devices have been investigated and proposed by very different actors: from the automotive industry to hospital UV disinfection robots, from automatic stacking systems to unmanned aerial vehicles. To support the required software complexity, A.Sys. may include several microprocessor cores, different memory cores, and hardware accelerators in charge of executing Deep Artificial Neural Network applications. An emerging problem is the verification, testing, reliability, and security of Autonomous Systems, in particular regarding the computational elements involved in artificial intelligence computations. In general, these very complex systems still lack a holistic analysis and understanding of their reliability and security. It is, for example, unclear how to functionally verify the behavior of a DNN, or what may happen when the autonomous system is affected by a fault, from both the reliability and the security points of view. During this project, the Ph.D. candidate will study, from the hardware and software perspective, how to improve the reliability and security of autonomous systems based on AI solutions.
Research objectives and methods: The Ph.D. proposal aims at studying the current design, verification and testing methodologies that try to guarantee a correct implementation of AI-based systems in A.Sys., with particular interest in the available solutions to increase system reliability and security. During the initial phase, a set of benchmarks will be defined to provide suitable case studies for the following research steps. Two main types of AI systems in A.Sys. will be analyzed: the first is based on hardware accelerators exploiting, for example, FPGA implementations; the second implements the AI algorithm on commercial off-the-shelf (COTS) components, such as systems embedding high-performance processor cores. From the reliability point of view, there is a lack of metrics able to correctly assess how reliable an AI-based system is; a study and proposal of appropriate metrics is therefore also required, along with the selection or definition of a set of fault models oriented to better identify device vulnerabilities at development time. A first attempt to consider the security of AI algorithms running on embedded systems is also required: the lack of metrics and experimental demonstrations makes it important to fill this gap by providing indications about the main security criticalities and how to mitigate them in an A.Sys. based on embedded systems. Finally, mitigation strategies based on self-test, error recovery and early detection mechanisms will be developed for the autonomous systems under study. The final goal is to equip the AI hardware with self-test mechanisms able to detect hardware errors and possible threat intrusions, thanks to the implementation of fault-tolerance and security-oriented mechanisms that increase the reliability and security of the AI algorithm while maintaining system accuracy.
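As a concrete example of the kind of reliability assessment involved, the Python sketch below runs a naive statistical fault-injection campaign that flips single bits in float32 weights (a common single-event-upset model) and counts the faults causing a noticeable accuracy drop. The evaluate callback and the 1% criticality threshold are hypothetical assumptions for illustration.

    import numpy as np

    def flip_bit(value, bit):
        # Flip one bit of a float32 weight (single-event-upset fault model).
        as_int = np.float32(value).view(np.uint32)
        return (as_int ^ np.uint32(1 << int(bit))).view(np.float32)

    def injection_campaign(weights, evaluate, n_faults=1000, seed=0):
        # Inject random single bit flips into a weight array and measure how
        # often they cause a noticeable accuracy drop (> 1 percentage point here).
        rng = np.random.default_rng(seed)
        baseline = evaluate(weights)
        critical = 0
        for _ in range(n_faults):
            w = weights.copy()
            idx, bit = rng.integers(w.size), rng.integers(32)
            w.flat[idx] = flip_bit(w.flat[idx], bit)
            if baseline - evaluate(w) > 0.01:
                critical += 1
        return critical / n_faults   # fraction of critical faults

    # Hypothetical usage: `evaluate` reloads the weights into the network and
    # returns test accuracy, e.g. evaluate = lambda w: test_accuracy(model, w)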
Outline of work plan: The work plan is divided into three years as follows:
• First year:
1. Study and identification of the most important works on design and verification of AI solutions used in autonomous systems.
2. Design and implementation of the cases of study resorting to hardware accelerators and COTS based on high performance processor cores.
• Second year:
3. Fault model definition and experimentation mainly resorting to the implemented cases of study.
4. Reliability and safety metrics definition.
5. Security analysis and metrics definition.
• Third year:
6. Mitigation strategies proposal.
In a few words: the first two steps will provide the Ph.D. candidate with the appropriate background to perform the following activities. Steps 3 to 6 are particularly interesting from the research point of view, allowing the student to write papers and present them at international conferences related to the research area. During these research phases, the student will have the possibility to cooperate with international companies such as NVIDIA and STMicroelectronics, and with foreign research groups in Lyon, Montpellier and other universities.
Expected target publications: The main conferences where the Ph.D. student will possibly publish her/his works are:
DATE: IEEE - Design, Automation and Test in Europe Conference
ETS: IEEE European Test Symposium
ITC: IEEE International Test Conference
VTS: IEEE VLSI Test Symposium
IOLTS: IEEE International Symposium on On-Line Testing and Robust System Design
MTV: IEEE International workshop on Microprocessor/SoC Test, Security and Verification
Additionally, the project research may be published in relevant international journals, such as: TCAD, TVLSI, ToC.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:NVIDIA

Title: Local energy markets in citizen-centred energy communities
Proposer: Enrico Macii, Lorenzo Bottaccioli, Edoardo Patti
Group website: www.eda.polito.it
Summary of the proposal: A smart citizen-centric energy system is at the center of the energy transition in Europe and worldwide. Local energy communities will enable citizens to participate collectively and actively in local energy markets. New digital tools (e.g., smart energy contracts) will be used to manage financial transactions connected to the exchange of energy among community members, between different communities, and with the grid. On the one side, digital and energy technology combined will provide a framework for a more intelligent and sustainable final use of energy in buildings and cities. On the other side, citizens will need to understand how to interact with smart energy systems and local energy markets. Indeed, new complex socio-techno-economic interactions will take place in such intelligent energy systems. Given this emerging panorama, understanding the dynamics of energy technology diffusion, the corporate structures of communities, billing mechanisms, and the impact that regulation and policy could have on diffusion patterns and on the genesis of new communities will become even more important.
Research objectives and methods: The diffusion of distributed (renewable) energy sources poses new challenges to the underlying energy infrastructure, e.g., distribution and transmission networks and/or micro (private) electric grids. The optimal, efficient and safe management and dispatch of electricity flows among different actors (i.e., prosumers) is key to supporting the diffusion of the distributed energy sources paradigm. The goal of the project is to explore different corporate structures and billing and sharing mechanisms inside energy communities. For instance, the use of smart energy contracts based on Distributed Ledger Technology (blockchain) for energy management in local energy communities will be studied. A testbed comprising physical hardware (e.g., smart meters) connected in the loop with a simulated energy community environment (e.g., a building or a cluster of buildings) exploiting different RES and energy storage technologies will be developed and tested during the three-year program. Hence, the research will focus on the development of agents capable of describing:
1) the final customer/prosumer beliefs, desires, intentions and opinions;
2) the local energy market where prosumers can trade their energy and/or flexibility;
3) the local system operator that has to guarantee grid reliability.
All the software entities will be coupled with external simulators of grids and energy sources in a plug-and-play fashion. Hence, the overall framework has to be able to work in a co-simulation environment, with the possibility of performing hardware-in-the-loop experiments. The final outcome of this research will be an agent-based modelling tool that can be exploited for the purposes listed below (a toy market-clearing example is sketched after the list):
• Planning the evolution of future smart multi-energy systems by taking into account the operational phase
• Evaluating the effect of different policies and related customer satisfaction
• Evaluating the diffusion of technologies and/or energy policies under different regulatory scenarios
• Evaluating new business models for energy communities and aggregators
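As a toy illustration of the local energy market of point 2), the following Python sketch clears one trading round with a naive double auction among prosumer agents. The agent names, quantities, limit prices and midpoint pricing rule are illustrative assumptions only; the project would study far richer billing and sharing mechanisms.

    class Prosumer:
        # qty > 0: surplus energy to sell (kWh); qty < 0: demand to buy.
        def __init__(self, name, qty, limit_price):
            self.name, self.qty, self.limit_price = name, qty, limit_price

    def clear_market(agents):
        # Naive double auction: cheapest asks meet highest bids; each trade is
        # settled at the midpoint of the two limit prices.
        sellers = sorted((a for a in agents if a.qty > 0), key=lambda a: a.limit_price)
        buyers = sorted((a for a in agents if a.qty < 0), key=lambda a: -a.limit_price)
        trades = []
        while sellers and buyers and sellers[0].limit_price <= buyers[0].limit_price:
            s, b = sellers[0], buyers[0]
            q = min(s.qty, -b.qty)
            trades.append((s.name, b.name, q, (s.limit_price + b.limit_price) / 2))
            s.qty -= q
            b.qty += q
            if s.qty == 0:
                sellers.pop(0)
            if b.qty == 0:
                buyers.pop(0)
        return trades   # residual demand/surplus would fall back to the grid tariff

    community = [Prosumer("pv_roof", 3.0, 0.10), Prosumer("battery", 2.0, 0.15),
                 Prosumer("flat_A", -2.5, 0.20), Prosumer("flat_B", -1.0, 0.12)]
    print(clear_market(community))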
Outline of work plan: 1st year. The candidate will study state-of-the-art agent-based modelling tools in order to identify the best available solution for large-scale smart energy system simulation in distributed environments. Furthermore, the candidate will review the state of the art in prosumer/aggregator/market modelling in order to identify the challenges and possible innovations. Moreover, the candidate will review possible corporate structures and billing and sharing mechanisms of energy communities. Finally, he/she will start the design of the overall platform, starting from requirements identification and definition.
2nd year. The candidate will complete the design phase and start the implementation of the agents' intelligence. Furthermore, he/she will start to integrate the agents and the simulators in order to create the first beta version of the tool.
3rd year. The candidate will finalize the overall platform and test it in different case studies and scenarios in order to show the effects of the different corporate structures and billing and sharing mechanisms in energy communities.
Expected target publications: IEEE Transactions on Smart Grid,
IEEE Transactions on Evolutionary Computation,
IEEE Transactions on Control of Network Systems,
Environmental Modelling and Software,
JASSS,
ACM e-Energy,
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Automatic configuration techniques for improving the exploitation of emerging edge technologies in the AIoT computing field
Proposer: Enrico Macii, Gianvito Urgese
Group website: https://eda.polito.it/
Summary of the proposal: Today, the process of mapping tasks onto the hardware units available on heterogeneous embedded systems (multi-core CPUs, GPUs, FPGAs) is one of the main challenges for a software developer. Several research teams are designing new solutions to enhance compilers and programming models so as to improve the exploitation of heterogeneous architectures in the edge computing domain, also by supporting automatic resource allocation and optimisation procedures. Alongside, the adoption of heterogeneous embedded systems for the analysis of sequential data streams is increasing in industrial and biomedical applications. However, the cost and complexity of software development and the energy footprint of the resulting solutions are still not well balanced. The basic idea of this proposal is the definition and design of an integrated methodology capable of decomposing data-stream applications, defined by the user in high-level programming languages, into basic atomic tasks (kernels) that can be placed on properly selected devices of the heterogeneous embedded system. This automatic partitioning and allocation system will support the compilation of code for heterogeneous architectures, allowing even not overly specialized embedded programmers to fully exploit the advantages of this class of architectures.
Research objectives and methods: The objectives of the PhD plan are the following:
1. Develop the competence to analyse available data from product documentation and experiments, for extraction of features in complex components and systems.
2. Analyse the state-of-the-art of the compiler technology available for heterogeneous HW architectures developed for the embedded system field of applications.
3. Develop a general (machine learning based) approach for partitioning a sequential data stream application, defined by the user in high-level programming languages, into elementary computation tiles called kernels that can be placed on the properly selected devices of a target heterogeneous architecture.
4. Design a reliable methodology for placing the elementary kernels on the devices available on the target heterogeneous embedded systems, together with the generation of the inter-task communication interfaces.
5. Design proof-of-concept experiments demonstrating that the developed partitioning and allocation methodology succeeds in better exploiting the resources of heterogeneous HW by reducing the execution time and/or the power consumption of an application, without explicit instrumentation of the code by specialized embedded programmers.
6. Provide a framework for configuring heterogeneous embedded systems in an automatic way, so as to further facilitate the optimised porting of applications to the many emerging embedded system architectures. The research activities mentioned above will focus on three main areas of application (a toy mapping heuristic is sketched after the list):
- Medical and biological data stream analysis;
- Video surveillance and object recognition;
- Smart energy systems.
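To make the partitioning-and-allocation idea concrete, the Python sketch below greedily maps the kernels of a linear data-stream pipeline onto heterogeneous devices using a per-kernel cost table plus a fixed transfer penalty. The kernel names, cost figures and penalty are hypothetical; a real tool would rely on profiled costs and a global (possibly machine-learning-driven) optimiser rather than this greedy baseline.

    # Hypothetical cost table: estimated runtime (ms) of each kernel on each device.
    cost = {
        "decode":   {"cpu": 4.0, "gpu": 1.5, "fpga": 2.0},
        "filter":   {"cpu": 6.0, "gpu": 1.0, "fpga": 0.8},
        "classify": {"cpu": 9.0, "gpu": 2.0, "fpga": 5.0},
    }
    TRANSFER_MS = 0.5   # fixed penalty when consecutive kernels change device

    def greedy_mapping(pipeline):
        # Assign each kernel to the device minimising its runtime plus the
        # transfer penalty from the previous stage's device.
        mapping, prev = [], None
        for kernel in pipeline:
            best = min(cost[kernel],
                       key=lambda d: cost[kernel][d] + (TRANSFER_MS if d != prev else 0))
            mapping.append((kernel, best))
            prev = best
        return mapping

    print(greedy_mapping(["decode", "filter", "classify"]))
    # e.g. [('decode', 'gpu'), ('filter', 'gpu'), ('classify', 'gpu')]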
Outline of work plan: 1st year. The candidate will study state-of-the-art methodologies, APIs, and frameworks used for configuring embedded system platforms. Moreover, she/he will improve her/his skills in machine learning techniques for generating automatic profiling systems capable of scanning applications written in high-level programming languages and identifying the kernels (sub-optimal partitions of instructions) to be mapped on the different devices of a heterogeneous platform. In this initial period, various techniques should be considered in order to achieve first results and prove the effectiveness of the basic approaches.
2nd year. The candidate will develop a methodological approach for modelling systems and processes according to the experience gained during the first year of research. The basic structure of a user-friendly task partitioning and allocation tool should be developed and tested in at least one area of application, such as object recognition or data-stream analysis. The tool will have a modular structure, integrated with available technologies for defining and exploring program representations for machine learning on source code at the intermediate representation level (LLVM and GIMPLE). The task partitioning and allocation tool will be profiled, and preliminary results will be produced to discuss the capacity of the tool to improve the power and/or computational performance of a set of test-bench applications configured automatically for a pool of selected target heterogeneous embedded platforms.
3rd year. The candidate will apply the proposed approach to different embedded system architectures, enabling a greater generalisation of the methodology to the many target heterogeneous platforms adopted in the edge computing field for implementing AI and data-stream applications. A stable version of the automatic configuration framework will be made available to programmers of the embedded system community.
Research activities will be carried out, partly, in collaboration with the academic partners of the Arrowhead-Tools consortium and will involve industry (STMicroelectronics).
Expected target publications: The main outcome of the project will be disseminated in three international conference papers and in at least one publication in a journal of the field. In the following the possible conference and journal targets:
- IEEE/ACM International Conferences (e.g., DAC, DATE, AICAS, NICE, ISLPED, GLSVLSI, PATMOS, ISCAS, VLSI-SoC);
- IEEE/ACM Journals (e.g., TCAD, TETC, TVLSI, TCAS-I, TCAS-II, TCOMP), MDPI Journals (e.g., Electronics).
Current funded projects of the proposer related to the proposal: ECSEL Arrowhead-Tools
ECSEL MadeIn4
Possibly involved industries/companies:STMicroelectronics

Title: Design of a framework for supporting the development of new computational paradigms capable of exploiting neuromorphic hardware architectures
Proposer: Enrico Macii, Gianvito Urgese
Group website: https://eda.polito.it/
Summary of the proposal: Although initially intended for brain simulations, the adoption of emerging neuromorphic hardware architectures is also appealing in fields such as IoT edge devices, high-performance computing, and robotics. It has been proved that neuromorphic platforms provide better scalability than traditional multi-core architectures and are well suited to the class of problems that require massive parallelism and the exchange of small messages, for which neuromorphic hardware has native optimised support. Moreover, being brain-inspired, neuromorphic technologies are regarded by the scientific community as particularly suitable for low-power and adaptive applications that must analyse data in real time. However, the tools currently available in this field are still weak and miss many useful features required to support the spread of a new neuromorphic-based computational paradigm. The basic idea of this proposal is the definition and design of a high-level framework that collects simple neuromorphic models (SNM) designed to perform small general-purpose tasks compatible with the neuromorphic hardware. The SNM framework, once included in a user-friendly EDA tool, can be directly used by most users to describe complex applications that can then be easily executed on a neuromorphic platform.
Research objectives and methods: The objectives of the PhD plan are the following:
1. Develop the competence to analyse available data from product documentation and experiments, for extraction of features in complex components and systems.
2. Evaluate the potentiality of a spiking neural network (SNN), efficiently simulated on the neuromorphic platforms, when customised at the abstraction level of a flow-graph and used for implementing a general-purpose algorithm.
3. Present a general approach for generating simplified neuromorphic models implementing basic kernels that can be exploited directly by users for describing their algorithms. The abstraction level of the models will depend on the availability of software libraries supporting the neuromorphic target hardware.
4. Design a couple of proof-of-concept applications generated by combining a set of neuromorphic models which will provide outputs with a limited (i.e., acceptable) error with respect to experimental data generated by applications implemented for general-purpose architectures. Furthermore, they should also reduce the execution time and/or the power consumption.
5. Provide a framework for generating and connecting neuromorphic models in a semi-automatic way, so as to further facilitate the modelling process and the exploration of new neuromorphic-based computational paradigms.
The research activities above will focus on the implementation of algorithms in three main areas of application (a sketch of the intended composition style follows the list):
- Video surveillance and object recognition;
- Real-time data analysis from IoT and industrial applications;
- Medical and biological data stream analysis and pattern matching.
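One possible shape for the SNM framework is sketched below in Python: users compose simple neuromorphic models as a flow graph, which is then flattened into a netlist that a backend (e.g., a PyNN script targeting a neuromorphic platform) could translate into populations and projections. All class names, block names and parameters are hypothetical.

    class SNM:
        # A simple neuromorphic model: a reusable block of spiking neurons that
        # users wire together without writing neuron-level code.
        def __init__(self, name, n_neurons):
            self.name, self.n_neurons, self.inputs = name, n_neurons, []

        def connect(self, upstream, weight=1.0):
            self.inputs.append((upstream, weight))
            return self

    def compile_graph(outputs):
        # Flatten the flow graph into (source, target, weight) edges that a
        # backend could turn into populations and projections.
        netlist, seen = [], set()
        def visit(node):
            if node.name in seen:
                return
            seen.add(node.name)
            for upstream, w in node.inputs:
                visit(upstream)
                netlist.append((upstream.name, node.name, w))
        for out in outputs:
            visit(out)
        return netlist

    # A toy pattern-matching pipeline: encode -> match -> winner-take-all readout.
    encode = SNM("encoder", 128)
    match = SNM("matcher", 64).connect(encode, weight=0.5)
    vote = SNM("winner_take_all", 8).connect(match, weight=1.2)
    print(compile_graph([vote]))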
Outline of work plan: 1st year. The candidate will study state-of-the-art modelling techniques that can be adapted to the neuromorphic framework and will improve his/her skills in generating models at different levels of abstraction. In the beginning, various modelling methods should be considered, such as behavioural, physical, and SNN-based, in order to achieve first results and prove the effectiveness of the basic approach to algorithm description.
2nd year. The candidate will develop a methodological and integrated approach for modelling applications and systems, according to the experience gained during the first year of research in a multi-scenario analysis. The basic structure of a user-friendly framework supporting the new neuromorphic computational paradigm should be developed for at least one area of application, such as object recognition or pattern matching.
3rd year. The candidate will apply the proposed approach to different complex systems, enabling a greater generalisation of the methodology to different domains, for instance by analysing future investments in neuromorphic compilers that will enhance the usability of the new generations of neuromorphic hardware, soon available on the market alongside general-purpose computing units.
The research activities will be carried out in collaboration with the Human Brain Project partners @UMAN.
Expected target publications: The main outcome of the project will be disseminated in three international conference papers and in at least one publication in a journal of the neuromorphic field. In the following the possible conference and journal targets:
- IEEE/ACM International Conferences (e.g., DAC, DATE, AICAS, NICE, ISLPED, GLSVLSI, PATMOS, ISCAS, VLSI-SoC);
- IEEE/ACM Journals (e.g., TCAD, TETC, TVLSI, TCAS-I, TCAS-II, TCOMP), MDPI Journals (e.g., Electronics).
Current funded projects of the proposer related to the proposal: H2020 Human Brain Project
Possibly involved industries/companies:

Title: Co-simulation platform for real-time analysis of smart energy communities
Proposer: Enrico Macii, Lorenzo Bottaccioli, Edoardo Patti
Group website: www.eda.polito.it
Summary of the proposal: The emerging concept of smart energy societies and cities is strictly connected to heterogeneous and interlinked aspects, from energy systems to cyber-infrastructures and active prosumers. One of the key objectives of the Energy Center Lab (EC-L) is to develop instruments for planning current and future energy systems, accounting for the complexity of the various interplaying layers (physical devices for energy generation and distribution, communication infrastructures, ICT tools, market and economics, social aspects). The EC-L tackles this issue by aiming at building a virtual model made of interactive, interoperable blocks. These blocks must be designed and developed in the form of a multi-layer distributed infrastructure. Examples of systems realizing partial aspects of this infrastructure have been recently developed in the context of European research projects, such as energy management of district heating systems, smart-grid simulation, thermal building simulation systems, and renewable energy source planning. However, a comprehensive and flexible solution for planning and simulating future smart energy cities and societies is still missing. The research program aims at developing the backbone multi-model and multi-energy co-simulation infrastructure allowing to interface and interoperate real/virtual models of energy production systems, energy networks (e.g., electricity, heat, gas), communication networks and prosumer models.
Research objectives and methods: This research aims at developing a novel distributed infrastructure to model and co-simulate different Multi-Energy Systems and general-purpose scenarios by combining different technologies (both hardware and software) in a plug-and-play fashion and analysing heterogeneous information, often in real time. The final purpose consists of simulating the impact of future energy systems. Thus, the resulting infrastructure will integrate in a distributed environment heterogeneous i) data sources; ii) cyber-physical systems, i.e., Internet-of-Things devices, to retrieve/send information in real time; iii) models of energy systems; iv) real-time simulators; v) third-party services to retrieve real-time data, such as meteorological information. This infrastructure will follow modern software design patterns (e.g., microservices), and each component will adopt novel communication paradigms, such as publish/subscribe. This will ease the integration of “modules” and the links between them to create holistic simulation scenarios. The infrastructure will also enable both Hardware-in-the-Loop (HIL) and Software-in-the-Loop real-time simulations. Furthermore, the solution should be able to scale the simulation from the micro scale (e.g., dwellings, buildings) up to the macro scale (e.g., urban or regional scale) and span time scales from microseconds up to years. In a nutshell, the co-simulation platform will offer simulation as a service that can be used by different stakeholders to build and analyse new energy scenarios for short- and long-term planning activities and for testing and managing the operational status of smart energy systems. Hence the research will focus on the development of a distributed co-simulation platform capable of:
• Interconnecting and synchronizing digital real-time simulators, even when located remotely
• Integrating HIL in the co-simulation process
• Easing the integration of simulation modules by automating the code generation needed to build new simulation scenarios in a plug-and-play fashion
The resulting co-simulation platform can be exploited for the following purposes (a minimal publish/subscribe module is sketched after the list):
• Planning the evolution of the future smart multi-energy system by taking into account the operational phase
• Evaluating the effect of different policies and related customer satisfaction
• Evaluating the performances of hardware components in a realistic test-bench
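As an illustration of the publish/subscribe integration style mentioned above, the Python sketch below shows a trivial PV "simulator" module publishing its state and reacting to set-points over MQTT. It assumes the paho-mqtt package (1.x-style client API), a broker on localhost, and hypothetical topic names; a production platform would add time synchronization, discovery and schema management on top.

    import json, time
    import paho.mqtt.client as mqtt   # assumes the paho-mqtt package (1.x-style API)

    BROKER = "localhost"              # hypothetical broker address

    def on_message(client, userdata, msg):
        # React to set-points published by other co-simulation modules.
        print("received set-point:", json.loads(msg.payload))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe("community/pv/setpoint")   # hypothetical topic
    client.loop_start()

    # A trivial PV "simulator" publishing its state once per simulated step.
    for step in range(3):
        state = {"step": step, "pv_power_kw": 3.2 + 0.1 * step}
        client.publish("community/pv/state", json.dumps(state))
        time.sleep(1)
    client.loop_stop()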
Outline of work plan: 1st year. The candidate will study state-of-the-art co-simulation platforms to identify the best available solution for large-scale smart energy system simulation in distributed environments. Furthermore, the candidate will review the state of the art in hardware-in-the-loop integration and in automatic composability of scenario code, identifying challenges and possible solutions. Finally, he/she will start the design of the overall platform, starting from requirements identification and definition.
2nd year. The candidate will complete the design phase and start the implementation of the co-simulation platform, integrating HIL features with software simulators to create the first beta version of the tool. Furthermore, the candidate will start developing software solutions to ease the integration of simulation modules by automating the code generation.
3rd year. The candidate will finalize the overall platform and test it in different case studies and scenarios to show all the capabilities of the platform in terms of automatic scenario composition and integration of HIL.
Expected target publications: IEEE Transactions on Smart Grid,
IEEE Transactions on Industrial Informatics,
Parallel and Distributed Computing,
Environmental Modelling and Software,
ACM e-Energy.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Simulation and Modelling of V2X connectivity with traffic simulation
Proposer: Enrico Macii, Lorenzo Bottaccioli, Edoardo Patti
Group website: www.eda.polito.it
Summary of the proposal: The development of new Information and Communication Technologies (ICT) in smart cities and smart grids has opened new opportunities to foster novel services for energy management and energy saving in all end-use sectors, with particular emphasis on V2X connectivity. Specifically, the new generation of distribution grids will open the possibility of providing new services for both citizens and energy providers. For instance, in the electricity sector, V2X connectivity could enable demand flexibility that could help provide grid stability by exploiting Demand Response and Demand Side Management strategies. Demand flexibility as a tool for balancing the grid is becoming necessary from distribution to transmission networks to cope with the increasing penetration in the electricity mix of distributed energy sources, especially variable renewable power sources such as wind and photovoltaic. Alongside, in the transportation sector, ICT solutions have opened the way to new transportation services like car-sharing and car-pooling. All the services in both the transportation and electricity sectors require the engagement of users, who must become active consumers. Furthermore, such services are widening the marketplace to create new economic values that were not possible before. In particular, for the transportation sector, an analysis that takes into account the traffic situation and its variations is required.
Research objectives and methods: This research aims at developing novel simulation tools for smart city/smart grid scenarios that exploit the Agent-Based Modelling (ABM) approach to evaluate novel strategies to manage V2X connectivity together with traffic simulation. The candidate will develop an ABM simulator that will provide a realistic virtual city where different scenarios will be executed. The ABM should be based on real data, demand profiles and traffic patterns. Furthermore, the simulation framework should be flexible and extendable, so that i) it can be improved with new data from the field; ii) it can be interfaced with other simulation layers (i.e., physical grid simulators, communication simulators); iii) it can interact with external tools executing real policies (such as energy aggregation). This simulator will be a useful tool to analyse how V2X connectivity and the associated services impact both social behaviours and traffic. It will also help understand the impact of new actors and companies (e.g., sharing companies) on both the marketplace and society, again by analysing social behaviours and traffic conditions. In a nutshell, the ABM simulator will simulate both traffic variations and the possible advantages of V2X connectivity strategies in a smart grid context. This ABM simulator will be designed and developed to span different spatial-temporal resolutions. All the software entities will be coupled with external simulators of grids and energy sources in a plug-and-play fashion, so as to be ready for integration with external simulators and platforms. This will enhance the resulting ABM framework, also unlocking hardware-in-the-loop features.
The outcomes of this research will be an agent-based modelling tool that can be exploited for the purposes listed below (a toy agent model is sketched after the list):
• Simulating V2X connectivity considering traffic conditions
• Evaluating the effect of different policies and related customer satisfaction
• Evaluating the diffusion and acceptance of demand flexibility strategies
• Evaluating new business models for future companies and services
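A minimal flavour of the intended ABM is sketched below in Python: a fleet of toy EV agents drains batteries faster under congestion and offers V2G flexibility while parked. All rates, probabilities and the rush-hour pattern are invented for illustration; the real simulator would be calibrated on real demand profiles and traffic patterns.

    import random

    class EVAgent:
        # A toy electric-vehicle agent: drives, drains the battery, and decides
        # whether to offer V2G flexibility while parked.
        def __init__(self, soc=0.8, willing=0.5):
            self.soc = soc              # state of charge, 0..1
            self.parked = False
            self.willing = willing      # probability of accepting a V2G request

        def step(self, traffic_factor, grid_needs_power):
            if not self.parked:
                self.soc -= 0.02 * traffic_factor   # congestion raises consumption
                self.parked = random.random() < 0.3
                return 0.0
            if grid_needs_power and self.soc > 0.5 and random.random() < self.willing:
                self.soc -= 0.05
                return 3.0              # kW fed back to the grid this step
            self.soc = min(1.0, self.soc + 0.05)    # otherwise charge
            return 0.0

    fleet = [EVAgent(soc=random.uniform(0.4, 0.9)) for _ in range(100)]
    for hour in range(24):
        traffic = 1.5 if 7 <= hour <= 9 or 17 <= hour <= 19 else 1.0
        flex = sum(ev.step(traffic, grid_needs_power=hour in (18, 19)) for ev in fleet)
        print(hour, round(flex, 1), "kW of V2G flexibility")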
Outline of work plan: 1st year. The candidate will study state-of-the-art agent-based modelling tools to identify the best available solution for large-scale traffic simulation in distributed environments. Furthermore, the candidate will review the state of the art of V2X connectivity to identify the challenges and possible innovations. Moreover, the candidate will focus on reviewing Artificial Intelligence algorithms for simulating traffic conditions and variations and for estimating EV flexibility and users' preferences. Finally, he/she will start the design of the overall ABM framework and algorithms, starting from requirements identification and definition.
2nd year. The candidate will terminate the design phase and will start the implementation of the agents' intelligence and test the first version of the proposed solution.
3rd year. The candidate will finalize the overall ABM framework and AI algorithms and test them in different case studies and scenarios to assess the impact of V2X connection strategies and novel business models.
Expected target publications: IEEE Transactions on Smart Grid,
IEEE Transactions on Evolutionary Computation,
IEEE Transactions on Vehicular Technology,
IEEE Transactions on Control of Network Systems,
Environmental Modelling and Software,
Journal of Artificial Societies and Social Simulation,
ACM e-Energy
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Co-simulation platform for realtime analysis of V2X connectivity
Proposer: Enrico Macii, Lorenzo Bottaccioli, Edoardo Patti
Group website: www.eda.polito.it
Summary of the proposal: The emerging concept of smart grids and cities is strictly connected to heterogeneous and interlinked aspects, from energy systems to cyber-infrastructures and active prosumers. In this context, Electric Vehicles (EVs) can play a crucial role in demand-side management and grid demand response. However, the connection of EVs with the rest of the grid and the city must be tested, evaluated and planned, and cannot be left to chance. This research addresses this issue by aiming at building a co-simulation platform that acts as a virtual environment made of interactive, interoperable blocks. These blocks must be designed and developed in the form of a multi-layer distributed infrastructure. Moreover, the platform will integrate both HW and SW model blocks. Examples of systems realizing partial aspects of this infrastructure have been recently developed in the context of European research projects, such as energy management of district heating systems, smart-grid simulation, thermal building simulation systems, and renewable energy source planning. However, a flexible solution for evaluating and testing V2X connectivity in smart grids and cities is still missing. The research program aims at developing the backbone co-simulation infrastructure allowing to interface and interoperate real/virtual models of EV connectivity with energy production systems, energy networks (e.g., electricity, heat, gas), communication networks and prosumer models.
Research objectives and methods: This research aims at developing a novel distributed infrastructure to model and co-simulate different V2X connection scenarios by combining different technologies (both hardware and software) in a plug-and-play fashion and analysing heterogeneous information, often in real time. The final purpose consists of simulating the implications of V2X connectivity for future energy systems and cities. Thus, the resulting infrastructure will integrate into a distributed environment heterogeneous i) data sources; ii) cyber-physical systems, i.e., Internet-of-Things devices, to retrieve/send information in real time; iii) models of the different components of EVs; iv) energy system models; v) real-time simulators; vi) third-party services to retrieve real-time data. This infrastructure will follow modern software design patterns (e.g., microservices), and each component will adopt novel communication paradigms, such as publish/subscribe. This will ease the integration of “modules” and the links between them to create holistic simulation scenarios. The infrastructure will also enable both Hardware-in-the-Loop (HIL) and Software-in-the-Loop real-time simulations. Furthermore, the solution should be able to scale the simulation from the micro scale (e.g., a single EV) up to the macro scale (e.g., urban or regional scale) and span time scales from microseconds up to years. In a nutshell, the co-simulation platform will offer simulation as a service that can be used by different stakeholders to build and analyse new scenarios for short- and long-term activities and for testing and managing the operational status of smart energy systems.
Hence the research will focus on the development of a distributed co-simulation platform capable of:
• Interconnecting and synchronizing digital real-time simulators, even located remotely
• Integrating Hardware-in-the-Loop (HIL) in the co-simulation process
• Easing the integration of simulation modules by automating the code generation needed to build new simulation scenarios in a plug-and-play fashion
The outcomes of this research will be a distributed co-simulation platform that can be exploited for the following purposes (a minimal lockstep synchronization sketch follows the list):
• Planning the evolution of future V2X connectivity
• Evaluating the effect of different policies and related customer satisfaction
• Evaluating the performances of hardware components in a realistic test-bench
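One simple synchronization scheme for the first capability above is a conservative lockstep master that advances every simulator by one macro step and then exchanges outputs, as sketched below in Python. The simulator stubs and signal names are hypothetical stand-ins for remote real-time simulators or HIL devices.

    class SimulatorStub:
        # Stand-in for a remote real-time simulator or HIL device; a real
        # implementation would wrap a network connection to the actual tool.
        def __init__(self, name):
            self.name, self.time = name, 0.0

        def step(self, until, inputs):
            self.time = until
            return {self.name + "_out": inputs.get("u", 0.0) + self.time}

    def cosimulate(sims, t_end, dt_macro):
        # Conservative lockstep master: advance every simulator by one macro
        # step, then exchange outputs as the next step's inputs.
        t, signals = 0.0, {}
        while t < t_end:
            t += dt_macro
            for sim in sims:
                signals.update(sim.step(t, {"u": signals.get("u", 0.0)}))
            signals["u"] = signals.get("ev_out", 0.0)  # feed the EV output to the grid model
        return signals

    print(cosimulate([SimulatorStub("ev"), SimulatorStub("grid")], t_end=1.0, dt_macro=0.5))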
Outline of work plan: 1st year. The candidate will study state-of-the-art co-simulation platforms to identify the best available solution for V2X connectivity simulation in distributed environments. Furthermore, the candidate will review the state of the art in hardware-in-the-loop integration and in automatic composability of scenario code, identifying challenges and possible solutions. Finally, he/she will start the design of the overall platform, starting from requirements identification and definition.
2nd year. The candidate will complete the design phase and start the implementation of the co-simulation platform, integrating HIL features with software simulators to create the first beta version of the tool. Furthermore, the candidate will start developing software solutions to ease the integration of simulation modules by automating the code generation.
3rd year. The candidate will finalize the overall platform and test it in different case studies and scenarios to show all the capabilities of the platform in terms of automatic scenario composition and integration of HIL.
Expected target publications: IEEE Transactions on Smart Grid
IEEE Transactions on Vehicular Technology
IEEE Transactions on Industrial Informatics
Parallel and Distributed Computing
Environmental Modelling and Software
ACM e-Energy.
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Digital Twin development for Industrial Cyber Physical Systems
Proposer: Enrico Macii, Sara Vinco
Group website: https://eda.polito.it/
Summary of the proposal: The Industry 4.0 paradigm requires tightly monitoring physical systems, with the goal of enhancing the control and optimization of their operation. This requires the development of models of the physical systems that are tightly connected with the cyber infrastructure to allow run-time monitoring and reaction. Such a coupled interdependency between the cyber and the physical parts goes under the name of digital twin. The goal of this project is to develop models of physical behaviors, to analyze the modeling approaches, and to allow the application of energy-efficient algorithms and solutions, with a focus on the estimation of energy consumption and efficiency. The digital twin will exploit models developed both top-down, with formal and physical descriptions, and bottom-up, with machine learning algorithms that extrapolate information from sensed data to predict future evolution. The joint application of such techniques with the other Industry 4.0 enabling technologies (big data, IoT, …) makes it possible to foresee the state of health of the physical system. In this perspective, energy consumption is a crucial aspect: it is related to costs and to the waste of natural resources, it provides insights on possible malfunctioning and inefficiencies (thus enabling production optimization), and it supports predictive maintenance.
Research objectives and methods: The objectives of the Ph.D. plan are the following:
- Developing competences in physical system modeling, including machine learning approaches, simulation languages like SystemC-AMS and Simulink, and simulation frameworks like PlantSimulation
- Identifying state-of-the-art modeling solutions suitable for the identified application scenario, to critically analyze current solutions and their limitations
- Providing top-down and bottom-up modeling solutions for a specific application case study, to allow energy efficiency optimization and state of health improvement of the monitored system, identifying a scalable and effective trade-off between simulation accuracy and real-time simulation speed
- Applying CAD languages and methodologies to a new application scenario, extending the available solutions
- Analyzing the most suitable technologies, in terms of data storage, network infrastructure, and framework support, for the development of the digital twin of interest
The identified solutions can be used to (a small bottom-up modeling sketch follows the list):
- Optimize the energy consumption of the monitored system, by optimizing the production settings and the production recipes that control physical plant evolution
- Monitor and enhance the state of health of the physical system, through the early identification of unexpected behaviors
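As a small example of the bottom-up modeling path, the Python sketch below fits a regression model on synthetic sensed data and exposes it as the energy-prediction block of a digital twin. The sensor features, coefficients and data are fabricated purely for illustration.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Fabricated sensed data from a machining line: spindle speed (rpm),
    # feed rate (mm/min), ambient temperature (C) -> energy per part (kWh).
    rng = np.random.default_rng(1)
    X = np.column_stack([rng.uniform(1000, 6000, 500),
                         rng.uniform(50, 400, 500),
                         rng.uniform(18, 30, 500)])
    energy = 0.0004 * X[:, 0] + 0.002 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 0.05, 500)

    model = LinearRegression().fit(X, energy)

    def predict_energy(recipe):
        # Bottom-up block of the digital twin: predicts the energy consumption
        # of a candidate production recipe before it is deployed on the plant.
        return float(model.predict(np.asarray(recipe).reshape(1, -1))[0])

    print(predict_energy([3000, 200, 22]))   # kWh estimate for one recipe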
Outline of work plan: 1st year. The candidate will study state-of-the-art techniques to simulate physical systems, including i) the main simulation languages (SystemC-AMS, Simulink) and ii) machine learning frameworks adopted in the context of physical plant monitoring. The candidate will work on available data sets to get acquainted with the modeling and machine learning techniques, by selecting the scenario of interest from contexts defined by projects involving industrial partners. The candidate will explore the technologies available at the state-of-the-art for the development of the underlying framework of digital twin development (e.g., data storage, network connectivity).
2nd year. Based on the outcomes of the first year, the candidate will identify a case study for complete digital twin development, and apply both the top-down and the bottom-up modeling strategies to the domain of interest, to achieve effective monitoring. This will require an introduction to the Industry 4.0 enabling technologies (big data, IoT) to enable successful interaction with the monitored physical system. This will allow the publication of conference papers at modeling conferences and the submission of one journal paper.
3rd year. The methodology and the algorithms developed in the previous years will be validated to optimize the evolution of the physical system, both in terms of energy consumption and of state of health preservation, to achieve lifelong improvement of the system. The focus will thus be on the application of overall strategies for monitoring the system state of health and optimization of production and of energy efficiency. This will lead to the publication of at least one journal paper.
Expected target publications: The work developed within this project can be submitted to conferences (DATE, DAC, …) and to journals like:
- IEEE Transactions on Computers
- IEEE Transactions on CAD
- IEEE Transactions on Industrial Informatics
- ACM Transactions on Embedded Computing Systems
- ACM Transactions on Design Automation of Electronic Systems
- IEEE Transactions on Emerging Topics in Computing
Current funded projects of the proposer related to the proposal: H2020 Serena, H2020 Manuela, H2020 Mesomorph
Possibly involved industries/companies:FCA, Comau, Prima Industrie, SILK FAW

Title: Blockchain Technology for Intelligent Vehicles
Proposer: Valentina Gatteschi
Group website: http://grains.polito.it/
Summary of the proposal: The decade 2020-2030 is expected to be characterized by vehicles that are more and more intelligent and capable of acquiring/processing increasing amounts of data from sensors positioned on board, from other vehicles or from the outside world. Another technology undergoing great development is blockchain. Blockchain enables the nodes of a network to carry out transactions in a decentralized manner. The potential of this technology is increased by smart contracts, programs autonomously executed when certain conditions are met. The use of blockchain technology in the context of intelligent vehicles can bring several benefits, such as: a) the use of a shared ledger to store the digital twin of vehicles/components provides transparency to the end user, e.g., on the history of a vehicle; b) the underlying cryptographic mechanisms guarantee the integrity of the data exchanged between vehicles and/or external sensors, improving security against cyber-attacks; c) the ability to send transactions (or to schedule them based on certain conditions) allows vehicles to exchange money autonomously. The Ph.D. candidate will investigate the impact of blockchain technology in the context of intelligent vehicles and of the automotive supply chain, and study/propose solutions enabling decentralization in this field.
Research objectives and methods: The objective of this research activity is to investigate the impact of blockchain technology in the context of intelligent vehicles and of the automotive supply chain, in order to analyze the added value and limitations of this technology, as well as the most promising solutions for disintermediated communication between vehicles and for data storage. In particular, the research activities will initially aim at implementing solutions for the traceability of components and of the previous history of the vehicle, which can be used by different actors (manufacturers, dealers, owners and repair shops) and which should guarantee transparency, safety, scalability and privacy. The solutions will enable the tracking of planned/unplanned maintenance activities on the vehicle and the automatic verification of warranty coverage. Subsequently, this research area will be extended to include, among the above actors, the producers of spare parts, in order to evaluate the impact that blockchain technology can have on the automotive supply chain, and to design/develop blockchain-based solutions that will be tested by means of simulations or use in real contexts. A third area of research will be related to electric vehicles. In particular, the research activities in this area will be devoted to designing blockchain-based systems for the acquisition of data on vehicle consumption, for the certification of the residual life of batteries, for electricity trading, and for the use of (intelligent) electric vehicles to balance the grid. In this case, the blockchain could also be used as a means to allow vehicles and devices to autonomously make payments. Finally, a last area of research will concern the exploitation of the blockchain to manage, in a disintermediated manner, the distribution of costs/profits between owners/users/maintainers/insurers of (fleets of) vehicles.
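The traceability use case can be illustrated with a minimal hash-linked ledger of maintenance events, sketched below in plain Python: each record commits to the previous one, so later tampering is detectable. A real system would anchor such records on a public or consortium blockchain via smart contracts; the VIN and events are invented.

    import hashlib, json, time

    def record_hash(record):
        # Hash only the committed fields, in a canonical (sorted-key) encoding.
        body = {k: record[k] for k in ("payload", "prev", "ts")}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def add_record(chain, payload):
        # Append a maintenance event; each record commits to the previous hash.
        prev = chain[-1]["hash"] if chain else "0" * 64
        record = {"payload": payload, "prev": prev, "ts": time.time()}
        record["hash"] = record_hash(record)
        chain.append(record)

    def verify(chain):
        # Recompute every hash and check the links; any edit breaks the chain.
        return all(rec["hash"] == record_hash(rec) and
                   (i == 0 or rec["prev"] == chain[i - 1]["hash"])
                   for i, rec in enumerate(chain))

    ledger = []
    add_record(ledger, {"vin": "ZFA00000000000001", "event": "brake pad replacement", "km": 41200})
    add_record(ledger, {"vin": "ZFA00000000000001", "event": "scheduled service", "km": 60150})
    print(verify(ledger))   # True; edit any field and verification fails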
Outline of work plan: The research work plan of the three-year Ph.D. program is the following:
- First year: the candidate will analyze the state of the art of available blockchain platforms and investigate which architectures are most suitable for intelligent vehicles, also considering aspects related to security, scalability, and privacy. After this analysis, the candidate will identify the advantages, disadvantages, and limitations of existing solutions and define approaches to overcome them. The candidate will also study existing standards, in order to identify the most suitable ones for representing the digital twin of physical vehicles/components (e.g., the MOBI standard) and the data acquired by sensors, and for transferring value between humans or between machines;
- Second year: the candidate will address specific use cases, e.g., related to the traceability of components and of the previous history of the vehicle, or covering the whole supply chain. He/she will investigate how different types of blockchain (e.g., public and private) can be combined with each other, and how a blockchain-based solution can be integrated with systems for the peer-to-peer storage of large amounts of data, such as the InterPlanetary File System (IPFS); a sketch of this off-chain storage pattern follows this work plan;
- Third year: the candidate will address additional use cases involving data acquired in real time by vehicles, e.g., their consumption or position for electricity trading. He/she will research how smart contracts can automate the interactions among vehicles and will also investigate how simplified, yet effective, visualizations can be created for management purposes, e.g., to represent the status of the vehicle(s) and the information saved on the blockchain.
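As a companion to the second-year activity, the sketch below illustrates the off-chain storage pattern in plain Python: only a content digest is anchored "on-chain", while the bulky payload lives in peer-to-peer storage such as IPFS. A real IPFS identifier is a multihash CID; the plain SHA-256 digest and the two dictionaries standing in for the chain and for IPFS are simplifying assumptions.

    import hashlib
    import json

    on_chain = []   # stand-in for a smart contract's storage
    off_chain = {}  # stand-in for IPFS: digest -> content

    def publish(payload):
        """Store the payload off-chain and anchor only its digest on-chain."""
        blob = json.dumps(payload, sort_keys=True).encode()
        digest = hashlib.sha256(blob).hexdigest()  # IPFS would use a multihash CID
        off_chain[digest] = blob
        on_chain.append(digest)
        return digest

    def verify(digest):
        """Anyone can recompute the digest to prove the off-chain copy is intact."""
        blob = off_chain.get(digest)
        return blob is not None and hashlib.sha256(blob).hexdigest() == digest

    cid = publish({"vin": "WVWZZZ1JZXW000001", "event": "brake pads replaced"})
    print(verify(cid))  # True; tampering with off_chain[cid] flips this to False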
Expected target publications: Journals:
- IEEE Transactions on Services Computing
- IEEE Transactions on Knowledge and Data Engineering
- IEEE Transactions on Vehicular Technology
- IEEE Access
- Future Generation Computer Systems
Conferences:
- IEEE International Conference on Decentralized Applications and Infrastructures
- IEEE International Conference on Blockchain and Cryptocurrency
- IEEE International Conference on Blockchain
- IEEE COMPSAC Conference
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: REPLY

Title: Blockchain Technology for European Data Spaces
Proposer: Valentina Gatteschi
Group website: http://grains.polito.it/
Summary of the proposal: Today, we are witnessing a sea change in the way companies in our hyper-connected digital world leverage data to create more value behind their business processes. These players are paving the way for data spaces where data, ranging from heterogeneous datasets to continuous data streams, can be bought, sold, or traded. The term "data space" refers to a type of data relationship between trusted partners, each of whom applies the same high standards and rules to the storage and sharing of their data. The main characteristic of a data space is that data are not stored centrally; instead, they are stored at the source and are shared (via semantic interoperability) only when necessary. Recent advancements in data marketplace management include the exploitation of blockchain technology to protect and mediate access to the data streams submitted by data providers, automating buy, sell, and rent operations through smart contracts. Blockchain-based data marketplace systems generally add an upper layer of crypto tokens to incentivize good behavior according to the needs of every stakeholder. Given this scenario of massive evolution and growth of data spaces/marketplaces, the Ph.D. candidate will study, investigate, and propose novel approaches for decentralized data spaces/marketplaces.
Research objectives and methods: The objective of the research activity carried out by the Ph.D. candidate is to study, investigate, and propose novel approaches for decentralized data marketplaces enabling the sharing of different types of data (e.g., private, B2B, or IoT data), addressing several aspects (e.g., security, privacy, governance) and innovative sectors (e.g., energy, mobility, medical research). In particular, the candidate will devote particular attention to investigating the impact of blockchain technology in the context of data spaces, analyzing the added value and limitations of this technology, and identifying the most promising solutions for exchanging data and AI algorithms. The research will be organized in three phases, devoted to accomplishing the following objectives:
- First phase: study, design, and implementation of a blockchain architecture leveraging smart contracts, able to guarantee proof-of-origin for data and AI models and to offer an on-chain identity management, authentication, and verification service, while ensuring transparency, security, scalability, and privacy (a proof-of-origin sketch follows this list).
- Second phase: development of proofs of concept for specific use cases (e.g., the energy data space), including the use of sensors to gather data. These proofs of concept will enable data to be sold, rented, or shared in a deterministic, trustable, and transparent way, contributing to Explainable AI (XAI) and enabling clear data provenance and audit trails.
- Third phase: testing of different token-based incentive models to investigate a new economy for data and AI models, through the design and implementation of decentralized and open blockchain-based marketplaces.
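A minimal sketch of the proof-of-origin idea in the first phase, assuming the widely used pyca/cryptography package: the data provider signs a digest of the payload, and any marketplace participant can verify origin and integrity before paying for the data. The choice of Ed25519 and all names below are illustrative assumptions.

    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The provider's public key would be bound to an on-chain identity,
    # e.g., through a registry smart contract (assumption).
    provider_key = Ed25519PrivateKey.generate()
    public_key = provider_key.public_key()

    payload = json.dumps({"sensor": "meter-42", "kWh": 3.7}, sort_keys=True).encode()
    digest = hashlib.sha256(payload).digest()
    signature = provider_key.sign(digest)  # digest + signature get anchored on-chain

    # Any consumer can verify origin and integrity of the off-chain payload.
    try:
        public_key.verify(signature, digest)
        print("proof-of-origin verified")
    except InvalidSignature:
        print("payload or signature tampered with")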
Outline of work plan: - First year: the research activities will aim at studying, designing, and implementing a blockchain architecture able to guarantee proof-of-origin for data and AI models, leveraging smart contracts and providing an on-chain identity management, authentication, and verification service, while ensuring transparency, security, scalability, and privacy. The benefits and threats of different blockchain configurations will be investigated in depth. A detailed benchmark of existing DLT-based data marketplaces will also be performed, and the DLT layer will be designed.
- Second year: the Ph.D. candidate will address a specific use case, e.g., the energy data space. Research activities will be extended to include the analysis of governance and transaction models for trusted and secure devices. Methodologies to register data gathered from sensors in a tamper-proof manner, along with their origin (or identity), to guarantee a "Proof-of-Origin" will be investigated. Mechanisms offering trustable data provisioning, with an efficiency/review mechanism to evaluate the quality of the data provided, will be analyzed. The data provisioning process will be coded in a specific data transaction model and automated using smart contracts. Aggregating and packaging the data using well-known off-chain data patterns (e.g., hashing the content, signing it, and storing the digest on the blockchain) will regulate data selling, renting, and sharing in a deterministic, trustable, and transparent way, contributing to Explainable AI (XAI) and enabling clear data provenance and audit trails.
- Third year: different token-based incentive models will be analyzed and tested to identify the most suitable one for the valorization and monetization of data and AI models. This activity will investigate a new economy for data and AI models through the design and implementation of decentralized and open blockchain-based marketplaces, whose foundations will lie on blockchain-based tokenization mechanisms (i.e., tokenomics) providing incentives, in the form of crypto tokens, to data providers. A toy comparison of incentive rules is sketched below.
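To ground the third-year comparison, a toy simulation along the following lines could serve as a starting point; the three provider profiles, the reward rules, and all parameters are hypothetical.

    import random

    random.seed(7)

    def simulate(reward_rule, n_rounds=1000):
        """Toy market: three providers submit data of differing quality;
        the rule maps a quality score in [0, 1] to a token reward."""
        balances = {"honest": 0.0, "sloppy": 0.0, "adversarial": 0.0}
        quality = {"honest": (0.8, 1.0), "sloppy": (0.4, 0.7), "adversarial": (0.0, 0.3)}
        for _ in range(n_rounds):
            for provider, (lo, hi) in quality.items():
                balances[provider] += reward_rule(random.uniform(lo, hi))
        return balances

    rules = {
        "flat": lambda q: 1.0,                             # pays everyone the same
        "linear": lambda q: q,                             # pays in proportion to quality
        "thresholded": lambda q: q if q >= 0.5 else -0.5,  # slashes low quality
    }
    for name, rule in rules.items():
        print(name, {k: round(v) for k, v in simulate(rule).items()})

Under the flat rule, the adversarial provider earns as much as the honest one; the thresholded rule is the only one that makes low-quality submissions unprofitable, which is the kind of property such an analysis would look for.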
Expected target publications: Journals:
- IEEE Transactions on Services Computing
- IEEE Transactions on Knowledge and Data Engineering
- IEEE Access
- Future Generation Computer Systems
Conferences:
- IEEE International Conference on Decentralized Applications and Infrastructures
- IEEE International Conference on Blockchain and Cryptocurrency
- IEEE International Conference on Blockchain
- IEEE COMPSAC Conference
Current funded projects of the proposer related to the proposal: DATA-CELLAR – Data hub for the Creation of Energy communities at Local Level and to Advance Research on them – HORIZON-CL5-2021-D3-01-01
Possibly involved industries/companies: LINKS Foundation

Title: Early markers to predict the evolution from Mild Cognitive Impairment to Overt Dementia, and support to differential diagnosis of diverse forms of dementia
Proposer: Gabriella Olmo
Group website: www.sysbio.polito.it
Summary of the proposal: Dementias are the most prevalent neurodegenerative diseases, and a huge human tragedy. Even though no therapy currently exists that can reverse or slow down the inevitable progression of the disease, novel drugs are under clinical trial. However, preliminary results suggest that they may be effective only if administered at a very early, preclinical stage. Hence, the identification of early markers, possibly non-invasive and suitable for mass screening, is of enormous interest. To this aim, Artificial Intelligence may represent a valuable tool, in that it can be fed with heterogeneous data (fMRI, voice/video, sleep, biochemical/genomic) and output a measure of the progression from health to disease. Another issue is the differential diagnosis between Alzheimer's disease and frontotemporal dementia. This is not trivial in the early stages of the disease, yet it may drive the prognosis and the patient's treatment. It is known that the emotional responses of patients to stressful situations differ between the two conditions. Hence, the analysis of stress/emotional involvement, which is not considered in diagnostic scales, may help to better identify the patient's condition and support differential diagnosis. Last but not least, the caregivers themselves may be subjects of the study, as their quality of life is heavily impaired.
Research objectives and methods: This activity will be carried out in close cooperation with the Molinette Hospital, Department of Neurology, and the LINKS Foundation. The intermediate objectives are:
• To set up a clinical protocol, implemented during the outpatient visits of persons affected by Mild Cognitive Impairment (MCI), and collect data deemed relevant to identifying markers of the progression towards overt dementia. The Ph.D. student will be asked to suggest the proper selection and positioning of the instrumentation (e.g., video cameras, inertial sensors, EEG/ECG electrodes, …). The possible integration of diverse sources of data (fMRI, biochemical, genomic) will be considered.
• To complement the abovementioned protocol, implemented during the outpatient visits of persons affected either by MCI or overt dementia, and evaluate the emotional response to stressful situations, such as the visit itself (hence, during the administration of cognitive assessment tests such as the MoCA, Montreal Cognitive Assessment). Emotional aspects are seldom evaluated in dementia patients; however, clinicians agree that these issues, besides being useful for prognosis and differential diagnosis, are also important for setting up a proper caring strategy. Other ad-hoc tasks may be added to the clinical protocol, e.g., watching short video clips characterized by different emotional content.
• To support the clinical staff in the data collection (indicatively 5 patients per month) by properly managing the instruments.
• To implement, test, and validate AI algorithms to: obtain preliminary information on possible early markers of disease progression (also using longitudinal data already available at the Molinette Hospital); measure the stress/emotional conditions of patients and caregivers; and support the differential diagnosis between Alzheimer's disease and frontotemporal dementia (a toy baseline pipeline is sketched after these objectives).
The final objective is the design and preliminary implementation, on a limited yet significant number of patients, of an AI-based tool, based on data collected during outpatient visits and/or available in the health records, to support clinicians in the early identification and follow-up of dementias.
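As an illustration of the kind of pipeline envisaged in the last objective, the sketch below cross-validates a baseline classifier on synthetic stand-in features; the feature semantics, the choice of a random forest, and all data are placeholders with no clinical meaning.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for per-visit features (e.g., MoCA score, sleep
    # efficiency, gait variability); real features come from the protocol data.
    n_patients = 200
    X = rng.normal(size=(n_patients, 3))
    # Toy label "converts to overt dementia", loosely tied to two features.
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"baseline cross-validated AUC: {scores.mean():.2f}")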
Outline of work plan: YEAR 1.
Task 0: analysis of the state of the art on: early markers of the evolution from MCI to dementia; early physiological signs and symptoms (poor sleep quality; akinesia/hyperkinesia; wandering; voice and facial expression anomalies); technological instrumentation and methods to measure them.
Task 1: definition of the clinical protocol to measure relevant data during outpatient visits (e.g., video recordings, number and positioning of IMUs or other sensors, networking set-up). Arrangement of the tasks to be administered to the patients (in close cooperation with the clinical staff).
Task 2: analysis of the state of the art and integration into the former protocol of methods for emotion recognition (e.g., from video sequences recorded during the visit). Proper integration of the tasks to be administered to the patients.
Task 3: Preliminary selection of AI algorithms.
Task 4: data collection (after approval by the ethics committee). The patients will be selected and invited by the clinical staff, based on proper inclusion and exclusion criteria. This task will likely be continued in successive years.
YEAR 2.
Task 5: AI algorithm implementation, validation and testing for early progression markers.
Task 6: AI algorithm implementation, validation and testing for emotion recognition and stress evaluation (on patients, caregivers, and properly identified control subjects, who will be invited by the clinical staff).
YEAR 3.
Task 7: AI algorithm implementation, validation and testing for differential diagnosis between early Alzheimer's disease and frontotemporal dementia.
Task 8: critical analysis of the results, proposal of integration of the algorithms with heterogeneous data present in the patient’s clinical records.
Expected result: AI-based tool to help early diagnosis, differential diagnosis and follow-up of patients at risk of/with overt dementia.
N.B.: Due to the complexity and novelty of this topic, the task description is necessarily preliminary and may be subject to modifications depending on the early results obtained.
Expected target publications: We plan to have at least two journal papers published per year.
Target journals:
IEEE Transactions on Biomedical Engineering
IEEE Journal on Biomedical and Health Informatics
IEEE Access
IEEE Journal of Translational Engineering in Health and Medicine
MDPI Sensors
Frontiers in Neurology
Journal of Alzheimer's Disease
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: Links Foundation has expressed their interest in this project.
Molinette Hospital, Department of Neuroscience “Rita Levi Montalcini”, and Molinette Hospital, Neurology Department (dementias, sleep disorders). The Department has explicitly asked to start a collaboration on this topic, and has expressed its willingness to set up the protocol, recruit patients and control subjects, and evaluate the clinical relevance of the results.
STMicroelectronics has expressed the intention of making its proprietary SensorTile platform available for the project:
www.st.com/en/evaluation-tools/stevastlkt01v1.html?ecmp=tt7201_us_social_may2018

Title: Trusted Execution in a networked environment
Proposer: Antonio Lioy
Group website: https://security.polito.it
Summary of the proposal: Modern ICT infrastructures go beyond traditional boundaries: computing and storage are no longer available only at the core but, with edge and fog computing, personal devices, and the IoT, several distributed components contribute to data processing and storage. Similarly, networks no longer just switch packets but have evolved into intelligent infrastructures able to perform several tasks: SDN (Software-Defined Networking) permits intelligent packet processing driven by an external supervisor (the controller and the SDN applications), while NFV (Network Function Virtualization) implements on demand the processing (firewall, VPN, …) that once required dedicated appliances. In this scenario, nodes may run applications with different levels of trustworthiness and security. For example, a smartphone may run a gaming app alongside an Internet banking one; similarly, IoT nodes may run user-defined applications alongside the management applications of the network provider. This calls for a robust execution environment, where applications with different trustworthiness levels are kept segregated so that untrusted ones cannot interfere with trusted ones (e.g., to access cryptographic keys or sensitive information). The final objective is the design and test of a TEE (Trusted Execution Environment) based on a hardware root of trust and open-source components, suitable for different hardware platforms.
Research objectives and methods: The main objectives of the research activity are the following:
1. Identify, design, and implement appropriate hardware elements to support a secure and trusted execution environment.
2. Adapt an existing open-source trusted execution framework to the custom hardware designed.
3. Implement a device with the hardware and software components designed, and develop suitable applications for it, providing security features in various domains, such as an IoT gateway or a multi-tenant NFV node (a conceptual sketch of the underlying root-of-trust idea follows).
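To illustrate the root-of-trust concept behind objectives 1 and 2, the sketch below mimics a measured-boot chain in plain Python: each boot stage is hashed into a running measurement, which a verifier compares against a known-good reference, in the spirit of Trusted Computing PCR extension. All stage names and values are illustrative assumptions.

    import hashlib

    def extend(measurement, component):
        """PCR-style extend: new = H(old || H(component)).
        Chaining means no later stage can hide earlier tampering."""
        return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

    # Boot chain as measured on the device (firmware -> bootloader -> OS -> app).
    stages = [b"firmware-v1", b"bootloader-v3", b"os-image-5.15", b"trusted-app"]
    measurement = b"\x00" * 32  # initial register value
    for stage in stages:
        measurement = extend(measurement, stage)

    # The verifier recomputes the expected value; in practice it would use
    # reference digests from a signed manifest rather than the binaries.
    expected = b"\x00" * 32
    for stage in stages:
        expected = extend(expected, stage)

    print("attestation:", "trusted" if measurement == expected else "untrusted")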
Outline of work plan: The first year will be spent studying the existing paradigms for trusted execution, such as Intel TXT and SGX, the Trusted Computing platform, and ARM TrustZone. The PhD student will also analyse modern security paradigms applied to software infrastructures. During this year, the student should also attend most of the mandatory courses of the PhD programme and submit at least one conference paper. During the second year, the PhD student will design a custom approach for trusted execution on an open hardware platform (e.g., RISC-V) enriched with security components implemented in FPGA. The application domain will be oriented to modern infrastructures, i.e., personal/edge/fog devices that support lightweight virtualisation technologies. By the end of the second year, the student should have started preparing a journal publication on the topic and have submitted at least one more conference paper. Finally, the third year will be devoted to the implementation and evaluation of the proposed solution, compared with existing ones. By the end of this final year, a publication in a high-impact journal shall be achieved.
Expected target publications: IEEE Security, Springer Security and Privacy, Elsevier Computers and Security, Future Generation Computer Systems
Current funded projects of the proposer related to the proposal: H2020 SPIRS (Secure Platform for ICT systems Rooted at the Silicon manufacturing process)
https://www.spirs-project.eu/
Possibly involved industries/companies: Telefonica, Hewlett-Packard