Research proposals

PhD selection is performed twice a year through an open process. The call for admission for this year is available at

Grants from Politecnico di Torino and other bodies are available.

In the following, we report the list of topics for new PhD students in Computer and Control Engineering. This list may be further updated if new positions become available.

If you are interested in one of the topics, please contact the related Proposer.

- Augmented Cognition: Design Principles for Human-Computer Interaction
- Optimizing Computing and Communication Infrastructure for Service Robotics
- Bilevel stochastic optimization problems for Urban Mobility and City Logistics
- Image processing for machine vision applications
- Learning Analytics
- Summarization of heterogeneous data
- Models and methods for Lean Business and Innovation Management
- Deep Neural Network models for speaker recognition
- Optimization models and algorithms for synchro-modal network problems
- Cross-layer Lifetime Adaptable Systems
- Functional safety of electronic systems in autonomous and semi-autonomous cars
- Conversational agents meet knowledge graphs
- Big Crisis Data Analytics
- Automatic and Context-aware Orchestration for Fog Computing Services
- New methods for set-membership system identification and data-driven robust cont...
- Quality and policy assurance in critical distributed systems
- Virtual, Augmented and Mixed Reality for education and training
- Human-centered visualization and interaction methods for Smart Data applications
- Deep learning techniques for fine-grained object detection and classification in...
- Efficient Functional Model-Driven Networking
- Algorithms and software infrastructures for manufacturing process modelling from...
- Dynamic and Adaptive User Interfaces for the Internet of Things
- ICT for Urban Sustainability
- Hardware assisted security in embedded systems
- Industrial machine learning
- Modelling cancer evolution through the development of artificial intelligence-ba...
- Biometrics in the wild
- Context-Aware Power Management of Mobile Devices
- Blockchain for Physical Internet and Sustainable Logistics
- Multi-user collaborative environments in Augmented and Virtual Reality
- Towards algorithmic transparency
- Key management techniques in Wireless Sensor Networks
- Dependability of next generation computer networks
- CityScience: Data science for smart cities
- Promoting Diversity in Evolutionary Algorithms
- Security and Privacy for Big Data
- Distributed software methods and platforms for modelling and co-simulation of mu...

Detailed descriptions

Title: Augmented Cognition: Design Principles for Human-Computer Interaction
Proposer: Andrea Sanna
Group website:
Summary of the proposal: The goal of Augmented Cognition is to extend, by an order of magnitude or more, the information management capacity of the human-computer integral by developing and demonstrating quantifiable enhancements to human cognitive ability in diverse, stressful, operational environments. Augmented Cognition explores the interaction of cognitive, perceptual, neurological, and digital domains to develop improved performance application concepts. The field of Augmented Cognition has evolved over the past decade from its origins in the Defense Advanced Research Projects Agency (DARPA)-funded research program, emphasizing modulation of closed-loop human-computer interactions within operational environments, to address a broader scope of domains, contexts, and science and technology challenges. Among these are challenges related to the underlying theoretical and empirical research questions, as well as the application of advances in the field within contexts such as training and education.
This Ph.D. proposal addresses students interested in investigating issues related to the design of natural, multimodal, augmented and innovative interfaces to advance new paradigms of user-machine interaction. A particular focus will be devoted to human-robot interaction in the context of Industry 4.0.
Research objectives and methods: The main goals of this Ph.D. are the design, implementation and validation of new human-machine interaction paradigms able to efficiently and effectively support the user when interacting with different kinds of machines, ranging from robots to home appliances.
The analysis of cognitive load plays a key role in the design of user interfaces (UIs), as the capability of users to perform "critical tasks" can be deeply affected both by the way information is organized and presented and by the way users are able to interact with the system.
For this reason, a main topic is the evaluation of the user experience (UX). The UX includes a person’s perceptions of system aspects such as utility, ease of use and efficiency. User experience may be considered subjective in nature to the degree that it is about individual perception and thought with respect to the system. User experience is dynamic, as it is constantly modified over time due to changing usage circumstances and changes to individual systems, as well as to the wider usage context in which they can be found. An effective measure of both the user experience and the cognitive load is the starting point for a dynamic adaptation of the UI.
Augmented reality and augmented cognition can provide UI designers with new tools to imagine novel interaction paradigms in many different contexts: military, education, industry, surgery, and so on. This Ph.D. proposal aims to investigate how these augmented UIs can transform our everyday life.
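As a toy illustration of how a measured cognitive-load score could drive dynamic UI adaptation, the following sketch combines two behavioral signals into a load estimate and selects a presentation mode. All signal names, weights and thresholds are hypothetical placeholders, not part of the proposal; in the actual research they would be fitted from user studies.

```python
# Illustrative sketch (hypothetical weights/thresholds): estimate cognitive
# load from behavioral signals, then adapt the UI's information density.

def cognitive_load(error_rate: float, response_time_s: float) -> float:
    """Toy load estimate in [0, 1] from two behavioral signals."""
    # Weighted combination; real work would fit these weights from user studies.
    return min(1.0, 0.6 * error_rate + 0.4 * min(response_time_s / 10.0, 1.0))

def adapt_ui(load: float) -> str:
    """Pick a UI presentation mode based on the estimated load."""
    if load > 0.7:
        return "minimal"   # hide secondary info, enlarge critical widgets
    if load > 0.4:
        return "standard"
    return "detailed"      # low load: full information density is acceptable

print(adapt_ui(cognitive_load(error_rate=0.5, response_time_s=9.0)))
```

In a real system the load estimate would come from validated UX/cognitive-load instruments rather than two ad hoc signals, but the closed loop (measure, then adapt) is the same.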
Outline of work plan: The work plan will be defined year by year; for the first year of activity, it is expected the candidate will address the following points:
• Analysis of the state of the art (in this phase, both the measurement of the UX and of the cognitive load have to be investigated; moreover, the candidate has to analyze augmented and virtual reality technologies). The candidate could complete this analysis step by attending specific Ph.D. courses related to these topics.
• Objective definition (after the analysis step, the candidate has to be able to identify challenging open problems, thus defining concrete objectives for the next steps).
• Methods (concurrently with the objective definition step, the research methodology also has to be chosen; the right mix between theoretical formulation and experimental tests has to be identified).
• Expected outcome (research papers, scientific contributions, significance, potential applications).
The next years will be mainly focused on the design, implementation and test phases.
Expected target publications: IEEE Transactions on Human-Machine Systems
IEEE Transactions on Visualization and Computer Graphics
ACM Transactions on Computer-Human Interaction
International Journal of Human–Computer Interaction
Current funded projects of the proposer related to the proposal: HuManS (Fabbrica intelligente)
Possibly involved industries/companies: COMAU, FCA, CRF, CNH

Title: Optimizing Computing and Communication Infrastructure for Service Robotics
Proposer: Enrico Masala, Fulvio Risso
Group website:
Summary of the proposal: Service robotics (which includes drones, rovers, etc.) is expected to grow significantly in the next few years, fueled by the large number of innovations in the field and the decreasing trend in the cost of hardware.
However, it is not yet completely clear how to use such robots in the most effective way to create innovative services, e.g., alone or in coordination, how they should interact with neighboring devices (e.g., other robots, the physical world), and more.
In this context, the activity proposed in this PhD proposal will address the processing, storage and communication issues that may arise in the service robotics scenario, in particular when such objects experience intermittent and/or delayed network connections. This peculiar operating context will have an impact on different aspects of computing, since the device cannot rely on remote resources or services, which may not always be reachable. This requires the capability to carefully select the services/resources the robot relies upon, the ability to survive in case of (temporary) network issues, and in general self-adaptation and self-healing capabilities. On the other hand, this may require a carefully designed infrastructure that is able to limit the impact of the above issue, e.g., by adopting a forwarding mechanism derived from the Delay-Tolerant Networks (DTN) paradigm.
In this PhD proposal, among the many service robotics application fields, a few will be identified, analyzed in detail and addressed, in coordination with the activity carried out by the Interdepartmental Center “PIC4SeR”.
Research objectives and methods: First, among the many service robotics application fields, a few will be identified, analyzed in detail and addressed, in coordination with the activity carried out by the Interdepartmental Center “PIC4SeR”. As a starting point, applications will include, but will not be limited to, so-called precision agriculture, where efficient coordination and communication between robots (aerial and ground vehicles) and cloud services is required. Moreover, it is also important to perform smart and targeted data acquisition activities in an autonomous or semi-autonomous fashion and to communicate results in a timely manner to control and supervision operators or centers.
The activity proposed in this PhD proposal will address the processing, storage and communication issues that may arise in the service robotics scenario, in particular when such objects experience intermittent and/or delayed network connections. This harsh and (often) unpredictable operating context may require specific strategies for service delivery (i.e., to guarantee service resiliency) while optimizing resource consumption; this involves problems such as where to save data, how to communicate with neighboring nodes, how to encode information, how to cope with remote services (e.g., in the cloud) that are not always reachable, and where to run elementary service components.
In this respect, fog computing may represent a fundamental paradigm to be considered, as it proposes to move computational (and storage) loads from the cloud towards the edge of the network, and potentially onto the robotic devices themselves, and vice versa.
Research in this area is ongoing. For instance, in the context of fog computing, it is still an open research question how to automatically perform optimal allocation and migration of computational loads so that application requirements, especially in terms of throughput and latency, are satisfied, as well as costs are minimized. Another example involves data communication protocols that need to address temporary loss of connectivity, as well as prioritization of traffic which is important from the point of view of achieving the goals identified in each specific application scenario.
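As a minimal sketch of the allocation question above (where to run computational loads so that latency requirements are met while costs are minimized), the following toy code places tasks on fog/cloud nodes with a greedy first-fit heuristic. All node and task names, capacities, latencies and costs are invented for illustration; the research would target optimal allocation and migration strategies, not this heuristic.

```python
# Hypothetical sketch: greedy placement of robotic tasks on fog/cloud nodes.
# Each task is placed on the cheapest node that satisfies its latency bound
# and still has CPU capacity left. Real work would study optimal allocation.

def place_tasks(tasks, nodes):
    """Greedy first-fit: cheapest feasible node per task, by increasing cost."""
    placement = {}
    free = {n["name"]: n["cpu"] for n in nodes}  # remaining CPU per node
    for task in tasks:
        for node in sorted(nodes, key=lambda n: n["cost"]):
            feasible = (node["latency_ms"] <= task["max_latency_ms"]
                        and free[node["name"]] >= task["cpu"])
            if feasible:
                placement[task["name"]] = node["name"]
                free[node["name"]] -= task["cpu"]
                break
    return placement

# Invented example: on-board compute, an edge node, and a distant cloud.
nodes = [{"name": "robot", "cpu": 2,  "latency_ms": 1,   "cost": 5},
         {"name": "edge",  "cpu": 4,  "latency_ms": 10,  "cost": 2},
         {"name": "cloud", "cpu": 64, "latency_ms": 120, "cost": 1}]
tasks = [{"name": "obstacle_avoidance", "cpu": 1, "max_latency_ms": 5},
         {"name": "map_update",         "cpu": 2, "max_latency_ms": 50},
         {"name": "batch_analytics",    "cpu": 8, "max_latency_ms": 1000}]
print(place_tasks(tasks, nodes))
```

The heuristic naturally pushes latency-critical work onto the robot and bulk work onto the cloud, which is the qualitative behavior an optimal fog-computing allocator should also exhibit.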

In a nutshell, the main objective of the PhD activities will be the development of algorithms, technologies and systems that can effectively optimize the performance of the considered systems in terms of traditional communication metrics (e.g., bandwidth, latency) and application-layer metrics (user’s utility) which will be defined on a case-by-case basis depending on the application, while at the same time minimizing all costs (e.g., computational complexity, number of required devices and their economic cost).

Such objectives will be achieved by using both theoretical and practical approaches. In fact, the adopted methodologies will include the development, wherever possible, of theoretical frameworks to model the whole system, to investigate the problem from an analytical point of view. The resulting insight will then be validated in practical cases by analyzing the performance of the system with simulations and real-world experiments. Both proponent research groups have extensive expertise in such fields.
In this regard, cooperation with companies will also be sought to facilitate the migration of the developed algorithms and technologies to prototypes that can then be effectively tested in real application scenarios.
Outline of work plan: In the first year, the PhD candidate will familiarize with the requirements of the application scenario, as well as the relevant communication architectures and protocols (e.g., for delay tolerant networks). The initial activity will address the processing, storage and communication issues that may arise in the service robotics scenario, in particular when such objects experience intermittent and/or delayed network connections. It is expected that this initial investigation activity will lead to conference publications with heuristic optimization strategies.
In the second year, building on the theoretical knowledge already present in the research group, new techniques will be developed and tested to demonstrate, by means of simulations, how the proposed techniques optimize resource consumption, addressing problems such as where to save data, how to communicate with neighboring nodes, how to encode information, how to cope with remote services (e.g., in the cloud) that are not always reachable, and where to run elementary service components. Such activities are expected to yield one or more journal publications.
In the third year, the activity will potentially be expanded in the direction of developing prototypes suitable for running experiments in real-world scenarios, also taking advantage of the facilities provided by the PIC4SeR Center. The experimental activity will provide insight to remedy the shortcomings identified in the modeling framework. Refined versions of such frameworks will target journal publications, providing added value in terms of the verified adherence of the results to reality.
Expected target publications: Possible targets for research publications (well known to the proposer) include IEEE Transactions on Multimedia, Elsevier Signal Processing: Image Communication, IEEE/ACM Transactions on Networking, Elsevier Computer Networks, IEEE International Conference on Communications (ICC), IEEE International Conference on Image Processing, IEEE International Conference on Multimedia and Expo (ICME), IEEE INFOCOM, IEEE GLOBECOM.
Current funded projects of the proposer related to the proposal: PIC4SeR Interdepartmental Center for Service Robotics
Possibly involved industries/companies: Topcon, Tierra Telematics, Nebbiolo Technologies.

Title: Bilevel stochastic optimization problems for Urban Mobility and City Logistics
Proposer: Guido Perboli
Group website:
Summary of the proposal: The proposal considers the problem of pricing services in different application settings, including road networks and logistics operations. When dealing with real urban logistics applications, the aforementioned problems become large and are typically affected by uncertainty in some parameters (e.g., transportation costs, congestion, strong competition).
In this context, one of the most suitable approaches is to model the problem as a bilevel optimization problem.
Indeed, bilevel programs are well suited to model hierarchical decision-making processes where a decision maker (the leader) explicitly integrates the reaction of another one (the follower) into its own decision-making process.
This representation makes it possible to explicitly capture the strategic behavior of users/consumers in the pricing problem.
Unfortunately, solving large-sized bilevel optimization problems with integer variables or affected by uncertainty is still a challenge in the literature.
This research proposal aims to fill this gap, deriving new exact and heuristic methods for integer and stochastic bilevel programs.
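To make the leader/follower structure concrete, the following toy sketch (not part of the proposal) solves a tiny bilevel toll-pricing instance by brute-force enumeration: the leader sets a price, each follower reacts by choosing the cheaper route, and the leader keeps the revenue-maximizing price. All data are invented; real instances require exactly the exact and heuristic methods the proposal targets, since enumeration does not scale.

```python
# Toy bilevel pricing (illustrative only): the leader sets a toll, followers
# react, and the leader enumerates candidate prices to maximize revenue.

def follower_choice(price, value_of_time, slow_delay_cost):
    """Follower takes the tolled fast route iff the toll beats the delay cost."""
    return "tolled" if price <= value_of_time * slow_delay_cost else "free"

def leader_best_price(followers, candidate_prices):
    """Enumerate prices; for each, anticipate every follower's reaction."""
    best_price, best_revenue = None, -1
    for p in candidate_prices:
        revenue = sum(p for vot in followers
                      if follower_choice(p, vot, slow_delay_cost=1.0) == "tolled")
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price, best_revenue

# Followers' values of time (hypothetical data): higher means more willing to pay.
followers = [2.0, 3.0, 5.0, 8.0]
print(leader_best_price(followers, candidate_prices=[1, 2, 3, 4, 5, 6, 7, 8]))
```

Note the hierarchical structure: the leader's objective (revenue) depends on the followers' optimal reactions, which in turn depend on the leader's decision; this is precisely what makes bilevel programs hard even without integer variables or stochastic parameters.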

These activities will be carried out in collaboration with the ICT for City Logistics and Enterprises lab of Politecnico di Torino and the INOCS team of INRIA Lille. In particular, the PhD candidate will be co-supervised by Prof. Luce Brotcorne, leader of the INOCS group and a worldwide specialist in bilevel programming.
Research objectives and methods: The objectives of this research project are grouped into three macro-objectives:

Integer Bilevel Programs:
• Define price-setting models for last-mile logistics operations
• Develop exact and heuristic methods based on the mathematical structure of the problems for solving the aforementioned problems
Stochastic Bilevel Programs:
• Define stochastic price-setting models for network optimization
• Develop exact and heuristic methods based on the mathematical structure of the problems and on knowledge of the probability distribution of the uncertain parameters
Last Mile and City Logistics Instances:
• Test the new models and methods in real Last Mile, E-commerce and City Logistics projects in collaboration with the ICE Lab.
Outline of work plan: The PhD candidate will develop his/her research in both research centers (ICE and INOCS). In more detail, he/she will spend about half of the time at INRIA Lille.

PHASE I (I semester). The first period of the activity will be dedicated to the study of the state of the art of Integer and Stochastic Bilevel Programming, as well as of the Last Mile and City Logistics applications.

PHASE II (II and III semesters). Identification of the main methodological difficulties in the design of efficient solution methods for the Integer Bilevel models studied in this proposal. Identification of the main properties that allow the solution method to converge to high-quality solutions within a limited computational time.

PHASE III (IV and V semesters). Identification of the main methodological difficulties in the design of efficient solution methods for the Stochastic Bilevel models studied in this proposal. Identification of the main properties that allow the definition of solution methods converging to high-quality solutions within a limited computational time. In particular, the PhD candidate will focus on the specific issues arising from stochastic behavior at the first level (leader) or at the second (follower).

PHASE IV (V and VI semesters). Prototyping and case studies. In this phase, the results of Phases II and III will be applied to real case studies. Several projects are currently in progress that require experimentation and validation of the proposed solutions.
Expected target publications: • Journal of Heuristics
• Transportation Science
• Transportation Research part A-E
• Interfaces
• Omega - The International Journal of Management Science
• Management Science
• International Journal of Technology Management
• Computers & Operations Research
• European Journal of Operational Research
Current funded projects of the proposer related to the proposal: • Urban Mobility and Logistics Systems Interdepartmental Lab
• Open AGORA (Municipality of Turin)
Possibly involved industries/companies: TNT, DHL, Amazon

Title: Image processing for machine vision applications
Proposer: Bartolomeo Montrucchio
Group website:
Summary of the proposal: Machine vision applications, both in industrial and research environments, are becoming ubiquitous. Applications such as automatic recognition of materials, checking of components and verification of food quality are used in all major companies. At the same time, in research, machine vision is used to automate all aspects related to images. Open-source tools (such as ImageJ) are used by scientists to manage images coming from microscopes, telescopes and other imaging systems.
Therefore, this proposal aims to create new specific competences in this framework. This PhD proposal has the target of investigating machine vision applications by means of image processing, mainly in the following contexts:
- manufacturing (e.g. food industry)
- applied sciences, such as in biology or civil engineering
The PhD candidate will be requested to use an interdisciplinary approach, with particular interest in fast image processing techniques, since methods like artificial intelligence are often not applicable in industrial contexts due to strict cycle-time requirements (the cycle time cannot be increased).
Research objectives and methods: Image processing, in particular for machine vision, is one of the fastest-developing sectors in today's industry. Research, both pure and applied, is therefore very important, since machine vision is used to validate and manage essentially all production workflows.

The research objectives cover both manufacturing and research contexts. In manufacturing, examples are the food, tire and welding industries. Research examples are image analysis for biological purposes, analysis of thermal infrared images, and image analysis in civil engineering applications. Interdisciplinary aspects are very important, since competence in image processing alone is not sufficient without additional domain knowledge.

Methods include multispectral imaging, thermal imaging, and UV-fluorescence-induced imaging. Algorithms are used, as usual in image processing, to prepare images (e.g., flat-field correction), to segment them, to recognize objects (pattern recognition and other, also statistical, aspects) and finally to visualize results (scientific visualization). Since the basic idea is to produce fast and reliable algorithms (also for industrial applications), a strong co-design will be applied to image production (mainly by means of illumination) and image analysis, in order to avoid complex approaches where a simpler one could be more than sufficient if the image shooting details are carefully checked.
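As a minimal, hypothetical example of two of the steps mentioned above, the following sketch applies flat-field correction (to compensate uneven illumination) followed by threshold segmentation on a toy grayscale image represented as nested lists; production pipelines would of course use optimized C/C++/GPU implementations rather than pure Python.

```python
# Toy sketch of two classic preparation/segmentation steps (illustrative data).

def flat_field(raw, flat):
    """Divide out the illumination pattern, rescaled by the flat's mean level."""
    mean_flat = sum(sum(row) for row in flat) / (len(flat) * len(flat[0]))
    return [[raw[i][j] * mean_flat / flat[i][j]
             for j in range(len(raw[0]))] for i in range(len(raw))]

def segment(img, threshold):
    """Binary mask: 1 where the corrected pixel exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

raw  = [[40, 200], [60, 210]]   # raw acquisition (left column poorly lit)
flat = [[50, 100], [50, 100]]   # image of a uniform target (illumination map)
corrected = flat_field(raw, flat)
print(segment(corrected, threshold=100))
```

The point of the co-design argument above is visible even in this toy: with a well-characterized illumination (the flat image), a trivial global threshold suffices where a raw image would have required a more complex adaptive method.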

The ideal candidate should have good programming skills, mainly in C/C++ and Java, and should know well the main image processing techniques, in particular those that can be parallelized. In fact, in many cases it may be necessary to write an optimized GPU implementation, since real-time performance is often required in machine vision, especially in industrial applications.
Outline of work plan: The work plan is structured in the three years of the PhD program:
1- In the first year, the PhD student should improve his/her knowledge of image processing, mainly covering aspects not seen in his/her previous curriculum; he/she should also take most of the required Politecnico courses during the first year. At least one or two conference papers will be submitted during the first year, and the conference works will be presented by the PhD student himself/herself.
2- In the second year, the work will focus both on designing and implementing new algorithms and on preparing a first journal paper, together with another conference paper. Interdisciplinary aspects will also be considered, and teaching credits will be finalized.
3- In the third year, the work will be completed with at least one publication in a selected journal. Participation in the preparation of proposals for funded projects will be required. If possible, the candidate will also be asked to participate in writing an international patent.
Expected target publications: The target publications will be the main conferences and journals related to image processing, computer vision and specifically machine vision. Also interdisciplinary conferences and journals linked to the activities will be considered. Since parallel implementations (mainly with GPU) could be considered, also journals related to parallel computing could be considered. Journals (and conferences) will be selected mainly among those from IEEE, ACM, Elsevier, Springer and considering their indexes and coherence with 09/H1 sector. Examples are (journals):
IEEE Transactions on Image Processing
Elsevier Pattern Recognition
IEEE Transactions on Parallel and Distributed Systems
ACM Transactions on Graphics
Current funded projects of the proposer related to the proposal: The proposer currently has two funded projects with important manufacturing companies (one of them is Magneti Marelli) that can be considered related to the proposal; both concern machine vision. Moreover, there is a project (in which the proposer is task leader of one WP) for the so-called Fabbrica Intelligente, whose purpose is also to help industries within the Industry 4.0 framework; this third project is with Ferrero.
Possibly involved industries/companies: None at the moment. Many different industries/companies could be involved in this proposal during the PhD period. It is important to note that industries often prefer patents to publications for reasons of industrial property rights. For this reason it will be important to also consider patents, where appropriate, given the current Italian National Scientific Qualification requirements.

Title: Learning Analytics
Proposer: Laura Farinetti
Group website:
Summary of the proposal: Learning Analytics is defined as the measurement, collection, analysis and reporting of data about learners and their contexts, with the purpose of understanding and optimizing learning in the environments in which it occurs.
The ability to collect, analyze, categorize and integrate large amounts of heterogeneous educational data coming from different sources (e.g., students’ interaction with Learning Management Systems, learners’ data, performance data, portfolios, surveys, statistics, …) offers the chance to create an informed environment able to reinforce and personalize learning. This is a challenge that educational institutions have to face today, in any public or private sector, and at any level.
The PhD proposal investigates novel and promising strategies to mine and exploit educational data, with the purpose of proposing and validating solutions that can positively impact the quality and the effectiveness of learning.
Research objectives and methods: Learning analytics is a specific area of data mining in which the data relate to the educational environment.
The PhD research will focus on the main technological challenges related to learning analytics, with techniques borrowed from the data mining and the artificial intelligence domains.
Specifically, research activities will address the following issues:
- Reporting data: how to collect and summarize large sets of heterogeneous historical data coming from different sources.
- User modeling and profiling: how to model learner’s knowledge, behavior, motivation, experience, satisfaction, and how to cluster users into similar groups.
- Analyzing trends: how to identify historical trends and correlations.
- Predictive analytics: how to predict future student behavior and performance, based on past patterns.
Research results will be applied and validated in a number of educational settings, with the objective to improve learning effectiveness by:
- Detecting and correcting undesirable learner behaviors.
- Identifying “at risk” students to prevent their drop out.
- Personalizing the learning process and recommending learning resources and/or actions.
- Increasing reflection and awareness by providing learning data overviews through data visualization tools.
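As a hedged illustration of the at-risk prediction goal above, the following sketch labels a student by the nearest centroid of two historical groups over two toy features (LMS logins per week, average grade). The features, data and labels are invented for illustration; the actual research would use far richer models and real educational data.

```python
# Hypothetical nearest-centroid classifier for "at risk" student detection.
# Features: (LMS logins per week, average grade). All data are invented.

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def predict(student, history):
    """Label a student with the class of the nearest historical centroid."""
    centroids = {label: centroid(pts) for label, pts in history.items()}
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(student, centroids[lbl]))

history = {
    "completed": [(9, 27), (7, 24), (8, 26)],  # (logins/week, avg grade)
    "dropped":   [(1, 18), (2, 20), (0, 19)],
}
print(predict((2, 19), history))  # low engagement, low grades
```

Even this toy shows the pipeline the proposal envisions: historical data define behavioral profiles, and an incoming student's data are matched against them early enough that a drop-out intervention is still possible.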
Outline of work plan: Phase 1 (months 1-6): study of the state of the art in learning analytics: data mining and artificial intelligence techniques, innovative applications.
Phase 2 (months 6-12): definition of the main objective of the research: identification of a challenging educational objective, identification of all the relevant educational data, and identification of innovative technical strategies.
Phase 3 (months 12-24): design and implementation of the experimental setting: data gathering, data integration, user modeling and profiling, trends analysis, predictive algorithms application; preliminary validation.
Phase 4 (months 24-36): complete demonstrator, user testing, reflections on learning effectiveness.
During all the phases of the research, the PhD candidate will have the chance to cooperate with other international academic institutions and with companies in the area of training, and to attend top quality conferences.
Expected target publications: IEEE Transactions on Learning Technologies
IEEE Transactions on Emerging Topics in Computing
IEEE Intelligent Systems
IEEE Transactions on Education
ACM Transactions on Computing Education (TOCE)
Expert Systems with Applications (Elsevier)
IEEE/ACM International Conferences on Learning Analytics, Data Mining, Learning Technologies (e.g., ACM LAK, ACM SIGCSE, ACM SIGIR, IEEE COMPSAC, IEEE FIE)
Current funded projects of the proposer related to the proposal: No currently funded project.
Possibly involved industries/companies: Possible involvement: FCA – Training EMEA.

Title: Summarization of heterogeneous data
Proposer: Luca Cagliero
Group website:
Summary of the proposal: In today's world, large and heterogeneous datasets are available and need to be analyzed, including textual documents (e.g. learning documents), graphs (e.g. social network data), sequential data, time series (e.g. historical stock prices), and videos (e.g., online news, learning videos).
Summarization entails extracting salient information from very large datasets (e.g. an abstract of a set of news documents, the most salient trends in stock price movements, the most relevant relationships among the users of a social network, the most salient fragments of a video). Although some attempts to apply data mining techniques to summarize specific data types have been made, extracting salient content from heterogeneous data is still an open and challenging research issue.
The candidate will investigate new data mining approaches to summarizing data collections that are different in nature, to combining the information provided by heterogeneous models, and to effectively supporting knowledge discovery from such models. Furthermore, he/she will target the development of flexible multi-type data analytics systems that allow experts to cope with heterogeneous data without needing a collection of ad hoc solutions.
Research objectives and methods: The research objectives address the following key issues.
- Study and development of new portable and scalable summarization algorithms. Available summarization algorithms can cope with only a limited number of data types, and their applicability is often limited to specific application contexts (e.g. documents written in a given language, news covering the same subject). Thus, the goal is to develop new summarization approaches that are (i) portable to different data types, (ii) scalable to big document collections, and (iii) applicable to data acquired in various application contexts.
- Integration of heterogeneous models. Data mining and machine learning models are generated with the twofold aim of (i) automatically exploring data to pinpoint the most salient information, and (ii) predicting which content is most pertinent to user-defined queries. The models generated from different data types (e.g., document abstracts, video fragments, social network descriptors) are integrated into a unified model capturing complementary facets of the analyzed data.
- Knowledge discovery. To support domain experts in decision making, the generated models need to be explored. Therefore, the goal is to effectively and efficiently explore the generated data mining models to gain insights into these heterogeneous data collections.
Research methods include the study, development, and testing of in-core, parallel, and distributed data mining algorithms that are able to cope with various data types, including textual data, videos, sequential data, time series, and graphs. Extension of existing solutions tailored to specific data types (e.g. language processing tools, graph indexing algorithms) will be initially studied and their portability to different data types will be investigated.
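As a minimal sketch of the kind of extractive-summarization baseline such research would extend, the following toy code scores sentences by the corpus frequency of their non-stopword terms and keeps the top-k. Its very language-specificity (a hand-picked English stopword set, whitespace tokenization) illustrates the portability problem discussed above.

```python
# Frequency-based extractive summarization sketch (toy data, English-specific).
from collections import Counter

STOPWORDS = {"the", "a", "of", "is", "to", "and", "in"}

def summarize(sentences, k=1):
    """Keep the k sentences whose non-stopword terms are most frequent overall."""
    words = [w.lower().strip(".,") for s in sentences for w in s.split()]
    freq = Counter(w for w in words if w not in STOPWORDS)
    def score(sentence):
        terms = [w.lower().strip(".,") for w in sentence.split()]
        return sum(freq[w] for w in terms if w not in STOPWORDS)
    # Select the k highest-scoring sentences, preserving original order.
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

docs = ["Stock prices fell sharply.",
        "Analysts linked falling prices to weak earnings.",
        "The weather was sunny."]
print(summarize(docs, k=1))
```

Making such a scorer work across videos, graphs and time series, rather than only tokenized text, is precisely the multi-type portability challenge the proposal addresses.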
Outline of work plan: PHASE I (1st year): overview of existing summarization algorithms, study of their portability to different data types, analysis of the algorithm scalability, qualitative and quantitative assessment of the existing solutions, and preliminary proposals of new summarization strategies for various types of data.
PHASE II (2nd year): study and development of new summarization algorithms, experimental evaluation on a subset of application domains (e.g. e-learning, finance, social network analysis). Exploitation of the extracted knowledge in a subset of selected contexts.
PHASE III (3rd year): Integration of different data mining models into unified solutions able to cope with multiple types of data. Study of the portability of the designed solutions to data acquired in different contexts.

During all three years the candidate will have the opportunity to attend top-quality conferences, to collaborate with researchers from different countries, and to participate in competitions on data summarization organized by renowned entities, such as
- the Text Analytics Conference tracks organized by the National Institute of Standards and Technology (NIST),
- the Data Challenges organized by the Financial Entity Identification and Information Integration (FEIII), and
- the Computational Linguistics Scientific Document Summarization Shared Task (CL-SciSumm) overseen by the National University of Singapore.
Expected target publications: Any of the following journals on Data Mining and Knowledge Discovery (KDD) or Data Analytics:

IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery from Data)
ACM TIST (Trans. on Intelligent Systems and Technology)
IEEE TETC (Trans. on Emerging Topics in Computing)
IEEE TLT (Trans. on Learning Technologies)
ACM TOIS (Trans. on Information Systems)
Information Sciences (Elsevier)

IEEE/ACM International Conferences on Data mining and Data Analytics (e.g., IEEE ICDM, ACM SIGMOD, IEEE ICDE, ACM KDD, ACM SIGIR)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Models and methods for Lean Business and Innovation Management
Proposer: Guido Perboli
Group website:
Summary of the proposal: In recent years, the Lean Business approach and Industry 4.0 have introduced new challenges in the industrial sector. Firms now operate in an intelligent environment where different actors interact, supported by technology infrastructures and tools that affect their decision-making processes and behaviors.

In this context, characterized by a high level of dynamicity, innovation, and particularly its management, becomes a relevant factor. Managing the innovation process according to Lean principles has thus spread from the startup environment to medium and large companies, and several (qualitative) methods have been introduced, including the Value Proposition Canvas, the Value Ring, and the Business Model Canvas.

A common point of all these methods (and many others commonly used in Strategic Management) is a graph-based representation of the information.

Unfortunately, for several reasons, including a clear split between qualitative and quantitative researchers, the literature lacks automated tools to support and speed up a strategist's decision process. Moreover, complex solutions, such as Smart City solutions, lead to hyperconnected graph representations that are too complex to manage with a purely expert-based approach. In fact, the solution strategy must be simplified to derive an actionable business strategy.

The aim of this research proposal is, first, to map the strategy tools that are graph- and taxonomy-based and to introduce combinatorial algorithms able to generate strategic plans automatically. Second, a Decision Support System (DSS) incorporating the developed algorithms will be provided.
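To illustrate the graph-based representation mentioned above, the sketch below models a fragment of a Value Proposition Canvas as a small bipartite graph and mechanically flags customer pains not addressed by any pain reliever. All element names are hypothetical, and a real DSS would operate on far richer hyperconnected graphs:

```python
# Hypothetical canvas fragment: pain relievers mapped to the pains they address.
canvas = {
    "pain relievers": {
        "automated reporting": {"manual paperwork", "slow feedback"},
        "flat-fee pricing": {"unpredictable costs"},
    },
    "pains": {"manual paperwork", "slow feedback",
              "unpredictable costs", "vendor lock-in"},
}

def uncovered_pains(canvas):
    """Return customer pains not reached by any pain reliever edge."""
    covered = set()
    for addressed in canvas["pain relievers"].values():
        covered |= addressed
    return canvas["pains"] - covered
```

Even this toy check (here, `uncovered_pains(canvas)` flags "vendor lock-in") is a graph coverage question; on realistic canvases with hundreds of interlinked elements, such questions become the combinatorial optimization problems the proposal targets.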

These activities will be carried out in collaboration with the ICT for City Logistics and Enterprises (ICE) lab of Politecnico di Torino and Istituto Superiore Mario Boella.
Research objectives and methods: The present state of the art in this field has several shortcomings. First, most methodologies focus on only one specific step of the innovation process (business idea and value proposition, or the operational level), failing to manage the entire process (idea definition, translation of ideas into strategic decisions, implementation of strategic ideas in concrete actions). Second, even though several strategy tools and methods are clearly graph-based, and in large innovation projects the outcome of these tools is clearly not manageable by hand even by an expert, no specific algorithm is available in the literature to support this process.

The objectives of this research project are grouped into three macro-objectives below. Management Science/Operations Research objectives:
• Creation of a logic model for the innovation process. In particular, we will start with the Lean Business GUEST methodology (a methodology developed at Politecnico and already used by about 50 companies);
• Analysis of the most recent strategy tools and their representation as graphs. In detail, we will focus on the Value Proposition Canvas and the Balanced Scorecard.
Technological objectives:
• Definition of the combinatorial graph-based problems for deriving strategy plans from a draft of a Value Proposition Canvas and a Balanced Scorecard;
• Definition of the algorithms for the Graph-based Combinatorial Optimization problems;
• Identification of KPIs and analytics able to measure the overall process in real settings.
Testing objectives:
• Field testing of solutions in collaboration with the ICE Lab and I3P.

The ideal candidate has a strong knowledge of web and mobile development, expertise in Python and Java, and a strong motivation to develop his/her managerial skills.
Outline of work plan: PHASE I (1st semester). The first period of the activity will be dedicated to the study of the state of the art of Lean Startup and Lean Business methodologies, with particular emphasis on the user engagement, the strategy tools and Key Enabling Technologies (KETs) points of view.

Phase II (2nd and 3rd semester). Mapping of the strategic tools with a graph-based representation. Definition of the underlying graph problems and definition of the combinatorial optimization algorithms (exact and heuristics). The ideal starting point will be, for their inner structure, the Value Proposition Canvas, and the Balanced Scorecard.

Phase III (3rd and 4th semester). ICT infrastructure implementation (with ICE support) and creation of the software prototypes.

Phase IV (5th-6th semester). Test of solutions in real pilots.
Expected target publications: • Transportation Research
• Interfaces
• Omega - The International Journal of Management Science
• Management Science
• International Journal of Technology Management
• Business Process Management Journal
• IT Professional
Current funded projects of the proposer related to the proposal: • Open Agorà (JoL/DAUIN)
• SynchroNET
Possibly involved industries/companies: Amazon (the Proposer is a member of the Amazon Innovation Award)
Ennova (startup of I3P with 500 workers)

Title: Deep Neural Network models for speaker recognition
Proposer: Pietro Laface
Group website:
Summary of the proposal: State-of-the-art systems in speaker recognition are based on Gaussian Mixture Models (GMMs), where a speaker model is represented by a supervector stacking the GMM means. To handle utterances coming from different channels or environments, the best solutions rely on different forms of Factor Analysis (FA), which allows a compact representation of a speaker or channel model to be obtained as a point in a low-dimensional subspace, the "identity vector" or i-vector. Very good performance has been obtained using i-vectors and classifiers based on Probabilistic Linear Discriminant Analysis (PLDA), which evaluate the likelihood ratio between the "same speaker" and "different speakers" hypotheses for a pair of i-vectors.
Another successful approach using i-vectors is based on discriminative classifiers, in particular Pairwise Support Vector Machines.
Recent research has introduced DNN models to extract alternative compact representations with promising, yet not competitive, results.

In this framework, this proposal aims at developing speaker identification systems, exploring new models and techniques that allow better estimating the i-vectors, or similar compact representations of speaker voice segments, and classifiers that are more accurate and robust to noisy and short utterances.
Research objectives and methods: The objectives of this proposal are to devise new approaches for the following topics:
- DNN based utterance modeling
We plan to investigate different approaches for extracting high-level representations of the utterances. In particular, attention will be devoted to exploiting features derived from Deep Neural Networks, which can provide noise-robust information useful for compensating the effect of noise at the spectral level.
- Classification using non-linear models
We have recently proposed a successful non-linear probabilistic approach to i-vector classification. We plan to extend it to the classification of DNN based utterance representation, and possibly to combine the extraction and classification into an end-to-end system.
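For context, the simplest classical scoring rule for a pair of i-vectors is plain cosine similarity with a threshold on the score. The sketch below shows that baseline only; it is not the PLDA, pairwise-SVM, or non-linear classifiers the proposal builds on, and the vectors and threshold are illustrative:

```python
import math

def cosine_score(iv1, iv2):
    """Cosine similarity between two i-vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(iv1, iv2))
    norm1 = math.sqrt(sum(a * a for a in iv1))
    norm2 = math.sqrt(sum(b * b for b in iv2))
    return dot / (norm1 * norm2)

def same_speaker(iv1, iv2, threshold=0.7):
    # Accept the "same speaker" hypothesis when the score clears a
    # threshold; real systems calibrate this on held-out data.
    return cosine_score(iv1, iv2) >= threshold
```

The research aims precisely at replacing such fixed geometric scoring with learned, noise-robust representations and classifiers.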
Outline of work plan: Phase 1 (1st year):
- Study of the literature
- Study of advanced features of the Python language and of distributed computing
- Study of the computing environment and speaker recognition tools developed in recent years by our group
- Development and integration of Deep Neural Network models in the framework of a GMM/PLDA system.
Phase 2 (2nd year):
- Development of DNN based extractors and classifiers to replace current approaches based on Factor Analysis.
Phase 3 (3rd year):
- Evaluation and tuning of the proposed approaches for text-dependent and text-independent speaker recognition.
Expected target publications: IEEE International Conference on Audio, Speech and Signal Processing
ISCA Interspeech Conference
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Current funded projects of the proposer related to the proposal: The Speech Recognition Group collaborates with Nuance Communications on the basis of annual contracts devoted to speaker recognition topics.
Possibly involved industries/companies: Nuance Communications

Title: Optimization models and algorithms for synchro-modal network problems
Proposer: Roberto Tadei
Group website:
Summary of the proposal: Synchro-modality (i.e., a multi-modal paradigm of transportation in which synchronization aspects play a strategic role) is becoming more and more crucial for the optimization of any supply chain. In the recent past, funded projects and academic research have shown how good synchronization in a multi-modal supply chain can lead to effective and sustainable solutions reducing emissions and costs. However, complete knowledge in this context is far from being achieved, and many research scenarios are still open. Enlarging such knowledge will be the first objective of this research. Then, learning from the supply-chain lesson, this research will try to apply the synchro-modality concept to the optimization of other kinds of network problems, where the advantages of such an approach could be substantial. The investigative work will follow the typical quantitative perspective of the Operations Research world, proposing new optimization models and algorithms to efficiently deal with realistic problems in different types of synchro-modal networks. Managerial insights in many different contexts will potentially arise.
Research objectives and methods: This research will focus on applying the concept of synchro-modality to networks in which this approach is not yet used, or on improving some aspects of networks that already use it.
Taking the supply chain as an example, there are problems characterized by high uncertainty in some parameters (travel times, delivery times), geocoded large-scale data (detailed multi-modal maps), and large instances (thousands of stops). Moreover, they must be solved within a limited computational time. For example, a parcel delivery company needs to solve a 4000-delivery problem in 15 minutes, while the methods in the literature take hours to manage just 1000 deliveries.
Several issues are still open in the literature. Transportation problems are based on networks. Nowadays, different tools can use network (map) information and multiple modal combinations with the objective of optimizing paths between pairs of nodes. However, these tools are not flexible enough to deal with complex objectives (economic, service quality, environmental) or to be adapted to different kinds of networks.
Software able to optimize a supply chain with a synchro-modal approach will not be usable, for example, for a telecommunication network. To make that possible, we must study the new network and adapt the tools to work in a completely different environment.

The objectives of this research project are as follows:
• Identify those kinds of networks where the synchro-modal approach could bring great benefits. Identify interesting optimization problems in that context.
• Represent the network under study as a graph storing all the information needed for the actual problem optimization.
• Develop new optimization models and algorithms to efficiently deal with the identified problems for realistic instances and scenarios (or improve existing algorithms to work on different kinds of networks).

Methodologies to be used:
• Mathematical Programming
• Optimization models and Algorithms
• Graph theory
• Exact and Heuristic methods
• Stochastic and Dynamic Programming
• Combinatorial Optimization.
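A toy illustration of why synchronization matters in multi-modal routing: the sketch below runs Dijkstra over (node, mode) states, charging an extra synchronization cost whenever the transport mode changes at a node. Edge times, modes, and the cost value are all illustrative assumptions, far simpler than the stochastic, large-scale problems the research targets:

```python
import heapq

SYNC_COST = 2.0  # illustrative cost of switching transport mode at a node

def cheapest_route(graph, source, target):
    """Dijkstra over (node, mode) states: a mode change adds SYNC_COST,
    modeling the wait needed to synchronize with the next departure."""
    pq = [(0.0, source, None)]  # (cost so far, node, current mode)
    best = {}
    while pq:
        cost, node, mode = heapq.heappop(pq)
        if node == target:
            return cost
        if best.get((node, mode), float("inf")) <= cost:
            continue
        best[(node, mode)] = cost
        for dest, edge_mode, travel_time in graph.get(node, ()):
            extra = SYNC_COST if mode is not None and edge_mode != mode else 0.0
            heapq.heappush(pq, (cost + travel_time + extra, dest, edge_mode))
    return None

# Illustrative network: two modes available between three terminals.
network = {
    "A": [("B", "rail", 3.0), ("B", "truck", 4.0)],
    "B": [("C", "rail", 3.0), ("C", "truck", 1.0)],
}
```

On this instance the all-truck route wins (cost 5.0) even though rail is faster on the first leg, because switching modes at B would pay the synchronization penalty; capturing exactly this trade-off at realistic scale is the point of the proposed models.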
Outline of work plan: PHASE I
The first phase will be dedicated to studying networks where synchro-modality is already applied, with particular emphasis on the algorithms already developed. The aim for the student is to fully understand the difficulty of representing a complex network and the efficiency and effectiveness of the existing algorithms.

In the second phase, the student will study how to apply the concepts learnt during the first phase to other networks where synchro-modality could provide benefits. Models of synchro-modality problems for these networks will be provided.

In the last phase, algorithms to solve the models of phase II will be developed. These algorithms will be tested on some real-life problems.
Expected target publications: Scientific papers on some of the following journals:

• Transportation Research (part B, C, E)
• Transportation Science
• Networks
• EURO Journal in Transportation and Logistics
• Interfaces
• Omega - The International Journal of Management Science
• Management Science
• International Journal of Technology Management
Current funded projects of the proposer related to the proposal: SYNCHRO-NET project, H2020-EU.3.4.—Societal Challenges —Smart, Green and Integrated Transport, Ref. 636354.
Possibly involved industries/companies: No industries are directly involved in the research proposal so far. However, there is a concrete possibility that some international companies that are partners in the SYNCHRO-NET project (e.g., DHL Spain, COSCO Shipping Lines, Kuehne+Nagel) will be interested and involved in this research.

Title: Cross-layer Lifetime Adaptable Systems
Proposer: Stefano Di Carlo
Group website:
Summary of the proposal: Cyber-physical systems increasingly rely on a large number of hardware blocks with different architectures and programmability levels, integrated together on a single piece of silicon or system-on-chip (SOC). The goal is to efficiently execute parallel and application-specific tasks either in a general-purpose CPU or in a special-purpose core (such as GPUs, sensor processors, DSPs, or other types of co-processors).
SOCs offer a diverse set of programmability features (hardware programmability vs. software programmability) and parallelism granularity (fine-grain vs. coarse grain). There is no golden rule or systematic methodology to determine the best setup for a particular cyber-physical system and application. Depending on the application domain, software-programmable devices like GPUs, DSPs or sensor processors may prevail in terms of performance or energy-efficiency while state-of-the-art hardware reconfigurable FPGAs already penetrate domains where software programmable devices used to be the norm.
This project aims to develop a cyber-physical system + application co-design framework for the joint and dynamic optimization of the performance, energy, and resilience of low-power heterogeneous architectures during the lifetime of the system. The final system "quality of operation" will be defined as a mixed function of the three important parameters: "max performance per availability unit per watt" or "watts per max availability per performance unit". Known error-tolerance methods based on information, hardware, and software redundancy will be evaluated for different heterogeneous platforms under given performance and energy requirements for the studied application domains.
Research objectives and methods: Heterogeneity is now part of any cyber-physical system. While it offers several opportunities, it challenges the programmer to take advantage of all the available resources efficiently. The integration of many CPUs and special-purpose hardware operating in non-controlled environments and under variable, unpredictable workload conditions maximizes the probability of failure due to errors originating from the hardware itself (variability, aging, etc.) as well as from the external environment (radiation, etc.). The optimization of cyber-physical system design is now a multi-dimensional problem: maximum performance should come along with maximum resilience and energy efficiency, as well as security and dependability.
Thus, programmability and lifetime adaptability to a variable environment and workload become crucial in the design and deployment of cyber-physical systems. In this project, adaptability is understood not only as the capacity to self-tune hardware characteristics (i.e., voltage and frequency scaling) but also as offering a high degree of resilience to hardware errors, as well as availability at the application level (i.e., security).
We envision the development of a self-aware system that can autonomously adapt to the run-time conditions measured for performance, power, and reliability. Through the definition of this interface with the hardware, an application container at the OS level will match the application requirements against the hardware reports and take the appropriate decisions. Unlike virtual machines, containers have the advantage of creating an exclusive execution zone for the application while the application runs directly on the system, thus incurring minimal overhead. The presence of the container adds a degree of freedom to insert a lightweight, per-application self-aware system (in contrast to hypervisors) where runtime decisions will be taken. This system will inherently communicate with the different system layers (i.e., hardware, OS, application).
Jointly maximizing performance, energy efficiency and resilience of the complete system and application is a hot research and development challenge. While software-based resilience methods can be more suitable to software-programmable accelerator-based systems, the performance and energy overhead they incur may be unacceptable at the extreme performance scale. On the other hand, hardware reconfigurable fabrics may spare significant silicon areas for improved resilience to errors.
We consider such a co-design framework a major step forward for future cyber-physical systems, where selecting the best operating point for a particular application domain is a very hard task given the large number of parameters involved.
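The self-aware decision step envisioned above can be caricatured in a few lines: the container reads a table of operating points and picks the one that maximizes performance per watt subject to a reliability bound. All frequencies, powers, and error rates below are illustrative numbers, not real hardware data:

```python
# Illustrative operating points: (frequency GHz, power W, estimated error rate).
OPERATING_POINTS = [
    (1.0, 6.0, 0.001),
    (1.5, 7.0, 0.004),
    (2.0, 12.0, 0.012),
]

def choose_operating_point(max_error_rate):
    """Pick the point with the best performance per watt among those whose
    estimated error rate satisfies the reliability bound; if none
    qualifies, fall back to the most reliable point."""
    feasible = [p for p in OPERATING_POINTS if p[2] <= max_error_rate]
    if not feasible:
        return min(OPERATING_POINTS, key=lambda p: p[2])
    return max(feasible, key=lambda p: p[0] / p[1])
```

In the real framework, the monitor values would come from the hardware facilities developed in Year 1 and the decision logic would live inside the container engine of Year 2; the sketch only fixes the shape of the trade-off.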
Outline of work plan: Year 1

The student needs to become familiar with the use of micro-architectural models of hardware cores, which represent a candidate approach to simulating the hardware infrastructure and enabling the development of the hardware/software co-design framework. This task will include the modification of existing simulators, such as GEM5, MARS, GpuSIM, and Multi2Sim, in order to implement a set of different monitoring facilities.

As a result of this activity a complete simulation environment for a heterogeneous system will be developed.

Year 2

The second year will be dedicated to the development of the container representing the interface between the hardware and the OS. The container needs to implement algorithms able to process, at run time, information obtained from the hardware monitoring facilities and to take self-aware decisions that maximize the execution based on target optimization criteria (reliability, performance, power consumption, etc.).

This activity can be broken down into the following tasks:
- Specifications of Virtualization Layers
- Design of the system management according to QoS parameters
- Development of virtualization extensions
- Container Engine Development and Integration

Year 3

The last part of the PhD will be devoted to a massive experimental campaign to show the effectiveness of the developed framework on a set of representative applications. The experimental campaign must be carefully designed to ensure that a significant number of test cases with different requirements and complexity are properly selected.
Expected target publications: The work developed within this project can be submitted to:


Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: Intel Italia, Thales, ABB

Title: Functional safety of electronic systems in autonomous and semi-autonomous cars
Proposer: Matteo SONZA REORDA
Group website:
Summary of the proposal: The automotive area is experiencing a major change, since high-performance computing systems are increasingly used for safety-critical applications aimed now at supporting the driver (Advanced Driver Assistance Systems, or ADAS) and in the future at autonomously driving the car. This requires the ability to design and manufacture complex electronic systems with extremely high requirements in terms of functional safety, while still matching the strict requirements in terms of cost which are typical of the automotive area.
The proposed research activity aims at developing new and more effective solutions allowing electronic products (Integrated Circuits or ICs, boards, sub-systems) to be tested during the operational phase. In this way, possible faults arising in the hardware (e.g., due to aging) will be detected before they create failures and affect the system behavior.
In particular, the research activity will focus on the development of techniques able to use, at the board level and during the operational phase, the test features existing in many ICs. These features are commonly introduced by semiconductor manufacturers for testing their devices at the end of production and, with suitable changes, could also be used during the operational phase, significantly enhancing the dependability of the whole system.
Research objectives and methods: Due to a number of factors (including the lower reliability of the new semiconductor technologies, the higher complexity of the manufactured ICs, and the wide usage of electronics in safety-critical applications), semiconductor companies are forced to adopt increasingly sophisticated techniques to test their products at the end of the manufacturing process. For this purpose, they typically introduce special structures into their circuits to support the test (Design for Testability, or DfT). As an example, it is common for memories embedded in a circuit to be tested resorting to a solution known as Built-In Self-Test (BIST), which allows the memory module to test itself through additional circuitry that can be triggered from the outside and is in charge of generating test stimuli and checking the memory responses. Unfortunately, at the moment these structures can seldom be re-used for testing the same devices in the field, for example because the semiconductor company does not transfer the information about them to the system company. However, there is great potential in these DfT structures, provided that they are designed and implemented taking into account both end-of-manufacturing and in-field test usage. In particular, several issues have to be considered:
• End of manufacturing test can benefit from the presence of an external tester, which cannot be exploited during in-field test.
• In-field test normally has severe constraints in terms of duration, since it is often performed at power-on/power-off or during application idle times. Hence, DfT solutions should be designed so that the test can easily be interrupted and restarted in a later phase without impairing its effectiveness.
• In-field test is performed when the system is already in the operational phase, and a specific system configuration has been adopted. Hence, special constraints exist for the test (e.g., in terms of accessible memory).
• In-field test should focus only on those faults that can generate a critical system failure, given the system's configuration and architecture.
• There is growing concern in the automotive sector about security: DfT solutions often represent a back-door through which attacks can enter an electronic system and steal information or take control. Hence, the proposed solutions should also guarantee some protection against such attacks.
• The cost (in terms of hardware and design effort) should be compatible with the strict requirements of current automotive systems.

The proposed research activity is intended to
• revisit the existing DfT solutions, identifying the key reasons why they often cannot be effectively used for in-field test
• propose changes that allow them to overcome these limitations
• evaluate their possible integration at the board level.

In some cases, completely new solutions may also be envisaged that first fit the in-field test requirements and then comply with those of end-of-manufacturing test.
It is crucial to underline that, due to the low reliability of current semiconductor technologies, it will be unfeasible to meet the high dependability requirements of future automotive applications unless effective techniques for the in-field test of ICs and systems become available. Hence the great practical impact of this research proposal.
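The interruptibility requirement discussed above can be illustrated with a simplified March-like test element that walks the address space in fixed-size chunks, so a BIST run can be suspended at a chunk boundary (e.g., at application idle time) and resumed later. The memory model and fault check below are deliberately naive illustrations, not a real BIST design:

```python
def march_chunk(memory, start, chunk_size):
    """One resumable chunk of a simplified March-like element: at each
    address, write 0, read it back, write 1, read it back.  Returns the
    addresses that mis-compared and the address to resume from."""
    faults = []
    end = min(start + chunk_size, len(memory))
    for addr in range(start, end):
        for value in (0, 1):
            memory[addr] = value
            if memory[addr] != value:  # a real fault model is far richer
                faults.append(addr)
    return faults, end

# A fault-free 8-word memory tested in two resumable chunks.
mem = [None] * 8
faults_a, resume = march_chunk(mem, 0, 4)     # run until interrupted
faults_b, done = march_chunk(mem, resume, 4)  # resume later
```

The key point is that all state needed to resume is one address, which is what makes the test compatible with the short idle windows available in the field.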
Outline of work plan: The proposed research plan is organized in several (partly overlapping) phases
• analysis of the existing DfT solutions and of the requirements for in-field test
• identification of new DfT solutions fitting both the end-of-manufacturing and in-field scenarios
• evaluation of their effectiveness on pre-selected and representative test cases
• dissemination of the proposed techniques and gathered results.

The research plan will benefit from the many collaborations and connections existing between the research group of the proposer and several leading companies in the automotive area, such as STMicroelectronics, Infineon, Marelli, ITT, Mentor, and SPEA. The expertise of the group in areas such as space, where dependability has been a major concern for many decades, will also be crucial.
Expected target publications: Papers on selected journals (IEEE Transactions on CAD, VLSI, Computers, Reliability) and conferences (ITC, ETS, ATS, IOLTS)
Current funded projects of the proposer related to the proposal: Contracts with Magneti Marelli, SPEA, STMicroelectronics (Sanchez), Infineon (Bernardi), H2020 MaMMoTH-Up, H2020 TUTORIAL
Possibly involved industries/companies: Magneti Marelli, SPEA, STMicroelectronics, Infineon

Title: Conversational agents meet knowledge graphs
Proposer: Maurizio Morisio (Polito) and Giuseppe Rizzo (ISMB)
Group website:
Summary of the proposal: Traditional user interfaces, although graphically rich, are based on a rigid set of commands, with intelligence and flexibility on the user side only. Recently, bots have been introduced to understand commands in natural language. The challenge is to improve bots to the level of conversational agents capable of providing flexibility and intelligence on the server side too.
The state of the art is still at an early stage: bots can correctly understand the concepts and intent in free-text questions only with very low accuracy and precision, and only in very focused and standardized domains. Their performance in producing relevant answers is even worse, and their robustness across different languages is low. The automated understanding of user requests and the retrieval of the related information in order to provide a human-like answer are, as of today, two outstanding challenges. We propose a research investigation to improve the capabilities of bots up to real conversational agents, working both on natural language understanding and on answer generation for English and Italian text.
Research objectives and methods: The research is centered on two main problems: understanding requests/questions from the user in natural language (objective 1), then computing relevant answers (objective 2). In parallel, a demonstrator will be built to evaluate the approach (objective 3).

Objective 1: Natural language understanding.
Design of a deep learning approach for the automated and context-tailored understanding of user requests across different contextual domains. State-of-the-art techniques propose approaches based on recurrent neural networks trained with word embeddings and domain-specific training data. We will build upon these approaches, extending them with the concept of thread, i.e., sequences of requests and answers, pivoting on the time component and inter-sentence dependence. We aim to reach better performance, computed as F-measure, over broader domains (beyond DBpedia), and on both English and Italian. We remark that working with the Italian language is harder because of the more limited availability of open-source tools and data.

Objective 2: Answer generation. Design of a knowledge-based approach for the retrieval and filtering of information exploiting domain-specific knowledge graph embeddings. State-of-the-art approaches propose machine-learning, popularity-based (PageRank) mechanisms for the retrieval of information from databases and the generation of template-based answers. We will build upon these and introduce the notion of salience within a given context represented in knowledge graphs. We aim to reach better performance, computed as F-measure, over broader domains (beyond DBpedia), and on both English and Italian.

Objective 3: Runnable prototype (TRL 3) that is able to interact with any user to provide tailored domain-specific requests in English and Italian.

In terms of research results vs. the state of the art, we aim at:
- better precision and recall in entity and intent recognition (objective 1) and in answer generation (objective 2)
- over wider knowledge domains
- in English and in Italian
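As a trivially simple stand-in for the RNN-based intent classifier targeted by objective 1, the sketch below scores each intent by keyword overlap with the utterance; the intent names and keyword sets are hypothetical, and the point is only to fix the input/output shape of the component:

```python
# Hypothetical intents and associated keywords (a real system would learn
# these representations with recurrent networks and word embeddings).
INTENT_KEYWORDS = {
    "book_trip": {"book", "flight", "hotel", "trip"},
    "get_weather": {"weather", "rain", "forecast", "temperature"},
}

def classify_intent(utterance):
    """Return the intent with the largest keyword overlap, or None."""
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & keywords)
              for intent, keywords in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Such a bag-of-keywords baseline ignores word order and context entirely, which is exactly what the proposed thread-aware recurrent models are meant to capture.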
Outline of work plan: Year 1 (objective1, objective3): Investigation and experimentation of deep learning techniques for intent and entity classification from natural language text.

Year 2 (objective 1, objective 3): Investigation and experimentation of deep learning techniques for content-based dialogue understanding, in order to customize the processing of requests according to user profiles and contexts. The work will exploit domain-specific knowledge graphs (such as tourism, scientific literature, encyclopedias) and process both English and Italian.

Year 3 (objective 2, objective 3): Investigation and experimentation of information retrieval mechanisms to tap intelligently into domain-specific knowledge graphs (such as tourism, scientific literature, encyclopedias) and to generate natural language answers.

During the 3 years, we will continuously run in-lab benchmark validations of the approach, using well-known gold standards and comparing with state-of-the-art approaches.

The candidate will also be actively involved in an international project and a national project, and will be co-tutored daily by Politecnico di Torino and ISMB, working in a unique environment created by the blend of academia and a research center.
In addition, the candidate may spend one year abroad at a top-tier research center.
Expected target publications: 3 papers at top-tier conferences such as WWW, ISWC, RecSys, ESWC, ACL, LREC
1 journal paper in venues such as Semantic Web Journal, Knowledge-Based Systems, Intelligent Systems, Information Processing & Management
Current funded projects of the proposer related to the proposal: Crystal (Artemis JU 332830), PasTime (EIT Digital no. 1716)
Possibly involved industries/companies:Amadeus, TIM.

Title: Big Crisis Data Analytics
Proposer: Paolo Garza
Group website:
Summary of the proposal: Society as a whole is increasingly exposed to natural disasters, because extreme weather events, exacerbated by climate change, are becoming more frequent and longer-lasting. To address this global issue, advanced data analytics solutions able to cope with heterogeneous big data sources are needed. The large amount of data generated by people and automatic systems during natural hazard events (e.g., social network data, satellite images of the affected areas, images generated by drones) is typically referred to as Big Crisis Data. To transform this overload of heterogeneous data into valuable knowledge, we need to (i) integrate the data, (ii) select relevant data based on the target analysis, and (iii) extract knowledge, in near-real time or offline, by means of novel data analytics solutions. Currently, analysis is focused on a single type of data at a time (e.g., social media data or satellite images). Their integration into big data analytics systems capable of building accurate predictive and descriptive models will provide effective support for emergency management.

The PhD candidate will design, implement and evaluate big data analytics solutions able to extract insights from big crisis data.

An ongoing European research project will allow the candidate to work in a stimulating international environment.
Research objectives and methods: The main objective of the research activity will be the design of data mining and big data analytics algorithms and systems for the analysis of heterogeneous big crisis data (e.g., social media data generated by citizens and first responders during natural hazards, satellite images or drone-based images collected from areas affected by emergency events), aiming at generating predictive and descriptive models.

The main issues that will be addressed are the following.

Scalability. The amount of big crisis data has increased significantly in recent years, and some sources are individually very large (e.g., satellite images). Hence, big data solutions must be exploited to analyze them, particularly when historical data analyses are performed.

Near-real time constraint. To effectively tackle natural hazards and extreme weather events, timely responses are needed to plan emergency activities. Since large amounts of streaming data are generated (e.g., social data, environmental measurements) and their integration and analysis is extremely useful for planning emergency management activities, scalable streaming systems are needed that can process data sources in near-real time and incrementally build predictive data mining/machine learning models. Current big data streaming systems (e.g., Spark and Storm) provide limited support for incremental data mining and machine learning algorithms. Hence, novel algorithms must be designed and implemented.

Heterogeneity. Several heterogeneous sources are available. Each source represents a different facet of the analyzed event and provides important insights about it. The efficient integration of the available spatial data sources is an important issue that must be addressed in order to build more accurate predictive and descriptive models.
Outline of work plan: The work plan for the three years is organized as follows.

1st year. Analysis of state-of-the-art algorithms and data analytics frameworks for big crisis data storage and analysis. Based on this analysis, pros and cons of current solutions will be identified, and preliminary algorithms will be designed to optimize and improve the available approaches. During the first year, descriptive algorithms based on offline historical data analyses will initially be designed and validated on real data related to one specific hazard (e.g., floods), to understand how to extract useful patterns for medium- and long-term hazard management/planning actions.

2nd year. The design of incremental and real-time predictive models will be addressed during the second year. These systems will allow managing emergencies in near-real time. For instance, classification algorithms will be designed to automatically classify tweets as informative/non-informative in real time during hazards, in order to deliver only useful knowledge to first responders.
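The kind of incremental model mentioned above can be sketched with a minimal multinomial Naive Bayes classifier that is updated tweet-by-tweet rather than retrained from scratch. The labels and example tweets are invented for illustration; a real system would use richer features and a streaming framework:

```python
from collections import defaultdict
import math

class IncrementalNB:
    """Minimal incremental multinomial Naive Bayes with Laplace smoothing:
    a sketch of a model that can be updated on a stream of labeled tweets."""

    def __init__(self):
        self.class_counts = defaultdict(int)
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def partial_fit(self, text, label):
        # Incremental update: only counts are touched, no retraining.
        self.class_counts[label] += 1
        for w in text.lower().split():
            self.word_counts[label][w] += 1
            self.vocab.add(w)

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for c, n in self.class_counts.items():
            lp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in text.lower().split():
                lp += math.log((self.word_counts[c][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

clf = IncrementalNB()
clf.partial_fit("bridge collapsed on main road", "informative")
clf.partial_fit("road closed due to flood", "informative")
clf.partial_fit("nice weather today", "not_informative")
print(clf.predict("flood on the road"))
```

Each incoming labeled tweet triggers a constant-time `partial_fit`, which is what makes this family of models suitable for near-real-time streams.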

3rd year. The algorithms designed during the first two years will be improved and generalized in order to be effectively applied in different hazard domains (e.g., floods, fires, heat waves).

During the second/third year, the candidate will have the opportunity to spend a period of time abroad in a leading research center.
Expected target publications: Any of the following journals:
• IEEE Transactions on Big Data (TBD)
• IEEE Transactions on Knowledge and Data Engineering (TKDE)
• IEEE Transactions on Emerging Topics in Computing (TETC)
• ACM Transactions on Knowledge Discovery from Data (TKDD)
• Journal of Big Data

IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: I-REACT - "Improving Resilience to Emergencies through Advanced Cyber Technologies", H2020 European project
Possibly involved industries/companies:

Title: Automatic and Context-aware Orchestration for Fog Computing Services
Proposer: Fulvio Risso
Group website:
Summary of the proposal: Future (complex) software services are expected to result from the composition of multiple elementary services, each one with different characteristics and requirements, and each one possibly instantiated in a different location of the infrastructure. For instance, some of these elementary services may run on the user’s terminal, others at the edge of the network (fog computing), and others still in the cloud (cloud computing).
This PhD proposal originates from the above considerations and aims at studying the problem of automatic and context-aware discovery and orchestration of generic services, which are delivered as the composition of multiple micro-services running across the whole set of computing devices present in the available infrastructure.
The PhD candidate is expected to define algorithms for the optimal placement of services; these will model infrastructure capabilities and resources, service requirements and constraints, and the relationships between services, and will be used to determine where (elementary) services will be deployed. The model will consider both computing parameters, e.g., where to schedule a given service from the point of view of processing requirements, and data location, e.g., where to locate a service based on data-related constraints. In particular, given that future cars will feature a large set of computing devices (a main processing unit plus several tens of subsystem-specific processing units), the automatic placement of services can be envisioned within the in-car infrastructure as well. In addition, the model should consider additional constraints that are rather common in the automotive sector, such as the necessity to provide real-time services (e.g., automatic braking).
Research objectives and methods: The objective of this PhD proposal is the definition of orchestration algorithms that are able to select the optimal placement of software services under different constraints. In particular, these orchestration algorithms will take into account the following aspects:

• Infrastructure capabilities and resources: services will have to be executed on real hardware, which includes physical devices (e.g., user terminals such as smartphones), network equipment possibly able to execute generic services (such as the Cisco 819 Integrated Services Router), in-network servers (e.g., micro data centers in the points of presence of the network operator), and data centers. Capabilities (e.g., the possibility to execute VMs or Docker containers) and resources (e.g., available CPU/memory) are part of the model, as are network topology and parameters (e.g., bandwidth).
• Service requirements and constraints: the overall service can require the deployment of many elementary services, which have their own requirements (e.g., low latency) and constraints (e.g., service B requires the presence of service A). We foresee here the necessity to express both service-wide and instance-specific parameters; the former are used to define the overall behavior of the service, while the latter are used to decide the actual placement of each service on the available infrastructure.
• Relationships between services: the overall service connections can be modeled as a graph that defines the relationships between the several composing services. Relationships are definitely important as they express how services are chained one to the other, and, possibly, the characteristics of the data exchanged among them, e.g., in terms of volume and dependencies.
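A greatly simplified placement sketch can illustrate how the three aspects above interact. The node names, CPU capacities, and latency bounds below are hypothetical, and the greedy strategy is only one of many possible algorithms (the proposal mentions, e.g., game-theoretic frameworks):

```python
# Hypothetical infrastructure: (name, CPU capacity, latency in ms to the user).
nodes = [
    {"name": "car",   "cpu": 2,  "latency": 1},
    {"name": "fog",   "cpu": 4,  "latency": 10},
    {"name": "cloud", "cpu": 32, "latency": 80},
]

# Hypothetical elementary services: (name, CPU demand, max tolerated latency).
services = [
    {"name": "braking",   "cpu": 1, "max_latency": 5},
    {"name": "nav",       "cpu": 2, "max_latency": 50},
    {"name": "analytics", "cpu": 8, "max_latency": 500},
]

def place(services, nodes):
    """Greedily map each service to the lowest-latency node that satisfies
    both its latency bound and the node's residual CPU capacity."""
    residual = {n["name"]: n["cpu"] for n in nodes}
    placement = {}
    for s in services:
        for n in sorted(nodes, key=lambda n: n["latency"]):
            if n["latency"] <= s["max_latency"] and residual[n["name"]] >= s["cpu"]:
                placement[s["name"]] = n["name"]
                residual[n["name"]] -= s["cpu"]
                break
        else:
            placement[s["name"]] = None  # no feasible node
    return placement

print(place(services, nodes))
```

With these numbers, the latency-critical braking service lands in the car, navigation on the fog node, and the heavy analytics workload in the cloud, which is the qualitative behavior the proposal describes.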

The developed algorithms will be prototyped by creating a reference architecture that will be validated on real use cases taken from the physical world, in particular from the automotive sector. For instance, the PhD candidate can envision two possible working environments. First, a car-to-cloud infrastructure service deployment, in which the problem of instantiating applications spans the wide range of devices from the car, to neighboring nodes (e.g., other cars, roadside infrastructure, telco fog nodes), and finally to over-the-top data centers. Second, an in-car service deployment model, in which all the computing components available in the car represent suitable nodes.
The PhD candidate will investigate the possible architecture and algorithms, validate the proposed solution with respect to the considered environment, and, when possible, will supervise a physical deployment of the selected solution.
Outline of work plan: The candidate is expected to split the work in the following phases.
• Service modelling. Software services will be analyzed looking for common (and significant) properties that influence their location, such as resource requirements (e.g., CPU, memory) and dependencies on other services. A common model capturing service characteristics will be developed. A publication targeting a major conference will be submitted.
• Service orchestration algorithms. Starting from the service model, this phase will develop a set of orchestration algorithms that take into account service requirements and infrastructure capabilities, and determine the best placement for each elementary service. Theoretical frameworks such as game theory may be used to accommodate the diverging requirements of services (asking for resources) and of the infrastructure, which has to guarantee the simultaneous execution of many services on a shared set of resources. A publication covering a portion of the above work and targeting a major conference will be submitted.
• Prototyping and reference implementation. This phase will extend previous results, obtained mostly through simulations, with a real implementation aimed at possibly becoming the reference implementation. In this respect, real-world scenarios will include either fog computing or the internal communication/storage/processing infrastructure of a car. A publication that extends the previous one with the experimental validation, targeting a major journal, will be submitted. In addition, a publication more focused on the reference implementation will be submitted to a major conference.
Expected target publications: Possible targets for research publications (well known to the proposer) include:

• IEEE/ACM Transactions on Networking
• IEEE Transactions on Computers
• IEEE Micro
• Elsevier Computer Networks

• IEEE International Conference on Communications (ICC)
• ACM European Conference on Computer Systems (EuroSys)
• ACM Symposium on High Performance Distributed Computing (HPDC)
• ACM/IFIP/USENIX Middleware Conference (Middleware)
• ACM Symposium on Principles of Distributed Computing (PODC)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:Italdesign

Title: New methods for set-membership system identification and data-driven robust control design based on customized convex relaxation algorithms.
Proposer: Vito Cerone
Group website:
Summary of the proposal: The research project is focused on the development of efficient algorithms to solve nonconvex polynomial optimization problems arising in the fields of system identification and data-driven control design. Convex-relaxation approaches based on sum-of-squares (SOS) decomposition will be exploited in order to relax the formulated nonconvex problems into a collection of convex semidefinite programming (SDP) problems. Solutions of such problems will be computed by means of customized parallel interior-point algorithms, which will be designed and implemented on parallel computing platforms. The derived algorithms will be applied to experimental data collected from real-world automotive problems.
Research objectives and methods: The research will be focused on the development of new methods and algorithms to efficiently solve polynomial optimization problems arising in the identification and control fields. The main topics of the research project are summarized in the following three parts.

Part I: Convex relaxations for system identification and control design

This part is focused on the formulation of suitable convex relaxation techniques for constrained polynomial optimization problems, arising from some open issues in the identification and control fields. Typical examples include:

• Enforcement of structural constraints in the identification of linear and nonlinear interconnected multivariable models:

System identification procedures aim at deriving mathematical models of physical systems on the basis of a set of input-output measurements. Although a priori information on the internal structure of the system to be identified is often available, most of the proposed techniques do not exploit it in the identification algorithms, since the formal inclusion of such structural constraints makes the estimation problem difficult to solve. The aim of the research project is to show that SDP optimization techniques can be used to reliably enforce structural constraints in the identification of quite general multivariable block-structured nonlinear systems.

• Design of fixed structure data-driven robust controllers:

The problem of robust control design for uncertain systems has been the subject of extensive research efforts in the last three decades, and many powerful tools have been developed. However, most of the proposed techniques lead to the design of high-order dynamical controllers, which are sometimes too complex to be applied in industrial settings, where simple controller structures, involving a small number of tuning parameters (e.g. PID controllers), are typically used. Unfortunately, the model-based approaches proposed in the literature for the design of fixed structure robust controllers often lead to quite complex nonconvex problems. The aim of the research is to propose a reformulation of the fixed structure robust controller design problem in the framework of data-driven control, recently proposed in the literature. The idea is to propose an original formulation of the data-driven control problem, based on the set-membership estimation theory, in order to overcome the main limitations of the approaches already proposed in the literature.

Part II: Development of customized SDP solvers

This part is focused on reducing the computational load of solving the SDP problems that arise from the relaxation of polynomial optimization problems through SOS decomposition. In particular, the peculiar structure of such SDP problems will be exploited in order to design interior-point algorithms that are more efficient than those employed in general-purpose SDP solvers, in terms of both memory storage and computational time. A possible way to develop efficient algorithms is to exploit the particular structure of the formulated SDP-relaxed problems in order to design, implement and test parallel interior-point algorithms on commercial computing platforms consisting of multi-core processors (e.g. Intel multi-core) or many-core graphics processing units (e.g. NVIDIA or AMD graphics cards). Such platforms are available on the market at relatively low cost. For instance, an NVIDIA GeForce GTX 590 graphics card with 1024 cores can be purchased for 800 USD.
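The connection between SOS decompositions and SDP can be illustrated on a toy univariate example. For p(x) = x⁴ + 2x² + 1 = (x² + 1)², writing p(x) = zᵀQz with z = [1, x, x²]ᵀ reduces nonnegativity to finding a positive semidefinite Gram matrix Q. Here Q is written down by hand purely for illustration; in the actual relaxation it would be the unknown of an SDP solved by an interior-point method:

```python
import numpy as np

# Hand-picked Gram matrix for p(x) = 1 + 2x^2 + x^4 over the basis [1, x, x^2].
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Expanding z^T Q z: coefficient of x^0 is Q[0,0]; of x^2 it is 2*Q[0,2] + Q[1,1];
# of x^4 it is Q[2,2]. These must match the coefficients of p.
assert Q[0, 0] == 1.0                # x^0 coefficient
assert 2 * Q[0, 2] + Q[1, 1] == 2.0  # x^2 coefficient
assert Q[2, 2] == 1.0                # x^4 coefficient

# Q is positive semidefinite (eigenvalues 0, 0, 2), so it certifies that p is
# a sum of squares and hence globally nonnegative.
eigs = np.linalg.eigvalsh(Q)
print(bool(np.all(eigs >= -1e-9)))  # True
```

In the multivariate, constrained case targeted by the proposal, the same linear coefficient-matching conditions plus the PSD constraint on Q form exactly the structured SDP that the customized interior-point solvers would exploit.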

PART III: Application to automotive real-world problems

The derived algorithms will be applied to modeling, identification and control of dual-clutch systems and other automotive problems.
Outline of work plan: The research project is planned to last three years.
The time schedule is as follows:


Year 1, January 1st – June 30th:
the first six months of the project will be devoted to the study of the literature with reference to the subject of system identification, control design, convex relaxations of polynomial optimization problems, interior-point algorithms for semidefinite programming.

Milestone 1:
report of the results available in the literature; selection of a set of open problems to be addressed.

July 1st – December 31st:
the second part of the first year will be devoted to the analysis of the open problems selected in Milestone 1, with the aim of deriving new convex relaxation-based methodologies and algorithms.

Milestone 2:
formulation of the considered problems in terms of optimization; development of new convex relaxation-based algorithms for solving the formulated optimization problems; theoretical analysis of the proposed algorithms.


Year 2, January 1st – June 30th:

the first half of the second year of the project will be focused on the study of the interior-point algorithms available in the literature for the solution of convex optimization problem (with particular focus on semidefinite problems).

July 1st – December 31st: the objective of this part will be to develop new efficient interior-point algorithms, specifically tailored to the characteristics of the semidefinite programming problems obtained from convex relaxation of the considered optimization problems, arising from system identification and control design.

Milestone 3:
development of new interior-point algorithms and comparison with existing general-purpose software for semidefinite programming.


Year 3, January 1st – December 31st:
the last year of the project will be devoted to explore parallel implementation of the derived interior-point algorithms and to apply the derived algorithms to modeling, identification and control of a real dual-clutch system and other real-world problems from the automotive field.
Expected target publications: JOURNALS:
IEEE Transactions on Automatic Control, Automatica, IEEE Transactions on Control Systems Technology, Systems & Control Letters, International Journal of Robust and Nonlinear Control.

CONFERENCES:
IEEE Conference on Decision and Control, American Control Conference, IFAC Symposium on System Identification, IFAC World Congress
Current funded projects of the proposer related to the proposal: The derived algorithms are expected to be profitably applied to the problem of modeling, identification and control of dual-clutch systems, which is the object of the research contract (led by the Proposer) between DAUIN and Fiat Chrysler Automobiles (FCA) titled “Non Manual Transmission technologies as key enabler for high performance and efficient powertrain development”.
Possibly involved industries/companies: Fiat Chrysler Automobiles (FCA) will be involved in the application of the derived algorithms to the modeling, identification and control problems arising from the contract “Non Manual Transmission technologies as key enabler for high performance and efficient powertrain development”.

Title: Quality and policy assurance in critical distributed systems
Proposer: Guido Marchetto
Group website:
Summary of the proposal: The increasing flexibility of networking and cloud infrastructures, mainly due to the introduction of Software-Defined Networking (SDN) and Network Function Virtualization (NFV), has led to increasing interest in distributed systems also in specific critical scenarios, such as telemedicine applications, Smart Grids, and other industrial environments. For example, telemedicine applications can benefit from a cloud-based deployment approach thanks to the opportunity of both reducing overall costs and improving the offered services. However, these critical systems have totally different requirements with respect to traditional cloud applications, mainly in terms of quality and policy assurance. They typically need low-latency communications and high-speed processing, as well as proper support for their safety- and/or security-critical nature. The research work aims at studying novel cloud and fog computing architectures able to satisfy such constraints. Both infrastructure and network components must be considered, so that a proper combination of solutions may provide the desired outcomes. In addition, proper solutions for the verification of deployed services should be provided, in order to enforce the safety and security properties of the system.
Research objectives and methods: The main objective of the proposed research is the study of novel cloud and fog computing architectures for critical distributed systems, where low-latency communications, high-speed computing, and safety/security policy assurance are typically strict requirements.
Considering for example a telemedicine application, we can imagine several doctors exchanging data and discussing online in order to agree on a specific pathology, possibly during surgery. This would require exchanging high-resolution images with very low latency. Hence, a large amount of data should be processed in a very short time, and in the correct place, in order to satisfy such requirements. Similar needs might be envisioned in a distributed industrial environment where robot controllers exchange data in order to fulfill a given task. Best-effort Internet connections are simply not enough to support such applications, and edge computing and virtualization technologies such as SDN (e.g., OpenFlow) and NFV seem necessary to enable them.
A first research objective is then the study of how a proper combination of SDN and NFV solutions could reach this target, starting from the existing QoS-aware solutions based on distributed SDN controllers (e.g., ONOS). The candidate will study proper adaptability mechanisms of virtual paths hosting critical data. In particular, solutions will be proposed for enabling flexibility in the mechanisms that regulate the management (creation, monitoring, and migration) of virtual network resources (especially virtual paths) carrying the information. This would provide proper adaptability to the network conditions, thus assuring quality of processing and communications.
A second relevant research objective is related to the proper management of these critical systems from a safety/security point of view. It is evident how errors in both architecture and service definition and implementation might cause serious damage to equipment or people. At the same time, security leaks might compromise system operation, thus also affecting its safety. Hence, the defined architectures and protocols must be properly verified against these possible issues. Furthermore, proper solutions for service verification are also necessary, as a possible source of safety and security problems might be a wrong service construction rather than the architecture itself. For example, a wrong configuration of the service components, which can be changed on-the-fly by means of proper VNF deployment, might clearly cause service degradation, but also safety and security problems. A set of general service-oriented policies will be defined, with the aim of representing safe or trusted situations in our scenarios. Then, a proper framework for policy verification will be studied and developed, so that the overall system can be considered verified. Formal methods are a possible solution to provide formal policy assurance in these contexts, as certified by recent contributions on formal verification of SDN/NFV systems available in the literature. In particular, since configurations and reconfigurations of virtual networks are automatic, they are expected to occur rapidly, thus requiring fast verification approaches. The use of Satisfiability Modulo Theories (SMT) solvers is promising from this point of view, but experimental evaluations are necessary to validate this hypothesis. This will be part of the research activity.
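A typical service-oriented policy of the kind mentioned above is a waypoint property: every path through the virtual service chain must traverse a given function (e.g., a firewall). An SMT solver such as Z3 would check an encoding of this property symbolically; the sketch below instead enumerates paths explicitly over a tiny, invented forwarding graph, which only scales to toy cases but shows the property being verified:

```python
# Toy forwarding graph of a virtualized service (names are illustrative).
graph = {
    "client":   ["firewall", "cache"],
    "firewall": ["app"],
    "cache":    ["app"],
    "app":      ["db"],
    "db":       [],
}

def all_paths(graph, src, dst, path=None):
    """Yield every simple path from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:  # avoid cycles
            yield from all_paths(graph, nxt, dst, path)

def policy_holds(graph, src, dst, waypoint):
    """Safety policy: every path from src to dst traverses the waypoint."""
    return all(waypoint in p for p in all_paths(graph, src, dst))

# The cache branch bypasses the firewall, so the policy is violated.
print(policy_holds(graph, "client", "db", "firewall"))  # False
```

An SMT-based verifier would detect the same violation without enumerating paths, which is what makes it a candidate for the fast, automatic reconfiguration checks the proposal requires.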
Outline of work plan: Phase 1 (1st year): the candidate will analyze state-of-the-art solutions for QoS-aware cloud systems, with particular emphasis on SDN- and NFV-based fog computing architectures. Starting from the existing material, the candidate will define detailed guidelines for the development of architectures and protocols suitable for critical distributed systems. Specific use cases will also be defined during this phase (e.g., in the telemedicine or industrial field), which will help in identifying the correct requirements, including the peculiarities of specific environments. This work is expected to result in publications. During this phase, the candidate will also acquire the fundamentals of verification techniques (among others, formal methods), useful for the rest of the PhD program. This will be done by attending courses and through personal study.

Phase 2 (2nd year): the candidate will consolidate the proposed approach and will implement it. The preliminary results of this work will be published. Appropriate approaches for the specification and the verification of policies will also be explored and selected.

Phase 3 (3rd year): the implementation and experimentation of the proposed approach will be completed, including the policy verification framework. The final objective is a proof-of-concept prototype demonstrating the feasibility and effectiveness of QoS-aware fog systems for critical applications. In this respect, the related dissemination activity will also be concluded.
Expected target publications: The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking (e.g. INFOCOM, IEEE/ACM Transactions on Networking, or IEEE Transactions on Network and Service Management) and cloud/fog computing (e.g. IEEE/ACM CCGrid, IEEE ICFEC, IEEE Transactions on Cloud Computing), as well as in venues related to the specific areas that could benefit from the proposed solutions (e.g., IEEE Transactions on Information Technology in Biomedicine, IEEE Transactions on Industrial Informatics).
Current funded projects of the proposer related to the proposal: None
Possibly involved industries/companies:TIM, Tiesse

Title: Virtual, Augmented and Mixed Reality for education and training
Proposer: Fabrizio Lamberti
Group website:
Summary of the proposal: Nowadays, people are getting familiar with terms like Virtual, Augmented and Mixed Reality (VAMR), and applications of these technologies are becoming commonplace. Domains that can benefit from the implementation of the above technologies are basically endless, with many solutions developed already both at the industry and academic level in the fields of engineering, design, healthcare, business, arts and, clearly, entertainment.

Within such domains, two of the envisaged “killer applications” for the considered technologies are education and training. In fact, VAMR applications are regarded as capable, e.g., of making education and training processes significantly more effective, by improving engagement and fostering active participation. Moreover, they can allow the development of learning experiences that would not otherwise be possible, or would be incredibly costly.

However, the quality of outcomes is strongly linked to the so-called “ecology” of the virtual experience, i.e., to the extent to which it is capable of mimicking the real one.

The objective of this research is to study issues influencing the effectiveness of virtual education and training and to design and develop suitable solutions to such issues by considering, e.g., the quality of 3D graphics, the naturalness of human-machine interaction, the interactivity of simulations, etc.
Research objectives and methods: VAMR applications for education and training can allow users to immerse themselves in highly realistic environments and scenarios that would normally be dangerous to learn in, such as a burning building, or difficult to simulate, such as a critical event during a surgical operation. Unlike some traditional instruction methods, VAMR-based techniques can offer consistent education delivery that does not vary from instructor to instructor. Quality virtual representations and seamless human-computer interaction also afford the development of psychomotor skills and the study of users’ behavior under given conditions. This is especially important when the resources that can be devoted to education and training are limited or extremely costly, e.g., because of the impact related to the interruption of normal operations.

In order to actually benefit from all the possible advantages offered by VAMR technologies, developments are still needed to ensure that the virtual experience resembles as much as possible that of a real setting.

To this aim, a number of advancements are needed, pertaining to both technical and non-technical aspects.

On the technical side, research is needed, e.g.:

a) to allow instructors (often lacking technical skills) to easily take control of the simulated environment and create personalized and adaptive learning paths capable of optimizing the achievement of given learning outcomes;

b) to support the use of suitable human-machine interfaces capable of strengthening the sense of presence and immersion during the experience, both from the point of view of user input (e.g., by exploiting natural interfaces based on hand and body gestures, voice commands, etc.) and of system output (i.e., feedback provided to the user through all the senses);

c) to enable the participation in the same (simulated) experience of more than one interacting trainee, possibly interfacing with the system through different technologies (desktop computers, immersive virtual reality settings, wearable “holographic” displays, etc.), perceiving the presence of other users and collaborating with them towards a given goal;

d) to make it possible for the realistic 3D environments used in game and movie production to be combined with real-time simulation of physical phenomena (such as the spreading of fire and smoke), e.g., to teach or test evacuation procedures to/with civilians or fire-fighting personnel.
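The kind of real-time physical simulation mentioned in point d) can be sketched, in its very simplest form, as a cellular automaton for fire spread. The rules and grid below are a deterministic toy (0 = empty, 1 = fuel, 2 = burning); real training simulators use far richer probabilistic or physics-based models:

```python
# Minimal deterministic cellular-automaton sketch of fire spreading on a grid:
# each step, fuel cells adjacent to fire ignite, and burning cells burn out.

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                new[r][c] = 0  # burnt out
            elif grid[r][c] == 1:
                neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 2
                       for nr, nc in neighbours):
                    new[r][c] = 2  # ignites
    return new

grid = [[1, 1, 1],
        [1, 2, 1],
        [1, 1, 1]]
grid = step(grid)
print(grid)  # fire spreads from the center to the four adjacent cells
```

Stepping such a model at interactive frame rates, coupled with the 3D rendering, is what would let trainees watch the fire evolve while they practice an evacuation procedure.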

From the non-technical point of view, pedagogical, psychological, sociological and perceptual aspects have to be taken into account in a user-centered perspective, in order to ensure that technological outcomes actually match both instructors’ and learners’ capabilities and needs. For instance, an open research issue pertains to the determination of the level of fidelity a given virtual experience should offer to provide users with the mental affordances required for learning.

The results of the “transversal” research activities mentioned above will be applied to “vertical” use cases that could involve, e.g., Industry 4.0 scenarios, medical practice, cultural heritage and sport. In order to support design and development activities with numerical evidence and allow for the assessment of results, qualitative and quantitative strategies to assess users’ level of participation and related achievements will be investigated.
Outline of work plan: During the first year, the PhD student will review the state of the art of technologies, methods and applications of VAMR technologies focusing on education and training, with the aim of identifying current trends and the most promising research directions. The student will complete his/her background on VAMR and approaches to training (both with traditional and novel methods) by attending relevant courses. He/she will also begin the design and development of the transversal capabilities required for the creation of VAMR applications.

During the second and third years, the PhD student will focus on the technical and non-technical components regarded as most critical for effective VAMR-based education and training, applying them to vertical use cases implemented with relevant stakeholders.

Applicants should preferably have previous experience in the field of computer graphics (3D modeling, 3D animation, development of interactive graphics applications and games, etc.) and knowledge of tools like Unity3D, Blender/Maya, etc. Previous experience with VR and AR and related devices would be appreciated. Applicants should know the main programming languages and development environments, with a specific focus on those used in the considered domains.
Expected target publications: International journals in areas related to the proposal, including:

IEEE Transactions on Learning Technologies
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Visualization and Computer Graphics
IEEE Computer Graphics and Applications
IEEE Transactions on Emerging Topics in Computing
ACM Transactions on Computer-Human Interaction
Computers & Education

Relevant international conferences such as: ACM CHI, IEEE VR, IEEE 3DUI, GI, Eurographics, etc.
Current funded projects of the proposer related to the proposal: Topics addressed in the proposal are strongly related to those tackled in the research contract managed by the proposer with LogosNet e-Real as well as to those of other projects/research contracts of the GRAphics and INtelligent Systems (GRAINS) group.
In particular, the research involves two areas that historically represent the core fields of interest of the GRAINS group, i.e., computer graphics (and related sub-field like Virtual, Augmented and Mixed Reality, Human-Machine Interaction, user-centered design and User eXperience) and intelligent technologies for education and training. Activities will be carried out in the frame of the "VR@Polito" initiative of Politecnico di Torino.
Possibly involved industries/companies: FCA, Kuka, Illogic

Title: Human-centered visualization and interaction methods for Smart Data applications
Proposer: Fabrizio Lamberti
Group website:
Summary of the proposal: There is an ongoing trend to support organizations in leveraging Smart Data by making machine learning models easier to train, use and deploy. In particular, deep neural networks have shown unprecedented performance in modelling multidimensional and complex datasets, but our understanding of how they work and learn is far from complete. Practitioners and users alike have difficulties in understanding and manipulating complex, black-box models. Intuitive and easy-to-understand visualization techniques are one of the most effective ways for humans to make sense of data patterns and structure.

The candidate will design and evaluate new approaches for visualizing and interacting with machine learning models and large-scale feature representations, targeting users with varying levels of experience. The research proposal intersects the following research areas:
(a) visualization of deep learning models and feature spaces;
(b) integration of machine learning and semantic knowledge;
(c) interfaces for interactive training / manipulation of data models;
(d) integration with adaptive / online learning;

The research activity will target application domains characterized by inherent data complexity such as computer vision, image captioning and visual question answering, information superiority, medical imaging and exploration of large scale, multimodal datasets.
Research objectives and methods: The overall objective is to support effective and intuitive visualization of and interaction with data models and machine learning applications by developing new general-purpose or application-specific tools.

The candidate will focus on research issues related to the visualization of deep neural networks and other complex data representations, including applications that challenge the current state of the art, such as 3D images, videos and multimedia, and medical data. The candidate will design novel, efficient and user-friendly ways to visualize and manipulate complex data in a multidimensional space (deep embeddings, similarity layouts, graph-like representations). Deep architectures capable of learning disentangled representations (which more likely factorize some latent cause or causes of variation) offer greater interpretability, reusability and semantic grounding, at the expense of greater training complexity; in this sense, effective training and visualization are closely related.
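As a minimal sketch of the similarity layouts mentioned above, deep embeddings can be projected to 2D for visual inspection; the "embeddings" below are synthetic stand-ins and the projection is plain PCA, whereas a real pipeline would likely use richer methods (t-SNE, UMAP) as well.

```python
import numpy as np

def project_2d(features):
    """Project high-dimensional feature vectors (e.g. deep embeddings)
    to 2D via PCA, a common first step for a similarity layout.

    features: (n_samples, n_dims) array. Returns (n_samples, 2).
    """
    centered = features - features.mean(axis=0)
    # SVD of the centred data gives the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

# Synthetic "embeddings": two well-separated clusters in 64 dimensions,
# standing in for the feature space of a trained network.
rng = np.random.default_rng(0)
cluster_a = rng.normal(0.0, 0.1, size=(50, 64))
cluster_b = rng.normal(1.0, 0.1, size=(50, 64))
coords = project_2d(np.vstack([cluster_a, cluster_b]))
```

In the projected coordinates the two clusters remain separated, which is the property a similarity layout exploits to make class structure visible to the user.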

The candidate will compare different models in terms of complexity, interpretability and capability of supporting efficient user interaction.

The candidate will also study novel ways to support user interaction in data science, employing a wide range of interfaces (natural language interfaces, advanced visualization, virtual reality, etc.). Interfaces targeted to different users (machine learning expert vs. end-user) and applications (visual analytics, decision support systems, data exploration, visual query answering, etc.) will be developed. The goal is to allow users to derive insight into (or from) the model, as well as supply new information to the system thus supporting online/adaptive learning.

During the PhD, new solutions will be studied and developed to address the issues listed above, and they will be evaluated and compared with state-of-the-art approaches to assess their performance and improvements. Solid and reusable implementation of core visualization components is expected. Experimental evaluation will be conducted on real data collections. Validation will assess standalone performance as well as interaction with the user.
Outline of work plan: During the first year, the candidate will build core competencies in deep learning / machine learning, visual analytics and 2D/3D data visualization by attending PhD courses and seminars; he/she will also become familiar with existing deep learning/machine learning platforms, if necessary. He/she will survey the relevant literature to identify the advantages/disadvantages of available solutions. State-of-the-art visualization methods will be implemented/expanded for different application domains, building a core library of reusable components. The candidate will submit at least one conference paper.

During the second year, the candidate will develop training/visualization and interaction methods for increasingly advanced models, focusing on their interpretability and efficient user interaction. The algorithms will be tested on selected applications. By the end of the second year, at least one paper in an international journal and one additional conference submission are expected. The candidate will also achieve a firm grasp of advanced deep learning / machine learning methods.

During the third year, the candidate will improve/refine the implemented methods, based on the results of previous literature, and focus on expanding user interaction capabilities. The candidate will design and carry out experiments testing the experience of users in realistic scenarios.
Expected target publications: The target publications will cover machine learning, computer vision, scientific visualization, and human computer interaction conferences and journals. Interdisciplinary conferences and journals related to specific applications (such as medical imaging) will also be considered. Possible journals include:

IEEE Transactions on Pattern Analysis and Machine Intelligence
International Journal of Computer Vision
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Visualization and Computer Graphics
IEEE Transactions on Knowledge and Data Engineering
Elsevier Pattern Recognition
Elsevier Computer Vision and Image Understanding
IEEE Transactions on Medical Imaging
Medical Image Analysis
Current funded projects of the proposer related to the proposal: Research contract w/ Leonardo - Finmeccanica
Research contract w/ Reale Mutua Assicurazioni
Research contract w/ im3D
Possibly involved industries/companies: Companies that could be interested in the proposal and its outcomes include:
Leonardo - Finmeccanica
Reale Mutua Assicurazioni
Consoft Sistemi

Title: Deep learning techniques for fine-grained object detection and classification in image analysis
Proposer: Fabrizio Lamberti
Group website:
Summary of the proposal: Deep learning networks, and especially convolutional neural networks, have become the state-of-the-art benchmark for object detection and classification problems. In this context, fine-grained object classification is attracting increasing attention in a number of fields, including scene parsing, classification and medical applications. However, it is a quite challenging problem due to subtle inter-class differences and large intra-class variation. Fine-grained object characterization, intended as describing an object and its parts with a rich set of semantic attributes (“a tall woman with blond hair wearing a red coat and black shoes”), is a related problem with similar difficulties. In both cases, collecting large-scale annotated datasets is complicated and expensive; hence, advances in unsupervised / semi-supervised approaches and transfer / lifelong learning will be key to making substantial improvements in this direction.

The candidate will study, design and evaluate new techniques for improved object detection, classification and characterization at the attribute level. The research proposal intersects various research areas in image analysis, including deep learning models for image understanding, multi-task learning and domain adaptation, semantic grounding at the object and attribute level, saliency models, and interactive machine learning.

The research activity will target various application domains including natural scene parsing, medical image analysis, and other real-life scenarios.
Research objectives and methods: The overall objective of this research proposal is to propose novel techniques for learning fine-grained object detection, classification and characterization. Compared to generic object recognition, fine-grained recognition benefits from learning critical parts of the objects that can help discriminate between neighboring classes. Automated visual systems that can perform fine-grained object recognition and characterization can provide significant support to many applications, especially those requiring specialized domain knowledge or a richer semantic grounding (ecology, image forensics, retail, social sciences, medical applications, image captioning, etc.).

The candidate will focus on research issues related to automatically learning fine-grained object recognition and classification with minimal training data required. The candidate is expected to provide contributions to the state of the art in one or more of the following research areas, where existing solutions are mostly targeted at object recognition or classification.

a) Transfer learning: learning from limited labeled training data can be significantly improved by transferring information learned from one task to another (transfer learning) or by learning multiple tasks simultaneously (multi-task learning).

b) Domain adaptation: real-life datasets can exhibit differences in image acquisition, including background, location, pose, illumination, or even different acquisition systems and parameters. Domain adaptation includes a variety of techniques to transfer a given task to different domains by exploiting unlabelled or small quantities of labeled data.

c) Saliency models: extending existing saliency models to automatically learn the critical parts of objects, based on both low-level visual features and semantic categories, can help discriminate between neighboring classes.

d) Semantic grounding of visual attributes: learning to map visual features with semantic attribute description is key to fine-grained object characterization; attributes are not concept or category specific (e.g., animals have stripes and so do clothing items; balls are round, and so are oranges and coins), and thus allow us to express similarities and differences across concepts and domains more easily.

e) Interactive (human-in-the-loop) learning approaches: by integrating a human in the loop, algorithms that interact with agents and optimize their learning behaviour through this interaction can leverage the relative strengths of both automated machine learning and human cognition. Interactive ML approaches are of particular interest for problems where big data sets are lacking, or that involve complex data and/or rare events.
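To make area (b) concrete, the following is a minimal numpy sketch of one classic domain-adaptation technique, CORAL (correlation alignment), which re-colours source features so that their second-order statistics match the target domain; the data is synthetic, and CORAL is only one of many candidate techniques the research could consider.

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """Align source features to the target domain by matching
    second-order statistics (CORAL): whiten with the source covariance,
    then re-colour with the target covariance.

    source, target: (n, d) feature matrices. Returns aligned source.
    """
    def cov(x):
        xc = x - x.mean(axis=0)
        return (xc.T @ xc) / (len(x) - 1) + eps * np.eye(x.shape[1])

    ls = np.linalg.cholesky(cov(source))
    lt = np.linalg.cholesky(cov(target))
    centered = source - source.mean(axis=0)
    whitened = centered @ np.linalg.inv(ls).T      # source covariance -> I
    return whitened @ lt.T + target.mean(axis=0)   # I -> target covariance

# Synthetic source/target domains differing in scale and offset,
# mimicking, e.g., two acquisition systems.
rng = np.random.default_rng(1)
src = rng.normal(0.0, 2.0, size=(500, 4))
tgt = rng.normal(3.0, 0.5, size=(500, 4))
aligned = coral_align(src, tgt)
```

After alignment, the source features share the target's mean and covariance, so a classifier trained on them transfers more gracefully to target-domain data.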

During the PhD, new solutions will be studied and developed to address the issues listed above, and they will be evaluated and compared with state-of-the-art approaches to assess their performance and improvements. Experimental evaluation will be performed on realistic datasets and on available public datasets for benchmarking purposes.
Outline of work plan: During the first year, the candidate will build core competencies in deep learning / machine learning, image processing and computer vision by attending PhD courses and seminars; he/she will also become familiar with existing deep learning/machine learning platforms, if necessary. He/she will survey the relevant literature to identify the advantages/disadvantages of available solutions, and develop a detailed research plan to address the identified gaps. In the first year, the candidate will conduct experiments in the context of transfer learning, and submit at least one conference paper.

During the second year, the candidate will develop a set of tools for semi-supervised learning of domain-independent, semantically rich visual attributes to support fine-grained object categorization, recognition and description. The algorithms will be tested on selected applications and compared with existing state-of-the-art methods. By the end of the second year, at least one paper in an international journal and one additional conference submission are expected. The candidate will also achieve a firm grasp of several deep learning models/techniques.

During the third year, the candidate will improve/refine the implemented methods, based on the results in the previous years; he/she will also test their robustness and generalizability in several application scenarios.
Expected target publications: The target publications will cover machine learning, computer vision and human computer interaction conferences and journals. Interdisciplinary conferences and journals related to specific applications (such as medical imaging) will also be considered. Journals include:

IEEE Transactions on Pattern Analysis and Machine Intelligence
International Journal of Computer Vision
IEEE Transactions on Image Processing
IEEE Transactions on Human-Machine Systems
IEEE Transactions on Neural Networks and Learning Systems
Elsevier Pattern Recognition
Elsevier Computer Vision and Image Understanding
IEEE Transactions on Medical Imaging
Medical Image Analysis
Current funded projects of the proposer related to the proposal: Research contract w/ Leonardo - Finmeccanica
Research contract w/ Reale Mutua Assicurazioni
Research contract w/ im3D
Possibly involved industries/companies: Companies that could be interested in the proposal and its outcomes include:
Leonardo - Finmeccanica
Reale Mutua
Consoft Sistemi

Title: Efficient Functional Model-Driven Networking
Proposer: Fulvio Risso
Group website:
Summary of the proposal: This research activity aims at redefining the networking domain by proposing a functional and model-driven approach, in which novel declarative languages are used to define the desired behavior of the network, and networking primitives are organized in a set of inter-changeable functional modules, which can be flexibly rearranged and composed in order to implement the requested services.
The current proposal will allow network operators to focus on what to do, instead of how to do it, and it will also facilitate the operation of automatic high-level global orchestration systems, which can more easily control the creation and update of network services. However, the high-level view guaranteed by the functional approach should not come at the expense of implementation efficiency, which will be taken into high consideration.
The prototype implementation will target the Linux operating system through the eBPF component, which will be leveraged to provide networking services for major open-source orchestration frameworks such as Kubernetes and OpenStack.
Research objectives and methods: The objective of this research is to explore the possibility of introducing a functional and model-driven approach to create, update and modify network services without impacting efficiency. Such an approach allows (i) deploying complex network services by means of functional languages, i.e., focusing on the behavior to be obtained (also known as the intent-driven approach); (ii) creating elementary functional components that provide basic network functionalities and that can be arbitrarily rearranged and composed into complex service chains in order to deliver the expected service; (iii) deploying optimized versions of each component based on information gathered at run-time, such as the actual traffic patterns and the availability of resources (e.g., CPU, memory, hardware coprocessors such as SmartNICs).
The above objective will be reached by investigating the following areas.

Functional languages for complex service composition and service description. Novel languages are required to specify the functional behavior of the network, starting from existing research proposals (e.g., FRENETIC, group-based policies) and extending them with higher-level modular constructs that facilitate compositional reasoning for complex and flexible service composition. This also requires a functional model of each network function, which enables the creation of abstract functions, possibly with multiple implementations. In other words, each elementary network module needs a functional model defining exactly which incoming network traffic is accepted, how it is possibly modified, and what is returned in output. This model, which resembles a transfer function, enables the creation of complex network models operating at the functional level, hence providing a mapping between user requests, expressed in service-wide functional languages, and the delivered network service. In a nutshell, the requested service will be obtained by composing the above elementary (functional) blocks, and the overall service properties follow from the formal composition properties of the individual transfer functions.
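The transfer-function view sketched above can be illustrated with a toy composition of elementary network functions; the packet representation (a plain dict) and the functions themselves are hypothetical simplifications, not part of the proposed languages.

```python
# Each elementary network function is modelled as a transfer function:
# it takes a packet and returns the possibly modified packet, or None
# if the traffic is not accepted (dropped).

def firewall(packet):
    """Drop telnet traffic; accept everything else unchanged."""
    return packet if packet.get("dst_port") != 23 else None

def nat(packet):
    """Rewrite the source address (hypothetical public address)."""
    out = dict(packet)
    out["src_ip"] = "203.0.113.1"
    return out

def compose(*functions):
    """Chain transfer functions; a dropped packet stops the chain,
    so the composed service is itself a transfer function."""
    def service(packet):
        for f in functions:
            if packet is None:
                return None
            packet = f(packet)
        return packet
    return service

service = compose(firewall, nat)
```

Because the composition of transfer functions is again a transfer function, properties of the whole service (e.g., which traffic it accepts) can be derived formally from the properties of its blocks.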

Model-driven creation of efficient elementary network components. High-level languages are definitely important, as they represent the entry point of potential users (e.g., network operators) into this new intent-based world, but they must be efficiently mapped onto the underlying infrastructure. This second area of research aims at creating modular networking building blocks that provide elementary functions, with the capability of being flexibly rearranged into complex service graphs defined at run-time. A model-driven approach brings additional flexibility to the system, as each component can be selected based on what it does rather than on its actual implementation, enabling massive reuse and improvement of components without affecting the functional view of the service. A possible starting point consists in leveraging existing technologies such as the IOVisor virtual components, which are based on networking-tailored micro virtual machines running in the Linux kernel, and the model-driven development of networking components proposed in many communities (e.g., OpenDaylight, ONOS, NETCONF/YANG). In particular, the former will be exploited to guarantee an efficient implementation of the above services, thanks to the capability to modify the running code at run-time and adapt the software to the surrounding execution environment (e.g., traffic patterns, usage of resources).
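The run-time selection of interchangeable implementations could be sketched, under purely illustrative component names and properties, as a registry keyed by the abstract function each component provides:

```python
# Registry of interchangeable implementations of the same abstract
# function ("firewall"), described by declared properties rather than
# by their code. All names and figures are illustrative assumptions.
REGISTRY = {
    "firewall": [
        {"impl": "ebpf-xdp", "requires_kernel": (4, 8), "throughput": 10},
        {"impl": "userspace", "requires_kernel": (3, 0), "throughput": 1},
    ],
}

def select(function, kernel_version):
    """Pick the highest-throughput implementation the host can run,
    based only on the declared model of each component."""
    candidates = [c for c in REGISTRY[function]
                  if kernel_version >= c["requires_kernel"]]
    return max(candidates, key=lambda c: c["throughput"])["impl"]
```

Because the caller asks for the abstract function ("firewall") rather than a concrete binary, implementations can be swapped or improved without touching the functional view of the service.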
Outline of work plan: This current research project can be carried out in the following phases.

Modelling of Virtual Network Functions: this activity involves the definition of a functional model that captures the characteristics of (at least) the most common network functions. This enables the creation of functional blocks that may have different implementations, possibly chosen at run-time based on additional constraints (e.g., number of users, resource requirements, dependencies such as the availability of a given hardware coprocessor). This work may be carried out in collaboration with Prof. Sisto. Depending on the outcome, it will be submitted for publication either at a major conference or in a journal.

Functional languages for complex service composition: this activity involves the definition of a functional language that enables the creation of complex services by means of declarative constructs. A compiler will be developed that translates high-level constructs into actual service graphs, exploiting the functional modules defined above. This work will be submitted for publication at a major conference.

Model-driven creation of efficient elementary network components: this will represent the major part of the work and consists of two logically separated steps. First, the model-driven creation of components, mainly involving the control and management planes, which need to be generic enough to suit any service of a given type (e.g., firewall). Second, an efficient implementation of a subset of the above services leveraging eBPF and the recently added eXpress Data Path (XDP) technologies, both available in the Linux kernel. This is expected to originate at least three publications: the first presenting the model-driven approach to modular networking components; the second focusing on the creation of efficient components; the third extending both previous topics and targeting a major journal.

Datacenter-wide orchestration: this phase will conclude the project with the definition of a set of orchestration algorithms that are able to orchestrate a service across the entire datacenter. This will be translated into a proof-of-concept orchestrator that creates a complex network service given a set of high-level requirements and constraints, showing the advantages of the proposed technologies. This work is expected to be submitted to a major journal.
Expected target publications: Most important conferences:
• USENIX Symposium on Networked Systems Design and Implementation (NSDI)
• USENIX/ACM Symposium on Operating Systems Design and Implementation (OSDI)
• IEEE International Conference on Computer Communications (Infocom)
• ACM workshop on Hot Topics in Networks (HotNets)
• ACM Conference of the Special Interest Group on Data Communication (SIGCOMM)

Most significant journals:
• IEEE/ACM Transactions on Networking
• IEEE Transactions on Computers
• ACM Transactions on Computer Systems
• Elsevier Computer Networks
Current funded projects of the proposer related to the proposal: Grants from VMware and Huawei
Possibly involved industries/companies: VMware, Huawei, Facebook

Title: Algorithms and software infrastructures for manufacturing process modelling from IoT data streams in smart industry applications
Proposer: Andrea Acquaviva
Group website:
Summary of the proposal: Thanks to advances in sensor nodes and networks, manufacturing processes can be extensively and pervasively monitored to improve operational performance and product quality, reduce costs and pollutants, and improve energy efficiency. Through the IoT paradigm, data from the cyber-physical systems involved in the manufacturing process, as well as environmental conditions, become available for complex processing in local or remote computational facilities. This opens the way to the implementation of data-driven process models, allowing the analysis and optimization of parameters with respect to product variations and environmental conditions.

However, due to the complexity of manufacturing processes and the high number of parameters involved, developing effective process models is in general a considerable challenge. In this context, machine learning algorithms for regression and sequential data analysis, integrated into IoT platforms, come to the rescue.

In this three-year research program, the main purpose will be to develop machine learning and time-series analysis techniques for process modelling, turning IoT data streams into process knowledge. These techniques will be implemented in an IoT framework, where the optimal balance between local, edge and cloud computing must be considered. This balance is especially critical for industrial processes, where time criticality is a relevant concern.
Research objectives and methods: The objectives of this research program are the following:

1. To study and implement an IoT framework suitable for industrial manufacturing applications. Several industrial IoT frameworks are available, either commercial or open, providing basic tools to communicate, process and store sensor data. The objective of this part will be to identify the most suitable environment for the considered industrial case studies (see below for details) and to define guidelines to generalize this selection process.
2. To apply regression analysis and learning techniques to sensor data to develop manufacturing process models. IoT frameworks make available libraries and tools to analyse data streams and time series. Manufacturing processes are typically characterized by many parameters that impact process efficiency and product quality. By collecting parameter information, monitoring data concerning the process, and product-quality outcomes (e.g., defects, anomalies, etc.), black-box models can be developed correlating parameter settings with outcomes. Techniques such as regression models, kernel methods and neural networks are nowadays consolidated for non-linear system modelling. However, in industrial cases the available data are often not expressive enough and require pre-processing, conversion, and sub- or oversampling to make them suitable for such modelling techniques. The research will focus on developing techniques that deal with the kind and amount of data available from industrial processes.
3. To develop parameter-exploration techniques using the developed models inside an optimization loop. In many cases, tuning parameters is challenging and requires expert personnel. Examples can be found both in new processes such as 3D printing (i.e., additive manufacturing) and in more traditional ones such as high-pressure die casting (HPDC), car painting and assembly. Once a trusted model is available, it can be put inside an exploration loop. Techniques to reduce the number of parameter combinations to be explored can be considered to speed up the space exploration.
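Objectives 2 and 3 can be illustrated together with a minimal sketch: a black-box regression model is fitted to synthetic process data and then used inside a parameter-exploration loop; all parameter names, ranges and the quality function are illustrative assumptions.

```python
import numpy as np

# Synthetic process data: two parameters (say, temperature and pressure,
# normalized to [0, 1]) and a noisy quality measurement peaking at
# (0.6, 0.3). In practice these would come from IoT sensor streams.
rng = np.random.default_rng(0)
params = rng.uniform(0, 1, size=(200, 2))
quality = 1 - (params[:, 0] - 0.6) ** 2 - (params[:, 1] - 0.3) ** 2
quality += rng.normal(0, 0.01, size=200)  # measurement noise

def design(p):
    """Quadratic feature map for a simple black-box regression model."""
    return np.column_stack([np.ones(len(p)), p, p ** 2])

# Fit the black-box process model by least squares.
w, *_ = np.linalg.lstsq(design(params), quality, rcond=None)

# Exploration loop: evaluate a grid of candidate settings on the model
# instead of running costly physical trials.
grid = np.array([(a, b) for a in np.linspace(0, 1, 21)
                         for b in np.linspace(0, 1, 21)])
best = grid[np.argmax(design(grid) @ w)]
```

The recovered optimum lands near the true peak (0.6, 0.3); a real system would replace the quadratic model with kernel methods or neural networks and the exhaustive grid with a smarter search, as discussed above.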
Outline of work plan: Phase I: Background
- Industrial IoT frameworks
- Machine learning applied to time series analysis (Regression models, Kernel methods, Hidden Markov models, Neural networks – Feed forward, Recurrent, LSTM)
- Industrially relevant KPIs, metrics for evaluation, manufacturing process cases (e.g. HPDC, 3D printing, car manufacturing).

Phase II: Data collection and process model development
- Case study definition (model parameters, potential correlations, accounting for external factors)
- Collection of data, definition of additional sensors and monitoring tools to be deployed
- Selection of process modelling strategy depending on process characteristics, type and amount of data available
- Implementation and training of models, comparison between basic regression models and more complex models (kernel methods, RNNs).
- Exploration of parameters using the developed model. Testing with additional data.

Phase III: Parameter optimization and computation allocation
- Development of an optimization loop enclosing the developed model
- Development of automatic parameter exploration strategies
- Evaluation and testing of the parameter exploration using testing data set
- Mapping of model learning and execution into computational infrastructures.
Expected target publications: - IEEE Transactions on Industrial Informatics
- IEEE Transactions on Industry Applications
- IEEE Transactions on Emerging Topics in Computing
- IEEE Transactions on Parallel and Distributed Systems
- IEEE Internet of Things Journal
- IEEE Transactions on Knowledge and Data Engineering
- IEEE Transactions on Pattern Analysis and Machine Intelligence
Current funded projects of the proposer related to the proposal: - FLEXMETER (H2020)
- Machine learning for Telecom Support Systems (TIM research contract)
- Intelligent transportation systems (CLUSTER-ITS)
- Human Brain Project (SP9 – Neuromorphic computing)
Possibly involved industries/companies: FCA, 2A, Reply

Title: Dynamic and Adaptive User Interfaces for the Internet of Things
Proposer: Fulvio Corno
Group website:
Summary of the proposal: The increasing adoption of Internet of Things (IoT) technologies for the creation of Ambient Intelligence systems (Smart Environments, Smart Buildings, Smart Cities) confronts users with a multiplicity of devices, of different nature, to interact with. The current solutions adopted for interaction, such as installing multiple applications on the user's own device (or, worse, on all owned devices), are clearly not scalable and do not allow on-the-fly interaction with previously unknown services.

The proposed research aims at exploring new interaction mechanisms that fully exploit the multi-device nature of IoT systems and the current trends in mobile computing. The IoT services available in the user's context will be enabled to discover the available devices and generate user interfaces (with various modalities) to enable their control.
Research objectives and methods: The research activity aims at conceiving new interaction paradigms in this domain, where user interfaces can be automatically generated, dynamically customized and deployed according to the real-time discovery of available IoT services, the availability of interaction devices (owned by the users, or seamlessly integrated in the environment), and user preferences, habits and, in general, the whole interaction context.

This kind of dynamic user interface generation will rely on a suitable semantic representation of the available services, devices, and user context and preferences, coupled with suitable reasoning or learning algorithms. The algorithms will aim at finding the right combination of devices (screens, smartphones, …) and modalities (speech, touch, vision, AR, …) for letting the users interact with and control the features offered by the intelligent environment. The generated interaction platform will support a cross-device paradigm, where more than one device may participate in the same use case.
The research methods will be grounded in the Human-Computer Interaction domain, starting from empirically gathered user requirements, and will be validated through rapid prototyping (with web and/or mobile devices) and user studies.
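As a rough illustration of the intended device/modality selection, the reasoning step that matches a discovered IoT service to the best available interaction device might be sketched as follows. All service names, device descriptors and the scoring heuristic are hypothetical assumptions for illustration, not part of the proposal.

```python
# Hypothetical sketch: matching a discovered IoT service to an interaction
# device. Names and the scoring heuristic are illustrative assumptions.

def select_device(service, devices, user_prefs):
    """Pick the device whose modalities best cover the service's required
    modalities, weighted by a per-device user preference (default 1.0)."""
    def score(device):
        covered = set(service["modalities"]) & set(device["modalities"])
        pref = user_prefs.get(device["name"], 1.0)
        return len(covered) * pref
    return max(devices, key=score)

service = {"name": "thermostat", "modalities": ["touch", "speech"]}
devices = [
    {"name": "wall-screen", "modalities": ["touch"]},
    {"name": "smart-speaker", "modalities": ["speech"]},
    {"name": "smartphone", "modalities": ["touch", "speech"]},
]
prefs = {"smartphone": 1.5}  # the user prefers their own phone

print(select_device(service, devices, prefs)["name"])  # smartphone
```

In a real system the scoring function would be replaced by semantic reasoning over the service, device and context models described above.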
Outline of work plan: The research will cover the following macro-phases:
Year 1, Semester 1: Study of interaction methods, in particular of the literature on multi-device interaction. Study of IoT systems and their programming.
Year 1, Semester 2: Design of a multi-device interaction mechanism, and validation with a user study (no actual implementation, yet, just interaction and interface). Study of Semantic modeling techniques.
Year 2: Integration of the multi-device interaction method with a semantic back-end. Experimentation with real devices in a realistic context. Possible extension to different interaction modalities (and media types). Study of conversational interaction paradigms.
Year 3: Extension of the approach to a conversational paradigm, by enriching the context knowledge of the systems and allowing a bidirectional "conversation" (by voice, text, and other interaction clues) with the ensemble of all IoT devices (where the relevant ones will automatically be involved).
Expected target publications: ACM Transactions on Computer-Human Interaction
IEEE Transactions on Human-Machine Systems
Journal of Ambient Intelligence and Humanized Computing
IEEE Internet of Things Journal
Current funded projects of the proposer related to the proposal: --none-- (possibly FCA, still pending)
Possibly involved industries/companies: None at the moment

Title: ICT for Urban Sustainability
Proposer: Maurizio Rebaudengo
Group website:
Summary of the proposal: Air quality nowadays receives ever-growing attention, since it has become a critical issue: long-term exposure to polluted air can result in permanent health problems. The causes of air pollution are various: fossil fuels, power plants, factories, waste incineration, controlled burns, cattle farming and so on. All these sources release several toxic agents, which endanger not only people (air pollution is one of the major causes of death per year) but also our entire ecosystem. The research will investigate a smart-city system that helps the local government and the citizens themselves to monitor what currently happens in the city. The designed architecture will use sensor networks able to capture city conditions such as temperature, air pollution, traffic situation, etc. The system will be deployed as a mobile and static network installed on different entities, like the bicycles of the bike-sharing service, public transportation vehicles and fixed spots, to acquire real-time and diffused information on the urban area. The huge amount of data will be analyzed to develop mobile applications that help citizens and stakeholders manage the actions needed to reduce air pollution or traffic congestion, and optimize the use of urban resources. The research activity will focus on some open issues related to the efficient application of mobile wireless sensor networks: e.g., the evaluation of mobile systems in case of deterministic and stochastic routes, the auto-calibration of data collected by low-cost sensors, energy saving based on the optimization of transmission and data-collection protocols, and the application of machine learning approaches to selectively discard erroneous or irrelevant data.
Research objectives and methods: The research will study and develop a hardware system and a software tool able to monitor, control and handle urban problems such as air pollution, traffic congestion, water quality, and so on. The research will design the whole architecture of the system, composed of different types of nodes, fixed and mobile. Mobile nodes could be deployed on public transport vehicles and public bicycles. Fixed nodes will be installed at some critical points, like the most crowded crossroads. The use of mobile nodes requires a study of the spatial and temporal coverage achievable under the possible route strategies (e.g., stochastic routes for bike sharing, fixed paths for buses), in order to verify the trade-off between costs and benefits for efficient urban air quality monitoring.

Among the many hazardous gases, some pollutants (e.g., CO, ground-level O3, Particulate Matter, and Pb) are the most dangerous and will be considered as the starting point for the development of the sensor nodes. The use of low-cost sensors involves a temporal variation of the sensor output; to address this problem, algorithms for the auto-calibration of the data will be studied and developed. In the case of mobile nodes, the localization of each node becomes an essential constraint for realizing a ubiquitous real-time monitoring system: the integration of a GPS module, which delivers accurate position and time information, will be considered in order to guarantee a pervasive system. Particular attention will be paid to evaluating the scalability of the system, in order to maximize the number of possible nodes. Integration with different technologies will be considered; in particular, RFID-based fixed sensor nodes will be exploited in cooperation with mobile WSN nodes. Data will be exchanged among the nodes and towards a gateway exploiting low-power wireless data-routing protocols or a telecommunication network. The research activity will consider the optimization of data acquisition, storage, aggregation and transmission, in order to reduce energy consumption. Data will be processed on the low-level modules of the network and then collected by distributed gateways and a central server, according to a hierarchical architecture that must guarantee data reliability and system availability.

The foreseen activity will also consider the analysis of the huge amount of collected data, with the goal of realizing a platform for data acquisition and evaluation. This platform will support mobile applications, installed on portable or automotive devices, that display a dashboard with useful information such as air pollution, traffic conditions or parking availability, and elaborate that information to support the citizen's decisions, e.g., the best path with respect to traffic, the areas with high or low pollution levels, the zones with available parking, etc.
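As a minimal illustration of the auto-calibration issue mentioned above, a low-cost sensor can be calibrated against co-located reference readings with a simple least-squares linear fit. The readings below are synthetic, and a real deployment would need periodic recalibration as the sensor drifts over time.

```python
# Illustrative sketch: ordinary least-squares calibration of a low-cost
# sensor against a reference station. All numbers are synthetic.

def fit_linear(raw, reference):
    """Fit reference ≈ gain * raw + offset by ordinary least squares."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

raw = [10.0, 20.0, 30.0, 40.0]   # low-cost sensor output (arbitrary units)
ref = [22.0, 41.0, 62.0, 81.0]   # co-located reference readings
gain, offset = fit_linear(raw, ref)
calibrated = [gain * x + offset for x in raw]
print(round(gain, 2), round(offset, 2))
```

More refined schemes (e.g., periodic refitting when a mobile node passes near a fixed reference station) follow the same principle.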
Outline of work plan: The research activity is organized in 3 main work packages (WPs). The first WP includes the design, implementation and experimentation of heterogeneous devices for monitoring useful parameters. The second WP deals with their integration in a large WSN for a widespread coverage of the city. Finally, the third WP consists in the management and analysis of the data collected from the WSN.

The first activity of WP1 aims to identify, for each urban problem that will be faced, a set of parameters for its complete characterization. For example, traffic congestion can be evaluated by recording the number of passages at fixed points, while air pollution can be measured according to the level of noxious gases in the air. Different modules are expected to be integrated to obtain more comprehensive information: for example, a GPS module can be added to a sensor to precisely localize the place of measurement. State-of-the-art protocols for data acquisition, storage, aggregation and transmission will be studied and implemented on the nodes in order to find new optimized energy-saving solutions. In the last part of WP1, the developed devices will be tested in the laboratory and on the field to evaluate their effectiveness in facing the considered urban problems. Special attention will be devoted to the auto-calibration of the data, aiming at proposing new solutions to improve the accuracy of the collected data.

The activities of WP2 are based on an initial state-of-the-art analysis to identify the most common practical issues in the design of a large WSN, with a special focus on its adoption in smart cities. Existing solutions will be reviewed in order to propose an effective integration of the devices into a robust network. The proposed infrastructure will be implemented and tested. Particular attention will be paid to the efficiency and security of the WSN, in terms of resource consumption and of communication reliability and security. A study on the characteristics of possible vehicles for the implementation of a mobile sensor network will investigate the time and spatial frequency of the measurements, the additional cost for the system (e.g., in terms of human labor) and the possible alteration of the data (e.g., due to the emissions of the vehicle itself).

The first activity carried out during WP3 is the development of a database for storing the data acquired from the WSN and for efficiently retrieving this information. Data Mining algorithms will be studied and developed for an in-depth analysis of the measured data; the main activity of WP3 regards the development of dashboards that exploit the resulting knowledge to effectively improve urban life.

All the results achieved in the WPs will be submitted to international conferences and journals.
Current funded projects of the proposer related to the proposal: Progetto FDM, Regione Piemonte
Possibly involved industries/companies: Città di Torino, Fondazione Torino Smart City, 5T, oBike

Title: Hardware assisted security in embedded systems
Proposer: Ernesto Sanchez
Group website:
Summary of the proposal: The ever-increasing number of security threats based on hardware or software attacks, together with the growing number of information-security breaches unveiled on almost any type of processor-based system, calls for a revised design paradigm that targets hardware security as one of the most important goals.
Today, most of the attempts to increase the security of computer-based systems rely on software solutions; however, new approaches involving hardware-based solutions are demonstrating that security solutions based on hardware modifications are also suitable to face this problem.
The first goal of this project is to study the different security attacks (hardware and software) affecting embedded systems; in particular, the study will focus on the weaknesses deriving from the inclusion of speculative modules, such as caches and Branch Prediction Units (BPUs), in today's embedded applications.
Then, in the second part of the project, the candidate will focus on possible solutions to the studied attacks, based on hardening techniques applied to the speculative modules; these may include pure hardware-based approaches as well as hybrid solutions combining hardware and software mechanisms.
Research objectives and methods: Today, security is one of the most important properties to be guaranteed in any computer system. In fact, almost any computer-based system, from high-end computers to very simple embedded systems, is susceptible to security attacks trying, for example, to steal private keys or valuable information, or to take control of the system; thus, it is important to develop new techniques able to prevent software- as well as hardware-based attacks.
The main goal of this proposal is to develop a hardware assisted design methodology able to improve the system security on embedded systems.
In particular, the first objective of the research project is to identify and study the principal security weaknesses in embedded systems used in safety-critical applications.
The Ph.D. candidate is required to acquire basic knowledge of the state of the art on software- and hardware-based attacks; in particular, he/she will be required to identify the most relevant aspects of speculative modules that make them a security target. Additionally, the student will also be required to survey the state of the art on the hardware- and software-based solutions available today.
Then, after an accurate definition of the case studies, the Ph.D. candidate will have the possibility to develop hardware-based solutions based on hardening the processor core as well as the speculative modules available in the embedded system.
Finally, a hybrid solution should be proposed by combining hardware and software mechanisms.
Outline of work plan: The proposed work plan is outlined in the following.

1. Study and identification of the security weaknesses in today's embedded systems
The first part of the research work consists of the study of the state of the art, detailing software- and hardware-based attacks.

1.1 software-based attacks
Regarding software-based attacks, the most relevant security attacks are studied here. In particular, the different attacks trying to exploit the behavior of speculative modules, such as caches and branch prediction units, will be studied.

1.1.1 run-time attacks
1.1.2 side-channel attacks
1.1.3 cache attacks
1.1.4 security weaknesses of other speculative modules

1.2 hardware-based attacks
At this point, the study focuses on hardware-based attacks. In particular, hardware attacks trying to exploit weaknesses in the devices' DfT mechanisms are detailed first; then, hardware Trojans are considered. Finally, it will be required to analyze the system architecture in order to identify possible weaknesses stemming from the architecture itself.

1.2.1 DfT related attacks
1.2.2 hardware Trojans
1.2.3 architectural security weaknesses

1.3 state of the art on hardware and software-based solutions
The main solutions available today are studied here, covering software-based as well as hardware-based solutions, together with hybrid ones.

1.4 identification and definition of the cases of study
Two different types of case studies will be defined here: the first uses academic embedded systems based on freely available processor cores such as the OR1200 and LEON;
the second, instead, takes advantage of industrial devices produced by STMicroelectronics targeting the automotive sector.

2 hardware-based solutions
The first solutions to the problems faced are provided at this point. Initially, methodologies that improve security by hardening the processor core are developed; then, solutions based on the improvement of the speculative modules available in the system will be devised.

2.1 by hardening the microprocessor core
2.2 by hardening the speculative modules

3 software-based solutions
Software-based solutions do not represent the main goal of this research project; however, it is important to consider that, in order to solve hardware-related issues, a pure software solution may be acceptable. In particular, the problems may be addressed by exploiting software solutions introduced at compile time.

3.1 pure compiler based solutions
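As a toy illustration of a software-level countermeasure to the timing side channels discussed in section 1.1.2: a naive byte-by-byte comparison leaks secret-dependent timing, while a constant-time comparison does not. This Python sketch only conveys the idea; the proposal's actual techniques would operate at the compiler and hardware level.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns as soon as the first mismatching byte is found,
    # so an attacker can recover a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where the first
    # mismatch occurs, removing the data-dependent timing channel.
    return hmac.compare_digest(a, b)

secret = b"s3cret-token"
print(naive_equal(secret, b"s3cret-token"),
      constant_time_equal(secret, b"wrong-token!"))  # True False
```

Compiler-based hardening generalizes this idea, e.g., by emitting code whose execution time does not depend on secret data.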

4 hybrid solutions
At the end of the research project, it will be possible to propose hybrid solutions exploiting both hardware and software mechanisms, creating trusted environments for embedded systems.

4.1 Trusted execution environment

The first year covers sections 1 - 2.1
The second year covers sections 2.2 - 3.1
The third year covers sections 3.1 - 4.1
Expected target publications: The project publications are expected to result from the research work in section 2 and the following ones. The initial work may be published at test-related conferences, since the topic is gaining relevance in that sector; thus, the initial target conferences are:
DATE: IEEE - Design, Automation and Test in Europe Conference
HOST: IEEE International Symposium on Hardware Oriented Security and Trust
ETS: IEEE European Test Symposium
ITC: International Test Conference
VTS: IEEE VLSI Test Symposium
IOLTS: IEEE International Symposium on On-Line Testing and Robust System Design

Additionally, the project research may be published in relevant international journals, such as: TCAD, TVLSI, ToC, IEEE security and privacy, TDSC.
Current funded projects of the proposer related to the proposal: N/A
Possibly involved industries/companies: N/A

Title: Industrial machine learning
Proposer: Elena Baralis
Group website:
Summary of the proposal: The availability of massive datasets, currently denoted as “big data”, characterizes many application domains (e.g., IoT-based systems, energy grids). Data is generated at a fast-growing pace in application domains where its collection is automated. For example, IoT sensor data streams are generated by intelligent devices embedded in cars and by smart appliances of different kinds. This kind of data is typically characterized by a time dimension, hence calling for algorithms capable of managing and analyzing huge streams of automatically generated time series.
Industrial domain applications are frequently characterized by complex physics-based models. Hence, a key issue in industrial machine learning will be the capability of coupling data-driven analysis techniques with these complex models, thus integrating the bottom-up knowledge derived from data with the top-down knowledge represented by the models.

Ongoing collaborations with the SmartData@polito research center, universities (e.g., Eurecom) and companies (e.g., GM Powertrain, ENEL, ENI) will allow the candidate to work in a stimulating international environment.
Research objectives and methods: The objective of the research activity is the definition of big data analytics approaches to analyze IoT streams for a variety of applications (e.g., sensor data streams from voltage transformers or instrumented cars).
The following steps (and milestones) are envisioned.
Data collection and exploration. Design of a framework to store relevant information in a data lake. Heterogeneous data streams, encompassing custom proprietary data and publicly available data, will be collected in a common data repository. Tools for exploratory analysis will be exploited to characterize the data and drive the subsequent analysis tasks.
Big data algorithm design and development. State-of-the-art tools will be exploited and novel algorithms will be designed for the specific data analysis problem. The analytics techniques may generate system (or device) profiles, which can be exploited to predict future behaviors. This approach finds interesting applications in the context of predictive maintenance, in which learning the system behavior allows the prediction of failures (e.g., power failures in transmission lines, engine faults in cars). The research activity will target application domains characterized by varying levels of data sparsity.
Knowledge/model interpretation. Understanding a discovered behavior requires interaction with domain experts, who will allow the operational validation of the proposed approaches. A key issue in this step is the interpretability of the generated models.
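To make the predictive-maintenance idea concrete, here is a minimal sketch (not one of the proposal's actual algorithms) of a streaming detector that flags sensor readings deviating sharply from a rolling window, the kind of per-device profile that could feed a failure-prediction pipeline. The stream values are synthetic.

```python
from collections import deque

# Illustrative sketch: rolling z-score anomaly detector for a sensor stream.

class RollingZScore:
    def __init__(self, window=20, threshold=3.0):
        self.buf = deque(maxlen=window)   # recent readings
        self.threshold = threshold        # sigmas beyond which we flag

    def update(self, x):
        """Return True if x deviates from the recent window by > threshold sigmas."""
        anomalous = False
        if len(self.buf) >= 5:            # wait for a minimal history
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = var ** 0.5
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.buf.append(x)
        return anomalous

det = RollingZScore()
stream = [50 + (i % 3) for i in range(30)] + [95]   # stable signal, then a spike
flags = [det.update(x) for x in stream]
print(sum(flags), flags[-1])  # only the final spike is flagged
```

Real deployments would replace the z-score with learned per-device behavior models, as described above.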
Outline of work plan: PHASE I (1st year): state-of-the-art survey for algorithms and available platforms for time series and stream data mining, performance analysis and preliminary proposals of optimization techniques for state-of-the-art algorithms, exploratory analysis of novel, creative solutions for Big Data; assessment of main data analysis issues in 1-2 specific industrial case studies.
PHASE II (2nd year): new algorithm design and development, experimental evaluation on a subset of application domains, implementation on a subset of selected Big Data platforms; deployment of algorithms in selected industrial contexts.
PHASE III (3rd year): algorithms improvements, both in design and development, experimental evaluation in new application domains, implementation on different Big Data platforms.
During the second-third year, the candidate will have the opportunity to spend a period of time abroad in a leading research center.
Expected target publications: Any of the following journals
IEEE TBD (Trans. on Big Data)
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TODS (Trans. on Database Systems)
ACM TKDD (Trans. on Knowledge Discovery in Data)
Journal of Big Data (Springer)
ACM TOIS (Trans. on Information Systems)
IEEE/ACM Trans. on Networking
ACM TOIT (Trans. On Internet Technology)

IEEE/ACM International Conferences (e.g., IEEE BigData, IEEE ICDM, ACM KDD)
Current funded projects of the proposer related to the proposal: I-React (Improving Resilience to Emergencies through Advanced Cyber Technologies) - H2020 EU project -
Possibly involved industries/companies: ENEL

Title: Modelling cancer evolution through the development of artificial intelligence-based techniques on temporal and spatial molecular data
Proposer: Elisa Ficarra
Group website:
Summary of the proposal: Cancer is, in essence, an evolving entity and the evolutionary properties of each tumor are likely to play a critical role in shaping its natural behavior and how it responds to therapy. However, effective tools and metrics to measure and categorize tumors based on their evolutionary characteristics still must be identified. We plan to combine mathematical modelling and AI-based approaches to develop a new generation of cancer classifiers based on tumor evolutionary properties and proxy data.
Research objectives and methods: One of the reasons why cancer evolutionary properties are still largely overlooked in diagnostic predictions is that key evolutionary parameters are hardly measurable in patients, due to the impracticability of extensive sampling in space and time, which is needed for an accurate estimate of key evolutionary metrics such as the diversity and divergence of the cancer cell population.
Here we will take advantage of a unique data source, provided by the collaboration with Dr. Andrea Bertotti (University of Torino, Department of Oncology), which allows the extensive molecular characterization of multiple tumor samplings, repeated in time and space, through the exploitation of a dedicated and proprietary ex-vivo experimental platform.

This project has three main goals:
1- Define appropriate evolutionary metrics to describe and predict the behavior of cancer, by applying (and refining/adapting) classical mathematical models of evolution to the specific context of cancer.
2- Identify multidimensional molecular proxies of the cancer evolutionary behavior, based on non-linear correlation analysis, and artificial intelligence techniques.
3- Exploit such non-linear associations through learning techniques to categorize cancers based on molecular surrogates of their evolutionary properties, as a groundwork for future diagnostic applications.
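As a minimal example of the classical evolutionary metrics mentioned in goal 1, the Shannon index quantifies the diversity of a tumor's clone-frequency distribution; the frequencies below are synthetic, for illustration only.

```python
import math

# Illustrative sketch: Shannon diversity H = sum(-p * ln p) over clone
# frequencies (which must sum to 1). Synthetic frequency vectors.

def shannon_diversity(freqs):
    return sum(-p * math.log(p) for p in freqs if p > 0)

homogeneous = [1.0]                     # a single dominant clone: H = 0
polyclonal = [0.25, 0.25, 0.25, 0.25]   # four equal clones: H = ln 4

print(round(shannon_diversity(polyclonal), 3))  # 1.386
```

Divergence between two samplings of the same tumor (in space or time) can be quantified analogously, e.g., by a distance between their frequency distributions.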

The research group led by Prof. Ficarra within Prof. Enrico Macii's group has fifteen years of research experience in the development of algorithms for the analysis of causative genetic mechanisms in complex diseases. In the specific area of the current PhD proposal, the group has developed several algorithms for DNA- and RNA-sequencing data analysis, as well as methodologies based on statistical models, machine learning and deep learning approaches. The group has established several national and international collaborations with companies and with prestigious universities and research groups in the computer science and biomedical fields (among others, Columbia University and Cornell University in New York, Helsinki University in Finland, EPFL in Switzerland, the Marie Curie Institute in France, the Berlin University in Germany, and Uppsala University in Sweden). Finally, the group participated in two Italian projects (CIPE 2005-2008, IZS PLV 04/12 RC 2013-2016) and in the funded FP7 European NGS-PTL project, and is currently participating as an associate partner in the funded IMI2 European project HARMONY.

The candidate should have good computer programming skills (C, C++, Python, scripting languages) as well as a strong inclination to research.
Outline of work plan: 1st year:
- Study of high-throughput genomic data, such as DNA break whole-genome mapping data, methylation data, sequencing data.
- Study of biological data creation and experimental biotech protocols to better understand the biological meaning from the data.
- Data framework organization, data formatting, and collection. Definition of new experiments to create complete and coherent data sets.
- Development of methodologies for feature extraction based on deep learning approaches.
- Study of temporal data analysis techniques and preliminary temporal model development.

2nd year:
- Starting from first year results, development of advanced modelling techniques based on machine learning (e.g. regression models, recurrent neural networks, hidden Markov models, etc.), deep learning, and statistical models for the correlation of the evolution of the tumor and the molecular temporal data, and for the identification of more relevant features driving the evolution.

3rd year:
- Development of algorithms for the evaluation of spatial/temporal correlations in different tumor clones evolution.
- Translation of temporal features into a suitable input for a classifier
- Development of classifiers predicting diagnosis and therapeutic response/resistance based on the most relevant features and paths detected in the previous phases.

The algorithms planned in the proposal rely on different methodologies and thus offer the PhD student very broad training opportunities. Moreover, the developed algorithms can be applied to the study of the molecular evolution of different diseases, such as different kinds of cancer and neurodegenerative diseases. These diseases are increasingly frequent, and therefore constitute a major burden for families and the health-care system. The proposed doctoral research, aiming at the development of algorithms and methods supporting the understanding and treatment of neurodegenerative as well as cancer diseases, could lead to potentially high scientific and social impact.
Expected target publications: Peer-reviewed journals:
• IEEE Transactions on Computational Biology and Bioinformatics
• IEEE Journal of Biomedical and Health Informatics (ex IEEE Transactions on Information Technology in Biomedicine)
• IEEE Transactions on Biomedical Engineering
• Bioinformatics (Oxford Ed)
• BMC Bioinformatics
• Nature methods
• Nature communications
• PLOS Computational Biology
• BMC Systems Biology
Current funded projects of the proposer related to the proposal: • EU IMI2 116026 "HARMONY: Healthcare Alliance for Resourceful Medicines Offensive against Neoplasms in HematologY" (2017-2021 associated partner)
• ERC CoG 724748 (2017-2022 PI BERTOTTI)
• AIRC IG 20697 (2018-2023 PI BERTOTTI)
Possibly involved industries/companies: Potentially involved:
• Merck Serono, Genève, Switzerland
• Personal Genomics, Verona, Italy
• Genomnia, Milano, Italy
• Fasteris, Genève, Switzerland

Title: Biometrics in the wild
Proposer: Andrea Bottino
Group website:
Summary of the proposal: Biometric traits are frequently used as an authentication mechanism in a plethora of applications, ranging from security to surveillance and forensic analysis. Today, biometric recognition systems are accurate and cost-effective solutions. Thanks to these characteristics, they are starting to be deployed in a variety of scenarios, such as granting access to schools, health or leisure facilities, identifying patients in hospitals, developing pay-with-fingerprint systems and unlocking consumer devices like notebooks or smartphones. However, biometric recognition in the wild, that is, biometric recognition from data captured in unconstrained settings, still represents a challenge and requires facing several issues, among them the detection of regions of interest (alignment, landmarking) in real-world data, the segmentation of biometric traits, data normalization, and the fusion (at different levels) of multiple biometric traits.
Another relevant issue in this context is the development of effective counter-spoofing measures. As a matter of fact, biometric recognition systems are vulnerable to more or less sophisticated forms of malicious attacks. The most common (and simplest for an intruder) type of attack is using fake biometric traits. Current counter-spoofing methods are based on the analysis of the liveness of the biometric traits. However, there is still a trade-off between security and accuracy, since introducing a spoof detector often decreases the acceptance ratio of genuine traits without reducing the false acceptance ratio to zero.
Research objectives and methods: The objective of this research proposal is to address several issues in the context of biometric recognition in the wild. In particular, the investigation will be focused on the following main topics.

Fusion approaches in the wild
Biometric recognition algorithms in the wild can exploit fusion approaches to improve their robustness. When focusing on a single biometric trait (e.g., fingerprint, iris, face, voice), several features can be extracted from the incoming samples. These features represent different complementary “views” of the same data, each with its own peculiarities and drawbacks. For this reason, developing approaches that combine multiple features, capable of mutually exploiting their strengths while softening their weaknesses, could be a valuable solution to improve both the accuracy and the generalization properties of the classification system. A related approach is the use of multiple biometric characteristics, employed in conjunction to identify an individual. In both cases, tackling the fusion problem will require to:
- Investigate different feature extraction techniques, the relationship between different feature spaces, and their confidence level for the task at hand; as far as biometrics in the wild is concerned, for some biometric traits a necessary preliminary step for feature extraction will also be the development of robust techniques for segmenting, in real-world data, the region of interest where the biometric traits are located
- Analyze the contribution of feature selection techniques, in order to avoid, during the integration of multiple feature sets, incorporating redundant, noisy or trivial information, which can seriously affect the performance of the recognition process
- Investigate different fusion approaches (e.g., early or late fusion) and their relevance to the classification problem
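A toy sketch of the two fusion strategies named above, early and late fusion; the feature values, scores and weights are purely illustrative, not an actual biometric pipeline.

```python
# Illustrative sketch: early vs. late fusion with toy numbers.

def early_fusion(feat_a, feat_b):
    """Early fusion: concatenate feature vectors before classification."""
    return feat_a + feat_b

def late_fusion(score_a, score_b, w_a=0.5, w_b=0.5):
    """Late fusion: combine per-modality match scores (weighted sum)."""
    return w_a * score_a + w_b * score_b

face_features = [0.12, 0.80, 0.33]
iris_features = [0.55, 0.21]
fused_vector = early_fusion(face_features, iris_features)  # fed to one classifier

face_score, iris_score = 0.91, 0.64   # outputs of two per-modality classifiers
decision = late_fusion(face_score, iris_score, w_a=0.6, w_b=0.4) > 0.7
print(len(fused_vector), decision)
```

The weights of the late-fusion step are one place where the per-modality confidence levels discussed above would enter.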

Liveness detection of biometric traits
One of the simplest forms of attack on a biometric recognition system is using fake biometric traits, which requires the development of specific protection methods capable of identifying live samples and rejecting fake ones. The objective of this task is the development of effective software-based counter-spoofing modules capable of rejecting fake samples without noticeably affecting the acceptance ratio of live samples. This is particularly challenging when biometric recognition is deployed in unconstrained environments, where the variability of the working conditions also causes a dramatic decrease in the discriminability of the obtained information.

Optimization of the computational resources
One of the most relevant application areas for biometrics in the wild is mobile biometrics (i.e., the use of biometric recognition systems on mobile/smart phones), which aims at guaranteeing system robustness while also supporting portability and mobility, allowing deployment in a wide range of operational environments, from consumer applications to law enforcement. However, from a computational point of view, this also requires optimizing the resources required, in terms of computational power, memory requirements and algorithm complexity.
Outline of work plan: First year
Analysis of the state of the art in the field of biometrics in the wild. The expected outcomes of this phase are (i) the identification of the pros and cons of current solutions and (ii) the preliminary design of methods capable of improving on the available approaches.

Second year
The methods proposed in the first year will be thoroughly evaluated on publicly available benchmarks, which allow a comparison with a great variety of approaches in the literature. The expected results of this phase are the refinement of the proposed methods and the design and evaluation of novel approaches in the specific domain.

Third year
The approaches analyzed during the first two years will be improved and possibly optimized in terms of resources required, in order to allow their deployment on a variety of different platforms.
Expected target publications: International peer-reviewed journals in the fields related to the current proposal, such as: IEEE Transactions on Image Processing, IEEE Transactions on Information Forensics and Security, IEEE Transactions on Pattern Analysis and Machine Intelligence, Pattern Recognition, Pattern Recognition Letters, Image and Vision Computing, International Journal of Computer Vision. Relevant and well-reputed international conferences, such as: IEEE Face and Gesture Recognition (FG), IEEE International Conference on Biometrics (ICB), Biometrics Theory, Applications and Systems (BTAS) conference, IEEE International Conference on Pattern Recognition.
Current funded projects of the proposer related to the proposal: No
Possibly involved industries/companies: No

Title: Context-Aware Power Management of Mobile Devices
Proposer: Enrico Macii
Group website:
Summary of the proposal: In spite of technological advances in circuit technologies (in terms of very low-power devices) and, though to a lesser extent, in battery technologies, modern mobile devices suffer from an evident shortage of available energy. Fancy GUIs and frequent wireless communications of many types, for almost any kind of application, result in a power and energy demand that exceeds what current batteries can provide.
However, modern devices such as smartphones offer unprecedented opportunities to reduce energy consumption by adapting to user behavior, thanks to the numerous sensors they host. These sensors could easily be leveraged to extract information about user behavior and to tune performance and power accordingly.

While context detection in mobile devices is a widely studied topic in the literature (e.g., for the prediction of mobility, user patterns, etc.), it has only occasionally been used to optimize the power management of those devices.
The objective of this research is precisely that of extending current power management strategies to incorporate knowledge of the context of the device usage.
Research objectives and methods: The grand objective of the project is the development of an OS-based power management strategy for mobile devices that incorporates information about the context in which the device is used, derived from the analysis of the various available sensors. By context we specifically mean the "situation" of the mobile device and the user. Examples are the user being indoor vs. outdoor, running vs. walking vs. still, or the device being held in a pocket vs. being actively used.
As a matter of fact, most mobile apps are context-aware - they ask permission to access various information on the phone, some of which involve the use or geo-location of the phone.

Sensors like the accelerometer, camera, GPS and, when available, compass and temperature sensors can all be used to infer different (device and user) contexts, which can be leveraged to optimize the usage of battery power/energy. As a simple example, a device held in a pocket (e.g., sensed by a mix of accelerometer and temperature readings) but in use (e.g., as a music player) can drastically reduce power consumption by automatically turning off services such as navigation or putting the display in a deep sleep state.
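A toy sketch of this idea is given below: a rule engine maps raw sensor readings to a coarse context and associates a power policy to it. Every sensor field, threshold and policy name is a hypothetical placeholder, not an actual mobile OS API.

```python
# Hypothetical rule-based sketch: all sensor fields, thresholds and policy
# names below are illustrative assumptions, not real OS APIs.
def infer_context(sensors):
    """Map raw sensor readings to a coarse (device, user) context."""
    if sensors["light_lux"] < 5 and sensors["proximity_near"]:
        place = "in_pocket"          # dark + something close to the screen
    elif sensors["light_lux"] > 1000:
        place = "outdoor"
    else:
        place = "indoor"
    motion = "moving" if sensors["accel_var"] > 0.5 else "still"
    return place, motion

def power_policy(context, active_app):
    """Associate a power-management rule to each detected context."""
    place, motion = context
    policy = {"display": "on", "gps": "on"}
    if motion == "still":
        policy["gps"] = "duty_cycled"     # fewer fixes when the user is static
    if place == "in_pocket":
        policy["display"] = "deep_sleep"  # screen is not visible
        if active_app == "music_player":
            policy["gps"] = "off"         # navigation is not needed
    return policy

ctx = infer_context({"light_lux": 0, "proximity_near": True, "accel_var": 0.8})
print(ctx, power_policy(ctx, "music_player"))
```

In the actual project the hand-written thresholds would be replaced by the automatic context-detection engine defined in task 1, but the context-to-policy mapping would keep this overall shape.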

The novelty of the proposal with respect to the state of the art therefore consists in the exploitation of the context information provided by the sensors for power optimization. This information is only marginally used by current power optimization strategies, which are solely based on the usage pattern of the resource (i.e., the interaction of the user). The novelty lies in also using information about where the user is interacting with the device, and not just how. The two are often, but not systematically, correlated.

The project will consist of two main tasks, partially overlapping in time:
1. Context and policy definition: this is a conceptual task that aims at defining the largest possible set of context classes and associating power management rules to each of them and to the corresponding transitions. The various contexts should be extracted automatically by an engine that processes the data collected by the various sensors and identifies a specific context. Once context detection is in place, specific policies for each context will be defined.
2. Implementation: While the previous phase can be evaluated offline by using sensor traces via a simulation model, the nature of modern mobile devices allows easy deployment in a real context. Therefore, implementation on actual mobile devices will be essential to prove the benefits of the proposed method. We currently envision implementing the power management policies in the power manager included in the mobile OS, although this choice will be confirmed after the first phase of the project, subject to further analysis of the OS kernel and its accessibility; alternatively, an application-level implementation will be sought.

A natural feedback loop between 1 and 2 is expected, in order to tune context detection and the related policies on actual data.

The candidate should have some expertise in C++ and Android programming, operating systems, and sensor interfacing (protocols and APIs).
Outline of work plan: PHASE I (months 1-9):
• State of the art analysis
• Overview of typical sensors, their interfaces and the data made available to the OS
• Analysis of the main OS API and hooks to access battery and sensor information and implementation of power management decisions.

PHASE II (months 9-18):
• Context and policy definition: static analysis
• Context and policy definition: off-line validation on sensor data

PHASE III (months 18-36):
• Implementation of the techniques either statically in the OS or as middleware (depending on the results of Phase I)
• On-line validation of the policies and possible tuning of the context detection and policies
Expected target publications: e.g.,
• IEEE Transactions on Computers
• IEEE Transactions on Mobile Computing
• IEEE Transactions on CAD
• IEEE Design and Test of Computers
• ACM Transactions on Embedded Computing Systems
Current funded projects of the proposer related to the proposal: Although there is currently no funded project ongoing on the specific subject addressed in the proposal, activities will leverage experience and part of the results of related projects, namely those using sensors to improve the energy-efficiency of buildings (FP7 DIMMER), as well as the experience gained in projects on Smart Systems (FP7 SMAC) with regard to sensor interfacing and technologies.
Possibly involved industries/companies:Reply, STMicroelectronics.

Title: Blockchain for Physical Internet and Sustainable Logistics
Proposer: Guido Perboli
Group website:
Summary of the proposal: The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.
Therefore, this Ph.D. thesis will explore the potential applications of blockchain to the supply chain, in particular when it is associated with the Physical Internet paradigm, a vision for a sustainable and deployable solution to the global problems associated with the way “we move, store, realize, supply and use physical objects all around the world”.
In more detail, the candidate will explore the usage of the IOTA blockchain, which was developed explicitly for Internet of Things applications.
These activities will be carried out in collaboration with the ICE for City Logistics and Enterprises Lab of Politecnico di Torino and Istituto Superiore Mario Boella (ICELab@Polito). The Ph.D. candidate will be co-supervised by Ing. Edoardo Calia (ISMB). ICELab@Polito is already a member of the IOTA Foundation and a partner in the creation of the first marketplace for IoT data with the Municipality of Turin. The student will be integrated into the blockchain group of ICELab@Polito, where four master students are already working on both the ICT and business aspects of the blockchain.
Research objectives and methods: The objectives of this research project are grouped into three macro-objectives:

Management Science/Operations Research objectives:
• Integration of Physical Internet paradigm and Blockchain
• Definition of operations management problems arising from the integration of freight transportation and blockchain

ICT objectives:
• Implementation differences with respect to Bitcoin;
• Deep understanding of ICT infrastructure;
• A better understanding of data privacy, integrity, trustability, sharing, and standardization.
Deployment and field testing:
• Smart City applications;
• Validation of the solutions (in collaboration with ICE Lab).
Outline of work plan: PHASE I (I and II semesters)
• State of the art on the blockchain, Physical Internet, supply chain and transportation in a smart city.
• Identification of needs and integration with City Logistics and Physical Internet.
The student will start his/her research from the state-of-the-art analysis carried out by prof. Perboli and ICELab@Polito, to be presented at the forthcoming 9th IFIP International Conference on New Technologies, Mobility and Security in the tutorial by prof. Perboli titled Applications of Blockchain to Supply Chain and Logistics: emerging trends and new challenges.

PHASE II (II and III semesters).
• Identification of promising applications;
• Implementation of the blockchain-based applications.
The student will start from the analysis of the current industrial projects and of the needs expressed by the companies and associations collaborating with ICELab@Polito (mainly TNT, DHL, Amazon and SOSLog).

PHASE III (IV semester)
• Deployment of solutions in collaboration with ICE Lab;
• On-field test and refinement.

PHASE IV (V and VI semesters)
• Cost-benefit analysis;
• Business model and market analysis;
• Guidelines for standardization of blockchain-based transportation applications;
• Scalability and replicability of the solution on a large scale.
Expected target publications:
• Interfaces - INFORMS;
• Omega - The International Journal of Management Science - Elsevier;
• Management Science - INFORMS;
• Business Process Management Journal;
• IT Professional - IEEE;
• Sustainable Cities and Societies – Elsevier;
• Transportation Research (Part A-E) – Elsevier;
• Transportation Science – INFORMS.
Current funded projects of the proposer related to the proposal: • SynchroNET;
Possibly involved industries/companies: TNT, DHL, Amazon

Title: Multi-user collaborative environments in Augmented and Virtual Reality
Proposer: Andrea Bottino
Group website:
Summary of the proposal: Computer supported environments capable of effectively assisting (and fostering) collaborative user tasks can provide a relevant contribution in a number of application scenarios, ranging from learning to training, collaborative design, serious games, virtual heritage and information sharing (e.g., for control and debriefing tasks).
To this end, recent advances in Augmented and Virtual Reality (AR/VR) technologies offer novel and interesting tools for the development of such applications. AR technologies seamlessly blend real and virtual environments, providing tools to support a multi-user, natural and face-to-face interaction. VR enables the development of shared environments that guarantee an effective communication and interaction both with the virtual objects and among different (and possibly remote) users.
Despite the potential of AR/VR technologies, many recent efforts have focused on the development and analysis of single-user environments, and much work remains to be done to fully understand how these technologies can be used for handling shared information and collaborative tasks when multiple users are involved. The main objective of this research project is to analyze the issues involved in developing effective multi-user AR/VR collaborative environments in specific scenarios and to propose, for these cases, innovative solutions capable of tackling the identified challenges.
Research objectives and methods: The spatial dimension and multimodal capabilities of multi-user virtual and augmented environments offer new possibilities for designing, deploying and enacting collaborative activities. However, fully exploiting this potential requires facing several challenges. In particular, research is needed on several relevant aspects.

- Collaborative interaction design.
Novel AR/VR interaction tools increase the user's sense of immersion and presence in VR and (finally) provide support for the development of effective and robust AR environments. However, in shared environments, the available interfaces affect the way users collaborate with each other and organize their joint activities in the common working space. This problem raises a variety of empirical questions and technical challenges: how do users coordinate their actions? How can they monitor the current state and activities of other users? How can joint attention be established? Which interaction paradigms are most effective for groups of local and/or remote users? How should computational media that enhance interaction between users be designed? As far as AR is concerned, how should UIs that allow physical manipulation be designed? How can real objects affect virtual ones, and how can more intuitive interaction possibilities between them be developed?

- Scenario adaptivity.
Enabling an effective user collaboration might require the design and deployment of shared environments capable of: (1) guaranteeing multiple forms of collaboration among users (i.e., co-located, distributed or mixed), (2) enabling a seamless integration of multiple interaction and visualization devices (ranging from wearable to mobile devices, in both AR and VR) and (3) supporting the adaptation of the visualized information; that is, users with different roles (e.g., doctors, caregivers and patients in a medical scenario) should be provided with different views of the same object, or with different information associated with the same object.

- Assessment.
The assessment of the proposed approaches shall be based on different criteria (including group and individually-based behaviors, cognitions, and affective states) and on both quantitative data (e.g., obtained analyzing the extent to which, and how, the group has achieved its a priori objectives) and qualitative analysis (e.g., questionnaires regarding perceived social support amongst team members, or third-party assessments, such as expert ratings of group behaviors).

The research activities required to tackle the aforementioned issues, which are transversal to different application scenarios, will be carried out through a case study research design that will involve the participation of various stakeholders (companies, public bodies, research centers).
Outline of work plan: Phase 1 (months 1-6)
- Analysis of the state-of-the-art in the field of AR/VR collaborative environments, with specific focus on theoretical frameworks, design frameworks and best practices.
- Analysis of the state-of-the-art of interaction tools in AR/VR, with specific focus on interaction metaphors and interaction design principles for multi-user environments
- Acquisition of a solid background in Usability Evaluation
Phase 2 (months 6-12)
- Identification of one (or more) application areas of interest (e.g., collaborative learning, collaborative design, control and debriefing tasks, cooperative training, or any other promising application scenario identified in Phase 1) and of the potential stakeholders involved (companies, public bodies, research centers).
Phase 3 (months 6-12)
- Identification, for the selected application scenarios, of the problems to face in embodied practice regarding specific design and interaction techniques. In this analysis phase, the participation of stakeholders is critical to identify the specific issues and requirements that need to be addressed when designing effective collaborative environments.
- Analysis, proposal and preliminary implementation of techniques capable of improving the current HCI paradigms and their general usability in the selected scenarios
Phase 4 (months 12-24)
- Development of (prototypes of) multi-user collaborative environment in the selected application contexts.
- Assessment and user evaluation of the developed prototypes.
Phase 5 (months 24-36)
- Integration of the most promising approaches into complete demonstrators and their assessment.
- Proposal of theoretical frameworks, based on the research outcomes, for supporting the design of collaborative applications in AR/VR

Candidates should preferably have a good background in programming (in C#/C++ languages) and knowledge of the main principles of VR/AR and HCI. The research program is intrinsically multidisciplinary and requires the contribution of experts from different areas. To this end, the candidate will benefit from ongoing collaborations between our research group and the Department of Psychology of the University of Torino, which has expertise in usability, cognitive psychology and pedagogy. Furthermore, our group has an ongoing PhD project focused on Serious Games for Collaborative Educational Tasks, from which synergies with the current proposal are expected.
Expected target publications: International peer-reviewed journals in the fields related to the current proposal, such as: IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Human-Machine Systems, International Journal of Computer-Supported Collaborative Learning, ACM Transactions on Computer-Human Interaction, IEEE Transactions on Learning Technologies, IEEE Transactions on Emerging Topics in Computing, Computers & Education (Elsevier), Entertainment Computing (Elsevier). Relevant and well-reputed international conferences, such as: ACM SIGCHI, IEEE VR, IEEE IHCI, IEEE 3DUI, IEEE CIG.
Current funded projects of the proposer related to the proposal: The research program involves several areas representing the core interests of the Computer Graphics and Vision Group. Among them, virtual and augmented reality, artificial intelligence, human computer interaction, natural user interfaces, usability engineering, participatory design and cooperative learning. Many of the foreseen activities find a natural mapping into currently ongoing or foreseen private contracts managed by the group.
Possibly involved industries/companies: Several companies, public bodies and research centers expressed their interest in the research program and will be involved as stakeholders in different application scenarios, such as collaborative design (Altec Space), control and debriefing tasks (Altec Space, Mars Society, LogosNet, SIMNOVA simulation center), cooperative training and collaborative learning (LogosNet, SIMNOVA simulation center, Department of Psychology of the University of Torino)

Title: Towards algorithmic transparency
Proposer: Tania Cerquitelli
Group website:
Summary of the proposal: In the last few years, algorithms have been widely exploited in many practical use cases, and they increasingly support and influence various aspects of our life. With little transparency, in the sense that it is very difficult to ascertain why and how they produce a certain output, wrongdoing is possible. For example, because of biased training data, an algorithm meant to promote healthy habits may recommend activities that minimize risks only for a subset of the population. Whether these effects are intentional or not, they are increasingly difficult to spot due to the opaque nature of machine learning and data mining.

Since algorithms affect us, transparent algorithms are indispensable: not only the results of the analysis, but also the processes and models used to produce them, should be made accessible.

The main research goal of this proposal is to design innovative solutions to explain the main relationships between the inputs and outputs of a given prediction made by a black-box data analytics algorithm (e.g., a neural network). Specifically, different indices will be proposed to produce prediction-local explanations for a black-box model on image and/or text classification. Innovative meta-algorithms will be devised to make the software human-readable, usable and well-accepted in the many different real scenarios where algorithmic transparency and user control are needed.
Research objectives and methods: Today, the most effective machine learning algorithms - such as deep neural networks - operate essentially as black boxes. Deep learning algorithms have an increasing impact on our everyday life: complex models obtained with deep neural network architectures represent the new state of the art in many domains concerning image and video processing, natural language processing and speech recognition. However, neural network architectures have a natural propensity to opacity: it is hard to understand how they process data and produce predictions. This opacity leads to black-box systems in which the user remains completely unaware of the process that maps inputs to output predictions. As algorithms increasingly support different aspects of our life, they may be misused and may unknowingly support bias and discrimination if they are opaque (i.e., the internal algorithmic mechanics are not transparent, in that they produce output without making it clear how they have done so). Since algorithms affect us, algorithmic transparency and user control are needed.
To guarantee algorithmic transparency, innovative strategies will be designed to explain the main relationships between the inputs and outputs of a given prediction/classification made by a black-box model. The ultimate goal of this research is to explain the inner functioning of black-box classification models by providing explanations of the produced outcome. To produce prediction-local explanations, different indices will be studied to assess the impact of each input on the final algorithm outcome. Transparent solutions should produce more credible and reliable information and support better services.
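One simple way to realize such prediction-local explanation indices is by perturbation: each input is occluded in turn, and the change in the black-box output is recorded as that input's influence. The sketch below illustrates the idea on a toy stand-in classifier; it is an assumption-laden illustration, not the specific indices the research will define.

```python
import numpy as np

# Toy stand-in for a black-box model: only its output score is observable.
# The weights below are hidden from the explanation procedure and are
# invented purely for illustration.
w = np.array([3.0, 0.0, -1.0, 0.0])

def black_box(x):
    """Opaque classifier returning a probability-like score."""
    return 1 / (1 + np.exp(-x @ w))

def local_influence(x, baseline=0.0):
    """Prediction-local index: output change when one input is occluded."""
    ref = black_box(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] = baseline                 # occlude a single feature
        scores[i] = abs(ref - black_box(xp))
    return scores

x = np.array([1.0, 1.0, 1.0, 1.0])
infl = local_influence(x)
print("influence per feature:", infl.round(3))
print("most influential feature:", int(infl.argmax()))
```

For images or text, the "features" occluded would be interpretable units such as superpixels or words rather than raw inputs, which is exactly the interpretable-feature definition discussed in the work plan.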
The main research objectives addressed by the candidate during her/his PHD studies will be:

1. Designing innovative solutions to offer algorithmic transparency.
New meta-algorithms that offer a greater level of transparency of state-of-the-art black-box models will be designed and developed. The main focus will be on black-box classification models and the proposed solutions will be tailored to unstructured data (images and text).

2. Defining innovative indices to assess the impact of classifier inputs over the classifier outcomes.
Innovative qualitative and quantitative indices will be studied to assess both the local and the global influence of input features with respect to either a specific class or the inter-class behavior.

3. Exploiting the proposed solutions in different real use cases.
Proposed solutions will be exploited in one/two domains where algorithmic transparency plays a key role (e.g., healthcare, Industry 4.0 and energy-related applications).
Outline of work plan: During the 1st year the candidate will study available solutions to enhance the algorithmic transparency and design innovative methods to put existing, effective, and efficient state-of-the-art black-box models to practical use cases. The first step towards the human-oriented analytics process is the definition of a set of interpretable features to correctly explain the forecasting/classification of a black-box model. Focused on unstructured data, such as images and textual data, the candidate will identify a set of features to correctly ascertain why and how a given black-box classifier produces a certain output.

During the 2nd year the candidate will study and define innovative indices to explain the behavior of the black-box model and provide more insights on the final outcome. Indices should measure the local influence of each interpretable feature with respect to the real class as well as inter-class feature influence.

During the 3rd year the candidate will design a smart user interface to effectively exploit the proposed solutions in practical use cases requiring a greater level of transparency.

During the 2nd-3rd year the candidate will assess the proposed solutions in diverse real application domains (e.g., healthcare and Industry 4.0).

During all three years, the candidate will have the opportunity to cooperate in the development of solutions applied to the research projects and to participate in conferences, presenting the results of the research activity.
Expected target publications: Any of the following journals IEEE TKDE (Trans. on Knowledge and Data Engineering) ACM TKDD (Trans. on Knowledge Discovery in Data) ACM TOIS (Trans. on Information Systems) ACM TOIT (Trans. on Internet Technology) Information sciences (Elsevier) Expert systems with Applications (Elsevier) Engineering Applications of Artificial Intelligence (Elsevier) Journal of Big Data (Springer)

IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: Regione Piemonte project DISLOMAN (Dynamic Integrated ShopfLoor Operation MANagement for Industry 4.0)
Research contract with Enel Foundation (Classification of residential consumers based on hourly-metered consumption data)
Possibly involved industries/companies: None

Title: Key management techniques in Wireless Sensor Networks
Proposer: Filippo Gandino
Group website:
Summary of the proposal: Wireless sensor networks (WSNs) offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to achieve secure communication among nodes, many approaches employ symmetric encryption. Several key management schemes have been proposed to establish symmetric keys, exploiting different techniques (e.g., random distribution of secret material and transitory master keys). Depending on the application of the WSN, the state-of-the-art protocols have different characteristics and different gaps.

The proposed research activity will be focused on the study and development of key management protocols. The proposed investigation will consider both the security requirements and the performance (e.g., power consumption, quantity of additional messages, computational effort) of the networks. The research will analyze different possible solutions, evaluating the trade-off in terms of costs and benefits, according to different possible scenarios and applications.

The initial part of the activity will be focused on the proposals currently in progress within the research group. However, the candidate will be stimulated to formulate and develop new solutions.

The proposed research involves multidisciplinary knowledge and skills (e.g., computer network and security, advanced programming).
Research objectives and methods: The objectives of the proposed activity consist in studying the state of the art of key management protocols for WSNs and in proposing new solutions suitable for various application scenarios. The requirements that affect the security protocols will be considered in order to find the best solution for different kinds of WSNs. In particular, the mobility of the nodes and the possibility of adding new nodes after the first deployment will be considered.

The proposal of new solutions will start by analyzing the state-of-the-art protocols in order to find their weaknesses and overcome them. For example, for key management schemes based on a transitory master key, which are exposed to great risks during the initialization phase, a new strategy could consider the possibility of reducing the time required for the key establishment.
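As a concrete illustration of the transitory-master-key idea (in the spirit of schemes such as LEAP), the sketch below derives per-node and pairwise keys from a pre-loaded master key, which is erased once the initialization window closes. Key sizes, identifiers and the derivation layout are illustrative assumptions, not the scheme the research will necessarily adopt.

```python
import hashlib
import hmac

# Illustrative transitory master key, pre-loaded on every node before
# deployment; the value and size are placeholders.
MASTER_KEY = b"pre-loaded-transitory-master-key"

def node_key(node_id: bytes) -> bytes:
    """Per-node key derived from the transitory master key."""
    return hmac.new(MASTER_KEY, node_id, hashlib.sha256).digest()

def pairwise_key(my_id: bytes, peer_id: bytes) -> bytes:
    """Pairwise link key: HMAC of one id under the other node's key.
    Sorting the ids makes both endpoints derive the same key."""
    a, b = sorted([my_id, peer_id])
    return hmac.new(node_key(a), b, hashlib.sha256).digest()

k_ab = pairwise_key(b"node-A", b"node-B")
k_ba = pairwise_key(b"node-B", b"node-A")
k_ac = pairwise_key(b"node-A", b"node-C")
assert k_ab == k_ba      # both ends of a link agree on its key
assert k_ab != k_ac      # different links get different keys

# After the initialization window the master key must be erased, so a
# node captured later cannot re-derive other nodes' keys.
MASTER_KEY = None
```

The risk window mentioned above is exactly the interval before the final erasure step: an adversary that extracts the master key during initialization can derive every link key, which is why shortening the key-establishment time matters.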

The candidate will start by cooperating in the ongoing research activity, implementing, testing and evolving the proposals currently under consideration by the research group. However, a fundamental goal for the candidate is to acquire the skills needed for the development of autonomous proposals.

The proposed solutions will be evaluated and compared to state-of-the-art approaches, in order to evaluate their security level and their performance. The analysis of the protocols will comprise: (a) a theoretical analysis, (b) simulations, and (c) an implementation on real nodes.

The theoretical analysis will consider several aspects of the key management protocols. A part of this analysis will evaluate the statistical characteristics of the scheme, in order to analyze the level of security and other network parameters for protocols based on stochastic elements. A security analysis will compare the properties and the requirements of the protocols.
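As an example of the statistical characteristics such an analysis can evaluate, the sketch below computes, for the classic random key pre-distribution setting (in the style of Eschenauer-Gligor), the probability that two nodes share at least one key. The pool and ring sizes are illustrative.

```python
from math import comb

def share_probability(P: int, k: int) -> float:
    """Probability that two random k-key rings drawn (without replacement)
    from a pool of P keys have at least one key in common."""
    # P(no overlap) = C(P-k, k) / C(P, k); the complement is a shared key.
    return 1 - comb(P - k, k) / comb(P, k)

# Illustrative pool/ring sizes: larger rings raise connectivity but also
# the fraction of links compromised when a node is captured.
for P, k in [(10000, 75), (10000, 150), (100000, 250)]:
    print(f"pool={P:6d} ring={k:3d} -> P(shared key) = "
          f"{share_probability(P, k):.3f}")
```

Indices of this kind make the security/connectivity trade-off of stochastic schemes explicit, which is the purpose of the statistical part of the theoretical analysis.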

The simulations will be used to analyze the performance of the schemes with a high number of nodes. This kind of investigation allows significant results to be reached within a limited time. Many tools are available for simulating WSNs, e.g., TOSSIM, which uses the same code written for the TinyOS platform.

The last kind of analysis will be based on an experimental session on a real network. The main platform that will be used will be TinyOS, in order to develop a code that can be used also for the simulations. The main purpose of a real implementation is to validate the results achieved by simulations.

In order to work on this research topic, a candidate should have good programming skills and a good knowledge of networking and security.
Outline of work plan: During the first part of the first year the candidate will investigate the field of the research activity. Therefore, the candidate will study the existing protocols and will improve his/her knowledge on the related topics. This phase lasts 3 months. During the rest of the first year, the candidate will start to work on research proposals currently under consideration within the research group. The activity will include the cooperation in the detailed definition of the key management schemes, and in their implementation. The preliminary results will be submitted to international conferences.

During the second year, the activity will be divided into two main tasks. The first task will consist in the accurate analysis and evaluation of the proposed ideas and in the possible modification of those approaches. This task will include a programming activity, in order to also implement the main state-of-the-art protocols and to execute a comparative analysis. The achieved results will be submitted to an international journal. The second task will consist in the design of new key management approaches. The new proposals will be based on the knowledge acquired by the candidate during the research activity.

During the last year, the main goal of the candidate will be to conclude the analysis of his/her proposals and to publish the results. However, the evolution of the current proposals of the research group will also be carried on, in order to diversify the research activity and to increase the probability of reaching valuable results.

During all three years, the candidate will have the opportunity to cooperate in the development and implementation of key management solutions applied to research projects. These tasks will allow the candidate to understand the real scenarios in which WSNs are used and to identify their actual issues.
Current funded projects of the proposer related to the proposal: FDM - Bando Fabbrica Intelligente
Possibly involved industries/companies: None

Title: Dependability of next generation computer networks
Proposer: Riccardo Sisto
Group website:
Summary of the proposal: As computer systems are becoming more and more pervasive, their dependability is becoming more and more important. At the same time, there is a trend to make these dependable systems distributed, i.e. network-based, where the possibility to have interactions between mobile applications or smart devices and the cloud plays a fundamental role. Projects like the Internet of Things (IoT), Smart Grids, and Industry 4.0 are just examples of this trend.
In this scenario, network dependability becomes fundamental. However, the techniques adopted for ensuring dependability have to be updated, following the upcoming evolution of the networks towards IoT and higher flexibility and dynamicity made possible by virtualization and Software-Defined Networking (SDN). The main objective of the proposed research is to advance the state of the art in the techniques for ensuring network dependability, considering this evolution of the networks. The study will be conducted with a special focus on formal methods. In this context, the main challenge is how to formally model next-generation networks and their components, in such a way that the models are accurate enough, and at the same time amenable to fully automated and computationally tractable approaches for ensuring safety and security.
Research objectives and methods: In the past, the main formal approach for preventing undesired conditions in the operation of computer networks and for ensuring their correct functional behavior has been formal verification. According to this approach, a formal model of the network is built and analyzed, which can reveal the existence of potential fault conditions (e.g. security attacks or violations of safety requirements). A rich literature already exists about these techniques, even for SDN-based and NFV-based networks, and the Netgroup has been active in this field in recent years. The formal verification approach has the disadvantage that, when a fault condition is found, a correction of the error (e.g. a change in the configuration of network components) has to be found manually. In rapidly evolving systems, such as the ones based on NFV and SDN, where network reconfigurations can be triggered frequently and automatically, manual steps are undesirable because they can limit the potential dynamics of these networks. The presence of manual steps is a problem, for example, when a network must be rapidly reconfigured to respond to a security attack. In this case, reconfiguration should be totally automatic and very fast, in addition to being correct.
A possible alternative approach that the candidate will explore for ensuring safety and security properties of VNF- and SDN-based networks is a correctness-by-construction one. The idea is to still use formal models of network components, as is done with formal verification, but instead of just checking that the behavior of a fully defined model satisfies some properties (verification), some aspects of the model (e.g. the configurations of some network components) are left open, and a solution that assigns these open values is searched for (formal correctness by construction). The aim of the research is to identify and experiment with correctness-by-construction approaches that exploit the potential of Satisfiability Modulo Theories (SMT) solvers. These tools are good candidates for this purpose because they are very efficient in determining whether a set of logical formulas is satisfiable, and if it is, they can also efficiently find assignments of free variables that make the logical formulas true. The main idea to be explored in the research is to exploit an SMT solver as a backend for building a tool that can automatically and quickly synthesize correct-by-construction configurations of NFV- and SDN-based networks. This implies finding ways of encoding the correct-construction problem of such networks as an SMT problem that can be solved efficiently enough. The candidate will exploit the experience the Netgroup already has in using SMT solvers for fast formal verification of NFV-based networks.
To date, no researcher has tried this approach. The previously proposed solutions for automatic reconfiguration of networks generally use non-formal approaches or specially crafted algorithms. If successful, this innovative approach can have high impact, because it can improve the level of automation in reconfiguring next generation networks and at the same time provide high assurance levels, due to the use of mathematically rigorous models.
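
The correctness-by-construction idea can be illustrated with a deliberately tiny sketch: some configuration values are left open, and an assignment satisfying the desired properties is searched for. Here the search is a brute-force enumeration in plain Python, standing in for what an SMT solver such as Z3 would do far more scalably; the network, protocol names, and policy are invented for illustration and are not the actual encoding the research would develop:

```python
from itertools import product

# Toy network: client -> firewall -> server. The firewall configuration
# (which protocols to allow) is left open; we search for an assignment
# satisfying the desired reachability and security properties. A real
# prototype would encode the same constraints for an SMT solver (e.g. Z3)
# instead of enumerating all assignments.

PROTOCOLS = ("http", "ssh", "telnet")

def satisfies_policy(allowed: dict) -> bool:
    # Property 1 (reachability): web traffic must reach the server.
    if not allowed["http"]:
        return False
    # Property 2 (security): legacy remote shells must be blocked.
    if allowed["telnet"]:
        return False
    # Property 3: management access is allowed only alongside web access.
    if allowed["ssh"] and not allowed["http"]:
        return False
    return True

def synthesize_config():
    """Return one firewall configuration satisfying all properties."""
    for bits in product((False, True), repeat=len(PROTOCOLS)):
        allowed = dict(zip(PROTOCOLS, bits))
        if satisfies_policy(allowed):
            return allowed
    return None  # policy is unsatisfiable

print(synthesize_config())  # a correct-by-construction assignment
```

The synthesized configuration allows HTTP and blocks both SSH and Telnet. The research challenge is precisely that this enumeration explodes for realistic networks, which is why an efficient SMT encoding is needed.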
Outline of work plan: Phase 1 (1st year): the candidate will analyze the state-of-the-art of formal approaches for the modeling and verification of SDN-based, NFV-based, and IoT-based network infrastructures, with special emphasis on the assurance of security-related and safety-related properties and with special attention to the approaches already developed within the NetGroup (Verigraph).
Also, recent literature will be searched extensively in order to find whether any new approach going in the direction of correctness by construction with formal guarantees (not available today) has appeared in the meantime.
Subsequently, with the guidance of the tutor, the candidate will start the identification and definition of new approaches for ensuring safety and security requirements in these network infrastructures based on correctness by construction, as explained in the previous section. The formal modelling approach for virtual network functions and their configurations will be similar to the ones already used by formal verification tools, but with the addition of parameterization in configurations to be exploited for construction. At the end of this phase, the publication of some preliminary results is expected. During the first year, the candidate will also acquire the background necessary for the research. This will be done by attending courses and by personal study.

Phase 2 (2nd year): the candidate will consolidate the proposed approaches, will fully implement them, and will make experiments with them, e.g. in order to study their scalability. The results of this consolidated work will also be submitted for publication, aiming at least at a journal publication.

Phase 3 (3rd year): based on the results achieved in the previous phase, the proposed approach will be further refined and improved, in order to optimize scalability, performance, and generality of the approach, and the related dissemination activity will be completed.
Expected target publications: The contributions produced by the proposed research can be published in conferences and journals belonging to the areas of networking (e.g. INFOCOM, IEEE/ACM Transactions on Networking, IEEE Transactions on Network and Service Management, NetSoft, IEEE Transactions on Industrial Informatics) and security (e.g. IEEE S&P, ACM CCS, NDSS, ESORICS, IFIP SEC, DSN, ACM Transactions on Information and System Security, or IEEE Transactions on Dependable and Secure Computing).
Current funded projects of the proposer related to the proposal: None
Possibly involved industries/companies:TIM

Title: CityScience: Data science for smart cities
Proposer: Silvia Chiusano
Group website:
Summary of the proposal: A smart city is an urban environment in which social and environmental resources are combined to make cities more livable by increasing urban prosperity and competitiveness. On the one side, the establishment of innovative technologies related to mobile or wearable computing and smart city infrastructure has led to the continuous massive generation of heterogeneous data. On the other side, due to urban stakeholders' search for better-informed decisions, there is a natural tendency to collect, process, and analyse these data, transforming them into information and actionable insights. From a data science perspective, data emerging from smart cities give rise to many challenges that constitute a new inter-disciplinary field of research.

The objective of the PhD is the study and development of proper solutions for the collection, management, and analysis of huge volumes of heterogeneous urban data, to extract useful insights for increasing the efficiency, accessibility and functionality of offered services and, ultimately, the well-being of citizens. Different smart city scenarios will be considered as reference case studies, such as urban mobility, citizen-centric contexts, and the healthy city.

The research activity involves multidisciplinary knowledge and skills such as database and data mining techniques and advanced programming.
Research objectives and methods: Urban computing entails the acquisition, integration, and analysis of big and heterogeneous data collections generated by a diversity of sources in urban spaces to profile the different facets and issues of the city environment. For example, sensors can estimate the volume of traffic at specific spots, citizens interact with mobile sensors in their smart-phones, and micro-blogging applications such as Twitter provide a stream of textual information on citizen needs and dynamics.

The analysis of these data collections to gain actionable insights gives rise to several challenges in the context of data analytics, which will be addressed during the PhD, such as:
• Definition of cross-domain data fusion methods. To fully characterize the urban environment, heterogeneous data sources should be considered and integrated. For example, to predict the air quality in a city area, different data sources (traffic, meteorology, Points Of Interest (POIs), and road networks) should be considered. Since these data are usually collected with different spatial and temporal granularities, suitable data fusion techniques should be devised to support the data integration phase and provide a spatio-temporal alignment of the collected data.
• Definition of suitable models for data representation. The storage of heterogeneous urban data requires the use of alternative data representations to the relational model such as NoSQL databases (e.g., MongoDB).
• Adoption of suitable technologies and design of novel algorithms for big data analytics. The capability of smart cities to generate and collect data of public interest has increased at an unprecedented rate, to such an extent that data can rapidly scale towards “Urban Big Data”. Consequently, the analysis of this massive volume of data demands distributed processing algorithms and technologies (e.g., Apache Spark).
• Design of machine learning algorithms to deal with spatial and spatio-temporal data. Urban data is usually characterized by spatio-temporal coordinates describing when and where the data has been acquired. Spatio-temporal data have unique properties, such as spatial distance, spatial hierarchy, temporal smoothness, periodicity, and trend, which entail the design of suitable data analytics methods.
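
As a minimal sketch of the spatio-temporal alignment step in the data fusion challenge above, the following Python snippet resamples readings from two sources with different sampling rates onto a common hourly grid so they can be joined. The sensor values and names are invented for illustration; a production pipeline would use dedicated tooling rather than this toy function:

```python
from collections import defaultdict
from datetime import datetime

def align_hourly(readings):
    """Average timestamped readings into hourly buckets.

    `readings` is a list of (datetime, value) pairs sampled at an
    arbitrary rate; the result maps each hour to the mean value,
    giving a common temporal granularity for fusing heterogeneous
    sources (illustrative sketch only).
    """
    buckets = defaultdict(list)
    for ts, value in readings:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return {hour: sum(v) / len(v) for hour, v in buckets.items()}

# Traffic sensor sampled every 20 minutes, air quality sampled hourly:
traffic = [(datetime(2018, 3, 1, 8, m), c)
           for m, c in [(0, 120), (20, 150), (40, 90)]]
air = [(datetime(2018, 3, 1, 8, 0), 41.5)]

hourly_traffic = align_hourly(traffic)  # one hourly bucket, mean 120.0
hourly_air = align_hourly(air)
# Both sources now share the same hourly keys and can be joined:
fused = {h: (hourly_traffic[h], hourly_air[h])
         for h in hourly_traffic.keys() & hourly_air.keys()}
print(fused)
```

Spatial alignment (e.g. snapping readings to grid cells or road segments) follows the same bucketing idea, applied to coordinates instead of timestamps.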

The objectives of the PhD research activity consist in identifying the peculiar characteristics and challenges of each considered urban application domain and devising novel solutions for the management and analysis of the corresponding data. Multiple urban scenarios will be considered with the aim of exploring the different facets of urban data and evaluating how the proposed solutions perform on different data collections. The experimental evaluation will be conducted on data collected from the considered smart city scenarios, possibly integrated with the open data made available by the municipalities.
Outline of work plan: 1st Year. The PhD student will review the recent literature on urban computing to identify up-to-date research directions and the most relevant open issues in the urban scenario. Based on the outcome of this preliminary explorative analysis, an application domain, such as urban mobility, will be selected as a first reference case study. This domain will be investigated to identify the most relevant data analysis perspectives for gaining useful insights and to assess the main data analysis issues. The student will perform an exploratory evaluation of state-of-the-art technologies and methods on the considered domain, and will present a preliminary proposal for optimizing these approaches.

2nd and 3rd Year. Based on the results of the 1st year activity, the PhD student will design and develop a suitable data analysis framework including innovative analytics solutions to efficiently extract useful knowledge in the considered domain, aimed at overcoming the weaknesses of state-of-the-art methods. Moreover, during the 2nd and 3rd year, the student will progressively consider a larger spectrum of application domains in the urban scenario. The student will evaluate if and how his/her proposed solutions can be applied to the newly considered domains, and will also propose novel analytics solutions.

During the PhD, the student will have the opportunity to cooperate in the development of solutions applied to the research projects on smart cities. The student will also complete his/her background by attending relevant courses. The student will participate in conferences, presenting the results of his/her research activity.
Expected target publications: Any of the following journals
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TODS (Trans. on Database Systems)
Journal of Big Data (Springer)
Expert Systems With Applications (Elsevier)
Information Sciences (Elsevier)

IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: S[m2]ART (Smart Metro Quadro): Bando Smart City and Communities; Ente finanziatore: MIUR (Ministero dell’istruzione, Università e Ricerca)

MIE (Mobilità Intelligente Ecosostenibile): Bando Cluster Tecnologico Nazionale “Tecnologie per le Smart Communities”; Ente Finanziatore: MIUR (Ministero dell’istruzione, Università e Ricerca)
Possibly involved industries/companies:

Title: Promoting Diversity in Evolutionary Algorithms
Proposer: Giovanni Squillero
Group website:
Summary of the proposal: Divergence of character is a cornerstone of natural evolution. On the contrary, evolutionary optimization processes are plagued by an endemic lack of diversity: all candidate solutions eventually crowd the very same areas of the search space. Such a “lack of speciation” was pointed out in Holland's seminal work of 1975, and is nowadays well known among scholars. It has different effects on different search algorithms, but almost all of them are quite deleterious. The problem is usually labeled with the oxymoron “premature convergence”, that is, the tendency of an algorithm to converge toward a point it was not supposed to converge to in the first place. The research activity would push further on this line, explicitly tackling “Diversity Promotion”. While the topic is not new, it is far from being solved, and it is gaining increasing importance: no general solution applicable to industrial problems is available today.
Research objectives and methods: The primary research objective is the development of novel, highly effective, general-purpose methodologies able to promote diversity. That is, methodologies like the “island model” that are not linked to a specific implementation nor to a specific paradigm, but are able to modify the whole evolutionary process. The study would start by examining why and how the divergence of character works in nature, and then try to find analogies in the artificial environment. Such ideas would be tested on reduced environments and toy problems, and later generalized to wider scenarios, broadening their applicability.

The second objective is the study of diversity and EAs in the context of Turing-complete program generation, that is, the generation of real assembly-level code that takes advantage of the full instruction set and characteristics of the target CPU. The generation of programs is a challenging problem with a complex search-space landscape having multiple local optima. Therefore, diversity promotion schemes appear to be a promising research direction, as they could significantly improve the performance of EAs. This part of the project shall also include the study of program representation and the choice of suitable genetic operators, as these factors can have a dramatic effect on the quality of the results.

This will be achieved through both theoretical algorithm analysis and computational experiments. An extensive experimental study of the impact of EA parameters on the population dynamics across various global optimization problems is planned. This experimental study should lay a foundation for the development of effective EA variants that alleviate the “premature convergence” problem. The emphasis shall be put on the development of general-purpose diversity promotion schemes applicable to a wide range of optimization problems.
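
One of the classic diversity preservation techniques that such an experimental study would cover is fitness sharing, where an individual's fitness is divided by its niche count so that crowded regions of the search space are penalized. The following is an illustrative sketch on a one-dimensional toy problem (the population and fitness function are invented, not from MicroGP):

```python
def shared_fitness(population, fitness, distance, sigma=1.0, alpha=1.0):
    """Classic fitness sharing: divide each individual's raw fitness by
    its niche count, penalizing individuals in crowded regions of the
    search space and thus promoting diversity (illustrative sketch)."""
    shared = []
    for ind in population:
        niche = 0.0
        for other in population:
            d = distance(ind, other)
            if d < sigma:  # triangular sharing kernel within radius sigma
                niche += 1.0 - (d / sigma) ** alpha
        shared.append(fitness(ind) / niche)
    return shared

# Toy 1-D problem: three individuals crowd x=0, one sits apart at x=3.
pop = [0.0, 0.05, 0.1, 3.0]
raw = lambda x: 10.0 - abs(x)          # all four have similar raw fitness
dist = lambda a, b: abs(a - b)
print(shared_fitness(pop, raw, dist))
# The isolated individual keeps its full fitness (7.0), while the three
# crowded ones see theirs cut to roughly a third, despite higher raw values.
```

The study envisioned here would compare such phenotype/genotype-level schemes (sharing, crowding, clearing, islands) under a common experimental protocol.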

As “premature convergence” is probably the single most impairing problem in the industrial application of Evolutionary Computation, any methodology able to mitigate it would have a tremendous impact. To this end, the proposed line of research is generic and deliberately unfocused, so as not to limit the applicability of the solutions.

Moreover, the work on Turing-complete program generation is synergistic with the CAD group's work on SBST (software-based self-test) and with the tool called “MicroGP”, developed by the same G. Squillero.
Outline of work plan: The first phase of the project shall consist of an extensive experimental study of existing diversity preservation methods across various global optimization problems. MicroGP v3, a C++ general-purpose EA developed in house, will be used to study the influence of various methodologies and modifications on the population dynamics. Solutions that do not require the analysis of the internal structure of the individual (e.g., Cellular EAs, Deterministic Crowding, Hierarchical Fair Competition, Island Models, and Segregation) shall be considered. This study should allow the development of a possibly new, effective methodology able to generalize and coalesce most of the cited techniques.

During the first year, the candidate will take a course in Artificial Intelligence, and all Ph.D. courses of the educational path on Data Science. Additionally, the candidate is required to learn the Python language.

Starting from the second year, the research activity shall include Turing-complete program generation. The candidate will move to MicroGP v4, the new Python version of the toolkit, currently under development. That would also ease the comparison with existing state-of-the-art toolkits, such as inspyred and DEAP. The candidate will try to replicate the work of the first year on much more difficult genotype-level methodologies, such as Clearing, Diversifiers, Fitness Sharing, Restricted Tournament Selection, Sequential Niching, Standard Crowding, the Tarpeian Method, and Two-level Diversity Selection.

At some point, probably toward the end of the second year, the new methodologies will be integrated into the Grammatical Evolution framework developed at the Machine Learning Lab of the University of Trieste: GE allows a sharp distinction between phenotype, genotype and fitness, creating an unprecedented test bench, and G. Squillero is already collaborating with prof. E. Medvet on these topics. The final goal of this research will be to link phenotype-level methodologies to genotype measures.

Finally, the research would almost certainly include studying the influence of parameters, representation, and the choice of ad-hoc operators.
Expected target publications: Journals with impact factors
• ASOC - Applied Soft Computing
• ECJ - Evolutionary Computation Journal
• GPem - Genetic Programming and Evolvable Machines
• IEEE Transactions On Systems, Man, And Cybernetics
• Informatics and Computer Science Intelligent Systems Applications
• IS - Information Sciences
• JETTA - Journal of Electronic Testing
• NC - Natural Computing
• TCIAIG - IEEE Transactions on Computational Intelligence and AI in Games
• TEC - IEEE Transactions on Evolutionary Computation
• The Official Journal of the World Federation on Soft Computing (WFSC)
Top conferences
• PPSN - Parallel Problem Solving From Nature
• CEC/WCCI - World Congress on Computational Intelligence
• GECCO - Genetic and Evolutionary Computation Conference
• ITC - International Test Conference
Current funded projects of the proposer related to the proposal: The proposer is a member of the Management Committee of COST Action CA15140: Improving Applicability of Nature-Inspired Optimisation by Joining Theory and Practice
Possibly involved industries/companies:The CAD Group has a long record of successful applications of evolutionary algorithms in several different domains. For instance, the on-going collaboration with STMicroelectronics on test and validation of programmable devices exploits evolutionary algorithms and would definitely benefit from the research. Squillero is also in contact with industries that actively consider exploiting evolutionary machine learning for enhancing their biological models, for instance, KRD (Czech Republic), Teregroup (Italy), and BioVal Process (France).

Title: Security and Privacy for Big Data
Proposer: Melek Önen, Antonio Lioy
Group website:
Summary of the proposal: Recent technology developments enable millions of people to collect and share data on a massive scale. Such data allow relevant information about people to be derived through advanced analytics such as statistical analysis or machine learning. The analytical findings can help companies improve their customer services, or hospitals identify patterns and come up with early treatments. Unfortunately, this new data collection paradigm raises serious privacy concerns, mainly because of the high sensitivity of the collected data. The goal of the PhD is to design and evaluate privacy enhancing technologies that would allow processing over encrypted data and hence help companies comply with the upcoming GDPR (General Data Protection Regulation).
Research objectives and methods: While data encryption techniques would help ensure data privacy, traditional symmetric encryption mechanisms unfortunately fail to support search or other advanced analytics, such as machine learning, over the collected data. Despite some very promising recent advancements in fully homomorphic encryption, which enables arbitrary operations over encrypted data, this technology is not yet ready to be applied as a general-purpose solution to the problem of privacy-preserving data analytics. The goal of the PhD is to design and evaluate customized privacy-preserving and security primitives that will, on the one hand, protect the confidentiality of the data and, on the other hand, enable data centres to perform data mining or machine learning over the encrypted data.
Outline of work plan: To this end, during the first year of his/her PhD, the successful candidate will study the privacy and security challenges associated with Big Data applications leveraging different data mining techniques, such as statistical data analysis and/or machine learning.
During the second year, the PhD candidate will investigate privacy-preserving variants of some specific data mining techniques while leveraging more practical homomorphic encryption solutions that are not fully homomorphic but can support some operations (such as addition only). These operations will be tailored to improve the efficiency of the underlying primitives without sacrificing their accuracy. Another possible approach is secure multiparty computation (SMC), which can be used to protect data both at the collection and at the processing phase. Parties involved in the SMC protocol (such as different hospitals) will be able to perform collaborative machine learning over the entire dataset and retrieve the desired results without revealing any information on their respective datasets (e.g. patients' health records). The newly designed solutions should be evaluated in terms of both privacy and accuracy.
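The additively homomorphic encryption mentioned above can be illustrated with the Paillier cryptosystem, whose ciphertext product decrypts to the plaintext sum. The sketch below uses tiny, thoroughly insecure parameters purely to show the homomorphic property; a real system would use 2048-bit moduli via a vetted library:

```python
from math import gcd, lcm

# Toy Paillier cryptosystem (insecure parameters, illustration only).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1                      # standard choice of generator
lam = lcm(p - 1, q - 1)        # private key component
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)  # private key component

def encrypt(m, r):
    """Encrypt message m with randomizer r (r must be coprime with n)."""
    assert 0 <= m < n and gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20, 5), encrypt(35, 7)
# Multiplying ciphertexts adds the plaintexts: the data centre can sum
# values it cannot read.
print(decrypt((c1 * c2) % n2))  # 55
```

Research along the lines of this proposal would build aggregation, statistics, or machine learning primitives on top of exactly this kind of additive property.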
Finally, during the last year of his/her studies, the PhD candidate will focus on the multi-user case, where data come from distinct independent sources and, during the analysis of the data, these sources only discover the output of the data mining algorithm and do not learn any information about each other's data.
The results should be new privacy-preserving data mining systems based on improved homomorphic encryption techniques and/or secure multiparty computation schemes.
Expected target publications: ACM Computer and Communications Security, IEEE Security and Privacy, European Symposium on Research in Computer Security, IEEE Trans. on Dependable and Secure Computing, IEEE/ACM Trans. on Networking, Int. J. of Information Security
Current funded projects of the proposer related to the proposal: EU H2020 projects (at EURECOM): TREDISEC, CLARUS
Possibly involved industries/companies:The candidate is expected to spend his/her time split between EURECOM (mostly) and POLITO.

Title: Distributed software methods and platforms for modelling and co-simulation of multi-energy systems
Proposer: Andrea Acquaviva
Group website:
Summary of the proposal: The emerging concept of smart energy societies and cities is strictly connected to heterogeneous and interlinked aspects, from energy systems to cyber-infrastructures and active prosumers. One of the key objectives of the Energy Center Lab is to develop instruments for planning current and future energy systems, accounting for the complexity of the various interplaying layers (physical devices for energy generation and distribution, communication infrastructures, ICT tools, market and economics, social). The EC tackles this issue by aiming at building a virtual model made of interactive, interoperable blocks. These blocks must be designed and developed in the form of a multi-layer distributed infrastructure that exploits modern design patterns (e.g., the microservice approach). Systems realizing partial aspects of this infrastructure have recently been developed in the context of European research projects, such as energy management of district heating systems [1], smart-grid simulation [2], thermal building simulation [3], and renewable energy source planning [4-5]. However, a comprehensive and flexible solution for planning and simulating future smart energy cities and societies is still missing. The research program aims at developing the backbone software infrastructure allowing real/virtual models of energy production systems, energy networks (electricity, heat, gas), communication networks and prosumers to be interfaced and made interoperable.
Research objectives and methods: This research aims at developing a novel distributed software infrastructure to model and co-simulate different Multi-Energy Systems and general-purpose scenarios by combining different technologies (both hardware and software) in a plug-and-play fashion and analysing heterogeneous information, often in (near) real-time. The final purpose is to simulate the impact of future energy systems. Thus, the resulting infrastructure will integrate in a distributed environment heterogeneous i) data sources; ii) cyber-physical systems, i.e. Internet-of-Things devices, to retrieve/send information in (near) real-time; iii) models of energy systems; iv) real-time simulators; v) third-party services providing (near) real-time data, such as meteorological information. This infrastructure will follow modern software design patterns (e.g. microservices), and each component will adopt novel communication paradigms, such as publish/subscribe. This will ease the integration of “modules” and the links between them to create holistic simulation scenarios. The infrastructure will also enable both Hardware-in-the-Loop and Software-in-the-Loop real-time simulations. Furthermore, the solution should be able to scale the simulation from micro-scale (e.g. dwellings, buildings) up to macro-scale (e.g. urban or regional scale) and span different time scales, from micro-seconds up to years. In a nutshell, the platform will offer simulation as a service that can be used by different stakeholders to build and analyse new energy scenarios for short- and long-term planning activities and for testing and managing the operational status of smart energy systems.

[1] Brundu et al. IoT software infrastructure for Energy Management and Simulation in Smart Cities. In: IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS vol. 13 n. 2, pp. 832-840. - ISSN 1551-3203
[2] Bottaccioli et al. A Flexible Distributed Infrastructure for Real-Time Co-Simulations in Smart Grids. In: IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS - ISSN 1551-3203 (In Press)
[3] Bottaccioli et al. Building energy modelling and monitoring by integration of IoT devices and Building Information Models. In: 41st IEEE Annual Computer Software and Applications Conference (COMPSAC 2017), Torino, Italy, 4-8 July 2017. pp. 914-922
[4] Bottaccioli et al. GIS-based Software Infrastructure to Model PV Generation in Fine-grained Spatio-temporal Domain. In: IEEE SYSTEMS JOURNAL (In press)
[5] Bottaccioli et al. PVInGrid: A Distributed Infrastructure for evaluating the integration of Photovoltaic systems in Smart Grid. In: 8th Advanced Doctoral Conference on Computing, Electrical and Industrial Systems (DoCEIS 2017), Caprica (Lisbon), Portugal, 03-05 May 2017. pp. 316-324
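
The publish/subscribe coordination of simulation modules described above can be sketched with a minimal in-process bus. The module names, topics, and values below are invented for illustration; a real deployment would place a message broker (e.g. an MQTT broker) between distributed microservices rather than direct callbacks:

```python
from collections import defaultdict

class Bus:
    """Minimal in-process publish/subscribe bus (illustrative stand-in
    for a message broker between co-simulation microservices)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

bus = Bus()
district_load = []

# A toy "PV generation" module and a toy "demand" module publish power
# values; a "grid balancing" module subscribes to both and accumulates
# the net load, without any module knowing about the others directly.
bus.subscribe("energy/pv", lambda kw: district_load.append(-kw))
bus.subscribe("energy/demand", lambda kw: district_load.append(kw))

for step in range(3):                      # three simulation steps
    bus.publish("energy/pv", 2.0 + step)   # kW generated (2, 3, 4)
    bus.publish("energy/demand", 5.0)      # kW consumed each step

print(sum(district_load))  # net demand over the horizon: 6.0 kW
```

The decoupling shown here is what allows new energy vectors or physical devices to be plugged into the co-simulation without modifying the existing modules.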
Outline of work plan: The work plan is structured into three main phases, requiring a gradual understanding of the platforms, programming models, and methods.

First year

The PhD candidate should:
- Familiarize with service-oriented and microservice-based architectures
- Learn the main platforms and tools used for energy grid simulation, including dedicated hardware devices (e.g. real-time simulators)
- Learn the energy production/storage/distribution scenario
- Learn the main factors and actors in a realistic energy scenario and their interactions
- Learn data formats and standards used in energy management and control systems at grid level

At the end of the first year, the candidate is expected to have published and presented at least one international conference proceeding paper.

Second year

The PhD candidate should:
- Deepen the understanding of the software simulation infrastructure and realize a first version of the simulation model considering one energy vector (e.g. starting with the electricity grid)
- Extend the first version horizontally by interoperating virtual (simulated) and physical devices (e.g. smart meters)
- Create the interfaces to extend the platform horizontally to other energy vectors and vertically to higher layers simulating energy management policies (aggregation, balancing, storage) and energy market policies

At the end of the second year, the candidate is expected to have published and presented at least two papers at international conferences and to have submitted at least one paper to an international peer-reviewed journal.

Third year

The PhD candidate should:
- Finalize and consolidate the platform
- Complete the extension towards additional energy vectors (e.g. thermal)
- Complete the extension including energy policy management algorithms
- Evaluate and validate the results against the defined metrics

At the end of the third year, the candidate is expected to have published at least two papers in international peer-reviewed journals.

During the PhD period, the candidate must spend 5-6 months abroad in a well-reputed research center or academic institution, and attend PhD courses and seminars.
Expected target publications: ACM Transactions on cyber-physical systems
IEEE transactions on parallel and distributed systems
IEEE Transactions on industrial informatics
IEEE Transactions on smart grid
IEEE systems journal
IEEE IoT journal
Current funded projects of the proposer related to the proposal: FLEXMETER, CLUSTER-EEB
Possibly involved industries/companies: