Research proposals

PhD selection is performed twice a year through an open process. The next call for admission to the XXXIV cycle will be published in winter. For more information, please visit the website of the Doctoral School.

Grants from Politecnico di Torino and other bodies are available. Details can be found on the website of the Doctoral School and on the pages dedicated to the call.

In the following, we report a list of possible topics for new PhD students in Computer and Control Engineering. This list will be continuously updated. If you are interested in one of the topics, please contact the related Proposer.

For general information, you can also refer to phd-dauin@polito.it.

- Internet-of-Things Technologies for the Factory of the Future (FoF)
- Context-Aware Power Management of Mobile Devices
- Industry 4.0
- Key management techniques in Wireless Sensor Networks
- Body Area Network for Ubiquitous Health Monitoring
- Low-Latency Multimedia Communication Optimization for the Web Environment
- Real-time scheduling of automotive embedded software for multi-core computing platforms
- Advanced User Interaction in Ambient Intelligence Systems
- Summarization of heterogeneous data
- Neural Networks for speaker recognition
- Energy aware software
- Functional Model-Driven Networking
- Cross-layer Lifetime Adaptable Systems
- Synchro-Modal Supply Chain in Logistic Optimization
- Formal verification of next generation computer networks
- Security of IoT systems
- ICT Tools and methods for Lean Business and Innovation Management
- Video Quality Evaluation on Large Scale Datasets
- Transparent Algorithms for data analytics
- Geospatial data analytics
- Mining heterogeneous data collections from complex application domains
- Pattern recognition and artificial intelligence techniques for bioimaging applic...
- Virtual Design and Validation of Standard Cells Technology for High Reliability ...
- Algorithms and methods for knowledge extraction from IoT data streams in Smart C...
- Development of innovative methodologies for DNA/RNA sequences analysis exploitin...
- Bilevel stochastic optimization problems for Urban Mobility and City Logistics
- Image processing for machine vision applications
- New methods for set-membership system identification and data-driven robust cont...
- Methodologies, HW equipment and SW Instruments for the characterization of Nanot...
- New strategies and methodologies for System Level Test (SLT)
- Effective design and validation techniques for high-reliability and secure appli...
- New Techniques for on-line fault detection
- Effective techniques for secure and reliable system validation
- Inside Big Data: diving deep into massive datasets
- Reliability in Power Electronics
- Software-to-Silicon Solutions for Near-Sensor Data-Analytics in the IoT


Detailed descriptions

Title: Internet-of-Things Technologies for the Factory of the Future (FoF)
Proposer: Enrico Macii
Group website: eda.polito.it
Summary of the proposal: The collective term "Industry 4.0", used to indicate the convergence of automation, ICT, and manufacturing technology, has become a buzzword in the context of IoT technologies; yet many of its ambitions have clashed with the reality of the existing infrastructure in the vast majority of manufacturing enterprises, which is not ready to support the digital transformation required by the new paradigm.

Therefore, in spite of the vast technological opportunities offered by hardware (sensors and similar devices), software (protocols and interfaces), and network interfaces, their integration into existing environments is the real challenge that must be solved in order to bring such technologies into practice. The keywords here are thus interoperability, modularity and flexibility.

This project aims at providing a solution to these challenges in terms of an integrated HW (sensorized elements) and SW (middleware) platform that allows, on the one hand, augmenting the degree of feedback provided by the sensorized objects and, on the other hand, accessing this information through a standard interface.
Research objectives and methods: The overall objective of the project is the creation of an ICT platform for the integrated, dynamic and autonomous management of highly automated manufacturing operations, with the goal of optimizing resources, i.e., operators, materials, and production systems.

The platform will consist of HW and SW modules.
The HW will consist of:
• Wearable sensors/devices (such as smart gloves or wrist bands).
• Devices for the indoor localization of operators or of systems for the transportation of parts/materials
• Devices for the identification of human gestures and posture
• Devices for the identification of operators (fingerprints, iris, heartbeat)
The actual type of sensors will depend on the use case.
The SW will consist of:
• Modules for wireless communication for real-time data collection
• Various data management modules (e.g., data cleaning, sensor fusion, etc)
• Machine learning modules that can analyze the sensed scenario and provide feedback to the operator/manager
• Dashboard and control center software to be integrated with legacy production and logistics information systems
• Mobile applications for accessing this information, to be used on the wearable devices.

HW and SW modules will be integrated through a common reference architecture that can possibly leverage a cloud-based implementation (although it will not be strictly needed) of the data acquisition system and of the aggregation and dynamic storage of the data streams.
Key to this objective will be a central middleware that decouples the device-specific information (e.g., drivers) from the interfaces, which will be based on webservices in order to allow easy deployment of the basic functionalities on mobile devices (to be used by operators but also by production managers).
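As an illustration of the decoupling idea, the following minimal sketch (all class and device names are hypothetical, not part of the proposal) shows a driver registry that normalizes device-specific readings into a uniform payload, which a web-service layer could then serve unchanged to mobile clients:

```python
# Minimal sketch: a middleware registry that decouples device-specific
# drivers from a uniform read interface. All names are illustrative.
from abc import ABC, abstractmethod
from typing import Any, Dict


class DeviceDriver(ABC):
    """Device-specific logic lives behind this interface."""

    @abstractmethod
    def read(self) -> Dict[str, Any]:
        """Return a reading in a device-neutral dictionary format."""


class SmartGloveDriver(DeviceDriver):
    def read(self) -> Dict[str, Any]:
        # A real driver would talk to the wearable over its native protocol.
        return {"type": "smart_glove", "grip_force_n": 12.4}


class IndoorTagDriver(DeviceDriver):
    def read(self) -> Dict[str, Any]:
        return {"type": "indoor_tag", "zone": "assembly_line_3"}


class Middleware:
    """Keeps the web-service layer independent of the attached hardware."""

    def __init__(self) -> None:
        self._drivers: Dict[str, DeviceDriver] = {}

    def register(self, device_id: str, driver: DeviceDriver) -> None:
        self._drivers[device_id] = driver

    def snapshot(self) -> Dict[str, Dict[str, Any]]:
        # This uniform payload is what a webservice endpoint would serve.
        return {dev_id: drv.read() for dev_id, drv in self._drivers.items()}


mw = Middleware()
mw.register("glove-01", SmartGloveDriver())
mw.register("tag-17", IndoorTagDriver())
print(mw.snapshot())
```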

The platform will be tested in specific industrial testcases in two different use scenarios with distinct targets:
1) HMI: optimization of the availability of production data at various levels of responsibility (operators, team leaders, etc.) on the production line.
2) Smart Maintenance: optimization of programmed maintenance events by reducing downtimes.

The candidate should have a good mix of expertise, ranging from C++ programming to board programming, network interfaces, and mobile applications.
Outline of work plan: PHASE I (months 1-9):
• State of the art analysis
• Analysis of the IoT ecosystem (market, technology and research evolutions and trends)
• Detailed study on sensor networks infrastructure, intercommunication techniques and protocols, sensor integration on smart applications
• Analysis of the target sensors and devices

PHASE II (months 9-18):
• Simulation-based prototyping of the target use cases/scenarios
• Gain familiarity with embedded processing, advanced microcontroller architectures, firmware development, embedded OS and real time OS.
• Gain familiarity with webservices and the interfaces to IoT protocols
PHASE III (months 18-36):
• Integration of components into the simulation platform
• Two-phase integration in the described use scenarios and simulation-based validation
• Contribution to ongoing projects and research activities (supporting tasks, deliverables, patents and publications)
Expected target publications: • IEEE Transactions on Computers
• IEEE Transactions on Consumer Electronics
• IEEE Transactions on Instrumentation and Measurement
• IEEE Transactions on Smart Grids
• IEEE Design & Test of Computers
• IEEE Circuits and Systems Magazine
Current funded projects of the proposer related to the proposal: All ongoing and incoming projects related to smart system integration and advanced environmental monitoring (DIMMER www.energy.manchester.ac.uk/research/multi-energy-systems/dimmer/), modeling and simulation of extra-functional properties on mixed-criticality systems (CONTREX, contrex.offis.de), and IoT technologies applied to manufacturing (the project with FCA and the possible Regional project on Smart Factory) will benefit from this research project. In addition, they can contribute to defining and building real application scenarios, helping to measure the benefits of the proposed approaches and to drive the research activities.
Possibly involved industries/companies:Reply, FCA, STMicroelectronics S.R.L, ST-PoliTO S.C.aR.L.

Title: Context-Aware Power Management of Mobile Devices
Proposer: Enrico Macii
Group website: eda.polito.it
Summary of the proposal: In spite of technological advances in circuit technologies (in terms of very low-power devices) and, though to a lesser extent, in battery technologies, modern mobile devices suffer from an evident shortage of available energy. Fancy GUIs and frequent wireless communications of many types, for almost any type of application, result in an increase of power and energy demand that exceeds what current batteries can provide.
However, modern devices such as smartphones offer unprecedented opportunities to improve energy consumption by adapting to the user behavior, thanks to the numerous types of sensors they host. The latter can easily be leveraged to extract information about the user behavior and tune performance and power accordingly.
While context detection in mobile devices is a widely studied topic in the literature (e.g., for prediction of mobility, user patterns, etc.), it has been only occasionally used to optimize the power management of those devices.
The objective of this research is precisely that of extending current power management strategies to incorporate knowledge of the context of the device usage.
Research objectives and methods: The grand objective of the project is the development of an OS-based power management strategy for mobile devices that incorporates information about the context in which the device is used, derived from the analysis of the various available sensors.

By context we mean specifically the “situation” of the mobile device and the user. Examples are the user being indoor vs. outdoor, running vs. walking vs. standing still, or the device being held in a pocket vs. being actively used.
As a matter of fact, most mobile apps are context-aware: they ask permission to access various information on the phone, some of which involves the usage or geo-location of the phone.

Sensors like the accelerometer, camera, GPS and, when available, compass and temperature sensors can all be used to infer different (device and user) contexts, which can be leveraged to optimize the usage of battery power/energy. As a simple example, a device held in a pocket (e.g., sensed by a mix of accelerometer and temperature readings) but in use (e.g., as a music player) can drastically reduce power consumption by automatically turning off services such as navigation or putting the display into a deep sleep state.
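To make the idea concrete, here is a minimal sketch of such a rule-based detector and policy table; the sensor fields, thresholds, and policies are illustrative assumptions, standing in for the learned detection engine described below:

```python
# Minimal sketch: map raw sensor readings to a context label, then map
# each context to power actions. All thresholds/values are invented.
from dataclasses import dataclass


@dataclass
class SensorSample:
    accel_magnitude: float   # m/s^2, deviation from gravity
    proximity_covered: bool  # proximity sensor reports the device is covered
    ambient_light_lux: float


def detect_context(s: SensorSample) -> str:
    # In the project this engine would be tuned/learned from sensor traces;
    # here fixed thresholds stand in for that classifier.
    if s.proximity_covered and s.ambient_light_lux < 5.0:
        return "in_pocket"
    if s.accel_magnitude > 6.0:
        return "running"
    if s.accel_magnitude > 1.5:
        return "walking"
    return "still_in_use"


POLICIES = {
    # context -> (display state, GPS duty cycle in %)
    "in_pocket": ("deep_sleep", 0),
    "running": ("dim", 10),
    "walking": ("dim", 25),
    "still_in_use": ("on", 100),
}

sample = SensorSample(accel_magnitude=0.3, proximity_covered=True,
                      ambient_light_lux=1.0)
ctx = detect_context(sample)
display, gps_duty = POLICIES[ctx]
print(f"context={ctx} -> display={display}, gps_duty={gps_duty}%")
```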

The project will consist of two main tasks, partially overlapping in time:
1. Context & policy definition: this is a conceptual task that aims at defining the largest possible set of context classes and at associating power management rules to each of them and to the corresponding transitions. The various contexts should be extracted automatically by an engine that processes all the data collected by the various sensors and identifies a specific context.
Once the detection of the context is done, specific policies for each context will be defined.
2. Implementation: while the previous phase can be evaluated offline by using sensor traces via a simulation model, the nature of modern mobile devices allows an easy deployment in a real context. Therefore, implementation on actual mobile devices will be essential to prove the benefits of the proposed method. We currently envision implementing the power management policies in the power manager included in the mobile OS, although this choice will be confirmed after the first phase of the project, following further analysis of the OS kernel and its accessibility; alternatively, an application-level implementation will be sought.

A natural feedback loop between 1 and 2 is expected in order to tune context detection and the related policies to actual data.

The candidate should have some expertise in C++ and Android programming, operating systems, and sensor interfacing (protocols and APIs).
Outline of work plan: PHASE I (months 1-9):
• State of the art analysis
• Overview of typical sensors, their interfaces and the data made available to the OS
• Analysis of the main OS APIs and hooks to access battery and sensor information and to implement power management decisions.

PHASE II (months 9-18):
• Context and policy definition: static analysis
• Context and policy definition: off-line validation on sensor data

PHASE III (months 18-36):
• Implementation, either statically in the OS or as middleware (depending on the results of Phase I)
• On-line validation of the policies and possible tuning of the context detection and policies
Expected target publications: • IEEE Transactions on Computers
• IEEE Transactions on Mobile Computing
• IEEE Transactions on CAD
• IEEE Design & Test of Computers
• ACM Transactions on Embedded Computing Systems
Current funded projects of the proposer related to the proposal: Although there is currently no funded project ongoing on the specific subject addressed in the proposal, activities will leverage experience and part of the results of related projects, namely those using sensors to improve the energy efficiency of buildings (FP7 DIMMER, FP7 TRIBUTE), as well as the experience matured in projects on Smart Systems (FP7 SMAC) for what concerns sensor interfacing and technologies.
Possibly involved industries/companies:STMicroelectronics S.R.L, ST-PoliTO S.C.aR.L.

Title: Industry 4.0
Proposer: Claudio Demartini
Group website: http://grains.polito.it/
Summary of the proposal: Lately, the term “Industry 4.0” has been coined to refer to the most recent, just started, fourth industrial revolution, which depicts factories as intelligent environments whose constituent elements (e.g., machines, storage systems, etc.) are seen as smart components, able to communicate with each other and to take decisions autonomously. Benefits for the industrial world are various, ranging from a more flexible adaptation of the production chain, to the continuous monitoring of products’ and machines’ state, to more natural human-machine interaction paradigms.
This Ph.D. aims at improving the state of the art of research on Industry 4.0 by designing and developing new solutions for this field. Activities will be carried out in collaboration with the Istituto Superiore Mario Boella and could include, among others, the development of intelligent systems for production/maintenance/logistics, the definition of new human-machine interaction modalities, the design of systems for the (visual) analysis of Big Data concerning production lines, customers, etc., the integration of smart/mobile devices with industrial plants and processes, and the investigation of how new production technologies, such as additive manufacturing, could be exploited and included in the production system.
Research objectives and methods: The Ph.D. candidate will first deepen his/her knowledge of technologies related to the Internet of Things, Internet of Services, Cyber-Physical Systems, Big Data, Human-Computer and Human-Machine Interaction, Computer Vision, Visual Analytics, Natural User Interfaces, Augmented Reality, Semantic Web and sensors, and will then exploit them to devise and develop intelligent systems that could be used in different industry areas:
- Production: exploiting the Internet of Things and Internet of Services to devise systems able to a) locate materials and parts in real time, anywhere and anytime, enabling better product traceability; b) monitor current stock levels; c) autonomously adapt the production line to customers’ needs;
- Maintenance: a) relying on classification and learning techniques to enable predictive maintenance (see the sketch below); b) combining Computer Vision techniques and Natural User Interfaces for designing Augmented Reality maintenance systems;
- Management: a) supporting decision making by exploiting Visual Analytics techniques for processing production- and customer-related Big Data; b) easing hiring processes by automatically matching semantically described job requirements with available candidates;
- Logistics: by exploiting autonomous navigation techniques in order to coordinate driverless vehicles or drones to deliver materials inside the factory building, or directly to customers;
- Supply chain: by devising an integrated architecture allowing a) the automatic identification of suppliers for a good, e.g. based on its bill of materials; b) the exploitation of data extracted from client companies’ production processes in order to improve forecasting;
- Product design/marketing: by relying on semantic techniques to analyze the voice of customers (for example, as collected from social media);
- Ergonomics: relying on Natural User Interfaces for devising new interaction modalities with machines.
The research activity will focus on one or several of the above aspects/areas, on the basis of the research and commercial scenarios that will arise during the Ph.D.
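As an example of the predictive-maintenance item above, the following sketch trains a classifier on machine-health features; the features, label rule, and data are invented purely for illustration and would be replaced by real plant data:

```python
# Minimal sketch, assuming vibration/temperature features are already
# extracted per machine: a classifier predicting imminent failure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in: [vibration_rms, bearing_temp_C, hours_since_service]
X = rng.normal(loc=[1.0, 60.0, 500.0], scale=[0.3, 8.0, 200.0], size=(400, 3))
# Hypothetical label rule: high vibration plus high temperature precede
# failures; real labels would come from maintenance logs.
y = ((X[:, 0] > 1.2) & (X[:, 1] > 65.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```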
Outline of work plan: Phase 1 (1st year): acquisition of knowledge, skills and competences related to the technologies involved in the Ph.D. Literature review and analysis of existing applications related to the exploitation in the industrial domain of the following technologies: Internet of Things, Internet of Services, Cyber-Physical Systems, Big Data, Human-Computer and Human-Machine Interaction, Computer Vision, Visual Analytics, Natural User Interfaces, Augmented Reality, Semantic Web and sensors. The outcome of this phase could be a survey published in an international journal. Based on the above analysis (and possibly on requirements expressed by the involved companies), selection of the most relevant industry areas/aspects/issues to be addressed and identification of state-of-the-art approaches.

Phase 2 (2nd year): development of a prototype system aimed at improving existing state-of-the-art approaches and/or industrial processes. This phase will start with an analysis of system requirements (possibly carried out with the collaboration of the involved companies), with the identification of the involved production assets and/or available data and/or actors, and with the analysis of existing processes, and will result in the definition of the architecture of the prototype and in its development. During this second year a (journal or conference) paper will be written.

Phase 3 (3rd year): testing of the prototype. This phase will be devoted to evaluating whether the developed prototype meets the requirements identified in the second phase, and to rating the extent of the improvements with respect to the state of the art. As in the previous phases, testing could possibly be carried out with the involved companies. Part of the year will be devoted to writing a (conference or journal) paper and to the composition of the thesis.
Expected target publications: IEEE Transactions on Industrial Informatics
IEEE Transactions on Emerging Topics in Computing
IEEE Transactions on Knowledge and Data Engineering
IEEE Computer Graphics and Applications
IEEE Internet of Things
IEEE Intelligent Systems Magazine
IEEE IT Professional Magazine
IEEE Industrial Electronics Magazine
Industrial Management & Data Systems Journal
Computers in Industry Journal
IEEE International Conference on Industrial Engineering and Engineering Management
IEEE International Conference on Industrial Informatics
IEEE International Conference on Emerging Technology and Factory Automation
IEEE International Conference on Automation and Computing
European Conference on Smart Objects, Systems and Technologies
Current funded projects of the proposer related to the proposal: - STI3MA “Scienza, Tecnologia, Informatica e Impresa Internazionale: nuovi Modelli di Apprendimento” project;
- Industrial Cloud project http://industrial-cloud.com, in the frame of the agreements among Politecnico di Torino, Bauman Moscow Technical University (Russia) and Polytechnic of Nizhny Novgorod (Russia);
- SYNCHRO-NET “Synchro-modal Supply Chain Eco-Net” project http://cordis.europa.eu/project/rcn/193399_en.html
Possibly involved industries/companies:- Istituto Superiore Mario Boella
- Comau
- MESAP - Meccatronica e Sistemi Avanzati di Produzione
- Reply
- Unione Industriale Torino
- Confindustria
- Festo
- DHL

Title: Key management techniques in Wireless Sensor Networks
Proposer: Filippo Gandino
Group website: http://www.cad.polito.it/
Summary of the proposal: Wireless sensor networks (WSNs) offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to achieve secure communications among nodes, many approaches employ symmetric encryption. Several key management schemes have been proposed to establish symmetric keys exploiting different techniques (e.g., random distribution of secret material and a transitory master key). Depending on the application of the WSN, the state-of-the-art protocols have different characteristics and different gaps.
The proposed research activity will be focused on the study and development of key management protocols. The proposed investigation will consider both the security requirements and the performance (e.g., power consumption, quantity of additional messages, computational effort) of the networks. The research will analyze different possible solutions, evaluating the trade-off in terms of costs and benefits, according to different possible scenarios and applications.
The proposed research involves multidisciplinary knowledge and skills (e.g., computer network and security, advanced programming).
Research objectives and methods: The objectives of the proposed activity consist in studying the state of the art of key management protocols for WSNs and in proposing new solutions suitable for various application scenarios. The requirements that affect the security protocols will be considered in order to find the best solution for different kinds of WSNs. In particular, the mobility of the nodes and the possibility to add new nodes after the first deployment will be considered.
The proposal of new solutions will start by analyzing the state-of-the-art protocols in order to find their weaknesses and to overcome them. For example, for key management schemes based on a transitory master key, which are exposed to great risks during the initialization phase (see the sketch below), a new strategy could consider reducing the time required for key establishment.
Completely new approaches will also be explored.
The proposed solutions will be evaluated and compared to state-of-the-art approaches, in order to assess their security level and their performance. The analysis of the protocols will be composed of: (a) a theoretical analysis, (b) simulations, and (c) implementation on real nodes.
The theoretical analysis will consider several aspects of the key management protocols. A part of this analysis will evaluate the statistical characteristics of the scheme, in order to analyze the level of security and other network parameters for protocols based on stochastic elements. A security analysis will compare the properties and the requirements of the protocols.
The simulations will be used to analyze the performance of the schemes with a high number of nodes. This kind of investigation allows reaching significant results within a limited time. There are many available tools that allow simulating WSNs, e.g., TOSSIM, which uses the same code written for the TinyOS platform.
The last kind of analysis will be based on an experimental session on a real network. The main platform that will be used will be TinyOS, in order to develop a code that can be used also for the simulations. The main purpose of a real implementation is to validate the results achieved by simulations.
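As a concrete reference point for the transitory-master-key family mentioned above, the following minimal sketch (identifiers and key material are illustrative) shows LEAP-style pairwise key establishment, where the master key is erased after the initialization phase whose duration the proposed research aims to reduce:

```python
# Minimal sketch of a LEAP-style transitory-master-key scheme: every node
# is preloaded with a master key K_I, derives pairwise keys with its
# neighbours during initialization, then erases K_I so that later node
# capture does not expose the rest of the network.
import hmac
import hashlib


def derive_node_key(master_key: bytes, node_id: bytes) -> bytes:
    return hmac.new(master_key, node_id, hashlib.sha256).digest()


def pairwise_key(master_key: bytes, id_a: bytes, id_b: bytes) -> bytes:
    # K_ab = HMAC(K_b, id_a): node A computes it while K_I is still in
    # memory; node B recomputes it from its own key K_b after K_I is erased.
    k_b = derive_node_key(master_key, id_b)
    return hmac.new(k_b, id_a, hashlib.sha256).digest()


K_I = b"transitory-master-key-preloaded"
k_ab_at_A = pairwise_key(K_I, b"node-A", b"node-B")
# Node B keeps only K_b; the master key can now be deleted on both sides.
k_b = derive_node_key(K_I, b"node-B")
k_ab_at_B = hmac.new(k_b, b"node-A", hashlib.sha256).digest()
assert k_ab_at_A == k_ab_at_B
print("pairwise key established:", k_ab_at_A.hex()[:16], "...")
```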
In order to work on this research topic, a candidate should have good programming skills and a good knowledge of networking and security.
Outline of work plan: During the first part of the first year, the Ph.D. student will investigate the field of the research activity. The candidate will study the existing key management protocols and will improve his/her knowledge of the related topics in order to fill any gaps. This phase will last 3 months. During the rest of the first year, the candidate will start to design and develop a modified version of the studied protocols. The preliminary results will be submitted to international conferences.
During the second year, the activity will be divided into two main tasks. The first task will consist in the accurate analysis and evaluation of the proposed ideas and in the possible modification of those approaches. This task will include a programming activity, in order to also implement the main state-of-the-art protocols and to execute a comparative analysis. The results of the advanced analysis of the ideas proposed during the previous year will be submitted to an international journal. The second task will consist in the design of new approaches to key management. The new proposals will be based on the knowledge achieved during the research activity.
During the last year, the candidate will finalize his/her work in order to conclude the analysis and to publish the results, considering different metrics and scenarios.
During all three years the candidate will have the opportunity to cooperate in the development and implementation of key management solutions applied to research projects. These tasks will allow the candidate to understand the real scenarios in which WSNs are used and to find the actual issues in such applications.
Expected target publications: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
WIRELESS NETWORKS (Springer)
JOURNAL OF NETWORK AND COMPUTER APPLICATIONS (Elsevier)
Current funded projects of the proposer related to the proposal: OPLON: funded by MIUR (Smart Cities and Communities and Social Innovation)
FDM - Food Digital Monitoring: funded by Regione Piemonte (PIATTAFORMA TECNOLOGICA “ FABBRICA INTELLIGENTE”)
Possibly involved industries/companies:Santer Reply

Title: Body Area Network for Ubiquitous Health Monitoring
Proposer: Maurizio Rebaudengo
Group website: http://www.cad.polito.it/
Summary of the proposal: A body area network (BAN) consists of a wireless network of wearable computing devices. It can be considered a wireless sensor network in which the nodes are deployed on a body. As a research field, BANs are interdisciplinary, since they involve areas related to the devices, such as networking, embedded software, and sensors, and areas related to the applications, such as medicine and biology. A health-care application may perform continuous health monitoring with real-time updates. Many physiological sensors can be integrated into a wearable wireless BAN, which can be used for rehabilitation or early detection of medical conditions. Although everybody is a potential user of a BAN, there are specific subjects that are particularly interested, such as the elderly and athletes.

The research activity will investigate new solutions for BANs. The activity will focus both on the open technological issues (e.g., interference) and on innovative applications (e.g., elder assistance and sports performance).

The proposed research involves multidisciplinary knowledge and skills (e.g., computer network, advanced programming, data mining).
Research objectives and methods: The objectives of the activity are divided into two categories: (a) technical improvements; (b) innovative applications.
The correct operation of a BAN is subject to many problems. There are interference problems due to the different devices operating in the same network and to the observed body itself. The potentially high mobility of the subject can also expose the network to high interference. Another issue is represented by the power supply, since the working life of a BAN is a strict constraint on its real applicability. The algorithms running on the devices and the communication protocols must be developed trying to minimize power consumption.
Data security is also an important issue, since a BAN manages very sensitive data. However, security protocols are often too computationally intensive for the devices that compose a BAN. A goal of the research activity will consist in improving the state of the art by proposing new solutions to these open issues.

The second goal of the research activity is to design, develop and implement innovative applications of BANs. Nowadays, the commercial applications of BANs are limited to a few activities, and the potential of BANs is far from being fully exploited. Therefore, the Ph.D. student will explore new solutions and applications.
The research activity will include both theoretical and experimental tasks. The proposed innovations will be analyzed through simulations and on real devices, in order to execute comparative analyses.

Possible indicators to be measured with a BAN are physical strength and muscle tone. For the evaluation of physical strength, the use of surface electromyography (sEMG) signals will be investigated, while specific sensors for the measurement of resistance will be used for the assessment of muscle tone. Furthermore, the effectiveness of assessing mobility through one or more pairs of accelerometers will be evaluated. Different configurations for the placement of the accelerometers (waist, knees, elbows) will be studied: the choice of the positioning points will be based upon the evaluation of the effectiveness of the collected data in determining the user’s movements. Finally, information on the person’s daily perspiration, correlated to possible diseases, may be provided by wearable temperature and humidity sensors.

The possibility of incorporating a tracker into the wearable sensors will also be evaluated; this would allow the detection of emergency situations, such as falls and unusual behavior, and their timely communication to a service center.

The research activity will also focus on the organization and analysis of the large amount of data generated by BANs, exploiting data mining algorithms to classify the detected movement patterns and to define effective algorithms to predict the user’s performance.
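As a baseline illustration of the movement-classification task, the following sketch (all data synthetic; window length and sampling rate assumed) extracts simple features from accelerometer windows and classifies them with a nearest-centroid rule:

```python
# Minimal sketch: windowed accelerometer features plus a nearest-centroid
# classifier, as a stand-in for the data mining algorithms to be studied.
import numpy as np

rng = np.random.default_rng(1)
FS = 50           # assumed sampling rate, Hz
WINDOW = 2 * FS   # 2-second windows


def features(window: np.ndarray) -> np.ndarray:
    # Mean and standard deviation of acceleration magnitude per window.
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.mean(), mag.std()])


def synth(activity_std: float, n: int = 20) -> np.ndarray:
    # Stand-in for real waist/knee/elbow accelerometer streams.
    wins = rng.normal(0.0, activity_std, size=(n, WINDOW, 3)) + [0, 0, 9.81]
    return np.array([features(w) for w in wins])


train = {"rest": synth(0.1), "walk": synth(1.0), "fall-like": synth(4.0)}
centroids = {label: f.mean(axis=0) for label, f in train.items()}


def classify(feat: np.ndarray) -> str:
    return min(centroids, key=lambda lb: np.linalg.norm(feat - centroids[lb]))


test_window = rng.normal(0.0, 1.1, size=(WINDOW, 3)) + [0, 0, 9.81]
print("predicted movement:", classify(features(test_window)))
```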
Outline of work plan: The research activity is organized in 3 main work packages (WPs). The first WP includes the design, implementation and experimentation of BAN applications. The second WP includes the research activity with the goal of improving and advancing BAN technology. The third WP consists in the organization and analysis of the big data generated by the BAN.
The two main applications that will be designed and implemented within the first WP are the assistance of elders and athletes. During the first year such applications will be studied, and a prototype BAN will be developed. During the second year, an experimentation phase will allow not only improving the BAN but also collecting a large quantity of data. In parallel, new applications will be designed and developed. During the third year the BANs for the new applications will be implemented and used in the field.
During the first year, the research activity of the second WP will include a study of the state of the art, in order to understand the open issues in BANs. During the second year, on the basis of the theoretical study and of the practical experience, new solutions to improve BANs will be proposed (e.g., security, power supply, communication). During the last year the proposed improvements will be integrated in the developed BANs.
The third WP will start in the second year. The first task will consist in the organization of a database suitable for the large quantity of data produced by a BAN. This task is not limited to the implementation of a database, but requires an accurate study of existing solutions and open issues. During the third year the collected data will be analyzed in order to derive effective prediction models.
All the described WPs will include the submission of the achieved results to international conferences and journals.
Expected target publications: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS
IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING
WIRELESS NETWORKS (Springer)
Current funded projects of the proposer related to the proposal: OPLON: funded by MIUR (Smart Cities and Communities and Social Innovation)
Possibly involved industries/companies:Santer Reply

Title: Low-Latency Multimedia Communication Optimization for the Web Environment
Proposer: Enrico Masala
Group website: http://media.polito.it
Summary of the proposal: Numerous multimedia delivery services are now available through the Internet, even within the web browser itself, without the need to install additional add-ons or plugins. Moreover, modern web browsers are highly powerful and programmable, so complex logic can be embedded in web pages that include or interact with multimedia content. Recent standards such as WebRTC, which is increasingly supported by major browsers, move in this direction, exposing the multimedia element through a web-friendly Javascript API. While many websites take advantage of such an interface to interact with multimedia content, how to best optimize its parameters for challenging communication environments is still an open research problem. For instance, in the case of low-latency and/or wireless communications, packetization and prioritization policies are key in determining the quality of experience of the communication. The PhD candidate will perform research in this area, developing strategies for optimized content encoding, packetization and delivery over the Web, with particular attention to the case in which the device is mobile with limited resources.
Research objectives and methods: Objectives: identify, develop and investigate the performance of new algorithms for video coding and transmission optimization in the context of web-based communications, e.g., with the WebRTC protocol stack. The configuration possibilities of the browser API will be investigated in detail, in terms of both multimedia coding parameters and packet transmission policies. Currently, standards either offer only a few indications or do not specify how to set these parameters; therefore, tuning such parameters and designing efficient communication protocols with limited complexity is an open research field. Current methods and algorithms, in fact, are mainly heuristic and focus on networking issues only (e.g., rate control of the communication). A promising direction is to include measures of the quality of the multimedia content (e.g., MSE, PSNR), which can be computed at the transmitter with limited computational complexity, in the optimization framework, so that performance can be improved from the point of view of the Quality of Experience (QoE) of the final user.
Methods: Develop a theoretical framework to model the whole encoding and transmission chain in order to formulate and investigate the problem from an analytical point of view, including a key factor such as multimedia quality in the framework. The resulting insight will then be validated in practical cases by means of analyzing performance of the system with simulations and real experiments. The research group has extensive expertise in coding, network simulations and prototyping for multimedia communications that will be leveraged for this work. The experimental activity will also leverage ongoing research in the group based on a commercial contract seeking to develop a low-latency voice communication system for difficult wireless communication scenarios in which a limited set of services are available (e.g., HTTP only).
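One simple instance of bringing a quality measure into the transmission policy, sketched below with invented numbers, is to prioritize packets by expected distortion reduction per byte under a latency-induced byte budget:

```python
# Minimal sketch: greedy selection of which video packets to send under a
# byte budget, prioritizing by distortion reduction (e.g., MSE) per byte.
from dataclasses import dataclass


@dataclass
class Packet:
    pid: int
    size_bytes: int
    mse_reduction: float  # distortion removed at the receiver if delivered


def schedule(packets: list[Packet], budget_bytes: int) -> list[int]:
    # Greedy by "utility density": distortion reduction per transmitted byte.
    chosen = []
    for p in sorted(packets, key=lambda p: p.mse_reduction / p.size_bytes,
                    reverse=True):
        if p.size_bytes <= budget_bytes:
            chosen.append(p.pid)
            budget_bytes -= p.size_bytes
    return chosen


frame = [Packet(0, 1200, 95.0),   # e.g., base-layer slice
         Packet(1, 900, 20.0),
         Packet(2, 1400, 8.0)]    # e.g., enhancement data
print("send order:", schedule(frame, budget_bytes=2200))
```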
Outline of work plan: In the first year, the PhD candidate will familiarize with the recently proposed WebRTC standard and its API, so as to build a detailed picture of which parameters are available for further investigation, tuning, and inclusion in a theoretical optimization framework. During this process the fundamental abilities of identifying, modifying and extracting the relevant parts of the coding engine and protocol stack will be acquired. This activity will initially lead to conference publications with heuristic optimization strategies, e.g., dynamic size adaptation of the basic coding unit (video frame). In the second year, building on the theoretical knowledge already present in the research group, new techniques will be developed and experimented to demonstrate, by means of simulations, the gains that can be achieved by carefully configuring all the elements involved in the coding and transmission chain, in terms of both improved multimedia quality and resource savings. This activity will target journal publications presenting both the theoretical modeling and analysis of the optimization algorithms and the simulation results. In the third year the activity will be expanded in the direction of developing prototypes suitable to run experiments with traditional browsers, running on both desktop and mobile platforms over wireless communication channels, so that the developed algorithms can be tested in the field, investigating the match between simulations and results in actual conditions. This will provide insight to improve the shortcomings identified in the modeling framework. A refined version of the framework will target a journal publication providing added value in terms of the verified adherence of the results to reality.
Expected target publications: Possible targets for research publications (well known to the proposer) include IEEE Transactions on Multimedia, Elsevier Signal Processing: Image Communication, Elsevier Computer Networks, IEEE International Conference on Communications, IEEE International Conference on Image Processing, IEEE International Conference on Multimedia and Expo
Current funded projects of the proposer related to the proposal: None at the moment
Possibly involved industries/companies:None at the moment

Title: Real-time scheduling of automotive embedded software for multi-core computing platforms
Proposer: Massimo Violante
Group website: www.cad.polito.it
Summary of the proposal: Multi-core computing platforms are offering new opportunities to embedded system development, but they are also posing new challenges. When faced with the problem of porting an existing algorithm devised for single-core computing platforms to multi-core ones, engineers need a methodology to preserve the desired functionality while meeting the deadlines requested by hard real-time applications. When real-time automotive embedded software is considered, the number of tasks (which can be in the range of 10 to 50), and the dependencies existing among them, make the problem very challenging, and established solutions are not available today. The research activity aims at tackling this problem, taking into consideration actual automotive applications ranging from body and chassis control to engine control, by developing suitable techniques for deploying task sets on multicore architectures and for scheduling them under hard real-time constraints. The research activity will be carried out in cooperation with leading international automotive companies.
Research objectives and methods: Problems and objectives
The semiconductor industry is moving toward multicore architectures in response to the ever-increasing demand for computing power that many applications exhibit. For embedded real-time applications, such as powertrain control, multicore could have a disruptive effect. Traditionally, embedded real-time applications are deployed on a single core and, although using multiple tasks, they are not designed to exploit the parallelism of multicores. As a result, two challenges arise:
• The first challenge stems from the need to reuse legacy applications on multicore. Legacy applications are important assets deployed in millions of products, and their reuse is crucial. As an example, powertrain control algorithms are the result of decades of improvements. As a matter of fact, legacy applications, which may comprise concurrent tasks and may be tuned for real time, are designed and validated on single-core architectures only, and cannot be ported to multicore easily. As a result, the first objective of the research focuses on porting existing applications to multicores under specific requirements, such as real time, and will address the development of techniques to analyze the application's functional and timing behaviors, and to identify suitable real-time schedules that preserve the functional/timing properties when moving from a single execution core to multiple execution cores.
• The second challenge stems from the need to rethink real-time applications leveraging multiple execution cores. As of today, novel algorithms, especially in real-time contexts such as powertrain control, are designed using model-driven approaches where domain-specific languages, such as Simulink, are used to capture the functional behaviors, and automatic code generation is then used to map the model onto computing architectures. Architectural features such as multiple execution cores are not considered during modeling: therefore, some architectural constraints may be neglected (e.g., the time needed for synchronizing two tasks deployed on two different cores), resulting in timing-closure problems when mapping the application onto multicore architectures. As a result, the second objective of the research focuses on techniques for anticipating meaningful architectural features in the modeling process, in order to simplify, and possibly automate, the translation of the model into code that can be executed on multicore computing architectures.
Methods
The research, carried out in cooperation with automotive companies (FCA, ITT Motion Technology, and Ideas & Motion), will start with the analysis of existing control applications developed for single core architectures. Two methods will be followed:
• Actual production code will be analyzed to derive an abstract representation of the relations among tasks, in terms of dependency and communication (a minimal sketch of this abstraction follows the list). Scheduling techniques will then be developed to deploy the tasks onto different execution cores, and their feasibility will be evaluated using hardware-in-the-loop techniques.
• A model of the control algorithm will be analyzed. A benchmark implementation targeting multicore will be developed, using the techniques defined to fulfill the first research objective. A technique to restructure the model will then be proposed, to embed architectural-related features in the model. The modified model will then be transformed into code, it will be evaluated using hardware-in-the-loop techniques, and compared with the benchmark.
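The sketch below shows the graph abstraction and a naive placement heuristic; the task set, WCET values, and heuristic are illustrative assumptions, not the techniques to be developed:

```python
# Minimal sketch: a task dependency graph and greedy placement onto cores.
from collections import defaultdict

# task -> (WCET in microseconds, set of tasks it depends on); values invented
TASKS = {
    "sense": (200, set()),
    "filter": (400, {"sense"}),
    "control": (700, {"filter"}),
    "log": (300, {"sense"}),
    "actuate": (250, {"control"}),
}
N_CORES = 2


def topo_order(tasks):
    """Visit tasks so that every task follows all of its dependencies."""
    order, done = [], set()
    while len(order) < len(tasks):
        for t, (_, deps) in tasks.items():
            if t not in done and deps <= done:
                order.append(t)
                done.add(t)
    return order


# Greedy placement: walk the graph in dependency order and put each task
# on the currently least-loaded core (load approximated by summed WCET).
load = defaultdict(int)
placement = {}
for t in topo_order(TASKS):
    core = min(range(N_CORES), key=lambda c: load[c])
    placement[t] = core
    load[core] += TASKS[t][0]

print(placement)   # which core runs each task
print(dict(load))  # per-core load proxy
```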
Outline of work plan: Year 1
The first year will be devoted to:
• Analysis of the actual production code of an automotive control application. Starting from existing tools, such as Amalthea, a methodology will be developed to analyze the C code of the application (average size: 1,000 files; 5,000 lines of code per file; 50 concurrent tasks; 1,000 objects shared among tasks), building a graph-based abstract representation that highlights the relationships among tasks in terms of dependency (which task synchronizes with which) and communication (which task produces/consumes which object).
• Definition of scheduling techniques to map existing task sets onto multicore architectures under dependency and communication constraints. The analysis mentioned above will be used to guide the definition of the scheduling techniques, and the considered application will be used as a benchmark for the developed techniques.
Year 2
The second year will be devoted to:
• Analysis of the Simulink model of an automotive control algorithm and of the architectural features of typical multicore architectures for embedded systems.
• Identification of the properties synthesizing the features of multicore architectures that are suitable for enriching the Simulink model of a control algorithm.
• Development of a novel multicore-aware control algorithm and its experimental evaluation using model-in-the-loop techniques.
Year 3
The third year will be devoted to:
• Synthesis of the code from the multicore-agnostic and the multicore-aware models.
• Development of a suitable harness for hardware-in-the-loop validation.
• Benchmarking of the two developed implementations.
The activities for the three years will be carried out in cooperation with powertrain companies; in particular, we expect to cooperate with:
• FCA for what concerns the analysis of legacy application for drivetrain (powertrain, and dual clutch gearbox);
• ITT Motion Technologies for what concerns innovative braking systems;
• Ideas & Motion for what concerns the analysis of control algorithms with particular emphasis on permanent magnet synchronous motors for driveline applications.
Expected target publications: IEEE Transactions on Computers
IEEE Transactions on Industrial Electronics
Current funded projects of the proposer related to the proposal: The activities described in this proposal will be supported by projects already funded by ITT Motion Technologies and Ideas & Motion. Activities are under discussion with FCA.
Possibly involved industries/companies:FCA
ITT Motion Technologies
Ideas & Motion

Title: Advanced User Interaction in Ambient Intelligence Systems
Proposer: Fulvio Corno
Group website: http://elite.polito.it
Summary of the proposal: Ambient Intelligence design methodologies require that users be involved throughout all design phases, and that system behavior be designed to support users' daily activities "proactively, but sensibly".
The PhD proposal explores novel user interaction modalities, analyzing the full spectrum between passive-user systems (e.g., in crowdsensing) and multisensorial user interaction (e.g., in immersive gaming). The focus of the thesis will be on the interaction modalities, applying rigorous HCI research methods, and on the underlying system intelligence needed to support the advanced interaction.
Research objectives and methods: Ambient Intelligence systems, based on Internet of Things technologies, may deliver innovative and useful services only if a suitable and effective intelligent reasoning backend is available and suitable interaction methods are offered to users.
The PhD thesis will focus on studying, proposing and engineering one or more interaction solutions (enabled by an intelligent interactive system) that may act as enablers of new IoT services or AmI systems.
The research methods and topics will tackle the design and evaluation of AmI systems from two opposite sides: from the Human-Computer Interaction (HCI) research, and from the Intelligent Systems perspective.
The aim is to exploit novel HCI solutions, backed by rigorous HCI research methods, and couple them with intelligent algorithms embedded in the AmI system that will enable context-aware, user-adapted and proactive interaction.
Outline of work plan: Phase 1 (months 1-6): study of HCI research methods and of the state of the art in AmI systems.
Phase 2 (months 6-12): identification of an application area (e.g., ambient intelligent games, user awareness in smart homes, spontaneous objects in smart cities, or other promising application areas identified in Phase 1)
Phase 3 (months 6-12): analysis of the requirements for the intelligent system, and development of the necessary formal models and ontologies. Comparative study, in the selected application area, of the algorithmic and interactive approaches.
Phase 4 (months 12-24): Development of interaction methods in the selected application context. Experimentation and user evaluation. Identification or development of experimental data-sets.
Phase 5 (months 24-36): integration of the algorithms into a complete AmI demonstrator. User testing.
Expected target publications: IEEE Internet of Things Journal
IEEE Transactions on Human-Machine Systems
IEEE Intelligent Systems
ACM Transactions on Computer Human Interaction (TOCHI)
ACM Transactions on Cyber-Physical Systems
Current funded projects of the proposer related to the proposal: No currently funded project.
Possibly involved industries/companies:Telecom Italia
Reply
Consoft Sistemi

Title: Summarization of heterogeneous data
Proposer: Luca Cagliero
Group website: http://dbdmg.polito.it/wordpress/
Summary of the proposal: In today's world, large and heterogeneous datasets are available and need to be analyzed, including textual documents (e.g. learning documents), graphs (e.g. social network data), sequential data, and time series (e.g. historical stock prices).
Summarization entails extracting salient information from large datasets (e.g. an abstract of a set of learning documents, the most salient trends into stock price movements, the most relevant relationships among the users of a social network). Although some attempts to apply data mining techniques to summarize specific data types have been made, extracting salient content from heterogeneous data is still an open and challenging research issue.
The candidate will investigate new data mining approaches to summarizing data collections different in nature, to combining the information provided by heterogeneous models, and to effectively supporting knowledge discovery from such models. Furthermore, she/he will target the development of flexible multi-type data analytics systems, which allow experts to cope with heterogeneous data without the need for a collection of ad hoc solutions.
Research objectives and methods: The research objectives address the following key issues.
- Study and development of new portable summarization algorithms. Available summarization algorithms are able to cope with a limited number of data types and their applicability is often limited to specific application contexts (e.g. documents written in a given language, news ranging over the same subject). Thus, the goal is to develop new summarization approaches that are portable to different data types and applicable to data acquired in various application contexts.
- Integration of heterogeneous models. Data mining models capture the most salient information hidden in the analyzed data. Coping with heterogeneous data entails integrating models extracted from different data types. Hence, the goal is to study the development of unified models, which capture the information contained in datasets different in nature.
- Knowledge discovery. To support domain experts in decision making, the generated models need to be explored. Therefore, the goal is to effectively and efficiently explore the generated data mining models to gain insights into these heterogeneous data collections.
Research methods include the study, development, and testing of in-core, parallel, and distributed data mining algorithms that are able to cope with various data types, including textual data, sequential data, time series, and graphs. Extension of existing solutions tailored to specific data types (e.g. language processing tools, graph indexing algorithms) will be initially studied and their portability to different data types will be investigated.
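As a hint of what a portable summarizer could look like, the sketch below ranks items by centrality on a similarity graph; only the pairwise similarity function is type-specific, so the same skeleton could be pointed at sentences, time-series segments, or social-network nodes (the toy data and the Jaccard choice are purely illustrative):

```python
# Minimal sketch: a similarity-graph ranking summarizer (TextRank-like).
import numpy as np


def rank_by_centrality(items, similarity, damping=0.85, iters=50):
    """Rank items by PageRank-style centrality on their similarity graph."""
    n = len(items)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i, j] = similarity(items[i], items[j])
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (W.T @ r)
    return sorted(range(n), key=lambda i: -r[i])


def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity; swap in any type-specific measure."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)


sentences = ["stock prices rose sharply today",
             "today stock prices rose after the announcement",
             "the weather was mild"]
ranking = rank_by_centrality(sentences, jaccard)
print("summary sentence:", sentences[ranking[0]])
```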
Outline of work plan: PHASE I (1st year): overview of existing summarization algorithms, study of their portability to different data types, performance analysis of existing solutions, and preliminary proposals of new summarization strategies for various types of data.
PHASE II (2nd year): study and development of new summarization algorithms, experimental evaluation on a subset of application domains (e.g. e-learning, finance, social network analysis). Exploitation of the extracted knowledge in a subset of selected contexts.
PHASE III (3rd year): Integration of different data mining models into unified solutions able to cope with multiple types of data. Study of the portability of the designed solutions to data acquired in different contexts.

During all the three years the candidate will have the opportunity to attend top-quality conferences, to collaborate with researchers from different countries, and to participate in competitions on data summarization organized by renowned entities, e.g., the Text Analytics Conference tracks organized by the National Institute of Standards and Technology (NIST), and the Data Challenges organized by the Financial Entity Identification and Information Integration (FEIII).
Expected target publications: Any of the following journals on Data Mining and Knowledge Discovery (KDD) or Data Analytics:

IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery from Data)
ACM TIST (Trans. on Intelligent Systems and Technology)
IEEE TETC (Trans. on Emerging Topics in Computing)
IEEE TLT (Trans. on Learning Technologies)
ACM TOIS (Trans. on Information Systems)
Information Sciences (Elsevier)

IEEE/ACM International Conferences on Data mining and Data Analytics (e.g., IEEE ICDM, ACM SIGMOD, IEEE ICDE, ACM KDD, ACM SIGIR)
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies:

Title: Neural Networks for speaker recognition
Proposer: Pietro Laface
Group website: http://www.dauin.polito.it/research/research_groups/srg_speech_re...
Summary of the proposal: State-of-the-art systems in speaker recognition are based on Gaussian Mixture Models (GMMs), where a speaker model is represented by a supervector stacking the GMM means. To handle utterances coming from different channels or environments, the best solutions rely on different forms of Factor Analysis (FA), which allows obtaining a compact representation of a speaker or channel model as a point in a low-dimensional subspace, the “identity vector” or i-vector. Very good performance has been obtained using i-vectors and classifiers based on Probabilistic Linear Discriminant Analysis (PLDA). The goal of such systems is to model the underlying distribution of the speaker and channel components of the i-vectors in a Bayesian framework. From these distributions, it is possible to evaluate the likelihood ratio between the “same speaker” hypothesis and the “different speakers” hypothesis for a pair of i-vectors.
Another successful approach using i-vectors is based on discriminative classifiers, in particular Pairwise Support Vector Machines.
In this framework, this proposal aims at developing speaker identification systems that are robust to environment changes and short utterance durations, and at exploring new models and techniques that allow better estimation of the i-vectors, or of similar compact representations of speaker voice segments.
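For reference, the “same speaker” vs. “different speakers” likelihood ratio described above has a closed form in the two-covariance PLDA model; the following sketch evaluates it on toy-dimensional i-vectors (dimensions and covariances are invented for illustration):

```python
# Minimal sketch: two-covariance PLDA log-likelihood-ratio scoring of a
# pair of i-vectors. B models speaker variability, W channel/session noise.
import numpy as np
from scipy.stats import multivariate_normal

D = 4                  # real i-vectors are typically a few hundred dimensional
rng = np.random.default_rng(0)
B = np.eye(D) * 2.0    # between-speaker covariance
W = np.eye(D) * 0.5    # within-speaker covariance
mu = np.zeros(D)


def plda_llr(phi1, phi2):
    x = np.concatenate([phi1, phi2])
    m = np.concatenate([mu, mu])
    same = np.block([[B + W, B], [B, B + W]])     # shared speaker factor
    diff = np.block([[B + W, np.zeros((D, D))],
                     [np.zeros((D, D)), B + W]])  # independent speakers
    return (multivariate_normal.logpdf(x, m, same)
            - multivariate_normal.logpdf(x, m, diff))


speaker = rng.multivariate_normal(mu, B)
trial_a = speaker + rng.multivariate_normal(np.zeros(D), W)
trial_b = speaker + rng.multivariate_normal(np.zeros(D), W)
impostor = rng.multivariate_normal(mu, B + W)
print("target LLR:  ", plda_llr(trial_a, trial_b))
print("impostor LLR:", plda_llr(trial_a, impostor))
```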

Research objectives and methods: The objectives of this proposal are to devise new approaches for the following topics:
- Robustness to environment and channel distortion
We plan to investigate different approaches to feature extraction that combine spectral information with high-level features. In particular, attention will be devoted to exploiting features derived from speech decoders based on Deep Neural Networks, which can provide noise-robust information useful for compensating the effect of noise at the spectral level.
- Short duration utterances
The text-dependent identification task, which involves user authentication from short speech segments, i.e., a fixed user-selected passphrase, is obviously affected by the scarcity of available speaker information. We plan to address this problem by means of a combination of Factor Analysis and generative or discriminative models, which incorporate duration information in speaker recognition modeling.

Outline of work plan: Phase 1 (1st year):
- Study of the literature
- Study of advanced features of the Python language and of distributed computing
- Study of the computing environment and speaker recognition tools developed in the last years by our group.
- Development and integration of Deep Neural Network models in the framework of a GMM/PLDA system.
Phase 2 (2nd year):
- Text-dependent speaker recognition: evaluation of state-of-the-art techniques, and proposal of new, more accurate models.
- Investigation of the most effective parameters and techniques for training text-independent speaker models.
Phase 3 (3rd year):
Integration of the proposed approaches for text-dependent and text-independent speaker recognition.
Expected target publications: IEEE International Conference on Acoustics, Speech and Signal Processing
ISCA Interspeech Conference
IEEE/ACM Transactions on Audio, Speech and Language Processing
Current funded projects of the proposer related to the proposal: The Speech Recognition Group collaborates with Nuance Communications on the basis of annual contracts devoted to speaker recognition topics.
Possibly involved industries/companies: Nuance Communications.

Title: Energy aware software
Proposer: Morisio Maurizio
Group website: Softeng.polito.it
Summary of the proposal: Energy consumption and energy efficiency are becoming mainstream concerns in the IT domain too, both for sustainability reasons (energy consumption in IT is a small fraction of the total, around 2%, but increasing) and for practical reasons (energy availability limitations in battery-operated mobile devices or Internet of Things (IoT) scenarios). This proposal is about defining methods and tools to 1) characterize the energy consumption of software applications using both direct measurements and models, 2) refactor applications to reduce energy consumption, and 3) analyze the tradeoffs between energy consumption and other non-functional properties of applications. The main scenarios considered are smartphones and IoT devices.
Research objectives and methods: Energy consumption in IT is usually measured at the level of hardware components (CPU, screen, storage, etc.). This research activity aims at measuring energy consumption at the level of software applications, evaluating (by comparison with similar applications or benchmarks) whether the consumption is within the average, and, if not, understanding where the application can be refactored to improve its energy efficiency. The main foci of the work are: mobile applications (Android, iOS); IoT platforms (Arduino, Raspberry Pi).
Objective 1: Characterization of consumption. Knowing the consumption of software is not straightforward, since it involves many components (CPU, screen, etc.) shared with other applications and basic operating system services. Characterization will be pursued by two means: direct measurement with external gauges, and modeling. The former is more precise but not practical for normal users, so it is mainly used to define, validate and improve the precision of models. A further goal is to classify applications in consumption categories easy to understand for the end user, similarly to what happens with consumer devices (dishwashers, cars).
Objective 2: Reduction of consumption. Once the consumption level of an application is known, it is possible to try to reduce it. Three means are explored: definition of refactoring patterns at the code level, run-time self-adaptation of applications, and modification of end-user behavior.
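
As a minimal sketch of the modeling side of Objective 1, the snippet below fits a linear power model that estimates application energy from per-component utilization counters, calibrated against a few direct measurements taken with an external gauge; the counter names and all data are hypothetical assumptions.

    # Hedged sketch: linear energy model fitted on instrumented measurements.
    import numpy as np

    # Each row: [cpu_utilization, screen_on_seconds, bytes_transmitted]
    usage = np.array([[0.20, 30.0, 1.0e6],
                      [0.55, 45.0, 5.0e6],
                      [0.90, 60.0, 2.0e7],
                      [0.10, 10.0, 2.0e5]])
    measured_joules = np.array([12.0, 25.0, 48.0, 4.0])  # external gauge (invented)

    # Least-squares fit of per-component energy coefficients.
    coeffs, *_ = np.linalg.lstsq(usage, measured_joules, rcond=None)

    def estimate_energy(cpu, screen_s, tx_bytes):
        # Model-based estimate, usable without physical instrumentation.
        return float(np.dot(coeffs, [cpu, screen_s, tx_bytes]))

    print(estimate_energy(0.4, 20.0, 3.0e6))
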
Outline of work plan: First year (objective 1)
Improvement of a method to measure the consumption of applications starting from instrumentation; definition/evaluation of models to measure consumption without instrumentation; definition of usage profiles; definition of application families; definition of benchmarks for application families and usage profiles; definition of classes of consumption; definition of a complete and validated process to classify mobile applications in consumption classes.
Second year (objective 1)
Application of the method to a relevant number of mobile applications from public repositories (Google Play, iTunes). Application of the method in IoT scenarios. Evaluation and improvement of the method.

Second year, third year (objective 2)
Analysis and proposal of techniques to reduce consumption; experimental evaluation of their effect; definition of design patterns to reduce energy consumption and of their context of application. Evaluation of the effect on other non-functional properties, notably performance.
Analysis of user involvement to reduce consumption. Experimentation with end users, evaluation of the effect of classes of consumption on usage profiles.
Expected target publications: IEEE Computer, IEEE Transactions on Software Engineering, IEEE Transactions on Computers, IEEE Software
Current funded projects of the proposer related to the proposal: No
Possibly involved industries/companies: Telecom Italia

Title: Functional Model-Driven Networking
Proposer: Fulvio Risso
Group website: http://netgroup.polito.it
Summary of the proposal: The recent trend in networking consists in moving network functions to the virtual domain, leveraging technologies that are already widely used in the ICT sector.
However, this trend does not change the founding principles of networking itself, which is still based on the same concepts (OSI model, IP/MAC addresses, TCP/IP protocols, etc.) defined more than 30 years ago.
This research activity aims at redefining the networking domain by proposing a functional and model-driven approach, in which novel declarative languages are used to define the desired behavior of the network, and networking primitives are organized in a set of inter-changeable functional modules, which can be flexibly rearranged and composed in order to implement the requested services.
The current proposal will allow network operators to focus on what to do, instead of how to do it, and it will also facilitate the operations of automatic high-level global orchestration systems, which can more easily control the creation and update of network services.
Research objectives and methods: The objective of this research is to explore the possibility of introducing a functional and model-driven approach to create, update and modify network services. Such an approach allows (i) deploying complex network services by means of functional languages, i.e., focusing on the behavior that has to be obtained (also known as the intent-driven approach), and (ii) creating elementary functional components that provide basic network functionalities and that can be arbitrarily rearranged and composed in complex service chains in order to deliver the expected service.
The above objective will be reached by investigating the following three areas:
• Definition of novel functional languages. Novel languages are required to specify the functional behavior of the network, starting from existing research proposals (e.g., FRENETIC, http://www.frenetic-lang.org/) and extending that approach to more high-level modular constructs that facilitate compositional reasoning for complex and flexible service composition.
• Model-driven creation of elementary network components. High-level languages are definitely important as they represent the entry point of potential users (e.g., network operators) into this new intent-based world, but they must be efficiently mapped on the underlying infrastructure. This second area of research aims at creating modular networking building blocks that provide elementary functions, with the capability to be flexibly rearranged in complex service graphs defined at run-time. A model-driven approach brings additional flexibility into the system, as each component can be selected based on what it does instead of its actual implementation, enabling massive reuse and improvement of components without affecting the functional view of the service. A possible starting point for this approach consists in leveraging some existing technologies, such as the IOVisor virtual components (http://iovisor.github.io), which are based on networking-tailored virtual machines, and the model-driven development of networking components proposed in OpenDayLight (https://www.opendaylight.org/).
• Functional model of each network function. In order to be able to connect network functions in arbitrary graphs, we need to know exactly what each network function does. This problem requires the definition of a functional model for each elementary network module that defines exactly which incoming network traffic is accepted, how it is possibly modified, and what will be returned in output. This model, which resembles a transfer function, makes it possible to create complex network models operating at the functional level, hence providing a match between user requests, expressed in the above-mentioned functional languages, and the delivered network service, obtained through the composition of the above elementary blocks thanks to the formal composition properties of each transfer function (a minimal sketch of this composition idea follows the list).
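
The sketch below illustrates, under strong simplifying assumptions, the transfer-function view of network functions: each function maps a set of abstract packets to the set of packets it emits, and a service chain is their formal composition. The packet fields and the two functions are invented for illustration and are not an actual IOVisor or OpenDayLight model.

    # Illustrative sketch: network functions as composable transfer functions.
    from functools import reduce

    def firewall(packets):
        # Accepts only packets whose destination port is in the allowed set.
        allowed = {80, 443}
        return {p for p in packets if p[2] in allowed}

    def nat(packets):
        # Rewrites the (private) source address to a single public one.
        return {("203.0.113.1", dst, dport) for (_, dst, dport) in packets}

    def compose(*functions):
        # Formal composition of transfer functions into a service chain.
        return lambda packets: reduce(lambda acc, f: f(acc), functions, packets)

    # Packets modeled as (src, dst, dst_port) tuples.
    traffic = {("10.0.0.5", "192.0.2.9", 80), ("10.0.0.7", "192.0.2.9", 22)}
    chain = compose(firewall, nat)
    print(chain(traffic))  # only the port-80 packet survives, source rewritten
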
Outline of work plan: This current research project can be carried out in the following phases:
• Year 1: Analysis of the literature with respect to the topic. Extension of the current work on modelling network functions, in collaboration with Prof. Sisto. Initial publication about modelling network functions.
• Year 2: Monitoring of the scientific literature with respect to the topic of the project. Extension of previous work and integration with model-driven frameworks supporting network functions. Creation of a proof-of-concept orchestrator that is able to create a complex network service given a set of high-level requirements and constraints, exploiting the above mentioned framework. Second publication about functional-based delivery of network services.
• Year 3: Monitoring of the scientific literature with respect to the topic of the project. Analysis of the functional languages for networking and possible extensions to support complex use cases. Proof-of-concept implementation of the above languages targeting the orchestrator developed in Y2. Third publication focusing on the possible extensions of functional languages for networking. Final publication including all the work done in the three-year project.
Expected target publications: Most important conferences:
• USENIX Symposium on Networked Systems Design and Implementation (NSDI)
• USENIX/ACM Symposium on Operating Systems Design and Implementation (OSDI)
• IEEE International Conference on Computer Communications (Infocom)
• ACM workshop on Hot Topics in Networks (HotNets)
• ACM Conference of the Special Interest Group on Data Communication (SIGCOMM)
Most significant journals:
• IEEE/ACM Transactions on Networking
• IEEE Transactions on Computers
• ACM Transactions on Computer Systems
• Elsevier Computer Networks
Current funded projects of the proposer related to the proposal: None
Possibly involved industries/companies: PLUMgrid, Santa Clara, CA

Title: Cross-layer Lifetime Adaptable Systems
Proposer: Stefano Di Carlo
Group website: http://www.testgroup.polito.it
Summary of the proposal: Cyber-physical systems increasingly rely on a large number of hardware blocks with different architectures and programmability levels, integrated together on a single piece of silicon or system-on-chip (SOC). The goal is to efficiently execute parallel and application-specific tasks either in a general-purpose CPU or in a special-purpose core (such as GPUs, sensor processors, DSPs or other types of co-processors).
SOCs offer a diverse set of programmability features (hardware programmability vs. software programmability) and parallelism granularity (fine-grain vs. coarse grain). There is no golden rule or systematic methodology to determine the best setup for a particular cyber-physical system and application. Depending on the application domain, software-programmable devices like GPUs, DSPs or sensor processors may prevail in terms of performance or energy-efficiency while state-of-the-art hardware reconfigurable FPGAs already penetrate domains where software programmable devices used to be the norm.
This project aims to develop a Cyber-physical System+Application co-design framework for joint and dynamic optimization of the performance, energy and resilience of low-power heterogeneous architectures during the lifetime of the system. The final system “quality of operation” will be defined as a mixed function of the three important parameters, e.g., “max performance per availability unit per watt” or “watts per max availability per performance unit”. Known error tolerance methods based on information, hardware and software redundancy will be evaluated for different heterogeneous platforms under given performance and energy requirements for the studied application domains.
Research objectives and methods: Heterogeneity is now part of any cyber-physical system. While it offers several opportunities, it challenges the programmer to efficiently take advantage of all the available resources. The integration of many CPUs and special-purpose hardware operating in non-controlled environments and variable, unpredictable workload conditions maximizes the probability of failure due to errors originating from the hardware itself (variability, aging, etc.) as well as from the external environment (radiation, etc.). The optimization of cyber-physical systems design is now a multi-dimensional problem: maximum performance should come along with maximum resilience and energy efficiency, as well as security and dependability.
Thus, programmability and lifetime adaptability to the variable environment and workload become crucial in the design and deployment of cyber-physical systems. In this project, adaptability is understood not only as the capacity to self-tune hardware characteristics (i.e., voltage and frequency scaling), but also as the capacity to offer a high degree of resiliency to hardware errors, as well as availability at the application level (i.e., security).
We envision the development of a self-aware system that can autonomously adapt to the run-time conditions measured for performance, power and reliability. Through the definition of this interface with the hardware, an application container at the OS level will match the application requirements against the hardware reports and take the appropriate decisions. Unlike virtual machines, containers have the advantage of creating an exclusive execution zone for the application while the application runs directly on the system, thus incurring minimal overhead. The presence of the container adds a degree of freedom to insert a lightweight, per-application (in contrast to hypervisors) self-aware system where runtime decisions will be taken, as in the sketch below. This system will inherently communicate with the different system layers (i.e., hardware, OS, application).
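
The following Python fragment is a purely illustrative sketch of such a container-level control loop: it reads hypothetical hardware monitors and selects an execution mode trading performance against resilience and power. Monitor values, thresholds and mode names are invented; a real system would read actual hardware counters.

    # Illustrative self-aware adaptation loop (all values are invented).
    import random
    import time

    def read_monitors():
        # Stand-in for hardware monitoring facilities (error rate, power).
        return {"error_rate": random.uniform(0, 1e-3),
                "power_watts": random.uniform(1, 10)}

    def choose_mode(m):
        # Self-aware decision: enable redundancy only when errors spike.
        if m["error_rate"] > 5e-4:
            return "triple-modular-redundancy"   # resilience over performance
        if m["power_watts"] > 8:
            return "low-frequency"               # power over performance
        return "performance"

    for _ in range(3):  # a real container would run this loop continuously
        print("container selects mode:", choose_mode(read_monitors()))
        time.sleep(0.1)
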
Jointly maximizing performance, energy efficiency and resilience of the complete system and application is a hot research and development challenge. While software-based resilience methods can be more suitable to software-programmable accelerator-based systems, the performance and energy overhead they incur may be unacceptable at the extreme performance scale. On the other hand, hardware reconfigurable fabrics may spare significant silicon areas for improved resilience to errors.
We consider such a co-design framework a major step forward for future cyber-physical systems, where the selection of the best operation point for a particular application domain is a very hard task given the large number of possible configurations.
Outline of work plan: Year 1

The student needs to become familiar with the use of micro-architectural models of hardware cores, which represent a candidate approach to simulate the hardware infrastructure and to enable the development of the hardware/software co-design framework. This task will include the modification of existing simulators such as GEM5, MARS, GpuSIM and Multi2Sim in order to implement a set of different monitoring facilities.

As a result of this activity, a complete simulation environment for a heterogeneous system will be developed.

Year 2

The second year will be dedicated to the development of the container representing the interface between the hardware and the OS. The container needs to implement algorithms able to process at run-time the information obtained from the hardware monitoring facilities and to take self-aware decisions that optimize the execution based on target optimization criteria (reliability, performance, power consumption, etc.).

Year 3

The last part of the PhD will be devoted to a massive experimental campaign showing the effectiveness of the developed demonstrator on a set of representative applications. The experimental campaign must be carefully designed to make sure that a significant number of test cases with different requirements and complexity are properly selected.
Expected target publications: The work developed within this project can be submitted to:

Conferences:
ASPLOS, HPC, DSN, IOLTS, DFTS

Journals:
IEEE TOC, IEEE TCAD, ACM TACO
Current funded projects of the proposer related to the proposal: CLERECO
Possibly involved industries/companies: Intel Italia, Thales.

Title: Synchro-Modal Supply Chain in Logistic Optimization
Proposer: Roberto Tadei
Group website: http://www.orgroup.polito.it/
Summary of the proposal: The Horizon 2020 programme sets the development of “smart, green and integrated transport” as one of its main objectives. The programme also suggests developing innovative frameworks that incorporate the usage of multi-modal networks and the synchro-modal optimization of the related supply chain.
This research focuses on the optimization of transportation operations at both the global and the local/urban level. Complete knowledge of the system is required nowadays: the different actors, the transportation network, traffic information, scheduling, risk, and emissions must be explicitly considered as a complex system.
The aim of this research proposal is to study the peculiarities of supply chain operations, with a special focus on routing applications, and to propose new algorithms able to deal with realistic problems.
Research objectives and methods: In this research, particular attention will be devoted to the local/urban system. Last-mile logistics is one of the most promising research fields in the literature. In fact, it is both challenging from an academic point of view and crucial for the development of practical Smart City and City Logistics applications.
These problems are characterized by high uncertainty of some parameters (travel times, delivery times), large geocoded data (large and detailed multi-modal maps) and large-sized instances (thousands of stops). Moreover, they must be solved within a limited computational time. For example, a parcel delivery company needs to solve a 4000-delivery problem in 15 minutes, while the methods in the literature use hours to manage 1000 deliveries.
Several issues are still open in the literature. Transportation problems are based on a network. Nowadays, different tools can use network (map) information and multi-modal combinations with the objective of optimizing the path between a pair of nodes. However, these tools are not flexible enough to deal with complex objectives (economic, service quality, environmental). The optimization of freight transportation requires the computation of large origin-destination cost matrices in a short time (from a few minutes to a few seconds depending on the application). The objective function usually does not consider the multiple objectives that describe the complex system of the urban area. Finally, the multi-modal transportation problem is often simplified without taking into account the synchronization and transshipment costs that characterize freight transportation.

The objectives of this research project are grouped in three macro-objectives:

Management Science/Operations Research objectives:
• Compute large-sized delivery cost matrices over urban maps (a matrix of 4000x4000 shortest paths in minutes, instead of the 2.5 hours taken by Google Maps services); see the sketch after this list
• Develop new models and methods for solving large-sized multimodal multi-objective urban delivery problems
• Identify Key Performance Indicators (KPIs) and analytics able to measure the overall process.
ICT objectives:
• Create a reliable map service based on OpenStreetMap and OpenTripPlanner to be used as Software as a Service
• Incorporate the routing algorithms and make them usable as Software as a Service
• Create dashboards incorporating KPIs and analytics for Smart City applications.
Testing objectives:
• Field testing of solutions in collaboration with DAUIN labs, in particular the ICE Lab.
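
As a minimal sketch of the cost-matrix objective referenced in the list above, the snippet below computes one row of an origin-destination matrix with Dijkstra's algorithm over a toy road graph; the graph is invented, and a real 4000x4000 instance built from map data would require far more aggressive engineering (preprocessing, parallelism) to run in minutes.

    # Hedged sketch: one row of an origin-destination travel-time matrix.
    import heapq

    def dijkstra_row(graph, origin):
        # Shortest travel times from origin to every node (one matrix row).
        dist = {origin: 0.0}
        heap = [(0.0, origin)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue
            for neighbor, cost in graph.get(node, []):
                nd = d + cost
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    heapq.heappush(heap, (nd, neighbor))
        return dist

    # Toy road network: node -> [(neighbor, travel_time_minutes), ...]
    roads = {"depot": [("a", 4), ("b", 2)],
             "b": [("a", 1), ("c", 7)],
             "a": [("c", 3)]}
    print(dijkstra_row(roads, "depot"))  # {'depot': 0.0, 'a': 3.0, 'b': 2.0, 'c': 6.0}
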
Outline of work plan: PHASE I (1st semester). The first period of the activity will be dedicated to the study of the state of the art of Smart Cities and City Logistics, with particular attention to e-grocery and last-mile applications. The student will study the relevant topics in combinatorial optimization, stochastic programming (which considers the uncertainty of information) and the optimization methods related to transportation problems.

PHASE II (II and III semester). Cost matrix definition. In the second phase, the student will study the existing frameworks that use map information and provide multi-modal itineraries. The aim is to advance the state of the art of the methods used in these frameworks in order to compute the cost matrix of the transportation problem, considering multiple objectives as well as the transshipment operations, their synchronization and their costs.

PHASE III (IV semester). Development of new algorithms for Smart Cities and City Logistics. The student will develop new mathematical models and algorithms for transportation problems in the urban system, able to incorporate different sources of uncertainty (e.g., online pickup and delivery, demand, travel times, etc.). The problems and the methods will thus belong to the Stochastic Programming and Combinatorial Optimization domains. The student will also validate the algorithms proposed in this phase in a simulation environment that considers realistic scenarios.

PHASE IV (V and VI semester). Prototyping and case studies. In this phase the results of Phases II and III will be engineered and incorporated into a larger framework. It will be applied to real case studies in the urban area. Several projects are currently in progress which require experimentation and validation of the proposed solutions.
Expected target publications: • Transportation Research
• Interfaces
• Omega - The International Journal of Management Science
• Management Science
• International Journal of Technology Management
• Business Process Management Journal
• IT Professional
Current funded projects of the proposer related to the proposal: • SynchroNET
• URBeLOG
Possibly involved industries/companies: TNT, DHL, TIM JoL

Title: Formal verification of next generation computer networks
Proposer: Guido Marchetto
Group website: http://netgroup.polito.it/
Summary of the proposal: The application of formal methods to computer networks has experienced increasing interest in recent years, mainly due to the possibility they offer of formally proving the correctness of such complex and highly distributed systems. These approaches have been used to verify the correct behavior of protocols and complex network configurations, as well as to spot safety-related, performance-related and security-related problems.
Since the landscape of computer networks is now rapidly evolving, it is necessary to study if and how formal methods can be applied also to these next generation networks. The aim of the proposed PhD research project is to analyze the relationship between formal methods and next generation networks (in particular network systems based on software defined networking and network function virtualization), also studying how such methods and tools can be adapted to work better within these scenarios. Later on, the candidate might propose enhancements to the techniques that have been considered, in order to better adapt them to the specific features and requirements of next generation networks. Specific attention will be given to the usability of these tools, by means of the definition of proper and user-friendly modeling languages.
Research objectives and methods: The main objective of the proposed research is to improve the use of formal methods in relation to the new specific features of next generation networks.
The research work will start with a study of the new features of next generation networks and of the currently available formal verification techniques for network systems and protocols. In particular, the candidate will focus on existing techniques that have already been developed with next generation networks as their main target. This initial study will be used to identify possible issues and limitations of these existing solutions, in terms of both performance and usability. The latter feature is key for the definition of a proper formal verification technique in this field: many solutions, in fact, although very flexible and powerful, require the use of complex modeling languages that network operators are not familiar with. This might significantly limit their adoption in real computer networks, especially in such dynamic scenarios as software defined networks and network function virtualization systems.
On the basis of this initial study, the candidate will run proper experiments to compare and deeply evaluate existing techniques; a toy illustration of the kind of property being verified is given below. The results of this work will be submitted for publication. Afterwards, the candidate will focus on some of the limitations of the current techniques that emerged in the experimentation work, and will propose and experiment with possible solutions for overcoming them. As mentioned earlier, a specific focus will be given to the usability of the tools. In particular, a possible research objective will be the definition of proper user-friendly (e.g., Java-like) languages to model the system components, in order to spread the adoption of the proposed solutions in the networking community.
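
The toy fragment below conveys, on a deliberately tiny model, the kind of property such techniques verify: a reachability policy checked exhaustively over a forwarding graph. Real verification tools encode such checks symbolically (e.g., as SAT/SMT problems); the topology and policy here are invented.

    # Toy illustration: exhaustive check of a reachability policy.
    from collections import deque

    def reachable(topology, src):
        # All nodes reachable from src following modeled forwarding links.
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            for nxt in topology.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Model: client traffic must traverse the firewall to reach the server.
    topology = {"client": ["firewall"], "firewall": ["server"]}
    assert "server" in reachable(topology, "client")   # the service is reachable
    assert "server" not in topology.get("client", [])  # no direct bypass link
    print("policy holds on this model")
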
Outline of work plan: Phase 1 (1st year): the candidate will study next generation networks, formal methods, and the state of the art of applying formal methods to next generation networks, also focusing on the modeling languages. This will be done by attending courses and by personal study. In this phase the candidate will also identify the main limitations of the existing techniques with reference to next generation networks. The results of this survey of the state of the art, including the limitations that emerged, will be published.
Phase 2 (2nd year): the candidate will continue the experimentation of the existing formal verification techniques on next generation networks and will identify possible improvements in order to overcome the emerged limitations. The definition of user-friendly modeling languages will also be considered. Furthermore, the implementation of the improved formal verification techniques and their experimentation will be started.
Phase 3 (3rd year): the implementation and experimentation of the proposed improved formal verification techniques and related dissemination activity will be completed.
Expected target publications: The contributions produced by the proposed research can be published in conferences and journals belonging to several different research communities: publications related to networking (e.g., IEEE/ACM Transactions on Networking or IEEE Transactions on Network and Service Management), publications related to formal methods (e.g., Formal Aspects of Computing) or publications related to security (e.g., IEEE Transactions on Dependable and Secure Computing).
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: TIM S.p.A., Tiesse S.p.A., already involved in the Netgroup activities on this topic.

Title: Security of IoT systems
Proposer: Antonio Lioy + Fulvio Corno
Group website: http://security.polito.it/ + https://elite.polito.it/
Summary of the proposal: The emerging popularity of IoT devices and systems, and their expected exponential growth, opens new and dangerous problems from the point of view of their security. As recent news has highlighted, many IoT systems use insecure protocols, expose weakly protected services, don't allow users to secure them, or run on obsolete operating systems.
The issue of IoT security is one of the major concerns in this field, as more and more sensitive data are generated and/or handled by these devices, and they are becoming part of important architectures (e.g., smart cities).
IoT devices used in healthcare could pose threats to human lives, as would the devices used in the monitoring and control of critical infrastructures (e.g., water or power supply, train or airplane traffic control).
The PhD candidate will explore all these issues, assess vulnerabilities, and propose possible hardening solutions and best practices.
Research objectives and methods: The PhD objective will be to identify security threats and vulnerabilities in the design of IoT systems and of IoT devices. In particular, specific solutions will be proposed for the hardening of such systems, for the detection of security risks, and for a robust configuration by the end users.
Research methods will include:
- the study of protocols adopted in wireless networks (from the point of view of their security), and of the APIs provided by middleware gateways and by cloud services dedicated to IoT integration.
- the experimental analysis and security assessment of real-life devices and systems
- the analysis of cryptographic techniques viable for IoT devices, which are typically limited in CPU speed and power, memory, communication speed, and electric power (see the sketch after this list)
- the proposal of new communication paradigms and the development of management solutions aimed at containing the security issues. These solutions can be based on advanced ICT architectures such as the Trusted Computing paradigm and the TrustZone technology available on recent CPUs.
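
As a minimal sketch of the point on cryptography for constrained devices, the snippet below authenticates sensor messages with HMAC-SHA256 from the Python standard library; symmetric primitives of this kind remain viable on devices too limited for heavyweight public-key operations. The key and message are illustrative assumptions.

    # Hedged sketch: lightweight message authentication for an IoT device.
    import hashlib
    import hmac

    PRESHARED_KEY = b"device-42-secret"  # assumed provisioned at manufacturing

    def tag(message: bytes) -> bytes:
        # Authentication tag that a gateway sharing the key can verify.
        return hmac.new(PRESHARED_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, received_tag: bytes) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(tag(message), received_tag)

    reading = b'{"sensor": "temp", "value": 21.5}'
    t = tag(reading)
    assert verify(reading, t)
    assert not verify(b'{"sensor": "temp", "value": 99.9}', t)  # tampering detected
    print("authenticated:", reading.decode())
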
Outline of work plan: Phase 1 (months 1-12): study of IoT systems and ICT security, with special reference to wireless communications and devices with limited capabilities.
Phase 2 (months 6-18): security assessment of IoT systems, from the points of view of the devices, of the adopted protocols (both for the local communications and towards the cloud), of middleware gateways (typically used as a local management point and as a front-end towards the Internet), and of cloud services (used for integration of many IoT devices, data aggregation, and analysis).
Phase 3 (months 18-30): research proposals of hardening methodologies, and individual experimental validation.
Phase 4 (months 24-36): integration of the proposals into a coherent security architecture for IoT and experimental evaluation of the integrated solution.
Expected target publications: IEEE Internet of Things Journal
IEEE Transactions on Human-Machine Systems
ACM Transactions on Cyber-Physical Systems
IEEE Transactions on Dependable and Secure Computing
IEEE Security & Privacy
International Journal of Information Security
Computers & Security
Software Practice & Experience
Current funded projects of the proposer related to the proposal: SECURED (FP7)
SHIELD (H2020)
Possibly involved industries/companies: Telecom Italia
Reply
Consoft Sistemi
Alyt

Title: ICT Tools and methods for Lean Business and Innovation Management
Proposer: Guido Perboli
Group website: http://www.orgroup.polito.it/
Summary of the proposal: In recent years, the Lean Business approach and Industry 4.0 have introduced new challenges in the industrial sectors. In fact, firms operate in an intelligent environment where different actors interact, supported by technology infrastructures and tools that affect their decision-making processes and behaviors.
In this context, characterized by a high level of dynamicity, innovation, and particularly its management, becomes a relevant factor. Differently from other methodologies, such as AGILE, The Lean Startup and WCM, the GUEST method, developed by the Proposer, gives a more general framework able to guide the innovation process from the business idea and business model definition to the definition of a factual action list. The method has already been successfully tested in different national and European grants, as well as in industrial projects (TIM, Emerson Power, Quercetti, TNT and others).

The aim of this research proposal is to consolidate the GUEST method from a methodological point of view and to provide ICT-based tools for managing the different steps of the GUEST.

These activities will be carried out in collaboration with the ICT for City Logistics and Enterprises (ICE) lab of Politecnico di Torino and Istituto Superiore Mario Boella.
Research objectives and methods: The objectives of this research project are grouped in three macro-objectives:
Management Science/Operations Research objectives:
• Creation of a logic model for the innovation process in SMEs, identifying the innovation flow, the actors involved and the incentives policies.
• Creation of value for the companies, actors and different stakeholders involved.
• Specialization of Lean Business methods, and the GUEST methodology in particular, in order to deal with small and medium sized research groups.
Technological objectives:
• Identify the technology infrastructure;
• Develop an ICT solution for managing the GUEST;
• Identify KPIs and analytics able to measure the overall process.
Testing objectives:
• Field testing of solutions in collaboration with ICE Lab.
Outline of work plan: PHASE I (1st semester). The first period of the activity will be dedicated to the study of the state of the art of Lean Startup and Lean Business methodologies, with particular emphasis on the user engagement and Key Enabling Technologies (KETs) points of view.

Phase II (2nd and 3rd semester). Identification of the main methodological changes to incorporate in the GUEST. In this phase the KETs, with a specific emphasis on the ICT Enabling Technologies, will be consolidated. Moreover, a focus on existing open and commercial solutions will be developed.

Phase III (4th semester). ICT infrastructure implementation (with ICE support) and identification of the test site.

Phase IV (5th-6th semester). Test of the Open Innovation Community at Politecnico and at the test site.

Expected target publications: • Transportation Research
• Interfaces
• Omega - The International Journal of Management Science
• Management Science
• International Journal of Technology Management
• Business Process Management Journal
• IT Professional
Current funded projects of the proposer related to the proposal: • Open Agorà (JoL/DAUIN)
• SynchroNET
Possibly involved industries/companies:

Title: Video Quality Evaluation on Large Scale Datasets
Proposer: Enrico Masala
Group website: http://media.polito.it
Summary of the proposal: Evaluating video quality by means of reliable objective algorithms is currently a very challenging task. However, it is a key element in optimizing multimedia communication systems, since providing good quality while saving bandwidth is extremely beneficial to the services, leading to more efficient network resource usage and, ultimately, cost savings.
Recently, in the Video Quality Experts Group (VQEG), a new approach is being investigated: studying, proposing and improving such algorithms by investigating very large-scale datasets of quality measures computed on video sequences processed with different coding and transmission parameters.
Creating, maintaining and updating such a large-scale dataset is challenging from many points of view: required storage space, bandwidth for data sharing, reproducibility of the research experiments, interpretation of the multi-dimensional data, and usage of such data in machine-learning based algorithms.
The PhD candidate will perform research in this area, developing strategies for dealing with this large video quality measure dataset, in particular working on the analysis and visualization of the data and on the proposal of new quality estimation algorithms.
Research objectives and methods: Objectives: Identify, develop and investigate new strategies for analyzing, visualizing and using quality measures available from, or contributed to, a very large-scale dataset, in order to propose new quality measures, mostly based on machine-learning systems. Currently, objective video quality algorithms belong to two categories: either rather old but widespread algorithms proposed in the literature, which however offer limited correlation with subjective perception, or better performing algorithms, typically proposed by companies, for which, however, the development process is obscure and there is a lack of sufficient data to build and improve on them (provided that patents allow it). The joint effort proposed in the context of VQEG seems to be a promising direction, in that it joins forces from different actors (both academia and industry) towards building a new quality measure or improving the performance of existing ones by taking contributions from multiple actors. This is similar to what has already been done in the MPEG community for the well-known video compression standards.
Methods: First, develop a set of instruments and tools to work with such a very large dataset, in terms of both contribution of new measures and efficient and rapid analysis of the existing ones, which at the frame level can be in the order of tens of millions of samples and could easily reach a billion if new analyses are performed. For this purpose, the cloud and/or the Polito@HPC infrastructure will be exploited if necessary, by adapting the tools to these environments. Then, a theoretical framework will be developed to understand how much of the available data can be predicted from other data already in the dataset. This will provide hints about the most interesting sequences and parameter configurations to scrutinize in order to advance with the development of a new measure or the improvement of existing ones. The research group has extensive expertise in video quality evaluation algorithms, for both coding artifacts and transmission impairments, which will be leveraged for this work. The performance of the proposed techniques will then be validated through standard subjectively-annotated databases.
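
The sketch below illustrates the prediction idea on invented data: it checks how well one quality measure can be linearly predicted from others already in a dataset, so that near-zero residuals would flag redundant measurements. Column names (PSNR, SSIM, VMAF, MOS) and all values are hypothetical stand-ins for the large-scale measures.

    # Hedged sketch: can a subjective score be predicted from objective ones?
    import numpy as np

    # Rows: per-sequence objective scores (columns: psnr, ssim, vmaf); invented.
    X = np.array([[38.1, 0.96, 85.0],
                  [31.2, 0.88, 60.0],
                  [27.5, 0.80, 42.0],
                  [41.0, 0.98, 92.0]])
    mos = np.array([4.3, 3.1, 2.2, 4.7])  # invented subjective scores

    # Fit mos ~ a*psnr + b*ssim + c*vmaf + intercept via least squares.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, mos, rcond=None)

    residuals = mos - A @ coef
    print(np.round(residuals, 3))  # small residuals -> the measures are redundant
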
Outline of work plan: In the first year, the PhD candidate will become familiar with the existing VQEG HEVC dataset and the set of tools necessary to operate on the data (e.g., video quality measures, including the recently proposed VMAF from Netflix). During this process, the fundamental abilities of identifying, modifying and extracting the most interesting parts of the dataset will be acquired. The activity will be performed strictly in the context of reproducible research. Since dealing with a very large dataset in a reproducible way is challenging in itself, the developed methodology is expected to initially lead to conference publications.
In the second and third years, building on the new instruments developed in the previous step, a theoretical framework will be developed in order to spot interesting behaviors in the metrics present in the dataset and analyze them in more detail. Strategies may include machine learning approaches to identify the most difficult-to-predict values (hence the most interesting sequences), which can then be further analyzed. On this basis, new proposals to improve the shortcomings of the existing metrics will be formulated and tested.
The whole activity will target journal publications presenting the theoretical modeling, the analysis of the results and the new proposals.
Expected target publications: Possible targets for research publications (well known to the proposer) include IEEE Transactions on Multimedia, Elsevier Signal Processing: Image Communication, Elsevier Computer Networks, IEEE International Conference on Communications, IEEE International Conference on Image Processing, IEEE International Conference on Multimedia and Expo
Current funded projects of the proposer related to the proposal: Collaboration with the Video Quality Expert Group (VQEG), http://www.vqeg.org. It is expected that joint project proposals will be submitted in collaboration with other members of the VQEG.
Possibly involved industries/companies: Companies currently interested in the VQEG JEG-Hybrid group.

Title: Transparent Algorithms for data analytics
Proposer: Tania Cerquitelli
Group website: http://dbdmg.polito.it
Summary of the proposal: An emerging area of data management and analytics, with a considerable impact on society, is that of transparent data algorithms. This is an important subject of debate in engineering and jurisprudence. Since algorithms are powerful and necessary tools behind a large part of the information we use every day, rendering them more transparent should improve their usability in various areas, not least because discrimination and biases have to be avoided.

The main research goal of this proposal is designing innovative data management solutions that offer a level of transparency greater than existing methods. The vast majority of existing data algorithms are opaque, that is, the internal algorithmic mechanics are not transparent: they produce output without making clear how it was obtained. Innovative approaches will be devised to make the algorithm software human-readable and usable by both analysts and end-users, to significantly increase transparency and user control.
Research objectives and methods: The research objectives address the following issues:

Transparent data collection and data characterization.
When an algorithm-driven application is transparent about its data collection processes and allows its users to control their personal information, it will be rewarded with increased trust, interaction and access to user data, and effective exploitation of the algorithm outcomes in real applications. Innovative strategies will be defined to collect data and keep users informed about it. Furthermore, innovative transparent criteria will be studied to model data distributions by exploiting unconventional but transparent statistical indexes.

Transparent algorithms for data analytics
As algorithms increasingly support different aspects of our life, they may be misused and unknowingly support bias and discrimination if they are opaque (i.e., the internal algorithmic mechanics are not transparent, in that they produce output without making it clear how they have done so). Innovative transparent solutions will be designed to produce more credible and reliable information and services. They will play a key role in a large variety of application domains by making the results of the data analytics process and its models widely accessible.
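
As one minimal example of what "transparent" can mean in practice, the sketch below (assuming scikit-learn is available) trains a shallow decision tree, whose full decision logic can be printed as human-readable rules; the loan-screening data and feature names are invented.

    # Hedged sketch: a decision tree as an example of a transparent model.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy screening data: [income_k_eur, years_employed] -> approved (1) or not (0).
    X = [[20, 1], [35, 4], [60, 8], [28, 2], [75, 10], [22, 1]]
    y = [0, 0, 1, 0, 1, 0]

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The complete decision logic, readable by analysts and end users alike.
    print(export_text(model, feature_names=["income_k_eur", "years_employed"]))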

Partial mining techniques for transparent solutions
When dealing with large data collections or complex datasets, the computational cost of transparent solutions for data analytics (and in some cases the feasibility of the process itself) can potentially become a critical bottleneck in data analysis. Adaptive partial mining strategies will be studied and tailored to transparent solutions to avoid the expensive and resource-consuming procedure of mining the entire dataset when not necessary.

Knowledge navigation.
The data mining process performed on real and complex databases may lead to the discovery of a huge amount of knowledge that is usually hard to process and analyze. Nevertheless, an in-depth analysis may be required to identify only the most actionable knowledge. The characterization of the significance of knowledge in terms of unconventional statistical criteria will be addressed to define how to navigate the discovered knowledge.
Outline of work plan: During the 1st year the candidate will study available solutions to enhance the transparency of data analytics algorithms and design innovative methods to allow users to control their personal data. New data collection methods will be defined to keep users informed about the data collection process. Moreover, new indices will be studied to model data distributions by exploiting unconventional but transparent statistical indexes and the underlying data structures. A set of descriptive metrics will be defined to significantly increase transparency and user data control.

During the 2nd year the candidate will study and define transparent and innovative algorithms for data analytics tasks by providing more credible and reliable information and services and by making the results of the data analytical process and its models widely accessible. Selecting specific subsets from which interesting knowledge can be independently derived is of paramount importance to bring hidden knowledge to the surface. Partial mining strategies will leverage the innovative transparent techniques developed for the characterization of input data, by focusing the analysis on the most significant data subsets.

During the 3rd year the candidate will design a smart user interface to effectively navigate actionable knowledge items discovered through the proposed solutions.

During the 2nd-3rd year the candidate will assess the proposed solutions in diverse real application domains (e.g., Industry 4.0, service context-awareness, building energy consumption).

During all three years the candidate will have the opportunity to cooperate in the development of solutions applied to the research project and to participate in conferences presenting results of the research activity.

Expected target publications: Any of the following journals
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TOIS (Trans. on Information Systems)
ACM TOIT (Trans. on Internet Technology)
Information sciences (Elsevier)
Expert systems with Applications (Elsevier)
Engineering Applications of Artificial Intelligence (Elsevier)
Journal of Big Data (Springer)

IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: Regional research project DISLOMAN (Dynamic Integrated ShopfLoor Operation MANagement for Industry 4.0)
Possibly involved industries/companies:

Title: Geospatial data analytics
Proposer: Paolo Garza
Group website: http://dbdmg.polito.it/
Summary of the proposal: Nowadays, thanks to the increasing availability of low-cost GPS receivers installed on mobile devices, large amounts of geo-referenced and spatial data are available. Satellite images, risk maps, and user generated content, such as pictures and tweets, are only a subset of the potential big geo-referenced data sources that can be generated and collected every day. The proper integration of the different data sources can be profitably exploited to build more accurate descriptive and predictive data mining models of several events. For instance, geo-referenced tweets can be used to support emergency management activities and build more accurate hazard forecasting models. Novel algorithms and data analytics solutions for geo-referenced and spatial data are needed to effectively extract useful knowledge and build models from the big amount of geo-referenced data currently available.

The PhD candidate will design, implement and evaluate new big data analytics solutions able to extract insights from big geospatial and geo-referenced data.

An ongoing European research project will allow the candidate to work in a stimulating international environment.
Research objectives and methods: The main objective of the research activity will be the design of novel data mining and big data analytics algorithms and solutions for the analysis of geo-referenced data, aiming at generating more accurate location-aware predictive models.

The main issues that must be addressed are the following.

Location-aware data mining algorithms. Current data mining algorithms consider the geo-spatial feature as just one of the many characteristics of the data. However, this feature plays an important role when we are interested in modeling events such as natural disasters. Hence, novel algorithms are needed to properly handle and exploit the spatial features of the data.
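
The sketch below (assuming scikit-learn is available) clusters geo-referenced points with DBSCAN and the haversine metric, which treats latitude and longitude as geometry on a sphere rather than as two generic attributes; the coordinates are illustrative.

    # Hedged sketch: location-aware clustering of geo-referenced points.
    import numpy as np
    from sklearn.cluster import DBSCAN

    EARTH_RADIUS_KM = 6371.0

    # (latitude, longitude) in degrees: two nearby points in Turin, one in Milan.
    points_deg = np.array([[45.0703, 7.6869],
                           [45.0720, 7.6850],
                           [45.4642, 9.1900]])

    # The haversine metric requires radians; eps is a distance on the unit sphere.
    db = DBSCAN(eps=1.0 / EARTH_RADIUS_KM,  # 1 km neighborhood
                min_samples=2,
                metric="haversine",
                algorithm="ball_tree").fit(np.radians(points_deg))

    print(db.labels_)  # e.g., [0, 0, -1]: a Turin cluster, the Milan point is noise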

Scalability. The amounts of geospatial and geo-referenced data have increased significantly in recent years. Hence, Big Data solutions must be exploited to analyze them. However, traditional geospatial systems are not designed for big data, whereas Big Data frameworks are not designed for spatial data. Hence, new approaches must be proposed to manage big collections of geo-referenced data.

Heterogeneity. Several heterogeneous geospatial sources are available (e.g., risk maps, tweets, satellite images). Each source represents a different facet of the analyzed event and provides important insights about it. The efficient and effective integration of the available spatial data sources is an important issue that must be addressed in order to build more accurate predictive models.
Outline of work plan: The work plan for the three years is organized as follows.

1st year. Analysis of the state-of-the-art algorithms and data analytics frameworks for big geospatial data. Based on the analysis of the state-of-the-art, pros and cons of the current solutions will be identified and preliminary algorithms will be designed to optimize and improve the available approaches.

2nd year. Specific domains related to geospatial data (e.g., the hazard and critical event management domain) will be considered, and new data mining and analytics algorithms will be designed, developed and evaluated on the selected domains.

3rd year. The algorithms designed during the first two years will be improved and generalized in order to be effectively applied in different domains.

During the second-third year, the candidate will have the opportunity to spend a period of time abroad in a leading research center.
Expected target publications: Any of the following journals
ACM Transactions on Database Systems (TODS)
ACM Transactions on Knowledge Discovery in Data (TKDD)
IEEE Transactions on Big Data (TBD)
IEEE Transactions on Knowledge and Data Engineering (TKDE)
Journal of Big Data

IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: I-REACT - "Improving Resilience to Emergencies through Advanced Cyber Technologies", H2020 European project (http://www.i-react.eu/)
Possibly involved industries/companies: ISMB

Title: Mining heterogeneous data collections from complex application domains
Proposer: Silvia Chiusano
Group website: dbdmg.polito.it
Summary of the proposal: In the last few years, the use of Information and Communication Technologies has made available a huge amount of heterogeneous data in various complex application domains. For example, in the urban scenario, Internet of Things (IoT) systems generate and capture massive data collections describing citizens’ exploitation and perception of services available in the urban area (e.g., mobility, safety perception). In health care systems, electronic health records allow storing a variety of information about patients, such as adopted treatments and monitored physiological conditions. Extracting knowledge from these data collections is still a daunting task, because they are generally too big and heterogeneous to be processed through currently available data analysis techniques.

The proposed research activity will be focused on the study and development of proper techniques for both the management and the analysis of such huge volumes of heterogeneous data. Different application domains will be considered as example case studies. The proposed activity involves multidisciplinary skills such as databases, data mining and advanced programming.
Research objectives and methods: The proposed research activity will be focused on the study and development of proper techniques for both the management and the analysis of huge volumes of heterogeneous data from complex application domains. During the PhD, different application scenarios will be selected as reference case studies for the design of proper data analytics solutions (e.g., citizen mobility in the urban scenario).

The research objectives address the following issues. To properly describe the different facets of the considered application domain, huge collections of heterogeneous data should be collected, integrated, stored, and analysed.
Proper solutions should be devised to deal with the critical issues that arise from high data dimensionality, heterogeneous data content, and massive data cardinality. More specifically, (i) suitable data fusion techniques should be defined to support the data integration phase, and (ii) proper data representations should be devised to store the resulting dataset (e.g., based on NoSQL databases). Moreover, (iii) the analysis of this massive volume of data demands distributed processing algorithms and technologies, which will be studied and evaluated. Another issue includes (iv) considering the different possible platforms for collecting and analysing data, and developing ad-hoc solutions for them. For example, applications based on mobile devices can enhance, on the one hand, the acquisition of richer user-generated data collections and, on the other hand, the accessibility of the mined knowledge to the end user. In the urban scenario, these applications can facilitate the collection of data on citizens’ conditions or their perception of provided services, but also support citizens in accessing valuable urban-related information mined through the data analysis process.
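
As a minimal illustration of point (i), the sketch below (assuming pandas is available) fuses two heterogeneous urban sources, sensor readings and citizen reports, by joining them on a shared district/hour key; both schemas and all values are invented.

    # Hedged sketch: simple fusion of two heterogeneous urban data sources.
    import pandas as pd

    sensors = pd.DataFrame({
        "district": ["center", "center", "north"],
        "hour": [8, 9, 8],
        "avg_speed_kmh": [14.0, 11.5, 32.0],
    })
    reports = pd.DataFrame({
        "district": ["center", "north"],
        "hour": [9, 8],
        "perceived_congestion": ["high", "low"],
    })

    # Integrate the two facets of the mobility scenario into one dataset.
    fused = sensors.merge(reports, on=["district", "hour"], how="left")
    print(fused)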

During the PhD, new solutions will be studied and developed to address issues listed above, and they will be evaluated and compared with state-of-the-art approaches to assess their performance and improvements. The experimental evaluation will be conducted on real data collections.
Outline of work plan: During the 1st year the student will start by considering one reference application domain (e.g., mobility in the urban scenario, or medical treatments in the healthcare area). First, the student will analyse the critical issues from the data analysis perspective for the considered application domain (e.g., data dimensionality, heterogeneous data types to be considered, data cardinality, relevant knowledge to be mined, etc.). Then, starting from the results of the above analysis, the student will perform an exploratory evaluation of existing approaches and platforms for data integration, storage and mining. The student will present a preliminary proposal for the optimization of these approaches.

During the 2nd year, based on the activity carried out in the 1st year, the student will design and develop innovative mining strategies to efficiently extract useful knowledge in the considered domain, aimed at overcoming the weaknesses of existing approaches. Different data mining techniques will be evaluated for data analysis (such as clustering, classification, and association analysis) based on the type of knowledge to be mined. During the 2nd and 3rd years, the student will also assess the proposed solutions by considering additional reference application domains. The experimental evaluation will be conducted on real data collections.

During the 3rd year, the student will further improve the proposed solutions, both in the design and development, on the considered application domains.

During the PhD the student will have the opportunity to participate in conferences presenting the results of the research activity and to cooperate in the development of solutions applied to the research projects.
Expected target publications: Any of the following journals:
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TKDD (Trans. on Knowledge Discovery in Data)
ACM TOIS (Trans. on Information Systems)
International Journal of Medical Informatics (Elsevier)

IEEE/ACM International Conferences
Current funded projects of the proposer related to the proposal: MIE (Mobilità Intelligente Ecosostenibile): Bando Cluster Tecnologico Nazionale “Tecnologie per le Smart Communities”; Ente Finanziatore: MIUR (Ministero dell’istruzione, Università e Ricerca);
S[m2]ART (Smart Metro Quadro): Bando Smart City&Communities; Ente finanziatore: MIUR (Ministero dell’istruzione, Università e Ricerca);
Possibly involved industries/companies: Currently, the DBDMG (DataBase and Data Mining Group) has research cooperations on the analysis of medical data with the School of Health and Human Performance, Dublin City University, Dublin, Ireland, and with the Exercise Pathophysiology Laboratory, Cardiac Rehabilitation Division, Salvatore Maugeri Foundation IRCCS, Scientific Institute of Veruno, Veruno (NO), Italy.

Title: Pattern recognition and artificial intelligence techniques for bioimaging applications.
Proposer: Enrico Macii
Group website: http://eda.polito.it/
Summary of the proposal: With the recent advance and diffusion of bioimaging technologies, computerized image analysis has become fundamental in many fields of biomedicine and biotechnology. The main challenge in this context is the extreme variability of the images, where "biological" noise (e.g. different types of cells and tissues coexisting in the same specimens) adds to a general lack of standards in the image generation and acquisition process.

This PhD aims at investigating computerized image analysis techniques able to extract knowledge from multi-modal biological images (e.g. identify 2D or 3D regions of interest in a specimen, categorize different stages of a disease, recognize cancer, etc.) without requiring user interaction. This will improve the repeatability of image interpretation in many clinical applications, tackle the challenges of variability, and possibly open the way to a deeper understanding of important biological processes at different scales (tissue-level, cell-level, sub-cellular level).

For this purpose, the PhD candidate will explore and integrate different approaches:
- image processing techniques, including methods for the automated transformation, enhancement, filtering, segmentation and encoding of digital images
- artificial intelligence and pattern recognition techniques, including computational statistics, machine learning or deep learning techniques applied to visual patterns.

Research objectives and methods: With the continuous development of advanced techniques and imaging devices, the impact of biological imaging on life science and medicine is rapidly growing. Biological images of cells and tissues have proven critical to seek answers to many important problems related to pathogenesis, as well as to disease diagnosis and prognosis and drug target validation. Currently, scientists often resort to slow, manual analysis to extract information from the images, which is time-consuming, extremely subject to inter- and intra-observer variance, and definitely not scalable for studies on large-scale bioimage databases.
To visualize, trace, quantify and model cellular and molecular processes with high spatial and temporal resolution is the key to a deeper understanding of physiology and pathogenesis and a fundamental support for basic and applied sciences and bioengineering. This motivates our research project in the field of computerized bioimage analysis.

Our purpose is the automated analysis of biological images across different modalities and scales, from the tissue to the molecular level. Target applications include the improvement of both therapy and pharmacogenesis, through sub-cellular and molecular analysis.
At each scale (tissue, cellular, sub-cellular), the PhD candidate will investigate and design novel algorithms and tools for:
1. segmentation of specific regions of interest in the images (sub-cellular regions, individual cells and parts of tissues), either in 2D or 3D
2. image feature extraction, feature selection and feature encoding
3. pattern recognition and pattern classification, including computational statistics, machine learning or deep learning techniques applied to visual patterns.
4. quantification and modeling of molecular processes based on image features (a minimal sketch of steps 1-3 follows below)
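The following is a toy sketch of steps 1-3 under simplifying assumptions: a synthetic grayscale image stands in for a specimen, scikit-image provides segmentation and region features, and a scikit-learn classifier trained on dummy labels stands in for the pattern classification stage.

import numpy as np
from skimage import filters, measure
from sklearn.ensemble import RandomForestClassifier

# Step 1 - segmentation: Otsu thresholding plus connected-component labeling.
rng = np.random.default_rng(0)
image = rng.random((128, 128))
image[30:60, 30:60] += 1.0               # a bright, cell-like region
mask = image > filters.threshold_otsu(image)
labels = measure.label(mask)

# Step 2 - feature extraction: simple shape features per segmented region.
props = measure.regionprops(labels)
features = np.array([[p.area, p.eccentricity] for p in props])

# Step 3 - pattern classification over region features (labels are dummy here).
y = rng.integers(0, 2, size=len(features))
clf = RandomForestClassifier(n_estimators=10).fit(features, y)
print(clf.predict(features[:1]))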

The project candidate should have a good foundation in computer programming (any C-like programming language) and image processing techniques, as well as a strong inclination toward research in a multi-disciplinary environment. Good command of Matlab and object-oriented programming is a plus.
Outline of work plan: The work plan is structured into three main phases, requiring the gradual development of skills in the field of computer vision, artificial intelligence and pattern recognition.

First year

The PhD candidate should
- study basics and state of the art of image processing (automated transformation, enhancement, filtering, segmentation and encoding of digital images), artificial intelligence and pattern recognition (computational statistics, machine learning, deep learning techniques applied to visual patterns)
- attend PhD courses and seminars in Politecnico
- work on applications related to the automated segmentation and categorization of visual patterns in 2D static images

At the end of the first year, the candidate is expected to have published and presented at least two papers at international conferences in the field of bioimage analysis and pattern recognition.

Second year

The PhD candidate should
- work on more complex applications requiring 2D or 3D dynamic image analysis at different scales (e.g. automated cell tracking)
- participate in the tutoring of master's thesis students
- have a 6-month research experience abroad, in a well-reputed research center or academic institution

At the end of the second year, the candidate is expected to have published and presented at least three papers at international conferences in the field of bioimage analysis and pattern recognition and to have submitted at least one paper to an international peer-reviewed journal.

Third year

The PhD candidate should
- investigate applications requiring the analysis of complex 3D images as well as the integration and registration of heterogeneous images (e.g. images at different scales or acquired from different patients/imaging modalities)
- be actively involved in preparing proposals of funded projects

At the end of the third year, the candidate is expected to have published at least two papers in international peer-reviewed journals.
Expected target publications: International peer-reviewed journals in the field of medical imaging, pattern recognition and computerized image analysis. For example:

• IEEE Transactions on Medical Imaging
• IEEE Transactions on Biomedical Engineering
• Pattern Recognition, Elsevier
• Medical Image Analysis, Elsevier
• Bioinformatics, Oxford Journals

As well as well-reputed international conferences. For example:
• ICPR
• ISBI
• EMBC
• BIBM
• BIOSTEC BIOIMAGING
Current funded projects of the proposer related to the proposal: The proposal is not linked to specific funded projects at the moment. Nonetheless, the research activity of the candidate will partially exploit skills, experience and results on multimodal image processing from past European projects (e.g. FP7 ENIAC CSI and DENECOR). Furthermore, the research activity will benefit from established collaborations with physicians, pathologists and biotechnologists, both in Italy and abroad.
Possibly involved industries/companies: Philips Healthcare, STMicroelectronics (partners of the FP7 ENIAC CSI and DENECOR projects)

Title: Virtual Design and Validation of Standard Cells Technology for High Reliability Cores
Proposer: Luca Sterpone
Group website: www.cad.polito.it
Summary of the proposal: Several safety-critical application fields are today evaluating the usage of VLSI nanometer devices for their computation. These devices, generally implemented using a Standard Cells Technology (SCT) process, are extremely appealing for their performance, flexibility, and the high level of integration they can guarantee. During the last 5 years, research studies have focused on different types of topics, mainly related to the creation of design, verification and optimization tools. With the increasing pervasiveness of embedded systems within fields such as automotive and avionics, technology providers and end users such as carmakers and spacecraft manufacturers have started to evaluate the usage of nanometer technologies for their applications. Safety-critical application dependability is characterized by several attributes: availability, reliability, safety and maintainability. All these aspects should be effectively configured in order to obtain a system compliant with the required application standard. The topic of the present PhD program proposal is the investigation of the technology implementation process at the physical level (layout) to address reliability concerns. The analysis will include permanent faults induced by external factors, such as accumulated radiation (Total Ionizing Dose) or technology degradation, and transient faults induced by alpha particles that provoke bit-flips within the implemented circuits. The research will be oriented to the development of innovative Virtualization Computer Aided Design Platforms and Virtual Design Algorithms (VDA) able to speed up the whole implementation phases.
Research objectives and methods: The main objectives of the proposal deal with the creation of new software platforms and algorithms for the design validation and reliability characterization of the Standard Cells Technology process. As widely known, VLSI technology scaling changes circuit sensitivity to environmentally (radiation) induced errors. These errors range from temporary disruption of the device up to permanent loss of critical functionalities. The present proposal has the objective of analyzing, investigating and developing a new set of methods suitable for Standard Cell VLSI technology and able to mitigate or reduce the impact of failures without affecting performance in terms of operational frequency and area. The methodologies will be essentially deployed at the physical level, directly accessing the layout information of the Standard Cell implementation. The benefits introduced by these objectives will have a large impact on several fields, involving not only the automotive area but also VLSI technology designers and computer science.
Several preliminary experimental results, obtained by simulation and by comparison with radiation-test data, demonstrated the necessity to push the investigation to a lower level. Different sets of hypothetical testing applications have been developed in order to estimate the occurrence of permanent failures induced by Total Ionizing Dose (TID) phenomena and the occurrence of transient faults. The results show the need for a better investigation of the internal structure of the device, pushing the analysis down to the physical layers.
The capability of investigating a VLSI circuit at the layout level requires a set of software tools and libraries that are currently available in the present research group. The software tool set includes Synopsys Synplify and Cadence, and relies on Graphic Database System (GDS) files to manage the circuit geometry.
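As a small illustration of layout-level access, the sketch below reads a GDS file and enumerates the polygons of each cell, the raw geometry on which sensitive-area analyses can be built; it assumes the open-source gdspy library, and the file name is a hypothetical placeholder.

import gdspy

lib = gdspy.GdsLibrary(infile="cells_45nm.gds")   # hypothetical cell library
for name, cell in lib.cells.items():
    # Polygons grouped by the (layer, datatype) pairs of the GDS stream.
    for (layer, datatype), polygons in cell.get_polygons(by_spec=True).items():
        print(name, layer, datatype, len(polygons))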
The methodology of the present research proposal combines the state-of-the-art methods used at the layout level, applied to 45 nm and 15 nm Open Cell libraries. All the analyses will be focused on the realistic implementation of physical devices thanks to the collaboration with research groups at IMEC (Belgium) and the Cognitronics and Sensor Systems group (University of Bielefeld). The research will include the participation of companies such as NanoXplore (France), currently at the state of the art in the implementation of fault-resilient (i.e., radiation-hardened) VLSI technology, and of end-user agencies and companies such as the European Space Agency (ESA) and Airbus DS.
The flow of the proposal is divided into three main activities: analysis and characterization of 45 and 15 nm Standard Cell technologies, deployment of virtualization software for circuit analysis, and verification and effect classification. Finally, the realization of a technology-oriented EDA tool providing a complete tool chain for the 45 and 15 nm technology process, based on a software virtualization approach, is expected as the final step.
The failure effects and modeling will be applied on the Nanoxplore technology within the Framework of the VEGAS (http://cordis.europa.eu/project/rcn/199833_en.html) European funded project.
These nanometer devices will be analyzed considering the configuration basics of programmable array devices, and new testing methods for these devices will be developed. The testing methods consist in the creation of a suitable test bed (consisting of a testing board and a control board) that will be used in radiation environments.
The results provided by the analysis and classification allow defining a set of cases related to the device and technology behavior in failure conditions. The realization of the Virtualized EDA tool will support designers in the implementation of robust algorithms and architectural mitigation solutions.
Outline of work plan: The activity is organized into the following work packages:

WPA. (1 year) Technology Characterization and Virtualized model creation

This work package is devoted to the modeling of the 45 and 15 nm technologies. At the end of WPA, a deliverable will be available, consisting of the experimental characterization data obtained from physical radiation tests and from simulation-based analysis by means of the developed tools.

WPB. (1st year/2nd year) Virtual Architecture Framework

This work package is oriented to the development of a software framework including a set of tools for the design, validation and failure analysis of the technology. In addition, the techniques will be oriented to the implementation of virtual-framework solutions able to reduce the error probability estimated during WPA. Preliminarily, a subset of traditional fault tolerance techniques will be applied, considering Duplication With Comparison (DWC) with Error Detection and Triple Modular Redundancy (TMR) approaches.
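As a toy illustration of the TMR principle mentioned in WPB, the sketch below votes bit-wise among three replica outputs and masks a single injected bit-flip; the 32-bit word and the single-fault model are illustrative assumptions.

import random

def tmr_vote(a, b, c):
    # Bit-wise majority of three replica outputs.
    return (a & b) | (a & c) | (b & c)

golden = 0xDEADBEEF
replicas = [golden, golden, golden]
replicas[random.randrange(3)] ^= 1 << random.randrange(32)  # inject a bit-flip
assert tmr_vote(*replicas) == golden  # the single-replica fault is masked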

WPC. (2nd year) Extended Virtualized EDA framework

This work package is oriented to the development of a software framework capable of introducing the requested technology reliability level into the considered circuit.

WPD. (2nd year/3rd year) Final Benchmark Technology Demonstrator

This work package is oriented to the development of a benchmark set of demonstration circuits based on the available technology. The activity is twofold: on one side, a set of IP cores will be evaluated thanks to the developed Virtualized EDA tool; on the other side, a set of end-user case studies will be tested and effectively evaluated in harsh environments.
Expected target publications: IEEE Transactions on Computers
IEEE Transactions on VLSI
IEEE Design & Test
IEEE Transactions on Reliability
IEEE Transactions on CAD
ACM Transactions on Reconfigurable Technology and Systems
IEEE Transactions on Emerging Topics in Computing
Current funded projects of the proposer related to the proposal: Sterpone-ESA (2016) and Sterpone-VEGAS (2016)

Possibly involved industries/companies: The proposal is expected to provide a stronger synergy between Politecnico di Torino, NanoXplore, the European Space Agency and technology end-user companies (Airbus and Thales Alenia Space).

Title: Algorithms and methods for knowledge extraction from IoT data streams in Smart City and Smart Industry applications
Proposer: Enrico Macii
Group website: http://eda.polito.it/
Summary of the proposal: IoT devices will generate zettabytes of data by 2020. Of this large amount of data, only a small fraction will actually be exploited, unless knowledge extraction methods and technologies are applied more deeply and widely to these data in real application contexts such as buildings, energy infrastructures, and industrial plants.

This PhD aims at investigating algorithms and methods for knowledge extraction from meters and sensors installed in buildings and industrial plants, by studying, designing and applying machine learning techniques and models. The research will focus on smart city and smart industry applications and will be developed in the context of European and regional research projects in these areas.

For this purpose, the PhD candidate will explore the following topics:

- Middleware, platforms and tools for IoT data collection, processing and visualization
- Machine learning and knowledge extraction techniques from IoT data streams
- Applications to building energy modeling, smart grid, smart factory applications
Research objectives and methods: Research objectives

Smart systems such as wireless sensor nodes for metering and environmental monitoring applications are increasingly widespread inside buildings, factories and infrastructures, including energy plants and grids. Despite the large amount of data generated by these devices, their potential is still weakly exploited and poorly understood. On the other side, techniques such as deep learning are emerging as methods for unlocking a wiser and more profitable exploitation of the data generated by smart devices. Possible applications range from energy modelling for energy management applications to worker safety improvement in industrial environments. For instance, electric consumption curve analysis from high-sampling-frequency smart meters can provide information about appliance usage and power consumption. For this reason, there is nowadays a very interesting opportunity to create and apply complex methodologies exploiting the large amount of data generated by IoT devices.

Methods

1. IoT platforms
IoT software platforms are increasingly widespread, allowing easy data collection from IoT devices by exploiting protocols such as REST and MQTT and providing support for data processing (stream processors, machine learning modules) and storage. These platforms enable the development of applications for event detection and the design of optimization and prediction methods. They have been developed both in the context of scientific applications, such as ThingSpeak or IBM Bluemix, and in the context of industrial applications, such as ThingWorx and Predix. The student will become familiar with these platforms, which will represent his/her development environment.
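A minimal sketch of MQTT-based data collection is shown below, assuming the paho-mqtt client (1.x callback API); the broker address and topic hierarchy are hypothetical placeholders.

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # In a real platform the reading would be forwarded to storage/processing.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)
client.subscribe("plant/+/energy")   # '+' matches one level, e.g. a meter id
client.loop_forever()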

2. Languages and modules for machine learning
These frameworks can be used both for classification and for prediction. Frameworks such as TensorFlow, for example, can be used to design deep learning networks in a fast and easy way. The research will focus not only on the application of existing networks but also on the design of innovative networks suitable for the smart city and smart industry applications to be pursued in this research.
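A minimal sketch of such a network in TensorFlow/Keras is given below: an LSTM regressor over fixed-length time-series windows. The window length, layer sizes and the dummy target are illustrative assumptions, not choices prescribed by the proposal.

import numpy as np
import tensorflow as tf

X = np.random.rand(100, 24, 1).astype("float32")  # 100 windows of 24 samples
y = X.mean(axis=1)                                # dummy regression target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(32),       # recurrent layer for time-series data
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)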

3. Web services and microservices
The design of applications for IoT data processing requires the exploitation of a microservices-oriented approach, where different modules are connected through web APIs. This allows scalability and upgradeability, which are concrete requirements in the targeted application domains. As such, the student will learn how to design web-service-based software systems.
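A minimal sketch of one such microservice, assuming Flask, is shown below; the endpoint path and the in-memory store are hypothetical placeholders for a real storage backend.

from flask import Flask, jsonify

app = Flask(__name__)
READINGS = {"meter-01": [12.3, 12.9, 13.1]}   # placeholder data store

@app.route("/api/v1/readings/<sensor_id>")
def get_readings(sensor_id):
    # Each microservice exposes one narrow capability through a web API.
    return jsonify(sensor_id=sensor_id, values=READINGS.get(sensor_id, []))

if __name__ == "__main__":
    app.run(port=8080)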
Outline of work plan: The work plan is structured into three main phases, requiring the gradual understanding of platform, programming models and methods.

First year

The PhD candidate should:
- Become familiar with IoT data related to the targeted application domains
- Learn how to develop an IoT system exploiting commercial and open IoT platforms such as SiteWhere and ThingSpeak
- Learn web service technologies
- Learn how to design and implement data processing and learning pipelines with frameworks such as TensorFlow
- Start digging into deep learning networks for IoT (in particular recurrent networks used for time series analysis)

At the end of the first year, the candidate is expected to have published and presented at least one international conference proceeding paper.

Second year

The PhD candidate should:
- Deepen the understanding of deep learning networks and challenge their applicability in smart city and smart industry contexts
- Apply one of the studied networks to a real case study
- Create a first implementation of an IoT data collection, processing and learning system on the targeted case study
- Identify metrics for the evaluation of the system

At the end of the second year, the candidate is expected to have published and presented at least two papers at international conferences and to have submitted at least one paper to an international peer-reviewed journal.

Third year

The PhD candidate should:
- Finalize and consolidate the IoT data processing system
- Evaluate and validate the results against the defined metrics

At the end of the third year, the candidate is expected to have published at least two papers in international peer-reviewed journals.

During the PhD period, the candidate must spend 5-6 months abroad, in a well-reputed research center or academic institution, and attend PhD courses and seminars.
Expected target publications: International peer-reviewed journals in the field of IoT devices, software for industrial applications, and machine learning, as well as journals concerning specific application domains. For example:

- IEEE Transactions on Industrial Informatics
- IEEE Transactions on Smart Grid
- IEEE IoT Magazine

As well as well-reputed international conferences.
Current funded projects of the proposer related to the proposal: FLEXMETER (H2020, 2015-2018)
EEB (CLUSTER smart communities, 2014-2017)
TRIBUTE (FP7, 2013-2017)
Possibly involved industries/companies: STMicroelectronics, IREN, EON

Title: Development of innovative methodologies for DNA/RNA sequences analysis exploiting machine learning and brain-inspired techniques
Proposer: Enrico Macii
Group website: http://eda.polito.it/
Summary of the proposal: This PhD proposal is aimed at i) investigating novel algorithms for the analysis of genetic data provided by latest-generation sequencers, and ii) developing novel methodologies, including machine learning, deep learning, and neuromorphic methods, for the translation of genomic findings into clinics. Next-generation sequencing technologies provide substrings of genetic code in a short time and at low cost. This has allowed a widespread adoption of these machines in many laboratories, research centres and hospitals across the world, addressing personalized medicine. Moreover, recent technological progress has made available to the research community several different types of molecular data that enable the discovery of substantially new information. In fact, many complex diseases are characterized by large genetic variations that make standard therapies ineffective. The bottleneck in the exploitation of sequencing technology is the development of data analysis tools, which are still inaccurate and incomplete. The purpose of this PhD proposal is the development of effective algorithms for the discovery of genomic aberrations and their impact on pathologies. Moreover, learning techniques and brain-inspired methods (e.g. SNNs) will be exploited to highlight the correlations among aberrations, diagnosis and therapy, thereby translating biological discoveries into clinics and personalized medicine.
Research objectives and methods: The software solutions developed during the three years of the PhD will aim at providing physicians and scientists with powerful tools for understanding the molecular processes underlying complex diseases. Moreover, the biological findings, consisting of several different types of genetic data, will be exploited to build classifiers for the prediction of diagnoses and therapeutic responses/resistances. The sequencing data, ranging from several tens of GBytes to a few TBytes per individual, are substrings of a few hundred bases that need to be assembled to reconstruct the genetic code. This is a tricky process because of computational issues related to the amount of data, errors in the sequencing process, redundancy in the genetic code, and the several variants in the substrings that characterize the heterogeneity of each individual. However, the main challenges are related to the presence of genetic variants, mutations, and complex aberrations, which are actually the algorithms' targets as potential drivers of multi-factor diseases. The software solutions currently available are often ineffective in meeting the needs of the healthcare industry and medical research, because they are too inaccurate or not suitable for the purpose. Moreover, there is a lack of methods for the study of the interactions among different aberrations occurring in the same patient, for the analysis of RNA editing mechanisms and non-coding RNA sequencing data, strategic for the study of genetic regulation mechanisms, and for the prediction of therapeutic response/resistance given a patient's genomic signatures.
During the three years, the PhD student will acquire knowledge about the latest generation of sequencing technology and the related computational/algorithmic issues; moreover, he/she will acquire the ability to design and apply computationally effective and efficient algorithmic solutions to biological/medical problems, as well as expertise in optimization techniques on cluster platforms. Among others, the PhD student will acquire skills in text and pattern matching techniques, data clustering, machine and brain-inspired learning (including deep learning), probabilistic analysis, network design, and parallel programming. Finally, he/she will acquire knowledge about molecular biology, essential for understanding the biological mechanisms and issues and for being effective in algorithm design.
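As a toy illustration of the text/pattern-matching side of these skills, the sketch below turns reads into fixed-length k-mer count vectors, a common starting point for learning methods on sequence data; the sequences are made up.

from collections import Counter
from itertools import product

def kmer_counts(seq, k=3):
    # Count every k-length substring and map it onto the fixed A/C/G/T k-mer space.
    alphabet = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(kmer, 0) for kmer in alphabet]

reads = ["ACGTACGTGG", "TTGACCGTAA"]
features = [kmer_counts(r) for r in reads]   # ready for a classifier
print(len(features[0]))                      # 64 features for k = 3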

The research group led by Prof. Enrico Macii has fifteen years of research experience in the development of algorithms for the analysis of causative genetic mechanisms in complex diseases. In the specific area of the current PhD proposal, the group of Prof. Macii has developed several algorithms for RNA-sequencing data analysis and implemented corresponding optimized versions on cluster architectures. The group has established several national and international collaborations with companies and with prestigious universities and research groups in computer science and biomedical fields (such as, among others, Columbia University and Cornell University in New York, the Helsinki University of Technology in Finland, the EPFL in Switzerland, and the Marie Curie Institute in France). Finally, the group participated as a consultant in the funded FP7 European NGS-PTL project and participates as an associate partner in the recently funded IMI2 European project HARMONY.

The project candidate should have good computer programming skills (C, C++, Python, scripting languages, Java) as well as a strong inclination toward research.
Outline of work plan: 1st year:
-Development of algorithms based on text matching and machine learning methods for the prediction of functional gene fusions in RNA-sequencing data, based on a computational model of features and on ensemble learning techniques revealing the most informative features
-Development of algorithms based on text matching and probabilistic methods for the detection of RNA base mutations and of RNA editing events in RNA-sequencing data
-Development of algorithms based on text matching and supervised learning methods for the detection of both non-coding and long non-coding RNA molecules in RNA-sequencing data

2nd year:
-Development of algorithms based on machine learning, deep learning and brain-inspired methods, and statistical models for i) mRNA/non-coding RNA/proteins microcircuits detection, and ii) the characterization of the impact of Single Nucleotide Polymorphisms (SNPs) within enhancer (i.e., short DNA regions that can be bound by proteins to regulate mRNA production) sequences.

3rd year:
-Development of algorithms for the embedding of heterogeneous molecular data into a unitary framework; for this purpose, network-based frameworks will be considered, also exploiting multiplex networks and community detection methods, aimed at highlighting the biological and functional relationships between the variables of the model.
- Development of classifiers predicting diagnosis and therapeutic response/resistance; these methods will exploit machine/deep learning and brain-inspired methods (e.g., SNNs), and statistics.

The algorithms planned in the proposal rely on different methodologies and thus offer the PhD student very broad training opportunities. Moreover, Prof. Macii's group is among the partners of different European projects whose activities include the development of algorithms for the genetic analysis of different kinds of cancer and neurodegenerative diseases. These diseases are increasingly frequent, and therefore they constitute a major burden for families and healthcare systems. The proposed doctoral program addresses both neurodegenerative and cancer diseases, equally high-impact scientific and social contexts.
Expected target publications: Peer-reviewed journals, e.g.:
• IEEE Transactions on Computational Biology and Bioinformatics
• IEEE Journal of Biomedical and Health Informatics (ex IEEE Transactions on Information Technology in Biomedicine)
• IEEE Transactions on Biomedical Engineering
• Bioinformatics (Oxford Ed)
• BMC Bioinformatics
• Nature methods
• Nature communications
• PLOS Computational Biology
• BMC Systems Biology
Current funded projects of the proposer related to the proposal: • ICT FET Flagship "Human Brain Project"
• EU IMI2 “HARMONY: Healthcare Alliance for Resourceful Medicines Offensive against Neoplasms in HematologY”
Possibly involved industries/companies: • Merck Serono, Genève, Switzerland
• Personal Genomics, Verona, Italy
• Genomnia, Milano, Italy
• Fasteris, Genève, Switzerland

Title: Bilevel stochastic optimization problems for Urban Mobility and City Logistics
Proposer: Guido Perboli
Group website: http://www.orgroup.polito.it/
Summary of the proposal: The proposal considers the problem of pricing services in different application settings, including road networks and logistics operations. When dealing with real urban logistics applications, the aforementioned problems become large and are normally affected by uncertainty in some parameters (e.g., transportation costs, congestion, strong competition, etc.).
In this context, one of the most suitable approaches is to model the problem as a bilevel optimization problem.
Indeed, bilevel programs are well suited to model hierarchical decision-making processes where a decision maker (leader) explicitly integrates the reaction of another one (follower) into its decision-making process.
This representation makes it possible to explicitly capture the strategic behaviour of users/consumers in the pricing problem.
Unfortunately, solving large-sized bilevel optimization problems with integer variables or affected by uncertainty is still a challenge in the literature.
This research proposal aims to fill this gap, deriving new exact and heuristic methods for integer and stochastic bilevel programs.
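A toy numerical sketch of the leader-follower structure is given below: the leader sets a toll on one route, the follower best-responds by picking the cheaper route, and the leader searches a price grid for maximum revenue. All numbers are made-up illustrations, not data from the proposal.

import numpy as np

demand = 100.0      # users on the network
alt_cost = 5.0      # cost of the untolled alternative route
base_cost = 2.0     # intrinsic cost of the tolled route

def follower_flow(price):
    # Follower's reaction: use the tolled route only if it is cheaper.
    return demand if base_cost + price < alt_cost else 0.0

prices = np.linspace(0.0, 6.0, 601)
revenue = [p * follower_flow(p) for p in prices]
print(f"best toll ~ {prices[int(np.argmax(revenue))]:.2f}")  # just below 3.00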

These activities will be carried out in collaboration with the Urban Mobility and Logistics Systems lab of Politecnico di Torino and the INOCS team of INRIA Lille. The PhD candidate will be co-supervised by Prof. Luce Brotcorne, leader of the INOCS group and a worldwide specialist in bilevel programming.
Research objectives and methods: The objectives of this research project are grouped into three macro-objectives:

Integer Bilevel Programs:
- Define price-setting models for last-mile logistics operations
- Develop exact and heuristic methods based on the mathematical structure of the problems for solving the aforementioned problems

Stochastic Bilevel Programs:
- Define stochastic price-setting models for network optimization
- Develop exact and heuristic methods based on the mathematical structure of the problems and on the knowledge of the probability distribution of the uncertain parameters

Last Mile and City Logistics Instances:
- Test the new models and methods in real Last Mile, E-commerce and City Logistics projects in collaboration with the ICE Lab.
Outline of work plan: The PhD candidate will develop his/her research in both research centers (ICE and INOCS). In more detail, he/she will spend about half of the time at INRIA Lille.

PHASE I (I semester). The first period of the activity will be dedicated to the study of the state of the art of Integer and Stochastic Bilevel Programming, as well as of the Last Mile and City Logistics applications.

PHASE II (II and III semester). Identification of the main methodological difficulties in the design of efficient solution methods for the Integer Bilevel models studied in this proposal. Identification of the main properties that allow the solution methods to converge to high-quality solutions within a limited computational time.

PHASE III (IV and V semester). Identification of the main methodological difficulties in the design of efficient solution methods for the Stochastic Bilevel models studied in this proposal. Identification of the main properties enabling the definition of solution methods that converge to high-quality solutions within a limited computational time. In particular, the PhD candidate will focus on the specific issues arising from a stochastic behaviour in the first level (leader) or in the second one (follower).

PHASE IV (V and VI semester). Prototyping and case studies. In this phase the results of Phases II and III will be applied to real case studies. Several projects are currently in progress, which require experimentation and the validation of the proposed solutions.
Expected target publications: - Journal of Heuristics
- Transportation Science
- Transportation Research part A-E
- Interfaces
- Omega - The International Journal of Management Science
- Management Science
- International Journal of Technology Management
- Computers & Operations Research
- European Journal of Operational Research
Current funded projects of the proposer related to the proposal: - Urban Mobility and Logistics Systems Interdepartmental Lab
- Open AGORA
Possibly involved industries/companies: TNT, DHL, Amazon

Title: Image processing for machine vision applications
Proposer: Bartolomeo Montrucchio
Group website: http://grains.polito.it/
Summary of the proposal: Machine vision applications, both in industrial and research environments, are becoming ubiquitous. Applications like automatic recognition of materials, checking of components and verification of food quality are used in all major companies. At the same time, in research, machine vision is used to automate all image-related activities. Open source tools (such as ImageJ) are used by scientists to manage images coming from microscopes, telescopes and other imaging systems.
Therefore, this proposal addresses the creation of new specific competences in this framework. This PhD proposal has the target of investigating machine vision applications by means of image processing, mainly in the following contexts:
- manufacturing (e.g. food industry)
- applied sciences, such as in biology or civil engineering
The PhD candidate will be requested to use an interdisciplinary approach, with particular interest in fast image processing techniques, since methods like artificial intelligence are often not applicable in industrial contexts due to strict cycle-time requirements (which cannot be relaxed).
Research objectives and methods: Image processing, in particular for machine vision, is one of the fastest-developing sectors in today's industry. Research, pure and applied, is therefore very important, since machine vision is used for validating and managing basically all production workflows.

The research objectives cover both manufacturing and research contexts. In manufacturing, examples are the food, tire and welding industries. Research examples are image analysis for biological purposes, analysis of thermal infrared images, and image analysis in civil engineering applications. Interdisciplinary aspects are very important, since competence in image processing alone is not sufficient without additional domain knowledge.

Methods include multispectral imaging, thermal imaging, and UV-induced fluorescence imaging. Algorithms are used, as usual in image processing, for preparing images (e.g. flat-field correction), for segmenting them, for recognizing objects (pattern recognition and other aspects, also statistical), and finally for visualizing results (scientific visualization). Since the basic idea is to produce fast and reliable algorithms (also for industrial applications), a strong co-design will be applied to the production of the image (mainly by means of illumination) and to image analysis, in order to avoid using complex approaches where a simpler one could be more than sufficient if image shooting details are carefully controlled.
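A minimal sketch of such a fast pipeline is shown below, assuming OpenCV (cv2) and a synthetic frame standing in for a camera image: flat-field-style background correction, Otsu segmentation, and contour-based object detection.

import cv2
import numpy as np

frame = np.full((200, 200), 40, np.uint8)
cv2.circle(frame, (100, 100), 30, 220, -1)         # a bright "defect"

background = cv2.GaussianBlur(frame, (51, 51), 0)  # estimate the illumination
corrected = cv2.divide(frame, background, scale=128)

_, mask = cv2.threshold(corrected, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} object(s) found")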

The ideal candidate should have good programming skills, mainly in C/C++ and Java, and should know well the main techniques of image processing, in particular those that can be parallelized. In fact, in many cases an optimized GPU implementation may be required, since real-time behavior is often needed in machine vision, especially in industrial applications.
Outline of work plan: The work plan is structured in the three years of the PhD program:
1- In the first year, the PhD student should improve his/her knowledge of image processing, mainly covering the aspects not seen in the previous curriculum; he/she should also follow most of the required courses at Politecnico. At least one or two conference papers will be submitted during the first year. The conference works will be presented by the PhD student himself/herself.
2- In the second year, the work will focus both on designing and implementing new algorithms and on preparing a first journal paper, together with another conference paper. Interdisciplinary aspects will also be considered. Credits for teaching activities will also be finalized.
3- In the third year, the work will be completed with at least one publication in a selected journal. Participation in the preparation of proposals for funded projects will be required. If possible, the candidate will also be required to participate in writing an international patent.
Expected target publications: The target publications will be the main conferences and journals related to image processing, computer vision and specifically machine vision. Interdisciplinary conferences and journals linked to the activities will also be considered. Since parallel implementations (mainly on GPU) could be explored, journals related to parallel computing are also relevant. Journals and conferences will be selected mainly among those from IEEE, ACM, Elsevier and Springer, considering their indexes and coherence with the 09/H1 sector. Examples are (journals):
IEEE Transactions on Image Processing
Elsevier Pattern Recognition
IEEE Transactions on Parallel and Distributed Systems
ACM Transactions on Graphics
(conferences)
IEEE ICIP
ACM SIGGRAPH
Current funded projects of the proposer related to the proposal: The proposer currently has two funded projects with important manufacturing companies (one of them is Magneti Marelli) that can be considered related to the proposal; both of them concern machine vision aspects. Moreover, there is a project (in which the proposer is one of the task leaders of a WP) within the so-called Fabbrica Intelligente initiative, whose purpose is also to help industries in the Industry 4.0 framework; this third project is with Ferrero.
Possibly involved industries/companies: Many different industries/companies could be involved in this proposal during the PhD period. It is important to note that industries often prefer patents to publications for reasons of industrial property rights. For this reason, patents will possibly also be considered, given the current Italian National Scientific Qualification requirements.

Title: New methods for set-membership system identification and data-driven robust control design based on customized convex relaxation algorithms.
Proposer: Vito Cerone
Group website:
Summary of the proposal: The research project is focused on the development of efficient algorithms to solve nonconvex polynomial optimization problems, which arise from the field of system identification and data-driven control design. Convex-relaxation approaches based on sum-of-square decomposition will be exploited in order to relax the formulated nonconvex problems into a collection of convex semidefinite programming (SDP) problems. Solutions of such problems will be computed by means of customized parallel interior-point algorithms, which will be designed and implemented on parallel computing platforms. The derived algorithms will be applied to experimental data collected from automotive real-world problems.
Research objectives and methods: The research will be focused on the development of new methods and algorithms to efficiently solve polynomial optimization problems arising from the identification and control fields. The main topics of the research project are summarized in the following three parts.

Part I: Convex relaxations for system identification and control design

This part is focused on the formulation of suitable convex relaxation techniques for constrained polynomial optimization problems, arising from some open issues in the identification and control fields. Typical examples include:

- Enforcement of structural constraints in the identification of linear and nonlinear interconnected multivariable models:

System identification procedures aim at deriving mathematical models of physical systems on the basis of a set of input-output measurements. Although a priori information on the internal structure of the system to be identified is often available, most of the proposed techniques do not exploit it in the identification algorithms, since the formal inclusion of such structural constraints makes the estimation problem difficult to solve. An aim of the research project is to show that SDP optimization techniques can be used to reliably enforce structural constraints in the identification of quite general multivariable block-structured nonlinear systems.

- Design of fixed structure data-driven robust controllers:

The problem of robust control design for uncertain systems has been the subject of extensive research efforts in the last three decades, and many powerful tools have been developed. However, most of the proposed techniques lead to the design of high-order dynamical controllers, which are sometimes too complex to be applied in industrial settings, where simple controller structures involving a small number of tuning parameters (e.g. PID controllers) are typically used. Unfortunately, the model-based approaches proposed in the literature for the design of fixed-structure robust controllers often lead to quite complex nonconvex problems. The aim of the research is to reformulate the fixed-structure robust controller design problem in the framework of data-driven control, recently proposed in the literature. The idea is to develop an original formulation of the data-driven control problem, based on set-membership estimation theory, in order to overcome the main limitations of the approaches already proposed in the literature.

Part II: Development of customized SDP solvers

This part is focused on the reduction of the computational load in solving SDP problems arising from the relaxation of polynomial optimization problems through SOS decomposition. In particular, the peculiar structure of such SDP problems will be exploited in order to design interior-point algorithms more efficient than the ones employed in general-purpose SDP solvers, in terms of both memory storage and computational time. A possible way to develop efficient algorithms is to exploit the particular structure of the formulated SDP-relaxed problems in order to design, implement and test parallel interior-point algorithms on commercial computing platforms, which consist of multi-core processors (e.g. Intel multi-core) or many-core graphics processing units (e.g. NVIDIA or AMD graphics cards). Such platforms are available on the market at relatively low cost; for instance, an NVIDIA GeForce GTX 590 graphics card with 1024 cores can be purchased for 800 USD.
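For illustration, the sketch below solves a tiny SDP with CVXPY of the kind produced by SOS relaxations (a PSD, Gram-matrix-like variable under linear trace constraints); the cost matrix and constraint are placeholders for the structured data of a relaxed identification problem.

import cvxpy as cp
import numpy as np

n = 3
C = np.diag([3.0, 2.0, 1.0])
X = cp.Variable((n, n), PSD=True)          # positive semidefinite variable
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(X) == 1])
prob.solve()
print(prob.value)   # 1.0: mass concentrates on the smallest eigenvalue of C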

PART III: Application to automotive real-world problems

The derived algorithms will be applied to modeling, identification and control of dual-clutch systems and other automotive problems.
Outline of work plan: The research project is planned to last three years.
The time schedule is as follows:

FIRST YEAR

January 1st – June 30th :
the first six months of the project will be devoted to the study of the literature with reference to the subject of system identification, control design, convex relaxations of polynomial optimization problems, interior-point algorithms for semidefinite programming.

Milestone 1:
report of the results available in the literature; selection of a set of open problems to be addressed.

July 1st – December 31st:
the second part of the first year will be devoted to the analysis of the open problems selected in Milestone 1, with the aim of deriving some new convex relaxation-based methodologies and algorithms.

Milestone 2:
formulation of the considered problems in terms of optimization; development of new convex relaxation-based algorithms for solving the formulated optimization problems; theoretical analysis of the proposed algorithms.

SECOND YEAR

January 1st – June 30th :

the first half of the second year of the project will be focused on the study of the interior-point algorithms available in the literature for the solution of convex optimization problem (with particular focus on semidefinite problems).

July 1st – December 31st:
the objective of this part will be to develop new efficient interior-point algorithms, specifically tailored to the characteristics of the semidefinite programming problems obtained from convex relaxation of the considered optimization problems, arising from system identification and control design.

Milestone 3:
development of new interior-point algorithms and comparison with existing general-purpose software packages for semidefinite programming.

THIRD YEAR

January 1st – December 31st:
the last year of the project will be devoted to explore parallel implementation of the derived interior-point algorithms and to apply the derived algorithms to modeling, identification and control of a real dual-clutch system and other real-world problems from the automotive field.
Expected target publications: JOURNALS:
IEEE Transactions on Automatic Control, Automatica, IEEE Transactions on Control Systems Technology, Systems & Control Letters, International Journal of Robust and Nonlinear Control.

CONFERENCES:
IEEE Conference on Decision and Control, American Control Conference, IFAC Symposium on System Identification, IFAC World Congress
Current funded projects of the proposer related to the proposal: The derived algorithms are expected to be profitably applied to the problem of modeling, identification and control of dual-clutch systems, which is the object of the research contract (led by the Proposer) between DAUIN and Fiat Chrysler Automobiles (FCA) titled "Non Manual Transmission technologies as key enabler for high performance and efficient powertrain development".
Possibly involved industries/companies: Fiat Chrysler Automobiles (FCA) will be involved in the application of the derived algorithms to the modeling, identification and control problems arising from the contract "Non Manual Transmission technologies as key enabler for high performance and efficient powertrain development".

Title: Methodologies, HW equipment and SW Instruments for the characterization of Nanotechnologies during Reliability Testing
Proposer: Paolo Bernardi
Group website: http://www.cad.polito.it
Summary of the proposal: Current design and manufacturing technologies for integrated circuits have to guarantee increasing yield and dependability performance. In addition, the number of integrated circuits included in critical environments, such as automotive and avionics, is continuously growing, and semiconductor manufacturers have to guarantee the reliability of the released components for their entire life cycle, which can be up to 10-15 years. Measuring and guaranteeing the required extremely low failure rates implies devising suitable accelerated testing procedures, understanding and predicting the failure mechanisms, and correlating test data with information from customer returns and production parameters.

For the sake of reliability in this field, the research shall propose and evaluate effective methodologies to:
1. Extract failure information during the stress/test activity without impacting its efficiency
2. Store the failure information that may derive from the analysis of a large number of devices
3. Interpret the obtained results, possibly relating failures to the manufacturing technology and returning indications of the aspects that can be improved during manufacturing.

These methodologies will also target analog, mixed-signal and power circuits, which specifically require a multidisciplinary approach for the application and analysis of parametric test procedures.
Research objectives and methods: The research aims at studying, developing and evaluating new methodologies for guaranteeing the dependability of electronic devices and systems, by means of prediction, tolerance, removal and prevention of failures.
Such a goal requires a synergic approach that includes the understanding of circuit design, device modeling and analysis, the definition and application of tests, and the collection and examination of data.

Commonly, the measurement of the reliability of an IC is done using Burn-In and Reliability testers. Such instruments permit accelerating the aging of a component by running suitable tests under limit conditions such as high temperature, high voltage, and high frequency. This type of operation is normally called stress/test, and it is performed on a variable number of ICs (up to some thousands), at either wafer or package level.
A short stress/test duration ("burn-in") allows discarding those components that would exhibit an early failure during their mission, and permits mitigating the so-called infant mortality problem. This hypothesis is based on a failure distribution model known as the "bathtub curve", which indicates that ICs showing a good behavior after an initial stress period will work correctly for the rest of their life cycle. On the contrary, long stress/test times permit evaluating the length of the IC life cycle and identifying and studying the defect mechanisms.
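A small numerical sketch of this reasoning is given below: fitting a Weibull model to synthetic failure times with SciPy; an estimated shape parameter below 1 indicates a decreasing hazard rate, i.e. the early-life failure regime that burn-in aims to screen out. The data are simulated, not measured.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = stats.weibull_min.rvs(0.7, scale=1000.0, size=500, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(times, floc=0)
print(f"estimated shape = {shape:.2f}")   # < 1 -> infant mortality regime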

General objectives.
Development of a multi-disciplinary platform that includes electronic circuit design, SW engineering, embedded operating systems and database aspects, oriented to the reliability characterization of ICs.

Industrial objectives.
Identification of failure mechanisms due to aging, and remodeling of the bathtub curve for nanometric technologies.
Optimization of reliability test procedures and equipment, in particular concerning the operating system.

Academic objectives.
Dissemination in the following fields:
a. Design-for-Diagnosis
b. Automatic generation of diagnostic patterns oriented to nanometric fault models
c. Efficiency of the stress/test process
d. Data mining of diagnostic information
e. Statistical interpretation of diagnostic data over large IC groups.

The following paragraphs summarize the envisioned research methods.

1. Failure information extraction during stress/test
Work related to this aspect of the problem can be divided into two parts:
a. Design of a SW platform for the insertion of Design-for-Test and Design-for-Diagnosis structures in IC designs, in order to retrieve precise information on faulty components
b. Embedded operating system optimization to perform data collection without impacting the stress/test time

2. Information storage
Design and realization of a database suitable for storing the downloaded information, whose structure accommodates a large number of measure types and temporal information.
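A minimal sketch of such a store, using SQLite for illustration, is shown below; the schema (device, measure type, timestamped value) is a hypothetical simplification of the industrial database discussed here.

import sqlite3

con = sqlite3.connect("stress_test.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS measurement (
        device_id    TEXT NOT NULL,
        measure_type TEXT NOT NULL,   -- e.g. leakage current, Vdd margin
        stress_hours REAL NOT NULL,   -- temporal information
        value        REAL NOT NULL
    )
""")
con.execute("INSERT INTO measurement VALUES (?, ?, ?, ?)",
            ("IC-0001", "leakage_uA", 168.0, 3.42))
con.commit()
con.close()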

3. Result interpretation
Design of a SW platform able to analyze the failure information stored in the DB to provide the following information:
a. The temporal progression of failures, including circuit parameters
b. The set of possible locations of systematic problems across the various stress types
c. The statistical analysis of failures over IC groups.
Outline of work plan: The research activity will start with a literature survey addressing the state-of-the-art in reliability characterization of integrated circuits, aiming at the classification of currently available methodologies for both digital and analog/mixed signal/power devices and at identifying the current open issues about test, diagnosis and reliability characterization.

Then, methodologies to improve reliability characterization will be outlined and then developed and evaluated.
The activities will first concentrate on the definition of test methodologies supported by a suitable tester architecture possibly boosted with the use of integrated Design for Testability/Reliability structures on chip.
A fundamental component of the reliability test equipment is a database able to acquire and store parametric data from device testing during both reliability characterization and burn-in: such a software architecture will be designed and developed taking into account industrial requirements.
The last part of the work will deal with the development of analysis methodologies for tracking the occurrence of failures and for studying failure mechanisms, in order to provide useful feedback to design and manufacturing.

The research activities will be based on industry-relevant case studies, and the proposed ideas and the scientific results will be periodically disclosed at conferences, for collecting useful feedback from the international research arena, and in journal and magazine articles.
Expected target publications: DATE Conference, IEEE International Test Conference, IEEE European Test Symposium, IEEE VLSI Test Symposium, IEEE International Reliability Physics Symposium, IEEE Transactions on VLSI, IEEE Transactions on Industrial Informatics ,IEEE Transactions on Reliability, Springer Journal of Electronic Testing.
Current funded projects of the proposer related to the proposal: The proposer has been collaborating with STMicroelectronics for many years, with regular funding since 2010 on automotive IC testing; the most recently funded project (2016) is titled "SW BIST implementation for Floating Point Unit of 40nm Automotive Microcontrollers", which ended in December 2016.
INFINEON - Assessment and Optimization of Hierarchical CPU-based Test for MultiCore eFlash applying DfT Hardware
XILINX - LPD (Low Power Domain) subsystem of the MPSoC device
Possibly involved industries/companies: STMicroelectronics

Title: New strategies and methodologies for System Level Test (SLT)
Proposer: Ernesto Sanchez
Group website: http://www.cad.polito.it/
Summary of the proposal: This Ph.D. proposal targets some of the most relevant issues in System Level Test (SLT). In a few words, considering a SoC, SLT tries to replicate an almost real scenario and provides the system with a set of test patterns corresponding to an actual application. Then, the system's responses are evaluated to determine whether the system works correctly or not.
Predominantly, the Ph.D. proposal intends to cover some of the most critical aspects still open in SLT, for example SLT-oriented fault models, the lack of metrics able to determine the quality of an SLT solution, and the creation of new strategies that consider the interaction of all the cores that compose the whole system.
Research objectives and methods: The increasing complexity of modern system-on-chip (SoC) circuits, which include in the same device memory, processing, and custom cores, together with the increasing susceptibility to many different defect types that may affect any one of the cores embedded in the SoC, makes it more and more challenging to devise an effective test plan able to achieve a sufficiently low defect level at the end of production. As a matter of fact, it is more and more evident in industrial practice that an increasing number of defects escape the current test solutions, resulting in an increasing Defect Level figure, which is often incompatible with the addressed market and application scenario.
Actually, most of the test processes performed at the end of production mainly target single-core defects, leaving aside system defects that can arise from the system integration process. For example, some delay defects that pass the single-core testing procedures may emerge when the different cores are integrated together. However, today it is still not clear how to test these system defects, nor how to identify them when the whole system is the target.
An interesting solution to overcome this problem, which is starting to be used by some companies, consists in the introduction of an additional step performed at the end of the test process. This new test step mimics the typical application scenario the device is expected to work in when in the operational environment. It uses an application setup that stresses the whole system by running pieces of code on the processor(s) and performing typical tasks that involve other cores in the SoC, in conjunction with a set of carefully selected input stimuli. The behavior of the whole system is observed during the execution of the mimicked application and compared with the expected results. Clearly, the targeted faults are different from the ones targeted in the single-core testing processes, since during this step the idea is to cover faults that emerge from the integration of the different cores. This step, often denoted as System Level Test, or SLT, is clearly an evolution of the well-known functional approach that is widely used in board testing and was very common even for circuit test before the introduction of design for testability (DfT) strategies. However, the peculiarity of SLT is that it targets the whole system and is based on stimuli coming from applications, whereas the commonly adopted functional test (especially at the device level) is often generated targeting specific faults.
SLT seems very promising since it can take advantage of the following characteristics: the testing applications can be run at speed, the system is considered as a whole, and the operating conditions are very similar to the ones experienced during in-field operation. As a matter of fact, SLT is likely able to detect delay faults, as well as system defects related to the interconnections and interactions among the different modules that compose the SoC.
In any case, SLT still has many open issues, which represent the main goals of the Ph.D. proposal described here. In particular, the Ph.D. student will be required to face the following problems:
- It is currently difficult to anticipate the real testing effectiveness of a given application. In fact, the effectiveness of a given SLT solution is basically assessed only by counting the number of devices returning from the field, and then possibly adapting it to cover new defect types as soon as they emerge.
- There are no metrics available to determine the quality of a given solution, since SLT mainly targets unmodeled faults.
- There are no sufficiently mature methodologies able to produce high-quality SLT procedures.
Outline of work plan: The main research objectives to be pursued during the Ph.D. proposal presented here are summarized in the following:
- To propose a set of new fault models that can be effectively targeted by SLT
- To devise a metric, or set of metrics, able to determine the quality of a given SLT solution
- To propose a series of methodologies able to generate good SLT solutions.
In order to reach the proposed goals, the Ph.D. student will be required to perform the following tasks:
1. Study and analysis of the state of the art related to SLT
- SLT related fault models
- SLT quality and effectiveness metrics
- SLT generation methodologies
2. Development and setup of a test case (in cooperation with STMicroelectronics), taking into account that
- the system should be modeled at different abstraction levels (architectural, RTL, and gate level)
- the system may contain the following modules:
* Processor cores (possibly more than 1)
* Memory system (including not only RAM, but also FLASH modules)
* Peripheral cores (I/O as well as control ones)
* Custom logic modules.
3. Benchmark selection and integration, including
- possible application scenarios provided by STMicroelectronics
- MiBench benchmarks (http://vhosts.eecs.umich.edu/mibench), possibly integrated with additional applications
4. Proof of concept of state-of-the-art fault models on the developed case study and application benchmarks
5. Definition of more adequate SLT fault models
6. Proof of concept on the case study of the proposed SLT fault models
7. Definition and creation of new strategies to improve SLT applications
- Software based solutions
- Hardware based solutions
8. Proof of concept on the case study of the proposed SLT strategies.
Expected target publications: During the first year, the candidate is expected to publish at least 2 papers in international conferences related to the research area, for example: DATE, ETS, VTS, LATS, IOLTS, DDECS, MTV.
It is also expected that by the end of the second year the developed research will produce enough material to be submitted to a relevant international journal, such as TCAD, TVLSI, ToC, or TEC.
Current funded projects of the proposer related to the proposal: Funds from research projects sponsored by STMicroelectronics, Infineon and Magneti Marelli are available for supporting this proposal.
Possibly involved industries/companies:The Ph.D. proposal presented here closely matches the industry requirements oriented to improving the post-production testing phase based on System Level Test. In particular, thanks to the well-established cooperation between DAUIN and STMicroelectronics, we are confident that STMicroelectronics may benefit from the results of this Ph.D. proposal by implementing SLT solutions more efficiently.

Title: Effective design and validation techniques for high-reliability and secure applications
Proposer: Luca Sterpone
Group website: http://www.cad.polito.it
Summary of the proposal: The proposal will consider high-reliability applications for the aerospace, automotive, medical or industrial domains that need to work reliably and safely in an aggressive working environment (e.g., under radiation effects). In addition, many applications must perform correctly over a wide spectrum of operating conditions. The PhD program will be focused on error management techniques, methodologies and instruments to detect and/or correct errors and reconfigure the design to meet the environmental constraints. Embedded hardware instruments will be able to detect faults and errors induced by the technology, design, application and environment. Following error analysis, the reconfiguration of the design features will enable continuous safe, reliable behavior in the given context, environment or application. The proposal will focus mostly on hardware capabilities that will be transparent to the application or assisted by a light software layer.
Furthermore, the proposal will also develop new techniques able to support the designers of secure and reliable nanoelectronic systems in the validation of their correctness. In particular, the proposal will address the validation of the mechanisms adopted by the designer to guarantee security and reliability. These activities will consider not only the space of all possible scenarios where the electronic system is used, but also a further dimension represented by the possible hardware faults and external attacks the system is designed to face.
Research objectives and methods: The PhD proposal will target novel tools and methodologies for the analysis of ultra-nanometer system IPs for different purposes, such as the detection and/or correction of multiple categories of faults induced by (a) the environment (i.e., single event effects such as soft errors, single event upsets, single event transients, multiple events); (b) the application (i.e., dynamic operation, marginal operating conditions – low power, high speed); or (c) the design itself (i.e., aging, inescapable non-zero defect rates). Ensuring design reliability and longevity is a demanding requirement for today’s high-reliability applications, as specified by current standards such as ISO 26262, DO-254, etc.
Furthermore, the activity targets the Next Generation of ultrascale Reconfigurable FPGAs (NG-FPGA), with particular emphasis on the development of predictive tools for the analysis of radiation effects (i.e., at ground level, in space, in high-energy physics), considering phenomena such as Single Event Effects, Total Ionizing Dose and Permanent Latch-Up with respect to the device application field. The tools/methods will be applied to the European BRAVE FPGA, generating a systematic comparison with experimental results to fully validate this type of approach.
The PhD student will devise new solutions which take into account the wide literature and the set of solutions already available. The proposed solutions will be assessed not only in terms of theoretical effectiveness, but also in terms of practical applicability when adopted in a real design flow. Some meaningful test cases identified in cooperation with the industrial partners of the project will be used for this purpose. The expected outcome is a final prototype environment implementing the proposed techniques (ideally integrated into a commercial design flow platform), together with a report detailing the implemented techniques and the results of the evaluation experiments.
The methodology of the proposed PhD program will be aligned with the supporting company: IROC Technologies.
The proposal will offer the opportunity to work on current, critical applications from the automotive, networking and aerospace domains, as provided by IROC partners in the framework of technical and commercial collaborations.
Outline of work plan: The activity is organized into the following work packages:

WPA. (1 year) Error management methods

This work package is devoted to the design of instruments and methods for detecting, correcting or self-managing errors in a design depending on the environmental constraints. The error analysis and the related reconfiguration of the design will ensure the continuity of safe system operation. The work package will focus on both hardware and software capabilities, with particular emphasis on providing an effective design-transparent solution supported by specific software framework layers.
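As a purely illustrative example of one classical ingredient of such error management, the Python sketch below shows majority voting over modular redundancy, assuming redundant results are already available: a single erroneous replica is both detected and corrected, while a lost majority is flagged as uncorrectable. The actual work package targets hardware instruments, of which this is only a behavioral abstraction.

from collections import Counter

def majority_vote(replicas):
    # Vote over redundant results (e.g., from triplicated logic): with
    # three replicas, a single wrong copy is detected and out-voted.
    winner, votes = Counter(replicas).most_common(1)[0]
    if votes == len(replicas):
        return winner, "no error"
    if votes > len(replicas) // 2:
        return winner, "error detected and corrected"
    raise RuntimeError("no majority: error detected but not correctable")

print(majority_vote([42, 42, 42]))  # (42, 'no error')
print(majority_vote([42, 99, 42]))  # (42, 'error detected and corrected')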

WPB. (1st year/2nd year) Ultra-nanometer systems reliability standards management

This work package will target radiation-induced effects on a wide range of ultra-nanometer devices: from FPGAs to Reconfigurable Computing Fabric (RCF) cores. The reliability and applicability of the methods will be assessed against the relevant standards, such as DO-254 or ISO 26262.

WPC. (2nd year) Future generation of Self-Repairing Design techniques

This work package will put emphasis on software instruments to control and manage the self-correction of reconfigurable systems with respect to permanent faults, such as those induced by accumulated radiation (Total Ionizing Dose), or design ruptures such as Single Event Latch-Up (SEL).

WPD. (2nd year/3rd year) Software framework for security validation

This work package will be focused on the development of new solutions for assessing the security of the system in a real design flow. The method will integrate different state-of-the-art techniques spanning software validation, hardware validation, evolutionary computation and design for validation.

WPE (3rd year) Experimental demonstrator

This work package will be oriented to the implementation of a prototype environment for the detailed analysis and realistic application of the error management techniques. A set of evaluation experiments will be executed with the developed experimental demonstrator.
Expected target publications: IEEE Transactions on Computers
IEEE Transactions on VLSI
IEEE Design and Test
IEEE Transactions on Reliability
IEEE Transactions on CAD
ACM Transactions on Reconfigurable Technology and Systems
IEEE Transactions on Emerging Topics in Computing
Current funded projects of the proposer related to the proposal: The RESCUE project financially supports the entire PhD program scholarship.
Possibly involved industries/companies:IROC Technologies

Title: New Techniques for on-line fault detection
Proposer: Matteo Sonza Reorda
Group website: http://www.cad.polito.it/
Summary of the proposal: This proposal focuses on the development and assessment of new techniques for detecting permanent faults arising during the operational phase of an electronic system, e.g., due to ageing phenomena. This task will be accomplished by resorting to suitable Design for Testability mechanisms, or to a functional approach, or to a clever combination of both. The activities of this ESR will build on the expertise of the POLITO group in this area, developed in cooperation with automotive and aerospace companies. The activities will in particular explore new challenges which have recently become important, with special emphasis on the development of on-line test procedures for GPU-based systems (due to their increasingly important role in many automotive applications related to driver assistance and autonomous driving).
Research objectives and methods: The growing adoption of electronics in systems where dependability is a concern asks for strict requirements in terms of fault prevention, detection, and management. These requirements are sometimes directly set by companies; in other cases, standards (e.g., ISO 26262) and regulations specify which dependability figures must be guaranteed by such systems. In order to meet these requirements, it is crucial to detect possible hardware faults affecting a system as quickly as possible, thus reducing the chance that they produce any failure. Test procedures activated during the operational life represent the typical solution.
In some domains, such as automotive, different players contribute to the final system. Hence, a key point is how to match the above requirements with systems composed of products (e.g., hardware and software IP cores, devices, boards) developed by different companies, each of which retains ownership of its products and wants to maintain full control over the related intellectual property. In other words, some of these products must be treated as black boxes: dependability must be estimated based on the information provided by the different companies, and the target dependability figures must be achieved without much control over the individual components of the system.
More in detail, the proposed activities will focus on the test procedures that must be run during the system’s operational life, aimed at checking whether any fault has affected it. These procedures are sometimes developed by the system company, in other cases by the companies controlling each component. In any case, they must guarantee the achievement of the target figures in terms of fault detection capabilities, as well as minimal intrusiveness into the system operations (e.g., in terms of duration and memory requirements).
The proposed activities will mainly focus on a functional approach, which better fits the black-box scenario. Special attention will be devoted to components, such as GPUs, which are increasingly adopted in automotive safety-critical systems (such as ADAS, or Advanced Driver-Assistance Systems). GPUs are mainly used to perform sensor-fusion processing, sometimes related to vision, and are characterized by high complexity and high performance. They are normally developed by very specialized companies (e.g., NVIDIA or Imagination Technologies) which hardly deliver any information about their internal structure, and it is therefore quite complex to develop test procedures able to effectively detect the hardware faults which may affect them.
The proposed activities will be based on the adoption of some simplified but representative open-core GPU modules, and will lead to the development of test procedures able to effectively test them through a functional approach. On the other side, the availability of a synthesizable model will allow the PhD student to quantitatively assess the ability of the test procedures to detect structural faults.
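A minimal sketch of the quantitative assessment mentioned above, assuming (for illustration only) a toy 8-bit adder in place of a GPU module: a functional test program is a fixed set of input stimuli, a structural fault is modeled as a stuck-at-0 line on the output bus, and the fault coverage is the fraction of modeled faults whose injection changes at least one test response with respect to the golden ones.

import random

def adder(a, b, stuck_bit=None):
    # Toy stand-in for a synthesizable module: an 8-bit adder whose
    # output bus can carry a stuck-at-0 fault on one line.
    r = (a + b) & 0xFF
    if stuck_bit is not None:
        r &= ~(1 << stuck_bit)
    return r

# Functional test program: input stimuli only, no structural knowledge.
random.seed(0)
TESTS = [(random.randrange(256), random.randrange(256)) for _ in range(20)]
GOLDEN = [adder(a, b) for a, b in TESTS]

# Fault simulation loop: a fault is detected if any response differs
# from the golden one; coverage = detected faults / total faults.
faults = range(8)  # one stuck-at-0 fault per output line
detected = sum(
    any(adder(a, b, f) != g for (a, b), g in zip(TESTS, GOLDEN))
    for f in faults
)
print("fault coverage: %d/%d" % (detected, len(faults)))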
Outline of work plan: The activities will be structured over the following actions:
1. Analysis of the literature in the field and of the solutions currently adopted by industries
2. Analysis of some open-core GPU modules and identification of a representative one; creation of an environment for assessing the fault coverage achievable with a given test procedure
3. Identification of solutions for the development of effective functional test procedures for the different modules in a GPU
4. Assessment of the effectiveness of the proposed solutions on the selected GPU model
5. Dissemination and discussion of the proposed solution within the international dependability community.
Expected target publications: The activities are expected to produce publications in the main conferences (e.g., ETS, IOLTS, ITC, VTS) and journals (e.g., IEEE TCAD, TVLSI, TCOMP, JETTA) in the area.
Current funded projects of the proposer related to the proposal: This project is fully supported by the H2020 European Project RESCUE (http://www.rescue-etn.eu/). The CAD Group has currently several other projects running in the area, including the H2020 projects MaMMoTH-Up and TUTORIAL.
Possibly involved industries/companies:According to the rules of the RESCUE project, the PhD student involved in the proposed activities will have to spend some period at one of the companies participating in the project (e.g., CADENCE). He/she is also expected to interact with the other companies the hosting group is already in touch with, including Bosch, Infineon, STMicroelectronics, Magneti Marelli.

Title: Effective techniques for secure and reliable system validation
Proposer: Matteo Sonza Reorda
Group website: http://www.cad.polito.it/
Summary of the proposal: This ESR project aims at developing new techniques to support the designer of secure and reliable nanoelectronic systems in the validation of their correctness and effectiveness. In particular, the project will address the validation of mechanisms adopted by the designer to guarantee security and reliability. This task requires considering not only the space of all possible scenarios where the system is used, but also the possible hardware faults and external attacks the system is designed to face. Assessing the correct functionality of the system with such a huge combination of possibilities can only be done by combining different techniques coming from different communities (e.g., software validation, hardware validation, hardware testing) and exploiting different paradigms (e.g., simulation, formal techniques, evolutionary computation, Design for Validation). Meaningful test cases identified in cooperation with the industrial partners of the project will be considered.
Research objectives and methods: The growing adoption of electronics in systems where safety is a concern asks for strict requirements in terms of fault prevention, detection, and management, so that the probability of occurrence of any failure due to a hardware fault is kept under an acceptable threshold. More recently, another issue has arisen, related to the attacks that may be purposely mounted against an electronic system to steal the information it stores, or to modify its behaviour. Due to the increasing frequency of these attacks, it is crucial that the system is designed in such a way that their chances of success are minimized (security).
The solutions adopted by the designer to meet the requirements in terms of both safety and security must be assessed in terms of correctness and effectiveness (validation). Methods and tools already exist to support companies in this task. However, the existing solutions are no longer able to solve the practical problems faced by industry in this area, for several reasons:
- the growing complexity of the target systems, making the available solutions often unable to provide results with acceptable computational requirements
- the continuous evolution in the set of faults to be considered when dealing with safety and in the set of possible attacks when dealing with security asks for new solutions
- the changing mechanisms which are adopted by the designers to face faults and attacks; new mechanisms are often not adequately supported by the existing validation techniques.
The purpose of the proposed PhD activity is to tackle the above issues and devise new solutions able to effectively support the designers of safe and secure electronic systems. The approach to be followed will prioritize hybrid solutions, in which different approaches such as simulation, formal verification, and heuristic techniques are combined.
Major emphasis will be put on fault simulation and fault injection techniques, given their widespread adoption in industry. The introduction of new test solutions is making the current algorithms and approaches unable to match the real constraints. Hence, new techniques are required, which can benefit from the expertise of the proposing research group in traditional fault simulation techniques. Moreover, the group already owns the most commonly adopted tools in this area, which can thus be used as a reference and as a basis on which to build new solutions.
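As an illustration of the kind of fault injection campaign such validation flows rely on, the following self-contained Python sketch (a behavioral toy, not the group's actual tooling) injects single bit-flips into the state of a small workload and classifies the outcomes, which is the usual first step when assessing a safety mechanism or a fault model:

import random

def workload(state):
    # Representative computation whose final result can be checked.
    return sum(x * x for x in state) % 65536

def injection_campaign(state, trials=1000):
    golden = workload(state)
    outcomes = {"masked": 0, "silent data corruption": 0, "crash": 0}
    for _ in range(trials):
        faulty = list(state)
        i = random.randrange(len(faulty))
        faulty[i] ^= 1 << random.randrange(16)  # single transient bit-flip
        try:
            result = workload(faulty)
            outcomes["masked" if result == golden
                     else "silent data corruption"] += 1
        except Exception:
            # Real campaigns also watch for hangs via timeouts.
            outcomes["crash"] += 1
    return outcomes

random.seed(1)
print(injection_campaign([7, 13, 255, 1024]))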
Outline of work plan: The activities will be structured over the following actions:
1. Analysis of the literature in the field and of the solutions and tools currently adopted by industries
2. Identification of the most relevant faults and attacks that have to be considered, together with some of the countermeasures used by the designers to face them; identification of the requirements to validate their effectiveness and correctness
3. Proposal of new solutions for the validation of the correctness and effectiveness of the mechanisms implemented by the designers to face safety and security, with special emphasis on fault simulation / injection techniques
4. Assessment of the effectiveness of the proposed solutions on some selected test cases.
Expected target publications: The activities are expected to produce publications in the main conferences (e.g., ETS, IOLTS, ITC, VTS) and journals (e.g., IEEE TCAD, TVLSI, TCOMP, JETTA) in the area.
Current funded projects of the proposer related to the proposal: This project is fully supported by the H2020 European Project RESCUE (http://www.rescue-etn.eu/). The CAD Group has currently several other projects running in the area, including the H2020 projects MaMMoTH-Up and TUTORIAL.
Possibly involved industries/companies:According to the rules of the RESCUE project, the PhD student involved in the proposed activities will have to spend some period at one of the companies participating in the project (e.g., iROC). He/she is also expected to interact with the other companies the hosting group is already in touch with, including Bosch, Infineon, STMicroelectronics, Magneti Marelli.

Title: Inside Big Data: diving deep into massive datasets
Proposer: Elena Baralis
Group website: http://dbdmg.polito.it/
Summary of the proposal: Many application domains are currently characterized by the availability of massive datasets (e.g., network traffic data, social network data), currently denoted as “big data”. Big Data stresses the limits of existing data mining techniques and sets new horizons for the design of innovative techniques to address data analysis.
The candidate will focus her/his research activities on data mining algorithms and techniques in Big Data environments. The candidate will design and evaluate new approaches that contribute to a paradigm-shift in distributed data mining, by addressing issues raised by the characteristics of Big Data such as data sparsity, horizontal scaling, and parallel computation. More specifically, the following different topics are envisioned:
(a) Clustering and classification algorithms
(b) Time series analysis
(c) Interplay between deep learning and semantic knowledge
(d) Interpretable machine learning techniques
The research activity will target application domains characterized by varying data sparsity levels (e.g., network traffic characterization, sensor data analysis). Algorithm development and testing will take place in the newly established research laboratory bigdata@polito.
Ongoing collaboration with universities (e.g., Eurecom, UPM Madrid) and companies (e.g., GM Powertrain) will allow the candidate to work in a stimulating international environment.
Research objectives and methods: The research objectives address the following key issues, which are relevant for all the research topics outlined above.
- Algorithm optimization. Available large-scale data mining algorithms are poorly optimized for cloud computing environments. Hence, the goal is to improve the scalability of existing techniques and their performance in massively parallel computing environments.
- Horizontal scalability. Few, if any, complex mining techniques can currently be applied to petabyte-scale datasets. Thus, the goal is to design and develop advanced algorithms (e.g., clustering techniques) with horizontally scalable approaches, such as those based on MapReduce and shared columnar storage backends.
- Incrementality. The goal is to design and develop incremental algorithms that are able to cope with result refinement over different incremental runs (e.g., on historical data tracked over time) and that scale by adapting to new data without the need to re-analyze the entire dataset (see the sketch after this list).
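A minimal Python sketch of the incrementality goal, using scikit-learn's MiniBatchKMeans purely as a stand-in for the more advanced algorithms to be developed: each newly arrived batch refines the same model via partial_fit, and the historical data is never re-read.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
true_centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
model = MiniBatchKMeans(n_clusters=3, random_state=0)

# Each incremental run consumes only the new batch of data.
for _ in range(10):
    labels = rng.integers(0, 3, size=200)
    batch = true_centers[labels] + rng.normal(scale=0.4, size=(200, 2))
    model.partial_fit(batch)

print(np.round(model.cluster_centers_, 1))  # close to the true centers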

Research methods, besides the standard state-of-the-art study, include designing, developing, and testing algorithms on the Hadoop platform and its current and future evolutions (e.g., Spark), to understand the exploitable advantages and to tackle the drawbacks from the data mining perspective. Since data mining algorithms are often iterative and may require access to large portions of the data to extract features and aggregate measurements, a deep understanding of the currently available Big Data platforms is a key requirement to advance the state of the art and to radically innovate in a rapidly growing field.
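For instance, the PySpark code below is a minimal sketch of horizontally scalable clustering with the stock MLlib k-means (the dataset path and feature columns are hypothetical): the same script runs unchanged on a laptop or on a cluster, with Spark partitioning the data and parallelizing every iteration across executors. The research challenge lies precisely where such stock algorithms stop scaling or fit the data poorly.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("kmeans-sketch").getOrCreate()

# Hypothetical flow-level traffic features stored in a columnar format.
df = spark.read.parquet("hdfs:///data/netflow_features.parquet")
features = VectorAssembler(
    inputCols=["bytes", "packets", "duration"], outputCol="features"
).transform(df)

# Each k-means iteration is executed in parallel over the partitions.
model = KMeans(k=8, seed=1).fit(features)
print(model.clusterCenters())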
Outline of work plan: PHASE I (1st year): state-of-the-art survey for algorithms and available platforms for distributed data mining, performance analysis and preliminary proposals of optimization techniques for state-of-the-art algorithms, exploratory analysis of novel, creative solutions for Big Data.
PHASE II (2nd year): new algorithm design and development, experimental evaluation on a subset of application domains, implementation on a subset of selected Big Data platforms.
PHASE III (3rd year): algorithm improvements, both in design and development, experimental evaluation in new application domains, implementation on different Big Data platforms.
During the second and third years, the candidate will have the opportunity to spend a period of time abroad in a leading research center.
Expected target publications: Any of the following journals
IEEE TBD (Trans. on Big Data)
IEEE TKDE (Trans. on Knowledge and Data Engineering)
ACM TODS (Trans. on Database Systems)
ACM TKDD (Trans. on Knowledge Discovery in Data)
Journal of Big Data (Springer)
ACM TOIS (Trans. on Information Systems)
IEEE/ACM Trans. on Networking
ACM TOIT (Trans. On Internet Technology)
Computer Networks (Elsevier)

IEEE/ACM International Conferences (e.g., IEEE BigData, IEEE ICDM, ACM KDD)
Current funded projects of the proposer related to the proposal: I-React (Improving Resilience to Emergencies through Advanced Cyber Technologies) - H2020 EU project - http://www.i-react.eu/
Possibly involved industries/companies:Cloudera
ENEL
GM
Intesa SanPaolo

Title: Reliability in Power Electronics
Proposer: Matteo Sonza Reorda
Group website: http://www.cad.polito.it/
Summary of the proposal: Power management is becoming a major issue in many electronic systems, and several research activities are currently under way aimed at developing smarter systems able to better face all the aspects related to power consumption and management.
Current trends clearly include not only the development of improved power devices, but also the introduction of complex smart systems able to optimize their work. As a result, power management is increasingly based on a wide set of components, integrated into complex systems including analog and digital modules.
A major issue when dealing with these systems lies in their reliability. In fact, since they are widely used in many applications, including safety-critical ones, it is first of all important to be able to suitably evaluate their reliability, taking into account not only their characteristics and manufacturing process, but also the environment they are supposed to work in and their typical operational profile. Secondly, it may be crucial in some cases to be able to improve their reliability up to some target level, typically depending on the application each system is used in. Increased reliability figures can be obtained by resorting to different strategies, ranging from the usage of higher-quality components up to the adoption of fault-tolerant architectures.
The goal of the proposed PhD curriculum is to cover the above areas by focusing on some representative power systems, developing new and more effective techniques for the estimation of their reliability figures, and finally exploring possible solutions aimed at increasing these figures.
Research objectives and methods: Power Electronics is a key enabling technology for modern and more efficient power conversion in strategic fields of application, such as transportation electrification, energy conversion and advanced manufacturing. Strategic applications include, for example, electric vehicle powertrains and chargers, more-electric aircraft, energy production and harvesting from renewables, smart grids, and power supplies for laser technology in additive manufacturing. Power electronics can lead the innovation in such fields by becoming smaller, faster, more reliable and capable of managing higher power levels. More intelligence can and must be carried on board of power electronics to open it to ICT, in line with Industry 4.0.
At the same time, being in some cases used in safety-critical scenarios, Power Electronic systems need to match specific requirements in terms of reliability.
This requires:
- Methods to evaluate their reliability, taking into account that Power Electronic systems are increasingly complex and may integrate heterogeneous modules (e.g., both digital and analog ones)
- Methods to increase their reliability, resorting to different solutions, such as higher-quality components, more sophisticated testing solutions (both at the end of manufacturing and in the field), and fault-tolerant architectures.
The main objective of the proposed research project is to devise new techniques to face the two above challenges, combining existing solutions specifically focused on single modules (e.g., digital, analog, mixed).
Since the project will benefit from the infrastructure offered by the Power Electronics Innovation Center (PEIC), which is going to start its activities in the coming months, it is planned that the PhD student will interact with researchers from different fields and departments (in particular, DENER, DET, DIMEAS and DISAT) and will assess the effectiveness of the proposed techniques on some representative test cases.
Outline of work plan: The work plan for the PhD student’s activities includes several phases:
- phase 1: this phase will be devoted to completing the student’s skills in the areas of interest for the project, including in particular power devices, power management systems, and reliability. This phase will include the attendance of internal courses offered by the DIIS PhD curriculum and by others within Politecnico, as well as other learning opportunities, such as summer schools, tutorials, and conferences. During this phase the student will also identify the major limitations and weaknesses of current power electronic systems in terms of reliability.
- phase 2: in this phase a small number of representative test cases will be selected, in cooperation with the other members of PEIC.
- phase 3: this phase will focus on developing new solutions to evaluate the reliability of power electronic systems. Given their increasing complexity, emphasis will be put on the identification of multi-level heterogeneous approaches combining the ability to provide correct estimations with reasonable requirements in terms of required time and effort. The issue of automating the reliability evaluation process will also be considered.
- phase 4: this phase will focus on the identification of solutions to harden Power Electronic systems, thus making it possible to achieve the target reliability requirements while taking into account the cost and time-to-market constraints which are typical of any real product. Both phases 3 and 4 will be completed by assessing the effectiveness of the proposed solutions on the test cases identified in phase 2.
- phase 5: this phase will focus on the dissemination of the achieved results, e.g., through the publication of papers on major journals and at conferences in the area, as well as on the preparation of the final dissertation.
Expected target publications: IEEE Trans. on CAD
IEEE Trans. on Reliability
IEEE Trans. on Power Electronics
Current funded projects of the proposer related to the proposal: The proposing group is involved in several projects related to reliability evaluation and device/system hardening, funded both by public bodies (e.g., EU and ESA) and by companies (e.g., STMicroelectronics, AMS, Infineon).
Possibly involved industries/companies:The PhD student will benefit from the support of the companies involved in PEIC (e.g., STMicroelectronics, Vishay) as well as from the wide network of industrial relationships established by the hosting research group.

Title: Software-to-Silicon Solutions for Near-Sensor Data-Analytics in the IoT
Proposer: Andrea Calimera
Group website: http://eda.polito.it/
Summary of the proposal: There is a wide consensus among the research community that the next IoT generation will move towards a distributed implementation where some tasks (e.g., data-analysis) are shifted near/onto the physical sensors.

Needless to say, implementing sensors able to run data-analytics tasks with a limited power budget requires the availability of new low-power computing solutions.

Within this context, this proposal focuses on the design of an embedded hardware platform specialized for the acceleration of data-analytics routines and on the implementation of the supporting CAD tools.
Research objectives and methods: The main objective of the project is twofold:
1. Devise new energy-efficient computer architectures that leverage dedicated hardware accelerators able to run data-analytics tasks and machine-learning algorithms, with particular emphasis on deep-learning algorithms.
2. Develop an optimization framework that, acting at different levels of abstraction, i.e., from the software level down to the circuit level, provides users with a holistic design methodology for energy, power, area and performance optimization (a minimal sketch of one such cross-layer knob follows this list).
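As an illustration of one cross-layer knob such a framework would expose, the Python sketch below (illustrative only, not the project's methodology) applies uniform symmetric weight quantization and measures its accuracy cost: narrower integer datapaths imply smaller multipliers and memories, hence lower energy, at the price of output error.

import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization to signed `bits`-bit integers.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale).astype(np.int32), scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # toy layer weights
x = rng.normal(size=64).astype(np.float32)        # toy input vector

for bits in (8, 4, 2):
    q, s = quantize(w, bits)
    err = np.abs(w @ x - (q @ x) * s).mean()
    print("%d-bit weights: mean output error %.4f" % (bits, err))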

The research activities will focus on three main applications:
1. Video reasoning – goal: reduce communication cost and energy by extracting video information close to the sensor.
2. Speech recognition – goal: reduce latency and improve dependability and privacy.
3. Biomedical signal processing – goal: minimize energy cost by avoiding data transmission and performing computation locally, which is especially important for wearable or implantable devices.
Outline of work plan: 1. State of the art (6 months): Analysis and study of the state of the art in the field of machine-learning, deep-learning and viable power-management strategies.

2. Design of digital architectures customized for machine-learning (12 months): implementation of a reconfigurable digital platform and its memory/communication structure able to host machine-learning routines; the new architecture will be conceived with power/energy consumption as the main target.

3. Optimization techniques at the software, architectural and circuit levels (18 months): automation of the design flow and optimization loop to achieve ultra-low energy consumption; such flows will address power optimization at different levels of abstraction, from the software level down to the circuit and physical levels.
Expected target publications: Journals and Conferences/Workshops whose topics include, but are not limited to, Design Automation, VLSI circuits, Computer Architectures, Communication and Networking. Some examples below:
- ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS
- IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
- IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS
- IEEE TRANSACTIONS ON COMPUTERS
- IEEE TRANSACTIONS ON VLSI
- IEEE TRANSACTIONS ON COMMUNICATIONS
- IEEE/ACM TRANSACTIONS ON NETWORKING
Current funded projects of the proposer related to the proposal:
Possibly involved industries/companies: