
New papers on assistive technology

May 27, 2021

Article 1:


A Navigation and Augmented Reality System for Visually Impaired People

Sensors 2021, 21(9), 3061;

Authors: Alice Lo Valvo, Daniele Croce, Domenico Garlisi, Fabrizio Giuliano, Laura Giarré, and Ilenia Tinnirello


In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for visually impaired people for indoor and outdoor localization and navigation. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to have enhanced interactions with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path; it provides automatic guidance along the route through haptic, speech, and sound feedback.

Keywords: navigation; visually impaired; computer vision; augmented reality; cultural context; convolutional neural network; machine learning; haptic
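The core route-following idea described in the abstract (compare the phone's estimated pose against a previously recorded virtual path and emit haptic, speech, or sound cues) can be sketched in a few lines. This is a deliberately simplified illustration, not the ARIANNA+ implementation: the function name, the arrival radius, and the 15-degree deadband are hypothetical choices, and a real system would consume 6-DoF poses from ARKit rather than 2D coordinates.

```python
import math


def guidance_cue(position, heading_rad, waypoints, arrive_radius=0.5):
    """Return a coarse feedback cue ('straight', 'left', 'right', or
    'arrived') for following a recorded list of 2D waypoints.

    position: (x, y) in metres; heading_rad: current heading in radians;
    waypoints: mutable list of (x, y) points, consumed as they are reached.
    """
    # Drop waypoints we are already close enough to.
    while waypoints and math.dist(position, waypoints[0]) < arrive_radius:
        waypoints.pop(0)
    if not waypoints:
        return "arrived"

    tx, ty = waypoints[0]
    # Bearing to the next waypoint, expressed relative to current heading
    # and wrapped to (-pi, pi].
    bearing = math.atan2(ty - position[1], tx - position[0])
    delta = (bearing - heading_rad + math.pi) % (2 * math.pi) - math.pi

    # Within a small deadband, keep going straight; otherwise turn toward
    # the waypoint (positive delta = waypoint is to the left).
    if abs(delta) < math.radians(15):
        return "straight"
    return "left" if delta > 0 else "right"
```

In a full system, the returned cue would drive the haptic/speech layer (e.g. a vibration pattern per direction) at each pose update from the motion-tracking framework.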

Article 2:

Assistive Navigation using Deep Reinforcement Learning Guiding Robot with UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

Front. Robot. AI | doi: 10.3389/frobt.2021.65413

Authors: Chen-Lung Lu, Zi-Yan Liu, Jui-Te Huang, Ching-I Huang, Bo-Hui Wang, Yi Chen, Nien-Hsin Wu, Hsueh-Cheng Nick Wang, Laura Giarré, and Pei-Yi Kuo


Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. A deep reinforcement learning (DRL)–based assistive guiding robot with ultra-wideband (UWB) beacons that can navigate through routes with designated waypoints was designed in this study. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were installed with an audio interface to obtain environmental information. The on-handle and on-beacon verbal feedback provides points of interest and turn-by-turn information to BVI users. BVI users were recruited in this study to conduct navigation tasks in different scenarios. A route was designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation might be affected by dynamic obstacles, and the visual-based trail may suffer from occlusions from pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians, in which systems based on existing SLAM algorithms have failed.

Keywords: UWB beacon, navigation, blind and visually impaired, guiding robot, verbal instruction, indoor navigation, deep reinforcement learning
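As a rough illustration of how fixed UWB beacons can support localization without relying on SLAM, the sketch below estimates a 2D position from range measurements to beacons at known positions, using linearized least squares (subtracting the first beacon's range equation to eliminate the quadratic terms). This is a toy example under a noise-free assumption, not the paper's DRL pipeline; the function name and the pure-Python normal-equation solve are illustrative choices.

```python
def locate_from_uwb(beacons, ranges):
    """Estimate a 2D position from UWB beacon ranges.

    beacons: list of (x, y) beacon positions (at least 3, not collinear).
    ranges: measured distances to each beacon, same order.

    Linearization: for each beacon i > 0, subtracting beacon 0's range
    equation gives a linear constraint
        2*(xi - x0)*x + 2*(yi - y0)*y = r0^2 - ri^2 + xi^2 + yi^2 - x0^2 - y0^2
    which we solve in the least-squares sense via the 2x2 normal equations.
    """
    (x0, y0), r0 = beacons[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
        rows.append((2 * (xi - x0), 2 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 + yi**2 - x0**2 - y0**2)

    # Normal equations A^T A p = A^T b for the two unknowns (x, y).
    a11 = sum(a * a for a, _ in rows)
    a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for _, b in rows)
    v1 = sum(a * c for (a, _), c in zip(rows, rhs))
    v2 = sum(b * c for (_, b), c in zip(rows, rhs))
    det = a11 * a22 - a12 * a12

    return ((a22 * v1 - a12 * v2) / det, (a11 * v2 - a12 * v1) / det)
```

With noisy real-world ranges the same formulation still applies; adding more beacons simply over-determines the system, and the least-squares solution averages out measurement error.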

MED 2021

May 18, 2021

The preliminary program of the Mediterranean Conference on Control and Automation, which will be held VIRTUALLY in Bari on June 22-25, 2021, is out.


On the first day (Tuesday, June 22nd) you will find 7 workshops.

Please note that, if you are interested in the workshops, you can register for only one of them.

Three days of sessions will then follow, with 3 interesting plenaries, 244 papers, and 1 round table.

The program is available here


MED 2021 submission deadline

January 25, 2021

The paper submission deadline for the Mediterranean Control Conference 2021 has been postponed (for the last time!) to February 1st, 2021. Please contribute with your papers!

Smart Academia and gender balance

December 2, 2020

LPV 2021

October 20, 2020

In July 2021 the IFAC workshop on linear parameter-varying systems will take place. It will be held in Milan (hopefully not virtually).

The website is here

MED 2021

September 22, 2020

Submissions are open for the MED2021 conference: the 29th Mediterranean Conference on Control and Automation, June 22-25, 2021, PUGLIA, ITALY, which will be held in hybrid mode (in person and virtual).

The main themes of the conference are smart systems and smart cities.

Here is the flyer of the call for papers

August 9, 2020

New paper on SCL

You can download it here

Gender and post-Covid scenarios

August 9, 2020

I have published this article

Assistive technology for sensory disabled people

May 29, 2020

We are launching a research topic for the journal Frontiers in Robotics and AI.

This is a call to submit papers on the topic of Assistive technology for sensory disabled people

Editors: Daniele Croce, Laura Giarrè, Federica Pascucci, and Hsueh-Cheng Nick Wang

Both theoretical and experimental research is welcomed, including but not limited to the following topics:

• Assistive technologies and assistive robotics for sensory disabilities
• AAL, smart environments and IoT for sensory-disabled people
• Assistive technologies and artificial intelligence for sensory disabilities
• Virtual and augmented reality for sensory disabilities
• Computer vision applications for sensory-disabled people
• Bioengineering for sensory disabilities
• Auditory and spatial perception of sensory-disabled people
• Alternative and Augmentative Communication for sensory-disabled people
• Wearables and haptics for sensory-disabled people
• Navigation and guidance for sensory-disabled people
• Assisted mobility for sensory-disabled people
• Accessibility of images, software, Web and Social Media
• Safety and security of assistive technologies for sensory-disabled people
• Inclusive R&D, usability, ergonomics and user-centered design of assistive technologies for sensory-disabled people
• Assistive technologies in education for sensory-disabled people
• Assistive technologies for sensory-disabled people in low- and middle-income countries

Workshop Arrivano i nostri

May 28, 2020

The Istituto Italiano di Robotica e Macchine Intelligenti (I-RIM) and the Italian Chapter of the IEEE Robotics and Automation Society (I-RAS) are pleased to invite you to the workshop:

“Arrivano i nostri… Robot” (“Here Comes the Cavalry… Robots – Questions and answers on robotics serving the community and against Covid-19”),
which will be streamed live on the morning of May 28, also accessible from:

The workshop aims to bring the needs of practitioners face to face with the answers and solutions that the research community can provide. The topics addressed will range from agriculture to manufacturing and health.

The preliminary program of the day follows:

9:15 – Opening remarks by Prof. Andrea Zanchettin (Politecnico di Milano, President of I-RAS).

9:30 – Robots for agriculture – Chair: Andrea Zanchettin (Politecnico di Milano)
• Renato Reggiani (founder, BioPic)
• Lorenzo Marconi (Università di Bologna)
• Presentation of pilot projects, with contributions from: Giovanni Muscato (Università degli Studi di Catania), Luca Bascetta (Politecnico di Milano), Andrea Gasparri (Università degli Studi Roma Tre)

10:30 – Robots for manufacturing – Chair: Gianluca Antonelli (Università di Cassino e del Lazio Meridionale)
• Giulio Guadalupi (Vice President for innovation, Confindustria Bergamo)
• Bruno Siciliano (Università degli Studi di Napoli Federico II)
• Presentation of pilot projects, with contributions from: Lucia Pallottino (Università di Pisa), Cristian Secchi (Università di Modena e Reggio Emilia), Arash Ajoudani (Istituto Italiano di Tecnologia)

11:30 – Robots for health and society – Chair: Federica Pascucci (Università degli Studi Roma Tre)
• Alberto Tozzi (Ospedale Pediatrico Bambino Gesù)
• Eugenio Guglielmelli (Università Campus Bio-Medico)
• Presentation of pilot projects, with contributions from: Manuel Catalano (Università di Pisa, IIT), Domenico Prattichizzo (Università di Siena, IIT), Andrea Zanchettin (Politecnico di Milano)

Prof. Antonio Bicchi (Università di Pisa, President of I-RIM) will close the morning with some final remarks.