
New papers on assistive technology

May 27, 2021

Article 1:

https://www.mdpi.com/1424-8220/21/9/3061

A Navigation and Augmented Reality System for Visually Impaired People

Sensors 2021, 21(9), 3061; https://doi.org/10.3390/s21093061

Authors: Alice Lo Valvo, Daniele Croce, Domenico Garlisi, Fabrizio Giuliano, Laura Giarré, and Ilenia Tinnirello

Abstract

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms, based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for the indoor and outdoor localization and navigation of visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path; it then provides automatic guidance along the route through haptic, speech, and sound feedback.

Keywords: navigation; visually impaired; computer vision; augmented reality; cultural context; convolutional neural network; machine learning; haptic
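The paper itself ships no code, but the path-following idea is easy to picture: the phone records a virtual path as a polyline of poses during a first walk, then on later walks it measures how far the device has drifted from that polyline and turns the deviation into haptic or spoken cues. Below is a minimal sketch of that loop, assuming the path is a list of (x, z) waypoints (ARKit uses a y-up frame, so navigation lives in the x–z plane); the helper names and the cue mapping are hypothetical illustrations, not the authors' implementation, and ARKit itself is accessed from Swift on iOS, so this Python version only shows the geometry.

```python
import math

def signed_deviation(position, path):
    """Signed lateral distance (meters) from `position` to the nearest
    segment of the recorded path, plus that segment's heading.

    `position` is the device's (x, z) location from visual-inertial
    tracking; `path` is the polyline of (x, z) waypoints recorded
    on a prior walk.
    """
    best_dist, best_signed, best_heading = float("inf"), 0.0, 0.0
    for (ax, az), (bx, bz) in zip(path, path[1:]):
        abx, abz = bx - ax, bz - az
        apx, apz = position[0] - ax, position[1] - az
        # Clamp the projection of the device onto segment A->B.
        t = max(0.0, min(1.0, (apx * abx + apz * abz) / (abx * abx + abz * abz or 1e-9)))
        dist = math.hypot(apx - t * abx, apz - t * abz)
        if dist < best_dist:
            # The cross product's sign says which side of the path we are on
            # (which sign means "left" depends on the frame's handedness).
            side = abx * apz - abz * apx
            best_dist = dist
            best_signed = math.copysign(dist, side)
            best_heading = math.atan2(abz, abx)
    return best_signed, best_heading

def feedback_cue(deviation, tolerance=0.3):
    """Map the deviation to the kind of haptic/speech cue the paper describes."""
    if abs(deviation) <= tolerance:
        return "haptic: steady pulse"    # on the path: quiet confirmation
    side = "left" if deviation > 0 else "right"
    return f"speech: move {side}"        # off the path: corrective instruction
```

This covers only path following; ARIANNA+ additionally runs CNNs on the camera feed to recognize objects or buildings and read out the content associated with them.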

Article 2:

https://www.frontiersin.org/articles/10.3389/frobt.2021.654132

Assistive Navigation Using Deep Reinforcement Learning Guiding Robot With UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

Front. Robot. AI | doi: 10.3389/frobt.2021.654132

Authors: Chen-Lung Lu, Zi-Yan Liu, Jui-Te Huang, Ching-I Huang, Bo-Hui Wang, Yi Chen, Nien-Hsin Wu, Hsueh-Cheng Nick Wang, Laura Giarré, and Pei-Yi Kuo

Abstract

Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. A deep reinforcement learning (DRL)–based assistive guiding robot with ultra-wideband (UWB) beacons that can navigate through routes with designated waypoints was designed in this study. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were equipped with an audio interface to provide environmental information: the on-handle and on-beacon verbal feedback gives points of interest and turn-by-turn information to BVI users. BVI users were recruited in this study to conduct navigation tasks in different scenarios, including a route designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation might be affected by dynamic obstacles, and a vision-based trail may suffer from occlusions by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians in which systems based on existing SLAM algorithms failed.

Keywords: UWB beacon; navigation; blind and visually impaired; guiding robot; verbal instruction; indoor navigation; deep reinforcement learning
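The runtime behavior the abstract describes (a learned policy steering the robot between UWB-beacon waypoints while verbal feedback announces progress) can be sketched roughly as follows. Here `policy`, `get_scan`, `get_pose`, `send_cmd`, and `announce` are hypothetical placeholders for the trained DRL network, the range sensor, localization, the motor interface, and the audio channel, and the observation encoding (downsampled scan plus relative goal) is a common choice in DRL navigation rather than the paper's exact design.

```python
import numpy as np

def goal_in_robot_frame(robot_pose, beacon_xy):
    """Express the next UWB beacon as (distance, bearing) relative to the
    robot, a typical goal encoding for learned navigation policies."""
    x, y, yaw = robot_pose
    dx, dy = beacon_xy[0] - x, beacon_xy[1] - y
    dist = float(np.hypot(dx, dy))
    # Wrap the bearing into [-pi, pi].
    bearing = float((np.arctan2(dy, dx) - yaw + np.pi) % (2 * np.pi) - np.pi)
    return dist, bearing

def navigate(policy, waypoints, get_scan, get_pose, send_cmd, announce,
             goal_radius=0.8):
    """Follow UWB-beacon waypoints with a learned obstacle-avoiding policy.

    `policy(obs) -> (v, w)` maps the observation to linear and angular
    velocity commands; obstacle avoidance among dynamic pedestrians is
    what the DRL policy was trained for.
    """
    for beacon in waypoints:
        announce(f"Heading to {beacon['name']}")   # on-handle verbal feedback
        while True:
            dist, bearing = goal_in_robot_frame(get_pose(), beacon["xy"])
            if dist < goal_radius:
                announce(beacon["poi_message"])    # on-beacon point-of-interest audio
                break
            obs = np.concatenate([get_scan(), [dist, bearing]])
            v, w = policy(obs)
            send_cmd(v, w)
```

Note the contrast with a SLAM pipeline: no global map is maintained, so the loop keeps working when dynamic pedestrians would corrupt map-based state estimation, which is exactly the failure mode the study reports for the SLAM baselines.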
