Transferring ConvNet Features from Passive to Active Robot Self-Localization: The Use of Ego-Centric and World-Centric Views

 


Kanya Kurauchi, Kanji Tanaka, Ryogo Yamamoto, and Mitsuki Yoshida

Abstract. Training a next-best-view (NBV) planner for visual place recognition (VPR) is a fundamentally important task in autonomous robot navigation, for which a typical approach is to use visual experiences collected in the target domain as training data. However, collecting a wide variety of visual experiences during everyday navigation is costly and prohibitive for real-time robotic applications. We address this issue with a novel domain-invariant NBV planner. A standard VPR subsystem based on a convolutional neural network (CNN) is assumed to be available, and we propose to transfer its domain-invariant state recognition ability to train the domain-invariant NBV planner. Specifically, we divide the visual cues available from the CNN model into two types: the output layer cue (OLC) and the intermediate layer cue (ILC). The OLC is obtained at the output layer of the CNN model and estimates the state of the robot (e.g., the robot viewpoint) with respect to the world-centric view coordinate system. The ILC is obtained from the intermediate layers of the CNN model as a high-level description of the visual content (e.g., a saliency image) with respect to the ego-centric view. In our framework, the OLC and ILC are fused into a state vector that is subsequently used to train a multi-view NBV planner via deep reinforcement learning. Experiments using the public NCLT dataset validate the effectiveness of the proposed method.
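To make the two cues concrete, the sketch below shows one way the OLC and ILC could be extracted from an off-the-shelf CNN and fused into a single state vector. It is a minimal illustration, not the authors' implementation: the ResNet-18 backbone, the hooked layer, the number of place classes, and the saliency-map resolution are all assumptions.

```python
# A minimal sketch (assumption: a PyTorch ResNet-18 stands in for the paper's
# single-view VPR CNN; NUM_PLACES, the hooked layer, and the saliency-map
# resolution are illustrative, not the authors' actual settings).
import torch
import torch.nn.functional as F
import torchvision.models as models

NUM_PLACES = 100          # assumed number of place classes (world-centric states)
SALIENCY_SIZE = (8, 8)    # assumed down-sampled saliency-map resolution

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, NUM_PLACES)
backbone.eval()

def extract_state(image: torch.Tensor) -> torch.Tensor:
    """Fuse the OLC and ILC of one RGB frame (1x3xHxW) into a state vector."""
    feats = {}
    hook = backbone.layer4.register_forward_hook(
        lambda module, inputs, output: feats.update(conv=output))
    with torch.no_grad():
        logits = backbone(image)
    hook.remove()

    # OLC: probability distribution over place classes (world-centric view).
    olc = F.softmax(logits, dim=1)                        # (1, NUM_PLACES)

    # ILC: channel-averaged activation of an intermediate layer, used here as
    # a crude saliency image (ego-centric view).
    saliency = feats["conv"].mean(dim=1, keepdim=True)    # (1, 1, h, w)
    saliency = F.interpolate(saliency, SALIENCY_SIZE, mode="bilinear",
                             align_corners=False)
    saliency = saliency.flatten(1)                        # (1, 64)
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

    return torch.cat([olc, saliency], dim=1)              # (1, NUM_PLACES + 64)

# Example: a dummy frame from the onboard camera.
state = extract_state(torch.randn(1, 3, 224, 224))
```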

Keywords: Visual robot place recognition, Domain-invariant next-best-view planner, Transferring ConvNet features

Our work has been supported in part by JSPS KAKENHI Grant-in-Aid for Scientific Research (C) 17K00361 and (C) 20K12008.

Members: Tanaka Kanji, Kurauchi Kanya, Yoshida Mitsuki, Yamamoto Ryogo

Relevant Publication:

Bibtex source | Document PDF

 

 

(a) ILC in the ego-centric view. (b) OLC in the world-centric view.

Fig.1. The training of a next-best-view (NBV) planner for visual place recognition (VPR) is fundamentally important for autonomous robot navigation. In this study, instead of the common approach of training from visual experiences, we exploit a standard single-view VPR model based on a deep CNN as the source of training data. Specifically, we divide the visual cues available from the CNN model into two types, the OLC and the ILC, and fuse them into a new state vector to reformulate NBV planning as a domain-invariant task. Heat maps are overlaid on the images to visualize the OLC/ILC values.

 

Fig.2. Active VPR framework. The NBV planner is trained by transferring the state recognition ability of the CNN model for single-view VPR. Two types of cues, namely the CNN OLC ("viewpoint-specific PDV") and the CNN ILC ("saliency image"), are extracted and transferred from the CNN.
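As a companion to the framework in Fig. 2, the sketch below shows how the fused OLC+ILC state vector could drive a deep-reinforcement-learning NBV planner in a DQN style with an epsilon-greedy policy. The network size, action set, reward, and hyperparameters are assumed for illustration and are not the paper's actual configuration.

```python
# A minimal DQN-style sketch (assumption: the paper trains the NBV planner via
# deep reinforcement learning, but the network size, action set, reward, and
# hyperparameters below are illustrative, not the authors' configuration).
import random
import torch
import torch.nn as nn

STATE_DIM = 164      # e.g., NUM_PLACES (100) + flattened 8x8 saliency (64)
NUM_ACTIONS = 8      # assumed discrete set of candidate next viewpoints

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, NUM_ACTIONS),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)

def select_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice of the next viewpoint from the fused state."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax(dim=1).item())

def td_update(state, action, reward, next_state, gamma=0.99):
    """One temporal-difference step; the reward could be a VPR success signal."""
    q = q_net(state)[0, action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example with dummy states (in practice, states come from the VPR CNN cues).
s, s_next = torch.randn(1, STATE_DIM), torch.randn(1, STATE_DIM)
a = select_action(s)
td_update(s, a, reward=1.0, next_state=s_next)
```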


Fig.3. Experimental environment. Top: entire trajectories and the Segway vehicle robot. Bottom: views from the onboard front-facing camera in different seasons.

 

Table 1. Performance results

test session   2012/1/8   2012/1/15   2012/3/25   2012/8/20   2012/9/28
single-view     0.441      0.293       0.414       0.345       0.365
random          0.547      0.413       0.538       0.457       0.542
OLC-only        0.619      0.471       0.579       0.497       0.567
ILC-only        0.625      0.457       0.585       0.494       0.608
proposed        0.647      0.493       0.596       0.518       0.623

(a) Proposed NBV planner. (b) Random planner.

Fig.4. Example results. The view image sequences at the NBVs planned by the proposed method (left) and by the random method (right) are depicted for three different starting viewpoints. The panels from left to right show the view images at the 1st, 2nd, 3rd, and 4th viewpoints.