FC1 Machine Vision Applications II
Time : 15:20~16:50
Room : Sapphire
Chair : Prof. Wang-Heon Lee (Hansei University, Korea)
15:20~15:35        FC1-1
Automatic sphere detection for extrinsic calibration of multiple RGBD cameras

Young Chan Kwon, Jae Won Jang, Ouk Choi (Incheon National University, Korea)

To combine depth data from multiple RGBD cameras in a unified coordinate system, the cameras need to be extrinsically calibrated based on 3D correspondences of scene features. To maximize the number of scene features simultaneously observed from different viewpoints, a spherical calibration object is widely used. We propose a method for automatically detecting a spherical object in an RGB image. Since the detection relies only on RGB information, the proposed method has the potential to be extended to conventional multi-view calibration.
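The abstract does not specify the detection algorithm. As an illustration only, a sphere projects to a (near-)circular contour in the image, so one common building block is Hough-style center voting over an edge map; a minimal numpy sketch with synthetic edge data and a hypothetical fixed radius:

```python
import numpy as np

def hough_circle_centers(edges, radius):
    """Vote for circle centers at a fixed radius over a binary edge map."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return acc

# Synthetic edge image: a circle of radius 20 centered at (50, 60).
edges = np.zeros((100, 120), dtype=bool)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
edges[np.round(50 + 20 * np.sin(t)).astype(int),
      np.round(60 + 20 * np.cos(t)).astype(int)] = True

acc = hough_circle_centers(edges, radius=20)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
print(cy, cx)  # accumulator peak at or next to the true center (50, 60)
```

In practice one would search over a range of radii (or use an implementation such as OpenCV's `cv2.HoughCircles`); the fixed radius here keeps the sketch short.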
15:35~15:50        FC1-2
Extrinsic Calibration of RGB-D Sensor and Robot using Color Chessboard

Kyeong Seock Jang, Jong-Eun Ha (Seoul National University of Science and Technology, Korea)

In this paper, we deal with extrinsic calibration between a robot and an RGB-D sensor. Converting sensor measurements into the robot frame is necessary for the robot to manipulate objects in its workspace. Three coordinate frames (world, robot, and sensor) are involved in transferring sensor measurements into the robot frame. For this extrinsic calibration, we use a color chessboard on which red, green, and blue are used to identify the x, y, and z axes of the frame. Control points corresponding to the cross points on the chessboard are automatically detected and used for the extrinsic calibration.
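The frame chain described above (sensor into world, world into robot) can be sketched with homogeneous transforms; a minimal numpy example in which the extrinsics and the measured point are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical extrinsics: sensor pose and robot base pose, both
# expressed in the world frame (identity rotations for brevity).
T_world_sensor = make_T(np.eye(3), np.array([0.5, 0.0, 1.2]))
T_world_robot = make_T(np.eye(3), np.array([0.2, 0.1, 0.0]))

# Chain: robot <- world <- sensor.
T_robot_sensor = np.linalg.inv(T_world_robot) @ T_world_sensor

p_sensor = np.array([0.0, 0.0, 0.8, 1.0])  # a point measured by the sensor
p_robot = T_robot_sensor @ p_sensor        # the same point in the robot frame
print(p_robot[:3])  # [0.3, -0.1, 2.0]
```

Calibration estimates `T_world_sensor` (and, via the chessboard's color-coded axes, fixes the world frame); once the chain is known, every depth measurement can be mapped into the robot frame this way.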
15:50~16:05        FC1-3
Extrinsic Calibration of Camera and Laser Range Finder using Dummy Camera without IR Cut Filter

Jaeyeul Kim, Jong-Eun Ha (Seoul National University of Science and Technology, Korea)

We present a method for the extrinsic calibration of a camera and a laser range finder that enables verification of the calibration results via a dummy camera with its IR cut filter removed. Previous algorithms could not check the accuracy of the calibration because they could not find the actual locus of the laser range finder on the image. We obtain this locus using the camera with the IR cut filter removed. Control points for the extrinsic calibration are automatically detected among the points measured by the laser range finder. Therefore, the extrinsic calibration can be performed automatically using just one shot.
16:05~16:20        FC1-4
Lane Detection using Deep Learning Semantic Segmentation

Daehun Kim, Jong-Eun Ha (Seoul National University of Science and Technology, Korea)

Lane detection is essential in self-driving cars. Recent research using deep learning shows dramatic improvements in lane detection compared to previous approaches that rely on hand-crafted features. In this paper, we present a method for detecting lanes using deep semantic segmentation. Deep semantic segmentation requires much more time for preparing training samples because it needs a ground-truth label for each pixel in the image. We augment the training samples by compositing the results of an algorithm that finds the ego-lane using a convolutional neural network (CNN).
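The compositing step is not detailed in the abstract; the core idea, overlaying a detected ego-lane mask onto a per-pixel label map to generate additional pseudo ground truth, can be sketched as follows (all data synthetic, label values hypothetical):

```python
import numpy as np

LANE, BACKGROUND = 1, 0  # hypothetical class ids

def composite_labels(base_labels, lane_mask):
    """Overlay a binary ego-lane mask onto a per-pixel label map."""
    out = base_labels.copy()
    out[lane_mask] = LANE
    return out

# Synthetic example: 4x6 label map, lane occupying the two center columns.
base = np.full((4, 6), BACKGROUND, dtype=np.uint8)
lane_mask = np.zeros((4, 6), dtype=bool)
lane_mask[:, 2:4] = True

labels = composite_labels(base, lane_mask)
print(int(labels.sum()))  # 8 lane pixels
```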
16:20~16:35        FC1-5
Localization of Welding Defects using Weakly Supervised Neural Network

Hyeon Joon Choi (University, Korea), Dong-Joong Kang (Pusan National University, Korea)

Recently, as the performance of machine learning has improved, numerous studies on inspection systems involving machine learning have been actively conducted. In this work, we develop a defect inspection system, based on a deep neural network, for use during the transmission welding process. We use two algorithms for localization: one for a classification model and the other for a post-processing model. A global average pooling (GAP) layer is added after the last convolution layer, enabling localization even though training uses only image-level labels.
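The GAP-based weak localization described above follows the class activation mapping idea: because GAP followed by a linear classifier commutes with spatial averaging, the classifier weights can reweight the conv feature maps into a spatial heat map. A minimal numpy sketch with synthetic activations (not the paper's network):

```python
import numpy as np

def class_activation_map(features, weights):
    """CAM: class-weighted sum of the last conv feature maps.

    features: (C, H, W) activations from the last conv layer;
    weights: (C,) classifier weights learned on top of GAP.
    """
    return np.tensordot(weights, features, axes=1)  # (H, W)

# Synthetic activations: a strong "defect" response in channel 1.
C, H, W = 4, 8, 8
features = np.zeros((C, H, W))
features[1, 5, 6] = 3.0
weights = np.array([0.1, 1.0, 0.1, 0.1])

cam = class_activation_map(features, weights)
# GAP-then-linear gives the same class score as averaging the CAM.
score_gap = float(weights @ features.mean(axis=(1, 2)))
y, x = np.unravel_index(np.argmax(cam), cam.shape)
print(y, x)  # peak at the synthetic defect location (5, 6)
```

The CAM peak localizes the activated region, which is how image-level labels alone can yield a defect location.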
