
Shinsaku Hiura

  (日浦 慎作)

Profile Information

Affiliation
Professor, Graduate School of Engineering, University of Hyogo
Degree
Master of Engineering, Osaka University (Mar 1995)
Ph.D. in Engineering, Osaka University (Mar 1997)

J-GLOBAL ID
200901030947225493
researchmap Member ID
1000282146


Major Papers

 147
  • Yuki Shiba, Satoshi Ono, Ryo Furukawa, Shinsaku Hiura, Hiroshi Kawasaki
    ICCV 2017, 115-123, Oct, 2017  Peer-reviewed
  • Yu Kawamoto, Shinsaku Hiura, Daisuke Miyazaki, Ryo Furukawa, Masashi Baba
    IPSJ Journal, 57(2) 783-793, Feb 15, 2016  Peer-reviewed
  • Hiroshi Kawasaki, Satoshi Ono, Yuki Horita, Yuki Shiba, Ryo Furukawa, Shinsaku Hiura
    Proceedings of the IEEE International Conference on Computer Vision, 2015 3568-3576, Feb 17, 2015  Peer-reviewed
    The central projection model commonly used for cameras as well as projectors results in similar advantages and disadvantages in both types of system. Considering the case of active stereo systems using a projector and camera setup, a central projection model creates several problems; among them, the narrow depth range and the necessity of a wide baseline are crucial. In this paper, we solve these problems by introducing a light field projector, which can project a depth-dependent pattern. The light field projector is realized by attaching a coded aperture with a high-frequency mask in front of the lens of a video projector, which also projects a high-frequency pattern. Because the light field projector cannot be approximated by a thin lens model and a precise calibration method has not yet been established, an image-based approach is proposed to apply a stereo technique to the system. Although image-based techniques usually require a large database and often imply heavy computational costs, we propose a hierarchical approach and a feature-based search as the solution. In the experiments, it is confirmed that our method can accurately recover the dense shape of curved and textured objects over a wide range of depths from a single captured image.
  • Masakazu Iwamura, Masashi Imura, Shinsaku Hiura, Koichi Kise
    IPSJ Transactions on Computer Vision and Applications, 6 45-52, 2014  Peer-reviewed
    This paper addresses the recognition problem of defocused patterns. Though recognition algorithms assume that the input images are focused and sharp, this does not always hold for actual camera-captured images. Thus, a recognition method that can handle defocused patterns is required. In this paper, we propose a novel recognition framework for defocused patterns that relies on a single camera without a depth sensor. The framework is based on the coded aperture, which can recover a less-degraded image from a defocused image if depth is available. However, in the setting of "a single camera without a depth sensor," estimating depth is ill-posed, and an assumption is required to estimate it. To solve the problem, we introduce a new assumption suitable for pattern recognition: the templates are known. It is based on the fact that in pattern recognition, all templates must be available in advance for training. The experiments confirmed that the proposed method is fast and robust to defocus and scaling, especially for heavily defocused patterns.
  • Tomoki Sasao, Shinsaku Hiura, Kosuke Sato
    IEICE Transactions on Information and Systems (Japanese Edition), J96-D(8) 1778-1789, Aug, 2013  Peer-reviewed
  • Yuichi Takeda, Shinsaku Hiura, Kosuke Sato
    IEICE Transactions on Information and Systems (Japanese Edition), J96-D(8) 1688-1700, Aug, 2013  Peer-reviewed
  • Yu Kawamoto, Shinsaku Hiura, Naoki Asada
    International Conference on Computational Photography(ICCP2013), Apr 13, 2013  Peer-reviewed
  • Tomoki Sasao, Shinsaku Hiura, Kosuke Sato
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP 2013), 2013  Peer-reviewed
    This paper shows that a random and distinct sensitivity shape for each pixel improves the performance of super-resolution using multiple input images. Since the spatial light sensitivity distribution of each pixel of an image sensor is rectangular and identical, the imaging process is equivalent to point sampling of a blurred image, the result of convolving a rectangle with the original image. The convolution causes a loss of the high spatial frequency components of the original image, which limits the performance of super-resolution. Thus, we sprayed a fine-grained black powder on an image sensor to give a random code to the spatial light sensitivity distribution of each pixel. This approach was combined with a reconstruction technique based on sparse regularization, which is commonly used in compressed sensing, in an experiment with an actual setup. A high-resolution image was reconstructed from a limited number of input images, and the performance of super-resolution was significantly improved.
  • Yuichi Takeda, Shinsaku Hiura, Kosuke Sato
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 209-216, 2013  Peer-reviewed
    In this paper we propose a novel depth measurement method that fuses depth from defocus (DFD) and stereo. One problem of the passive stereo method is the difficulty of finding correct correspondences between images when an object has a repetitive pattern or edges parallel to the epipolar line. On the other hand, the accuracy of the DFD method is inherently limited by the effective diameter of the lens. Therefore, we propose fusing stereo and DFD by giving different focus distances to the left and right cameras of a stereo camera with coded apertures. Two types of depth cues, defocus and disparity, are naturally integrated through the magnification and phase shift of a single point spread function (PSF) per camera. We prove the proportional relationship between the diameter of defocus and disparity, which makes calibration easy. We also show the outstanding performance of our method, which has the advantages of both depth cues, through simulation and actual experiments.
  • Takehiro Tachikawa, Shinsaku Hiura, Kosuke Sato
    IPSJ Transactions on Computer Vision and Applications, 3 172-185, 2011  Peer-reviewed
    This paper describes a method to determine the direction of a light source and the distribution of diffuse reflectance from two images under different lighting conditions. While most inverse-rendering methods require three or more images, we investigate the use of only two. Using the relationships between albedo and light direction at six or more points, we first show that it is possible to estimate both simultaneously if the shape of the target object is given. We then extend our method to handle specular objects and shadow effects by applying a robust estimation method. Thorough experimentation shows that our method is feasible and stable not only for well-controlled indoor scenes, but also for an outdoor environment illuminated by sunlight. © 2011 Information Processing Society of Japan.
  • Tatsuhiko Furuse, Shinsaku Hiura, Kosuke Sato
    Transactions of the Society of Instrument and Control Engineers, 46(10) 589-597, Oct, 2010  Peer-reviewed
    In this paper, we propose a method to accurately measure the shape of objects by suppressing indirect reflections such as interreflection and subsurface scattering. We modulate the slit light with an M-sequence shifted along its line so that the light can be accurately detected in the captured image. This method has two advantages: the good propagation characteristics of higher spatial frequency components, and the geometric constraint between the projector and the camera. Prior to the measurement, the epipolar constraint is obtained through calibration, and the phase consistency is then evaluated to suppress interreflection. The value of the cross-correlation is used to suppress the dilation of the light caused by subsurface scattering.
  • Shinsaku Hiura, Ankit Mohan, Ramesh Raskar
    IPSJ Transactions on Computer Vision and Applications, 2 186-199, 2010  Peer-reviewed
    We propose a novel wide angle imaging system inspired by compound eyes of animals. Instead of using a single lens, well compensated for aberration, we used a number of simple lenses to form a compound eye which produces practically distortion-free, uniform images with angular variation. The images formed by the multiple lenses are superposed on a single surface for increased light efficiency. We use GRIN (gradient refractive index) lenses to create sharply focused images without the artifacts seen when using reflection based methods for X-ray astronomy. We show the theoretical constraints for forming a blur-free image on the image sensor, and derive a continuum between 1:1 flat optics for document scanners and curved sensors focused at infinity. Finally, we show a practical application of the proposed optics in a beacon to measure the relative rotation angle between the light source and the camera with ID information. © 2010 Information Processing Society of Japan.
  • Ankit Mohan, Grace Woo, Shinsaku Hiura, Quinn Smithwick, Ramesh Raskar
    ACM TRANSACTIONS ON GRAPHICS, 28(3) 1-8, Aug, 2009  Peer-reviewed
    We show a new camera-based interaction solution in which an ordinary camera can detect small optical tags from a relatively large distance. Current optical tags, such as barcodes, must be read within a short range, and the codes occupy valuable physical space on products. We present a new low-cost optical design so that the tags can be shrunk to 3 mm visible diameter, and unmodified ordinary cameras several meters away can be set up to decode the identity plus the relative distance and angle. The design exploits the bokeh effect of ordinary camera lenses, which maps rays exiting from an out-of-focus scene point into a disk-like blur on the camera sensor. This bokeh-code, or Bokode, is a barcode design with a simple lenslet over the pattern. We show that a code with 15 µm features can be read using an off-the-shelf camera from distances of up to 2 meters. We use intelligent binary coding to estimate the relative distance and angle to the camera, and show potential for applications in augmented reality and motion capture. We analyze the constraints and performance of the optical system, and discuss several plausible application scenarios.
  • Shinsaku Hiura, Takayuki Moriya, Kosuke Sato
    IPSJ Transactions on Computer Vision and Image Media, 2(1) 14-31, Mar, 2009  Peer-reviewed
  • Natsumi Kusumoto, Shinsaku Hiura, Kosuke Sato
    CVPR: 2009 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-4, 2544-2551, 2009  Peer-reviewed
    Exaggerated defocus cannot be created with an ordinary compact digital camera because of its tiny sensor size, so it is hard to take pictures that attract a viewer to the main subject. On the other hand, there are many methods for controlling the focus and defocus of previously taken pictures. However, most of these methods require purpose-built equipment such as a camera array to take the pictures. Therefore, in this paper we propose a method to create images focused at any depth, with an arbitrarily blurred background, from a set of images taken by a handheld compact digital camera moved randomly. Using our method, it is possible to produce various aesthetic blurs by changing the size, shape or density of the blur kernel. In addition, we confirm the potential of our method through a subjective evaluation of blurred images created by our system.
  • Natsumi Kusumoto, Shinsaku Hiura, Kosuke Sato
    Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers, 63(6) 857-865, 2009  Peer-reviewed
    Exaggerated defocus cannot be achieved with an ordinary compact digital camera because of its tiny sensor size, so taking pictures that draw the attention of a viewer to the subject is hard. Many methods are available for controlling the focus and defocus of previously taken pictures. However, most of these methods require custom-built equipment such as a camera array to take pictures. Therefore, in this paper, we describe a method for creating images focused at any depth with an arbitrarily blurred background from a set of images taken by a handheld compact digital camera that is moved at random. Our method can produce various aesthetic blurs by changing the size, shape, or density of the blur kernel. In addition, we demonstrate the potential of our method through a subjective evaluation of blurred images created by our system.
  • Satoshi Kawabata, Shinsaku Hiura, Kosuke Sato
    IEICE Transactions on Information and Systems (Japanese Edition), J91-D(1) 110-119, Jan, 2008  Peer-reviewed
  • Takayuki Moriya, Shinsaku Hiura, Kosuke Sato
    IEICE Transactions on Information and Systems (Japanese Edition), J90-D(6) 1579-1591, Jun, 2007  Peer-reviewed
  • Osamu Nasu, Shinsaku Hiura, Kosuke Sato
    2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8, 3586-+, 2007  Peer-reviewed
  • Satoshi Kawabata, Shinsaku Hiura, Kosuke Sato
    IEICE Transactions on Information and Systems (Japanese Edition), J89-D(4) 826-835, Apr, 2006  Peer-reviewed
  • Takayuki Moriya, Shinsaku Hiura, Kosuke Sato
    IEICE Transactions on Information and Systems (Japanese Edition), J88-D-II(5) 876-885, Mar, 2005  Peer-reviewed
  • Kenji Tojo, Shinsaku Hiura, Seiji Inokuchi
    Transactions of the Virtual Reality Society of Japan, 7(2) 169-176, Jun, 2002  Peer-reviewed
    We developed a system for directing real-world operations from a distant site. The system consists of several sets of cameras, projectors, and PCs connected to each other via a network. First, the 3-D shape of the object is measured using a pattern light projection method and sent to the distant PC. A supervisor can observe the CG model of the object from any viewpoint and draw annotation figures on it. The direction message is sent to the real field and projected onto the object using the projectors. The projected annotations are geometrically well aligned because all cameras and projectors are calibrated with a single reference object. The worker is free from any wearable equipment such as an HMD, and the use of multiple projectors avoids occlusion by the worker's body. Alignment tasks for both existing and new objects are also implemented.
  • Shinsaku Hiura, Kentaro Murase, Takashi Matsuyama
    IPSJ Journal, 41(11) 3082-3091, Nov, 2000  Peer-reviewed
  • Shinsaku Hiura, Takashi Matsuyama
    IEICE Transactions on Information and Systems (Japanese Edition), J82-D-II(11) 1912-1920, Nov, 1999  Peer-reviewed
  • Shinsaku Hiura, Akira Yamaguchi, Kosuke Sato, Seiji Inokuchi
    IEICE Transactions on Information and Systems (Japanese Edition), J80-D-II(11) 2904-2911, Nov, 1997  Peer-reviewed
  • Shinsaku Hiura, Akira Yamaguchi, Kosuke Sato, Seiji Inokuchi
    IEICE Transactions on Information and Systems (Japanese Edition), J80-D-II(6) 1539-1546, Jun, 1997  Peer-reviewed
  • Shinsaku Hiura, Kosuke Sato, Seiji Inokuchi
    IPSJ Journal, 36(10) 2295-2302, Oct, 1995  Peer-reviewed
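
The defocus-disparity fusion in Takeda et al. (CVPR 2013) above rests on a proportionality that can be sketched with a standard thin-lens model (generic symbols, not the paper's notation): a lens of aperture diameter $A$ focused at depth $Z_0$ with image distance $v_0$ blurs a point at depth $Z$ into a circle of diameter $c$, while a stereo pair with baseline $B$ measures disparity $d$.

```latex
c = A\,v_0\left|\frac{1}{Z}-\frac{1}{Z_0}\right|,
\qquad d = \frac{B\,v_0}{Z}
\;\;\Longrightarrow\;\;
c = \frac{A}{B}\,\bigl|d - d_0\bigr|,
\qquad d_0 = \frac{B\,v_0}{Z_0}
```

Substituting $1/Z = d/(Bv_0)$ shows the defocus diameter is linear in the disparity offset, which is the proportional relationship that makes joint calibration of the two cues straightforward.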
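
The distance-independent readability of the Bokode in Mohan et al. (2009) above follows from first-order optics (generic symbols; a simplified sketch of the paper's analysis): a pattern at the focal plane of a lenslet of focal length $f_b$ emits, for a feature at height $x$, a collimated ray bundle, and a camera focused at infinity with focal length $f_c$ converts that ray angle back into a sensor position.

```latex
\theta \approx \frac{x}{f_b}
\qquad\Longrightarrow\qquad
x' \approx f_c\,\theta = \frac{f_c}{f_b}\,x,
\qquad M = \frac{f_c}{f_b}
```

The magnification $M$ depends only on the two focal lengths, not on the tag-to-camera distance, which is why micrometre-scale features remain decodable from meters away; only the collected light falls off with distance.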
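
The known-template assumption in Iwamura et al. (2014) above can be illustrated with a toy 1-D sketch (the sizes, the box-shaped PSF, and the random data are illustrative assumptions, not the paper's implementation): because every template is available in advance, the recognizer can hypothesize a blur level jointly with the template identity instead of sensing depth.

```python
import numpy as np

rng = np.random.default_rng(3)

def blur(signal, radius):
    """Circular 1-D box blur standing in for a depth-dependent PSF."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    n = len(signal)
    # tile to emulate circular boundary, then keep the middle copy
    return np.convolve(np.tile(signal, 3), k, mode="same")[n:2 * n]

templates = rng.uniform(size=(5, 64))        # all patterns known at training time
true_id, true_radius = 2, 4
observed = blur(templates[true_id], true_radius)   # defocused capture

# Depth is never sensed: since the templates are known, the recognizer
# hypothesizes (template, blur radius) jointly and keeps the best match.
scores = {(t, r): float(np.linalg.norm(blur(templates[t], r) - observed))
          for t in range(len(templates)) for r in range(8)}
best_id, best_radius = min(scores, key=scores.get)
```

In the noise-free toy case the correct hypothesis reproduces the observation exactly, so the joint search recovers both identity and blur level.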
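
A 1-D toy model of the coded-pixel idea in Sasao et al. (ICCP 2013) above (sizes and random codes are illustrative, and plain least squares stands in for the paper's sparse regularization): identical rectangular pixels make the multi-shift sensing matrix rank-deficient, while random per-pixel sensitivity codes make it invertible.

```python
import numpy as np

rng = np.random.default_rng(1)
N, s = 32, 4                        # high-res samples, decimation factor

def sensing_matrix(weights):
    """One row per low-res pixel per sub-pixel shift (circular boundary).

    weights[i] is the light-sensitivity profile of low-res pixel i over
    the s high-res samples it integrates."""
    rows = []
    for shift in range(s):
        for i in range(N // s):
            row = np.zeros(N)
            row[(s * i + shift + np.arange(s)) % N] = weights[i]
            rows.append(row)
    return np.array(rows)

box = sensing_matrix(np.ones((N // s, s)))                  # identical square pixels
coded = sensing_matrix(rng.uniform(0.2, 1.0, (N // s, s)))  # random per-pixel codes

x = rng.uniform(size=N)                                    # unknown high-res signal
x_hat = np.linalg.lstsq(coded, coded @ x, rcond=None)[0]   # recover from coded shots
```

The box profile zeroes out some spatial frequencies no matter how many shifted shots are taken, so `box` stays rank-deficient; the random codes restore full rank, which is the effect the sprayed powder produces on a real sensor.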
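
The albedo-ratio idea behind Tachikawa et al. (2011) above can be sketched on synthetic Lambertian data (a minimal noise-free sketch; the paper's robust estimation for specularities and shadows is omitted): dividing the two images cancels the unknown per-point albedo, leaving a homogeneous linear system in the two light directions, hence the need for at least six points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: normals are known (shape is given); per-point albedo
# and the two light directions are the unknowns.
n = rng.normal(size=(20, 3))
n[:, 2] = np.abs(n[:, 2])                    # normals face the camera
n /= np.linalg.norm(n, axis=1, keepdims=True)
rho = rng.uniform(0.2, 1.0, size=20)         # unknown diffuse albedo
l1 = np.array([0.3, 0.2, 0.9])               # ground-truth light 1
l2 = np.array([-0.4, 0.1, 0.9])              # ground-truth light 2
I1 = rho * np.clip(n @ l1, 0.0, None)        # image 1 intensities
I2 = rho * np.clip(n @ l2, 0.0, None)        # image 2 intensities

lit = (I1 > 0) & (I2 > 0)                    # keep points lit in both images
n, I1, I2 = n[lit], I1[lit], I2[lit]

# Intensity ratio cancels albedo:
#   I1 / I2 = (n.l1) / (n.l2)  =>  I2 (n.l1) - I1 (n.l2) = 0,
# a homogeneous linear system in the 6 unknowns [l1; l2].
A = np.hstack([I2[:, None] * n, -I1[:, None] * n])
x = np.linalg.svd(A)[2][-1]                  # null vector = [l1; l2] up to scale
l1_est, l2_est = x[:3], x[3:]

def cos_angle(a, b):
    return abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

The directions are recovered up to one global scale; the albedo then follows point-wise as rho = I1 / (n . l1).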
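
The M-sequence used by Furuse et al. (2010) above can be generated in a few lines (the degree, taps, and seed here are illustrative choices, not the paper's code); its two-valued circular autocorrelation is the property that makes the shifted code on the slit light reliably locatable.

```python
import numpy as np

def mseq(n=5, taps=(4, 2)):
    """Maximal-length binary sequence from a Fibonacci LFSR.

    taps are 0-based state indices XORed into the feedback; (4, 2)
    realizes the primitive polynomial x^5 + x^2 + 1 (period 2^5 - 1 = 31)."""
    state = [1] * n
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        fb = state[taps[0]] ^ state[taps[1]]
        state = [fb] + state[:-1]
    return np.array(out)

seq = mseq()
b = 2 * seq - 1                                   # map {0,1} -> {-1,+1}
# Circular autocorrelation: a sharp peak at zero lag and a flat -1
# everywhere else, so the code's shift can be identified even when
# indirect light corrupts the signal.
ac = np.array([int(b @ np.roll(b, k)) for k in range(len(b))])
```

Any nonzero initial state yields the same sequence up to a cyclic shift, which is why the detection reduces to locating the correlation peak.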

Misc.

 134

Books and Other Publications

 9
  • 月刊画像ラボ編集部 (Role: Contributor, pp. 1-6, "Various Methods of Three-Dimensional Measurement and Their Characteristics")
    日本工業出版, Oct, 2017 (ISBN: 9784819029216)
  • 小松英彦, 西田眞也, 本吉勇, 澤山正貴, 渡邊淳司, 黒木忍, 藤崎和香, 大澤五住, 本田学, 日浦慎作, 佐藤いまり, 中内茂樹, 岡谷貴之, 岩井大輔, 坂本真樹, 岡本正吾 (Role: Joint author, 4.1 Properties of Light and Objects that Give Rise to Material Appearance)
    朝倉書店, Oct 20, 2016 (ISBN: 9784254102741)
  • 伊藤 裕之, 岩崎 慶, 大口 孝之, 奥富 正敏, 楽 詠灝, 加藤 博一, 金井 崇, 金田 和文, 後藤 道子, 近藤 邦雄, 斎藤 隆文, 斎藤 英雄, 清水 雅夫, 杉本 麻樹, 高橋 成雄, 乃万 司, 馬場 雅志, 日浦 慎作, 藤代 一成, 宮田 一乘, 村岡 一信 (Role: Joint author, 302-308)
    CG-ARTS協会, Mar 23, 2015 (ISBN: 9784903474496)
  • 天野敏之, 池田聖, 石井裕剛, 石川智也, 一刈良介, 岩井大輔, 内山英昭, 大石岳史, 大隈隆史, 大星直樹, 亀田能成, 神原誠之, 北原格, 清川清, 蔵田武志, 黒田知宏, 黒田嘉宏, 興梠正克, 酒田信親, 柴田史久, 杉本麻樹, 田中秀幸, 谷川智洋, 鳴海拓志, 日浦慎作, 堀謙太, 牧田孝嗣, 山本豪志朗 (Role: Joint author, Chapter 3, Section 1: Modeling of Scene Shape)
    科学情報出版, 2015
  • 高松淳, 日浦慎作, 長原一, 富永昌治, 向川康博 (Role: Joint author, 23-52)
    アドコム・メディア, Dec 7, 2011 (ISBN: 9784915851438)

Presentations

 83

Research Projects

 31

Academic Activities

 3

Social Activities

 2

Media Coverage

 3