Faculty of Science and Technology: Faculty Profiles

Atsushi KOIKE

  (小池 淳)

Profile Information

Affiliation
Professor, Faculty of Science and Technology, Department of Science and Technology, Seikei University
Degree
Ph.D. in Informatics (Kyoto University)

J-GLOBAL ID
201501023813426965
researchmap Member ID
B000243475

Papers

 12
  • K. Sato, M. Sugano, H. Murakami, A. Koike
    ECTI Transactions on Electrical Eng., Electronics, and Communications, 5(2) 308-314, Aug, 2011  Peer-reviewed
  • Mehrdad Panahpour Tehrani, Akio Ishikawa, Shigeyuki Sakazawa, Atsushi Koike
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 21(5-6) 377-391, Jul, 2010  Peer-reviewed
    Multiview images captured by multicamera systems are generally not uniform in the colour domain. In this paper, we propose a novel colour correction method for multicamera systems that can (i) be applied not only to dense multicamera systems but also to sparse multicamera configurations and (ii) obtain an average colour pattern among all cameras. Our proposed colour correction method starts from any camera on the array and proceeds sequentially, following a certain path, over pairs of cameras, until it reaches the starting point and triggers several iterations. The iteration stops when the correction applied to the images becomes small enough. We propose to calculate the colour correction transformation based on energy minimisation using dynamic programming of a nonlinearly weighted Gaussian-based kernel density function of geometrically corresponding feature points, obtained by the modified scale invariant feature transformation (SIFT) method, from several time instances and their Gaussian-filtered images. This approach guarantees the convergence of the iteration procedure without any visible colour distortion. The colour correction is done for each colour channel independently. The process is entirely automatic after estimation of the parameters through the algorithm. Experimental results show that the proposed iteration-based algorithm can colour-correct dense/sparse multicamera systems. The correction always converges to the average colour intensity among viewpoints and outperforms the conventional method. (C) 2010 Elsevier Inc. All rights reserved. (A simplified sketch of the pairwise, per-channel correction idea appears after this publication list.)
  • Osamu Sugimoto, Sei Naito, Shigeyuki Sakazawa, Atsushi Koike
    Proceedings of SPIE - The International Society for Optical Engineering, 7242, 2009  Peer-reviewed
    The authors study a method for objective measurement of perceived picture quality for high definition video based on the full reference framework. The proposed method applies seven spatio-temporal image features to estimate the perceived quality of pictures degraded by compression coding. Computer simulation shows that the proposed method can estimate perceived picture quality at a correlation coefficient of above 0.91. © 2009 SPIE-IS&T.
  • Osamu Sugimoto, Sei Naito, Shigeyuki Sakazawa, Atsushi Koike
    2009 16TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-6, 2237-2240, 2009  Peer-reviewed
    In this paper, the authors propose a novel method to measure the perceived picture quality of H.264-coded video based on a hybrid no-reference framework. The latter term means that the proposed model uses only receiver-side information for objective video quality assessment, but analyzes both the compressed bitstream and the baseband signal of the decoded picture to improve the estimation accuracy of the subjective quality. The proposed method extracts quantizer-scale information from the bitstream along with two spatio-temporal image features from the baseband signal, which are integrated to express the overall quality using the weighted Minkowski metric. A computer simulation shows the proposed method can estimate the subjective quality at a correlation coefficient of 0.909, whereas the PSNR metric, which is referred to as a benchmark, correlates with the subjective quality at a coefficient of 0.773. (A small worked example of weighted Minkowski pooling appears after this publication list.)
  • SUGIMOTO Osamu, NAITO Sei, SAKAZAWA Shigeyuki, KOIKE Atsushi
    PROCEEDINGS OF THE ITE WINTER ANNUAL CONVENTION, 2008, 10-2-1, 2008
    The authors study an objective method to estimate the perceptual picture quality of coded pictures based on a no-reference framework. The proposed method extracts two image features that reflect the significance of blocking and flickering artifacts and estimates the subjective quality by a weighted sum of the image features. Computer simulations show that the proposed method can estimate subjective quality at a correlation coefficient of 0.754.
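
The colour-correction abstract above (Tehrani et al., 2010) describes an iterative, pairwise, per-channel procedure driven by geometrically corresponding feature points. The snippet below is a minimal sketch of the pairwise, per-channel idea only, assuming matched colour samples between two cameras are already available; it fits a simple linear (gain/offset) transfer per channel by least squares instead of the paper's energy minimisation with dynamic programming over a weighted Gaussian kernel density. All function and variable names are hypothetical.

import numpy as np

def fit_channel_transfer(src_samples, ref_samples):
    """Least-squares gain/offset mapping src -> ref for one colour channel.
    Inputs are 1-D arrays of corresponding pixel values sampled at matched
    feature points (e.g. SIFT matches) in the two camera images."""
    gain, offset = np.polyfit(src_samples, ref_samples, 1)
    return gain, offset

def pairwise_colour_correct(image_a, matches_a, matches_b):
    """Correct image_a towards the reference camera, channel by channel.
    matches_a / matches_b: N x 3 arrays of colour values at corresponding
    feature points in image_a and the reference image, respectively."""
    corrected = image_a.astype(np.float64).copy()
    for c in range(3):  # each colour channel is corrected independently
        gain, offset = fit_channel_transfer(matches_a[:, c].astype(np.float64),
                                            matches_b[:, c].astype(np.float64))
        corrected[..., c] = gain * corrected[..., c] + offset
    return np.clip(corrected, 0, 255).astype(np.uint8)

In the paper the correction propagates sequentially around the whole camera array and is iterated until the applied correction becomes small enough; the sketch above covers only a single camera pair.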
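
The hybrid no-reference abstract above integrates a quantizer-scale feature and two spatio-temporal baseband features into a single score with a weighted Minkowski metric. As a small worked example (the weights, exponent, and feature values below are invented for illustration, not the paper's trained parameters), the pooling Q = (sum_i w_i * x_i^p)^(1/p) can be computed as:

import numpy as np

def weighted_minkowski(features, weights, p):
    """Pool per-feature distortion measures into one quality score:
    Q = (sum_i w_i * |x_i|**p) ** (1/p)."""
    features = np.asarray(features, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    return float(np.sum(weights * np.abs(features) ** p) ** (1.0 / p))

# Hypothetical values: one quantizer-scale term and two baseband features.
print(weighted_minkowski([2.1, 0.8, 1.4], [0.5, 0.3, 0.2], p=2.0))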

Misc.

 4

Books and Other Publications

 2
  • Atsushi Koike, Sei Naito, Arei Kobayashi (Role: Contributor, Chapter 6: The One-Seg broadcasting network and bidirectional communication via the Internet)
    Impress, Jun, 2005
  • Atsushi Koike (Role: Contributor, 10-4 Video Databases)
    Corona Publishing, Apr, 2003

Presentations

 4