Research Achievements

村松 大吾

Daigo Muramatsu

Basic Information

Affiliation
Professor, Department of Science and Technology, Faculty of Science and Technology, Seikei University
Degree
Doctor of Engineering (February 2006, Waseda University)

J-GLOBAL ID
200901008108953941
researchmap Member ID
5000098390

Research Keywords (2)

Career (6)

Awards (7)

Papers (89)
  • Susumu Kikkawa, Fumio Okura, Daigo Muramatsu, Yasushi Yagi, Hideo Saito
    IEEE Access 11 19312-19323 2023  Peer-reviewed
  • Ryosuke Hasegawa, Akira Uchiyama, Fumio Okura, Daigo Muramatsu, Issei Ogasawara, Hiromi Takahata, Ken Nakata, Teruo Higashino
    IEEE Access 10 15457-15468 February 2022  Peer-reviewed
  • Daigo Muramatsu, Kousuke Moriwaki, Yoshiki Maruya, Noriko Takemura, Yasushi Yagi
    BIOSIG 2022 - Proceedings of the 21st International Conference of the Biometrics Special Interest Group 213-220 2022  Peer-reviewed
    CNNs are major models used for image-based recognition tasks, including gait recognition, and many CNN-based network structures and learning frameworks have been proposed. Among them, we focus on approaches that use multiple labels for learning, typified by multi-task learning. These approaches are sometimes used to improve the accuracy of the main task by incorporating extra labels associated with sub-tasks. The incorporated labels are usually selected heuristically from real tasks; for example, gender and/or age labels are incorporated together with subject identity labels. We take a different approach: we consider a virtual task as a sub-task and incorporate pseudo output labels together with labels associated with the main task and/or a real task. In this paper, we focus on a gait-based person recognition task as the main task, and discuss the effectiveness of virtual tasks with different pseudo labels for constructing a CNN-based gait feature extractor.
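    A minimal sketch of the virtual sub-task idea above, assuming a PyTorch-style implementation: a shared backbone feeds an identity head (main task) and a pseudo-label head (virtual sub-task), trained with a weighted sum of cross-entropy losses. Layer sizes, the loss weight w, and the head names are illustrative assumptions, not the paper's configuration.

        import torch.nn as nn

        class GaitMultiTaskNet(nn.Module):
            def __init__(self, n_subjects, n_pseudo, feat_dim=256):
                super().__init__()
                self.backbone = nn.Sequential(        # shared GEI feature extractor
                    nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, feat_dim), nn.ReLU(),
                )
                self.id_head = nn.Linear(feat_dim, n_subjects)      # main task
                self.virtual_head = nn.Linear(feat_dim, n_pseudo)   # virtual sub-task

            def forward(self, gei):                   # gei: (B, 1, H, W)
                f = self.backbone(gei)                # f doubles as the gait feature
                return self.id_head(f), self.virtual_head(f)

        def total_loss(id_out, virt_out, id_label, pseudo_label, w=0.3):
            ce = nn.functional.cross_entropy
            return ce(id_out, id_label) + w * ce(virt_out, pseudo_label)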
  • Ryosuke Hasegawa, Akira Uchiyama, Fumio Okura, Daigo Muramatsu, Issei Ogasawara, Hiromi Takahata, Ken Nakata, Teruo Higashino
    Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 481-486 2022
  • Yasushi Makihara, Yuta Hayashi, Allam Shehata, Daigo Muramatsu, Yasushi Yagi
    2021 IEEE International Joint Conference on Biometrics (IJCB) August 4, 2021  Peer-reviewed
  • Masakazu IWAMURA, Shunsuke MORI, Koichiro NAKAMURA, Takuya TANOUE, Yuzuko UTSUMI, Yasushi MAKIHARA, Daigo MURAMATSU, Koichi KISE, Yasushi YAGI
    IEICE Transactions on Information and Systems E104.D(7) 992-1001 July 1, 2021  Peer-reviewed
  • Yiyi Zhang, Yasushi Makihara, Daigo Muramatsu, Jianfu Zhang, Li Niu, Liqing Zhang, Yasushi Yagi
    IEEE Access 9 40550-40559 2021  Peer-reviewed
  • Chi Xu, Atsuya Sakata, Yasushi Makihara, Noriko Takemura, Daigo Muramatsu, Yasushi Yagi, Jianfeng Lu
    IEEE Transactions on Biometrics, Behavior, and Identity Science 1-1 2021  Peer-reviewed
  • Ruochen LIAO, Kousuke MORIWAKI, Yasushi MAKIHARA, Daigo MURAMATSU, Noriko TAKEMURA, Yasushi YAGI
    IEICE Transactions on Information and Systems E104.D(10) 1678-1690 2021  Peer-reviewed
    In this study, we propose a method to estimate body composition-related health indicators (e.g., the ratios of body fat, body water, and muscle) using video-based gait analysis. This method is more efficient than individual measurement using a conventional body composition meter. Specifically, we designed a deep-learning framework with a convolutional neural network (CNN), where the input is a gait energy image (GEI) and the output consists of the health indicators. Although a vast amount of training data is typically required to train network parameters, it is infeasible to collect sufficient ground-truth data, i.e., pairs consisting of a gait video and the health indicators measured using a body composition meter for each subject. We therefore use a two-step approach that exploits an auxiliary gait dataset containing a large number of subjects but lacking the ground-truth health indicators. In the first step, we pre-train a backbone network using the auxiliary dataset to output gait primitives such as arm swing, stride, the degree of stoop, and the body width, which are considered relevant to the health indicators. In the second step, we add some layers to the backbone network and fine-tune the entire network to output the health indicators even with a limited number of ground-truth data points. Experimental results show that the proposed method outperforms both training from scratch and an auto-encoder-based pre-training and fine-tuning approach; it achieves relatively high estimation accuracy for the body composition-related health indicators except for the body fat-relevant ones.
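    A rough sketch of the two-step training described above, under assumed PyTorch layer sizes (the four primitives and eight indicators are placeholders):

        import torch.nn as nn

        backbone = nn.Sequential(                    # GEI in, 128-d feature out
            nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, 128), nn.ReLU(),
        )
        # Step 1: pre-train on the auxiliary dataset to regress gait primitives
        # (arm swing, stride, degree of stoop, body width).
        primitive_head = nn.Linear(128, 4)
        # ... optimize backbone + primitive_head on the auxiliary data ...

        # Step 2: add layers and fine-tune end to end on the limited
        # (GEI, body-composition indicator) pairs.
        health_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8))
        model = nn.Sequential(backbone, health_head)
        # ... fine-tune model on the ground-truth health indicators ...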
  • Kota Aoki, Hirofumi Nishikawa, Yasushi Makihara, Daigo Muramatsu, Noriko Takemura, Yasushi Yagi
    IEEE Access 9 127565-127575 2021  Peer-reviewed
  • Md A.R. Ahad, Thanh T. Ngo, Anindya D. Antar, Masud Ahmed, Tahera Hossain, Daigo Muramatsu, Yasushi Makihara, Sozo Inoue, Yasushi Yagi
    Sensors 20(8) April 2020  Peer-reviewed
  • Atsuya Sakata, Yasushi Makihara, Noriko Takemura, Daigo Muramatsu, Yasushi Yagi
    2020 IEEE International Joint Conference on Biometrics (IJCB) 1-10 2020  Peer-reviewed
  • Allam Shehata, Yuta Hayashi, Yasushi Makihara, Daigo Muramatsu, Yasushi Yagi
    Lecture Notes in Computer Science 90-105 2020
  • Ruochen Liao, Yasushi Makihara, Daigo Muramatsu, Ikuhisa Mitsugami, Yasushi Yagi, Kenji Yoshiyama, Hiroaki Kazui, Masatoshi Takeda
    15(3) 433-441 December 2019  Peer-reviewed
  • Md. Zasim Uddin, Daigo Muramatsu, Noriko Takemura, Md Atiqur Rahman Ahad, Yasushi Yagi
    11(9) November 2019  Peer-reviewed
  • Fumio Okura, Saya Ikuma, Yasushi Makihara, Daigo Muramatsu, Ken Nakada, Yasushi Yagi
    Computers and Electronics in Agriculture 165 104944 October 2019  Peer-reviewed
    The growth of computer vision technology can enable the automatic assessment of dairy cow health, for instance, the detection of lameness. To monitor the health condition of each cow, it is necessary to identify individual cows automatically. Tags using microchips, which are attached to the cow's body, have been employed for the automatic identification of cows. However, tagging requires a substantial amount of effort from dairy farmers and induces stress on the cows because of the body-mounted devices. A method for cow identification based on three-dimensional video analysis using RGB-D cameras, which capture RGB color information as well as the subject's distance from the camera, is proposed. Cameras are mostly maintenance-free, do not contact the cow's body, and are highly compatible with existing vision-based health monitoring systems. Using RGB-D videos of walking cows, a unified approach using two complementary features for identification, gait (i.e., walking style) and texture (i.e., markings), is developed.
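    The paper combines the two complementary cues; one plausible reading is score-level fusion, sketched below with an assumed z-normalization and an assumed weight (not necessarily the authors' scheme):

        import numpy as np

        def fused_score(gait_dist, texture_dist, w_gait=0.5):
            # z-normalize each matcher's distances so they are comparable,
            # then take a weighted sum (smaller = better match)
            g = (gait_dist - gait_dist.mean()) / gait_dist.std()
            t = (texture_dist - texture_dist.mean()) / texture_dist.std()
            return w_gait * g + (1.0 - w_gait) * t

        gallery_ids = np.array([101, 102, 103, 104])          # toy cow IDs
        scores = fused_score(np.array([1.2, 0.4, 2.0, 1.1]),
                             np.array([0.9, 0.5, 1.8, 1.5]))
        print("identified cow:", gallery_ids[scores.argmin()])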
  • Noriko Takemura, Yasushi Makihara, Daigo Muramatsu, Tomio Echigo, Yasushi Yagi
    IEEE Transactions on Circuits and Systems for Video Technology 29(9) 2708-2719 September 2019  Peer-reviewed
  • Thanh Trung Ngo, Md Atiqur Rahman Ahad, Anindya Das Antar, Masud Ahmed, Daigo Muramatsu, Yasushi Makihara, Yasushi Yagi, Sozo Inoue, Tahera Hossain, Yuichi Hattori
    2019 International Conference on Biometrics (ICB) June 2019
  • Atsuya Sakata, Yasushi Makihara, Noriko Takemura, Daigo Muramatsu, Yasushi Yagi
    Computer Vision - ACCV 2018 Workshops 11367 55-63 2019
    Human age is one of the important attributes for many potential applications such as digital signage and customer analysis, and gait-based age estimation is particularly promising for surveillance scenarios because gait can be captured at a distance from a camera. We therefore propose a method of gait-based age estimation using a deep learning framework to advance the state-of-the-art accuracy. Specifically, we employ DenseNet, one of the state-of-the-art network architectures. While a previous method of gait-based age estimation using a deep learning framework was evaluated only on a small-scale gait database, we evaluated the proposed method on OULP-Age, the world's largest gait database, comprising more than 60,000 subjects with ages ranging from 2 to 90 years. Consequently, we demonstrate that the proposed method outperforms existing methods based on both conventional machine learning frameworks for gait-based age estimation and a deep learning framework for gait recognition.
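    A hedged sketch of a DenseNet age regressor on GEIs, assuming a torchvision densenet121 with the stem adapted to one channel and the classifier replaced by a single regression output (the paper's exact architecture and training setup may differ):

        import torch
        import torch.nn as nn
        from torchvision.models import densenet121

        model = densenet121(weights=None)
        model.features.conv0 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                         padding=3, bias=False)  # 1-channel GEI
        model.classifier = nn.Linear(model.classifier.in_features, 1)  # age out

        gei = torch.randn(8, 1, 128, 88)             # toy batch of GEIs
        pred_age = model(gei).squeeze(1)
        target = torch.rand(8) * 88 + 2              # ages in the 2-90 range
        loss = nn.functional.l1_loss(pred_age, target)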
  • Xiang Li, Yasushi Makihara, Chi Xu, Daigo Muramatsu, Yasushi Yagi, Mingwu Ren
    Applied Sciences (Switzerland) 8(8) August 16, 2018
    Silhouette-based gait representations are widely used in the current gait recognition community due to their effectiveness and efficiency, but they are subject to changes in covariate conditions such as clothing and carrying status. Therefore, we propose a gait energy response function (GERF) that transforms a gait energy (i.e., an intensity value) of a silhouette-based gait feature into a value more suitable for handling these covariate conditions. Additionally, since the discrimination capability of gait energies, as well as the degree to which they are affected by the covariate conditions, differs among body parts, we extend the GERF framework to spatially dependent GERF (SD-GERF) which accounts for spatial dependence. Moreover, the proposed GERFs are represented as a vector in the transformation lookup table and are optimized through an efficient generalized eigenvalue problem in a closed form. Finally, two post-processing techniques, Gabor filtering and spatial metric learning, are employed for the transformed gait features to boost the accuracy. Experimental results with three publicly available datasets including clothing and carrying status variations show the state-of-the-art performance of the proposed method compared with other state-of-the-art methods.
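    The closed-form optimization can be illustrated as follows: the GERF is a 256-entry lookup table obtained as the top generalized eigenvector of a discrimination criterion. The scatter matrices below are random stand-ins for the ones the paper derives from training data:

        import numpy as np
        from scipy.linalg import eigh

        levels = 256                                  # gait-energy levels 0..255
        rng = np.random.default_rng(0)
        A = rng.standard_normal((levels, levels))
        S_b = A @ A.T                                 # stand-in between-class scatter
        B = rng.standard_normal((levels, levels))
        S_w = B @ B.T + np.eye(levels)                # stand-in within-class + regularizer

        _, vecs = eigh(S_b, S_w)                      # generalized eigenproblem
        gerf = vecs[:, -1]                            # top eigenvector = lookup table

        def apply_gerf(gei):
            # map each integer gait energy through the learned lookup table
            return gerf[gei.astype(np.int64)]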
  • M.Z. Uddin, T. T. Ngo, Y. Makihara, D. Muramatsu, Y. Yagi
    IPSJ Transactions on Computer Vision and Applications 10(5) 1-11 May 2018  Peer-reviewed
  • N. Takemura, Y. Makihara, D. Muramatsu, T. Echigo, Y. Yagi
    IPSJ Transactions on Computer Vision and Applications 10(4) 1-14 February 2018  Peer-reviewed
  • Md. Zasim Uddin, Daigo Muramatsu, Takuhiro Kimura, Yasushi Makihara, Yasushi Yagi
    IPSJ Transactions on Computer Vision and Applications 9(18) 1-25 December 2017  Peer-reviewed
    Single sensor-based multi-modal biometrics is a promising approach that offers simple system construction, low cost, and wide applicability to real situations such as CCTV footage-based criminal investigations. In multi-modal biometrics, fusion at the score level is a popular and promising approach, and data qualities that affect the matching score of each modality are often incorporated in a quality-dependent score-level fusion framework. This paper presents a very large-scale single sensor-based multi-quality multi-modal biometric score database, called MultiQ Score Database version 2, to advance research into the evaluation, comparison, and benchmarking of score-level fusion approaches under both quality-independent and quality-dependent protocols. We extracted gait, head, and height modalities from the OU-ISIR Gait Database and introduce spatial resolution (SR), temporal resolution (TR), and view as quality measures that significantly affect biometric system performance. We considered seven and ten scaling factors for SR and TR, respectively, with four view variations, and constructed a database comprising approximately 4 million genuine and 7.5 billion imposter scores. To evaluate this database, we set two different protocols and provide recognition accuracies of state-of-the-art approaches under both quality-independent and quality-dependent schemes. This database and the evaluation results will be beneficial for score-level fusion research. Additionally, we provide detailed analyses of the recognition accuracies of the gait, head, and height modalities under different spatial/temporal resolutions and views, which may be useful for criminal investigation research.
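    As a baseline in the spirit of the quality-dependent protocol, one might fuse the three modality scores with a classifier that also sees the quality measures; the sketch below (synthetic data, assumed feature layout) is illustrative, not the database's benchmark code:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        scores = rng.normal(size=(n, 3))        # gait, head, height scores
        quality = rng.uniform(size=(n, 3))      # SR, TR, view encoded numerically
        X = np.hstack([scores, quality, scores * quality])  # interactions
        y = rng.integers(0, 2, size=n)          # 1 = genuine, 0 = imposter

        fuser = LogisticRegression(max_iter=1000).fit(X, y)
        fused = fuser.predict_proba(X)[:, 1]    # fused score in [0, 1]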
  • 鈴木 温之, 村松 大吾, 槇原 靖, 柏本 雄士朗, 八木 康史
    電子情報通信学会論文誌A December 2017  Peer-reviewed
  • H. El-Alfy, C. Xu, Y. Makihara, D. Muramatsu, Y. Yagi
    Proc. of the 4th Asian Conf. on Pattern Recognition (ACPR 2017) November 2017  Peer-reviewed
  • Xiang Li, Yasushi Makihara, Chi Xu, Daigo Muramatsu, Yasushi Yagi, Mingwu Ren
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10112 257-272 2017  Peer-reviewed
    This paper describes a method of clothing-invariant gait recognition that modifies the intensity response function of a silhouette-based gait feature. While a silhouette-based representation such as the gait energy image (GEI) has been popular in the gait recognition community due to its simple yet effective nature, it is also well known that such a representation is susceptible to clothing variations, which significantly change the silhouette (e.g., a down jacket or long skirt). We therefore propose a gait energy response function (GERF) which transforms an original gait energy into another in a nonlinear way that increases discrimination capability under clothing variation. More specifically, the GERF is represented as a vector of components of a lookup table from an original gait energy to another, and its optimization is formulated as a generalized eigenvalue problem considering discrimination capability as well as regularization on the GERF. In addition, we apply Gabor filters to the GEI transformed by the GERF and further apply a spatial metric learning method for better performance. In experiments, the OU-ISIR Treadmill dataset B, which has the largest clothing variation, was used to measure performance in both verification and identification scenarios. The experimental results show that the proposed method achieved state-of-the-art performance in verification scenarios and competitive performance in identification scenarios.
  • 村松 大吾, 槇原 靖, 八木 康史
    電子情報通信学会 基礎・境界ソサイエティ Fundamentals Review 11(2) 93-99 2017
  • Yasushi Makihara, Atsuyuki Suzuki, Daigo Muramatsu, Xiang Li, Yasushi Yagi
    30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) 6786-6796 2017  Peer-reviewed
    This paper describes a joint intensity metric learning method to improve the robustness of gait recognition with silhouette-based descriptors such as gait energy images. Because existing methods often use the difference of image intensities between a matching pair (e.g., the absolute difference of gait energies for the ℓ1-norm) to measure dissimilarity, large intrasubject differences derived from covariate conditions (e.g., large gait energies caused by carried objects vs. small gait energies caused by the background) may wash out subtle intersubject differences (e.g., differences of middle-level gait energies derived from motion differences). We therefore introduce a metric on joint intensity to mitigate the large intrasubject differences as well as leverage the subtle intersubject differences. More specifically, we formulate joint intensity and spatial metric learning in a unified framework and alternately optimize it by linear or ranking support vector machines. Experiments using the OU-ISIR treadmill dataset B with the largest clothing variation and the large population dataset with bag (β version) containing carrying status in the wild demonstrate the effectiveness of the proposed method.
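    One way to picture a learned joint-intensity metric (a simplification of the paper's formulation): score each co-occurring intensity pair (a, b) with a learned table w[a, b], trained here as a linear SVM over joint-intensity histograms. The quantization level, synthetic data, and the plain LinearSVC standing in for the ranking SVM are all assumptions:

        import numpy as np
        from sklearn.svm import LinearSVC

        L = 16                                   # quantized intensity levels

        def joint_hist(g1, g2):
            # 2D histogram of co-occurring gait energies at the same pixel
            h, _, _ = np.histogram2d(g1.ravel(), g2.ravel(),
                                     bins=L, range=[[0, 1], [0, 1]])
            return h.ravel()

        X = np.array([joint_hist(np.random.rand(64, 44), np.random.rand(64, 44))
                      for _ in range(100)])      # toy matching pairs
        y = np.random.randint(0, 2, 100)         # 1 = same subject, 0 = different
        svm = LinearSVC(dual=False).fit(X, y)
        w = svm.coef_.reshape(L, L)              # learned joint-intensity weights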
  • Hazem El-Alfy, Daigo Muramatsu, Yuuichi Teranishi, Nozomu Nishinaga, Yasushi Makihara, Yasushi Yagi
    Thirteenth International Conference on Quality Control by Artificial Vision 10338 2017  Peer-reviewed
    We address the problem of autonomous surveillance for person re-identification. This is an active research area, where most recent work focuses on the open challenges of re-identification, independently of the prerequisites of detection and tracking. In this paper, we are interested in designing a complete surveillance system, joining all the pieces of the puzzle together. We start by collecting our own dataset from multiple cameras. Then, we automate the process of detecting and tracking human subjects in the scenes, followed by the re-identification task. We evaluate the recognition performance of our system, report its strengths, discuss open challenges, and suggest ways to address them.
  • Mohamed Hasan, Yasushi Makihara, Daigo Muramatsu, Yasushi Yagi
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 10118 330-344 2017  Peer-reviewed
    This paper introduces the Gait Gate, the first online walk-through access control system based on multimodal biometric person verification. Face, gait, and height modalities are simultaneously captured by a single RGB-D sensor and fused at the matching-score level. To meet the real-time requirements, the mutual subspace method is used for the face matcher. An acceptance threshold is learned beforehand using data from a set of subjects disjoint from the targets. The Gait Gate was evaluated through experiments in an actual online situation, in which 1324 walking sequences resulted from the verification of 26 targets. The verification results show an average computation time of less than 13 ms and an accuracy of 6.08% FAR and 7.21% FRR.
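    A minimal sketch of the mutual subspace method used for the face matcher: each face-image set is summarized by a linear subspace via SVD, and two sets are compared through the largest canonical correlation between their subspaces (the subspace dimension and image size are assumptions):

        import numpy as np

        def subspace(face_vectors, dim=5):
            # rows = vectorized face images of one sequence; orthonormal basis out
            U, _, _ = np.linalg.svd(face_vectors.T, full_matrices=False)
            return U[:, :dim]

        def msm_similarity(P, G):
            # canonical correlations = singular values of P^T G; take the largest
            return np.linalg.svd(P.T @ G, compute_uv=False).max()

        probe = subspace(np.random.randn(30, 4096))    # 30 frames of 64x64 faces
        gallery = subspace(np.random.randn(40, 4096))
        print("face similarity:", msm_similarity(probe, gallery))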
  • 武村紀子, 白神康平, 槇原靖, 村松大吾, 越後富夫, 八木康史
    電子情報通信学会和文論文誌 December 2016  Peer-reviewed
  • X. Li, Y. Makihara, C. Xu, D. Muramatsu, Y. Yagi, M. Ren
    Proc. of the 13th Asian Conf. on Computer Vision (ACCV 2016) November 2016  Peer-reviewed
  • Kohei Shiraga, Yasushi Makihara, Daigo Muramatsu, Tomio Echigo, Yasushi Yagi
    2016 International Conference on Biometrics (ICB 2016) August 23, 2016  Peer-reviewed
    This paper proposes a method of gait recognition using a convolutional neural network (CNN). Inspired by the great successes of CNNs in image recognition tasks, we feed in the most prevalent image-based gait representation, that is, the gait energy image (GEI), as an input to a CNN designed for gait recognition called GEINet. More specifically, GEINet is composed of two sequential triplets of convolution, pooling, and normalization layers, and two subsequent fully connected layers, which output a set of similarities to individual training subjects. We conducted experiments to demonstrate the effectiveness of the proposed method in terms of cross-view gait recognition in both cooperative and uncooperative settings using the OU-ISIR large population dataset. As a result, we confirmed that the proposed method significantly outperformed state-of-the-art approaches, in particular in verification scenarios.
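    A sketch following the layout named above (two convolution, pooling, and normalization triplets, then two fully connected layers over per-subject similarities); the channel counts and kernel sizes are assumptions, not the published configuration:

        import torch.nn as nn

        class GEINet(nn.Module):
            def __init__(self, n_train_subjects):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 18, 7), nn.ReLU(),
                    nn.MaxPool2d(2), nn.LocalResponseNorm(5),   # triplet 1
                    nn.Conv2d(18, 45, 5), nn.ReLU(),
                    nn.MaxPool2d(3), nn.LocalResponseNorm(5),   # triplet 2
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(), nn.LazyLinear(1024), nn.ReLU(),
                    nn.Linear(1024, n_train_subjects),  # similarities to subjects
                )

            def forward(self, gei):                     # gei: (B, 1, H, W)
                return self.classifier(self.features(gei))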
  • Daigo Muramatsu, Yasushi Makihara, Yasushi Yagi
    IEEE Transactions on Cybernetics 46(7) 1602-1615 July 2016  Peer-reviewed
    Cross-view gait recognition authenticates a person using a pair of gait image sequences with different observation views. View difference causes degradation of gait recognition accuracy, and so several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from that with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on a given gait feature pair, and causes an inhomogeneously biased dissimilarity score. Because it is well known that normalization of such inhomogeneously biased scores improves recognition accuracy in general, we therefore propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures, and use them to calculate the posterior probability that both gait features originate from the same subjects together with the biased dissimilarity score. The proposed method was evaluated against two gait datasets, a large population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures contributes to accuracy improvement in many cross-view settings.
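    The posterior the framework computes can be written, for a dissimilarity score s and quality measures q1 and q2, in a standard Bayes form (the concrete conditional score models are learned from training data):

        P(\mathrm{same} \mid s, q_1, q_2)
          = \frac{p(s \mid \mathrm{same}, q_1, q_2)\, P(\mathrm{same})}
                 {p(s \mid \mathrm{same}, q_1, q_2)\, P(\mathrm{same})
                  + p(s \mid \mathrm{diff}, q_1, q_2)\, P(\mathrm{diff})}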
  • 槇原 靖, 大倉 史生, 満上 育久, 丹羽 真隆, 村松 大吾, 八木 康史
    バイオメカニズム学会誌 40(3) 167-171 2016
    Gait video analysis is a technology with many anticipated applications, such as forensic person identification based on the individuality of walking style, and collecting large amounts of gait video data is an important foundation for it. Gait video databases are generally constructed by collectors filming the walking of recruited participants; because participant recruitment and filming are laborious, scaling such databases up has been difficult. We therefore developed an experience-based automatic gait measurement and data capture system for constructing a large-scale gait database. While participants enjoy a hands-on demonstration of gait video analysis, the system electronically obtains their consent to research use of the data and captures their gait from 14 directions at 15-degree intervals; we estimate that data from tens of thousands of people can be collected over an exhibition period of a little under one year.
  • Yasushi Makihara, Takuhiro Kimura, Fumio Okura, Ikuhisa Mitsugami, Masataka Niwa, Chihiro Aoki, Atsuyuki Suzuki, Daigo Muramatsu, Yasushi Yagi
    2016 International Conference on Biometrics (ICB) 2016  Peer-reviewed
    Biometric data collection is an important first step toward biometrics research practice, although it is a considerably laborious task, particularly for behavioral biometrics such as gait. We therefore propose an automatic gait data collection system in conjunction with an experience-based exhibition. In the exhibition, participants enjoy an attractive online demonstration of state-of-the-art video-based gait analysis comprising intuitive gait feature measurement and gait-based age estimation, while we simultaneously collect their gait data along with informed consent. At the time of this publication, we are holding the exhibition in association with a science museum and have successfully collected the gait data of 47,615 subjects over 246 days, which already exceeds the size of the largest existing gait database in the world.
  • 木村 卓弘, 村松 大吾, 槇原 靖, 八木 康史
    電子情報通信学会論文誌 A December 2015  Peer-reviewed
  • 木村 卓弘, 槇原 靖, 村松 大吾, 八木 康史
    電子情報通信学会論文誌 A December 2015  Peer-reviewed
  • F. Okura, T. Kimura, M. Niwa, I. Mitsugami, A. Suzuki, Y. Makihara, C. Aoki, D. Muramatsu, Y. Yagi
    November 2015  Invited
  • Daigo Muramatsu, Yasushi Makihara, Yasushi Yagi
    IET Biometrics 4(2) 62-73 June 2015  Peer-reviewed
    Gait is a promising modality for forensic science because it has discrimination ability even when gait features are extracted from low-quality image sequences captured at a distance. However, in forensic cases the observation views often differ, leading to accuracy degradation. The authors therefore propose a gait recognition algorithm that achieves high accuracy in cases where the observation views are different. They use a view transformation technique and generate multiple joint gait features by changing the source gait features, based on the hypothesis that the multiple transformed features and original features should be similar to each other if the target subjects are the same. They calculate multiple scores that measure the consistency of the features, and a likelihood ratio from the scores. To evaluate the accuracy of the proposed method, they drew Tippett plots and empirical cross-entropy plots, together with cumulative match characteristic curves and receiver operating characteristic curves, and evaluated discrimination ability and calibration quality. The results show that the proposed method achieves good results in terms of both discrimination and calibration.
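    An illustrative likelihood-ratio step, assuming Gaussian score models fitted to same-subject and different-subject training scores (the paper's score models may differ):

        import numpy as np
        from scipy.stats import norm

        same = np.random.normal(0.8, 0.10, 500)    # stand-in genuine scores
        diff = np.random.normal(0.3, 0.15, 5000)   # stand-in imposter scores

        def likelihood_ratio(score):
            # LR > 1 supports the same-subject hypothesis
            return (norm.pdf(score, same.mean(), same.std())
                    / norm.pdf(score, diff.mean(), diff.std()))

        print(likelihood_ratio(0.7))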
  • Yasushi Makihara, Al Mansur, Daigo Muramatsu, Zasim Uddin, Yasushi Yagi
    2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG) 2015  Peer-reviewed
    This paper describes a method of discriminant analysis for cross-view recognition with a relatively small number of training samples. Since the appearance of a recognition target (e.g., face, gait, gesture, or action) generally changes drastically as the observation view changes, we introduce multiple view-specific projection matrices and project a recognition target from a certain view into a common discriminant subspace by the corresponding view-specific projection matrix. Moreover, because the conventional vectorized representation of an originally higher-order tensor object (e.g., a spatio-temporal image in gait recognition) often suffers from the curse of dimensionality, we encapsulate the multiple view-specific projection matrices in a framework of discriminant analysis with tensor representation, which enables us to overcome this dilemma. Experiments on cross-view gait recognition with two publicly available gait databases show the effectiveness of the proposed method when the training sample size is small.
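    Conceptually, the view-specific projections map features from each view into one common discriminant subspace, as in this toy sketch (random matrices stand in for the learned projections):

        import numpy as np

        n_views, D, d = 4, 512, 32
        W = [np.random.randn(D, d) for _ in range(n_views)]   # per-view projections

        def project(x, view):
            return W[view].T @ x                 # into the common subspace

        probe = project(np.random.randn(D), view=0)
        gallery = project(np.random.randn(D), view=2)
        print("cross-view distance:", np.linalg.norm(probe - gallery))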
  • Takuhiro Kimura, Yasushi Makihara, Daigo Muramatsu, Yasushi Yagi
    2015 International Conference on Biometrics (ICB) 519-526 2015  Peer-reviewed
    We constructed a large-scale multi-quality multi-modal biometric score database to advance studies on quality-dependent score-level fusion. In particular, we focused on single sensor-based multi-modal biometrics because of their advantages of simple system construction, low cost, and wide availability in real situations such as CCTV footage-based criminal investigation, unlike conventional individual sensor-based multi-modal biometrics that require multiple sensors. As for the modalities of multiple biometrics, we extracted gait, head, and the height biometrics from a single walking image sequence, and considered spatial resolution (SR) and temporal resolution (TR) as quality measures that simultaneously affect the scores of individual modalities. We then computed biometric scores of 1912 subjects under a total of 130 combinations of the quality measures, i.e., 13 SRs and 10 TRs, and constructed a very large-scale biometric score database composed of 1,814,488 genuine scores and 3,467,486,568 imposter scores. We finally provide performance evaluation results both for quality-independent and quality-dependent score-level fusion approaches using two protocols that will be beneficial to the score-level fusion research community.
  • Yasushi Makihara, Takuya Tanoue, Daigo Muramatsu, Yasushi Yagi, Syunsuke Mori, Yuzuko Utsumi, Masakazu Iwamura, Koichi Kise
    IPSJ Transactions on Computer Vision and Applications 7 74-78 2015  Peer-reviewed
    Most gait recognition approaches rely on silhouette-based representations due to their high recognition accuracy and computational efficiency, and a key problem for these approaches is how to accurately extract individuality-preserving silhouettes from real scenes, where foreground colors may be similar to background colors and the background is cluttered. We therefore propose a method of individuality-preserving silhouette extraction for gait recognition using standard gait models (SGMs), composed of clean silhouette sequences of a variety of training subjects, as a shape prior. We first match the multiple SGMs to a background subtraction sequence of a test subject by dynamic programming and select the training subject whose SGM fits the test sequence best. We then formulate our silhouette extraction problem in a well-established graph-cut segmentation framework while considering a balance between the observed test sequence and the matched SGM. More specifically, we define an energy function to be minimized using the following three terms: (1) a data term derived from the observed test sequence, (2) a smoothness term derived from spatio-temporally adjacent edges, and (3) a shape-prior term derived from the matched SGM. We demonstrate through experiments using 56 subjects that the proposed method successfully extracts individuality-preserving silhouettes and improves gait recognition accuracy.
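    With s_p the label of pixel p, the energy minimized by the graph cut combines the three terms listed above; the balancing weights lambda and mu are an assumed parameterization:

        E(S) = \sum_{p} D_p(s_p)
             + \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(s_p, s_q)
             + \mu \sum_{p} B_p(s_p)

    where D is the data term from the observed sequence, V the spatio-temporal smoothness term over adjacent edges \mathcal{N}, and B the shape-prior term from the matched SGM.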
  • Daigo Muramatsu, Yasushi Makihara, Yasushi Yagi
    2015 International Conference on Biometrics (ICB) 169-176 2015  Peer-reviewed
    Gait recognition has the potential to identify subjects in CCTV footage thanks to its robustness to low image resolution. In CCTV footage, however, several body regions of a subject are often unobservable because of occlusion and/or cropping by the limited field of view, so recognition must be performed from a pair of partially observed data. The most popular approach to recognition from partially observed data is to match the data over the common observable region. This approach, however, cannot be applied when the matching pair has no common observable region. We therefore propose an approach that enables recognition even from a pair with no common observable region. In the proposed approach, we reconstruct the entire gait feature from a partial gait feature extracted from the observable region using a subspace-based method, and match the reconstructed entire gait features for recognition. We evaluate the proposed approach on two different datasets. In the best case, the proposed approach achieves recognition from such a pair with an EER of 16.2%.
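    A compact sketch of subspace-based completion: fit PCA coefficients using only the observed dimensions of the partial feature, then reconstruct the entire feature from the basis. The PCA training data, dimensions, and observed-index choice are toy assumptions:

        import numpy as np

        def reconstruct(partial, observed_idx, mean, basis):
            # basis: (D, k) subspace of entire gait features; mean: (D,)
            A = basis[observed_idx]                    # visible rows of the basis
            b = partial - mean[observed_idx]
            coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
            return mean + basis @ coeff                # entire reconstructed feature

        D, k = 1024, 20
        train = np.random.randn(200, D)                # toy complete features
        mean = train.mean(0)
        U, _, _ = np.linalg.svd((train - mean).T, full_matrices=False)
        basis = U[:, :k]
        idx = np.arange(D // 2)                        # upper half observed
        full = reconstruct(np.random.randn(D // 2), idx, mean, basis)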

MISC (109)

Research Projects (Joint Research and Competitive Funding) (9)