Research Achievements

山添 大丈

ヤマゾエ ヒロタケ  (YAMAZOE HIROTAKE)

Basic Information

Affiliation
Associate Professor, Department of Electronics and Information Engineering, Graduate School of Engineering, University of Hyogo
Degree
Doctor of Engineering (Osaka University)

J-GLOBAL ID
200901050672492802
researchmap Member ID
5000024529

External Links

Papers

 33
  • 𠮷田 竣亮, 施 真琴, 内海 章, 山添 大丈
    知能と情報 36(2) 623-630 May 15, 2024  Peer-reviewed, Last author, Corresponding author
  • 施 真琴, 内海 章, 山添 大丈, 李 周浩
    電子情報通信学会論文誌D 情報・システム J106-D(4) 268-276 April 1, 2023  Peer-reviewed
    Most conventional deep learning approaches adopt an end-to-end training process that minimizes the estimation error on given input/output data. This makes it difficult to understand the mechanism by which the trained network functions. On the other hand, we often have sufficient prior knowledge of the relationships among parameters relevant to the target task (e.g., object motion or camera projection). This paper proposes a method for integrating such known task-related relationships into a neural network as predefined computation modules. The integrated network still allows conventional end-to-end training, while the training process automatically induces a sub-network that estimates the hidden parameters required by the embedded computation modules. In our experiments, we applied the proposed method to the task of estimating 2D eye positions from facial images, embedding a computation module that performs the geometric calculation of perspective projection into the neural network. The results confirmed that embedding the computation module improves the accuracy of 2D eye position estimation, and that the integrated network acquires the ability to estimate the 3D eye position, a hidden parameter.
  • Hirotake Yamazoe, Jaemin Chun, Youngsun Kim, Kenta Miki, Takuya Imazaike, Yume Matsushita, Joo-Ho Lee
    Intelligent Service Robotics November 8, 2022  Peer-reviewed, Lead author
  • Makoto Sei, Akira Utsumi, Hirotake Yamazoe, Joo-Ho Lee
    Applied Intelligence 52(10) 11506-11516 January 27, 2022  Peer-reviewed
  • Yume Matsushita, Dinh Tuan Tran, Hirotake Yamazoe, Joo-Ho Lee
    Journal of Computational Design and Engineering 8(6) 1499-1532 October 29, 2021  Peer-reviewed
    Gait analysis has been studied for a long time and applied to fields such as security, sport, and medicine. In particular, clinical gait analysis has played a significant role in improving the quality of healthcare. With the growth of machine learning technology in recent years, deep learning-based approaches to gait analysis have become popular. However, a large number of samples are required for training models when using deep learning, where the amount of available gait-related data may be limited for several reasons. This paper discusses certain techniques that can be applied to enable the use of deep learning for gait analysis in case of limited availability of data. Recent studies on the clinical applications of deep learning for gait analysis are also reviewed, and the compatibility between these applications and sensing modalities is determined. This article also provides a broad overview of publicly available gait databases for different sensing modalities.
  • Miran Lee, Ko Ameyama, Hirotake Yamazoe, Joo-Ho Lee
    ROBOMECH Journal 7(1) December 2020  Peer-reviewed
    As the geriatric population expands, caregivers require more accurate training to handle and care for the elderly. However, students lack methods for acquiring the necessary skill and experience, as well as sufficient opportunities to practice on real human beings. To investigate the necessity and feasibility of care training assistant robots in care education, we developed a simulated robot as a shoulder complex joint with multi-DOF. In this study, five experts with years of experience in elderly care participated in the data-acquisition process, to acquire information on aspects such as the glenohumeral angle, as well as the sterno-clavicular joint and its torque. The experts performed three types of range-of-motion exercises: (i) elevation–depression of the sterno-clavicular joint; (ii) extension and flexion of the glenohumeral joint; (iii) lateral and medial rotation of the glenohumeral joint. The experimental results showed that the quantitative results for all the exercises were significantly different between the experts. Moreover, we observed that even experienced professionals need consistent care education based on quantitative data and feedback. Thus, we confirmed the necessity and feasibility of the care training assistant robot for improving the skills required for elderly care.
  • Dinh Tuan Tran, Hirotake Yamazoe, Joo-Ho Lee
    Applied Intelligence 50(5) 1468-1486 May 2020  Peer-reviewed
  • Akimichi Kojima, Hirotake Yamazoe, Joo-Ho Lee
    Journal of Robotics and Mechatronics 32(1) 173-182 February 20, 2020  Peer-reviewed
    In this paper, we propose a wearable robot arm designed with consideration of weight and usability. Based on the features of existing wearable robot arms, we focused on the issues of weight and usability. The behavior of human hands during physical work can be divided into two phases. In the first, the shoulder and elbow joints move before commencing the task with the hands. In the second, the wrist joints move during the actual work. We found that these features can be applied to wearable robot arms. Consequently, we propose a hybrid actuation system (HAS) that combines two types of joints. In this study, HAS is implemented in a prototype wearable robot arm, the assist oriented arm (AOA). To verify the validity of the proposed system, we implemented three types of robot arms (PasAct, Act, 6DOF) in simulation to compare their weight, working efficiency, and usability. Furthermore, we compared these simulation models with AOA for evaluation.
  • 山添 大丈, 満上 育久, 小川 拓也, 八木 康史
    看護理工学会誌 7 33-42 February 2020  Peer-reviewed, Lead author
  • Hirotake Yamazoe, Ikuhisa Mitsugami, Tsukasa Okada, Yasushi Yagi
    Experimental Brain Research 237(11) 3047-3058 November 17, 2019  Peer-reviewed, Lead author
  • Miran Lee, Kodai Murata, Ko Ameyama, Hirotake Yamazoe, Joo-Ho Lee
    Intelligent Service Robotics 12(4) 277-287 October 2019  Peer-reviewed
  • Hirotake Yamazoe, Hitoshi Habe, Ikuhisa Mitsugami, Yasushi Yagi
    Computational Visual Media 4(2) 103-111 June 2018  Peer-reviewed, Lead author
  • Ryuhei Sakurai, Taiki Shimba, Hirotake Yamazoe, Joo-Ho Lee
    Journal of Korea Robotics Society 13(1) 16-25 March 31, 2018  Peer-reviewed
  • Hirotake Yamazoe, Misaki Kasetani, Tomonobu Noguchi, Joo-Ho Lee
    Advances in Robotics Research 2(1) 45-57 March 2018  Peer-reviewed, Lead author
  • H.Yamazoe, I.Mitsugami, T.Okada, T.Echigo, Y.Yagi
    Transactions of the Virtual Reality Society of Japan 22(3) 435-443 2017
    We analyzed changes in human gait (way of walking) that corresponded to changes in human gaze direction. For this purpose, we constructed an immersive walking environment in which we measured participant gait in various controlled gazing situations via a motion capture system and an eye tracker. The environment consisted of a treadmill and a 180-degree multi-screen for presenting the gazing target. As preliminary analysis of gaze-gait relations, in this paper, we focused on arm and leg swing amplitudes as a measure of gait and analyzed the relationship between gaze and arm/leg swings. Unlike previous studies, we sought to analyze behavior that occurred when humans intentionally gazed at a specific target. Our experimental results indicate that arm swing is affected by gaze direction. We observed a tendency for decreased swing amplitude in the arm further from the gazing direction, and for increased swing amplitude in the arm closer to the gaze direction. Contrary to our results for arm swing, we did not observe any evidence of modulated leg motion by gaze direction. Our results suggest that it may be possible to estimate gaze from human gait.
  • Dinh Tuan Tran, Ryuhei Sakurai, Hirotake Yamazoe, Joo-Ho Lee
    International Journal of Biomedical Imaging 2017 2017
    In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We have an awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.
  • Mitsuru Nakazawa, Ikuhisa Mitsugami, Hitoshi Habe, Hirotake Yamazoe, Yasushi Yagi
    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING 10(S) S108-S115 October 2015
    When using multiple Kinects, sufficient distance must be kept between neighboring Kinects to avoid corrupted range data caused by interference among their infrared speckle patterns. In such an arrangement, the overlapping regions are too small to directly apply existing calibration methods that use correspondences between observations. Therefore, we propose a method to calibrate Kinects without large overlapping regions. In our method, first, we add extra RGB cameras to the environment to compensate for the small overlapping regions. Thanks to them, we can estimate the camera parameters by obtaining correspondences between color images. Next, for accurate calibration that considers the range data as well as the color images of the Kinects, we optimize the estimated parameters by minimizing both the errors of correspondences between color images and those of the range data of planar regions, such as walls and floors, that exist in a typical environment. Although our method consists of conventional techniques, their combination is optimized to achieve the calibration. (C) 2015 Institute of Electrical Engineers of Japan. Published by John Wiley &amp; Sons, Inc.
  • TSUJI Airi, YONEZAWA Tomoko, YAMAZOE Hirotake, ABE Shinji, KUWAHARA Noriaki, MORIMOTO Kazunari
    International Journal of Advanced Computer Science and Applications 5(10) 140-145 October 2014  Peer-reviewed
    Elderly people need to urinate frequently, and when they go on outings they often have difficulty finding restrooms. Because of this, research on a body water management system is needed. Our proposed system calculates the timing of trips to the toilet in consideration of both the user's schedule and the amount of body water to be expelled, and recommends using the restroom sufficiently in advance of the need to urinate. In this paper, we describe the methods used in this system and show experimental results for the toilet timing suggestion methods.
  • Hirotake Yamazoe
    IPSJ Transactions on Computer Vision and Applications 6 78-82 2014
    In this paper, we propose a method to estimate the positions and poses of multiple cameras and to achieve temporal synchronization among them by using blinking calibration patterns. In the proposed method, calibration patterns are shown on tablet PCs or monitors and observed by multiple cameras. By observing several frames from the cameras, we can obtain the camera positions and poses as well as the frame correspondences among the cameras. The proposed calibration patterns are based on pseudo-random volumes (PRV), a 3D extension of pseudo-random sequences. We believe our method is useful not only for multiple-camera systems but also for AR applications for multiple users.
  • Mitsuru Nakazawa, Ikuhisa Mitsugami, Hirotake Yamazoe, Yasushi Yagi
    IPSJ Transactions on Computer Vision and Applications 6(10) 63-67 2014
    We propose a novel method to estimate the head orientation of a pedestrian. There have been many methods for head orientation estimation based on the facial textures of pedestrians. It is, however, impossible to apply these methods to the low-resolution images captured by a surveillance camera at a distance. To deal with this problem, we construct a method based not on facial textures but on gait features, which can be robustly obtained even from low-resolution images. In our method, first, size-normalized silhouette images of pedestrians are generated from captured images. We then obtain the Gait Energy Image (GEI) from the silhouette images as a gait feature. Finally, we generate a discriminant model to classify head orientation. For this training step, we built a dataset consisting of gait images of over 100 pedestrians and their head orientations. In evaluation experiments using the dataset, we classified head orientation by the proposed method. We confirmed that gait changes of the whole body were effective for estimation in quite low-resolution images, which existing methods cannot deal with due to the lack of facial textures.
  • Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe
    Paladyn 4(2) 113-122 2013
  • 中島秀真, 満上育久, 波部斉, 山添大丈, 槇原靖, 八木康史
    日本バーチャルリアリティ学会論文誌 17(3) 209-217 2012  Peer-reviewed
    Existing background subtraction methods often fail to extract a foreground region whose color is similar to that of the background. When we use a co-located camera and range sensor, by which we can obtain both a color image and a depth map simultaneously, a better foreground region can be expected by integrating the two kinds of images. However, this is not straightforward when a moving object is observed, because the camera and range sensor do not capture the scene synchronously. In this paper, we propose a novel method that pseudo-synchronizes the camera and range sensor and integrates the background subtraction of the color and depth images to realize good foreground extraction. Experimental results for a walking human show its effectiveness.
  • Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe
    International Journal of Autonomous and Adaptive Communications Systems 5(1) 18-38 January 2012
    This paper introduces a daily-partner robot that is aware of the user's situation through gaze and utterance detection. For appropriate anthropomorphic interaction, the robot should talk to the user with proper timing, without interrupting her/his task. Our proposed robot 1) estimates the user's context (the target of her/his speech) by detecting her/his gaze and utterances, 2) expresses the need to speak to the user by silent gaze-turns towards the user and the object of joint attention (speech-implying behaviour), and 3) delivers the message when the user talks to the robot. Based on preliminary results showing sufficient human sensitivity to the robot's speech-implying behaviours, we evaluate the proposed behavioural model. The results show that crossmodal awareness is effective for respectful communication that does not disturb the user's ongoing task: silent behaviours effectively show the robot's intention to speak and draw the user's attention. Copyright © 2012 Inderscience Enterprises Ltd.
  • Tomoko Yonezawa, Hirotake Yamazoe, Yuichi Koyama, Shinji Abe, Kenji Mase
    ヒューマンインタフェース学会論文誌 (Information and Media Technologies) 13(3) 5-17 2011
    This paper proposes a video communication assist system using a companion robot that acts in coordination with the user's conversational attitude toward the communication. In order to maintain a conversation and achieve comfortable communication, it is necessary to provide attitude-aware assistance. First, the system estimates the user's conversational state with a machine learning method. Next, the robot appropriately expresses active listening behaviors, such as nodding and gaze turns, to compensate for the listener's attitude when she/he is not really listening to the other user's speech; it shows communication-evoking behaviors (topic provision) to compensate for the lack of a topic; and the system switches the camera images to create an illusion of eye contact, corresponding to the current context of the user's attitude. Empirical studies and a demonstration experiment showed that i) both the robot's active listening behaviors and the switching of camera images compensate for the other person's attitude, ii) elderly people prefer long intervals between the robot's behaviors, and iii) the topic provision function is effective during awkward silences.
  • 山添大丈, 米澤朋子, 内海章, 安部伸治
    電子情報通信学会論文誌D J94-D(6) 998-1006 2011
  • 水戸和, 美馬達也, 山添大丈, 吉田俊介, 多田昌裕, 寒川雅之, 金島岳, 奥山雅則, 野間春生
    計測自動制御学会論文集 (Short Paper) 47(1) 40-42 2011
    We have designed and fabricated tactile array sensors with three inclined micro-cantilevers embedded in elastomer, which can detect both normal and shear stresses. In this paper, we confirm gripping status classification using the sensor output. Using our sensor, four gripping statuses (free, grasping, holding, and slipping) could be classified with significant accuracy.
  • 米澤朋子, 山添大丈, 内海章, 安部伸治
    電子情報通信学会論文誌D J92-D(1) 81-92 2009
    In this paper, we propose a stuffed-toy robot system that expresses gaze-communication behaviors, using joint attention and eye contact in stages according to the user's gaze direction detected by non-wearable, image-based gaze estimation, and we verify and discuss the design of gaze behaviors based on a gaze-communication model. Evaluating the effects of the anthropomorphic medium's different gaze behaviors on users' impressions and behavior showed that (i) the stuffed toy's gaze behaviors attract the user's unconscious gaze, (ii) joint-attention behaviors responding to the user's gaze give a natural impression, (iii) the stuffed toy's eye-contact reactions create user affinity in communication, and (iv) this affinity is further strengthened when joint-attention behaviors are shown in addition to eye-contact reactions. These results confirm the effectiveness of the gaze-communication model that uses gaze behaviors in stages.
  • Hirotake Yamazoe, Akira Utsumi, Kenichi Hosaka, Masahiko Yachida
    IMAGE AND VISION COMPUTING 25(12) 1848-1855 December 2007
    In this paper, we propose a body-mounted system to capture user experience as audio/visual information. The proposed system consists of two cameras (head-detection and wide-angle) and a microphone. The head-detection camera captures the user's head motions, while the wide-angle color camera captures the user's frontal view. An image region approximately corresponding to the user's view is then synthesized from the wide-angle image based on the estimated head motions. The synthesized image and head-motion data are stored in a storage device together with the audio data. This system overcomes the disadvantages of head-mounted cameras in terms of ease of putting on/taking off the device, and it has a less obtrusive visual impact on other people. Using the proposed system, we can simultaneously record audio data, images in the user's field of view, and head gestures (nodding, shaking, etc.). These data contain significant information for recording/analyzing human activities and can be used in wider application domains such as a digital diary or interaction analysis. Experimental results demonstrate the effectiveness of the proposed system. (C) 2006 Elsevier B.V. All rights reserved.
  • 山添大丈, 内海 章, 安部伸治
    映像情報メディア学会誌 61(12) 1750-1755 2007
  • 山添大丈, 内海章, 鉄谷信二, 谷内田正彦
    電子情報通信学会論文誌D J89-D(1) 14-26 2006
    We propose a method for estimating human head motion using a multi-viewpoint system consisting of fixed cameras and a mobile camera. Systems that continuously record a scene with fixed cameras and a mobile camera mounted on a person's head (head-mounted camera) have been proposed to record people's positions and gaze directions (gazed objects), which are important for recording human activities and analyzing interactions. In this paper, we consider estimating the head position and pose of each person in the scene from the video obtained by such a system. The proposed method first tracks each person's position using the observations of the mobile and fixed cameras. Using the estimated changes in each person's position, it estimates the motion of the person and background regions in the mobile camera view, and estimates the camera's position and pose by comparing them with the mobile camera images. This makes it possible to estimate the 3D positions of all people in the scene and the head motion of the person wearing the camera. In experiments, we evaluate the position and pose estimation accuracy of the proposed method, demonstrate its effectiveness, and show an example of applying it to an actual conversation scene.
  • 内海 章, 山添大丈, 鉄谷信二, 保坂憲一, 猪木誠二
    電子情報通信学会論文誌D J89-D(1) 84-94 2006
    We propose a method for estimating the number and position distribution of pedestrians in a scene from images of many walking people observed by multiple cameras, using a model selection criterion. When estimating the pedestrian distribution in a crowded scene, the loss of observation information caused by occlusions among pedestrians is a problem. In particular, when pedestrians are close to each other, occlusions are unavoidable, making it difficult to estimate the number of pedestrians and their individual positions. The proposed method directly estimates the pedestrian distribution by fitting models to the multi-viewpoint observation images using a model selection criterion. We adopt the minimum description length (MDL) criterion and, based on a shape projection model that relates pedestrian positions to input silhouette images, estimate the pedestrian distribution that maximizes the likelihood of the observed images. In this paper, we describe our multi-viewpoint pedestrian distribution estimation method and demonstrate its effectiveness through computer simulations and experiments with real images.
  • 山添大丈, 内海章, 保坂憲一, 谷内田正彦
    映像情報メディア学会誌 59(4) 581-587 2005
  • 山添大丈, 内海章, 鉄谷信二, 谷内田正彦
    映像情報メディア学会誌 58(11) 1639-1648 2004

MISC

 276

Books and Other Publications

 1

Lectures and Oral Presentations

 225

Courses Taught

 22

Research Projects (Joint Research and Competitive Funding)

 7