Curriculum Vitae

Norifumi Watanabe

  (渡邊 紀文)

Profile Information

Affiliation
Musashino University
Degree
Ph.D. in Media and Governance (Keio University)

J-GLOBAL ID
201301073774840492
researchmap Member ID
B000230846

Completed the doctoral program at the Graduate School of Media and Governance, Keio University. After working as a contract researcher at the Brain Science Institute, Tamagawa University, he served as an assistant professor in the School of Computer Science, Tokyo University of Technology, and then as an assistant professor in the Information Architecture course, Graduate School of Industrial Technology, Advanced Institute of Industrial Technology. He is currently an associate professor in the Faculty of Data Science and the Education Division, Musashino University. Ph.D. in Media and Governance.
His specialties are perceptual information processing, neural information processing, and cognitive science. His research includes modeling neurons involved in visual information processing and simulating them computationally; in recent years, as applied research, he has developed interfaces that estimate human intentions and support their actions, and has competed in RoboCup with the aim of realizing intelligent robots.

Papers

 46
  • Kota Itoda, Norifumi Watanabe, Yasushi Kiyoki
    2022 13th International Congress on Advanced Applied Informatics Winter (IIAI-AAI-Winter), Dec, 2022  
  • Kensuke Miyamoto, Norifumi Watanabe, Osamu Nakamura, Yoshiyasu Takefuji
    Applied Sciences, 12(17) 8720-8720, Aug 31, 2022  
    Human cooperative behavior includes passive action strategies based on others and active action strategies that prioritize one's own objective. Therefore, for cooperation with humans, it is necessary to realize a robot that uses these strategies to communicate as a human would. In this research, we aim to realize robots that evaluate the actions of their opponents in comparison with their own action strategies. In our previous work, we obtained a Meta-Strategy with two action strategies through the simulation of learning between agents. However, human Meta-Strategies may have different characteristics depending on the individual. In this study, we conducted a collision avoidance experiment in a grid space with agents that use active and passive strategies for giving way, and analyzed whether a subject's actions change when the agent's strategy changes. The results showed that some subjects changed their actions in response to changes in the agent's strategy, while others behaved in a fixed way regardless of the agent's strategy, and still others did not clearly differentiate their actions. We consider that these types can be expressed as differences in Meta-Strategies, such as active or passive Meta-Strategies for estimating an opponent's strategy. Assuming a human Meta-Strategy, we discuss the action strategies of agents that can switch between active and passive strategies.
  • Norifumi Watanabe, Kota Itoda
    Proceedings of the 14th International Conference on Agents and Artificial Intelligence, 299-305, 2022  
  • Norifumi Watanabe, Kensuke Miyamoto
    2022 Joint 12th International Conference on Soft Computing and Intelligent Systems and 23rd International Symposium on Advanced Intelligent Systems (SCIS&ISIS), 1-5, 2022  
    In human cooperative behavior, there are two strategies: a passive behavioral strategy based on others' behaviors and an active behavioral strategy that prioritizes one's own objective. However, it is unclear how humans acquire a meta-strategy for switching between them. In this study, we conduct a collision-avoidance experiment with agents taking multiple strategies in a grid-like corridor to see whether a subject's behavior changes when the agent's strategy changes. Furthermore, we compare the behaviors selected by the subjects with the behaviors of agents trained by reinforcement learning. The experimental results show that subjects can read a change in strategy from the behavior of the opposing agent.
  • Eimei Oyama, Kohei Tokoi, Ryo Suzuki, Sousuke Nakamura, Naoji Shiroma, Norifumi Watanabe, Arvin Agah, Hiroyuki Okada, Takashi Omori
    Advanced Robotics, 35(20) 1223-1241, Oct 18, 2021  
  • Kensuke Miyamoto, Norifumi Watanabe, Yoshiyasu Takefuji
    Applied Sciences (Switzerland), 11(4) 1-14, Feb 2, 2021  
    In human cooperative behavior, there are two strategies: a passive behavioral strategy based on others' behaviors and an active behavioral strategy that prioritizes one's own objective. However, it is not clear how to acquire a meta-strategy for switching between them. The purpose of this study is to create agents with such a meta-strategy and to enable complex behavioral choices with a high degree of coordination. We experimented using multi-agent collision avoidance simulations as an example of a cooperative task. In the experiments, we used reinforcement learning to obtain an active strategy and a passive strategy by rewarding the interaction between agents facing each other. Furthermore, we examined and verified the meta-strategy in situations where the opponent's strategy is switched.
  • Motokazu Moritani, Norifumi Watanabe, Kensuke Miyamoto, Kota Itoda, Junya Imani, Hiroyuki Aoyama, Yoshiyasu Takefuji
    IPSJ Journal (Web), 62(2), 2021  
  • Norifumi Watanabe, Kensuke Miyamoto
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12568 LNAI 84-99, 2021  
    In our cooperative behavior, there are two strategies: a passive behavioral strategy based on others' behaviors and an active behavioral strategy that prioritizes one's own objective. However, it is not clear how to acquire a meta-strategy for switching between them. The purpose of this study is to create agents with such a meta-strategy and to enable complex behavioral choices with a high degree of coordination. We experimented using multi-agent collision avoidance simulations as an example of a cooperative task. In the experiments, we used reinforcement learning to obtain an active strategy and a passive strategy by rewarding the interaction between agents facing each other. Furthermore, we examined and verified the meta-strategy in situations where the opponent's strategy is switched.
  • Norifumi Watanabe, Motokazu Moritani
    IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020-October 3768-3775, Oct 11, 2020  
    One of the external factors that affect human intellectual activity is the concentration of carbon dioxide in the environment. Previous studies have measured carbon dioxide concentration for a single person in a room, but it is not clear how the concentration changes when multiple people are present. In this study, we used small sensor devices to simultaneously measure the carbon dioxide concentration at multiple points in a room occupied by several people. Furthermore, by comparing values predicted by existing prediction models with the measured values, we verified whether the prediction models can be used effectively with this measurement method. The experimental results show that carbon dioxide generated by human exhalation diffuses regardless of the distance from the devices, the height difference, or the placement of the people. The existing model for predicting carbon dioxide concentration was shown to be sufficiently useful for rooms with at least a certain ventilation rate, but in rooms with less ventilation there was an error between the predicted and measured values.
  • Motokazu Moritani, Norifumi Watanabe, Kensuke Miyamoto, Kota Itoda, Junya Imani, Hiroyuki Aoyama, Yoshiyasu Takefuji
    Applied Sciences (Switzerland), 10(13), Jul 1, 2020  Peer-reviewed
    Recent indoor air quality studies show that even a 1000 parts per million (ppm) concentration of carbon dioxide (CO2) has an adverse effect on human intellectual activities. It is therefore necessary to keep the CO2 concentration in a room below a certain value. In this study, in order to analyze the diffusion of carbon dioxide from breathing, we constructed a simultaneous multi-point sensing system equipped with CO2 concentration sensors to measure the indoor environment. Furthermore, we evaluated whether the prediction model can be used effectively by comparing the values predicted by the model with the values actually measured by the sensors. The experimental results showed that CO2 from exhaled breath diffuses evenly throughout the room regardless of the sensors' positions relative to the human test subjects. The existing model is sufficiently accurate in a room with at least a 0.67 cycle/h ventilation rate. However, there is a large gap between the measured values and the model's predictions in a room with a low ventilation rate, which suggests that measurement with sensors is still necessary to precisely monitor indoor air quality.
  • Norifumi Watanabe, Kota Itoda
    Advances in Intelligent Systems and Computing, 948 568-573, 2020  
    We conducted a behavioral experiment using a pattern task that abstracts cooperative behaviors requiring intention estimation and action switching toward specific goals, and analyzed the strategies used to adjust cooperative intention estimation. In this research, we constructed an agent model with three strategies: “random selection”, “self-priority selection”, and “estimation of the other agent's target pattern”. The decision-making process was then verified by simulation.
  • Motokazu Moritani, Norifumi Watanabe, Junya Imani, Kota Itoda, Hiroyuki Aoyama, Yoshiyasu Takefuji
    Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018, 2382-2387, Jan 16, 2019  Peer-reviewed
    The latest studies show that various indoor substances have an adverse effect on the quality of human life. It has been reported that a high concentration of carbon dioxide (CO2) has an adverse effect on human intellectual activity. In order to improve our intellectual activities, we must clarify how CO2 from human breath diffuses, using CO2 measurement units. In this study, we measured and analyzed the diffusion of CO2 depending on people's positions, distances, and heights in a room by simultaneous multi-point sensing. In order to control the amount of CO2 emitted from human breath, we examined respiration induction, in which human breathing frequency is guided by music. The results show that CO2 from human breath diffuses throughout the room regardless of the positions of the people, the distance, and the height difference. We also found that it is possible to influence CO2 emissions by respiration induction using monotonous music.
  • Norifumi Watanabe, Kota Itoda
    Advances in Intelligent Systems and Computing, 848 326-333, 2019  Peer-reviewed
    In goal-type ball games such as soccer and handball, a ball holder searches for multiple teammates who can receive a pass, estimates each player's intention, and selects a player to pass to. Furthermore, the ball holder checks the positions and behaviors of opponents around those teammates, estimates their intentions, and determines the teammate to whom a pass is most likely to succeed. To realize such instantaneous intention estimation and judgment under strong temporal and spatial constraints, cooperative patterns shared within the group are thought to exist. In this research, we therefore focus on human gaze behaviors in goal-type ball games. We presented subjects with the first-person perspective of professional soccer players in a virtual environment, and analyzed their gaze behaviors before and after training in order to model cooperative patterns. Based on the results, we model a process of intention estimation concerning cooperative patterns. We discuss how subjects switch their behavior by estimating the intentions of other players when visual information is presented from the first-person perspective.
  • Norifumi Watanabe, Fumihiko Mori
    Procedia Computer Science, 123 534-540, 2018  Peer-reviewed
    In this study, we investigate the mechanism by which visual and somatosensory information are integrated during walking. In the experiment, we evaluated whether walking can be affected by attenuating somatosensory input through vibration stimulation of the feet while presenting optic flow to the peripheral visual field to generate a vision-dominant sensation of self-motion. The results confirmed that presenting the optic flow together with vibration stimulation induced walking in the direction opposite to the sensation of self-motion. Based on these results, we propose that sensory modalities such as vision and somatosensation are not exclusive in walking, but are integrated by superposition.
  • Norifumi Watanabe, Motokazu Moritani, Kensuke Miyamoto, Kota Itoda, Junya Imani
    Bulletin of Advanced Institute of Industrial Technology, (12) 101-106, 2018  
  • Kota Itoda, Norifumi Watanabe, Yoshiyasu Takefuji
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10414 LNAI 249-258, 2017  Peer-reviewed
    Realizing flexible cooperative group behavior between humans and social robots or agents requires a mutual understanding of each participant's intentions and behaviors. To understand cooperative intelligence in group behavior, we must clarify the decision-making process, including intention estimation, among multiple persons. Multi-person decision making involves top-down intention sharing and bottom-up decision making based on inferring and amending intentions from each participant's behavior. This study proposes a cooperative pattern task focusing on the process of selecting which others to attend to and of balancing each participant's intention to achieve a shared purpose. In a 2D grid world, an abstract cooperative environment with restricted modality, subjects communicate with each other nonverbally and infer each other's intentions from behavior to achieve the purpose. We analyzed the subjects' behavior and clarified the behavioral policies and concepts assumed to be shared by the subjects to prevent misunderstanding of each other's intentions. Two main results were obtained through the experiment. First, behavior that is optimal with respect to the purpose, taking minimal steps, prevents misunderstanding of intentions. Second, narrowing down the number of subjects who change their policy appears to reduce the burden of intention inference.
  • Norifumi Watanabe, Junya Imani
    The Transactions of Human Interface Society, 19(4) 311-318, 2017  
    We implemented a personal mobility vehicle that performs semiautonomous control by estimating the avoidance direction of a pedestrian and the avoidance judgment timing of the passenger. In a space where pedestrians and personal mobility vehicles coexist, safe collision avoidance can be realized by moving the vehicle laterally before the passenger's avoidance judgment. We therefore used a Microsoft Kinect to obtain the coordinates of both of the pedestrian's feet, and conducted collision avoidance experiments between a pedestrian and the vehicle using an implemented model that estimates the avoidance direction from the feet's relative positions. We evaluated the passenger's pupil size and number of saccades with an eye camera. The results show that semiautonomous control of the vehicle reduces mental stress compared with fully autonomous control. Furthermore, we evaluated which body sites are important for the avoidance judgment and the timing of that judgment. Based on the results, we propose a model of passenger vision guidance and mobility avoidance.
  • Norifumi Watanabe, Motokazu Moritani, Kensuke Miyamoto, Kota Itoda, Junya Imani
    Bulletin of Advanced Institute of Industrial Technology, (11) 103-108, 2017  
  • Yoshihiro Nagano, Ryo Karakida, Norifumi Watanabe, Atsushi Aoyama, Masato Okada
    Journal of the Physical Society of Japan, 85(7), Jul 15, 2016  Peer-reviewed
    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.
  • Kensuke Miyamoto, Yoshiyasu Takefuji, Norifumi Watanabe
    2015 IEEE 4th Global Conference on Consumer Electronics, GCCE 2015, 467-469, Feb 3, 2016  Peer-reviewed
    In this paper, we aim to build an agent model that enables cooperative behaviors by estimating human intention. Humans change their action decision process according to others' behavior, so we analyze human behavior when the action decision process switches. We targeted collision avoidance as an example of a simple cooperative behavior. We placed two agents with a Meta-Strategy model in the virtual environment SIGVerse and analyzed subjects' walking trajectories when the two agents had different behavior strategies. It was confirmed that subjects switch their avoidance behaviors according to the agents' strategies.
  • Eimei Oyama, Naoji Shiroma, Norifumi Watanabe, Arvin Agah, Takashi Omori, Natsuo Suzuki
    Advanced Robotics, 30(3) 151-164, Feb 1, 2016  Peer-reviewed
    Performing general human behavior under expert navigation is expected to become realizable as wearable technologies and computing systems are further developed. We have proposed and developed a prototype of an advanced behavior navigation system (BNS) using augmented reality technology. Using the BNS, an expert can guide a non-expert to perform a variety of tasks. BNSs are useful for tasks performed in harsh and hazardous environments, such as factories, construction sites, and areas affected by natural disasters (e.g. earthquakes and tsunamis). In this paper, we present a BNS that is specifically designed to operate in harsh environments, with characteristics such as wet or dusty conditions. The implementation, experimental results, and evaluation of the BNS prototypes are presented.
  • Norifumi Watanabe, Fumihiko Mori, Takashi Omori
    Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, 28(3) 608-616, 2016  
    Human walking is affected by visual, vestibular, somatic, and other sensations that arrive through the sensory-motor loop, but the details of this loop are not clear. In this study, we examined the possible effect of a self-motion sensation induced by an optical-flow stimulus in peripheral vision while somatosensory input was attenuated by a vibration stimulus on the leg and foot. In the experiment, we presented forward optic flow to the peripheral vision, then induced a self-motion sensation by changing the flow to the left or right. We examined whether the walking direction changes toward the side opposite the self-motion sensation because of the attenuated somatosensory input. In this paper, we discuss the sensory fusion mechanism of visual and somatic sensations based on the experimental results.
  • Norifumi Watanabe, Fumihiko Mori
    Bulletin of Advanced Institute of Industrial Technology, (10) 67-72, 2016  
  • Kota Itoda, Norifumi Watanabe, Yoshiyasu Takefuji
    Procedia Computer Science, 71 85-91, 2015  Peer-reviewed
    In goal-type ball games, such as handball, basketball, hockey, or soccer, teammates and opponents share the same field. They dynamically switch their behaviors and relationships based on other players' behaviors or intentions. Interactions between players are highly complicated and hard to comprehend, but recent technological developments have enabled us to acquire the positions and velocities of their movements. We focus on handball as an example of a goal-type ball game and analyze causality between teammates' behaviors from tracking data with a Hidden Semi-Markov Model (HSMM) and delayed Transfer Entropy (dTE). Although 'off-the-ball' behaviors are a crucial component of cooperation, most research tends to focus on 'on-the-ball' behaviors, and the relations between behaviors are known only as tacit knowledge of coaches or players. In contrast, our approach quantitatively reveals players' relationships in 'off-the-ball' behaviors. The extracted causal models are compared to the corresponding video scenes, and we claim that our approach extracts causal relationships between teammates' behaviors or intentions and clarifies the roles of the players in both attacking and defending phases.
  • Norifumi Watanabe, Hiroaki Yoshioka, Kensuke Miyamoto, Junya Imani
    Procedia Computer Science, 71 50-55, 2015  Peer-reviewed
    We implemented a personal mobility vehicle with semiautonomous control that estimates the avoidance direction and the avoidance judgment timing. In a space where pedestrians and personal mobility passengers coexist, it is necessary to realize safe collision avoidance by moving the vehicle. We therefore estimate the avoidance direction from the pedestrian's body parts and implement a semiautonomous collision avoidance system, and we conducted collision avoidance experiments between a pedestrian and a passenger on the vehicle. We evaluated which of the pedestrian's body parts are important for the avoidance judgment and the timing of that judgment. The results show that passengers gaze at the pedestrian's lower body under semiautonomous control, and that the avoidance judgment is delayed by about one of the pedestrian's steps. We propose a model of the passenger's motion perception and vision guidance on personal mobility.
  • Kensuke Miyamoto, Hiroaki Yoshioka, Norifumi Watanabe, Yoshiyasu Takefuji
    HAI 2014 - Proceedings of the 2nd International Conference on Human-Agent Interaction, 2014 257-260, Oct 29, 2014  Peer-reviewed
    In recent years, robots have become useful at home for tasks such as cleaning and communication, and there are many studies on cooperative behavior with robots. In order to realize cooperative tasks with robots, robots must estimate human intention from human behavior and act in context based on that intention. In this research, we construct an agent model that enables coordinated behavior by estimating human intention. We focus on collision avoidance as an example of a simple cooperative behavior. We implemented an agent with a Meta-Strategy model in a virtual environment, conducted a collision avoidance experiment between the virtual agent and human subjects, and analyzed the subjects' behavior. The experimental results confirmed that the agent's behavior can influence human avoidance behavior. By indicating the agent's intention, we consider it possible to achieve cooperative collision avoidance.
  • Norifumi Watanabe, Fumihiko Mori, Takashi Omori
    2014  Peer-reviewed
  • Yoshihiro Nagano, Norifumi Watanabe, Atsushi Aoyama
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 8681 LNCS 467-474, 2014  Peer-reviewed
    Visual attention has recently been reported to modulate neural activity of narrow spiking and broad spiking neurons in V4, with increased firing rate and less inter-trial variations. We simulated these physiological phenomena using a neural network model based on spontaneous activity, assuming that the visual attention modulation could be achieved by a change in variance of input firing rate distributed with a lognormal distribution. Consistent with the physiological studies, an increase in firing rate and a decrease in inter-trial variance was simultaneously obtained in the simulation by increasing variance of input firing rate distribution. These results indicate that visual attention forms strong sparse and weak dense input or a 'winner-take-all' state, to improve the signal-to-noise ratio of the target information.
  • Kota Itoda, Norifumi Watanabe, Yoshiyasu Takefuji
    Journal of Japan Society for Fuzzy Theory and Intelligent Informatics, 26(3) 678-687, 2014  
    In recent years, autonomous agents have been developed using statistical and probabilistic machine learning together with deterministically optimized control. In this paper, a new decision-making and motion-generation method is proposed for adapting to uncertain environments. We concentrate on passing in soccer as a tactical group behavior, and examine how optimization in group behavior arises from individual decision making. In particular, we quantified how people pass during play by analyzing video and tracking data of real soccer matches, and constructed pass models with parameters optimized by logistic regression based on this analysis. As a result, our model predicted the next receiver with a high degree of accuracy by weighting the positions of the players around the passer.
  • Norifumi Watanabe, Takashi Omori
    Biologically Inspired Cognitive Architectures 2012 (Advances in Intelligent Systems and Computing, 196), 351-359, 2013  Peer-reviewed
    In daily life our behavior is guided by various visual stimuli, such as the information on direction signs. However, our environmentally-based perceptual capacity is often challenged under crowded conditions, even more so in critical circumstances like emergency evacuations. In those situations, we often fail to pay attention to important signs. In order to achieve more effective direction guidance, we considered the use of unconscious reflexes in human walking. In this study, we experimented with vision-guided walking direction control by inducing subjects to shift their gaze direction using a vection stimulus combined with body sway. We confirmed that a shift in a subject's walking direction and body sway could be induced by a combination of vection and vibratory stimulus. We propose a possible mechanism for this effect.
  • Eimei Oyama, Naoji Shiroma, Masataka Niwa, Norifumi Watanabe, Shunsuke Shinoda, Takashi Omori, Natuo Suzuki
    2013 IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2013, 1-6, 2013  Peer-reviewed
    Head Mounted Displays (HMDs) are the most popular devices for virtual reality, telexistence/telepresence humanoid operation, and remote behavior navigation. Telexistence/telepresence robot operation is an advanced teleoperation, enabling a human operator to perform remote dexterous manipulation tasks while having the feeling of being present in the remote environment. Behavior navigation is a novel technology that allows an expert to navigate a remote unskilled cooperator to perform general tasks. Since it is difficult to realize a large field of view (FOV) with an HMD, this has led to the development of immersive surround display systems. However, the fact that the operator's arms sometimes conceal the screen image of the surround display presents a serious drawback. To improve the performance of robot operation or remote behavior navigation in unknown environments, we propose the simultaneous utilization of a slimline HMD for central vision and a surround display for peripheral vision. Users of this novel display system can see the relatively high-resolution image on the display screens of the HMD, and their arms do not conceal the HMD image. In addition, users can see the large FOV image on the surround display around the HMD, although this may still be concealed by the user's arms. This enables full utilization of the natural and large FOV image. In this paper, we present the concept of a hybrid head mounted/surround display system, describe the configuration of a prototype setup, and present the results of a preliminary experiment.
  • Norifumi Watanabe, Fumihiko Mori, Takashi Omori
    Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013, 4254-4258, 2013  Peer-reviewed
    Recently, mobile adaptive information terminals that present navigation information while walking (e.g. smartphones and glasses-type wearable terminals) have become common. On the other hand, attention to the surrounding environment becomes insufficient, and incidents are a familiar occurrence. To solve this problem, it is important to support behavior by measuring human behavior in real time and feeding information back intuitively. Human walking is affected by visual, vestibular, somatic, and other sensations that arrive through the sensory-motor loop, but the details of this loop are not clear. In this study, we examined the possible effect of a self-motion sensation induced by an optical-flow stimulus in peripheral vision while somatosensory input was attenuated by a vibration stimulus on the leg and foot. In the experiment, we presented forward optic flow to the peripheral vision, then induced a self-motion sensation by changing the flow to the left or right. In this paper, we discuss the unifying mechanism of visual and somatic sensations based on the experimental results.
  • Norifumi Watanabe, Fumihiko Mori, Takashi Omori
    Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers, 67(12) J434-J440, 2013  Peer-reviewed
    In daily life, our behavior is guided by various visual stimuli such as the information on direction signs. However, our environmentally based perceptual capacity is often challenged in crowded circumstances, and even more so during emergency evacuations. In these situations, we often fail to pay attention to important signs. In order to achieve more effective direction guidance, we considered the use of unconscious reflexes in human walking. In this study, we experimented with vision-guided walking direction control using an optic flow stimulus combined with body sway. We observed a shift in subjects' walking direction and body sway, and discuss the possible mechanism.
  • Eimei Oyama, Norifumi Watanabe, Hiroaki Mikado, Hikaru Araoka, Jun Uchida, Takashi Omori, Itsuki Noda, Naoji Shiroma, Arvin Agah
    2012 IEEE/SICE International Symposium on System Integration, SII 2012, 654-659, 2012  Peer-reviewed
    Performing general human behavior under experts' navigation is expected to be realized as wearable and ubiquitous technologies and computing develop further. For example, the user of a behavior navigation system will be able to conduct first aid treatment as an expert would. We proposed and developed the Wearable Behavior Navigation System (WBNS) using Augmented Reality (AR) technology. By using the WBNS, an expert can guide a non-expert through a variety of first aid treatments. However, a number of issues must be resolved in order to commercialize the WBNS, because of the limitations of the Head Mounted Display (HMD). It usually takes a few minutes to put on the HMD, and this time is a critical problem for emergency first-aid treatment. In order to start first-aid treatment as soon as possible, we propose three simpler Behavior Navigation Systems (BNSs) using popular video conferencing systems on an internet TV, a laptop computer, and a smartphone. Although these BNSs do not have a general behavior navigation function, they are capable of behavior navigation for CPR (cardiopulmonary resuscitation), the most critical first aid treatment. In this paper, the configuration of the BNSs for CPR, the instructions for remote CPR, and the results of preliminary experiments are presented.
  • Norifumi Watanabe, Hiroaki Mikado, Takashi Omori
    IEEE International Conference on Fuzzy Systems, 2732-2736, 2011  Peer-reviewed
    We decide on and execute actions based on many types of environmental information in our daily lives, even when we are not conscious of being guided. One action induced by an oncoming person's movement is collision avoidance when passing each other. In collision avoidance, we judge the avoidance direction chiefly from visual information; in particular, information from the oncoming person's body parts and the timing of each other's avoidance are important. We therefore conducted an experiment in which subjects judged the avoidance direction while watching movies with parts of the oncoming person's body masked. By evaluating the judgment times, it was clarified that the legs are the body part attended to in collision avoidance. We then focused on the oncoming person's legs and evaluated the relation between the walking cycle and leg position in avoidance judgment. The results showed that avoidance judgment is possible while a leg is lifting or landing, because the traveling direction can be controlled by the leg at those moments, clarifying that the oncoming person's walking cycle is important in the action decision during collision avoidance. We therefore propose an action decision model based on the walking cycle. © 2011 IEEE.
  • Eimei Oyama, Norifumi Watanabe, Hiroaki Mikado, Hikaru Araoka, Jun Uchida, Takashi Omori, Kousuke Shinoda, Itsuki Noda, Naoji Shiroma, Arvin Agah, Tomoko Yonemura, Hideyuki Ando, Daisuke Kondo, Taro Maeda
    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, 755-761, 2010  Peer-reviewed
    The capability to perform specific human tasks with the assistance of expert navigation is expected to be realized through the development of wearable and ubiquitous computing technology. For instance, when an injured or ill person requires first-aid treatment but only non-experts are nearby, instruction from an expert at a remote site is necessary. A behavior navigation system will allow the user to provide first-aid treatment in the same manner as an expert. Focusing on first-aid treatment, we have proposed and developed a prototype wearable behavior navigation system (WBNS) that uses augmented reality (AR) technology. This prototype WBNS has been evaluated in experiments in which participants wore the prototype and successfully administered various first-aid treatments. Although the effectiveness of the WBNS has been confirmed, many challenges must be addressed to commercialize the system. The head-mounted displays (HMDs) used in the WBNS have a number of drawbacks, for example, high cost (which is not expected to decrease in the near future) and the time required for an ordinary user to become accustomed to the display. Furthermore, some individuals may experience motion sickness while wearing the HMD. We expect that these drawbacks of the current technology will be resolved in the future; meanwhile, a near-future remote behavior navigation system (RBNS) is necessary. Accordingly, we have developed RBNSs for first-aid treatment using off-the-shelf components, in addition to the WBNS. In this paper, the basic mechanisms of the RBNS, experiments investigating the demonstration of expert behavior, and a comparative study of the WBNS and the RBNSs are presented. © 2010 IEEE.
  • Eimei Oyama, Norifumi Watanabe, Hiroaki Mikado, Hikaru Araoka, Jun Uchida, Takashi Omori, Kousuke Shinoda, Itsuki Noda, Naoji Shiroma, Arvin Agah, Kazutaka Hamada, Tomoko Yonemura, Hideyuki Ando, Daisuke Kondo, Taro Maeda
    Proceedings - IEEE International Conference on Robotics and Automation, 5315-5321, 2010  Peer-reviewed
    Navigating general human behavior through experts' guidance is expected to become feasible as wearable and ubiquitous computing technologies develop. For simple, ordinary behavior, a person does not need the assistance of an expert. However, a person standing next to an injured or ill person needs instruction from an expert on performing first aid treatment. The wearer of a wearable behavior navigation system will be able to conduct first aid treatment as an expert would. We have developed wearable behavior navigation systems using Augmented Reality technology, mainly for navigating first aid treatment and escape from dangerous areas, such as a building on fire. The effectiveness of the wearable navigation systems has been evaluated in a number of experiments. In this paper, the basic mechanism for realizing general human behavior navigation is presented, along with the concrete configuration of the prototype navigation systems and their experimental evaluation. ©2010 IEEE.
  • Norifumi Watanabe, Takashi Omori
    2010 IEEE World Congress on Computational Intelligence, WCCI 2010, 1-6, 2010  Peer-reviewed
    We attempted to guide human actions using galvanic vestibular stimulation (GVS), which could enable behavior guidance without demanding the subject's attention. We attempted to guide the trajectory of subjects' hands while they continuously drew circles. Previous work has mainly dealt with unstable actions such as walking and reaching in a standing posture. In this work, we verified the effects of GVS in a stable sitting posture under a head-fixed condition, with continuous circle drawing as the guided action. The results showed cases in which the hand was guided by GVS. From these experimental results, we hypothesize that GVS might trigger motion prediction for circle drawing: we confirmed acceleration in the direction of the induced left-right imbalance when the second circle was drawn after GVS. This suggests the possibility of guiding hand trajectory, and of triggering motion prediction, by GVS even in a stable posture where GVS cannot drastically change balance perception. © 2010 IEEE.
  • -
    Dec 14, 2009  
  • Iwaki Toshima, Norifumi Watanabe, Takashi Omori
    Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, 442-447, 2009  Peer-reviewed
    We attempted to guide human action using galvanic vestibular stimulation (GVS), which has the potential to guide human behavior without demanding attention. We attempted to guide the trajectory of subjects' hands as they continuously drew circles. Previous work has mainly dealt with unstable actions, such as walking and reaching in a standing posture, and on the basis of those results it was claimed that GVS is effective for human action guidance. However, in those experiments, GVS influenced only the perceived direction of gravity and balance. The effect of GVS on actions performed in stable postures therefore needs to be clarified. In this work, we verified the effects of GVS in a stable sitting posture under a head-fixed condition, with continuous circle drawing as the guided action. The results showed that there are cases in which the hand is guided by GVS. This means that there is a possibility of guiding hand trajectory by GVS even in a stable posture where GVS cannot drastically change balance perception. ©2009 IEEE.
  • Norifumi Watanabe, Shun Ishizaki
    J. Adv. Comput. Intell. Intell. Informatics, 11(7) 780-786, 2007  Peer-reviewed
  • N Watanabe, S Ishizaki
    ARTIFICIAL NEURAL NETWORKS: FORMAL MODELS AND THEIR APPLICATIONS - ICANN 2005, PT 2, PROCEEDINGS, 3697 873-879, 2005  Peer-reviewed
    We propose a new coding model for an associative ontology based on the results of word-association experiments with human subjects. A semantic network with semantic distances between words is constructed on a neural network, and association relations are expressed using up and down states. The associated words change depending on the context, and the ambiguity of polysemous words and homonyms is resolved in a self-organizing manner using the up and down states. In addition, the relations of new words are computed depending on the context using morphoelectrotonic transform theory. On this basis, we construct a simulation model of context-dependent dynamic cell assemblies on a neural network and of word sense disambiguation.
  • N Watanabe, S Ishizaki
    2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 327-332, 2004  Peer-reviewed
    We construct a model of perceptual reconstruction from input visual information. In this model, the relations between objects are derived from the correlations of input pulses in an associative memory model, and the relations and bindings between objects are characterized by the morphoelectrotonic transform, based on neurophysiological experimental data. The associative memory model is composed of a synchronous associative memory network and extracts the relations between objects from the phases obtained from input pulse sequences. The binding of objects is reconstructed from the relations in the associative memory, and the correlations across modalities are learned.
  • Norifumi Watanabe, Shun Ishizaki
    ESANN 2003, 11th European Symposium on Artificial Neural Networks, Bruges, Belgium, April 23-25, 2003, Proceedings, 275-280, 2003  Peer-reviewed

Misc.

 77

Books and Other Publications

 2

Presentations

 74
  • Kota Itoda, Norifumi Watanabe, Yasushi Kiyoki
    The 40th Annual Conference of the Robotics Society of Japan (RSJ2022), Sep, 2022
  • MIYAMOTO Kensuke, WATANABE Norifumi, TAKEFUJI Yoshiyasu, NAKAMURA Osamu
    Proceedings of the Annual Conference of JSAI, 2022, The Japanese Society for Artificial Intelligence
    In human cooperative behavior, there are two types of behavioral strategies: passive strategies based on others and active strategies that prioritize one's own objective. To realize a robot that can use different strategies and communicate like a person, we created an agent that can switch between active and passive strategies. However, it is not clear whether people change their own behavioral strategies in response to each agent strategy. In this study, we conducted an experiment in which agents with multiple strategies, actively giving way and passively giving way, passed subjects in a grid-like space, and we analyzed whether people's behavior changed when the agents' strategies changed. The results show that, in addition to subjects who changed their own behavior in response to changes in the agent's strategy, there were also subjects who behaved in a certain way regardless of the agent's strategy and subjects whose behavior was not clearly divided.
  • Kota Itoda, Norifumi Watanabe
    Proceedings of the Fuzzy System Symposium, Japan Society for Fuzzy Theory and Intelligent Informatics, Sep, 2021
  • MIYAMOTO Kensuke, WATANABE Norifumi, TAKEFUJI Yoshiyasu
    Proceedings of the Annual Conference of JSAI, 2021, The Japanese Society for Artificial Intelligence
    In human cooperative behavior, there are several strategies: a passive behavioral strategy based on others' behaviors and an active behavioral strategy that prioritizes one's own objective. However, it is not clear how people acquire a meta-strategy for switching between these strategies. In this study, we conduct a collision avoidance experiment with agents taking multiple strategies in a grid-like corridor, to see whether subjects' behavior changes when the agent's strategy changes. We compare the behaviors selected by the subjects with the behaviors of agents acquired through reinforcement learning. The experimental results show that subjects can read a change in strategy from the behavior of the oncoming agent.

Research Projects

 4

Social Activities

 1

Media Coverage

 4