Curriculum Vitae

Akihiro Tanaka (田中 章浩)

Profile Information

Affiliation
Professor, School of Arts and Sciences, Division of Psychology and Communication, Department of Psychology, Tokyo Woman's Christian University
Degree
Ph.D. in Psychology (The University of Tokyo)

Researcher number
80396530
J-GLOBAL ID
200901077725261773
researchmap Member ID
5000089644

Papers

 76
  • Anna K. Nakamura, Akihiro Tanaka
    Cognition and Emotion, 1-16, May 20, 2025  
  • Anna K. Nakamura, Hisako W. Yamamoto, Sachiko Takagi, Tetsuya Matsuda, Hiroyuki Okada, Chiaki Ishiguro, Akihiro Tanaka
    Frontiers in Psychology, 16(1533274), Jan 29, 2025  Peer-reviewed
    Introduction: Individuals from Western cultures rely on facial expressions during the audiovisual emotional processing of faces and voices. In contrast, those from East-Asian cultures rely more on voices. This study aimed to investigate whether immigrants adopt the tendency of the host culture or whether common features of migration produce a similar modification regardless of the destination. Methods: We examined how immigrants from Western countries to Japan perceive emotional expressions from faces and voices using MRI scanning. Results: Immigrants behaviorally exhibited a decrease in the influence of emotions in voices with a longer stay in Japan. Additionally, immigrants with a longer stay showed a higher response in the posterior superior temporal gyrus, a brain region associated with audiovisual emotional integration, when processing emotionally congruent faces and voices. Discussion: These modifications imply that immigrants from Western cultures tend to rely even less on voices, in contrast to the tendency of voice-dominance observed in native Japanese people. This change may be explained by the decreased focus on prosodic aspects of voices during second language acquisition. The current and further exploration will aid in the better adaptation of immigrants to a new cultural society.
  • Misako Kawahara, Akihiro Tanaka
    PLOS ONE, 20(1) e0307631-e0307631, Jan 9, 2025  Peer-reviewed
    We perceive and understand others’ emotional states from multisensory information such as facial expressions and vocal cues. However, such cues are not always available or clear. Can partial loss of visual cues affect multisensory emotion perception? In addition, the COVID-19 pandemic has led to the widespread use of face masks, which can reduce some facial cues used in emotion perception. Thus, can frequent exposure to masked faces affect emotion perception? We conducted an emotion perception task using audio-visual stimuli that partially occluded the speaker’s face. Participants were simultaneously shown a face and voice that expressed either congruent or incongruent emotions and judged whether the person was happy or angry. The stimuli included videos in which the eyes or mouth were partially covered and where the whole face was visible. Our findings showed that, when facial cues were partially occluded, participants relied more on vocal cues for emotion recognition. Moreover, when the mouth was covered, participants relied less on vocal cues after the pandemic compared to before. These findings indicate that partial face masking and prolonged exposure to masked faces can affect multisensory emotion perception. In unimodal emotion perception from only facial cues, accuracy also improved after the pandemic compared to before for faces with the mouth occluded. Therefore, changes in the reliance on vocal cues in multisensory emotion perception during the pandemic period could be explained by improved facial emotion perception from the eye region.
  • 新井田統, 小森智康, 酒向慎司, 田中章浩, 布川清彦
    電子情報通信学会誌, 107(3) 237-243, Mar, 2024  
  • Yoshiko SAWADA, Misako KAWAHARA, Akihiro TANAKA
    Transactions of Japan Society of Kansei Engineering, 22(4) 405-416, Dec, 2023  Peer-reviewed
  • Rika Oya, Akihiro Tanaka
    Emotion, 23(5) 1400-1409, Aug, 2023  Peer-reviewed
  • Rika Oya, Akihiro Tanaka
    i-Perception, 14(2), Mar 21, 2023  Peer-reviewed
    Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the emotional valence level. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy of negative emotions was higher for voice than for touch, whereas that of positive emotions was marginally higher for touch than for voice. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.
  • Fabiola Diana, Misako Kawahara, Isabella Saccardi, Ruud Hortensius, Akihiro Tanaka, Mariska E. Kret
    International Journal of Social Robotics, 15(8) 1439-1455, Sep 28, 2022  
    Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge on the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and Dutch sample, we investigated the effect of culture and robots’ body types on explicit and implicit attitudes across two experiments (total n = 669). Partly overlapping with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots compared to Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that only at the explicit but not implicit level, cultural differences appear in attitudes towards robots.
  • Rika Oya, Akihiro Tanaka
    Acoustical Science and Technology, 43(5) 291-293, Sep 1, 2022  Peer-reviewed
  • Yohtaro Takano, Per F. Gjerde, Eiko Osaka, Kazuko Takeo, Akihiro Tanaka
    Cognitive Studies, 28(4) 541-550, Dec, 2021  Peer-reviewed
  • Roza G. Kamiloğlu, Akihiro Tanaka, Sophie K. Scott, Disa A. Sauter
    Philosophical Transactions of the Royal Society B: Biological Sciences, 377(1841), Nov 15, 2021  Peer-reviewed
    Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers’ cultural group identity. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    Frontiers in Psychology, 12(702106) 1-13, Aug 18, 2021  Peer-reviewed
    Due to the COVID-19 pandemic, the significance of online research has been rising in the field of psychology. However, online experiments with child participants are rare compared to those with adults. In this study, we investigated the validity of web-based experiments with child participants 4–12 years old and adult participants. They performed simple emotional perception tasks in an experiment designed and conducted on the Gorilla Experiment Builder platform. After a short communication with each participant via Zoom videoconferencing software, participants performed the auditory task (judging emotion from vocal expression) and the visual task (judging emotion from facial expression). The data collected were compared with data from our previous, similar laboratory experiment, and similar tendencies were found. For the auditory task in particular, we replicated differences between age groups in the accuracy of perceiving vocal expressions and also found the same native-language advantage. Furthermore, we discuss the possibility of using online cognitive studies for future developmental studies.
  • Transactions of Japan Society of Kansei Engineering (Web), 20(3) 329-335, Aug, 2021  Peer-reviewed
  • Yuichi Mori, Yasuki Noguchi, Akihiro Tanaka, Keiko Ishii
    Culture and Brain, 10(1) 43-55, May 21, 2021  Peer-reviewed
  • Kazuma Mori, Akihiro Tanaka, Hideaki Kawabata, Hiroshi Arao
    NeuroReport, 32(10) 858-863, May 20, 2021  Peer-reviewed
    People require multimodal emotional interactions to live in a social environment. Several studies using dynamic facial expressions and emotional voices have reported that multimodal emotional incongruency evokes an early sensory component of event-related potentials (ERPs), while others have found a late cognitive component. The integration mechanism of two different results remains unclear. We speculate that it is semantic analysis in a multimodal integration framework that evokes the late ERP component. An electrophysiological experiment was conducted using emotionally congruent or incongruent dynamic faces and natural voices to promote semantic analysis. To investigate the top-down modulation of the ERP component, attention was manipulated via two tasks that directed participants to attend to facial versus vocal expressions. Our results revealed interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N400 ERP amplitudes but not N1 and P2 amplitudes, for incongruent emotional face–voice combinations only in the face-attentive task. A late occipital positive potential amplitude emerged only during the voice-attentive task. Overall, these findings support the idea that semantic analysis is a key factor in evoking the late cognitive component. The task effect for these ERPs suggests that top-down attention alters not only the amplitude of ERP but also the ERP component per se. Our results implicate a principle of emotional face–voice processing in the brain that may underlie complex audiovisual interactions in everyday communication.
  • Misako Kawahara, Disa A. Sauter, Akihiro Tanaka
    Cognition and Emotion, 35(6) 1175-1186, May 18, 2021  Peer-reviewed
  • Hisako Yamamoto, Misako Kawahara, Mariska Kret, Akihiro Tanaka
    Letters on Evolutionary Behavioral Science, 11(2) 40-45, Dec 15, 2020  Peer-reviewed
    This study investigated cultural differences in the perception of emoticons between Japanese and Dutch participants. We manipulated the eyes and mouth of emoticons independently and asked participants to evaluate the emotion of each emoticon. The results show that Japanese participants tended to focus on the emotion expressed with the eyes while Dutch participants put weight on the shape of the mouth when evaluating emoticons. This tendency is consistent with a previous cross-cultural study comparing people from Japan and the United States (Yuki et al., 2007).
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    PLOS ONE, 15(6) e0234553-e0234553, Jun 18, 2020  Peer-reviewed
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    Acoustical Science and Technology, 40(6) 410-412, Nov 1, 2019  Peer-reviewed
  • Misako Kawahara, Hisako W. Yamamoto, Akihiro Tanaka
    Acoustical Science and Technology, 40(5) 360-363, Sep 1, 2019  Peer-reviewed
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    The 15th International Conference on Auditory-Visual Speech Processing, 27-32, Aug 10, 2019  Peer-reviewed
  • 田中章浩
    日本薬学会会報「ファルマシア」2018年11月号, 1040-1044, Nov, 2018  Invited
  • 田中章浩
    映像情報メディア学会誌, 72(1) 12-16, Jan 1, 2018  Invited
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    The 14th International Conference on Auditory-Visual Speech Processing, 105-108, Aug 25, 2017  Peer-reviewed
  • Misako Kawahara, Disa Sauter, Akihiro Tanaka
    The 14th International Conference on Auditory-Visual Speech Processing, 109-114, Aug 25, 2017  Peer-reviewed
  • Marina Kawase, Ikuma Adachi, Akihiro Tanaka
    The 14th International Conference on Auditory-Visual Speech Processing, 115-118, Aug 25, 2017  Peer-reviewed
  • Marie MORITA, Akihiro TANAKA
    The Japanese Journal of Cognitive Psychology, 14(1) 9-19, 2016  Peer-reviewed
  • Fumiya YOKOMORI, Yamato NINOMIYA, Masanori MORISE, Akihiro TANAKA, Kenji OZAWA
    Transactions of Japan Society of Kansei Engineering, 15(7) 721-729, 2016  Peer-reviewed
  • Sachiko Takagi, Shiho Miyazawa, Elisabeth Huis In’t Veld, Beatrice de Gelder, Akihiro Tanaka
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015, 57-62, Sep, 2015  Peer-reviewed
  • Akihiro Tanaka, Sachiko Takagi, Saori Hiramatsu, Elisabeth Huis In’t Veld, Beatrice de Gelder
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015, 63-66, Sep, 2015  Peer-reviewed
  • Maiko Takahashi, Akihiro Tanaka
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015, 185-189, Sep, 2015  Peer-reviewed
  • Yuta Ujiie, Tomohisa Asai, Akihiro Tanaka, Akio Wakabayashi
    Letters on Evolutionary Behavioral Science, 6(2) 9-12, Aug 11, 2015  Peer-reviewed
    The McGurk effect is a perceptual phenomenon in which an observer perceives an intermediate phoneme when a video of a speaker is dubbed with an incongruent voice. Several studies of autism spectrum disorders (ASD) have shown that individuals with ASD show a weak influence of visual speech in the McGurk effect. Other studies, however, have reported negative results. This inconsistency among previous studies might be caused by the heterogeneity of the clinical groups. This study examined the relationship between autistic traits and the McGurk effect among 51 healthy university students, on the basis of the dimensional model of ASD (Frith, 1991). Results revealed that autistic traits correlated negatively with the rate of visual responses and positively with the rate of fused responses for the McGurk stimuli, while showing no correlation with accuracy in perceiving the audiovisual congruent stimuli. This indicates that autistic traits might predict a weak influence of visual speech in the McGurk effect.
  • Sachiko Takagi, Saori Hiramatsu, Ken-ichi Tabei, Akihiro Tanaka
    Frontiers in Integrative Neuroscience, 9(1) 1-10, Feb 2, 2015  Peer-reviewed
  • 髙木幸子, 平松沙織, 田部井賢一, 田中章浩
    認知科学, 21(3) 344-362, Sep, 2014  Peer-reviewed
  • 髙木幸子, 田部井賢一, Elisabeth HUIS IN'T VELD, Beatrice de GELDER, 田中章浩
    The Japanese Journal of Psychonomic Science, 32(1) 29-39, 2013  Peer-reviewed
    Information derived from facial and vocal nonverbal expressions plays an important role in social communication in the real and virtual worlds. In the present study, we investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. We used a face and voice that expressed incongruent emotions as stimuli and conducted two experiments. We presented either the face or voice in Experiment 1, and both the face and voice in Experiment 2. We found that both visual and auditory information were important for Japanese participants judging in-group stimuli, while visual information was more important for other combinations of participants and stimuli. Additionally, we showed that the in-group advantage provided by auditory information was higher in Japanese than Dutch participants. Our findings indicate that audio-visual integration of affective information is modulated by the perceiver's cultural background, and that there are cultural differences between in-group and out-group stimuli.
  • Souta Hidaka, Hiroshi Shibata, Michiyo Kurihara, Akihiro Tanaka, Akitsugu Konno, Suguru Maruyama, Jiro Gyoba, Hiroko Hagiwara, Masatoshi Koizumi
    Neuroscience Research, 73(1) 73-79, May, 2012  Peer-reviewed
  • Cognitive Studies, 19(1) 122-130, Mar, 2012  Peer-reviewed
    The purpose of this study was to examine the relationship between pitch rehearsal and phonological rehearsal in working memory. We conducted a dual-task experiment using musical tones and speech sounds. A standard-comparison task was the primary task and a suppression task was the secondary task. Participants were asked to engage in articulatory or musical suppression while they maintained speech sounds (phonological information) or musical tones (pitch information). Under articulatory suppression, participants were asked to say "a, i, u" repeatedly; under musical suppression, they were asked to hum three pitches (e.g., do, re, mi) repeatedly. The results revealed that articulatory suppression decreased recognition performance for phonological information but not for pitch information. Moreover, musical suppression decreased recognition performance for pitch information but not for phonological information. This implies that articulatory suppression selectively interfered with the rehearsal of speech sounds, and musical suppression selectively interfered with the rehearsal of musical tones. Consequently, the results suggest that pitch rehearsal is independent of phonological rehearsal.
  • Kaori ASAKAWA, Akihiro TANAKA, Hisato IMAI
    Kansei Engineering International Journal, 11(1) 35-40, 2012  Peer-reviewed
  • Bernard M. C. Stienen, Akihiro Tanaka, Beatrice de Gelder
    PLoS ONE, 6(10) e25517-e25517, Oct 7, 2011  Peer-reviewed
  • Akihiro Tanaka, Kaori Asakawa, Hisato Imai
    NeuroReport, 22(14) 684-688, Oct 5, 2011  Peer-reviewed
  • 田中章浩
    認知科学, 18(3) 416-427, Sep, 2011  Invited
  • 高橋麻衣子, 田中章浩
    教育心理学研究, 59(2) 179-192, Jun, 2011  Peer-reviewed
  • Ai Koizumi, Akihiro Tanaka, Hisato Imai, Saori Hiramatsu, Eriko Hiramoto, Takao Sato, Beatrice de Gelder
    Experimental Brain Research, 213(2-3) 275-282, Apr 13, 2011  
  • 高橋麻衣子, 田中章浩
    認知心理学研究, 8(2) 131-143, Feb, 2011  Peer-reviewed
  • Kaori Asakawa, Akihiro Tanaka, Shuichi Sakamoto, Yukio Iwaya, Yôiti Suzuki
    Acoustical Science and Technology, 32(3) 125-128, 2011  Peer-reviewed
  • Yutaka Sato, Koichi Mori, Toshizo Koizumi, Yasuyo Minagawa-Kawai, Akihiro Tanaka, Emi Ozawa, Yoko Wakaba, Reiko Mazuka
    Frontiers in Psychology, 2(70), 2011  Peer-reviewed
  • Akihiro Tanaka, Shuichi Sakamoto, Yôiti Suzuki
    Acoustical Science and Technology, 32(6) 264-267, 2011  Peer-reviewed
  • 田中章浩
    電子情報通信学会技術研究報告, HIP2010-66, 25-28, Nov, 2010  Invited

Misc.

 66

Books and Other Publications

 12

Presentations

 284

Research Projects

 33

Social Activities

 49

Media Coverage

 29