Research Achievements

田中 章浩

タナカ アキヒロ  (Akihiro Tanaka)

Basic Information

Affiliation
Professor, Department of Psychology, School of Arts and Sciences, Tokyo Woman's Christian University
Degree
Ph.D. in Psychology (The University of Tokyo)

Researcher Number
80396530
J-GLOBAL ID
200901077725261773
researchmap Member ID
5000089644

External Links

Papers

 76
  • 岸本 桂子, 髙橋 利供, 赤川 圭子, 熊木 良太, 田中 章浩
    YAKUGAKU ZASSHI 145(9) 791-799 September 1, 2025
  • Anna K. Nakamura, Akihiro Tanaka
    Cognition and Emotion 1-16 May 20, 2025
  • Anna K. Nakamura, Hisako W. Yamamoto, Sachiko Takagi, Tetsuya Matsuda, Hiroyuki Okada, Chiaki Ishiguro, Akihiro Tanaka
    Frontiers in Psychology 16(1533274) January 29, 2025  Refereed
    Introduction: Individuals from Western cultures rely on facial expressions during the audiovisual emotional processing of faces and voices. In contrast, those from East-Asian cultures rely more on voices. This study aimed to investigate whether immigrants adopt the tendency of the host culture or whether common features of migration produce a similar modification regardless of the destination. Methods: We examined how immigrants from Western countries to Japan perceive emotional expressions from faces and voices using MRI scanning. Results: Immigrants behaviorally exhibited a decrease in the influence of emotions in voices with a longer stay in Japan. Additionally, immigrants with a longer stay showed a higher response in the posterior superior temporal gyrus, a brain region associated with audiovisual emotional integration, when processing emotionally congruent faces and voices. Discussion: These modifications imply that immigrants from Western cultures tend to rely even less on voices, in contrast to the tendency of voice-dominance observed in native Japanese people. This change may be explained by the decreased focus on prosodic aspects of voices during second language acquisition. The current and further exploration will aid in the better adaptation of immigrants to a new cultural society.
  • Misako Kawahara, Akihiro Tanaka
    PLOS ONE 20(1) e0307631-e0307631 January 9, 2025  Refereed
    We perceive and understand others’ emotional states from multisensory information such as facial expressions and vocal cues. However, such cues are not always available or clear. Can partial loss of visual cues affect multisensory emotion perception? In addition, the COVID-19 pandemic has led to the widespread use of face masks, which can reduce some facial cues used in emotion perception. Thus, can frequent exposure to masked faces affect emotion perception? We conducted an emotion perception task using audio-visual stimuli that partially occluded the speaker’s face. Participants were simultaneously shown a face and voice that expressed either congruent or incongruent emotions and judged whether the person was happy or angry. The stimuli included videos in which the eyes or mouth were partially covered and where the whole face was visible. Our findings showed that, when facial cues were partially occluded, participants relied more on vocal cues for emotion recognition. Moreover, when the mouth was covered, participants relied less on vocal cues after the pandemic compared to before. These findings indicate that partial face masking and prolonged exposure to masked faces can affect multisensory emotion perception. In unimodal emotion perception from only facial cues, accuracy also improved after the pandemic compared to before for faces with the mouth occluded. Therefore, changes in the reliance on vocal cues in multisensory emotion perception during the pandemic period could be explained by improved facial emotion perception from the eye region.
  • 新井田統, 小森智康, 酒向慎司, 田中章浩, 布川清彦
    電子情報通信学会誌 107(3) 237-243 March 2024
  • 澤田佳子, 河原美彩子, 田中章浩
    日本感性工学会論文誌 22(4) 405-416 December 2023  Refereed
  • Rika Oya, Akihiro Tanaka
    Emotion 23(5) 1400-1409 August 2023  Refereed
  • Rika Oya, Akihiro Tanaka
    i-Perception 14(2) March 21, 2023  Refereed
    Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the emotional valence level. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy of negative emotions was higher for voice than for touch, whereas that of positive emotions was marginally higher for touch than for voice. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.
  • Fabiola Diana, Misako Kawahara, Isabella Saccardi, Ruud Hortensius, Akihiro Tanaka, Mariska E. Kret
    International Journal of Social Robotics 15(8) 1439-1455 September 28, 2022
    Abstract Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge on the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and Dutch sample, we investigated the effect of culture and robots’ body types on explicit and implicit attitudes across two experiments (total n = 669). Partly overlapping with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots compared to Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that only at the explicit but not implicit level, cultural differences appear in attitudes towards robots.
  • Rika Oya, Akihiro Tanaka
    Acoustical Science and Technology 43(5) 291-293 September 1, 2022  Refereed
  • Yohtaro Takano, Per F. Gjerde, Eiko Osaka, Kazuko Takeo, Akihiro Tanaka
    Cognitive Studies 28(4) 541-550 December 2021  Refereed
  • Roza G. Kamiloğlu, Akihiro Tanaka, Sophie K. Scott, Disa A. Sauter
    Philosophical Transactions of the Royal Society B: Biological Sciences 377(1841) November 15, 2021  Refereed
    Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch ( n = 273) and Japanese ( n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers’ cultural group identity. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    Frontiers in Psychology 12(702106) 1-13 August 18, 2021  Refereed
    Due to the COVID-19 pandemic, the significance of online research has been rising in the field of psychology. However, online experiments with child participants are rare compared to those with adults. In this study, we investigated the validity of web-based experiments with child participants 4–12 years old and adult participants. They performed simple emotional perception tasks in an experiment designed and conducted on the Gorilla Experiment Builder platform. After short communication with each participant via Zoom videoconferencing software, participants performed the auditory task (judging emotion from vocal expression) and the visual task (judging emotion from facial expression). The data collected were compared with data collected in our previous similar laboratory experiment, and similar tendencies were found. For the auditory task in particular, we replicated differences in accuracy perceiving vocal expressions between age groups and also found the same native language advantage. Furthermore, we discuss the possibility of using online cognitive studies for future developmental studies.
  • 河原美彩子, 澤田佳子, 田中章浩
    日本感性工学会論文誌 20(3) 329-335 August 2021  Refereed
  • Yuichi Mori, Yasuki Noguchi, Akihiro Tanaka, Keiko Ishii
    Culture and Brain 10(1) 43-55 May 21, 2021  Refereed
  • Kazuma Mori, Akihiro Tanaka, Hideaki Kawabata, Hiroshi Arao
    NeuroReport 32(10) 858-863 May 20, 2021  Refereed
    People require multimodal emotional interactions to live in a social environment. Several studies using dynamic facial expressions and emotional voices have reported that multimodal emotional incongruency evokes an early sensory component of event-related potentials (ERPs), while others have found a late cognitive component. The integration mechanism of two different results remains unclear. We speculate that it is semantic analysis in a multimodal integration framework that evokes the late ERP component. An electrophysiological experiment was conducted using emotionally congruent or incongruent dynamic faces and natural voices to promote semantic analysis. To investigate the top-down modulation of the ERP component, attention was manipulated via two tasks that directed participants to attend to facial versus vocal expressions. Our results revealed interactions between facial and vocal emotional expressions, manifested as modulations of the auditory N400 ERP amplitudes but not N1 and P2 amplitudes, for incongruent emotional face–voice combinations only in the face-attentive task. A late occipital positive potential amplitude emerged only during the voice-attentive task. Overall, these findings support the idea that semantic analysis is a key factor in evoking the late cognitive component. The task effect for these ERPs suggests that top-down attention alters not only the amplitude of ERP but also the ERP component per se. Our results implicate a principle of emotional face–voice processing in the brain that may underlie complex audiovisual interactions in everyday communication.
  • Misako Kawahara, Disa A. Sauter, Akihiro Tanaka
    Cognition and Emotion 35(6) 1175-1186 May 18, 2021  Refereed
  • Hisako Yamamoto, Misako Kawahara, Mariska Kret, Akihiro Tanaka
    Letters on Evolutionary Behavioral Science 11(2) 40-45 December 15, 2020  Refereed
    This study investigated cultural differences in the perception of emoticons between Japanese and Dutch participants. We manipulated the eyes and mouth of emoticons independently and asked participants to evaluate the emotion of each emoticon. The results show that Japanese participants tended to focus on the emotion expressed with the eyes while Dutch participants put weight on the shape of the mouth when evaluating emoticons. This tendency is consistent with a previous cross-cultural study comparing people from Japan and the United States (Yuki et al., 2007).
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    PLOS ONE 15(6) e0234553-e0234553 June 18, 2020  Refereed
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    Acoustical Science and Technology 40(6) 410-412 November 1, 2019  Refereed
  • Misako Kawahara, Hisako W. Yamamoto, Akihiro Tanaka
    Acoustical Science and Technology 40(5) 360-363 September 1, 2019  Refereed
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    The 15th International Conference on Auditory-Visual Speech Processing 27-32 August 10, 2019  Refereed
  • 田中章浩
    日本薬学会会報「ファルマシア」 November 2018 issue, 1040-1044 November 2018  Invited
  • 田中章浩
    映像情報メディア学会誌 72(1) 12-16 January 1, 2018  Invited
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    The 14th International Conference on Auditory-Visual Speech Processing 105-108 August 25, 2017  Refereed
  • Misako Kawahara, Disa Sauter, Akihiro Tanaka
    The 14th International Conference on Auditory-Visual Speech Processing 109-114 August 25, 2017  Refereed
  • Marina Kawase, Ikuma Adachi, Akihiro Tanaka
    The 14th International Conference on Auditory-Visual Speech Processing 115-118 August 25, 2017  Refereed
  • 森田磨里絵, 田中章浩
    認知心理学研究 14(1) 9-19 2016  Refereed
  • 横森文哉, 二宮大和, 森勢将雅, 田中章浩, 小澤賢司
    日本感性工学会論文誌 15(7) 721-729 2016  Refereed
  • Sachiko Takagi, Shiho Miyazawa, Elisabeth Huis In't Veld, Beatrice de Gelder, Akihiro Tanaka
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015 57-62 September 2015  Refereed
  • Akihiro Tanaka, Sachiko Takagi, Saori Hiramatsu, Elisabeth Huis In’t Veld, Beatrice de Gelder
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015 63-66 September 2015  Refereed
  • Maiko Takahashi, Akihiro Tanaka
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015 185-189 September 2015  Refereed
  • Yuta Ujiie, Tomohisa Asai, Akihiro Tanaka, Akio Wakabayashi
    Letters on Evolutionary Behavioral Science 6(2) 9-12 August 11, 2015  Refereed
    The McGurk effect is a perceptual phenomenon in which an observer perceives an intermediate phoneme when a video of a speaker is presented dubbed with an incongruent voice. Several studies of autism spectrum disorder (ASD) have shown that individuals with ASD are only weakly influenced by visual speech in the McGurk effect. Other studies, however, have reported negative results. This inconsistency among previous studies might be caused by the heterogeneity of the clinical groups. This study examined the relationship between autistic traits and the McGurk effect among 51 healthy university students, on the basis of the dimensional model of ASD (Frith, 1991). Results revealed that autistic traits correlated negatively with the rate of visual responses and positively with the rate of fused responses for the McGurk stimuli, while there was no correlation with accuracy in perceiving the audiovisually congruent stimuli. This indicates that autistic traits might predict a weak influence of visual speech in the McGurk effect.
  • Sachiko Takagi, Saori Hiramatsu, Ken-ichi Tabei, Akihiro Tanaka
    Frontiers in Integrative Neuroscience 9(1) 1-10 February 2, 2015  Refereed
  • 髙木幸子, 平松沙織, 田部井賢一, 田中章浩
    認知科学 21(3) 344-362 September 2014  Refereed
  • 髙木幸子, 田部井賢一, Elisabeth HUIS IN'T VELD, Beatrice de GELDER, 田中章浩
    基礎心理学研究 32(1) 29-39 2013  Refereed
    Information derived from facial and vocal nonverbal expressions plays an important role in social communication in the real and virtual worlds. In the present study, we investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. We used a face and voice that expressed incongruent emotions as stimuli and conducted two experiments. We presented either the face or voice in Experiment 1, and both the face and voice in Experiment 2. We found that both visual and auditory information were important for Japanese participants judging in-group stimuli, while visual information was more important for other combinations of participants and stimuli. Additionally, we showed that the in-group advantage provided by auditory information was higher in Japanese than Dutch participants. Our findings indicate that audio-visual integration of affective information is modulated by the perceiver's cultural background, and that there are cultural differences between in-group and out-group stimuli.
  • Souta Hidaka, Hiroshi Shibata, Michiyo Kurihara, Akihiro Tanaka, Akitsugu Konno, Suguru Maruyama, Jiro Gyoba, Hiroko Hagiwara, Masatoshi Koizumi
    Neuroscience Research 73(1) 73-79 May 2012  Refereed
  • 宮澤史穂, 田中章浩, 西本武彦
    認知科学 19(1) 122-130 March 2012  Refereed
    The purpose of this study was to examine the relationship between pitch rehearsal and phonological rehearsal with regard to working memory. We conducted a dual-task experiment using musical tones and speech sounds. A standard-comparison task was the primary task and a suppression task was the secondary task. The participants were asked to engage in articulatory or musical suppression while they maintained speech sounds (phonological information) or musical tones (pitch information). Under articulatory suppression, the participants were asked to say "a, i, u" repeatedly; under musical suppression, they were asked to hum in three pitches (e.g., do, re, mi) repeatedly. The results revealed that articulatory suppression decreased the performance of recognition of phonological information but not of pitch information. Moreover, musical suppression decreased the performance of recognition of pitch information but not of phonological information. This implies that articulatory suppression selectively interfered with the rehearsal of speech sounds, and musical suppression selectively interfered with the rehearsal of musical tones. Consequently, the results suggest that pitch rehearsal is independent of phonological rehearsal.
  • Kaori ASAKAWA, Akihiro TANAKA, Hisato IMAI
    Kansei Engineering International Journal 11(1) 35-40 2012  Refereed
  • 高橋麻衣子, 田中章浩
    認知科学 18(4) 595-603 December 2011  Refereed
  • Bernard M. C. Stienen, Akihiro Tanaka, Beatrice de Gelder
    PLoS ONE 6(10) e25517-e25517 October 7, 2011  Refereed
  • Akihiro Tanaka, Kaori Asakawa, Hisato Imai
    NeuroReport 22(14) 684-688 October 5, 2011  Refereed
  • 田中章浩
    認知科学 18(3) 416-427 September 2011  Invited
  • 高橋麻衣子, 田中章浩
    教育心理学研究 59(2) 179-192 June 2011  Refereed
  • Ai Koizumi, Akihiro Tanaka, Hisato Imai, Saori Hiramatsu, Eriko Hiramoto, Takao Sato, Beatrice de Gelder
    Experimental Brain Research 213(2-3) 275-282 April 13, 2011
  • 高橋麻衣子, 田中章浩
    認知心理学研究 8(2) 131-143 February 2011  Refereed
  • Kaori Asakawa, Akihiro Tanaka, Shuichi Sakamoto, Yukio Iwaya, Yôiti Suzuki
    Acoustical Science and Technology 32(3) 125-128 2011  Refereed
  • Yutaka Sato, Koichi Mori, Toshizo Koizumi, Yasuyo Minagawa-Kawai, Akihiro Tanaka, Emi Ozawa, Yoko Wakaba, Reiko Mazuka
    Frontiers in Psychology 2(70) 2011  Refereed
  • Akihiro Tanaka, Shuichi Sakamoto, Yôiti Suzuki
    Acoustical Science and Technology 32(6) 264-267 2011  Refereed
  • 田中章浩
    電子情報通信学会技術研究報告 HIP2010-66, 25-28 November 2010  Invited

MISC

 66

Books and Other Publications

 12

Presentations

 284

Research Projects (Joint Research, Competitive Funding, etc.)

 33

Social Contribution Activities

 49

Media Coverage

 29