Curriculum Vitae

Akihiro Tanaka

  (田中 章浩)

Profile Information

Affiliation
Professor, School of Arts and Sciences, Division of Psychology and Communication, Department of Psychology, Tokyo Woman's Christian University
Degree
Ph.D. in Psychology (The University of Tokyo)

Researcher number
80396530
J-GLOBAL ID
200901077725261773
researchmap Member ID
5000089644

Papers

 72
  • 新井田統, 小森智康, 酒向慎司, 田中章浩, 布川清彦
    The Journal of the Institute of Electronics, Information and Communication Engineers, 107(3) 237-243, Mar, 2024  
  • Yoshiko SAWADA, Misako KAWAHARA, Akihiro TANAKA
    Transactions of Japan Society of Kansei Engineering, 22(4) 405-416, Dec, 2023  Peer-reviewed
  • Rika Oya, Akihiro Tanaka
    i-Perception, 14(2) 204166952311604-204166952311604, Mar 21, 2023  Peer-reviewed
    Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the emotional valence level. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy of negative emotions was higher for voice than for touch, whereas that of positive emotions was marginally higher for touch than for voice. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.
  • Fabiola Diana, Misako Kawahara, Isabella Saccardi, Ruud Hortensius, Akihiro Tanaka, Mariska E. Kret
    International Journal of Social Robotics, Sep 28, 2022  Peer-reviewed
    Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge on the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and Dutch sample, we investigated the effect of culture and robots’ body types on explicit and implicit attitudes across two experiments (total n = 669). Partly overlapping with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots compared to Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that cultural differences in attitudes towards robots appear only at the explicit, not the implicit, level.
  • Rika Oya, Akihiro Tanaka
    Acoustical Science and Technology, 43(5) 291-293, Sep, 2022  Peer-reviewed
  • Roza G. Kamiloğlu, Akihiro Tanaka, Sophie K. Scott, Disa A. Sauter
    Philosophical Transactions of the Royal Society B: Biological Sciences, 377(1841), Jan 3, 2022  Peer-reviewed
    Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers’ cultural group identity. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.
  • Takano, Y, Gjerde, P, Osaka, E, Takeo, K, Tanaka, A
    Cognitive Studies, 28(4) 541-550, Dec, 2021  Peer-reviewed
  • 河原 美彩子, 澤田 佳子, 田中 章浩
    Transactions of Japan Society of Kansei Engineering, 20(3) 329-335, Aug, 2021  Peer-reviewed
  • Yamamoto, H.W, Kawahara, M, Tanaka, A
    Frontiers in Psychology (online publication), 12:702106, Aug, 2021  Peer-reviewed
  • Mori, Y, Noguchi, Y, Tanaka, A, Ishii, K
    Culture and Brain, May, 2021  Peer-reviewed
  • Kawahara, M, Sauter, D. A, Tanaka, A
    Cognition and Emotion, 1-12, May, 2021  Peer-reviewed
    The perception of multisensory emotion cues is affected by culture. For example, East Asians rely more on vocal, as compared to facial, affective cues compared to Westerners. However, it is unknown whether these cultural differences exist in childhood, and if not, which processing style is exhibited in children. The present study tested East Asian and Western children, as well as adults from both cultural backgrounds, to probe cross-cultural similarities and differences at different ages, and to establish the weighting of each modality at different ages. Participants were simultaneously shown a face and a voice expressing either congruent or incongruent emotions, and were asked to judge whether the person was happy or angry. Replicating previous research, East Asian adults relied more on vocal cues than did Western adults. Young children from both cultural groups, however, behaved like Western adults, relying primarily on visual information. The proportion of responses based on vocal cues increased with age in East Asian, but not Western, participants. These results suggest that culture is an important factor in developmental changes in the perception of facial and vocal affective information.
  • Mori, K, Tanaka, A, Kawabata, H, Arao, H
    NeuroReport, 32 858-863, Apr, 2021  Peer-reviewed
  • Yamamoto, H. W, Kawahara, M, Kret, M. E, Tanaka, A
    Letters on Evolutionary Behavioral Science, 11(2), Dec, 2020  Peer-reviewed
  • Yamamoto, H.W, Kawahara, M, Tanaka, A
    15(6) e0234553, Jun, 2020  Peer-reviewed
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    The 15th International Conference on Auditory-Visual Speech Processing, Aug 10, 2019  
  • Kawahara, M, Yamamoto, H.W, Tanaka, A
    Acoustical Science and Technology, 40(5) 360-363, 2019  Peer-reviewed
  • Yamamoto, H. W, Kawahara, M, Tanaka, A
    Acoustical Science and Technology, 2019  Peer-reviewed
  • 田中章浩
    Farumashia (Journal of the Pharmaceutical Society of Japan), November 2018 issue, 1040-1044, Nov, 2018  Invited
  • 田中章浩
    The Journal of the Institute of Image Information and Television Engineers, 72(1) 10-14, Jan 1, 2018  Invited
  • Yamamoto, H. W, Kawahara, M, Tanaka, A
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2017, D2(S5.1) 1-4, Aug 25, 2017  Peer-reviewed
  • Kawahara, M, Sauter, D, Tanaka, A
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2017, D2(S5.2) 1-6, Aug 25, 2017  Peer-reviewed
  • Kawase, M, Adachi, I, Tanaka, A
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2017, D2(S5.3) 1-4, Aug 25, 2017  Peer-reviewed
  • 横森文哉, 二宮大和, 森勢将雅, 田中章浩, 小澤賢司
    Transactions of Japan Society of Kansei Engineering, 15(7) 721-729, Dec, 2016  Peer-reviewed
  • MORITA Marie, TANAKA Akihiro
    The Japanese Journal of Cognitive Psychology, 14(1) 9-19, Sep, 2016  Peer-reviewed
    What kinds of objects elicit aesthetic responses from people? Prior studies suggest that automatic and unconscious conceptual processing, which occurs when people are exposed to stimuli, influences aesthetic evaluations. However, as proposed by the dual-process model of recognition memory, there are two levels of conceptual processing: an automatic and unconscious level and a non-automatic and conscious level. We examine the effects of these two dissociable processes on aesthetic evaluations with the Remember/Know procedure. We hypothesize that remember judgments reflect "recollection" (a non-automatic and conscious level of conceptual processing) and know judgments reflect "familiarity" (an automatic and unconscious level of conceptual processing). During an incidental learning phase, participants were exposed to 70 images and, during a recognition phase, they made both remember/know judgments and aesthetic evaluations for images (35 old and 35 new). The results indicate that aesthetic evaluations were higher for images judged as remembered compared to those judged as known. These findings suggest that non-automatic and conscious conceptual processing influences aesthetic evaluations.
  • Takagi, S, Miyazawa, S, Huis In't Veld, E, de Gelder, B, Hamano, Y, Tabei, K, Tanaka, A
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015, 57-62, Sep, 2015  Peer-reviewed
  • Tanaka, A, Takagi, S, Hiramatsu, S, Huis In't Veld, E, de Gelder, B
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015, 63-66, Sep, 2015  Peer-reviewed
  • Takahashi, M, Tanaka, A
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015, 185-189, Sep, 2015  Peer-reviewed
  • Ujiie, Y, Asai, T, Tanaka, A, Wakabayashi, A
    Letters on Evolutionary Behavioral Science, 6(2) 9-12, Aug, 2015  Peer-reviewed
  • Sachiko Takagi, Saori Hiramatsu, Ken-Ichi Tabei, Akihiro Tanaka
    Frontiers in integrative neuroscience, 9(1) 1-1, 2015  Peer-reviewed
    Previous studies have shown that the perception of facial and vocal affective expressions interacts with each other. Facial expressions usually dominate vocal expressions when we perceive the emotions of face-voice stimuli. In most of these studies, participants were instructed to pay attention to the face or voice. Few studies compared the perceived emotions with and without specific instructions regarding the modality to which attention should be directed. Also, these studies used combinations of the face and voice which express two opposing emotions, which limits the generalizability of the findings. The purpose of this study is to examine whether emotion perception is modulated by instructions to pay attention to the face or voice using the six basic emotions. We also examine the modality dominance between the face and voice for each emotion category. Before the experiment, we recorded faces and voices which express the six basic emotions and orthogonally combined these faces and voices. Consequently, the emotional valence of visual and auditory information was either congruent or incongruent. In the experiment, there were unisensory and multisensory sessions. The multisensory session was divided into three blocks according to whether an instruction was given to pay attention to a given modality (face attention, voice attention, and no instruction). Participants judged whether the speaker expressed happiness, sadness, anger, fear, disgust, or surprise. Our results revealed that instructions to pay attention to one modality and congruency of the emotions between modalities modulated the modality dominance, and that modality dominance differed across emotion categories. In particular, the modality dominance for anger changed according to each instruction. Analyses also revealed that the modality dominance suggested by the congruency effect can be explained in terms of the facilitation effect and the interference effect.
  • 髙木幸子, 平松沙織, 田部井賢一, 田中章浩
    Cognitive Studies, 21(3) 344-362, Sep, 2014  Peer-reviewed
  • TAKAGI Sachiko, TABEI Ken-ichi, HUIS IN'T VELD Elisabeth, GELDER Beatrice de, TANAKA Akihiro
    The Japanese Journal of Psychonomic Science, 32(1) 29-39, 2013  Peer-reviewed
    Information derived from facial and vocal nonverbal expressions plays an important role in social communication in the real and virtual worlds. In the present study, we investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. We used a face and voice that expressed incongruent emotions as stimuli and conducted two experiments. We presented either the face or voice in Experiment 1, and both the face and voice in Experiment 2. We found that both visual and auditory information were important for Japanese participants judging in-group stimuli, while visual information was more important for other combinations of participants and stimuli. Additionally, we showed that the in-group advantage provided by auditory information was higher in Japanese than Dutch participants. Our findings indicate that audio-visual integration of affective information is modulated by the perceiver's cultural background, and that there are cultural differences between in-group and out-group stimuli.
  • Hidaka, S, Shibata, H, Kurihara, M, Tanaka, A, Konno, A, Maruyama, S, Gyoba, J, Hagiwara, H, Koizumi, M
    Neuroscience Research, 73(1) 73-79, Apr, 2012  Peer-reviewed
  • 宮澤史穂, 田中章浩, 西本武彦
    Cognitive Studies, 19(1) 122-130, Mar, 2012  Peer-reviewed
  • Asakawa, K, Tanaka, A, Imai, H
    Kansei Engineering International Journal, 11(1) 35-40, Feb 15, 2012  Peer-reviewed
    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the "simultaneous" responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.
  • 高橋麻衣子, 田中章浩
    Cognitive Studies, 18(4) 595-603, Dec, 2011  Peer-reviewed
  • Bernard M. C. Stienen, Akihiro Tanaka, Beatrice de Gelder
    PLOS ONE, 6(10), Oct, 2011  Peer-reviewed
    Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.
  • 田中章浩
    Cognitive Studies, 18(3) 416-427, Sep, 2011  Invited
  • Tanaka, A, Asakawa, K, Imai, H
    NeuroReport, 22(14) 684-688, Sep, 2011  Peer-reviewed
  • 高橋麻衣子, 田中章浩
    The Japanese Journal of Educational Psychology, 59(2) 179-192, Jun, 2011  Peer-reviewed
  • Koizumi, A, Tanaka, A, Imai, H, Hiramatsu, S, Hiramoto, E, Sato, T, de Gelder, B
    EXPERIMENTAL BRAIN RESEARCH, 213(2-3) 275-282, Apr, 2011  Peer-reviewed
    Anxious individuals have been shown to interpret others' emotional states negatively. Since most studies have used facial expressions as emotional cues, we examined whether trait anxiety affects the recognition of emotion in a dynamic face and voice that were presented in synchrony. The face and voice cues conveyed either matched (e.g., happy face and voice) or mismatched emotions (e.g., happy face and angry voice). Participants with high or low trait anxiety were to indicate the perceived emotion using one of the cues while ignoring the other. The results showed that individuals with high trait anxiety were more likely to interpret others' emotions in a negative manner, putting more weight on the to-be-ignored angry cues. This interpretation bias was found regardless of the cue modality (i.e., face or voice). Since trait anxiety did not affect recognition of the face or voice cues presented in isolation, this interpretation bias appears to reflect an altered integration of the face and voice cues among anxious individuals.
  • Asakawa, K, Tanaka, A, Sakamoto, S, Iwaya, Y, Suzuki, Y
    Acoustical Science and Technology, 32(3) 125-128, Mar, 2011  Peer-reviewed
  • 高橋麻衣子, 田中章浩
    The Japanese Journal of Cognitive Psychology, 8(2) 131-143, Feb, 2011  Peer-reviewed
  • Yutaka Sato, Koichi Mori, Toshizo Koizumi, Yasuyo Minagawa-Kawai, Akihiro Tanaka, Emi Ozawa, Yoko Wakaba, Reiko Mazuka
    FRONTIERS IN PSYCHOLOGY, 2(70), 2011  Peer-reviewed
    Developmental stuttering is a speech disorder in fluency characterized by repetitions, prolongations, and silent blocks, especially in the initial parts of utterances. Although their symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. In order to address this issue, we measured the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers, with near-infrared spectroscopy. We used the analysis-resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only /itta/ tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly, and /itta/ and /itta?/ tokens in prosodic contrast blocks. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage in the phonemic contrast over the prosodic contrast condition. These results indicate that the functional lateralization for auditory speech processing is in disarray among those who stutter, even at preschool age. These results shed light on the neural pathophysiology of developmental stuttering.
  • Akihiro Tanaka, Shuichi Sakamoto, Yôiti Suzuki
    Acoustical Science and Technology, 32(6) 264-267, 2011  Peer-reviewed
  • 田中章浩
    IEICE Technical Report, HIP2010-66, 25-28, Nov, 2010  Invited
  • Miyazawa, S, Tanaka, A, Sakamoto, S, Nishimoto, T
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2010, 190-193, Oct, 2010  Peer-reviewed
  • Koizumi, A, Tanaka, A, Imai, H, Hiramatsu, S, Hiramoto, E, Sato, T, de Gelder, B
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2010, 194-198, Oct, 2010  Peer-reviewed
  • Tanaka, A, Koizumi, A, Imai, H, Hiramatsu, S, Hiramoto, E, de Gelder, B
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2010, 49-53, Oct, 2010  Peer-reviewed
  • Akihiro Tanaka, Ai Koizumi, Hisato Imai, Saori Hiramatsu, Eriko Hiramoto, Beatrice de Gelder
    PSYCHOLOGICAL SCIENCE, 21(9) 1259-1262, Sep, 2010  Peer-reviewed
    Cultural differences in emotion perception have been reported mainly for facial expressions and to a lesser extent for vocal expressions. However, the way in which the perceiver combines auditory and visual cues may itself be subject to cultural variability. Our study investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. A face and a voice, expressing either congruent or incongruent emotions, were presented on each trial. Participants were instructed to judge the emotion expressed in one of the two sources. The effect of to-be-ignored voice information on facial judgments was larger in Japanese than in Dutch participants, whereas the effect of to-be-ignored face information on vocal judgments was smaller in Japanese than in Dutch participants. This result indicates that Japanese people are more attuned than Dutch people to vocal processing in the multisensory perception of emotion. Our findings provide the first evidence that multisensory integration of affective information is modulated by perceivers' cultural background.

Misc.

 66
  • Nakamura, K. A, Tanaka, A
    PsyArXiv, Mar, 2023  
  • 田中章浩, 清水大地, 小手川正二郎
    The Journal of the Institute of Image Information and Television Engineers, 75(5) 614-620, Sep, 2021  
  • Rika Oya, Akihiro Tanaka
    Mar 12, 2021  
    Previous research has revealed that nonverbal touch can communicate several emotions. This study compared the perception of emotions expressed by touch with that expressed by voice to examine the suitability of these channels to convey positive or negative emotions. In our experiment, the encoder expressed 12 emotions, including complex ones, by touching the decoder’s arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that positive emotions, such as love and gratitude, tended to be perceived more correctly when expressed by touch, while negative emotions such as sadness and disgust were perceived more correctly when expressed by voice. Interestingly, the average accuracy for touch and voice did not differ under the free expression method. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.

Books and Other Publications

 11

Presentations

 273

Research Projects

 29

Academic Activities

 5

Social Activities

 43

Media Coverage

 30