Research Achievements

田中 章浩

タナカ アキヒロ  (Akihiro Tanaka)

Basic Information

Affiliation
Professor, Psychology Major, Department of Psychology and Communication, School of Arts and Sciences, Tokyo Woman's Christian University
Degree
Ph.D. in Psychology (The University of Tokyo)

Researcher Number
80396530
J-GLOBAL ID
200901077725261773
researchmap Member ID
5000089644

Papers

 72
  • 新井田統, 小森智康, 酒向慎司, 田中章浩, 布川清彦
    電子情報通信学会誌 107(3) 237-243 March 2024
  • 澤田佳子, 河原美彩子, 田中章浩
    日本感性工学会論文誌 22(4) 405-416 December 2023  Peer-reviewed
  • Rika Oya, Akihiro Tanaka
    i-Perception 14(2) 204166952311604 March 21, 2023  Peer-reviewed
    Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the emotional valence level. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy of negative emotions was higher for voice than for touch, whereas that of positive emotions was marginally higher for touch than for voice. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.
  • Fabiola Diana, Misako Kawahara, Isabella Saccardi, Ruud Hortensius, Akihiro Tanaka, Mariska E. Kret
    International Journal of Social Robotics September 28, 2022  Peer-reviewed
    Abstract Historically, there has been a great deal of confusion in the literature regarding cross-cultural differences in attitudes towards artificial agents and preferences for their physical appearance. Previous studies have almost exclusively assessed attitudes using self-report measures (i.e., questionnaires). In the present study, we sought to expand our knowledge on the influence of cultural background on explicit and implicit attitudes towards robots and avatars. Using the Negative Attitudes Towards Robots Scale and the Implicit Association Test in a Japanese and Dutch sample, we investigated the effect of culture and robots’ body types on explicit and implicit attitudes across two experiments (total n = 669). Partly overlapping with our hypothesis, we found that Japanese individuals had a more positive explicit attitude towards robots compared to Dutch individuals, but no evidence of such a difference was found at the implicit level. As predicted, the implicit preference towards humans was moderate in both cultural groups, but in contrast to what we expected, neither culture nor robot embodiment influenced this preference. These results suggest that only at the explicit but not implicit level, cultural differences appear in attitudes towards robots.
  • Rika Oya, Akihiro Tanaka
    Acoustical Science and Technology 43(5) 291-293 September 2022  Peer-reviewed
  • Roza G. Kamiloğlu, Akihiro Tanaka, Sophie K. Scott, Disa A. Sauter
    Philosophical Transactions of the Royal Society B: Biological Sciences 377(1841) January 3, 2022  Peer-reviewed
    Laughter is a ubiquitous social signal. Recent work has highlighted distinctions between spontaneous and volitional laughter, which differ in terms of both production mechanisms and perceptual features. Here, we test listeners' ability to infer group identity from volitional and spontaneous laughter, as well as the perceived positivity of these laughs across cultures. Dutch (n = 273) and Japanese (n = 131) participants listened to decontextualized laughter clips and judged (i) whether the laughing person was from their cultural in-group or an out-group; and (ii) whether they thought the laughter was produced spontaneously or volitionally. They also rated the positivity of each laughter clip. Using frequentist and Bayesian analyses, we show that listeners were able to infer group membership from both spontaneous and volitional laughter, and that performance was equivalent for both types of laughter. Spontaneous laughter was rated as more positive than volitional laughter across the two cultures, and in-group laughs were perceived as more positive than out-group laughs by Dutch but not Japanese listeners. Our results demonstrate that both spontaneous and volitional laughter can be used by listeners to infer laughers’ cultural group identity. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.
  • Takano, Y, Gjerde, P, Osaka, E, Takeo, K, Tanaka, A
    Cognitive Studies 28(4) 541-550 December 2021  Peer-reviewed
  • 河原 美彩子, 澤田 佳子, 田中 章浩
    日本感性工学会論文誌 20(3) 329-335 August 2021  Peer-reviewed
  • Yamamoto, H.W, Kawahara, M, Tanaka, A
    Frontiers in Psychology (online publication) 12:702106 August 2021  Peer-reviewed
  • Mori, Y, Noguchi, Y, Tanaka, A, Ishii, K
    Culture and Brain May 2021  Peer-reviewed
  • Kawahara, M, Sauter, D. A, Tanaka, A
    Cognition and Emotion 1-12 May 2021  Peer-reviewed
  • Mori, K, Tanaka, A, Kawabata, H, Arao, H
    NeuroReport 32 858-863 April 2021  Peer-reviewed
  • Yamamoto, H. W, Kawahara, M, Kret, M. E, Tanaka, A
    Letters on Evolutionary Behavioral Science 11(2) December 2020  Peer-reviewed
  • Yamamoto, H.W, Kawahara, M, Tanaka, A
    PLoS ONE 15(6) e0234553 June 2020  Peer-reviewed
  • Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka
    The 15th International Conference on Auditory-Visual Speech Processing August 10, 2019
  • Kawahara, M, Yamamoto, H.W, Tanaka, A
    Acoustical Science and Technology 40(5) 360-363 2019  Peer-reviewed
  • Yamamoto, H. W, Kawahara, M, Tanaka, A
    Acoustical Science and Technology 2019  Peer-reviewed
  • 田中章浩
    日本薬学会会報「ファルマシア」(November 2018 issue) 1040-1044 November 2018  Invited
  • 田中章浩
    映像情報メディア学会誌 72(1) 10-14 January 1, 2018  Invited
  • Yamamoto, H. W, Kawahara, M, Tanaka, A
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2017 D2(S5.1) 1-4 August 25, 2017  Peer-reviewed
  • Kawahara, M, Sauter, D, Tanaka, A
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2017 D2(S5.2) 1-6 August 25, 2017  Peer-reviewed
  • Kawase, M, Adachi, I, Tanaka, A
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2017 D2(S5.3) 1-4 August 25, 2017  Peer-reviewed
  • 横森文哉, 二宮大和, 森勢将雅, 田中章浩, 小澤賢司
    日本感性工学会論文誌 15(7) 721-729 December 2016  Peer-reviewed
  • 森田磨里絵, 田中章浩
    認知心理学研究 14(1) 9-19 September 2016  Peer-reviewed
    What makes an object appear beautiful or pleasing when we look at it? Previous research has shown that conceptual processing that occurs automatically and unconsciously upon contact with a stimulus influences aesthetic evaluation. However, as illustrated by dual-process models of recognition, conceptual processing operates at two levels: an automatic, unconscious level and a non-automatic, conscious level. In this study, we therefore used the Remember/Know procedure to examine how conscious and unconscious conceptual processing affect aesthetic evaluation. By comparing Remember judgments, which reflect recollection (a conscious conceptual process), with Know judgments, which reflect familiarity (an unconscious conceptual process), we also examined the relative magnitude of the effects of the two processes on aesthetic evaluation. In the experiment, participants studied a set of images and then performed Remember/Know judgments and aesthetic evaluations as a recognition task. The results showed that aesthetic evaluations were significantly higher for images given Remember judgments than for those given Know judgments, suggesting that conscious conceptual processing may enhance aesthetic evaluation.
  • Takagi, S, Miyazawa, S, Huis In't Veld, E, de Gelder, B, Hamano, Y, Tabei, K, Tanaka, A
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015 57-62 September 2015  Peer-reviewed
  • Tanaka, A, Takagi, S, Hiramatsu, S, Huis In't Veld, E, de Gelder, B
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015 63-66 September 2015  Peer-reviewed
  • Takahashi, M, Tanaka, A
    Proceedings of the International Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing 2015 185-189 September 2015  Peer-reviewed
  • Ujiie, Y, Asai, T, Tanaka, A, Wakabayashi, A
    Letters on Evolutionary Behavioral Science 6(2) 9-12 August 2015  Peer-reviewed
  • Sachiko Takagi, Saori Hiramatsu, Ken-Ichi Tabei, Akihiro Tanaka
    Frontiers in Integrative Neuroscience 9(1) 1-1 2015  Peer-reviewed
    Previous studies have shown that the perception of facial and vocal affective expressions interacts with each other. Facial expressions usually dominate vocal expressions when we perceive the emotions of face-voice stimuli. In most of these studies, participants were instructed to pay attention to the face or voice. Few studies compared the perceived emotions with and without specific instructions regarding the modality to which attention should be directed. Also, these studies used combinations of the face and voice which expresses two opposing emotions, which limits the generalizability of the findings. The purpose of this study is to examine whether the emotion perception is modulated by instructions to pay attention to the face or voice using the six basic emotions. Also we examine the modality dominance between the face and voice for each emotion category. Before the experiment, we recorded faces and voices which expresses the six basic emotions and orthogonally combined these faces and voices. Consequently, the emotional valence of visual and auditory information was either congruent or incongruent. In the experiment, there were unisensory and multisensory sessions. The multisensory session was divided into three blocks according to whether an instruction was given to pay attention to a given modality (face attention, voice attention, and no instruction). Participants judged whether the speaker expressed happiness, sadness, anger, fear, disgust, or surprise. Our results revealed that instructions to pay attention to one modality and congruency of the emotions between modalities modulated the modality dominance, and the modality dominance is differed for each emotion category. In particular, the modality dominance for anger changed according to each instruction. Analyses also revealed that the modality dominance suggested by the congruency effect can be explained in terms of the facilitation effect and the interference effect.
  • 髙木幸子,平松沙織,田部井賢一,田中章浩
    認知科学 21(3) 344-362 September 2014  Peer-reviewed
  • 高木 幸子, 田部井 賢一, HUIS IN'T VELD Elisabeth, GELDER Beatrice de, 田中 章浩
    基礎心理学研究 32(1) 29-39 2013  Peer-reviewed
    Information derived from facial and vocal nonverbal expressions plays an important role in social communication in the real and virtual worlds. In the present study, we investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. We used a face and voice that expressed incongruent emotions as stimuli and conducted two experiments. We presented either the face or voice in Experiment 1, and both the face and voice in Experiment 2. We found that both visual and auditory information were important for Japanese participants judging in-group stimuli, while visual information was more important for other combinations of participants and stimuli. Additionally, we showed that the in-group advantage provided by auditory information was higher in Japanese than Dutch participants. Our findings indicate that audio-visual integration of affective information is modulated by the perceiver's cultural background, and that there are cultural differences between in-group and out-group stimuli.
  • Hidaka, S, Shibata, H, Kurihara, M, Tanaka, A, Konno, A, Maruyama, S, Gyoba, J, Hagiwara, H, Koizumi, M
    Neuroscience Research 73(1) 73-79 April 2012  Peer-reviewed
  • 宮澤史穂, 田中章浩, 西本武彦
    認知科学 19(1) 122-130 March 2012  Peer-reviewed
  • Asakawa, K, Tanaka, A, Imai, H
    Kansei Engineering International Journal 11(1) 35-40 February 15, 2012  Peer-reviewed
    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the "simultaneous" responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.
  • 高橋麻衣子, 田中章浩
    認知科学 18(4) 595-603 December 2011  Peer-reviewed
  • Bernard M. C. Stienen, Akihiro Tanaka, Beatrice de Gelder
    PLoS ONE 6(10) October 2011  Peer-reviewed
    Multisensory integration may occur independently of visual attention as previously shown with compound face-voice stimuli. We investigated in two experiments whether the perception of whole body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset between target and mask varied from -50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expressions influences audiovisual perception independently of the visibility of the stimuli. In the second experiment participants categorized the emotional voices combined with masked bodily expressions as fearful or happy. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside and independent of visual awareness.
  • 田中章浩
    認知科学 18(3) 416-427 September 2011  Invited
  • Tanaka, A, Asakawa, K, Imai, H
    NeuroReport 22(14) 684-688 September 2011  Peer-reviewed
  • 高橋麻衣子, 田中章浩
    教育心理学研究 59(2) 179-192 June 2011  Peer-reviewed
  • Koizumi, A, Tanaka, A, Imai, H, Hiramatsu, S, Hiramoto, E, Sato, T, de Gelder, B
    Experimental Brain Research 213(2-3) 275-282 April 2011  Peer-reviewed
    Anxious individuals have been shown to interpret others' emotional states negatively. Since most studies have used facial expressions as emotional cues, we examined whether trait anxiety affects the recognition of emotion in a dynamic face and voice that were presented in synchrony. The face and voice cues conveyed either matched (e.g., happy face and voice) or mismatched emotions (e.g., happy face and angry voice). Participants with high or low trait anxiety were to indicate the perceived emotion using one of the cues while ignoring the other. The results showed that individuals with high trait anxiety were more likely to interpret others' emotions in a negative manner, putting more weight on the to-be-ignored angry cues. This interpretation bias was found regardless of the cue modality (i.e., face or voice). Since trait anxiety did not affect recognition of the face or voice cues presented in isolation, this interpretation bias appears to reflect an altered integration of the face and voice cues among anxious individuals.
  • Asakawa, K, Tanaka, A, Sakamoto, S, Iwaya, Y, Suzuki, Y
    Acoustical Science and Technology 32(3) 125-128 March 2011  Peer-reviewed
  • 高橋麻衣子, 田中章浩
    認知心理学研究 8(2) 131-143 February 2011  Peer-reviewed
  • Yutaka Sato, Koichi Mori, Toshizo Koizumi, Yasuyo Minagawa-Kawai, Akihiro Tanaka, Emi Ozawa, Yoko Wakaba, Reiko Mazuka
    Frontiers in Psychology 2(70) 2011  Peer-reviewed
    Developmental stuttering is a speech disorder in fluency characterized by repetitions, prolongations, and silent blocks, especially in the initial parts of utterances. Although their symptoms are motor related, people who stutter show abnormal patterns of cerebral hemispheric dominance in both anterior and posterior language areas. It is unknown whether the abnormal functional lateralization in the posterior language area starts during childhood or emerges as a consequence of many years of stuttering. In order to address this issue, we measured the lateralization of hemodynamic responses in the auditory cortex during auditory speech processing in adults and children who stutter, including preschoolers, with near-infrared spectroscopy. We used the analysis-resynthesis technique to prepare two types of stimuli: (i) a phonemic contrast embedded in Japanese spoken words (/itta/ vs. /itte/) and (ii) a prosodic contrast (/itta/ vs. /itta?/). In the baseline blocks, only/itta/tokens were presented. In phonemic contrast blocks, /itta/ and /itte/ tokens were presented pseudo-randomly, and /itta/ and /itta?/ tokens in prosodic contrast blocks. In adults and children who do not stutter, there was a clear left-hemispheric advantage for the phonemic contrast compared to the prosodic contrast. Adults and children who stutter, however, showed no significant difference between the two stimulus conditions. A subject-by-subject analysis revealed that not a single subject who stutters showed a left advantage in the phonemic contrast over the prosodic contrast condition. These results indicate that the functional lateralization for auditory speech processing is in disarray among those who stutter, even at preschool age. These results shed light on the neural pathophysiology of developmental stuttering.
  • Akihiro Tanaka, Shuichi Sakamoto, Yôiti Suzuki
    Acoustical Science and Technology 32(6) 264-267 2011  Peer-reviewed
  • 田中章浩
    電子情報通信学会技術研究報告 HIP2010-66, 25-28 November 2010  Invited
  • Miyazawa, S, Tanaka, A, Sakamoto, S, Nishimoto, T
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2010 190-193 October 2010  Peer-reviewed
  • Koizumi, A, Tanaka, A, Imai, H, Hiramatsu, S, Hiramoto, E, Sato, T, de Gelder, B
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2010 194-198 October 2010  Peer-reviewed
  • Tanaka, A, Koizumi, A, Imai, H, Hiramatsu, S, Hiramoto, E, de Gelder, B
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2010 49-53 October 2010  Peer-reviewed
  • Akihiro Tanaka, Ai Koizumi, Hisato Imai, Saori Hiramatsu, Eriko Hiramoto, Beatrice de Gelder
    Psychological Science 21(9) 1259-1262 September 2010  Peer-reviewed
    Cultural differences in emotion perception have been reported mainly for facial expressions and to a lesser extent for vocal expressions. However, the way in which the perceiver combines auditory and visual cues may itself be subject to cultural variability. Our study investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. A face and a voice, expressing either congruent or incongruent emotions, were presented on each trial. Participants were instructed to judge the emotion expressed in one of the two sources. The effect of to-be-ignored voice information on facial judgments was larger in Japanese than in Dutch participants, whereas the effect of to-be-ignored face information on vocal judgments was smaller in Japanese than in Dutch participants. This result indicates that Japanese people are more attuned than Dutch people to vocal processing in the multisensory perception of emotion. Our findings provide the first evidence that multisensory integration of affective information is modulated by perceivers' cultural background.

MISC

 66
  • Nakamura, K. A, Tanaka, A
    PsyArXiv March 2023
  • 田中章浩, 清水大地, 小手川正二郎
    映像情報メディア学会誌 75(5) 614-620 September 2021
  • Rika Oya, Akihiro Tanaka
    March 12, 2021
    Previous research has revealed that nonverbal touch can communicate several emotions. This study compared the perception of emotions expressed by touch with that expressed by voice to examine the suitability of these channels to convey positive or negative emotions. In our experiment, the encoder expressed 12 emotions, including complex ones, by touching the decoder’s arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that positive emotions, such as love and gratitude, tended to be perceived more correctly when expressed by touch, while negative emotions such as sadness and disgust were perceived more correctly when expressed by voice. Interestingly, the average accuracy for touch and voice did not differ under the free expression method. These results suggest that different channels (touch and voice) have different superiority on the perception of positive and negative emotions.

Books and Other Publications

 11

Presentations

 273

Research Projects

 29

Academic Contribution Activities

 5

Social Contribution Activities

 43

Media Coverage

 30