Research Achievements

Akihiro Tanaka (田中 章浩)

Basic Information

Affiliation
Professor, Psychology Major, Department of Psychology and Communication, School of Arts and Sciences, Tokyo Woman's Christian University
Degree
Ph.D. in Psychology (The University of Tokyo)

Researcher Number
80396530
J-GLOBAL ID
200901077725261773
researchmap Member ID
5000089644

Papers (72)
  • Tanaka, A, Asakawa, K, Imai, H
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2009, 113-116, September 2009. Peer-reviewed.
  • Asakawa, K, Tanaka, A, Imai, H
    Proceedings of the 31st Annual Meeting of the Cognitive Science Society, 1669-1673, August 2009. Peer-reviewed.
  • Sakamoto, S, Tanaka, A, Numahata, S, Imai, A, Takagi, T, Suzuki, Y
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2009, 9-12, August 2009. Peer-reviewed.
  • Akihiro Tanaka, Shuichi Sakamoto, Komi Tsumura, Yoiti Suzuki
    NeuroReport, 20(5), 473-477, March 2009. Peer-reviewed.
    This study investigated the effects of intermodal timing differences and speed differences on the word intelligibility of auditory-visual speech. Words were presented under visual-only, auditory-only, and auditory-visual conditions. Two types of auditory-visual conditions were used: asynchronous and expansion conditions. In the asynchronous conditions, the audio lag was 0-400 ms. In the expansion conditions, the auditory signal was time expanded (0-400 ms), whereas the visual signal was kept at the original speed. Results showed that word intelligibility was higher in the auditory-visual conditions than in the auditory-only condition. Analysis of the auditory-visual benefit revealed that the benefit at the end of words declined as the amount of time expansion increased, whereas it did not decline in the asynchronous conditions. NeuroReport 20:473-477 © 2009 Wolters Kluwer Health | Lippincott Williams & Wilkins.
  • Shuichi Sakamoto, Akihiro Tanaka, Komi Tsumura, Yôiti Suzuki
    Journal on Multimodal User Interfaces, 2(3), 199-203, 2009. Peer-reviewed.
    This study investigated the effects on speech intelligibility of asynchrony between the speech signal and the moving image of the talker's face, induced by time expansion of the speech signal. A word intelligibility test was performed with young listeners. Japanese 4-mora words were uttered by a female speaker. Each word was processed with the STRAIGHT software to expand the speech signal by 0 to 400 ms. These signals were combined with a moving image of the talker's face, which was kept at the original speed. The test was performed under three conditions: visual-only, auditory-only, and auditory-visual (AV). Results showed that intelligibility scores under the AV condition were statistically higher than those under the auditory-only condition even when the speech signal was expanded by 400 ms. These results suggest that a moving image of the talker's face enhances speech intelligibility as long as the lag between the speech signal and the moving image does not exceed 400 ms. © OpenInterface Association 2009.
  • Yohtaro Takano, Akihiro Tanaka
    Cognitive Studies, 15(3), 536-554, September 2008. Peer-reviewed.
  • Sakamoto, S, Tanaka, A, Numahata, S, Imai, A, Takagi, T, Suzuki, Y
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 79-82, September 2008. Peer-reviewed.
  • Takahashi, M, Tanaka, A
    Studies in Language Sciences, 7, 283-298, April 2008. Peer-reviewed.
  • Ryouichi Nishimura, Tsukasa Suenaga, Yôiti Suzuki, Akihiro Tanaka
    Journal of the Acoustical Society of Japan, 64(2), 63-72, February 2008. Peer-reviewed.
  • Tanaka, A, Sakamoto, S, Tsumura, T, Suzuki, Y
    Proceedings of the 19th International Congress on Acoustics, CD-ROM (6 pages, in English), September 2007. Invited.
  • Tanaka, A, Sakamoto, S, Tsumura, T, Suzuki, Y
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2007, 258-263, August 2007. Peer-reviewed.
  • Sakamoto, S, Tanaka, A, Tsumura, T, Suzuki, Y
    Proceedings of the International Conference on Auditory-Visual Speech Processing 2007, 258-263, August 2007. Peer-reviewed.
  • Akihiro Tanaka, Kaori Asakawa, Hisato Imai
    Proceedings of the Annual Meeting of the Japanese Society for Cognitive Psychology, 2007, 20, 2007.
    The tolerable range of presentation-timing differences within which visual and auditory stimuli are integrated (the temporal window) is roughly 200 ms. With simple stimuli (a flash and a pure tone), however, continued presentation at a constant timing difference (e.g., sound delayed by 200 ms) widens the temporal window. It is not clear whether this recalibration of the temporal window also occurs for speech stimuli, which are ecologically valid and structurally more complex. In this study, we therefore ran an experiment using speech stimuli, with the incidence of the McGurk effect as an index of integration. After adaptation to a timing difference in which the sound lagged by 233 ms, the incidence of the McGurk effect in sound-delayed conditions increased relative to adaptation to synchronized audio and video. This result suggests that recalibration of the temporal window also operates for speech, and it holds promise for applications to audiovisual media.
  • Yohtaro Takano, Akihiro Tanaka
    Quarterly Journal of Experimental Psychology, 60(11), 1555-1584, 2007. Peer-reviewed.
    In a mirror, left and right are said to look reversed. Surprisingly, this very familiar phenomenon, mirror reversal, still has no agreed-upon account to date. This study compared a variety of accounts in the light of empirical data. In Experiment 1, 102 students judged whether the mirror image of a person or a character looked reversed or not in 15 settings and also judged the directional relation between its components. In Experiment 2, 52 students made the reversal judgements in 13 settings. It was found for the first time that a substantial proportion of people denied the left-right mirror reversal of a person, whereas virtually all of them did recognize that of a character. This discrepancy strongly suggests that these two kinds of mirror reversal are produced by different processes. A number of findings, including this discrepancy, clearly contradicted two accounts that are currently active: one based on the priority of the up-down and front-back axes over the left-right axis, and one based on the physical rotation of an object. All the findings were consistent with an account that considers mirror reversal a complex of three different phenomena produced by three different processes.
  • Yutaka Sato, Koichi Mori, Toshizo Koizumi, Yasuyo Minagawa, Akihiro Tanaka, Emi Ozawa, Yoko Wakaba
    The Japan Journal of Logopedics and Phoniatrics, 47(4), 384-389, 2006. Peer-reviewed.
  • Akihiro Tanaka, Koichi Mori, Toshizo Koizumi, Yutaka Sato, Hikaru Tauchi
    Audiology Japan, 48(1), 72-78, March 2005. Peer-reviewed.
    Cochlear implants are generally designed and fitted to optimize the perception of phonemes; the perception of prosody is not necessarily good, and prosody may even interfere with phoneme perception. In this study, adult cochlear implant users (N24, SPEAK) performed a word perception test, and we examined 1) the effect of pitch-accent type on phoneme perception and 2) the effect of sound pressure on the perception of phonemes and accent type. Four presentation levels were set around the comfortable level, and the test words were meaningful two-mora words carrying one of two pitch-accent patterns (high-low or low-high). The results showed cases in which the same phonemes were misheard as different phonemes depending on accent type or sound pressure, suggesting an interaction between phonemes and prosody. Accent perception scores did not differ significantly across presentation levels, whereas phoneme perception improved as sound pressure increased. These findings suggest that post-operative training should include utterances with various accents and intonation patterns in the training materials.
  • Akihiro Tanaka, Kuninori Nakamura
    Psychological Reports, 95(3, Pt. 1), 723-734, December 2004. Peer-reviewed.
    Previous studies of second language aptitude have mainly used verbal stimuli in memory tasks. Memory for musical stimuli has not been used in aptitude studies, although music and language have structural similarity. In this study, 30 Japanese university students who speak English as a second language (19 men; M = 21.3 yr., SD = 1.8) participated in the experiment as volunteers. They performed verbal memory tasks, musical memory tasks, and English pronunciation tasks. Factor analysis indicated that verbal and musical memory abilities are better represented as a unitary factor than as two independent factors. Further, a path analysis supported the hypothesis that memory for both verbal and musical tasks affects proficiency of second language pronunciation, including prosodic features such as stress within a word or intonation across a couple of sentences. The memory factor was interpreted as reflecting the performance of "auditory working memory."
  • Tanaka, A, Nakamura, K
    Studies in Language Sciences, 3, 185-195, March 2004. Peer-reviewed.
  • Yutaka Sato, Koichi Mori, Toshizo Koizumi, Yasuyo Minagawa, Akihiro Tanaka, Emi Ozawa
    The Japan Journal of Logopedics and Phoniatrics, 45(3), 181-186, 2004. Peer-reviewed.
  • Koichi Mori, Akihiro Tanaka, Toshizo Koizumi, Hikaru Tauchi
    Cochlear Implants International, 5(1), 107-109, 2004. Peer-reviewed.
  • Akihiro Tanaka, Yohtaro Takano
    Journal of Music Perception and Cognition, 8(2), 81-91, December 2002. Peer-reviewed.
  • A Tanaka, Y Takano
    Proceedings of the Twenty-Second Annual Conference of the Cognitive Science Society, 1059, 2000. Peer-reviewed.

MISC (66)
  • Nakamura, K. A, Tanaka, A
    PsyArXiv, March 2023.
  • Akihiro Tanaka, Daichi Shimizu, Shojiro Kotegawa
    The Journal of the Institute of Image Information and Television Engineers, 75(5), 614-620, September 2021.
  • Rika Oya, Akihiro Tanaka
    March 12, 2021
    Previous research has revealed that nonverbal touch can communicate several emotions. This study compared the perception of emotions expressed by touch with that expressed by voice, to examine the suitability of these channels for conveying positive or negative emotions. In our experiment, the encoder expressed 12 emotions, including complex ones, by touching the decoder's arm or uttering the syllable /e/, and the decoder judged the emotion. The results showed that positive emotions, such as love and gratitude, tended to be perceived more correctly when expressed by touch, while negative emotions such as sadness and disgust were perceived more correctly when expressed by voice. Interestingly, the average accuracy for touch and voice did not differ under the free expression method. These results suggest that the two channels (touch and voice) are differently suited to conveying positive and negative emotions.

Books and Other Publications (11)

Presentations (273)

Research Projects (29)

Academic Activities (5)

Social Activities (43)

Media Coverage (30)