Research Achievements

Tohru Nitta

新田 徹 (ニッタ トオル)

Basic Information

Affiliation
Professor, Major in Information and Mathematical Sciences, Department of Mathematical Sciences, School of Arts and Sciences, Tokyo Woman's Christian University
Degree
Doctor of Engineering (University of Tsukuba)

Contact
tnitta@lab.twcu.ac.jp
researchmap Member ID
0000045389

External Links

Committee Memberships

 1

Papers

 83
  • S. Yamauchi, T. Nitta, T. Ohnishi
    arXiv preprint arXiv:2411.05816v1, November 2024
  • Tohru Nitta
    Nonlinear Theory and Its Applications, IEICE 14(2) 175-192, April 2023  Peer-reviewed
  • Tohru Nitta
    Proceedings of the 2022 International Symposium on Nonlinear Theory and Its Applications (NOLTA2022) 248-251, December 2022  Peer-reviewed
  • Y. Okawa, S. Kanoga, T. Hoshino, T. Nitta
    Proceedings of the 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC2022) 3232-3235, July 2022  Peer-reviewed
  • Tohru Nitta
    arXiv preprint arXiv:1806.04884v3, May 2022
  • Y. Okawa, T. Nitta
    Proceedings of the 13th Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) 187-192, December 2021  Peer-reviewed
  • Tohru Nitta, Hui Hu Gan
    Proceedings of the Joint 11th International Conference on Soft Computing and Intelligent Systems and 21st International Symposium on Advanced Intelligent Systems (SCIS & ISIS 2020) 1-3, December 2020  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    arXiv preprint arXiv:1806.04884v2, 2020
  • Tohru Nitta, Masaki Kobayashi, Danilo P. Mandic
    IEEE Transactions on Signal Processing 67(15) 3985-3994, August 2019  Peer-reviewed, Lead author, Corresponding author
    We provide a rigorous account of the equivalence of the complex-valued widely linear estimation method and the quaternion involution widely linear estimation method with their vector-valued real linear estimation counterparts. This is achieved by an account of degrees of freedom and by providing matrix mappings between a complex variable and an isomorphic bivariate real vector, and between a quaternion variable and a quadrivariate real vector. Furthermore, we show that the parameters in the complex-valued linear estimation method, the complex-valued widely linear estimation method, the quaternion linear estimation method, the quaternion semi-widely linear estimation method, and the quaternion involution widely linear estimation method carry distinct geometric structures imposed on complex numbers and quaternions, respectively, whereas the real-valued linear estimation methods do not exhibit any structure. This key difference explains, in both theoretical and practical terms, the advantage of estimation in division algebras (complex, quaternion) over their multivariate real vector counterparts. In addition, we discuss the computational complexities of the estimators of the hypercomplex widely linear estimation methods. (A numerical sketch of the complex/bivariate-real equivalence appears after this list.)
  • Tohru Nitta
    Mathematical Methods in the Applied Sciences 41(11) 4170-4178, July 2018  Peer-reviewed, Lead author, Corresponding author
    It has been reported that training deep neural networks is more difficult than training shallow neural networks. Hinton et al. proposed deep belief networks with a learning algorithm that trains one layer at a time, and a much better generalization can be achieved when pre-training each layer with an unsupervised learning algorithm. Since then, deep neural networks have been extensively studied. On the other hand, it has been revealed that singular points affect the training dynamics of learning models such as neural networks and cause training to stall. Naturally, training deep neural networks suffers from singular points. In this paper, we present a deep neural network model that has fewer singular points than the usual one. First, we demonstrate that some singular points in the deep real-valued neural network, which is equivalent to a deep complex-valued neural network, are resolved as its inherent property. Such deep neural networks are less likely to become trapped in local minima or plateaus caused by critical points. Results of experiments on the two-spirals problem, which has extreme nonlinearity, support our theory.
  • Tohru Nitta, Yasuaki Kuroe
    IEEE Transactions on Neural Networks and Learning Systems 29(5) 1689-1702, May 2018  Peer-reviewed, Lead author, Corresponding author
    In this paper, we first extend the Wirtinger derivative, which is defined for complex functions, to hyperbolic functions, and use it to derive the hyperbolic gradient operator yielding the steepest descent direction. Next, we derive the hyperbolic backpropagation learning algorithms for some multilayered hyperbolic neural networks (NNs) using the hyperbolic gradient operator. It is shown that the use of the Wirtinger derivative halves the effort necessary for deriving the learning algorithms, simplifies their representation, and makes their computer programs easier to code. In addition, we discuss the differences between the derived Hyperbolic-BP rules and the complex-valued backpropagation learning rule (Complex-BP). Finally, we perform some experiments with the derived learning algorithms. We find that the convergence rates of the Hyperbolic-BP learning algorithms are high even if fully hyperbolic activation functions are used, and discover that the Hyperbolic-BP learning algorithm for the hyperbolic NN with the split-type hyperbolic activation function has the ability to learn hyperbolic rotation as its inherent property. (A minimal sketch of hyperbolic-number arithmetic appears after this list.)
  • Tohru Nitta
    arXiv preprint arXiv:1806.04884, 2018  Lead author, Corresponding author
  • Tohru Nitta
    IEEE Transactions on Neural Networks and Learning Systems 28(10) 2282-2293, October 2017  Peer-reviewed, Lead author, Corresponding author
    We present a theoretical analysis of singular points of artificial deep neural networks, and provide deep neural network models having no critical points introduced by a hierarchical structure. Such deep neural network models are considered well suited to gradient-based optimization. First, we show that a large number of critical points introduced by a hierarchical structure exist in deep neural networks as straight lines, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks to have no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two implementation methods of the sufficient condition are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that, as an inherent property, does not have some critical points introduced by the hierarchical structure (called the avoidant neural network). (A toy demonstration of such lines of critical points appears after this list.)
  • Tohru Nitta
    Proceedings of the 23rd International Conference on Neural Information Processing (ICONIP2016-Kyoto), Part IV, LNCS 9950, 389-396, 2016  Peer-reviewed, Lead author, Corresponding author
    In this paper, we analyze a deep neural network model from the viewpoint of singularities. First, we show that a large number of critical points introduced by a hierarchical structure exist in the deep neural network as straight lines. Next, we derive sufficient conditions for the deep neural network to have no critical points introduced by a hierarchical structure.
  • Yili Xia, Cyrus Jahanchahi, Tohru Nitta, Danilo P. Mandic
    IEEE Transactions on Neural Networks and Learning Systems 26(12) 3287-3292, December 2015  Peer-reviewed
    The quaternion widely linear (WL) estimator has been recently introduced for optimal second-order modeling of the generality of quaternion data, both second-order circular (proper) and second-order noncircular (improper). Experimental evidence exists of its performance advantage over the conventional strictly linear (SL) as well as the semi-WL (SWL) estimators for improper data. However, rigorous theoretical and practical performance bounds are still missing in the literature, yet this is crucial for the development of quaternion valued learning systems for 3-D and 4-D data. To this end, based on the orthogonality principle, we introduce a rigorous closed-form solution to quantify the degree of performance benefits, in terms of the mean square error, obtained when using the WL models. The cases when the optimal WL estimation can simplify into the SWL or the SL estimation are also discussed.
  • Tohru Nitta
    Archives of Neuroscience 2(4), October 2015  Peer-reviewed, Invited, Lead author, Corresponding author
    Context: Recently, the singular points of neural networks have attracted attention from the artificial intelligence community, and their interesting properties have been demonstrated. The objective of this study is to provide an overview of studies on the singularities of complex-valued neural networks. Evidence Acquisition: This review is based on the relevant literature on complex-valued neural networks and singular points. Results: Review of the studies and available literature on the subject area shows that the singular points of complex-valued neural networks have negative effects on learning, as do those of real-valued neural networks. However, the nature of the singular points in complex-valued neural networks is superior in quality, and methods for improving the learning performance have been proposed. Conclusions: A complex-valued neural network could be a promising learning method from the viewpoint of singularity.
  • Tohru Nitta
    Neural Computation 27(5) 1120-1141, May 2015  Peer-reviewed, Lead author
    This letter investigates the characteristics of the complex-valued neuron model with parameters represented by polar coordinates (called polar variable complex-valued neuron). The parameters of the polar variable complex-valued neuron are unidentifiable. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron. Furthermore, computer simulations suggest that a single polar variable complex-valued neuron has the following characteristics in the case of using the steepest gradient-descent method with square error: (1) unidentifiable parameters (singular points) degrade the learning speed and (2) a plateau can occur during learning. When the weight is attracted to the singular point, the learning tends to become stuck. However, computer simulations also show that the steepest gradient-descent method with amplitude-phase error and the complex-valued natural gradient method could reduce the effects of the singular points. The learning dynamics near singular points depends on the error functions and the training algorithms used.
  • Tohru Nitta
    Journal of Computer and Communications 2(1) 27-32, 2014  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    International Journal of Advanced Computer Science and Applications 5(7) 193-198, 2014  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Proceedings of the 6th International Conference on Agents and Artificial Intelligence (ICAART2014-Angers) 1 526-531, 2014  Peer-reviewed, Lead author, Corresponding author
    In this paper, the characteristics of the complex-valued neuron model with parameters represented by polar coordinates (called the polar variable complex-valued neuron) are investigated. The main results are as reported below. The polar variable complex-valued neuron is unidentifiable: there exists a parameter that does not affect the output value of the neuron, so its value cannot be identified. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron: the learning error does not decrease for a period. Furthermore, computer simulations suggest that a single polar variable complex-valued neuron has the following characteristics: (a) Unidentifiable parameters (singular points) degrade the learning speed. (b) A plateau can occur during learning. When the weight is attracted to the singular point, the learning tends to get stuck.
  • Tohru Nitta
    Neural Networks 43 1-7, July 2013  Peer-reviewed, Lead author, Corresponding author
    Most of the local minima caused by the hierarchical structure can be resolved by extending the real-valued neural network to complex numbers. It was proved in 2000 that a critical point of the real-valued neural network with H - 1 hidden neurons always gives many critical points of the real-valued neural network with H hidden neurons. These critical points consist of many lines in the parameter space, which can be local minima or saddle points. Local minima cause plateaus, which have a strong negative influence on learning. However, most of the critical points of the complex-valued neural network are saddle points, unlike those of the real-valued neural network. This is a prominent property of the complex-valued neural network.
  • Eckhard Hitzer, Tohru Nitta, Yasuaki Kuroe
    Advances in Applied Clifford Algebras 23(2) 377-404, May 24, 2013  Peer-reviewed
    We survey the development of Clifford's geometric algebra and some of its engineering applications during the last 15 years. Several recently developed applications and their merits are discussed in some detail. We thus hope to clearly demonstrate the benefit of developing problem solutions in a unified framework for algebra and geometry with the widest possible scope: from quantum computing and electromagnetism to satellite navigation, from neural computing to camera geometry, image processing, robotics and beyond.
  • Tohru Nitta
    International Journal of Advanced Computer Science and Applications 4(9) 68-73, 2013  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Applied Mathematics 4(12) 1616-1620, 2013  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Proceedings of the 17th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES2013-Kitakyushu) 269-275, 2013  Peer-reviewed, Lead author, Corresponding author
    A critical point is a point at which the derivatives of an error function are all zero. It has been shown in the literature that critical points caused by the hierarchical structure of a real-valued neural network can be local minima or saddle points, whereas most of the critical points caused by the hierarchical structure are saddle points in the case of complex-valued neural networks. Several studies have demonstrated that this kind of singularity has a negative effect on learning dynamics in neural networks. In this paper, we demonstrate via some examples that decomposing high-dimensional NNs into equivalent real-valued NNs yields NNs that do not have critical points based on the hierarchical structure.
  • Tohru Nitta
    International Journal of Organizational and Collective Intelligence 3(2) 81-116, 2012  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Proceedings of the International Conference on Neural Information Processing (Lecture Notes in Computer Science), ICONIP2011-Shanghai, 519-525, 2011  Peer-reviewed, Lead author, Corresponding author
    In this paper, we formulate a Clifford-valued widely linear estimation framework. A Clifford number is a hypercomplex number that generalizes real numbers, complex numbers, quaternions, and higher-dimensional numbers. As a first step, we also give a theoretical foundation for a quaternion-valued widely linear estimation framework. The estimation error obtained with the quaternion-valued widely linear estimation method is proven to be smaller than that obtained using the usual quaternion-valued linear estimation method.
  • Tohru Nitta
    電子情報通信学会論文誌D (IEICE Transactions on Information and Systems, Japanese Edition) J93-D(8) 1614-1621, 2010  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    International Journal of Neural Systems 18(2) 123-134, April 2008  Peer-reviewed, Lead author, Corresponding author
    This paper proves the uniqueness theorem for 3-layered complex-valued neural networks in which the threshold parameters of the hidden neurons can take nonzero values. That is, if a 3-layered complex-valued neural network is irreducible, the 3-layered complex-valued neural network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of the learnable parameters of the complex-valued neural network.
  • Tohru Nitta, Sven Buchholz
    Proceedings of the International Joint Conference on Neural Networks (IJCNN'08-Hong Kong) 2973-2979, 2008  Peer-reviewed, Lead author, Corresponding author
    In this paper, the basic properties, especially the decision boundary, of the hyperbolic neurons used in hyperbolic neural networks are investigated. In addition, a non-split hyperbolic sigmoid activation function is proposed.
  • Tohru Nitta
    Proceedings of the IJCAI-2007 Workshop on Complex-Valued Neural Networks and Neuro-Computing: Novel Methods, Applications and Implementations, January 2-7, 2007  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Neural Information Processing - Letters and Reviews 10(10) 237-242, 2006  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Neural Information Processing - Letters and Reviews 5(2) 33-39, 2004  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Neural Information Processing - Letters and Reviews 2(3) 53-56, 2004  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Neural Computation 16(1) 73-97, January 2004  Peer-reviewed, Lead author, Corresponding author
    This letter presents some results of an analysis on the decision boundaries of complex-valued neural networks whose weights, threshold values, and input and output signals are all complex numbers. The main results may be summarized as follows. (1) A decision boundary of a single complex-valued neuron consists of two hypersurfaces that intersect orthogonally, and divides a decision region into four equal sections. The XOR problem and the detection of symmetry problem, which cannot be solved with two-layered real-valued neural networks, can be solved by two-layered complex-valued neural networks with orthogonal decision boundaries, which reveals a potent computational power of complex-valued neural nets. Furthermore, the fading equalization problem can be successfully solved by a two-layered complex-valued neural network with the highest generalization ability. (2) A decision boundary of a three-layered complex-valued neural network has the orthogonal property as a basic structure, and its two hypersurfaces approach orthogonality as all the net inputs to each hidden neuron grow. In particular, most of the decision boundaries in the three-layered complex-valued neural network intersect orthogonally when the network is trained using the Complex-BP algorithm. As a result, the orthogonality of the decision boundaries improves its generalization ability. (3) The average learning speed of the Complex-BP is several times faster than that of the Real-BP, and the standard deviation of the learning speed of the Complex-BP is smaller than that of the Real-BP. It seems that the complex-valued neural network and the related algorithm are natural for learning complex-valued patterns for the above reasons.
  • Tohru Nitta
    Artificial Neural Networks and Neural Information Processing, Lecture Notes in Computer Science (Proceedings of the International Conference on Artificial Neural Networks / International Conference on Neural Information Processing, ICANN/ICONIP'03-Istanbul) 2714 993-1000, 2003  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Neural Networks 16(8) 1101-1105, 2003  Peer-reviewed, Lead author, Corresponding author
    This letter presents some results on the computational power of complex-valued neurons. The main results may be summarized as follows. The XOR problem and the detection of symmetry problem, which cannot be solved with a single real-valued neuron (i.e., a two-layered real-valued neural network), can be solved with a single complex-valued neuron (i.e., a two-layered complex-valued neural network) with orthogonal decision boundaries, which reveals the potent computational power of complex-valued neurons. Furthermore, the fading equalization problem can be successfully solved with a single complex-valued neuron with the highest generalization ability. (A toy sketch of this XOR construction appears after this list.)
  • Tohru Nitta
    Systems and Computers in Japan 34(14) 54-62, 2003  Peer-reviewed, Lead author, Corresponding author
    A complex neural network is obtained from an ordinary network by extending the (real-valued) parameters, such as the weights and the thresholds, to complex values. Applications to problems involving complex numbers, such as communications systems, are expected. This paper presents the following uniqueness theorem: when a complex function is given, the three-layered neural network that approximates the function is uniquely determined, up to a certain finite group, if it is irreducible. This finite group specifies the redundancy of the parameters in the complex neural network, but has a structure different from that of the real-valued neural network. The order of the finite group is examined, and it is shown that the redundancy of the complex-valued neural network is an exponential multiple of the redundancy of the real-valued neural network. Analysis of the redundancy is important in the theoretical investigation of the basic characteristics of complex-valued neural networks, such as the local minimum property. A sufficient condition is derived for a given three-layered complex-valued neural network to be minimal. These results are shown, in essence, by extending Sussmann's approach for real-valued neural networks.
  • Tohru Nitta
    Neurocomputing 50 291-303, 2003  Peer-reviewed, Lead author, Corresponding author
    This paper shows the differences between the real-valued neural network and the complex-valued neural network by analyzing their fundamental properties from the viewpoint of architecture. The main results may be summarized as follows: (a) A single complex-valued neuron with n inputs is equivalent to two real-valued neurons with 2n inputs that have a restriction on a set of weight parameters. (b) The decision boundary of a single complex-valued neuron consists of two hypersurfaces that intersect orthogonally. (c) The decision boundary of a three-layered complex-valued neural network has the orthogonal structure. (d) The orthogonality of the decision boundary in the three-layered Complex-BP network can improve its generalization ability. (e) The average learning speed of the Complex-BP is several times faster than that of the Real-BP, and the standard deviation of the learning speed of the Complex-BP is smaller than that of the Real-BP.
  • Tohru Nitta
    Proceedings of the 6th International Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Technologies (KES2002) I 628-632, 2002  Peer-reviewed, Invited, Lead author, Corresponding author
  • Tohru Nitta
    Proceedings of the International Conference on Neural Information Processing (ICONIP'02-Singapore) 3 1099-1103, 2002  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Neurocomputing 49(1-4) 423-428, 2002  Peer-reviewed, Lead author, Corresponding author
    In this letter, we clarify the redundancy of the parameters of the complex-valued neural network. The results may be summarized as follows. There exist four transformations that can cause the redundancy of the parameters of the complex-valued neural network, including the two transformations that cause the redundancy of the parameters of the real-valued neural network (i.e., the interchange of two hidden neurons and the sign flip of the parameters of the hidden neurons).
  • Tohru Nitta
    電子情報通信学会論文誌 D-II (IEICE Transactions, Japanese Edition) J85-D-II(5) 796-804, 2002  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Proceedings of the 5th International Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Technologies (KES2001) 550-554, 2001  Peer-reviewed, Invited, Lead author, Corresponding author
  • Kenji Nishida, Tohru Nitta, Toshio Tanaka, Hiroaki Inayoshi
    Proceedings of the Joint Conference on Information Sciences 1 815-818, 2000  Peer-reviewed
    In human memory, impressive objects (those with attachments to strong emotions, such as happiness and sadness) are retained easily, while non-impressive objects (those without attachments to strong emotions) are not. Furthermore, one can easily acquire systematic knowledge about a favorite field, while it is difficult to acquire such knowledge about a non-favorite field. Emotions thus seem to play an important role in retaining the memory of an object and in forming a conceptual memory of objects. We have therefore developed an emotion-memory model, and in this paper, we present a simulation result on concept formation.
  • Tohru Nitta
    Neural Processing Letters 12(3) 239-246, 2000  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta
    Proceedings of Toward a Science of Consciousness 29-30, 1999  Peer-reviewed, Lead author, Corresponding author
  • H. Inayoshi, T. Tanaka, K. Nishida, T. Nitta
    Proceedings of Toward a Science of Consciousness 48-49, 1999  Peer-reviewed
  • Tohru Nitta
    Proceedings of the International Conference on Neural Information Processing (ICONIP'99-Perth) 95-100, 1999  Peer-reviewed, Lead author, Corresponding author
  • Tohru Nitta, Toshio Tanaka, Kenji Nishida, Hiroaki Inayoshi
    Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 2 342-347, 1999  Peer-reviewed, Lead author, Corresponding author
    In this paper, we propose a computational model of personality (called the personality model) for the purpose of implementing non-intellectual functions of the human mind on computer systems. The personality model is formulated on the basis of psychoanalysis, assuming that the defense mechanism plays an essential role in personality, and inductive probability is employed for modeling the defense mechanism. The personality model is useful for the expression of feelings, and can be used in virtual reality, computer game characters, agent secretaries, and robotics.
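The complex/bivariate-real equivalence discussed in the IEEE Transactions on Signal Processing (2019) entry above can be checked numerically. The sketch below is not code from the paper; the model y = h*x + g*conj(x) and its 2x2 real-matrix form are standard widely-linear conventions, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_c():
    # A random complex scalar.
    return complex(rng.standard_normal(), rng.standard_normal())

h, g, x = rand_c(), rand_c(), rand_c()

# Widely linear model: y = h*x + g*conj(x).
y = h * x + g * x.conjugate()

# Equivalent bivariate real map: [Re y, Im y]^T = M @ [Re x, Im x]^T.
# M has 4 free real entries, matching the 4 real degrees of freedom of (h, g).
M = np.array([[h.real + g.real, g.imag - h.imag],
              [h.imag + g.imag, h.real - g.real]])
y_real = M @ np.array([x.real, x.imag])

print(np.allclose(y_real, [y.real, y.imag]))   # True

# The strictly linear case (g = 0) collapses M to the structured form
# [[h.real, -h.imag], [h.imag, h.real]] with only 2 degrees of freedom;
# this imposed geometric structure is what distinguishes estimation in the
# complex domain from an unstructured real-vector estimator.
```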
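As a companion to the hyperbolic neural network entries (e.g., the IEEE TNNLS 2018 paper with Kuroe), here is a minimal sketch of hyperbolic (split-complex) arithmetic, assuming the usual convention j^2 = +1. The class and function names are illustrative, not from the papers.

```python
from dataclasses import dataclass
import math

@dataclass
class Hyperbolic:
    a: float   # real part
    b: float   # coefficient of j, where j*j = +1
    def __mul__(self, o):
        # (a + j b)(c + j d) = (ac + bd) + j(ad + bc)
        return Hyperbolic(self.a * o.a + self.b * o.b,
                          self.a * o.b + self.b * o.a)

def boost(t):
    # Unit hyperbolic number cosh(t) + j*sinh(t); multiplying by it performs
    # a hyperbolic rotation, the operation the split-type Hyperbolic-BP
    # network is reported to learn as an inherent property.
    return Hyperbolic(math.cosh(t), math.sinh(t))

u = Hyperbolic(3.0, 1.0)
v = boost(0.7) * u
# Hyperbolic rotations preserve the indefinite norm a^2 - b^2:
print(u.a**2 - u.b**2, v.a**2 - v.b**2)   # both 8.0 (up to rounding)
```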
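The "critical points introduced by a hierarchical structure" analyzed in several entries above (IEEE TNNLS 2017, ICONIP2016, Neural Networks 2013) stem from reducibility: duplicating a hidden unit and splitting its outgoing weight leaves the network function unchanged along a whole line in parameter space. Below is a hedged sketch of that invariance, using an arbitrary toy network rather than any construction from the papers.

```python
import numpy as np

# Duplicating hidden unit 0 and splitting its outgoing weight v[0] into
# (lam*v[0], (1-lam)*v[0]) leaves the network output unchanged for every
# lam, so a whole line of parameter settings realizes the same map.

rng = np.random.default_rng(1)
x = rng.standard_normal(5)            # one input sample
W = rng.standard_normal((3, 5))       # hidden weights (3 hidden units)
v = rng.standard_normal(3)            # output weights

def net(W, v, x):
    return v @ np.tanh(W @ x)

base = net(W, v, x)
for lam in (0.0, 0.3, 1.7):           # walk along the line
    W2 = np.vstack([W, W[0]])         # duplicate hidden unit 0
    v2 = np.concatenate([[lam * v[0]], v[1:], [(1 - lam) * v[0]]])
    print(np.isclose(net(W2, v2, x), base))   # True for every lam
```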
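Finally, a toy sketch of the orthogonal-decision-boundary argument in the Neural Networks 2003 and Neural Computation 2004 entries: the two lines Re(net) = 0 and Im(net) = 0 of a single complex-valued neuron intersect orthogonally and cut the plane into four regions, which is enough to separate XOR. The (-1, +1) input encoding and the sign-based readout below are illustrative simplifications, not the papers' exact formulation.

```python
# A single complex-valued neuron: net(z) = w*z + b. Its two decision lines
# Re(net) = 0 and Im(net) = 0 are orthogonal and split the plane into four
# regions; labeling by the sign pattern of (Re, Im) solves XOR.

w, b = 1 + 0j, 0 + 0j          # hand-picked parameters for the illustration

def xor_by_complex_neuron(x1, x2):
    z = complex(x1, x2)         # encode the input pair as one complex number
    net = w * z + b
    # XOR fires exactly when Re(net) and Im(net) have opposite signs,
    # i.e. when the input falls in the 2nd or 4th region.
    return int(net.real * net.imag < 0)

for x1 in (-1, 1):
    for x2 in (-1, 1):
        print((x1, x2), "->", xor_by_complex_neuron(x1, x2))
# (-1,-1)->0, (-1,1)->1, (1,-1)->1, (1,1)->0 : XOR on {-1,+1}-coded inputs
```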

MISC

 10
  • Tohru Nitta
    電子情報通信学会技術研究報告 (IEICE Technical Report) 112(480) 7-12, March 13, 2013
    This report is an attempt to realize neural networks (NNs) that have no critical points introduced by the hierarchical structure. First, we derive sufficient conditions under which real-valued and complex-valued NNs have no critical points introduced by the hierarchical structure. Next, as an application of these sufficient conditions, we show that real-valued and complex-valued NNs without such critical points can be constructed by decomposing high-dimensional NNs into equivalent real-valued or complex-valued NNs.
  • Tohru Nitta
    計測と制御 (Journal of the Society of Instrument and Control Engineers) 51(4) 384-389, April 10, 2012
  • Tohru Nitta
    電子情報通信学会誌 (The Journal of the Institute of Electronics, Information and Communication Engineers) 87(6) 450-453, June 1, 2004
    This article introduces one aspect of the computational power of complex-valued neurons. First, we show that the exclusive-OR (XOR) problem and the symmetry detection problem, which cannot be solved with a single ordinary real-valued neuron, can be solved with a single complex-valued neuron, exploiting the fact that the decision surface of a complex-valued neuron consists of two orthogonal hypersurfaces. Next, as a concrete application, we show that the fading equalization problem in communications can be solved well with a single complex-valued neuron, where the orthogonal decision surfaces are again put to good use.
  • Tohru Nitta
    電子情報通信学会誌 83(8) 612-615, August 25, 2000
  • Toshio Tanaka, Kenji Nishida, Hiroaki Inayoshi, Tohru Nitta
    電子情報通信学会技術研究報告 NC, ニューロコンピューティング (IEICE Technical Report, Neurocomputing) 100(96) 41-48, May 19, 2000
    We propose an emotion-memory model based on a neural network that links the emotional function of the amygdala with the associative function of the hippocampus. In the amygdala, value judgments are made on inputs from the sensory organs, giving rise to like/dislike emotions. In the hippocampus, objects are learned and recognized from sensory inputs. We show by computer simulation that linking the amygdala and the hippocampus through emotion also gives rise to like/dislike emotions toward objects.

Books and Other Publications

 11

Presentations

 2

Research Projects (Joint Research, Competitive Funding, etc.)

 2

Public Relations Activities

 6
  • Title
    Yumenavi Mini Lecture "What Happens When Neural Networks Are Extended to Complex Numbers?"
    Start date
    2023/04/01
    End date
    2023/04/24
    Summary
    Produced a lecture video for high school students.
  • Title
    Major introduction video "Department Concept Movie" (Department of Information and Mathematical Sciences)
    Start date
    2023/05/24
    End date
    2023/07/24
    Summary
    Appeared in an introduction video for the Department of Information and Mathematical Sciences, scheduled to be established in academic year 2025.
  • Title
    Research introduction video "挑戦する知性" (A Challenging Intellect)
    Start date
    2023/07/10
    End date
    2023/09/18
    Summary
    Appeared in the research introduction video "挑戦する知性" and presented my own research.
  • Title
    Yumenavi Lecture: Lab Visits
    Start date
    2023/07/16
    End date
    2023/07/16
    Summary
    In an online lab-visit format for high school students, I introduced my research and answered questions. Three 30-minute sessions were held.
  • Title
    TWCU Mathematical Sciences Spring School 2024 "Nothing to Fear: Let's Try AI"
    Start date
    2024/03/20
    End date
    2024/03/20
    Summary
    As part of an open campus event, gave a 90-minute lecture on AI (artificial intelligence) that included hands-on exercises.
  • Title
    TOSHIN TIMES, July 1, 2024 issue
    Start date
    2024/05/01
    End date
    2024/06/05
    Summary
    Cooperated on an article in the TOSHIN TIMES (published by Toshin) titled "Passport to the Digital Age: Data Science That Changes Society, Tokyo Woman's Christian University Edition".