
小橋 昌司 (Syoji Kobashi)

Basic Information

Affiliation
Professor, Graduate School of Engineering, University of Hyogo (Research Institute Director)
Degree
Ph.D. (Engineering) (Himeji Institute of Technology)

Researcher Number
00332966
ORCID ID
 https://orcid.org/0000-0003-3659-4114
J-GLOBAL ID
200901031674454407
researchmap Member ID
6000003807

Papers

 299
  • Rashedur Rahman, Naomi Yagi, Keigo Hayashi, Akihiro Maruo, Hirotsugu Muratsu, Syoji Kobashi
    Scientific Reports 14(1) 8004-8004, December 2024  Peer-reviewed, Last author, Corresponding author
    Pelvic fractures pose significant challenges in medical diagnosis due to the complex structure of the pelvic bones. Timely diagnosis of pelvic fractures is critical to reduce complications and mortality rates. While computed tomography (CT) is highly accurate in detecting pelvic fractures, the initial diagnostic procedure usually involves pelvic X-rays (PXR). In recent years, many deep learning-based methods have been developed utilizing ImageNet-based transfer learning for diagnosing hip and pelvic fractures. However, the ImageNet dataset contains natural RGB images, which differ from PXR. In this study, we proposed a two-step transfer learning approach that improved the diagnosis of pelvic fractures in PXR images. The first step involved training a deep convolutional neural network (DCNN) using synthesized PXR images derived from 3D-CT by digitally reconstructed radiographs (DRR). In the second step, the classification layers of the DCNN were fine-tuned using acquired PXR images. The performance of the proposed method was compared with the conventional ImageNet-based transfer learning method. Experimental results demonstrated that the proposed DRR-based method, using 20 synthesized PXR images for each CT, achieved superior performance with areas under the receiver operating characteristic curve (AUROCs) of 0.9327 and 0.8014 for visible and invisible fractures, respectively. The ImageNet-based method yielded AUROCs of 0.8908 and 0.7308 for visible and invisible fractures, respectively. (A minimal code sketch of this two-step scheme appears after this list.)
  • Daisuke FUJITA, Yuki ADACHI, Syoji KOBASHI
    Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 36(2) 610-615, May 15, 2024  Peer-reviewed, Last author
  • Kenta Takatsuji, Yoshikazu Kida, Kenta Sasaki, Daisuke Fujita, Yusuke Kobayashi, Tsuyoshi Sukenari, Yoshihiro Kotoura, Masataka Minami, Syoji Kobashi, Kenji Takahashi
    The Journal of Bone and Joint Surgery. American Volume, May 14, 2024  Peer-reviewed
    BACKGROUND: Ultrasonography is used to diagnose osteochondritis dissecans (OCD) of the humerus; however, its reliability depends on the technical proficiency of the examiner. Recently, computer-aided diagnosis (CAD) using deep learning has been applied in the field of medical science, and high diagnostic accuracy has been reported. We aimed to develop a deep learning-based CAD system for OCD detection on ultrasound images and to evaluate the accuracy of OCD detection using the CAD system. METHODS: The CAD process comprises 2 steps: humeral capitellum detection using an object-detection algorithm and OCD classification using an image classification network. Four-directional ultrasound images of the elbow of the throwing arm of 196 baseball players (mean age, 11.2 years), including 104 players with normal findings and 92 with OCD, were used for training and validation. An external dataset of 20 baseball players (10 with normal findings and 10 with OCD) was used to evaluate the accuracy of the CAD system. A confusion matrix and the area under the receiver operating characteristic curve (AUC) were used to evaluate the system. RESULTS: Clinical evaluation using the external dataset resulted in high AUCs in all 4 directions: 0.969 for the anterior long axis, 0.966 for the anterior short axis, 0.996 for the posterior long axis, and 0.993 for the posterior short axis. The accuracy of OCD detection thus exceeded 0.9 in all 4 directions. CONCLUSIONS: We propose a deep learning-based CAD system to detect OCD lesions on ultrasound images. The CAD system achieved high accuracy in all 4 directions of the elbow. This CAD system with a deep learning model may be useful for OCD screening during medical checkups to reduce the probability of missing an OCD lesion. LEVEL OF EVIDENCE: Diagnostic Level II. See Instructions for Authors for a complete description of levels of evidence.
  • Kenta Sasaki, Daisuke Fujita, Kenta Takatsuji, Yoshihiro Kotoura, Masataka Minami, Yusuke Kobayashi, Tsuyoshi Sukenari, Yoshikazu Kida, Kenji Takahashi, Syoji Kobashi
    International Journal of Computer Assisted Radiology and Surgery, January 17, 2024  Peer-reviewed, Last author, Corresponding author
    PURPOSE: Osteochondritis dissecans (OCD) of the humeral capitellum is a common cause of elbow disorders, particularly among young throwing athletes. Conservative treatment is the preferred treatment for managing OCD, and early intervention significantly influences the possibility of complete disease resolution. The purpose of this study is to develop a deep learning-based classification model in ultrasound images for computer-aided diagnosis. METHODS: This paper proposes a deep learning-based OCD classification method in ultrasound images. The proposed method first detects the humeral capitellum using YOLO and then estimates the OCD probability of the detected region using VGG16. We hypothesize that performance is improved by eliminating unnecessary regions. To validate the performance of the proposed method, it was applied to 158 subjects (OCD: 67, Normal: 91) using five-fold cross-validation. RESULTS: The study demonstrated that the humeral capitellum detection achieved a mean average precision (mAP) of over 0.95, while OCD probability estimation achieved an average accuracy of 0.890, precision of 0.888, recall of 0.927, F1 score of 0.894, and an area under the curve (AUC) of 0.962. On the other hand, when the classification model was constructed for the entire image, accuracy, precision, recall, F1 score, and AUC were 0.806, 0.806, 0.932, 0.843, and 0.928, respectively. The findings suggest the high-performance potential of the proposed model for OCD classification in ultrasonic images. CONCLUSION: This paper introduces a deep learning-based OCD classification method. The experimental results emphasize the effectiveness of focusing on the humeral capitellum for OCD classification in ultrasound images. Future work should involve evaluating the effectiveness of employing the proposed method by physicians during medical check-ups for OCD. (A minimal sketch of this detection-plus-classification pipeline appears after this list.)
  • Kenta Sasaki, Daisuke Fujita, Syoji Kobashi
    The 24th International Symposium on Advanced Intelligent Systems (ISIS), 519-524, December 2023  Peer-reviewed, Last author, Corresponding author
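
A minimal sketch of the two-step transfer learning idea described in the first paper above (pretrain a DCNN on DRR-synthesized pelvic X-rays, then fine-tune only the classification layer on acquired PXR), written in PyTorch. The ResNet-50 backbone, hyperparameters, and dataset directories (drr_pxr/, real_pxr/) are illustrative assumptions, not the paper's actual implementation.

    # Sketch only: backbone, hyperparameters, and directory layout are assumptions.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),   # radiographs are single-channel
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def train(model, loader, epochs, lr):
        # Optimize only the parameters that are left trainable.
        opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

    # Step 1: train the whole network on DRR-synthesized PXR (fracture / no fracture).
    model = models.resnet50(weights=None)              # backbone choice is an assumption
    model.fc = nn.Linear(model.fc.in_features, 2)
    drr_loader = DataLoader(datasets.ImageFolder("drr_pxr/", transform),
                            batch_size=32, shuffle=True)
    train(model, drr_loader, epochs=30, lr=1e-4)

    # Step 2: freeze the convolutional features and fine-tune only the
    # classification layer on acquired (real) PXR images.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.fc.parameters():                    # unfreeze only the classifier head
        p.requires_grad = True
    pxr_loader = DataLoader(datasets.ImageFolder("real_pxr/", transform),
                            batch_size=32, shuffle=True)
    train(model, pxr_loader, epochs=10, lr=1e-4)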
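
Likewise, the detection-then-classification CAD pipeline used in the two OCD papers above (localize the humeral capitellum, crop it, and estimate the OCD probability with VGG16) could be sketched as below. The Ultralytics YOLO package, the weight files (capitellum_detector.pt, ocd_vgg16.pth), and the class-index ordering are assumptions for illustration only, not the authors' code.

    # Sketch only: weight files, YOLO implementation, and class indices are assumptions.
    import torch
    import torch.nn as nn
    from PIL import Image
    from torchvision import models, transforms
    from ultralytics import YOLO

    det_model = YOLO("capitellum_detector.pt")              # assumed fine-tuned detector
    cls_model = models.vgg16(weights=None)
    cls_model.classifier[6] = nn.Linear(4096, 2)            # normal vs. OCD
    cls_model.load_state_dict(torch.load("ocd_vgg16.pth"))  # assumed trained weights
    cls_model.eval()

    to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def ocd_probability(image_path):
        """Estimate OCD probability for the highest-confidence capitellum detection."""
        image = Image.open(image_path).convert("RGB")
        boxes = det_model(image)[0].boxes                   # step 1: capitellum detection
        if len(boxes) == 0:
            raise ValueError("no humeral capitellum detected")
        x1, y1, x2, y2 = boxes.xyxy[boxes.conf.argmax()].tolist()
        crop = image.crop((x1, y1, x2, y2))                 # keep only the detected region
        with torch.no_grad():
            logits = cls_model(to_tensor(crop).unsqueeze(0))  # step 2: OCD classification
        return torch.softmax(logits, dim=1)[0, 1].item()    # index 1 assumed to be "OCD"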

MISC

 238
  • 佐々木研太, 藤田大輔, 高辻謙太, 琴浦義浩, 南昌孝, 小林雄輔, 祐成毅, 木田圭重, 高橋謙治, 小橋昌司
    日本医用画像工学会大会予稿集(CD-ROM) 41st, 2022
  • 西尾 祥一, Hossain Belayat, 八木 直美, 新居 学, 平中 崇文, 小橋 昌司
    日本医用画像工学会大会予稿集 38th 492-497, July 2019
    Compared with laparoscopic or open abdominal surgery, orthopedic surgery involves more surgical steps and more surgical instruments, and the nurses who hand over instruments during the operation bear a heavy burden. We previously proposed a navigation system to support operating-room nurses in orthopedic surgery, targeting total knee arthroplasty. In that study, we attempted to recognize the surgical step by image recognition over the whole surgical image using a convolutional neural network, but the accuracy fell short of what is required for practical use. In the present study, to improve the recognition accuracy of surgical steps in orthopedic surgery, we apply object detection (YOLO) to each frame extracted from the surgical video and detect the class and position coordinates of each instrument. Orthopedic surgical videos recorded with smart glasses (an eyeglass-type device) differ greatly between operations in lighting conditions and camera angle, so we examined optimal data preprocessing and data augmentation methods to reduce these effects. (Author abstract) (A minimal augmentation sketch appears after this list.)
  • 久保有輝, 井城一輝, 盛田健人, 新居学, 無藤智之, 田中洋, 乾浩明, 小橋昌司, 信原克哉
    IEICE Technical Report 117(518(MI2017 63-106)) 93-98, March 12, 2018
  • 盛田健人, ALAM Saadia Binte, 新居学, 若田ゆき, 安藤久美子, 石藏礼一, 清水昭伸, 小橋昌司
    IEICE Technical Report 117(518(MI2017 63-106)) 87-91, March 12, 2018
  • 丸居航, ALAM Saadia Binte, 寒重之, 柴田政彦, KOH Min-sung, 小橋昌司
    システム制御情報学会研究発表講演会講演論文集(CD-ROM) 61st ROMBUNNO.345-2, May 23, 2017
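
The preprocessing and data-augmentation direction mentioned in the 2019 abstract above (compensating for the large differences in lighting and camera angle between smart-glasses recordings) might look roughly like the following torchvision sketch; the chosen transforms and parameter ranges are illustrative assumptions, not the settings the authors examined.

    # Sketch only: transforms and ranges are illustrative, not the paper's settings.
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.ColorJitter(brightness=0.4, contrast=0.4),   # simulate lighting changes
        transforms.RandomRotation(degrees=15),                  # simulate camera tilt
        transforms.RandomResizedCrop(416, scale=(0.8, 1.0)),    # simulate viewpoint shifts
        transforms.ToTensor(),
    ])
    # Augmentation of this kind would be applied to training frames (with bounding
    # boxes transformed to match the geometric changes) before training the
    # instrument detector (YOLO in the paper); the detected instrument classes and
    # positions then drive recognition of the current surgical step.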

Presentations

 197

Teaching Experience (Courses)

 17

Research Projects (Joint Research and Competitive Funding)

 25

Academic Contribution Activities

 5

Social Contribution Activities

 2

Media Coverage

 11