Kenji Yoshitsugu, Eisuke Shimizu, Hiroki Nishimura, Rohan Khemlani, Shintaro Nakayama, Tadamasa Takemura
Bioengineering 11(3) 273-273, March 12, 2024, peer-reviewed, corresponding author
Ophthalmological services face global inadequacies, especially in low- and middle-income countries, which are marked by shortages of practitioners and equipment. This study employed a portable slit-lamp microscope with video capabilities and cloud storage to support a more equitable global distribution of diagnostic resources. To enhance accessibility and quality of care, this study targets corneal opacity, a global cause of blindness. The study has two purposes: to detect corneal opacity from videos capturing the anterior segment of the eye, and to develop an AI pipeline for detecting corneal opacities. First, we extracted image frames from the videos and processed them with a convolutional neural network (CNN) model. Second, we manually annotated the images to extract only the corneal margins, adjusted the contrast with CLAHE, and processed the results with the CNN model. Finally, we performed semantic segmentation of the cornea using the annotated data. The results showed an accuracy of 0.8 for image frames and 0.96 for corneal margins, and Dice and IoU scores of 0.94 for semantic segmentation of the corneal margins. Although detecting corneal opacity from video frames seemed challenging in the early stages of this study, manual annotation, corneal extraction, and CLAHE contrast adjustment significantly improved accuracy. Incorporating manual annotation into the AI pipeline, through semantic segmentation, enabled high accuracy in detecting corneal opacity.
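The preprocessing described in the abstract (frame extraction from anterior-segment videos followed by CLAHE contrast adjustment before CNN classification) can be illustrated with a minimal sketch. This is not the authors' code; the video filename, frame-sampling interval, and CLAHE parameters below are illustrative assumptions.

```python
# Minimal sketch: sample frames from an anterior-segment video and apply
# CLAHE contrast adjustment with OpenCV before feeding them to a CNN.
# Parameter values and file names are assumptions, not from the paper.
import cv2

def extract_frames(video_path, every_n=30):
    """Yield every n-th frame from a video file."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            yield frame
        idx += 1
    cap.release()

def apply_clahe(bgr_frame, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to the lightness channel of a BGR frame."""
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    # Hypothetical input video; frames are written out for downstream
    # annotation of corneal margins and CNN processing.
    for i, frame in enumerate(extract_frames("anterior_segment.mp4")):
        cv2.imwrite(f"frame_{i:04d}_clahe.png", apply_clahe(frame))
```

In the paper's pipeline, the contrast-adjusted corneal-margin crops (obtained via manual annotation) are what reach the CNN; the sketch above only covers the generic frame-sampling and CLAHE steps.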