Cornell University Library

eCommons


A Deep-Learning Model for Multi-class Audio Classification of Vocal Fold Pathologies in Office Stroboscopy.

File(s)
39907244.pdf (209.19 KB)
Permanent Link(s)
https://hdl.handle.net/1813/116830
Collections
Department of Otolaryngology - Head and Neck Surgery
Author
Kim, Y.E.
Dobko, M.
Li, H.
Shao, T.
Periyakoil, P.
Tipton, C.
Colasacco, C.
Serpedin, A.
Elemento, O.
Sabuncu, M.
Pitman, M.
Sulica, L.
Rameau, A.
Abstract

OBJECTIVE: To develop and validate a deep-learning classifier trained on voice data extracted from videolaryngostroboscopy recordings, differentiating between three different vocal fold (VF) states: healthy (HVF), unilateral paralysis (UVFP), and VF lesions, including benign and malignant pathologies.

METHODS: Patients with UVFP (n = 105), VF lesions (n = 63), and HVF (n = 41) were retrospectively identified. Voice samples were extracted from stroboscopic videos (Pentax Laryngeal Strobe Model 9400), including sustained /i/ phonation, pitch glide, and /i/ sniff task. Extracted audio files were converted into Mel-spectrograms. Voice samples were independently divided into training (80%), validation (10%), and test (10%) by patient. Pretrained ResNet18 models were trained to classify (1) HVF and pathological VF (lesions and UVFP), and (2) HVF, UVFP, and VF lesions. Both classifiers were further validated on an external dataset consisting of 12 UVFP, 13 VF lesions, and 15 HVF patients. Model performances were evaluated by accuracy and F1-score.

RESULTS: When evaluated on a hold-out test set, the binary classifier demonstrated stronger performance compared to the multi-class classifier (accuracy 83% vs. 40%; F1-score 0.90 vs. 0.36). When evaluated on an external dataset, the binary classifier achieved an accuracy of 63% and F1-score of 0.48, compared to 35% and 0.25 for the multi-class classifier.

CONCLUSIONS: Deep-learning classifiers differentiating HVF, UVFP, and VF lesions were developed using voice data from stroboscopic videos. Although healthy and pathological voice were differentiated with moderate accuracy, multi-class classification lowered model performance. The model performed poorly on an external dataset. Voice captured in stroboscopic videos may have limited diagnostic value, though further studies are needed.

LEVEL OF EVIDENCE: 4. Laryngoscope, 2025.
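The methods describe converting extracted audio into Mel-spectrograms before fine-tuning pretrained ResNet18 classifiers. As a rough illustration of that preprocessing step only, here is a minimal NumPy sketch of a log-Mel spectrogram. The paper does not report its sampling rate, FFT size, hop length, or Mel-band count, so every parameter below is an assumption, and the synthetic tone merely stands in for an extracted voice sample:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=64):
    """Log-Mel spectrogram in plain NumPy (illustrative, not the paper's pipeline)."""
    # Slice the signal into overlapping Hann-windowed frames
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame (one-sided FFT: n_fft // 2 + 1 bins)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular Mel filterbank spanning 0 Hz to Nyquist
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    # Project power spectra onto the Mel bands, then log-compress
    mel = power @ fb.T
    return np.log(mel + 1e-10)  # shape: (n_frames, n_mels)

# Example: a 1-second synthetic 220 Hz tone standing in for sustained /i/ phonation
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
spec = mel_spectrogram(tone, sr=sr)
print(spec.shape)  # → (61, 64)
```

The resulting 2-D array is what would then be treated as an image input to a pretrained CNN such as ResNet18, as the abstract describes.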

Journal / Series
The Laryngoscope
Date Issued
2025-02-05
Publisher
Wiley
Keywords
artificial intelligence • computer-aided diagnosis • convolutional neural network • deep learning • laryngology • WCM Library Coordinated Deposit
Related DOI
https://doi.org/10.1002/lary.32036
Previously Published as
Kim YE, Dobko M, Li H, Shao T, Periyakoil P, Tipton C, Colasacco C, Serpedin A, Elemento O, Sabuncu M, Pitman M, Sulica L, Rameau A. A Deep-Learning Model for Multi-class Audio Classification of Vocal Fold Pathologies in Office Stroboscopy. Laryngoscope. 2025. Epub 20250205. doi: 10.1002/lary.32036. PMID: 39907244.
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
Rights URI
https://creativecommons.org/licenses/by-nc-nd/4.0/
Type
article


Copyright © 2002-2026 Cornell University Library