Unsupervised domain adaptation for automated knee osteoarthritis phenotype classification
Publication in refereed journal

Other information
Abstract
Background: Osteoarthritis (OA) is a global healthcare problem. The growing population of OA patients places increasing demand on imaging and diagnostic capacity, making automatic, objective diagnostic techniques important for addressing this challenge. This study demonstrates the utility of unsupervised domain adaptation (UDA) for automated OA phenotype classification.

Methods: We collected 318 and 960 three-dimensional double-echo steady-state magnetic resonance images from the Osteoarthritis Initiative (OAI) dataset as the source datasets for the cartilage/meniscus and subchondral bone phenotypes, respectively. Fifty three-dimensional turbo spin echo (TSE)/fast spin echo (FSE) MR images from our institute were collected as the target datasets. For each patient, the degree of knee OA was first graded according to the MRI Osteoarthritis Knee Score (MOAKS) and then converted to binary OA phenotype labels. The proposed four-step UDA pipeline comprised (I) pre-processing, involving automatic segmentation and region-of-interest cropping; (II) source classifier training, involving pre-training a convolutional neural network (CNN) encoder for phenotype classification on the source dataset; (III) target encoder adaptation, involving unsupervised adaptation of the source encoder into a target encoder using both the source and target datasets; and (IV) target classifier validation, involving statistical analysis of classification performance in terms of the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity and accuracy. On the target data, we compared our model with the source pre-trained model and with a model trained from scratch on the target data only.
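The abstract does not spell out step (III) in code; the following is a minimal sketch of one common realisation of unsupervised target encoder adaptation, adversarial feature alignment in the style of ADDA, assuming PyTorch. The encoder, discriminator and data-loader names are illustrative placeholders and not the authors' actual implementation.

# Hedged sketch of step (III), target encoder adaptation, assuming an
# ADDA-style adversarial alignment. All module and loader names are
# illustrative placeholders, not the authors' implementation.
import copy
import torch
import torch.nn as nn

def adapt_target_encoder(source_encoder, source_loader, target_loader,
                         feat_dim=512, steps=1000, device="cuda"):
    # The target encoder starts as a copy of the pre-trained source encoder.
    target_encoder = copy.deepcopy(source_encoder).to(device)
    source_encoder = source_encoder.to(device).eval()

    # Domain discriminator: tells source features from target features.
    discriminator = nn.Sequential(
        nn.Linear(feat_dim, 256), nn.ReLU(),
        nn.Linear(256, 1),
    ).to(device)

    opt_t = torch.optim.Adam(target_encoder.parameters(), lr=1e-5)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    src_iter, tgt_iter = iter(source_loader), iter(target_loader)
    for _ in range(steps):
        try:
            xs, _ = next(src_iter)          # labelled source images
        except StopIteration:
            src_iter = iter(source_loader)
            xs, _ = next(src_iter)
        try:
            xt = next(tgt_iter)             # unlabelled target images
        except StopIteration:
            tgt_iter = iter(target_loader)
            xt = next(tgt_iter)
        xs, xt = xs.to(device), xt.to(device)

        # 1) Update the discriminator: source features -> 1, target features -> 0.
        with torch.no_grad():
            fs = source_encoder(xs)
        ft = target_encoder(xt).detach()
        d_loss = bce(discriminator(fs), torch.ones(fs.size(0), 1, device=device)) + \
                 bce(discriminator(ft), torch.zeros(ft.size(0), 1, device=device))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) Update the target encoder so its features fool the discriminator.
        ft = target_encoder(xt)
        g_loss = bce(discriminator(ft), torch.ones(ft.size(0), 1, device=device))
        opt_t.zero_grad(); g_loss.backward(); opt_t.step()

    # At test time the adapted encoder is paired with the frozen source classifier head.
    return target_encoder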

Results: For the cartilage/meniscus phenotype, our model performed best of the three, with an AUROC of 0.90 [95% confidence interval (CI): 0.79-1.02], whereas the other two models achieved 0.52 (95% CI: 0.13-0.90) and 0.76 (95% CI: 0.53-0.98). For the subchondral bone phenotype, our model achieved an AUROC of 0.75 (95% CI: 0.56-0.94), close to that of the source pre-trained model (0.76, 95% CI: 0.55-0.98) and better than that of the model trained from scratch on the target dataset only (0.53, 95% CI: 0.33-0.73).
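For readers unfamiliar with how such intervals are produced, the following is a minimal illustration of reporting an AUROC with a 95% CI by bootstrap resampling; the arrays are synthetic placeholders, and the authors' exact statistical procedure may differ (the reported intervals appear consistent with a normal-approximation method).

# Hedged illustration of an AUROC point estimate with a bootstrap 95% CI.
# y_true/y_score are synthetic placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=50)             # binary OA phenotype labels
y_score = rng.random(50) * 0.5 + y_true * 0.4    # classifier scores

point_auc = roc_auc_score(y_true, y_score)

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
    if len(np.unique(y_true[idx])) < 2:              # both classes must be present
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUROC {point_auc:.2f} (95% CI: {lo:.2f}-{hi:.2f})")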

Conclusions: By utilising a large, high-quality source dataset for training, the proposed UDA approach enhances the performance of automated OA phenotype classification for small target datasets. As a result, our technique enables improved downstream analysis of locally collected datasets with a small sample size.
Authors: Zhong J, Yao Y, Cahill DG, Xiao F, Li S, Lee J, Ho KK, Ong MT, Griffith JF, Chen W
Journal: Quantitative Imaging in Medicine and Surgery
Year: 2023
Month: 11
Volume: 13
Issue: 11
Publisher: AME Publishing Company
Pages: 7444-7458
ISSN: 2223-4292
eISSN: 2223-4306
Language: American English

Last updated on 2024-09-08 at 10:34