An automatic nucleus segmentation method for gastrointestinal cancer pathological images based on a deformable attention transformer
Objective To achieve automatic segmentation of cell nuclei in gastrointestinal cancer pathological images using a deep learning algorithm, so as to assist in the subsequent quantitative analysis of pathological images. Methods A total of 59 patients with gastrointestinal cancer treated at Ruijin Hospital, Shanghai Jiao Tong University School of Medicine from January 2022 to February 2022 were selected as study subjects. Python and LabelMe were used for data anonymization, image cropping, and region-of-interest annotation of the patients' pathological images; 944 pathological images were included and 9,703 nuclei were annotated. A new deep-learning-based semantic segmentation model was then constructed. The model introduced a deformable attention transformer (DAT) to achieve automatic, accurate, and efficient segmentation of nuclei in pathological images. Finally, multiple segmentation evaluation criteria were used to assess the model's performance. Results The mean absolute error (MAE) of the proposed model's segmentation results was 0.1126 and the Dice coefficient (Dice) was 0.7215. Its performance was significantly better than that of the U-net baseline model, and it outperformed models such as ResU-net++, R2Unet, and R2AttUnet. Moreover, the segmentation results were relatively stable, with good generalization. Conclusion The segmentation model established in this study can accurately identify and segment nuclei in pathological images, with good robustness and generalization, and can serve as an auxiliary diagnostic aid in practical applications.
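The abstract reports model quality using the mean absolute error (MAE) and the Dice coefficient. The minimal Python sketch below shows how these two metrics are commonly computed for a predicted probability map against a binary nucleus mask; the metric names and reported values come from the abstract, but this implementation, including the threshold and smoothing constant, is an illustrative assumption and not the authors' evaluation code.

import numpy as np

def mae(pred: np.ndarray, target: np.ndarray) -> float:
    # Mean absolute error between a predicted probability map and a
    # binary ground-truth mask, both in [0, 1] and of identical shape.
    return float(np.mean(np.abs(pred.astype(np.float64) - target.astype(np.float64))))

def dice(pred: np.ndarray, target: np.ndarray, threshold: float = 0.5,
         eps: float = 1e-7) -> float:
    # Dice coefficient: 2*|P ∩ G| / (|P| + |G|) on binarized masks.
    # `threshold` and `eps` are illustrative defaults, not values from the paper.
    p = (pred >= threshold).astype(np.float64)
    g = (target >= 0.5).astype(np.float64)
    intersection = (p * g).sum()
    return float((2.0 * intersection + eps) / (p.sum() + g.sum() + eps))

# Toy example (not data from the study):
pred = np.array([[0.9, 0.2], [0.7, 0.1]])
gt = np.array([[1, 0], [1, 1]])
print(mae(pred, gt))   # average pixel-wise absolute error
print(dice(pred, gt))  # overlap between thresholded prediction and ground truth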