Introduction

EC is one of the most common malignant tumors of the female reproductive system, primarily originating from the endometrium and typically occurring in postmenopausal women1,2,3. The incidence of EC increases significantly with age. The typical clinical manifestations of EC include abnormal uterine bleeding (especially postmenopausal bleeding), pelvic pain, and increased vaginal discharge4. Although these symptoms are common, they serve as important indicators in the early diagnosis of EC. EC of different types and grades varies in growth rate and invasiveness, with severe cases potentially spreading to surrounding tissues and distant organs, such as the fallopian tubes, ovaries, and peritoneum5. Epidemiological studies have shown that the incidence of EC is significantly higher in developed countries than in developing countries, which is closely related to population aging and the increasing prevalence of chronic diseases such as obesity and diabetes. Additionally, long-term use of estrogen replacement therapy (ERT), infertility treatments, and genetic factors (such as a family history of EC) are also considered significant risk factors for the development of EC6.

Although the diagnosis of EC mainly relies on histopathological examination, its early symptoms lack Spe and are often mistaken for more common non-neoplastic diseases, such as endometrial hyperplasia or endometriosis7,8. Therefore, early recognition and differentiation of these symptoms, especially the diagnosis of postmenopausal bleeding, face significant challenges. Traditional imaging examinations such as ultrasound, CT, and magnetic resonance imaging (MRI) have limitations in the diagnosis of EC, often failing to clearly display the depth of tumor invasion and the spread to surrounding tissues, leading to missed diagnoses or misdiagnoses9,10,11,12. With the rapid development of artificial intelligence (AI) technology, its application in the field of medical imaging is increasingly becoming a focus. Especially in the differential diagnosis and prognostic prediction of EC, AI-assisted MRI technology shows great potential and value13,14. MRI, as a non-invasive, high-resolution imaging technique, has been widely used in the clinical diagnosis of EC. However, traditional MRI image interpretation relies on the experience and professional knowledge of clinical doctors, which poses a risk of subjectivity and misdiagnosis in complex cases15. The introduction of AI technology provides new possibilities for solving these problems, enabling AI to extract complex features from a large amount of imaging data through deep learning algorithms and big data analysis, thereby improving the diagnostic AC and efficiency of EC16,17.

This article aims to analyze the value of deep learning algorithms combined with MRI in the risk diagnosis of EC and the prediction of postoperative recurrence in patients. By optimizing the deep learning CNN architecture ResNet-101 and introducing spatial attention and channel attention modules, the aim is to enhance the diagnostic AC and PR for high-risk EC patients. The specific goal is to evaluate the performance of the improved model in the risk identification of EC and the prediction of postoperative recurrence, and to compare it with traditional models, in hopes of providing a more effective diagnostic tool for clinical practice.

Literature review

The application of AI technology in medical imaging has become increasingly widespread. Clinically, through machine learning and deep learning techniques, AI can identify disease characteristics, quantify lesion areas, and assist doctors in making more accurate diagnoses and treatment plans. To reduce the computational burden and time consumption of existing algorithms for whole-slide tumor image analysis, Rong et al. (2023)18 used a histology-based detection framework built on Yolo (HD-Yolo) and verified that HD-Yolo was superior to existing analysis methods in terms of nuclear detection, classification AC, and computational time for three types of tumor tissues. Shen et al. (2023)19 proposed a medical image segmentation algorithm based on deep neural network technology in response to the problems of edge blurring and noise interference in medical image segmentation. The algorithm uses a U-Net-like backbone structure and obtains segmentation results through a decoder path with residual and convolutional structures. It was verified that, for medical images with complex shapes and adhesion between lesions and normal tissue, the algorithm can effectively improve segmentation AC. Zhang et al. (2023)20 designed an improved U-type network (BN-U-Net) algorithm and applied it to the segmentation of spinal MRI images in 22 research subjects. The results showed that the processing time of the BN-U-Net algorithm was markedly shorter than that of the fully convolutional network (FCN) and U-Net algorithms, while its AC, Sen, and Spe were higher, proving that the algorithm markedly improved the quality of spinal MRI images through automatic segmentation. Gao et al. (2020)21 proposed a new method for identifying outliers in unbalanced datasets using the concept of imaging complexity, enabling deep learning models to optimally learn the inherent imaging features related to a single class, thereby effectively capturing image complexity and enhancing feature learning. Guan et al. (2023)22 proposed the collaborative integration of model-based learning and data-driven learning in three key components: the first uses a linear vector space framework to capture the global dependencies of image features; the second uses a deep network to learn the mapping from the linear vector space to nonlinear manifolds; the third uses a sparse model to capture local residual features. The model was evaluated using MRI data and was found to improve reconstruction in the presence of data perturbations and/or novel image features. Although hysteroscopy combined with endometrial biopsy is the gold standard for diagnosing endometrial lesions, the experience of physicians remains crucial for accurate diagnosis. Raimondo et al. (2024)23 developed a deep learning model to automatically detect and classify endometrial lesions in hysteroscopic images. The model was trained with 1,500 images from 266 patients, and the results indicated that while the introduction of clinical data could slightly enhance the model’s diagnostic capability, the overall performance still needs improvement, suggesting that future studies should further optimize the model to enhance its practicality.

In addition, AI technology can not only assist in diagnosis but also be applied to the prediction of patient prognosis. She et al. (2020)24 developed a deep learning survival neural network model based on non-small cell lung cancer (LC) case data and compared it with the tumor, node, and metastasis staging system for LC-specific survival. The deep learning survival neural network model was more promising than tumor, node, and metastasis staging on the test dataset in predicting LC-specific survival rates, and this novel analysis method can provide reliable individual survival information and treatment recommendations. Zhong et al. (2022)25 developed deep learning features for the prediction of metastasis and prognostic stratification of clinical stage I non-small cell LC and found that a higher deep learning score predicted poorer overall survival and relapse-free survival, suggesting that deep learning features could accurately predict the disease and stratify the prognosis of clinical stage I non-small cell LC. Dong et al. (2020)26 constructed a deep learning imaging nomogram based on CT images of 730 patients with locally advanced gastric cancer to preoperatively predict the number of lymph node metastases. Validation found that the nomogram had good discrimination for the number of lymph node metastases, markedly better than the conventionally used clinical N staging, tumor size, and clinical models, and was markedly related to patient survival rates. Jiang et al. (2024)27 developed and externally validated a deep learning-based prognostic stratification system for automatically predicting the overall survival and cancer-specific survival of patients with resected colorectal cancer (CRC). The prognosis was worse, and disease-specific survival was shorter, in the high-risk scoring group than in the low-risk scoring group. It was ultimately proposed that attention-based unsupervised deep learning could robustly provide prognosis for the clinical outcomes of CRC patients, generalize across different populations, and serve as a potential new prognostic tool in the clinical decision-making of CRC management. Jiang et al. (2022)28 conducted a retrospective analysis of two positron emission tomography (PET) datasets of diffuse large B-cell lymphoma. A 3D U-Net architecture was trained on patches randomly sampled from patients’ PET images, and ultimately an FCN model with a U-Net architecture was proposed. This model can accurately segment lymphoma lesions and provide quantitative information for accurately predicting tumor volume and prognosis.

In summary, AI technology has made considerable progress in medical image analysis and prognostic prediction, especially in tumor detection, image segmentation, and survival prediction. Existing studies have shown that, through deep learning and machine learning algorithms, AI technology can markedly improve diagnostic AC, reduce processing time, and provide more accurate prognostic assessments. However, the application of these technologies in EC still needs further exploration. This article aims to explore the application and value of AI-assisted MRI technology in the differential diagnosis and prognostic prediction of EC, in hopes of providing new ideas and methods for clinical diagnosis and treatment.

In this article, the PubMed and Google Scholar databases were initially utilized for literature retrieval, with keywords including “artificial intelligence”, “medical imaging”, “deep learning”, “tumor detection”, etc. The search spanned from 2010 to 2024. Based on predefined inclusion and exclusion criteria, relevant studies were selected, with a particular focus on the application of machine learning and deep learning techniques in medical imaging. The inclusion criteria encompassed published peer-reviewed studies with subjects covering various tumor types such as lung cancer, liver cancer, and breast cancer. For data extraction, information on model types, performance evaluation metrics (such as AC, Sen, and Spe), and application examples was collected from each study. In summary, this article aims to comprehensively collate the current progress of AI technology in medical imaging to guide subsequent research and clinical practice.

Research methods

Model construction

An image processing model based on a deep learning residual network was constructed. On the basis of ResNet-10129, an attention mechanism was introduced into the model to improve its focus on regions of interest. The attention module includes spatial attention and channel attention, which are combined to improve the feature representation ability of the model. The overall running framework of the model is illustrated in Fig. 1.

Fig. 1
figure 1

Overall operational framework of the improved model.

To make the model pay more attention to the important channel features in the feature map tensor, a channel attention module is added to each node of the ResNet-101 architecture to obtain a new residual module (Fig. 2). The channel attention residual module operates as follows: first, the image features are compressed by global average pooling (and, in parallel, global maximum pooling, as in Eq. (1)); then two fully connected layers model the relationships between the channels, producing one weight for each input feature channel. A Sigmoid activation function normalizes these weights, and the resulting normalized weight values represent the attention level assigned to each feature channel.

$$Z=\phi\left(M\left(\text{Average pooling}(L)\right)+M\left(\text{Maximum pooling}(L)\right)\right)$$
(1)
$$L^{*}=Z\diamond L$$
(2)

Z represents the attention weight, L the input feature, \(L^{*}\) the weighted feature, \(\phi(\cdot)\) the ReLU activation function, and M the multi-layer perceptron; \(\text{Average pooling}(L)\) denotes global average pooling of the input feature L, \(\text{Maximum pooling}(L)\) denotes global maximum pooling of the input feature L, and \(\diamond\) denotes element-wise (channel-wise) multiplication of the attention weights with the input feature.
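As a concrete illustration, the following is a minimal TensorFlow/Keras sketch of a channel attention block consistent with Eqs. (1)–(2). It assumes the common arrangement in which the ReLU nonlinearity sits inside the shared perceptron M and the Sigmoid mentioned above normalizes the summed output; the reduction ratio of 16 mirrors the scaling factor reported later in the parameter settings. Layer and variable names are illustrative, not the authors' exact implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

class ChannelAttention(layers.Layer):
    """Channel attention: a shared MLP over globally average- and max-pooled features."""
    def __init__(self, channels, reduction=16, **kwargs):
        super().__init__(**kwargs)
        # Shared two-layer perceptron M(.) with a ReLU bottleneck (reduction mirrors the scaling factor)
        self.fc1 = layers.Dense(channels // reduction, activation="relu")
        self.fc2 = layers.Dense(channels)

    def call(self, x):
        # x: input feature L with shape (batch, H, W, C)
        avg = tf.reduce_mean(x, axis=[1, 2])   # Average pooling(L)
        mx = tf.reduce_max(x, axis=[1, 2])     # Maximum pooling(L)
        z = tf.sigmoid(self.fc2(self.fc1(avg)) + self.fc2(self.fc1(mx)))  # normalized attention weight Z
        z = tf.reshape(z, (-1, 1, 1, x.shape[-1]))
        return x * z                           # Eq. (2): weighted feature L* obtained channel-wise
```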

Fig. 2
figure 2

Operation steps of the channel attention residual module.

The application of a spatial attention mechanism can improve the model’s attention to important spatial information. The spatial attention module is added to each node of the ResNet-101 architecture to obtain a new spatial attention residual module (Fig. 3). The specific operation steps are as follows: the input features are pooled along the channel dimension using max pooling and average pooling to obtain two single-channel descriptions; these two descriptions are concatenated into a double-layer feature map, which a 5 × 5 convolutional operation reduces to a single-layer feature map, and the Sigmoid activation function is then used to obtain the spatial weight coefficients.
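Under the same assumptions as the previous sketch (the pooling operations act along the channel dimension, as in standard spatial attention designs), a minimal TensorFlow/Keras version of this module could look as follows; the 5 × 5 kernel matches the convolution described above, while the class and variable names are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

class SpatialAttention(layers.Layer):
    """Spatial attention: channel-wise avg/max pooling, a 5 x 5 convolution, and Sigmoid weighting."""
    def __init__(self, kernel_size=5, **kwargs):
        super().__init__(**kwargs)
        self.conv = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")

    def call(self, x):
        avg = tf.reduce_mean(x, axis=-1, keepdims=True)   # average-pooled spatial description
        mx = tf.reduce_max(x, axis=-1, keepdims=True)     # max-pooled spatial description
        w = self.conv(tf.concat([avg, mx], axis=-1))      # double-layer map -> single-layer weight map
        return x * w                                      # reweight each spatial position
```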

Fig. 3
figure 3

Operation steps of spatial attention residual module.

Because a serial structure stacks more nonlinear activation functions, it yields a stronger nonlinear representation. Therefore, in the model, the spatial attention module and the channel attention module are arranged in series. This arrangement allows the model to account simultaneously for the information of different channels in the feature map and for the local spatial information within each channel, thereby enhancing the model’s ability to learn image features. Thus, an improved ResNet-101 model based on spatial attention and channel attention mechanisms was completed; a sketch of how such a combined residual block could be assembled is given below.
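A minimal sketch, reusing the ChannelAttention and SpatialAttention layers from the sketches above, of how both modules could be inserted in series into a ResNet-style bottleneck block. Batch normalization and other details of the actual ResNet-101 implementation are omitted, and the function and argument names are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers
# ChannelAttention and SpatialAttention are the layers defined in the earlier sketches.

def attention_residual_block(x, filters, stride=1):
    """Bottleneck residual block with channel and spatial attention applied in series (sketch)."""
    shortcut = x
    y = layers.Conv2D(filters, 1, strides=stride, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(filters * 4, 1, padding="same")(y)
    y = ChannelAttention(filters * 4, reduction=16)(y)   # channel attention first
    y = SpatialAttention(kernel_size=5)(y)                # then spatial attention, in series
    if stride != 1 or shortcut.shape[-1] != filters * 4:
        # project the shortcut so it matches the block's output shape
        shortcut = layers.Conv2D(filters * 4, 1, strides=stride, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```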

Evaluation indicators of the model

AC, PR, RE, and F1 were used to evaluate the diagnostic performance of the model. For comparison, the traditional ResNet-101, SA-ResNet-101 (spatial attention only), and CA-ResNet-101 (channel attention only) models were introduced. ROC curve analysis was used to assess the models’ performance in diagnosing EC risk and predicting patient prognosis.

$${\text{AC}} = \left( {{\text{TP}} + {\text{TN}}} \right)/{\text{total number of patients}}$$
(3)
$${\text{PR}} = {\text{TP}}/\left( {{\text{TP}} + {\text{FP}}} \right)$$
(4)
$${\text{RE}} = {\text{TP}}/\left( {{\text{TP}} + {\text{FN}}} \right)$$
(5)
$${\text{F}}1 = 2 \times {\text{PR}} \times {\text{RE}}/\left( {{\text{PR}} + {\text{RE}}} \right)$$
(6)

TP denotes true positive, FN denotes false negative, TN denotes true negative, and FP denotes false positive.
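As a quick illustration of Eqs. (3)–(6), a minimal Python sketch is given below; the confusion-matrix counts in the usage line are hypothetical and not taken from the study.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute AC, PR, RE, and F1 from confusion-matrix counts, following Eqs. (3)-(6)."""
    ac = (tp + tn) / (tp + tn + fp + fn)           # accuracy over all patients
    pr = tp / (tp + fp) if (tp + fp) else 0.0      # precision
    re = tp / (tp + fn) if (tp + fn) else 0.0      # recall
    f1 = 2 * pr * re / (pr + re) if (pr + re) else 0.0
    return ac, pr, re, f1

# Hypothetical counts for illustration only (not the study's results)
print(diagnostic_metrics(tp=22, fp=3, tn=42, fn=3))
```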

Datasets collection

Retrospectively, 210 patients with EC who underwent pelvic MRI examinations at the imaging center of XXX Hospital from January 2021 to May 2024 were included as study samples. Among them, 140 cases were used as the test set and 70 cases as the validation set. All cases were pathologically confirmed, and basic patient data (Table 1), imaging data, and postoperative recurrence status were collected, with postoperative recurrence taken as the endpoint event. According to the ESMO-ESTRO-ESP guidelines, patients were divided into low-risk EC and high-risk EC groups.

The patients’ MRI images were transferred to the workstation, where image processing algorithms were used to segment and reconstruct the images.

Inclusion criteria: (1) Patients diagnosed with EC; (2) Age between 30 and 64 years; (3) Patients who underwent pelvic MRI within the specified time and had complete imaging data; (4) Complete clinical information, including basic data, imaging pictures, postoperative recurrence status, etc.

Exclusion criteria: (1) Patients with other malignant tumors; (2) Substandard MRI image quality (such as motion artifacts, severe noise); (3) Incomplete follow-up data.

Table 1 General information of patients in the test set and validation set.

Experimental environment

The experiments were primarily conducted under the TensorFlow deep learning framework with GPU acceleration. The models and training code were written in Python 3.6, with PyCharm as the integrated development environment. The hardware configuration was as follows: an NVIDIA GeForce RTX 2080 Ti graphics card, 64 GB of memory, an AMD Ryzen Threadripper 2950X central processor, and the Windows 10 operating system.

Parameters setting

The model parameters were as follows: the convolutional kernel size was 5 × 5, the convolution stride was 1, the filter size was 5 × 5, the depth was set to 101 layers, the scaling factor was 16, the regularization parameter was 2, and the initial learning rate was 0.01.
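Purely for illustration, the following sketch shows how these reported hyperparameters might map onto a TensorFlow/Keras training configuration; the builder function build_improved_resnet101, the input shape, the dataset objects train_ds and val_ds, the SGD optimizer, the loss function, and the number of epochs are all hypothetical assumptions not stated in the text.

```python
import tensorflow as tf

# Hypothetical builder for the improved attention ResNet-101 (not the authors' code);
# the 101-layer depth, 5 x 5 kernels, and scaling factor of 16 follow the reported settings,
# while the input shape and number of classes are assumed.
model = build_improved_resnet101(input_shape=(224, 224, 3), num_classes=2, reduction=16)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # reported initial learning rate
    loss="sparse_categorical_crossentropy",                 # assumed loss for binary risk labels
    metrics=["accuracy"],
)
model.fit(train_ds, validation_data=val_ds, epochs=50)      # train_ds / val_ds: hypothetical MRI datasets
```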

Statistical processing

SPSS 22.0 statistical software was employed. Quantitative data conforming to a normal distribution were presented as mean ± standard deviation (\(\bar{x} \pm s\)), quantitative data that did not conform to a normal distribution were expressed as the median and interquartile range, and categorical data were expressed as frequency and percentage (%). Non-normally distributed quantitative data were analyzed by the Mann-Whitney test, normally distributed quantitative data by one-way ANOVA, and categorical data by the chi-square test. The diagnostic performance of each model was assessed by plotting ROC curves, and the AUC was calculated to compare the Sen and Spe of different models. A two-tailed P < 0.05 was considered statistically significant. The statistical analysis indicated that the improved deep learning model demonstrated significant statistical advantages in the diagnosis of high-risk EC patients and in the prediction of postoperative recurrence.
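Although the analysis above was performed in SPSS, the ROC/AUC comparison can equivalently be reproduced programmatically; below is a minimal scikit-learn sketch in which the arrays y_true and y_score are hypothetical placeholders for validation labels and model output probabilities, not the study's data.

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical validation labels (1 = high-risk EC or recurrence) and model probabilities
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.12, 0.35, 0.81, 0.67, 0.22, 0.94, 0.48, 0.58]

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the ROC curve
auc = roc_auc_score(y_true, y_score)               # area under the curve
print(f"AUC = {auc:.3f}")
```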

Performance evaluation

Imaging data of cases

In Fig. 4, the MRI images of a female patient showed low signal intensity on T1WI, elevated signal intensity on T2WI, and increased signal on DWI, with abnormal thickening of the endometrium. The pathological results indicated endometrioid carcinoma, moderately to highly differentiated, with cancer cell infiltration of less than half the thickness of the myometrium and involvement of the cervical canal.

Fig. 4
figure 4

MRI image of a 56-year-old woman with menostaxis for 11 days.

Figure 5 presents the MRI images of a female patient with moderate signal on T1WI, high signal on T2WI, and high signal on DWI, as well as abnormal thickening of the endometrium. The pathological results indicated that the patient had endometrial adenocarcinoma, moderately to highly differentiated, with cancer cell infiltration of more than half the thickness of the myometrium.

Fig. 5
figure 5

MRI image of a 50-year-old female patient admitted for examination with a prolonged menstrual period of half a year and irregular vaginal bleeding.

Diagnostic performance of different models for EC

In the validation set, there were 45 cases of low-risk EC and 25 cases of high-risk EC. According to the ROC curve analysis (Fig. 6), the AUC of the proposed model was 0.918, the SA-ResNet-101 model was 0.760, the CA-ResNet-101 model was 0.758, and the traditional ResNet-101 model was 0.613. The AUC of the proposed model for the diagnosis of high-risk EC was markedly larger.

Further comparison of the evaluation indicators showed that the AC, PR, RE, and F1 values of the proposed model were markedly higher than those of the other three models (P < 0.05) (Fig. 7).

Fig. 6
figure 6

Diagnostic ROC curves of the four models for high-risk EC (A–D: the proposed model, SA-ResNet-101 model, CA-ResNet-101 model, and traditional ResNet-101 model, respectively).

Fig. 7
figure 7

Diagnostic AC, PR, RE, and F1 values for EC risk for the four models. *Compared with the proposed model, P < 0.05.

Diagnostic performance of different models for patient prognosis

After follow-up data collation, among the 70 patients in the validation set, 13 cases (9 at the primary site, 3 in lymph nodes, and 1 in the abdominal cavity) were observed to have recurrence after surgery, while 57 cases (34 at the primary site, 11 in lymph nodes, 8 in the cervix, and 4 in the abdominal cavity) did not experience recurrence. Figure 8 illustrates that the AUC of the proposed model was 0.926, the SA-ResNet-101 model was 0.729, the CA-ResNet-101 model was 0.767, and the traditional ResNet-101 model was 0.620. The AUC of the proposed model for postoperative recurrence prediction of patients was markedly larger.

Figure 9 illustrates that the AC, PR, RE, and F1 values of the proposed model for predicting postoperative recurrence were markedly higher than those of the other three models (P < 0.05).

Fig. 8
figure 8

ROC curves of the four models for predicting postoperative recurrence (A–D: the proposed model, SA-ResNet-101 model, CA-ResNet-101 model, and traditional ResNet-101 model, respectively).

Fig. 9
figure 9

Predicted AC, PR, RE, and F1 values of the four models for postoperative recurrence. *Compared with the proposed model, P < 0.05.

Discussion

In this article, a deep learning-based model was developed, which, combined with MRI, successfully enhanced the diagnostic AC and predictive ability for postoperative recurrence of EC. By incorporating spatial attention and channel attention modules, the improved model demonstrated significantly higher Sen and Spe in identifying high-risk EC patients compared to the traditional ResNet-101 model and other comparative models. Furthermore, in predicting postoperative recurrence, the improved model also showed higher AC, PR, RE, and F1 score. These results indicate that deep learning technology holds significant potential in improving the diagnosis and management of EC.

The early diagnosis of EC is primarily challenging due to its nonspecific symptoms, such as abnormal bleeding and menstrual irregularities, which are often confused with other gynecological diseases30,31. Although imaging examinations like ultrasound, CT, and MRI are widely used, they have limited Sen for small or locally invasive tumors. Confirmation of diagnosis typically relies on histopathological examination, but the process is often complex due to the diversity of pathological types and grading32,33. Therefore, integrating clinical manifestations, imaging, and pathological results is crucial for accurate diagnosis. Deep learning image segmentation algorithms can precisely identify key structures in medical imaging, providing quantitative analysis and diagnostic support for physicians34.

Based on this, this article used ResNet-101 as the foundation and introduced spatial attention and channel attention modules for optimization, allowing the model to calculate the information of different channels in the feature map and the local spatial information of each channel at the same time. In the past, deep learning models often needed to process a large amount of feature information when dealing with complex tasks. Traditional CNNs such as ResNet-101 usually weight and process the features of each channel globally when handling feature maps, without fully considering the interaction between different channels and local spatial information35. The introduction of spatial attention and channel attention modules enables the model to be more flexible and accurate when learning features, effectively improving its ability to learn complex features and relationships36. To further analyze the performance of the model, 210 EC patients were retrospectively included as study samples. Among the 70 cases in the validation set, there were 45 cases of low-risk EC and 25 cases of high-risk EC. Using ROC curve analysis, it was found that the AUC of the proposed model (0.918) for the diagnosis of high-risk EC was markedly larger than that of the traditional ResNet-101 model (0.613), the SA-ResNet-101 model (0.760), and the CA-ResNet-101 model (0.758). This is similar to the research results of Men et al. (2018)37, indicating that the proposed model has better Sen and Spe in the diagnosis of high-risk EC and can more effectively identify high-risk patients, which is of great significance for clinical decision-making and patient management. The study by Bús et al. (2021)38, based on data from patients who underwent radical hysterectomy, analyzed the reliability of preoperative MRI in the staging of early EC. They found that conventional MRI had low Sen but high Spe in EC staging, emphasizing the importance of combining other imaging methods for more accurate assessment. However, the model proposed in this article showed significantly higher AC, PR, RE, and F1 values in EC risk diagnosis than traditional methods (P < 0.05), which differs from the conclusions of Bús et al. This discrepancy may be attributed to several reasons: firstly, the model in this article incorporated deep learning technology, leveraging improved feature extraction and analysis capabilities to more accurately capture the risk characteristics of EC. Secondly, the MRI method in the study by Bús et al. may not have fully utilized the latest image processing technologies, leading to its insufficient Sen. Therefore, the advantage of the improved model in this article lies in its comprehensive performance and discriminative ability, which makes it perform more outstandingly in the diagnosis of high-risk EC.

The prediction of postoperative recurrence probability is meaningful for improving patient prognosis. By integrating pathological factors, molecular markers, imaging examinations, hematological indicators, and individual characteristics, it is possible to identify the risk of recurrence early and adjust follow-up and treatment plans in a timely manner, thereby improving the survival rate and quality of life (QoL) of patients39,40. Eriksson et al. (2021)41 used ProMisE for preoperative tumor recurrence classification and prediction in women with EC. Compared with the ESMO risk classification, the combination of demographic characteristics, ultrasound examination results, and ProMisE subtypes had better preoperative predictive ability for tumor recurrence or progression, supporting its application in preoperative risk stratification of women with EC. In this article, in the validation set, there were 13 cases of postoperative recurrence and 57 cases without recurrence. The AUC of the proposed model (0.926) for predicting postoperative recurrence was markedly larger than that of the traditional ResNet-101 model (0.620), the SA-ResNet-101 model (0.729), and the CA-ResNet-101 model (0.767). This is similar to the aforementioned research results, suggesting that the improved model has better Sen and Spe in predicting postoperative recurrence and can more effectively identify patients at high risk of recurrence. It can thus provide a more accurate basis for clinical decision-making, optimize treatment and follow-up plans, and improve patient prognosis42. In addition, the AC, PR, RE, and F1 values of the proposed model for predicting postoperative recurrence were markedly higher than those of the other three models (P < 0.05). This high predictive AC can help doctors intervene earlier in patients at high risk of recurrence, provide personalized treatment strategies, and reduce the risk of postoperative recurrence and progression. However, the absolute F1 value of the improved model was still relatively low. The reason for this may be an imbalance between PR and RE, which may lead to insufficient performance of the model in identifying specific categories or situations. Therefore, future improvements should focus on optimizing the model’s training data, adjusting the model’s threshold, or further optimizing the model’s deep learning structure to improve its performance on complex medical images.

Conclusion

Research contribution

By using ResNet-101 and introducing spatial attention and channel attention modules, the diagnostic and postoperative recurrence prediction capabilities for high-risk EC patients have been successfully enhanced. Compared with traditional models, the improved model shows superior performance in diagnosing high-risk EC in the validation set, with higher Sen and Spe, and also shows excellent predictive AC in postoperative recurrence prediction. In summary, the improved model can not only help doctors discover and diagnose high-risk EC patients earlier but also effectively predict the risk of postoperative recurrence, providing a scientific basis for the formulation of personalized treatment and follow-up plans, and helping to improve patients’ survival rates and QoL. These findings emphasize the potential and importance of deep learning image segmentation algorithms in the field of medical imaging, and further verification and promotion of the application of this technology should be carried out in future research and clinical practice.

Future works and research limitations

Although the improved deep learning model has performed well in the diagnosis and postoperative recurrence prediction of EC, there are still some deficiencies and directions for future improvement.

Firstly, the sample size of this article is relatively small, especially for the validation set of postoperative recurrence prediction. This may affect the model’s extensive applicability and generalization ability, and it is necessary to expand the sample size in the future to verify the stability and reliability of the model. Secondly, although spatial attention and channel attention modules have been introduced, further optimization of the model’s deep learning architecture and parameter settings is still needed to improve the model’s Sen and Spe when dealing with complex medical images and pathological features. In addition, this article mainly relies on medical imaging and pathological results as the main basis for model training and verification, and in the future, more clinical data and molecular marker information can be integrated to establish a more comprehensive and multi-level prediction model. The dataset lacked specific information on the location of recurrences (such as local regions or distant metastases), and this limitation may restrict a comprehensive understanding of the recurrence prediction performance.

Future research directions include further optimizing the algorithms and architecture of deep learning models and exploring multi-modal data fusion methods, such as combining imaging, molecular biology, and clinical features, to enhance the comprehensive diagnostic ability for EC. In addition, applying remote monitoring technology and big data analysis to achieve real-time tracking and prediction of patients’ long-term follow-up and treatment effects will aid the development and promotion of personalized medicine. Additionally, to enhance the completeness and AC of the study, future research should consider collecting and analyzing detailed data on the location of recurrences to further assess the impact of different recurrence sites on the performance of predictive models. This will aid in formulating more precise clinical decisions and personalized treatment plans.