-
Longitudinal Changes in Functional Brain Network Properties Following Surgical Glioma Resection.
Brain tumors significantly disrupt brain network organization, yet the temporal dynamics of network reorganization following surgical intervention remain poorly understood. This study investigated longitudinal changes in functional brain network properties across pre-surgical, post-surgical, and follow-up time points in glioma patients. Using graph theory analysis of resting-state functional magnetic resonance imaging (fMRI) data, we examined whole-brain network metrics as well as the connections involving perilesional and contralesional regions. Results revealed significant alterations in network topology over time, with distinct patterns of reorganization in perilesional and contralesional regions, suggesting mechanisms of plasticity and recovery in brain network architecture following tumor resection.
Clinical Relevance - These findings have significant implications for surgical planning and post-operative care, suggesting the need for therapeutic approaches that consider both local and distant network effects. The demonstrated importance of contralesional adaptation particularly warrants attention in rehabilitation strategies, potentially opening new avenues for targeted interventions in recovery.
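The graph-theoretic pipeline described above can be illustrated with a minimal sketch: build a functional connectivity matrix from (here, synthetic stand-in) regional time series, binarize it at a threshold, and compute two common network metrics, node degree and the local clustering coefficient. The threshold of 0.2 and the data are arbitrary assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 20, 120
ts = rng.standard_normal((n_regions, n_timepoints))  # stand-in for regional BOLD time series

corr = np.corrcoef(ts)                   # functional connectivity (Pearson correlation) matrix
np.fill_diagonal(corr, 0.0)
adj = (np.abs(corr) > 0.2).astype(int)   # binarize at an arbitrary illustrative threshold

degree = adj.sum(axis=1)                 # node degree: number of connections per region

# local clustering coefficient: closed triangles / possible triangles at each node
triangles = np.diag(adj @ adj @ adj) / 2
possible = degree * (degree - 1) / 2
clustering = np.divide(triangles, possible, out=np.zeros(n_regions), where=possible > 0)
```

In a longitudinal design, these metrics would be computed per time point (pre-surgical, post-surgical, follow-up) and compared statistically.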
-
Detection of Metastatic Tissues in Histopathologic Images using DenseNet-121 with Data Augmentation.
The identification of metastatic tissue in histopathological scans is a critical step in cancer detection. This research uses DenseNet-121 to automate the detection of metastatic tissue on the CAMELYON17 dataset, combined with data augmentation techniques to improve model generalization. The results show that DenseNet-121 with data augmentation outperforms ResNet-18 and ResNet-50 in accuracy and F1-score for detecting metastatic tissue, achieving a test accuracy of 0.98 and an F1-score of 0.98, exceeding previous state-of-the-art methods. Moreover, the model was also evaluated on the CAMELYON16 dataset and maintained relatively high accuracy on previously unseen images. These results suggest that DenseNet-121 could be a valuable assistive tool for pathologists, potentially accelerating cancer diagnoses and improving diagnostic reliability.
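Histopathology tiles have no canonical orientation, so label-preserving flips and 90-degree rotations are a typical augmentation choice for this kind of task. A minimal sketch (the tile size and transform set are illustrative assumptions, not the paper's exact augmentation policy):

```python
import numpy as np

def augment(image, rng):
    """Apply random label-preserving flips and 90-degree rotations to a 2D tile."""
    if rng.random() < 0.5:
        image = np.fliplr(image)         # random horizontal flip
    if rng.random() < 0.5:
        image = np.flipud(image)         # random vertical flip
    k = int(rng.integers(0, 4))          # rotate by 0, 90, 180, or 270 degrees
    return np.rot90(image, k)

rng = np.random.default_rng(42)
patch = rng.random((96, 96, 3))          # stand-in for a CAMELYON17 histopathology tile
augmented = augment(patch, rng)
```

Applied on the fly during training, such transforms multiply the effective dataset size without changing the tissue-level label.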
-
Symptom Monitoring in Oncological Patients Using Wrist-Worn Wearables: A Machine Learning Approach.
Reliable and continuous monitoring of psycho-physical symptoms is essential to improve cancer patients' quality of life. Scales and questionnaires, traditionally used in clinical contexts, provide valuable information regarding patients' overall well-being; however, their intrinsic characteristics do not allow continuous symptom monitoring. The integration of wearable sensors and artificial intelligence algorithms promises to revolutionize health monitoring, providing continuous and pervasive recordings. In this study, we assessed the performance of machine learning (ML) algorithms in predicting the presence of nine common symptoms experienced by cancer patients, using physiological signals and self-rated symptoms collected in a real-world context, i.e., at the patients' homes. Features were extracted from electrodermal activity (EDA), skin temperature (TEMP), and accelerometer (ACC) data. Principal component analysis was applied to merge the extracted features, selecting the first components, which explain 90% of the total variance, to feed three ML algorithms: logistic regression (LogReg), support vector machine (SVM), and random forest (RF). A bootstrap approach was used to enhance the robustness of the results. SVM and RF provided consistently better performance than LogReg, achieving a better balance across the evaluated performance metrics. Tiredness achieved the highest F1-score (91.68% ± 2.81%) with SVM. Other symptoms, such as malaise, drowsiness, anxiety, appetite, nausea, and pain, achieved F1-scores above 70%. Despite the limitations of a small sample size and of not accounting for the time of day, these preliminary findings suggest the feasibility of such an approach, which has the potential to improve cancer patient care.
Clinical Relevance - The integration of wearable devices and machine learning offers a promising solution for continuous monitoring of psycho-physical symptoms, enabling early intervention and personalized treatment strategies.
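The PCA-plus-classifier pipeline described above can be sketched in a few lines of scikit-learn: passing a float to `PCA(n_components=...)` keeps exactly the leading components that explain that fraction of the variance (here 90%). The feature matrix and symptom labels below are synthetic stand-ins, not the study's wearable data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 30))      # stand-in for EDA/TEMP/ACC features per window
y = (X[:, 0] + 0.5 * rng.standard_normal(120) > 0).astype(int)  # symptom present/absent

# PCA(n_components=0.90) keeps the principal components explaining 90% of the variance
model = make_pipeline(StandardScaler(), PCA(n_components=0.90), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
```

The study additionally used bootstrapping for robustness; the same pipeline would simply be refit on resampled training sets.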
-
Deep Learning-Driven Radiomic Feature Extraction for Predicting Complete Pathological Response to Neoadjuvant Chemotherapy in Breast Cancer from 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Scans.
This study aimed to assess the potential of 18F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography (18F-FDG PET/CT) parameters, including advanced texture features, to predict pathological complete response (pCR) after the first course of neoadjuvant chemotherapy (NAC) in breast cancer follow-up patients. This approach evaluated pCR after the first course of NAC by combining information from functional images, anatomical images, and clinical data. A total of 204 breast cancer patients underwent 18F-FDG PET/CT imaging for NAC assessment. From these delayed PET/CT scans, we extracted both metabolic and radiomic features, combining imaging parameters with each patient's breast cancer molecular subtype to improve pCR prediction. Lesion segmentation was automated using the no-new-Net (nnUNet) deep learning model. To predict pCR, we employed machine learning classifiers, including Random Forest, XGBoost, and Support Vector Machine. Among all tested models, the highest prediction performance was achieved when PET/CT features (both baseline and follow-up) were combined with breast cancer subtype information. The analysis was conducted on the entire dataset (Human Epidermal Growth Factor Receptor 2 (HER2), Luminal, and Triple-Negative (TN) subtypes). Moreover, separate analyses were performed specifically on HER2 tumors (N=76) and TN tumors (N=52). The combined model achieved a mean balanced accuracy of 0.76 ± 0.09, surpassing the individual models for HER2 (0.67 ± 0.08) and TN (0.65 ± 0.06). These findings show the importance of integrating baseline and follow-up PET/CT radiomic features, texture analysis, and clinical information for more accurate pCR prediction after the first course of NAC in breast cancer patients. Overall, the features extracted from baseline and follow-up data (i.e., after the first course of NAC), combined with breast cancer subtype information, offer strong predictive value for pCR in follow-up patients.
Clinical Relevance - By providing a more accurate assessment of treatment response after the first course of NAC, this approach empowers clinicians to make artificial intelligence-driven decisions, customize therapy plans for individual patients, and avoid ineffective treatments. Consequently, this strategy could improve patient outcomes and optimize therapeutic efficacy.
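The key design choice above, fusing continuous imaging features with a categorical molecular subtype, typically amounts to concatenating the radiomic feature matrix with a one-hot encoding of the subtype before classification. A minimal sketch with synthetic stand-in data (feature counts, labels, and the Random Forest settings are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 204
radiomic = rng.standard_normal((n, 12))       # stand-in PET/CT radiomic/texture features
subtype = rng.integers(0, 3, n)               # 0 = HER2, 1 = Luminal, 2 = TN
subtype_onehot = np.eye(3)[subtype]           # one-hot encode the molecular subtype
X = np.hstack([radiomic, subtype_onehot])     # combined imaging + clinical feature matrix
y = (radiomic[:, 0] + 0.3 * subtype > 0.5).astype(int)  # synthetic pCR label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
```

Balanced accuracy, the metric reported in the study, weights both classes equally, which matters when pCR is the minority outcome.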
-
Glioblastoma Overall Survival Prediction With Vision Transformers.
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting Overall Survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel Artificial Intelligence (AI) approach for OS prediction from Magnetic Resonance Imaging (MRI), exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images and eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BraTS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods. Additionally, it demonstrated balanced performance across precision, recall, and F1-score, surpassing the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
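What lets a ViT skip segmentation is its input pipeline: the whole slice is cut into fixed-size patches, each flattened and linearly projected into a token, so the transformer attends over the full image rather than a delineated tumor region. A minimal numpy sketch of this patch-embedding step (slice size, patch size, and embedding dimension are illustrative assumptions; in a real ViT the projection `W` is learned):

```python
import numpy as np

def patch_embed(img, patch, W):
    """Split a square 2D slice into non-overlapping patches and project each to a token."""
    h = img.shape[0] // patch
    patches = (img.reshape(h, patch, h, patch)       # cut rows and columns into blocks
                  .transpose(0, 2, 1, 3)             # group the blocks
                  .reshape(-1, patch * patch))       # flatten each patch to a vector
    return patches @ W                               # (num_patches, embed_dim) token sequence

rng = np.random.default_rng(0)
slice_ = rng.random((64, 64))                # stand-in for a downsampled MRI slice
W = rng.standard_normal((16 * 16, 32))       # stand-in for the learned linear projection
tokens = patch_embed(slice_, 16, W)          # 16 tokens of dimension 32
```

The resulting token sequence (plus position embeddings and a classification head) is what the transformer encoder consumes.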
-
Automated Radiomics Analysis from Multi-Modal Image Segmentation for Predicting Triple Negative Breast Cancer.
This study aims to investigate whether quantitative radiomic features extracted from Positron Emission Tomography/Computed Tomography (PET/CT) can differentiate triple-negative breast cancer (TNBC) from non-triple-negative breast cancer (non-TNBC). We propose a pipeline that combines deep learning for cancer lesion segmentation with machine learning techniques to classify TNBC. Our approach leveraged radiomic features extracted from 18F-fluorodeoxyglucose PET/CT. This retrospective study included the PET/CT images of 217 patients with breast cancer (57 TNBC and 160 non-TNBC) admitted to Georges-François Leclerc Hospital. The tumor regions of interest were automatically segmented on PET images using a deep learning model and mapped to the CT scans. Radiomic features were extracted from the 3D tumor volumes, and machine learning classifiers were built using stratified 5-fold cross-validation. Recursive feature elimination was employed to rank and select the most relevant radiomic features, thereby enhancing classification performance. The model was evaluated using the F1-score, area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity. The proposed method achieved promising performance, with an F1-score of 0.90 ± 0.02, an accuracy of 0.86 ± 0.07, a sensitivity of 0.91 ± 0.06, and an AUC of 0.88 ± 0.04 using the top-ranked features; all metrics were averaged over the five cross-validation folds. Radiomic features extracted from PET and CT scans provide valuable prognostic insights for the identification of TNBC. This study demonstrated that machine learning algorithms based on radiomic features and automated PET/CT segmentation can accurately distinguish TNBC from non-TNBC.
Clinical Relevance - This study demonstrates the potential of image-based radiomic analysis combined with machine learning to differentiate TNBC from non-TNBC. By using deep learning for automatic tumor segmentation and feature extraction, this approach offers a non-invasive, quantitative tool that can improve TNBC diagnosis and the efficiency of treatment strategies. These advancements may help clinicians obtain more reliable insights while reducing the likelihood of misclassification.
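The feature-selection-plus-classification step above maps directly onto scikit-learn's recursive feature elimination: `RFE` repeatedly fits an estimator and drops the least informative features until the requested number remains, and embedding it in a pipeline keeps the selection inside each cross-validation fold. The data, feature counts, and the choice of logistic regression as base estimator below are illustrative stand-ins, not the study's exact setup.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.standard_normal((217, 40))                # stand-in 3D-tumor radiomic features
y = (X[:, :3].sum(axis=1) > 0).astype(int)        # synthetic TNBC vs non-TNBC label

# RFE ranks features by iteratively removing the least informative ones
model = make_pipeline(
    StandardScaler(),
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=10),
    LogisticRegression(max_iter=1000),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
f1 = cross_val_score(model, X, y, cv=cv, scoring="f1")
```

Stratified folds preserve the 57/160 class imbalance in every split, which is why the study's per-fold metrics are comparable.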
-
Predicting the efficacy of first-line therapy for patients with colorectal cancer liver metastases using CT imaging and clinical data.
Colorectal cancer (CRC) patients are highly prone to liver metastases (CRLM), which often become the leading cause of death in this population. Predicting the efficacy of first-line therapies is crucial for clinicians to develop personalized treatment strategies for CRLM patients. In this paper, we propose a novel multimodal cross-attention model that integrates contrast-enhanced liver CT imaging and clinical data to predict the therapeutic efficacy of first-line treatment in CRLM patients. Our approach uses the nnUNetv2 model to segment the liver and intratumoral regions from CT scans. Radiomic features are extracted from the segmented tumor regions, followed by a feature selection process to identify key predictors of treatment efficacy. In parallel, highly correlated clinical variables are identified and preprocessed. The selected radiomic features and clinical variables are processed through two separate branches with identical structures, each incorporating a multi-head cross-attention module to enable efficient exchange and alignment of multimodal information. The fused multimodal features are then used to predict therapeutic outcomes. Experiments on a dataset of 177 patients demonstrate that our multimodal model outperforms uni-modal models and existing deep learning methods, achieving an AUC of 0.7195. This approach highlights the potential of integrating imaging and clinical data for improved treatment efficacy prediction in CRLM.
Clinical Relevance - This study highlights the importance of integrating contrast-enhanced liver CT imaging and clinical data for predicting the efficacy of first-line therapies in CRLM patients.
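The core of the fusion step described above is cross-attention: tokens from one modality act as queries against keys and values from the other, so each radiomic feature vector is re-expressed as a weighted mixture of clinical information (and vice versa in the second branch). A single-head numpy sketch (token counts, dimensions, and the single-head simplification are illustrative assumptions; the paper uses learned multi-head modules):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: one modality queries the other."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)              # similarity between modalities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over key positions
    return weights @ values                             # mixture of the other modality

rng = np.random.default_rng(0)
radiomic_tokens = rng.standard_normal((8, 16))   # stand-in projected radiomic features
clinical_tokens = rng.standard_normal((5, 16))   # stand-in projected clinical variables

# the radiomic branch attends to clinical information; the clinical branch does the reverse
fused = cross_attention(radiomic_tokens, clinical_tokens, clinical_tokens)
```

Because the attention weights are a softmax, each fused token is a convex combination of clinical tokens, which is what "alignment of multimodal information" amounts to here.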
-
Conditional Score-based Diffusion Models for Lung CT Scan Generation.
Chest CT scans are essential for diagnosing lung abnormalities, including lung cancer, but their utility for training deep learning models is often hampered by limited data availability, high labeling costs, and privacy concerns. To address these challenges, this study explores the use of score-based diffusion models for the conditional generation of lung CT scan slices. Two generation scenarios are explored: one conditioned only on lung segmentation masks and another incorporating both lung and nodule segmentation maps to guide the synthesis process. The proposed methods are custom U-Net architectures trained to predict the scores of Variance Preserving (VP) and Variance Exploding (VE) Stochastic Differential Equations (SDEs), forming the primary basis for comparison in conditional sample generation. The results demonstrate the superiority of the VP SDE model in generating high-fidelity images, as evidenced by high SSIM (0.894) and PSNR (28.6) values, as well as low domain-specific FID (173.4), MMD (0.0133), and ECS (0.78) scores. The generated images consistently followed the conditional mapping guidance during the generation process, producing realistic lung and nodule structures and highlighting their potential for data augmentation in medical imaging tasks. While the models achieved notable success in generating accurate 2D lung CT scan slices from simple conditional image region mappings, future work includes extending these methods to 3D conditional generation and using richer conditional mappings to account for broader anatomical variations. Nevertheless, this study holds promise for improving computer-aided systems by supporting deep learning model training for lung disease diagnosis and classification.
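Training a VP SDE score model hinges on its closed-form perturbation kernel: for a linear noise schedule beta(t) from `beta_min` to `beta_max`, a clean image x0 is diffused to x_t ~ N(mean(t) * x0, std(t)^2 I), and the network learns to predict the score of these noised samples. A numpy sketch of the forward perturbation (the schedule constants follow the common VP SDE convention; the CT slice is a synthetic stand-in):

```python
import numpy as np

def vp_perturb(x0, t, beta_min=0.1, beta_max=20.0, rng=None):
    """Sample x_t from the VP SDE perturbation kernel N(mean * x0, std^2 I), t in (0, 1]."""
    log_mean = -0.25 * t**2 * (beta_max - beta_min) - 0.5 * t * beta_min
    mean = np.exp(log_mean)                       # signal shrinks toward 0 as t -> 1
    std = np.sqrt(1.0 - np.exp(2.0 * log_mean))   # noise variance grows toward 1
    noise = (rng or np.random.default_rng()).standard_normal(x0.shape)
    return mean * x0 + std * noise, noise

rng = np.random.default_rng(0)
x0 = rng.random((32, 32))                   # stand-in CT slice scaled to [0, 1]
xt, eps = vp_perturb(x0, t=0.5, rng=rng)    # noised sample and the noise that made it
```

"Variance preserving" refers to the property that mean(t)^2 + std(t)^2 stays bounded by 1, unlike the VE SDE, whose noise variance explodes; conditioning on segmentation masks then amounts to feeding the mask to the score network alongside x_t.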
-
Simulation-Based Bioelectronic Modeling for Clustering Cancer Cells by Malignancy in Biosensing Applications.
This study presents a novel approach to organic electrochemical transistor (OECT)-based biosensing, integrating unsupervised clustering for cancer cell differentiation. By analyzing impedance and frequency response features, this work demonstrates the ability of OECTs to capture distinct electrical signatures associated with cellular metastatic potential. A synthetic dataset was generated to simulate the electrical behavior of different cell lines, with membrane capacitance, double-layer capacitance, and crossover frequency identified as key parameters of cell interaction. K-means clustering was employed to identify inherent patterns within the data, revealing distinct groupings of cell states based on electrical properties that map to their metastatic behavior. This proof-of-concept study not only establishes OECTs as a viable tool for cancer cell differentiation but also highlights the transformative potential of machine learning in the development of next-generation biosensing chips for cancer diagnostics and screening.
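The clustering step described above can be sketched directly: simulate per-cell feature vectors for the three named electrical parameters, standardize them (crossover frequency lives on a very different scale from the capacitances), and let K-means recover the groupings. All numeric values below are invented stand-ins for the paper's simulated cell lines.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# synthetic per-cell features: [membrane capacitance, double-layer capacitance,
# crossover frequency] for two populations with different metastatic potential
low_meta = rng.normal([10.0, 5.0, 1e3], [1.0, 0.5, 100.0], size=(50, 3))
high_meta = rng.normal([15.0, 8.0, 5e3], [1.0, 0.5, 100.0], size=(50, 3))

# standardize so the frequency feature does not dominate the Euclidean distance
X = StandardScaler().fit_transform(np.vstack([low_meta, high_meta]))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

With well-separated simulated populations, the two K-means clusters align with the two metastatic-potential groups, which is the pattern-recovery claim of the study.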
-
Explainable AI Radiomics in Prostate Cancer Aggressiveness Prediction using different quantitative Diffusion MRI models.
Prostate cancer (PCa) is one of the most frequently diagnosed cancers in men, with an average age at diagnosis of 66 years. Accurate early characterization of PCa is the major unmet need in disease management, in order to stratify patients with indolent disease from patients at high risk of aggressive disease at an early stage. To this end, a retrospective collection of 202 histopathologically proven PCa patients was explored through quantitative diffusion MRI modelling radiomics to automatically classify Gleason score (GS) between GS<7 and GS≥7, aiming to reduce unnecessary biopsies. The classification was conducted by training a variety of classifiers on T2-weighted and quantitative diffusion MRI data, and explainability was assessed using SHapley Additive exPlanations (SHAP). The best-performing model combined T2 and the diffusion-derived micro-perfusion fraction parametric map from the intravoxel incoherent motion (IVIM) model, exhibiting a mean accuracy of 80.91% and an AUC of 85.29%. The findings of our work suggest that tissue structural information and blood micro-perfusion play a significant role in predicting PCa aggressiveness.
Clinical Relevance - This work establishes automated classification of PCa aggressiveness using quantitative diffusion models alongside T2-weighted images, aimed at reducing the large variability across centers and the rate of referrals for unnecessary invasive procedures such as biopsy.