-
Statistical Insights into Fibronectin Networks in the Extracellular Matrix

The extracellular matrix (ECM), a complex network of proteins and carbohydrates, regulates key cellular and developmental processes. While computational methods for characterising collagen topology are well established, the organisation of fibronectin (FN), another vital ECM protein, remains comparatively underexplored. FN's more intricate structure and thinner fibrillar arrays make existing collagen-based methods less effective for its analysis. This work lays the groundwork for studying clinical tumour images from head and neck cancer patients, with the goal of integrating the analysis into a broader multimodal framework for predicting resistance to immunotherapy.

Our approach combines handcrafted feature extraction with standard machine learning algorithms to identify the statistical measures that best discriminate between FN assembled by control fibroblasts and by tumour-like fibroblasts. These include the alignment index, closeness and betweenness centrality measures derived from graph representations, and branch length. Despite the success of state-of-the-art (SOTA) methods in other domains, our handcrafted feature-based approach achieves competitive performance on our dataset. These results demonstrate that domain-specific feature engineering can effectively complement SOTA techniques, especially in biomedical applications where interpretability is crucial.
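The graph-derived measures named in this abstract can be illustrated on a graph representation of a fiber network. A minimal sketch with networkx, using a hypothetical toy skeleton graph (the node names and branch lengths are invented for illustration, not from the study's data):

```python
import networkx as nx

# Hypothetical toy graph standing in for a skeletonized fibronectin network:
# nodes are fiber junctions, edges are fiber branches weighted by length (um).
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 12.0), ("b", "c", 8.5), ("b", "d", 15.2),
    ("d", "e", 6.1), ("c", "e", 9.7),
])

# Closeness and betweenness centrality over branch-length-weighted paths.
closeness = nx.closeness_centrality(G, distance="weight")
betweenness = nx.betweenness_centrality(G, weight="weight")

# Mean branch length: average edge weight over the network.
mean_branch_length = (
    sum(d["weight"] for _, _, d in G.edges(data=True)) / G.number_of_edges()
)
```

Per-node centralities can then be summarized (mean, variance, extremes) into the per-image statistical features fed to a classifier.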
-
PriorPath: Coarse-To-Fine Approach for Controlled De-Novo Pathology Semantic Masks Generation

Incorporating artificial intelligence (AI) into digital pathology offers promising prospects for automating and enhancing tasks such as image analysis and diagnosis. However, the diversity of tissue samples and the need for meticulous image labeling often result in biased datasets, constraining the applicability of algorithms trained on them. To harness synthetic histopathological images against this challenge, it is essential not only to produce photorealistic images but also to control the cellular characteristics they depict. Previous studies generated, from random noise, semantic masks capturing the spatial distribution of the tissue; these masks were then used as a prior for conditional generative approaches that produce photorealistic histopathological images. However, as with many other generative models, this solution exhibits mode collapse: the model fails to capture the full diversity of the underlying data distribution. In this work, we present a pipeline, coined PriorPath, that generates detailed, realistic semantic masks from coarse-grained images delineating tissue regions. This approach enables control over the spatial arrangement of the generated masks and, consequently, of the resulting synthetic images. We demonstrate the efficacy of our method across three cancer types (skin, prostate, and lung), showing that PriorPath covers the semantic mask space and yields better similarity to real masks than previous methods. Our approach allows users to specify desired tissue distributions and obtain both photorealistic masks and images within a single platform, providing a state-of-the-art, controllable solution for generating histopathological images to facilitate AI for computational pathology.

Clinical relevance - The generation of synthetic histopathological images with precise semantic control over tissue distributions offers significant potential for advancing computational pathology. By allowing AI developers and pathologists to specify desired tissue features and configurations, our approach facilitates the creation of robust, unbiased AI models, supporting clinical decision support tools, improved diagnostics, and early cancer detection.
-
Resolution Enhancement of Prostate 3D MRI and Ultrasound Using Implicit Neural Representations

Prostate magnetic resonance imaging (MRI) and ultrasound (US) play a crucial role in the diagnosis and management of prostate diseases. However, limitations in spatial and axial resolution can hinder accurate lesion detection, affecting clinical decision-making. Traditional deep-learning super-resolution (SR) methods, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have succeeded in enhancing medical image quality, but they often suffer from high computational costs and rigid grid-based representations. In this work, we explore implicit neural representations (INRs), specifically sinusoidal representation networks (SIREN), for super-resolution reconstruction of prostate MRI and US images. INRs leverage continuous function representations to enhance spatial and axial resolution while preserving fine anatomical structures; our contribution is the novel application of SIREN to this medical imaging task. To further improve reconstruction quality, we propose a hybrid loss function combining mean square error (MSE) and the structural similarity index measure (SSIM). Experimental results demonstrate that our approach effectively restores high-resolution details, improving lesion visibility and aiding radiologists in more accurate diagnosis.

Clinical relevance - Limitations in the spatial and axial resolution of prostate MRI and US can hinder accurate lesion detection, leading to diagnostic uncertainty and the need for additional imaging studies or biopsies, which increases healthcare costs and patient burden. The proposed INR-based super-resolution approach enhances image quality while preserving fine anatomical structures, enabling radiologists to extract more information from existing scans. By improving lesion visibility and diagnostic accuracy, it has the potential to reduce the need for additional procedures, ultimately leading to cost savings and improved patient outcomes.
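The two ingredients described here, sine-activated layers mapping continuous coordinates to intensities and an MSE+SSIM objective, can be sketched in numpy. The layer sizes, the omega_0 = 30 frequency, and the alpha weighting are illustrative assumptions, and the SSIM below is a simplified single-window variant rather than the windowed SSIM used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(x, w, b, omega0=30.0):
    # Core of SIREN: sine activation sin(omega0 * (x @ W + b)).
    return np.sin(omega0 * (x @ w + b))

def global_ssim(a, b, c1=1e-4, c2=9e-4):
    # Simplified SSIM computed over whole images with intensities in [0, 1].
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def hybrid_loss(pred, target, alpha=0.8):
    # Weighted sum of MSE and an SSIM dissimilarity term.
    mse = ((pred - target) ** 2).mean()
    return alpha * mse + (1 - alpha) * (1 - global_ssim(pred, target))

# Toy forward pass: 2D coordinates -> one hidden SIREN layer -> intensity.
coords = rng.uniform(-1, 1, size=(64, 2))
w1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
w2, b2 = rng.normal(0, 1 / 16, (16, 1)), np.zeros(1)
pred = siren_layer(coords, w1, b1) @ w2 + b2

target = rng.uniform(0, 1, size=(64, 1))
loss = hybrid_loss(pred, target)
```

At inference, the trained network can be queried at arbitrarily dense coordinate grids, which is what gives INRs their resolution-free character.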
-
Language Model of Lung Nodules in LNDb Medical Reports

The description of lung nodules in medical reports can play a critical role in clinical decision-making and longitudinal analysis. However, the unstructured nature of many medical reports poses challenges for accessing, analyzing, and reusing this information. To address this, we propose a method to analyze medical reports in Portuguese derived from the LNDb dataset. A multi-step approach, comprising sentence relevance classification, named entity recognition, and relation extraction, was implemented. The goal was to identify and organize key information about lung nodules, such as location, size, and characteristics, for use in statistical metrics or to facilitate reannotation of imaging data. Each step applies transformer-based models, including BioBERTpt and BERTimbau. The best performance was achieved with BERTimbau-large: an F1 score of 0.87 for named entity recognition and an accuracy of 0.69 for relation extraction. Although the relation extraction step proved particularly challenging, the results demonstrate the potential of this method to improve the efficiency and accuracy of nodule analysis. The adoption of such automatic tools in clinical practice is an inevitable step forward, offering significant time savings and improved accuracy in treatment.
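The named-entity step can be illustrated by decoding token-level BIO tags into entity spans. A minimal sketch with an invented example sentence (an English gloss of a report fragment; a real pipeline would obtain the tags from BioBERTpt or BERTimbau rather than hard-code them):

```python
def decode_bio(tokens, tags):
    """Collapse BIO tags into (text, label) entity spans."""
    entities, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), label))
            current, label = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:
        entities.append((" ".join(current), label))
    return entities

# Invented tokens and tags for illustration.
tokens = ["nodule", "of", "8", "mm", "in", "the", "right", "upper", "lobe"]
tags = ["O", "O", "B-SIZE", "I-SIZE", "O", "O", "B-LOC", "I-LOC", "I-LOC"]
spans = decode_bio(tokens, tags)  # [('8 mm', 'SIZE'), ('right upper lobe', 'LOC')]
```

The relation extraction step would then decide which SIZE span attaches to which nodule mention, which is where the reported 0.69 accuracy suggests most of the remaining difficulty lies.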
-
Fine-Grained Prompting in Large Language Models for Accurate and Efficient TNM Staging from Radiology Reports

We propose a novel prompt engineering approach, Fine-Grained Prompting (FGP), to enhance the performance of large language models in extracting and classifying TNM staging information from radiology reports. FGP divides the TNM staging definitions into subtasks and integrates their responses to predict the TNM stage, shortening prompts and simplifying each task. FGP outperforms basic prompt engineering, showing an 18.5% improvement in T-category accuracy for lung cancer. Furthermore, an evaluation of clinicians' TNM staging time for lung cancer using application software based on FGP results showed that time efficiency more than doubled compared with standard manual processes. These findings highlight the potential of FGP to address existing challenges and set a new standard for AI-assisted cancer staging, ultimately enhancing clinical efficiency and patient outcomes.
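The idea of splitting the staging definitions into short subtask prompts and merging the answers can be sketched as follows. The subtask wording, the merge rule, and the stub model are all hypothetical illustrations; the paper's actual prompts are not reproduced here:

```python
# Hypothetical subtask prompts, one per TNM component, each kept short so the
# model answers one narrow question instead of the full staging rubric.
SUBTASKS = {
    "T": "From the report below, give only the T category (T1-T4 or TX).",
    "N": "From the report below, give only the N category (N0-N3 or NX).",
    "M": "From the report below, give only the M category (M0, M1, or MX).",
}

def fine_grained_staging(report, ask_model):
    """Run one short prompt per component, then merge into a TNM string."""
    answers = {
        key: ask_model(f"{prompt}\n\nReport: {report}")
        for key, prompt in SUBTASKS.items()
    }
    return f"{answers['T']} {answers['N']} {answers['M']}"

# Stub standing in for an LLM call, keyed on which subtask prompt it receives.
def stub_model(prompt):
    if prompt.startswith(SUBTASKS["T"]):
        return "T2"
    if prompt.startswith(SUBTASKS["N"]):
        return "N1"
    return "M0"

report = "3.2 cm right upper lobe mass; ipsilateral hilar nodes; no metastases."
stage = fine_grained_staging(report, stub_model)  # "T2 N1 M0"
```

Each sub-prompt carries only the definitions relevant to its component, which is what shortens the context and simplifies the classification the model must perform.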
-
LoRA-fine-tuned Large Vision Models for Automated Assessment of Post-SBRT Lung Injury

This study investigates the efficacy of Low-Rank Adaptation (LoRA) for fine-tuning large vision models (LVMs), DinoV2 and SwinV2, to diagnose radiation-induced lung injury (RILI) from X-ray CT scans following stereotactic body radiation therapy (SBRT). To evaluate the robustness and efficiency of this approach, we compare LoRA with traditional full fine-tuning and inference-only (no fine-tuning) methods. To probe the models' sensitivity to spatial context, we used cropped volumes of two sizes (50 mm³ and 75 mm³) centered at the treatment isocenter, together with different techniques for adapting the 2D LVMs to 3D data. Experimental results show that LoRA achieves comparable or superior performance to traditional fine-tuning while significantly reducing computational cost and training time by requiring fewer trainable parameters.

Clinical relevance - This study improves the detection of radiation-induced lung injury (RILI) in lung cancer patients following SBRT, enabling AI-driven diagnosis to support clinical decision-making.
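LoRA's parameter saving comes from freezing the pretrained weight W and learning only a low-rank update B @ A in its place. A minimal numpy sketch; the hidden size, rank, and scaling factor below are illustrative choices, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 8  # hidden size of a ViT-style layer; LoRA rank (both illustrative)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized
alpha = 16.0                         # LoRA scaling factor

def lora_forward(x):
    # Frozen path plus scaled low-rank update; only A and B receive gradients.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

full_params = W.size                 # parameters touched by full fine-tuning
lora_params = A.size + B.size        # parameters touched by LoRA (~2% here)
```

Because B starts at zero, the adapted layer initially reproduces the pretrained model exactly, which is part of why LoRA fine-tuning tends to be stable.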
-
Towards MR-only radiotherapy: AI generation of synthetic CT from Zero-TE MRI for head and neck cancer patients

This study proposes to create synthetic CT (sCT) from zero echo time (ZTE) MRI using deep learning, for MRI-only radiotherapy planning and verification. ZTE and CT were collected prospectively from 17 patients undergoing external beam radiotherapy of the head and neck. The Unet architecture was extended with deep residual units and attention gates. The resulting attention deep residual Unet (ADR-Unet) was validated quantitatively and qualitatively against Unet++, using leave-one-out cross-validation. The results showed the superiority of ADR-Unet (MAE = 75.54 ± 11.4 HU) over Unet++ (MAE = 80.51 ± 8.48 HU) and several state-of-the-art approaches. The contrast-to-noise ratios (CNR) of the generated digitally reconstructed radiographs (DRRs) were computed. For lateral right DRRs, the CNR values (ADR-Unet: 44.47 ± 6.23 dB; Unet++: 44.33 ± 5.65 dB) were close to the CNR of the planning-CT-based DRR (44.33 ± 5.65 dB); the same tendency was observed for anterior-posterior DRRs. Future work will focus on evaluating the sCT for dose calculation.

Clinical relevance - The purpose of this work is to enable MR-only radiotherapy. This could eliminate the need for CT scans by using MRI for both target delineation and dose calculation, significantly reducing additional radiation exposure to patients, minimizing image registration uncertainties associated with CT/MRI fusion, and reducing patient visits and imaging costs.
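An attention gate of the kind added to the Unet weights skip-connection features by a learned coefficient computed from a coarser gating signal. A numpy sketch of the standard additive-attention form; the channel count and the randomly drawn projection weights are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, wx, wg, psi):
    # Additive attention: alpha = sigmoid(psi(relu(Wx x + Wg g))); output x * alpha.
    q = np.maximum(wx @ x + wg @ g, 0.0)  # joint feature, ReLU
    alpha = sigmoid(psi @ q)              # per-position attention coefficient
    return x * alpha, alpha

c = 8                                 # channels (illustrative)
x = rng.normal(size=(c, 16))          # skip-connection features (channels x positions)
g = rng.normal(size=(c, 16))          # gating signal from the coarser decoder level
wx, wg = rng.normal(size=(c, c)), rng.normal(size=(c, c))
psi = rng.normal(size=(1, c))         # 1x1 projection to one gate per position

gated, alpha = attention_gate(x, g, wx, wg, psi)
```

In the full network the same operation is applied at each skip connection with 1x1 convolutions playing the role of `wx`, `wg`, and `psi`, suppressing irrelevant regions before the decoder fuses the features.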
-
Immunohistochemical information integrated pre-training improves HER2 status prediction from whole slide images of breast cancer

The evaluation of human epidermal growth factor receptor 2 (HER2) expression is of paramount importance for the precise treatment of breast cancer. Immunohistochemistry (IHC) is the established gold standard for HER2 assessment, but it comes with substantial costs. To reduce the need for costly IHC tests, this paper develops HI-MAE, a patch-level feature encoder that, for the first time, uses both hematoxylin and eosin (H&E) and IHC information when pre-training a masked autoencoder (MAE), yielding more effective feature encoding of H&E-stained images. Integrated with various multiple instance learning (MIL) models, HI-MAE enables direct prediction of HER2 status from H&E-stained whole slide images (WSIs). On the TCGA-BRCA dataset, integrating IHC information into the feature encoding for H&E-stained images significantly improves HER2 status prediction, with the AUC increasing from 0.59 to 0.74. HI-MAE opens new avenues for future research, facilitating the incorporation of IHC information into other classification tasks, particularly precise biomarker prediction.

Clinical relevance - By incorporating IHC images into the pre-training process, we enable the prediction of HER2 status directly from routine H&E-stained tissue sections under the guidance of IHC information, eliminating the need for IHC images during the testing phase. This approach significantly reduces the staining costs associated with IHC.
-
Trustworthy assessment of 2D model for lung CT scans

Immunotherapy (IO) has revolutionized the treatment of non-small cell lung cancer (NSCLC). However, determining the best candidates for this therapy remains a challenge. Biomarkers such as PD-L1 are currently used in clinical practice to guide treatment decisions, but their predictive power is limited, so new models are urgently needed to identify which patients will benefit most from immunotherapy.

In this context, artificial intelligence (AI) has shown promise in deriving novel data-driven biomarkers from medical imaging, offering a way to enhance patient selection and treatment stratification. These AI-driven biomarkers have not yet been widely adopted in clinical practice due to various concerns, including the time required for radiologists to perform segmentation. In this study, a 2D ResNet50 model, pre-trained on 1.35 million radiologic images, was used to process 2D lung CT scans from patients with advanced NSCLC treated at Fondazione IRCCS Istituto Nazionale dei Tumori in Milan. The model predicts poor responders, defined as patients with an overall survival of less than six months, from radiological images acquired before the initiation of IO treatment (baseline CT scans), and achieved an F1-score of 0.74 on the test set. To assess the model's robustness, a fairness evaluation was conducted across demographic subgroups, specifically sex and age, and a two-sample independent t-test was performed to assess statistical differences between these groups. This analysis highlights fairness concerns in the model's predictions, with significant p-values (p < 0.05) suggesting that sex and age may be confounding factors. Further investigation is required to mitigate these biases and ensure equitable model performance across diverse patient populations.

Clinical relevance - The model identifies poor responders (patients with an overall survival of less than 6 months), potentially preventing unnecessary IO administration in NSCLC patients unlikely to benefit from the therapy. Additionally, it evaluates how variations in data distribution could impact model performance.
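The subgroup comparison described above amounts to a two-sample independent t-test on the model's output scores, split by a demographic attribute. A sketch with synthetic scores (the data below are simulated purely for illustration, not the study's results, and a deliberate mean shift is injected so the test detects a gap):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic model scores for two demographic subgroups (e.g. split by sex).
scores_a = rng.normal(loc=0.55, scale=0.10, size=120)
scores_b = rng.normal(loc=0.62, scale=0.10, size=100)

# Welch's two-sample independent t-test on the subgroup score distributions.
t_stat, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)

flagged = p_value < 0.05  # a significant difference flags a fairness concern
```

A significant result says the score distributions differ between subgroups; deciding whether that difference reflects bias or a genuine clinical covariate is exactly the follow-up investigation the abstract calls for.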
-
A Text-Image Network for Isocitrate Dehydrogenase (IDH) Mutation Status Prediction in Glioma Diagnosis Using Multimodal MRI and Radiology Reports

Brain glioma is a serious disease, and the mutation status of isocitrate dehydrogenase (IDH) is an important factor in its diagnosis. There is a clear link between glioma prognosis and IDH mutation, and knowing the mutation status helps physicians plan treatment strategies. However, current methods for detecting IDH mutations are costly and not always practical. Clinically, there is a recognized relationship between magnetic resonance (MR) images and IDH mutation status, and in recent years many machine learning methods have been developed to predict the mutation from MR images. Most of these studies focus solely on the imaging modality and ignore the text of radiology reports, which contains valuable diagnostic information; this limits the benefits of a multimodal approach in clinical diagnosis. To address this gap, we propose a multimodal deep learning model that uses 3D MRI volumes and text reports to predict IDH mutation status. We evaluated our method on the BraTS20 challenge dataset, with the text modality annotated by the First Affiliated Hospital of Zhengzhou University in China. Compared with state-of-the-art methods, our approach improves the accuracy of predicting IDH mutation status by 4%, demonstrating better overall performance.
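One common way to combine the two modalities is late fusion: encode image and text separately, concatenate the embeddings, and classify the fused vector. A minimal numpy sketch with invented embedding sizes and random stand-in encoders (the paper's actual fusion architecture may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Invented embedding sizes for a 3D-MRI encoder and a report-text encoder.
img_emb = rng.normal(size=128)   # stand-in for a 3D CNN feature vector
txt_emb = rng.normal(size=64)    # stand-in for a report-text [CLS] embedding

fused = np.concatenate([img_emb, txt_emb])  # late fusion by concatenation

# Linear head over the fused vector: IDH-wildtype vs IDH-mutant.
W = rng.normal(size=(2, fused.size)) * 0.05
b = np.zeros(2)
probs = softmax(W @ fused + b)
pred = ["IDH-wildtype", "IDH-mutant"][int(probs.argmax())]
```

The appeal of fusing at the embedding level is that each encoder can be pretrained on its own modality before the joint head is trained on the paired data.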