CoroSAM
Interactive Coronary Artery Segmentation with Segment Anything Model
Paper * Getting Started * Pretrained Models * GUI
CoroSAM is a deep learning framework for interactive coronary artery segmentation in coronary angiograms, built on a computationally efficient SAM-based architecture with custom convolutional adapters.
This is the official implementation of the paper published in Computer Methods and Programs in Biomedicine.
Table of Contents
- Installation
- Pretrained checkpoints
- ARCADE dataset
- Preprocessing
- Training
- Testing
- Testing on different datasets
- GUI application
- Citation
- Acknowledgments
Installation
1. Create virtual environment & install PyTorch
First, install PyTorch following the official installation guide.
Recommended version: torch==2.6.0+cu124
2. Clone repository
git clone <repository-url>   # replace with this repo's clone URL
cd corosam
3. Install dependencies
pip install -r requirements.txt   # assuming the repo ships a requirements file
Pretrained checkpoints
External
Download and place in checkpoints/Pretrained/:
| Model | Source | Path |
|---|---|---|
| LiteMedSAM | GitHub | checkpoints/Pretrained/lite_medsam.pth |
| SAM-Med2D | GitHub | checkpoints/Pretrained/sam-med2d_b.pth |
CoroSAM
Our pretrained CoroSAM model trained on ARCADE is available here:
Save as: checkpoints/CoroSAM/CoroSAM_Final_Training.pt
ARCADE dataset preparation
Download
- Download ARCADE from Zenodo
- Extract to your workspace
- Use only the `syntax` subset for this project
arcade/
+-- syntax/
+-- train/
+-- val/
+-- test/
Preprocessing
Transform ARCADE COCO annotations into training-ready format.
1. Configure paths
Edit preprocessing_config.yaml:
seed: 2025
2. Run preprocessing pipeline
# Step 1: Convert COCO annotations to binary masks
python preprocessing/convert_coco_to_binary_masks.py
# Step 2: Merge train+val and apply augmentation
python preprocessing/data_augmentation.py
# Step 3: Create k-fold splits
python preprocessing/split_dataset.py
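The core of Step 1 is rasterizing COCO polygon annotations into binary masks. A minimal sketch of that idea, using Pillow for polygon filling (the actual internals of convert_coco_to_binary_masks.py may differ):

```python
# Sketch: rasterize a COCO-style polygon annotation into a binary mask.
# Assumes Pillow and NumPy; the real preprocessing script may differ.
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(segmentation, height, width):
    """segmentation: flat [x1, y1, x2, y2, ...] list, as in COCO annotations."""
    mask = Image.new("L", (width, height), 0)            # blank canvas
    xy = list(zip(segmentation[0::2], segmentation[1::2]))
    ImageDraw.Draw(mask).polygon(xy, outline=1, fill=1)  # fill vessel region
    return np.array(mask, dtype=np.uint8)                # 0/1 binary mask

# Example: a square polygon on a 32x32 image
mask = polygon_to_mask([5, 5, 15, 5, 15, 15, 5, 15], 32, 32)
```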
Output structure:
syntax/
+-- train/ # Original train set
+-- val/ # Original val set
+-- test/ # Test set
+-- train_all/ # Merged train+val
| +-- images/
| +-- annotations/
| +-- images_augmented/
| +-- annotations_augmented/
+-- kf_split/ # 5-fold cross-validation
+-- set1/
+-- set2/
+-- ...
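The k-fold split in Step 3 amounts to a seeded shuffle of filenames divided into disjoint folds, each fold serving once as the validation set. A simplified stand-in for split_dataset.py (the real script may group files differently):

```python
# Sketch: seeded 5-fold split of image filenames, mimicking Step 3.
import random

def make_kfold_splits(filenames, n_folds=5, seed=2025):
    files = sorted(filenames)
    random.Random(seed).shuffle(files)     # reproducible via the config seed
    folds = [files[i::n_folds] for i in range(n_folds)]
    # Each "set" uses one fold for validation and the rest for training
    return [{"val": folds[i],
             "train": [f for j, fold in enumerate(folds) if j != i for f in fold]}
            for i in range(n_folds)]

splits = make_kfold_splits([f"img_{i}.png" for i in range(10)])
```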
Training
Train CoroSAM on your data with flexible configurations.
Configuration
Edit train_config.yaml:
dataset_root: "C:/path/to/arcade/syntax"
k_fold_path: "C:/path/to/arcade/syntax/kf_split"
# Model
model_name: "LiteMedSAM"
exp_name: "CoroSAM_Training"
# Adapters
use_adapters: true
use_conv_adapters: true
channel_reduction: 0.25
# Training
n_folds: 5 # 5-fold CV or set to 1 for single run
epochs: 25
batch_size: 4
lr: 0.0005
# Logging
use_wandb: true
proj_name: "CoroSAM"
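The `channel_reduction: 0.25` option sets the adapter bottleneck width relative to the backbone channels. As a rough illustration of the bottleneck-adapter idea only (a generic NumPy sketch, not CoroSAM's actual convolutional adapter, whose design is described in the paper):

```python
# Sketch: generic bottleneck adapter with a residual connection.
# Illustrates channel_reduction; CoroSAM's adapters are convolutional.
import numpy as np

def adapter_forward(x, w_down, w_up):
    """x: (tokens, channels). Down-project, ReLU, up-project, add residual."""
    hidden = np.maximum(x @ w_down, 0.0)   # bottleneck activation
    return x + hidden @ w_up               # residual keeps the frozen path intact

channels = 256
reduced = int(channels * 0.25)             # channel_reduction: 0.25 -> 64
rng = np.random.default_rng(0)
w_down = rng.standard_normal((channels, reduced)) * 0.01
w_up = np.zeros((reduced, channels))       # zero-init: adapter starts as identity
x = rng.standard_normal((8, channels))
y = adapter_forward(x, w_down, w_up)
```

Zero-initializing the up-projection is a common trick so a freshly inserted adapter does not perturb the pretrained model before training.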
Run training
K-fold cross-validation:
Single training run:
train_path: "C:/path/to/arcade/syntax/train_all"
val_path: "C:/path/to/arcade/syntax/test"
Testing
Comprehensive evaluation with detailed metrics and visualizations.
Configure testing
Edit test_config.yaml:
model_name: "LiteMedSAM"
checkpoint: "checkpoints/CoroSAM/CoroSAM_Final_Training.pt"
# Dataset
test_path: "C:/path/to/arcade/syntax/test"
results_path: "results/CoroSAM_ARCADE_Test"
# Options
save_predictions: true # Save visualization images
Run testing
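Segmentation evaluation typically reports overlap metrics such as the Dice coefficient; a minimal NumPy version (the repo's own metric code may differ):

```python
# Sketch: Dice coefficient between a binary prediction and ground truth.
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.zeros((4, 4)); pred[:2, :] = 1   # top half predicted
gt = np.zeros((4, 4)); gt[:, :2] = 1       # left half ground truth
score = dice_score(pred, gt)               # overlap 4, sizes 8 + 8 -> 0.5
```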
Testing on different datasets
CoroSAM can be evaluated on any custom dataset that follows the expected directory layout.
Requirements
Your dataset must follow the ARCADE preprocessing output structure:
dataset_name/
+-- test/ (or any folder name)
+-- images/
| +-- dataset_name_1.png
| +-- dataset_name_2.png
| +-- ...
+-- annotations/
+-- dataset_name_1_gt.png
+-- dataset_name_2_gt.png
+-- ...
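A quick way to sanity-check a custom dataset against this layout is to pair each image with its `_gt` annotation (a hypothetical helper, not part of the repo):

```python
# Sketch: verify every image has a matching "<name>_gt.png" annotation,
# following the ARCADE-style layout above. Hypothetical helper, not repo code.
from pathlib import Path

def check_dataset(root):
    root = Path(root)
    images = sorted((root / "images").glob("*.png"))
    missing = [img.name for img in images
               if not (root / "annotations" / f"{img.stem}_gt.png").exists()]
    return len(images), missing   # image count and any images lacking a mask

# Build a tiny example dataset in a temp dir to demonstrate
import tempfile
tmp = Path(tempfile.mkdtemp())
(tmp / "images").mkdir(); (tmp / "annotations").mkdir()
(tmp / "images" / "case_1.png").touch()
(tmp / "annotations" / "case_1_gt.png").touch()
(tmp / "images" / "case_2.png").touch()   # deliberately left without a mask
count, missing = check_dataset(tmp)
```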
Quick test
test_path: "path/to/your_dataset/test"
checkpoint: "checkpoints/CoroSAM/CoroSAM_Final_Training.pt"
GUI application
Interactive segmentation with a user-friendly interface.
Launch GUI
Citation
If you find CoroSAM useful in your research, please cite our paper:
@article{corosam2025,
title={CoroSAM: adaptation of the Segment Anything Model for interactive segmentation in Coronary angiograms},
journal={Computer Methods and Programs in Biomedicine},
year={2025},
publisher={Elsevier},
doi={10.1016/j.cmpb.2025.108587},
url={https://www.sciencedirect.com/science/article/pii/S0169260725005887}
}
Acknowledgments
This project builds upon excellent open-source work:
- Segment Anything Model (SAM): facebookresearch/segment-anything
- MedSAM: bowang-lab/MedSAM
- SAM-Med2D: OpenGVLab/SAM-Med2D