CoroSAM

Interactive Coronary Artery Segmentation with Segment Anything Model

Paper · Getting Started · Pretrained Models · GUI


CoroSAM is a deep learning framework for interactive coronary artery segmentation in coronary angiograms, built on a computationally efficient SAM-based architecture with custom convolutional adapters.

This is the official implementation of the paper published in Computer Methods and Programs in Biomedicine.


Table of Contents

  • Installation
  • Pretrained checkpoints
  • ARCADE dataset
  • Preprocessing
  • Training
  • Testing
  • Testing on different datasets
  • GUI application
  • Citation
  • Acknowledgments

Installation

1. Create virtual environment & install PyTorch

First, install PyTorch following the official installation guide.

Recommended version: torch==2.6.0+cu124

2. Clone repository

git clone https://github.com/mife-git/corosam.git
cd corosam

3. Install dependencies

pip install -r requirements.txt

Pretrained checkpoints

External

Download and place in checkpoints/Pretrained/:

Model       Source  Path
----------  ------  --------------------------------------
LiteMedSAM  GitHub  checkpoints/Pretrained/lite_medsam.pth
SAM-Med2D   GitHub  checkpoints/Pretrained/sam-med2d_b.pth

CoroSAM

Our pretrained CoroSAM model trained on ARCADE is available here:

Download CoroSAM Checkpoint

Save as: checkpoints/CoroSAM/CoroSAM_Final_Training.pt


ARCADE dataset preparation

Download

  1. Download ARCADE from Zenodo
  2. Extract to your workspace
  3. Use only the syntax subset for this project
arcade/
+-- syntax/
    +-- train/
    +-- val/
    +-- test/
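Before preprocessing, it can be worth confirming that extraction produced the three split folders. A minimal sketch (the helper name is ours, not part of the repo; the demo builds a throwaway copy of the layout in a temp directory, so point it at your real arcade/syntax path in practice):

```python
from pathlib import Path
import tempfile

EXPECTED_SPLITS = ("train", "val", "test")

def check_arcade_layout(root):
    """Return the expected split folders that are missing under root."""
    root = Path(root)
    return [name for name in EXPECTED_SPLITS if not (root / name).is_dir()]

# Demo on a throwaway copy of the layout; in practice call e.g.
# check_arcade_layout("C:/path/to/arcade/syntax").
with tempfile.TemporaryDirectory() as tmp:
    syntax = Path(tmp) / "arcade" / "syntax"
    for split in EXPECTED_SPLITS:
        (syntax / split).mkdir(parents=True)
    print(check_arcade_layout(syntax))  # [] means the layout is complete
```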

Preprocessing

Transform ARCADE COCO annotations into training-ready format.

1. Configure paths

Edit preprocessing_config.yaml:

dataset_root: "C:/path/to/arcade/syntax"
seed: 2025

2. Run preprocessing pipeline

# Step 1: Convert COCO to binary masks
python preprocessing/convert_coco_to_binary_masks.py

# Step 2: Merge train+val and apply augmentation
python preprocessing/data_augmentation.py

# Step 3: Create k-fold splits
python preprocessing/split_dataset.py
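The internals of split_dataset.py are not shown here, but step 3 amounts to partitioning the augmented training pool into five folds using the configured seed (2025). A plausible sketch of that logic (the function name and fold layout are assumptions, not the repo's exact implementation):

```python
import random

def make_kfold_splits(filenames, n_folds=5, seed=2025):
    """Shuffle once with a fixed seed, deal files round-robin into folds.
    For split i, fold i is the validation set and the rest are training."""
    rng = random.Random(seed)
    files = sorted(filenames)
    rng.shuffle(files)
    folds = [files[i::n_folds] for i in range(n_folds)]
    splits = []
    for i in range(n_folds):
        train = [f for j, fold in enumerate(folds) if j != i for f in fold]
        splits.append({"train": train, "val": folds[i]})
    return splits

splits = make_kfold_splits([f"img_{k}.png" for k in range(10)])
print(len(splits))                                     # 5
print(len(splits[0]["train"]), len(splits[0]["val"]))  # 8 2
```

Fixing the seed makes the folds reproducible across runs, which is what the `seed: 2025` entry in preprocessing_config.yaml is for.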

Output structure:

syntax/
+-- train/                      # Original train set
+-- val/                        # Original val set
+-- test/                       # Test set
+-- train_all/                  # Merged train+val
|   +-- images/
|   +-- annotations/
|   +-- images_augmented/
|   +-- annotations_augmented/
+-- kf_split/                   # 5-fold cross-validation splits
    +-- set1/
    +-- set2/
    +-- ...

Training

Train CoroSAM on your data with flexible configurations.

Configuration

Edit train_config.yaml:

# Dataset
dataset_root: "C:/path/to/arcade/syntax"
k_fold_path: "C:/path/to/arcade/syntax/kf_split"

# Model
model_name: "LiteMedSAM"
exp_name: "CoroSAM_Training"

# Adapters
use_adapters: true
use_conv_adapters: true
channel_reduction: 0.25

# Training
n_folds: 5 # 5-fold CV or set to 1 for single run
epochs: 25
batch_size: 4
lr: 0.0005

# Logging
use_wandb: true
proj_name: "CoroSAM"
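The adapter internals are not spelled out in this README, but `channel_reduction: 0.25` suggests a bottleneck design in which each adapter projects its input channels down by that factor. A back-of-the-envelope sketch of what that means for parameter count (the layer shapes here are our assumption for illustration, not the paper's exact architecture):

```python
def bottleneck_adapter_params(c_in, channel_reduction=0.25, kernel=3):
    """Parameter count for a hypothetical bottleneck conv adapter:
    a 1x1 down-projection, a kernel x kernel conv at reduced width,
    and a 1x1 up-projection (biases ignored for simplicity)."""
    c_mid = int(c_in * channel_reduction)
    down = c_in * c_mid                    # 1x1 conv: c_in -> c_mid
    mid = c_mid * c_mid * kernel * kernel  # 3x3 conv at c_mid width
    up = c_mid * c_in                      # 1x1 conv: c_mid -> c_in
    return down + mid + up

# A 256-channel feature map with channel_reduction=0.25 gives a 64-wide bottleneck.
print(bottleneck_adapter_params(256))  # 69632
```

Keeping the adapters narrow is what makes fine-tuning computationally cheap relative to updating the full SAM backbone.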

Run training

K-fold cross-validation:

python training/train.py --config train_config.yaml

Single training run (set these in train_config.yaml in place of the k-fold settings):

n_folds: 1
train_path: "C:/path/to/arcade/syntax/train_all"
val_path: "C:/path/to/arcade/syntax/test"

Testing

Comprehensive evaluation with detailed metrics and visualizations.

Configure testing

Edit test_config.yaml:

# Model
model_name: "LiteMedSAM"
checkpoint: "checkpoints/CoroSAM/CoroSAM_Final_Training.pt"

# Dataset
test_path: "C:/path/to/arcade/syntax/test"
results_path: "results/CoroSAM_ARCADE_Test"

# Options
save_predictions: true # Save visualization images

Run testing

python testing/test.py --config test_config.yaml
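The README does not enumerate the metrics test.py reports, but for binary vessel masks the standard segmentation metric is the Dice coefficient; a minimal reference implementation for flat 0/1 masks (whether test.py uses exactly this formulation is an assumption):

```python
def dice_coefficient(pred, gt, eps=1e-7):
    """Dice = 2|P ∩ G| / (|P| + |G|) for flattened binary masks (0/1 values).
    eps keeps the ratio defined (and equal to 1.0) when both masks are empty."""
    inter = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return (2 * inter + eps) / (total + eps)

pred = [0, 1, 1, 1, 0, 0]
gt   = [0, 0, 1, 1, 1, 0]
print(round(dice_coefficient(pred, gt), 3))  # 0.667
```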

Testing on different datasets

CoroSAM can also be evaluated on custom datasets.

Requirements

Your dataset must follow the ARCADE preprocessing output structure:

dataset_name/
+-- test/                # or any folder name
    +-- images/
    |   +-- dataset_name_1.png
    |   +-- dataset_name_2.png
    |   +-- ...
    +-- annotations/
        +-- dataset_name_1_gt.png
        +-- dataset_name_2_gt.png
        +-- ...
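Under this convention, each `name.png` in images/ pairs with `name_gt.png` in annotations/. A small helper can check the pairing before launching test.py (the helper name is illustrative; the demo builds a throwaway dataset in a temp directory, so point it at your real test folder in practice):

```python
from pathlib import Path
import tempfile

def unmatched_images(dataset_dir):
    """Return images under images/ with no matching *_gt.png annotation."""
    root = Path(dataset_dir)
    missing = []
    for img in sorted((root / "images").glob("*.png")):
        if not (root / "annotations" / f"{img.stem}_gt.png").exists():
            missing.append(img.name)
    return missing

# Demo on a throwaway dataset with one annotation deliberately left out.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "images").mkdir()
    (root / "annotations").mkdir()
    for name in ("ds_1.png", "ds_2.png"):
        (root / "images" / name).touch()
    (root / "annotations" / "ds_1_gt.png").touch()
    print(unmatched_images(root))  # ['ds_2.png']
```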

Quick test

# test_config.yaml
test_path: "path/to/your_dataset/test"
checkpoint: "checkpoints/CoroSAM/CoroSAM_Final_Training.pt"

Then run:

python testing/test.py --config test_config.yaml

GUI application

Interactive segmentation with a user-friendly interface.

Launch GUI

python gui/gui_corosam.py

Citation

If you find CoroSAM useful in your research, please cite our paper:

@article{corosam2025,
  title={CoroSAM: adaptation of the Segment Anything Model for interactive segmentation in coronary angiograms},
  journal={Computer Methods and Programs in Biomedicine},
  year={2025},
  publisher={Elsevier},
  doi={10.1016/j.cmpb.2025.108587},
  url={https://www.sciencedirect.com/science/article/pii/S0169260725005887}
}

Acknowledgments

This project builds upon excellent open-source work, notably the Segment Anything Model, LiteMedSAM, and SAM-Med2D.

License

MIT license
