Neural Affine Optimization for Image Registration

Hang Zhang, Jiacheng Wang, Xiang Chen, Renjiu Hu, Min Liu, Yaonan Wang, Rongguang Wang, Jinming Duan, Noel Codella
Cornell University | Vanderbilt University | Hunan University | University of Pennsylvania | University of Manchester | Microsoft

Abstract

Affine registration is crucial in medical image analysis but faces challenges when matching sparse features, such as retinal vessels and filamentous collagen fibers in second-harmonic generation (SHG) and bright-field (BF) images. End-to-end learning-based approaches struggle because these sparse features yield minimal effective gradients during loss back-propagation, while descriptor-matching methods, though helpful, lack a fidelity loss and leave the matching process open-loop. To address these issues, we propose Neural Affine Optimization (NeOn), which implicitly approximates discrete optimization with a few neural network layers, combined with a sampling-regression layer to handle affine transformations. NeOn enables iterative refinement with a fidelity loss and provides a flexible transition between a purely affine configuration and a linear weighted blend of affine and deformation fields.
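
The final sentence describes a weighted blend of an affine-induced field and a deformation field. As a minimal PyTorch sketch only (the function and argument names are assumptions, not the repo's API), such a blend could look like:

```python
import torch
import torch.nn.functional as F

def blend_affine_and_deformation(theta, residual_flow, alpha):
    """Illustrative only: alpha = 1 keeps a purely affine field; alpha < 1
    blends in a free-form deformation. `theta` is a (B, 2, 3) affine matrix in
    normalized coordinates; `residual_flow` is a (B, 2, H, W) displacement field."""
    B, _, H, W = residual_flow.shape
    identity = torch.eye(2, 3, dtype=residual_flow.dtype,
                         device=residual_flow.device).expand(B, -1, -1)
    base_grid = F.affine_grid(identity, (B, 1, H, W), align_corners=False)
    warped_grid = F.affine_grid(theta, (B, 1, H, W), align_corners=False)
    affine_flow = (warped_grid - base_grid).permute(0, 3, 1, 2)   # (B, 2, H, W)
    return alpha * affine_flow + (1.0 - alpha) * residual_flow
```

With alpha=1 (as in the commands below) the result stays purely affine; lowering alpha mixes in the residual deformation.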

Demo: Deformation Change Across Iterations

Example 1: Retinal Vessel Alignment

Deformation Change GIF - Example 1
Moving image deforming over iterations.
Static Deformation Visualization - Example 1
Fixed image.

Example 2: SHG-BF Image Alignment

Deformation Change GIF - Example 2
Moving image deforming over iterations.
Static Deformation Visualization - Example 2
Fixed image.

Problem Statement

Gradient Backflow Limitation
Visual illustration of the limited gradient backflow issue. The comparison shows retinal vessel and SHG images with their corresponding horizontal translation differences, highlighting the areas where the loss gradient can be effectively back-propagated.
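
To make the limitation concrete, here is a small, self-contained PyTorch toy (not from the paper's code): an intensity loss between sparse, binary vessel-like images provides almost no usable gradient for a translation parameter, because most of the field of view is background.

```python
import torch
import torch.nn.functional as F

# Toy sketch: a sparse vessel-like structure and a horizontally shifted copy.
fixed = torch.zeros(1, 1, 64, 64)
fixed[..., :, 30:33] = 1.0                       # thin vertical "vessel"
moving = torch.roll(fixed, shifts=12, dims=-1)   # same vessel, shifted horizontally

tx = torch.zeros(1, requires_grad=True)          # horizontal translation to estimate
theta = torch.stack([
    torch.cat([torch.tensor([1.0, 0.0]), tx]),   # [1, 0, tx]
    torch.tensor([0.0, 1.0, 0.0]),
]).unsqueeze(0)                                  # (1, 2, 3) affine matrix

grid = F.affine_grid(theta, moving.shape, align_corners=False)
warped = F.grid_sample(moving, grid, align_corners=False)
loss = F.mse_loss(warped, fixed)
loss.backward()

# Gradients flow only through the few pixels where the sparse structures (or
# their boundaries) interact; the large zero-valued background contributes
# nothing, so the update signal for tx is weak unless the vessels already overlap.
print(tx.grad)
```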

Method

NeOn Framework
The proposed Neural Affine Optimization (NeOn) framework. Our method combines neural network layers with a sampling-regression approach for handling affine transformations.
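
The sampling-regression layer itself is specified in the paper; as an illustration under my own assumptions (names, shapes, and the least-squares formulation are not taken from the repo), one standard way to turn sampled dense displacements into an affine transform is a closed-form regression:

```python
import torch

def regress_affine_from_flow(flow, num_samples=2048):
    """Hedged sketch: fit a 2x3 pixel-space affine to a dense displacement field
    (B, 2, H, W; channels assumed to be dx, dy) by sampling coordinates and
    solving a batched least-squares problem."""
    B, _, H, W = flow.shape
    device = flow.device
    ys, xs = torch.meshgrid(torch.arange(H, dtype=flow.dtype, device=device),
                            torch.arange(W, dtype=flow.dtype, device=device),
                            indexing="ij")
    coords = torch.stack([xs.flatten(), ys.flatten()], dim=-1)          # (H*W, 2)
    idx = torch.randperm(H * W, device=device)[:num_samples]
    src = coords[idx]                                                   # sampled source points
    disp = flow.permute(0, 2, 3, 1).reshape(B, -1, 2)[:, idx]           # (B, N, 2)
    dst = src.unsqueeze(0) + disp                                       # matched target points
    ones = torch.ones(B, src.shape[0], 1, dtype=flow.dtype, device=device)
    A = torch.cat([src.unsqueeze(0).expand(B, -1, -1), ones], dim=-1)   # (B, N, 3)
    # Solve A @ theta^T ≈ dst in the least-squares sense for each batch item.
    theta_t = torch.linalg.lstsq(A, dst).solution                       # (B, 3, 2)
    return theta_t.transpose(1, 2)                                      # (B, 2, 3)
```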

Datasets

FIRE Dataset

Mono-modal retinal vessel dataset with 39 subjects and 134 image pairs (2912×2912 pixels, 45° FOV)

CF-FA Dataset

Multi-modal diabetic retinopathy dataset with 59 subjects, combining Color Fundus (CF) and Fluorescein Angiography (FA) images (720×576 pixels)

SHG-BF Dataset

Learn2Reg Challenge 2024 Task 3 dataset with 10 image pairs, combining Bright Field microscopy with Second Harmonic Generation imaging

Results

Qualitative Results
Qualitative comparison of registration results on the FIRE dataset, demonstrating NeOn's superior performance in handling sparse features.
For the quantitative comparison, please refer to the original paper.

Implementation

FIRE Dataset

  1. Feature Extraction:
    • Use LWNet (GitHub) to extract vessel features
    • Store extracted features in 'FIRE_cnn' folder
  2. Run Optimization:
    python test_neon_fire.py --ori_size '(2912,2912)' --img_size '(1024,1024)' temp=0.001 ks=1 alpha=1
    Parameters:
    • ori_size: Original image dimensions
    • img_size: Input image size for processing
    • temp: Temperature parameter (see the sketch after this list)
    • ks: Kernel size
    • alpha: Blending parameter between the affine and deformation components
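
The precise role of temp is defined by the code; offered purely as an assumption, a common use of a temperature in methods that approximate discrete optimization is a soft argmax over candidate displacements, which becomes nearly discrete as the temperature shrinks:

```python
import torch

def soft_argmax_displacement(cost, offsets, temp=0.001):
    """Hypothetical illustration (names and shapes are assumptions, not the
    repo's API). cost: (B, K, H, W) matching cost for K candidate offsets per
    pixel; offsets: (K, 2) table of those candidate displacements."""
    weights = torch.softmax(-cost / temp, dim=1)               # ~one-hot as temp -> 0
    return torch.einsum("bkhw,kc->bchw", weights, offsets)     # expected flow, (B, 2, H, W)
```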

CF-FA Dataset

  1. Feature Extraction:
    • For fundus images: Use LWNet (GitHub)
    • For FA images: Use DeepVesselSeg4FA (GitHub)
    • Store all features in 'CFFA_cnn' folder
  2. Run Optimization:
    python test_neon_cffa.py --ori_size '(576,720)' --img_size '(576,720)' temp=0.001 ks=1 alpha=1
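
Both scripts take --ori_size and --img_size. When the two differ (as for FIRE above), a pixel-space affine estimated at the processing resolution has to be mapped back to the original resolution; an affine kept in normalized [-1, 1] coordinates would transfer directly. A generic helper, not taken from this repo:

```python
import numpy as np

def rescale_pixel_affine(theta_px, img_size, ori_size):
    """Map a 2x3 pixel-space affine estimated at img_size back to ori_size.
    Both sizes are assumed to be (H, W) tuples."""
    sy = ori_size[0] / img_size[0]
    sx = ori_size[1] / img_size[1]
    up = np.diag([sx, sy, 1.0])                  # scale small-res output up
    down = np.diag([1.0 / sx, 1.0 / sy, 1.0])    # scale full-res input down
    A = np.vstack([theta_px, [0.0, 0.0, 1.0]])   # 3x3 homogeneous form
    return (up @ A @ down)[:2]                   # back to 2x3
```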

SHG-BF Dataset

  1. Feature Extraction Training:
    • We provide contrastive-based COMIR feature extraction with XFeat prealignment (a generic contrastive-loss sketch follows below)
    • python train_shgbf.py -m tiramisuAndXfeatComplex111Msk1Ps128 -bs 1 --gpu_id 0 \
          ti_pretrained=1 enable_grad_xfeat=1 xf_pretrained=1 --is_msk 1 --patch_size 128
  2. Run Optimization:
    python test_neon_shgbf.py -m tiramisuAndXfeatComplex -bs 1 --gpu_id 0 --load_ckpt none --is_first_half 1
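
COMIR-style feature extraction learns modality-agnostic representations with a contrastive objective; the exact loss is defined by the paper and the training code. As a generic illustration only (function and variable names are assumptions), an InfoNCE-style loss between corresponding SHG and BF patch embeddings looks like:

```python
import torch
import torch.nn.functional as F

def info_nce(feat_shg, feat_bf, temperature=0.07):
    """Generic InfoNCE sketch, not COMIR's exact objective.
    feat_shg, feat_bf: (N, D) embeddings of N corresponding patches."""
    z1 = F.normalize(feat_shg, dim=1)
    z2 = F.normalize(feat_bf, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.shape[0], device=z1.device)
    # Matching SHG/BF patches are positives; all other pairs act as negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```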

Citation

@article{zhang2024neural,
    title={Neural Affine Optimization for Image Registration},
    author={Zhang, Hang and Wang, Jiacheng and Chen, Xiang and Hu, Renjiu and Liu, Min and 
            Wang, Yaonan and Wang, Rongguang and Duan, Jinming and Codella, Noel},
    journal={IEEE Transactions on Medical Imaging},
    year={2024}
}

@article{wang2024fidelity,
    title={Fidelity-Imposed Displacement Editing for the Learn2Reg 2024 SHG-BF Challenge},
    author={Wang, Jiacheng and Chen, Xiang and Hu, Renjiu and Wang, Rongguang and Liu, Min and 
            Wang, Yaonan and Wang, Jiazheng and Li, Hao and Zhang, Hang},
    journal={arXiv preprint arXiv:2410.20812},
    year={2024}
}