
Cache-aware GPU memory scheduling scheme for CT back-projection

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

11 Scopus citations

Abstract

Graphics processing units (GPUs) are well suited to computation-intensive tasks and are among the fastest solutions for performing Computed Tomography (CT) reconstruction. As previous research shows, the bottleneck of GPU implementations is not computational power but memory bandwidth. We propose a cache-aware memory-scheduling scheme for back-projection, which ensures better load balancing between the GPU processors and the GPU memory. The proposed reshuffling method can be applied directly to existing GPU-accelerated CT reconstruction pipelines. The experimental results show that our optimization achieves speedups ranging from 1.18x to 1.48x. Our cache-optimization method is particularly effective for low-resolution volumes with high-resolution projections.
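The paper itself provides no code, but the core idea — reordering (reshuffling) the voxel traversal so that neighboring voxels, whose rays hit neighboring detector cells, are processed together and thus reuse cached sinogram reads — can be sketched on the CPU. The snippet below is a minimal illustration with hypothetical function names and a simplified 2D parallel-beam geometry, not the authors' actual GPU scheduling scheme; it contrasts a naive row-major traversal with a tiled, cache-friendlier one that computes the identical result.

```python
import numpy as np

def backproject_naive(sino, angles, n):
    """Plain 2D back-projection: voxels visited in row-major order.
    sino has shape (num_angles, num_detectors)."""
    vol = np.zeros((n, n))
    cx = (n - 1) / 2.0                  # volume center
    dets = sino.shape[1]
    dc = (dets - 1) / 2.0               # detector center
    for a, theta in enumerate(angles):
        c, s = np.cos(theta), np.sin(theta)
        for y in range(n):
            for x in range(n):
                # Detector coordinate hit by the ray through voxel (x, y)
                t = (x - cx) * c + (y - cx) * s + dc
                i = int(round(t))
                if 0 <= i < dets:
                    vol[y, x] += sino[a, i]
    return vol

def backproject_tiled(sino, angles, n, tile=8):
    """Same computation, but voxels are visited tile by tile, so that
    spatially close voxels (which read nearby detector cells) are
    scheduled together -- a toy analogue of cache-aware scheduling."""
    vol = np.zeros((n, n))
    cx = (n - 1) / 2.0
    dets = sino.shape[1]
    dc = (dets - 1) / 2.0
    for a, theta in enumerate(angles):
        c, s = np.cos(theta), np.sin(theta)
        for ty in range(0, n, tile):            # iterate over tiles...
            for tx in range(0, n, tile):
                for y in range(ty, min(ty + tile, n)):   # ...then voxels
                    for x in range(tx, min(tx + tile, n)):
                        t = (x - cx) * c + (y - cx) * s + dc
                        i = int(round(t))
                        if 0 <= i < dets:
                            vol[y, x] += sino[a, i]
    return vol
```

Because the tiling only permutes the order in which voxels are updated, both traversals produce the same volume; on real hardware the tiled order improves locality of the sinogram reads, which is the effect the paper's GPU scheme exploits.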

Original language: English
Title of host publication: IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2010
Pages: 2248-2251
Number of pages: 4
DOIs
State: Published - 2010
Event: 2010 IEEE Nuclear Science Symposium, Medical Imaging Conference, NSS/MIC 2010 and 17th International Workshop on Room-Temperature Semiconductor X-ray and Gamma-ray Detectors, RTSD 2010 - Knoxville, TN, United States
Duration: Oct 30, 2010 - Nov 6, 2010

Publication series

Name: IEEE Nuclear Science Symposium Conference Record

Conference

Conference: 2010 IEEE Nuclear Science Symposium, Medical Imaging Conference, NSS/MIC 2010 and 17th International Workshop on Room-Temperature Semiconductor X-ray and Gamma-ray Detectors, RTSD 2010
Country/Territory: United States
City: Knoxville, TN
Period: 10/30/10 - 11/6/10
