Department of Information Technology, Lebanese French University, Erbil, Kurdistan Region, Iraq
Received date: November 06, 2015; Accepted date: January 08, 2016; Published date: January 18, 2016
Citation: Al-Ameen Z (2016) An Expeditious Deblurring for Computed Tomography Medical Images Using a Gaussian Prior Deconvolution. J Tomogr Simul 1:103.
Copyright: © 2016 Al-Ameen Z. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The blurring artifact may affect computed tomography (CT) images due to various real-world limitations. Such prevalent degradation is usually difficult to avert, and it contributes greatly to concealing important medical information that already exists in an image. As a consequence, the visual quality of the recorded images is reduced tremendously. Hence, different deblurring concepts have been introduced to address this ill-posed problem. The drawbacks of many contemporary deblurring methods are high complexity and long processing times. Accordingly, a Gaussian prior deconvolution is adopted in this study because of its ability to provide efficient and fast processing, which is convenient for CT images. Intensive tests have been conducted to attest the validity of this algorithm, for which both naturally and synthetically degraded CT images are utilized. Furthermore, the quality of the synthetic data is measured using two advanced image quality metrics, feature similarity and structural similarity. The results obtained from the conducted experiments and their related performance assessments revealed the effectiveness and favorability of the adopted algorithm, in that it outperformed many well-known algorithms in terms of recorded accuracy, speed, and visual quality.
Computed tomography; Deconvolution; Gaussian prior; Image deblurring
Computed tomography (CT) is an imaging modality that is widely used in the medical field. However, diverse artifacts are perceived in CT images, including blur, low contrast, metal, noise, out-of-field, and ring artifacts. One artifact of interest is image blurring, which conceals important medical information and often leads to poor-quality results. Such an artifact occurs mainly due to the loss of data during acquisition, the use of a low radiation dose, and wrong system settings. The science of medical imaging is continuously improving to produce better-quality images, which leads to better detection and diagnosis of many diseases. Hence, this artifact must be reduced to obtain results of better visual quality. In the mathematical context, deconvolution is defined as an algorithm-based procedure that is utilized to reverse the effects of convolution on obtained data. This concept is widely used in the fields of computer vision and image processing because of its potential use in many engineering and scientific disciplines. In the field of image processing, deconvolution, which is also called deblurring, is the process of recovering a good-quality image from its degraded version. Hence, different deblurring concepts have been introduced to address this ill-posed problem. The common deblurring methods may have many drawbacks, such as low recovering ability, noise accentuation, ringing artifacts, cartoon-like artifacts, and staircase artifacts. On the other hand, the drawbacks of many contemporary deblurring methods are high complexity and long processing times. Hence, a Gaussian prior deconvolution is adopted in this study because of its ability to provide efficient and fast processing, which is convenient for CT images. For this, both naturally and synthetically degraded CT images are utilized to attest the validity of this algorithm.
In addition, the quality of the synthetic data is measured using two advanced image quality metrics, feature similarity and structural similarity. Finally, the remainder of this article is organized as follows: in Section 4, the adopted method is intensively explained; in Section 5, the experimental results are exhibited and discussed in detail; in Section 6, a concise conclusion is given.
Gaussian prior deconvolution
A Gaussian prior deconvolution was described in detail by Levin et al., who also clarified in depth its use with a predesigned point spread function (PSF) to process digital images degraded by motion blur. In general, using a Gaussian prior to process digital images is expeditious. However, such an algorithm tends to produce boundary artifacts and wrap-around artifacts, and to over-smooth the processed images. The boundary artifacts are not a key problem, especially if the processed image is large. Likewise, the wrap-around artifacts were dealt with using a simple technique. Moreover, additional smoothness is desirable in many situations because it helps to attenuate the latent noise of CT images. It is worth stating that a Gaussian prior is more likely to allocate its derivatives equally over the entire image. This algorithm functions in the frequency domain by utilizing the following equation:

R(v,w) = [Ĥ(v,w) · I(v,w)] / [Ĥ(v,w) · H(v,w) + Ψ (Ĝx(v,w) · Gx(v,w) + Ĝy(v,w) · Gy(v,w))]
where R(v,w) is the recovered image, H(v,w) is the optical transfer function (OTF), which is the Fourier transform of the PSF, Ĥ(v,w) is the complex conjugate of the OTF, I(v,w) is the Fourier transform of the blurry image, Ψ is a weighting scalar, (·) denotes element-wise multiplication, Gx and Gy are the Fourier transforms of the horizontal and vertical derivative filters gx = [1 −1] and gy = [1 −1]ᵀ, padded so that the sizes of Gx and Gy match the size of the processed image, Ĝx and Ĝy are the complex conjugates of Gx and Gy, and (v,w) are coordinates in the frequency domain.
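The frequency-domain formula above can be sketched as follows. This is a minimal illustration using NumPy's FFT routines; the function name, the zero-padding strategy for the PSF and derivative filters, and the default value of the weighting scalar Ψ (`weight`) are illustrative choices, not taken from the original implementation.

```python
import numpy as np

def gaussian_prior_deconvolution(blurred, psf, weight=0.01):
    """Deblur an image in the frequency domain with a Gaussian prior.

    `weight` plays the role of the weighting scalar Psi in the formula;
    the PSF and the derivative filters are zero-padded to the image size.
    """
    rows, cols = blurred.shape

    # Optical transfer function H: FFT of the PSF, padded to image size.
    H = np.fft.fft2(psf, s=(rows, cols))

    # Frequency responses Gx, Gy of the horizontal and vertical
    # derivative filters gx = [1 -1] and gy = [1 -1]^T.
    Gx = np.fft.fft2(np.array([[1.0, -1.0]]), s=(rows, cols))
    Gy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(rows, cols))

    # Fourier transform I of the blurry image.
    I = np.fft.fft2(blurred)

    # R = (conj(H) * I) / (conj(H) * H + Psi * (conj(Gx)*Gx + conj(Gy)*Gy))
    numerator = np.conj(H) * I
    denominator = (np.conj(H) * H
                   + weight * (np.conj(Gx) * Gx + np.conj(Gy) * Gy))
    R = numerator / denominator

    # Back to the spatial domain; the imaginary part is numerical noise.
    return np.real(np.fft.ifft2(R))
```

Note that the denominator cannot vanish: the derivative terms are zero only at the DC frequency, where |H|² equals one for a normalized PSF.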
In this section, the indispensable empirical arrangements are described in detail. The necessary experiments are performed using a dataset of naturally and synthetically blurred CT images to apprehend the performance of the adopted algorithm and to provide qualitative and quantitative assessments. In this study, the synthetically blurred images are used for comparison purposes, while the naturally blurred images are utilized for empirical purposes. The synthetically degraded images are generated by convolving images of sharp details with a PSF of a Gaussian type. Such a PSF can be created using the following equation:

h(x,y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
where σ is a positive constant that signifies the intensity of blurring. To demonstrate the processing potential of the adopted algorithm, it is compared with prominent deblurring algorithms, namely truncated singular value decomposition, Landweber, and Richardson-Lucy. For further performance assessment, trustworthy image quality metrics, namely feature similarity (FSIM) and structural similarity (SSIM), are utilized with the synthetically blurred CT images. These metrics provide enriched information about the feature and structural resemblance between a reference image and its degraded or retrieved versions. The output of the FSIM and SSIM metrics falls in the range between zero and one, in which a value near zero denotes poor image quality, while a value near one denotes high image quality. For both natural and synthetic data, magnified portions of CT images are used to demonstrate the important image details and to focus on the contents and contours of the processed images (Figures 1-3).
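The synthetic degradation step described above can be sketched as follows, assuming the standard normalized isotropic Gaussian PSF given by the equation; the kernel size and the function names are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(size=15, sigma=1.5):
    """Build a normalized Gaussian point spread function.

    Implements h(x, y) proportional to exp(-(x^2 + y^2) / (2*sigma^2));
    the 15x15 kernel size is an illustrative choice, not from the paper.
    """
    half = size // 2
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    # Normalize so the blur preserves the total image intensity.
    return psf / psf.sum()

def blur_synthetically(image, psf):
    """Convolve an image with the PSF in the frequency domain
    (circular boundary conditions)."""
    H = np.fft.fft2(psf, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

Larger values of σ spread the kernel mass further from its center and therefore produce stronger blurring, matching the two degradation levels (σ=1.5 and σ=2.5) used in the comparisons.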
Figures 1-3 exhibit the naturally blurred CT images, their magnified portions, and their versions processed by the adopted algorithm. Figure 4 shows the chosen true CT images and their magnified portions for comparison purposes. Figures 5 and 6 illustrate the results of the comparisons between the adopted algorithm and the other comparable algorithms. Table 1 displays the recorded accuracies and processing times of the performed comparisons. Figures 7 and 8 represent the analytical graphs of Table 1. In this study, all experiments are carried out using MATLAB on a machine with 8 GB of memory and a 2.3 GHz Core i5 processor. As seen in Figures 1-3, satisfactory results are achieved for the naturally degraded images, wherein the latent medical details are displayed better and clearer than in their degraded versions. However, certain boundary artifacts appeared in the processed images; such artifacts are prevalent and inevitable in many deblurring algorithms (Figures 4-8 and Table 1).
Figure 5: Processing a synthetically blurred CT image: (a) A magnified portion of a CT image; (b) Blurred portion by σ=1.5; images are deblurred by: (c) Richardson-Lucy (Iterations=20); (d) Landweber (Iterations=25); (e) Truncated singular value decomposition (Tolerance=0.1); (f) Gaussian prior (Ψ=0.01).
Figure 6: Processing a synthetically blurred CT image: (a) A magnified portion of a CT image; (b) Blurred portion by σ=2.5; images are deblurred by: (c) Richardson-Lucy (Iterations=25); (d) Landweber (Iterations=30); (e) Truncated singular value decomposition (Tolerance=0.05); (f) Gaussian prior (Ψ=0.0095).
Algorithm | σ | Parameter | FSIM | SSIM | Time (s)
Truncated Singular Value Decomposition | 1.5 | Tolerance=0.1 | 0.8690 | 0.7603 | 0.05664
Note: The bold values indicate the best achieved results
Table 1: The recorded accuracies and processing times of the performed comparisons.
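SSIM scores of the kind reported in Table 1 can be illustrated with a compact variant of the metric. Note that the published SSIM uses an 11×11 Gaussian sliding window and averages local scores, whereas this sketch evaluates the same formula once from global image statistics, so it is only an approximation for illustration; the function name and the `data_range` parameter are assumptions.

```python
import numpy as np

def global_ssim(ref, test, data_range=1.0):
    """Simplified SSIM computed from global statistics (no sliding window).

    Uses the standard stabilizing constants C1 = (0.01 L)^2 and
    C2 = (0.03 L)^2, where L is the dynamic range of the pixel values.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

A perfect reconstruction yields a score of one, and any luminance, contrast, or structural deviation pulls the score toward zero, which is how the Table 1 values rank the compared algorithms.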
As apparent from the results of the comparisons in Figures 4-8 and Table 1, the adopted algorithm delivered the best performance in terms of recorded accuracy, speed, and visual quality, as it scored the best results with the fastest implementations. Moreover, the important contours and fine details of its retrieved images appear clearer than those of the other comparable algorithms. Regarding the comparable algorithms, the truncated singular value decomposition was the poorest in terms of visual quality and quality metrics; however, its implementation time was relatively fast. On the other hand, the Landweber algorithm performed well in many aspects, as its performance was quite similar to the adopted algorithm; however, its implementation time was rather high, in that it was nearly ten times higher than that of the adopted algorithm. As for Richardson-Lucy, it achieved moderate performance concerning the perceived quality; however, its implementation time was the highest among the comparable algorithms. From the analytical graphs in Figures 7 and 8, the recorded scores vary from one algorithm to another due to the implementation of various deblurring concepts. Nonetheless, the adopted algorithm achieved acceptable scores on the FSIM and SSIM quality metrics. This specifies that the adopted algorithm delivered a high similarity between the reference and the recovered images in terms of vital features and structural components. This is important because such results are accomplished using a straightforward algorithm that involves few parameters and simple calculations and does not employ the iterative scheme that is predominant in many deblurring algorithms. Finally, it is believed that the application of a Gaussian prior deconvolution can be extended to further medical modalities, such as positron emission tomography, magnetic resonance imaging, and ultrasound.
In this study, the feasibility of using Gaussian prior deconvolution with blurry CT images is demonstrated by testing it with two categories of degraded images. Accordingly, the naturally degraded images are employed as the experimental data, while the synthetically degraded images are utilized as the comparison data. The performance of the adopted algorithm is satisfactory to some extent, because good-quality results are obtained in short processing times. However, its unaddressed drawback is the undesirable boundary artifacts, which are prevalent in many deblurring methods. In terms of benchmarking, two advanced image quality metrics, FSIM and SSIM, are used with the synthetic data to compare the quality of the results of the adopted algorithm with other state-of-the-art deblurring algorithms. As a result, the Gaussian prior deconvolution produced promising results, since it outperformed the other comparable methods in terms of recorded accuracy, speed, and visual quality.