
An Expeditious Deblurring for Computed Tomography Medical Images Using a Gaussian Prior Deconvolution

Al-Ameen Z*

Department of Information Technology, Lebanese French University, Erbil, Kurdistan Region, Iraq

*Corresponding Author:
Al-Ameen Z
Department of Information Technology
Lebanese French University
Erbil, Kurdistan Region, Iraq
Tel: 00964750441 2721
E-mail: [email protected]

Received date: November 06, 2015; Accepted date: January 08, 2016; Published date: January 18, 2016

Citation: Al-Ameen Z (2016) An Expeditious Deblurring for Computed Tomography Medical Images Using a Gaussian Prior Deconvolution. J Tomogr Simul 1:103.

Copyright: © 2016 Al-Ameen Z. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

The blurring artifact may affect computed tomography (CT) images due to various real-world limitations. Such prevalent degradation is usually difficult to avert, and it contributes greatly to concealing important medical information contained in an image. As a consequence, the visual quality of the recorded images is reduced considerably. Hence, different deblurring concepts have been introduced to address this ill-posed problem. The drawbacks of many contemporary deblurring methods are high complexity and long processing times. Therefore, a Gaussian prior deconvolution is adopted in this study because of its ability to provide efficient and fast processing, which is convenient for CT images. Intensive tests were conducted to attest the validity of this algorithm, using both naturally and synthetically degraded CT images. Furthermore, the quality of the synthetic data is measured using two advanced image quality metrics: feature similarity and structural similarity. The results obtained from the conducted experiments and the related performance assessments reveal the effectiveness of the adopted algorithm, in that it outperformed several well-known algorithms in terms of recorded accuracy, speed and visual quality.

Keywords

Computed tomography; Deconvolution; Gaussian prior; Image deblurring

Introduction

Computed tomography (CT) is an imaging modality that is widely used in the medical field [1]. However, diverse artifacts can be perceived in CT images, including blur [2], low-contrast [3], metal [4], noise [5], out-of-field [6] and ring [7] artifacts. One artifact of interest is image blurring, which conceals important medical information and often leads to poor-quality results [8]. This artifact occurs mainly due to the loss of data during acquisition [9], the use of a low radiation dose [2] and incorrect system settings [10]. The science of medical imaging is continuously improving to produce better-quality images, which leads to better detection and diagnosis of many diseases [11]. Hence, this artifact must be reduced to obtain results of better visual quality.

In the mathematical context, deconvolution is defined as an algorithm-based procedure that is utilized to reverse the effects of convolution on recorded data. This concept is widely used in the fields of computer vision and image processing because of its potential use in many engineering and scientific disciplines [12]. In the field of image processing, deconvolution, which is also called deblurring, is the process of recovering a good-quality image from its degraded version [13]. Hence, different deblurring concepts have been introduced to address this ill-posed problem. The common deblurring methods may have many drawbacks, such as low recovery ability, noise accentuation, ringing artifacts, cartoon-like artifacts and staircase artifacts. In addition, the drawbacks of many contemporary deblurring methods are high complexity and long processing times. Therefore, a Gaussian prior deconvolution is adopted in this study because of its ability to provide efficient and fast processing, which is convenient for CT images. Both naturally and synthetically degraded CT images are utilized to attest the validity of this algorithm. In addition, the quality of the synthetic data is measured using two advanced image quality metrics: feature similarity and structural similarity. Finally, the remainder of this article is organized as follows: the next section explains the adopted method, the subsequent section presents and discusses the experimental results in detail, and the final section gives a concise conclusion.

Gaussian prior deconvolution

Levin et al. [14] described the Gaussian prior deconvolution in detail and clarified in depth its use with a predesigned point spread function (PSF) to process digital images degraded by motion blur. In general, using a Gaussian prior to process digital images is expeditious. However, this algorithm tends to produce boundary artifacts and wrap-around artifacts and to over-smooth the processed images. The boundary artifacts are not a key problem, especially if the processed image is large. Likewise, the wrap-around artifacts can be dealt with using a simple technique. Moreover, additional smoothness is desirable in many situations because it helps to attenuate the latent noise of CT images. It is worth stating that a Gaussian prior tends to distribute its derivatives equally over the entire image. This algorithm functions in the frequency domain by utilizing the following equation:

$$R(v,w)=\frac{\hat{H}(v,w)\cdot I(v,w)}{\hat{H}(v,w)\cdot H(v,w)+\Psi\left(\hat{G}_{x}(v,w)\cdot G_{x}(v,w)+\hat{G}_{y}(v,w)\cdot G_{y}(v,w)\right)}$$

where R(v,w) is the recovered image in the frequency domain, H(v,w) is the optical transfer function (OTF), which is the Fourier transform of the PSF, Ĥ(v,w) is the complex conjugate of the OTF, I(v,w) is the Fourier transform of the blurry image, Ψ is a weighting scalar, (·) denotes element-wise multiplication, Gx and Gy are the Fourier transforms of the horizontal and vertical derivative filters gx = [1, −1] and gy = [1, −1]ᵀ, computed at the same size as the processed image, Ĝx and Ĝy are the complex conjugates of Gx and Gy, and (v,w) are coordinates in the frequency domain.
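For concreteness, the following Python/NumPy sketch applies the formula above to a blurry image and its PSF. It is only an illustration of the computation: the function name, the default value of the weight Ψ and the reliance on NumPy's FFT routines are assumptions of this sketch, not the MATLAB implementation used for the experiments reported below.

```python
import numpy as np

def gaussian_prior_deconvolution(blurry, psf, weight=0.01):
    """Deblur an image in the frequency domain with a Gaussian derivative prior.

    Minimal sketch of the equation above; the function name, default weight
    and use of NumPy's FFT routines are illustrative assumptions.
    """
    shape = blurry.shape
    # H: optical transfer function, i.e., the FFT of the PSF at the image size.
    H = np.fft.fft2(psf, s=shape)
    I = np.fft.fft2(blurry)
    # Gx, Gy: FFTs of the derivative filters gx = [1, -1] and gy = [1, -1]^T,
    # computed at the same size as the processed image.
    Gx = np.fft.fft2(np.array([[1.0, -1.0]]), s=shape)
    Gy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=shape)
    # R = (conj(H) . I) / (conj(H) . H + Psi (conj(Gx) . Gx + conj(Gy) . Gy))
    denom = np.conj(H) * H + weight * (np.conj(Gx) * Gx + np.conj(Gy) * Gy)
    R = np.conj(H) * I / denom
    # Return to the spatial domain; the result is real up to round-off error.
    return np.real(np.fft.ifft2(R))
```

Because the whole computation reduces to a handful of FFTs and element-wise operations, the cost is dominated by the transforms themselves, which is what makes the method non-iterative and fast.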

Results and Discussion

In this section, the essential empirical arrangements are described in detail. The experiments are performed using a dataset of naturally and synthetically blurred CT images to assess the performance of the adopted algorithm and to provide qualitative and quantitative evaluations. In this study, the synthetically blurred images are used for comparison purposes, while the naturally blurred images are utilized for empirical purposes. The synthetically degraded images are generated by convolving sharp images with a PSF of Gaussian type. Such a PSF can be created using the following equation [15]:

$$h(x,y)=\frac{1}{2\pi\sigma^{2}}\exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)$$

where σ is a positive constant that signifies the intensity of blurring and (x,y) are spatial coordinates. To demonstrate the processing potential of the adopted algorithm, it is compared with several prominent deblurring algorithms: truncated singular value decomposition [16], Landweber [17] and Richardson-Lucy [18]. For further performance assessment, trustworthy image quality metrics, namely feature similarity (FSIM) [19] and structural similarity (SSIM) [20], are utilized with the synthetically blurred CT images. These metrics provide enriched information about the feature and structural resemblance between a reference image and its degraded or retrieved versions. The output of the FSIM and SSIM metrics falls in the range between zero and one, in which a value near zero denotes poor image quality, while a value near one denotes high image quality. For both natural and synthetic data, magnified portions of CT images are used to demonstrate the important image details and to focus on the contents and contours of the processed images (Figures 1-3).
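As a rough illustration of how the synthetic test data can be prepared and scored, the sketch below builds a Gaussian PSF from the equation above, blurs a stand-in image and computes SSIM. The use of Python with SciPy and scikit-image, the 256 × 256 random stand-in image and the helper name gaussian_psf are assumptions of this sketch rather than details taken from the paper; FSIM is omitted because these libraries provide no standard implementation of it.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.metrics import structural_similarity

def gaussian_psf(size, sigma):
    """Build a normalized size-by-size Gaussian PSF with standard deviation sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

# Illustrative values: a 15x15 kernel with sigma = 1.5, matching the first row of Table 1.
sharp = np.random.rand(256, 256)          # stand-in for a true CT image
psf = gaussian_psf(15, 1.5)
blurred = convolve(sharp, psf, mode='reflect')
print("SSIM of the blurred image:",
      structural_similarity(sharp, blurred, data_range=1.0))
```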


Figure 1: Processing naturally blurred CT images: (C1) Real blurred full-size CT images; (C2) Magnified portions; (C3) The same portions, top to bottom, processed by a Gaussian prior deconvolution with Ψ=0.015.


Figure 2: Processing naturally blurred CT images: (C1) Real blurred full-size CT images; (C2) Magnified portions; (C3) The same portions, top to bottom, processed by a Gaussian prior deconvolution with Ψ=0.009.


Figure 3: Processing naturally blurred CT images: (C1) Real blurred full-size CT images; (C2) Magnified portions; (C3) The same portions, top to bottom, processed by a Gaussian prior deconvolution with Ψ=0.01.

Figures 1-3 exhibit the naturally blurred CT images, their magnified portions and the versions processed by the adopted algorithm. Figure 4 shows the chosen true CT images and their magnified portions, used for comparison purposes. Figures 5 and 6 illustrate the results of the comparisons performed between the adopted algorithm and other comparable algorithms. Table 1 displays the recorded accuracies and processing times of the performed comparisons. Figures 7 and 8 present the analytical graphs of Table 1. In this study, all experiments are carried out using MATLAB on a machine with 8 GB of memory and a 2.3 GHz Core i5 processor. As seen in Figures 1-3, satisfactory results are achieved for the naturally degraded images, wherein the latent medical details are displayed better and more clearly than in the degraded versions. However, certain boundary artifacts appeared in the processed images; such artifacts are prevalent and difficult to avoid in many deblurring algorithms (Figures 4-8 and Table 1).


Figure 4: True CT images and their magnified portions.


Figure 5: Processing a synthetically blurred CT image: (a) A magnified portion of a CT image; (b) Portion blurred with σ=1.5; images deblurred by: (c) Richardson-Lucy (Iterations=20); (d) Landweber (Iterations=25); (e) Truncated singular value decomposition (Tolerance=0.1); (f) Gaussian prior (Ψ=0.01).


Figure 6: Processing a synthetically blurred CT image: (a) A magnified portion of a CT image; (b) Portion blurred with σ=2.5; images deblurred by: (c) Richardson-Lucy (Iterations=25); (d) Landweber (Iterations=30); (e) Truncated singular value decomposition (Tolerance=0.05); (f) Gaussian prior (Ψ=0.0095).


Figure 7: The analytical graph of the average accuracy achieved by FSIM and SSIM metrics.


Figure 8: The analytical graph of the average time consumed by the comparable algorithms.

Methods                                   σ Value   Variables        FSIM      SSIM      Time
Blurry Images                             σ=1.5     Kernel=15×15     0.8644    0.8234    N/A
Blurry Images                             σ=2.5     Kernel=25×25     0.7904    0.7296    N/A
Blurry Images                             Average                    0.8274    0.7765    N/A
Richardson-Lucy                           σ=1.5     Iterations=20    0.8891    0.8190    0.36700
Richardson-Lucy                           σ=2.5     Iterations=25    0.8916    0.7986    0.42449
Richardson-Lucy                           Average                    0.89035   0.8088    0.39574
Landweber                                 σ=1.5     Iterations=25    0.9154    0.8553    0.32754
Landweber                                 σ=2.5     Iterations=30    0.8907    0.8069    0.40546
Landweber                                 Average                    0.90305   0.8311    0.3665
Truncated Singular Value Decomposition    σ=1.5     Tolerance=0.1    0.8690    0.7603    0.05664
Truncated Singular Value Decomposition    σ=2.5     Tolerance=0.05   0.8428    0.6990    0.06008
Truncated Singular Value Decomposition    Average                    0.8559    0.72965   0.05836
Gaussian Prior                            σ=1.5     Ψ=0.01           0.9158    0.8661    0.03122
Gaussian Prior                            σ=2.5     Ψ=0.0095         0.8969    0.8080    0.03463
Gaussian Prior                            Average                    0.90635   0.83705   0.03292

Table 1: The recorded accuracies and processing times of the performed comparisons.

Based on the comparison results shown in Figures 4-8 and Table 1, the adopted algorithm delivered the best performance in terms of recorded accuracy, speed and visual quality, as it scored the highest accuracies with the fastest implementations. Moreover, the important contours and fine details of the images it retrieved appear clearer than those produced by the other comparable algorithms. Regarding the comparable algorithms, the truncated singular value decomposition was the poorest in terms of visual quality and quality metrics, although its implementation time was relatively fast. The Landweber algorithm, on the other hand, performed well in many aspects, as its performance was quite similar to that of the adopted algorithm; however, its implementation time was considerably higher, nearly ten times that of the adopted algorithm. Richardson-Lucy achieved a moderate performance concerning perceived quality, but its implementation time was the highest among the comparable algorithms.

From the analytical graphs in Figures 7 and 8, the recorded scores vary from one algorithm to another because the algorithms implement different deblurring concepts. Nonetheless, the adopted algorithm achieved acceptable scores on the FSIM and SSIM quality metrics. This indicates that the adopted algorithm delivered a high similarity between the reference and recovered images in terms of vital features and structural components. This is important because such results are accomplished using a straightforward algorithm that involves few parameters and simple calculations and does not employ the iterative scheme that is predominant in many deblurring algorithms. Finally, it is believed that the application of a Gaussian prior deconvolution can be extended to further medical modalities, such as positron emission tomography, magnetic resonance imaging and ultrasound.

Conclusion

In this study, the feasibility of using a Gaussian prior deconvolution on blurry CT images is demonstrated by testing it with two categories of degraded images. The naturally degraded images are employed as the experimental data, while the synthetically degraded images are utilized as the comparison data. The performance of the adopted algorithm is reasonably satisfactory, as good-quality results are obtained in short processing times. However, its remaining unaddressed drawback is the undesirable boundary artifacts, which are prevalent in many deblurring methods. For benchmarking, two advanced image quality metrics, FSIM and SSIM, are used with the synthetic data to compare the quality of the results of the adopted algorithm against other state-of-the-art deblurring algorithms. As a result, the Gaussian prior deconvolution produced promising results, since it outperformed the other comparable methods in terms of recorded accuracy, speed and visual quality.

References
