An Incisive Purview on the Artificial Intelligence in the Field of Imaging
Received Date: Aug 10, 2018 / Accepted Date: Aug 17, 2018 / Published Date: Aug 24, 2018
Artificial Intelligence plays a crucial role in enabling the industry to achieve these objectives, be it analytics in personalized medicine, cloud computing in collaboration, or wearable devices in remote and self-health monitoring. As the pharmaceutical industry becomes increasingly connected, information and communication technologies will fundamentally reshape both the consumption and delivery of medications. The industry must prepare for the future by embracing next-generation technologies and systems throughout the life sciences value chain. In the following review, we discuss the impact of AI in healthcare imaging and how AI has the capability to transform the entire radiological and healthcare industry.
Keywords: Artificial intelligence; Imaging; Healthcare; Radiology
Before we dive into Artificial Intelligence (AI), it is necessary to define the term. We can begin by defining what we mean by intelligence. One idea, put forward by Simmons and Chappell, states that intelligence is an overt ability to solve specific problems and an innate ability to learn solutions for new problems. Considering this, we can define AI as an artificial entity capable of solving problems and learning solutions for new problems. This implicitly requires the entity to be able to perceive its environment (i.e., detect input data and its parameters), search and perform pattern recognition (i.e., identify and recognize features of the problem), plan and execute an appropriate course of action, and perform inductive reasoning to derive general principles (i.e., learn from experience). The applied science of AI seeks to improve computer systems until their function is equal to, or greater than, that of a human performing the same task. In a medical context, AI has been examined and implemented for several decades, and is now commonly employed in important clinical roles, such as computerized ECG analysis and arterial blood gas interpretation. However, only recently has AI been applied within radiology, enabled by the development of computer systems that can perform sophisticated image analysis. With ongoing technological advances, it is possible that computers will one day supersede the role of the radiologist, and in so doing eliminate human error. Some argue, however, that even if computer systems advance to such a level and become economically viable, AI should never function as any more than an adjunct to the clinical acumen of the radiologist. This review explores the current diverse applications of AI in imaging, examines the challenges facing its implementation in the clinical environment, and looks forward to the opportunities it presents in the future.
Why Do We Need AI in Imaging?
The roles of the radiologist are manifold: gatekeeper of a valuable service, guardian of patient safety, and expert diagnostician. It is the diagnostic role that AI seeks to challenge.
Advances in both imaging and AI technology have placed greater scrutiny on the function of the radiologist as diagnostician, which essentially entails two processes: image examination followed by interpretation of findings. These require the ability to visually perceive an image, and the cognitive facility to apply pattern recognition to differentiate normal from abnormal. This process is fraught with difficulty, and human interpretation of images can overlook findings and attract interpretation errors. Since Garland’s revelation that radiologists are indeed prone to human error, various investigators have attempted to quantify the frequency and impact of such errors. The 2004 RADPEER study, which analyzed 20,286 cases involving over 250 radiologists, reported an error rate of 3-4%, in line with that reported by other studies. A 2014 study by Kim and Mansfield retrospectively examined 656 imaging cases with delayed diagnoses, and discovered 1269 errors. More significantly, they noted that the correct diagnoses had not been recognized on subsequent radiologic examinations in 196 of 656 cases (30%), and categorized the most common types of errors as under-reading, satisfaction of search, faulty reasoning, and location of the finding. This study can, however, be criticized for recording neither the years of experience of the radiologists involved, nor the clinical significance of each error, nor the complexity of the cases. This last point is especially significant, as it has been suggested that so-called errors in very difficult cases can be regarded as acceptable variation in opinion. Additionally, increasing case volume and fatigue, anatomical variation, and incorrect patient positioning can all contribute to misdiagnosis [12,13]. Clearly, the frequency of errors is representative of the multi-factorial nature of radiological misinterpretation. So we know that errors are made, but what is their clinical impact?
A 1995 study examining the correlation of 100 body MRI reports between two experienced radiologists found disagreements in 39 of these, of which 23 constituted major differences resulting in a significant change in patient management. A subsequent retrospective study found that in 49 (19%) of 259 patients with non-small cell lung cancer presenting as a nodular lesion on chest X-ray, the lesions were missed [15,16]. Of these 49, the delayed diagnosis allowed 21 patients (43%) to progress from stage T1 to T2, with an associated drop in 5-year survival from 60-80% to 40-50%. Clearly, radiologist error results in misses and delays in diagnosis, which can in turn lead to worse patient outcomes. In the current age of digital imaging databases and electronic health record systems, the application of AI in radiology is postulated to achieve not only more reliable, but also faster and cheaper image interpretation.
The Evolution of AI in Imaging
The idea of using AI in imaging can be traced back to the 1960s. Following the successes of computers in other branches of science and industry, radiology investigators attempted to exploit their “unique and indispensable capacity to retain large numbers of facts and to accept an exact and detailed program of instructions describing how to interrelate these facts in order to provide a statistically weighted answer”. Indeed, the potential for computerized image interpretation was recognized long before the technology caught up with the vision. In 1967, Winsberg et al. created a device capable of detecting changes in the optical densities of mammogram films and highlighting areas with shaded rectangles to indicate differences between the left and right breast. Similarly, Lodwick et al., among others, attempted to create computer systems capable of automatically diagnosing conditions from radiological images [19,20]. Although interesting results were achieved from quite novel applications of computers, these forays were ultimately unsuccessful: computing power was insufficient, digital images were not readily available, and advanced image processing techniques did not exist.
In the 1980s, a paradigm shift occurred. Recognition of the limitations of computing power led to the development of systems aimed at supporting radiologists rather than replacing them. Thereafter, a number of different techniques were proposed, including rule-based and case-based reasoning, Bayesian networks and hypertext. However, by far the most effective and successful technique, and the one used by most current AI systems in radiology, is the Artificial Neural Network (ANN). ANNs have risen to become the most popular AI technique in modern medicine. These computer systems are modeled on the function of the human brain. They comprise networks of highly interconnected computer processes that take on the role of neurons, performing parallel calculations for data processing, joined together by weighted connections. The knowledge base of the system encodes the weighting of each connection, and each ‘neuron’ uses this weighting, informed by mathematical reasoning, to decide whether to activate other ‘neurons’ down the line [3,6,21]. ANNs present many advantages that have contributed to them becoming the dominant form of AI in radiology. ANNs can be ‘trained’ through supervised learning, which involves comparing the expected with the actual output. However, they can also learn through unsupervised learning, whereby the system adjusts the weighting of its connections using observations of, and correlations with, the input data [3,22]. Through unsupervised learning the ANN can continue advancing and improving with each case, ensuring increasingly reliable diagnoses as time goes on, independent of expert input. It also enables the ANN to extrapolate its knowledge of simple cases to tackle more challenging ones. In addition, as both images and human observations can be used as inputs to the ANN, the system can continue to be updated with personalized expert knowledge.
This means that the system can continue to learn in a similar way to the human brain but, unlike humans, it never forgets what it has learnt. The most common use of ANN systems in imaging is within AI based Computer Aided Detection (CAD) programs. This is a software implementation of AI that analyses images and highlights areas of concern, before prompting further inspection by the radiologist.
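The supervised-training loop described above (compare the expected with the actual output, then adjust the weighted connections) can be sketched in a few lines. This is a minimal toy illustration, not any of the clinical systems discussed here, which train far larger networks on image features; the XOR data and network shape are chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training set: XOR, a problem no single 'neuron' can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # weighted connections: input -> hidden
W2 = rng.normal(size=(4, 1))   # weighted connections: hidden -> output

def forward(X):
    h = sigmoid(X @ W1)        # hidden 'neuron' activations
    return h, sigmoid(h @ W2)  # network prediction

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    err = out - y                             # expected vs. actual output
    grad_out = err * out * (1 - out)          # backpropagated error signal
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out                 # adjust connection weights
    W1 -= lr * X.T @ grad_h

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(initial_loss, final_loss)  # training drives the error down
```

The same structure, at vastly larger scale and with image-derived inputs, underlies the CAD systems discussed in the next section.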
Current Uses of AI in Imaging
AI based CAD is routinely used in breast cancer screening programs in the USA. It provides a second opinion to the radiologist’s initial read of mammograms and so, unlike automated computerized diagnosis, is designed to augment a clinician’s performance in the detection of suspicious lesions rather than to replace the radiologist. The impact of CAD in mammography has been extensively investigated. A prospective study in 2006 examined the effect of using CAD in 21,349 mammograms. Images were read without and then with CAD, and the differences in recall rate, detection rate and Positive Predictive Value (PPV) were calculated. CAD led to 199 additional women being recalled and 21 additional biopsies, yielding 8 cases of cancer. This improved the detection rate by 7.62%, with an increase in recall rate from 9.84% to 10.77%, while the PPV fell from 41% to 40.8%. The authors concluded that CAD can increase the detection of cancer with an acceptable increase in recall rate and minimal impact on PPV. Other investigators have supported these findings. Another 2006 prospective study looked at 9520 mammograms. CAD improved the cancer detection rate by 13.3%, with an increase in recall rate from 6.2% to 7.8%, and a non-significant increase in PPV from 21.9% to 26.3%. In the same year, Ko et al. examined 5016 mammograms and found that the addition of CAD increased the cancer detection rate by 4.7% compared with a human reader alone, with a 2% increase in recall rate and a 2.4% increase in PPV. In all of these studies, it is important to note that the radiologists were familiar with CAD and were aware a CAD system would be used. As such, they may have been less vigilant for micro-calcifications, a finding that CAD systems are especially well designed to detect (CAD demarcates micro-calcifications, nodules, and parenchymal tissue distortion in mammography), which may have exaggerated the impact of CAD [29,30].
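The three metrics these trials report follow from standard definitions, which can be made explicit. The functions below are those standard definitions; the example figures are hypothetical round numbers, not taken from any of the studies above, chosen only to show how adding CAD can raise the detection rate and recall rate while nudging the PPV down.

```python
def recall_rate(recalled, screened):
    """Fraction of screened women called back for further workup."""
    return recalled / screened

def detection_rate(cancers_found, screened):
    """Cancers detected per screening examination."""
    return cancers_found / screened

def ppv(cancers_found, recalled):
    """Positive predictive value: fraction of recalls that prove to be cancer."""
    return cancers_found / recalled

screened = 10_000
# Hypothetical reads without and then with CAD:
recalled_alone, cancers_alone = 620, 40
recalled_cad, cancers_cad = 700, 43

print(f"recall rate:    {recall_rate(recalled_alone, screened):.2%}"
      f" -> {recall_rate(recalled_cad, screened):.2%}")
print(f"detection rate: {detection_rate(cancers_alone, screened):.2%}"
      f" -> {detection_rate(cancers_cad, screened):.2%}")
print(f"PPV:            {ppv(cancers_alone, recalled_alone):.1%}"
      f" -> {ppv(cancers_cad, recalled_cad):.1%}")
```

With these illustrative numbers, CAD finds 3 extra cancers but recalls 80 extra women, so the PPV falls slightly even as detection improves, the same trade-off the mammography trials describe.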
The addition of CAD appears to improve lesion detection for an individual reporter, but how does it compare with a second radiologist reviewing the image? A 2007 study attempted to answer this question with an analysis of 6,381 consecutive screening mammograms. These were interpreted by a primary reader without and then with CAD, followed by a second reader who was aware of the primary reader’s findings but blinded to the CAD analysis. The outcomes of CAD and the second reader were then compared. The only difference was in two patients, both of whom were recalled by the second reader and had also been flagged by CAD but subsequently dismissed by the primary reader. The authors concluded that either a human second reader or the use of a CAD system can increase the cancer detection rate. Although the small numbers precluded statistical significance, their paper highlights an important potential limitation of using CAD: regardless of how many suspicious features it highlights, the decision to act on these ultimately comes back to the initial reader, who may be confounded by satisfaction of search and other human factors. These biases probably differ with a second human reader. A 2008 systematic review on CAD in breast screening pooled 27 studies that compared either single reading with single reading plus CAD, or single reading with double reading. It concluded that, while CAD increases recall rates, there was insufficient evidence to claim that it improves the cancer detection rate. Research in the field of CAD largely originates from the US where, unlike in the UK, second reading of a mammogram is not standard practice. The exceptions to this are the UK based CADET trials. In 2006, CADET 1 retrospectively examined 10,267 mammograms, comparing the initial double reading with a reread by an individual radiologist with CAD.
They found that CAD resulted in a significant increase in cancer detection rate of 6.5%, with an increase in recall rate of 2.1%, concluding that CAD is more effective. However, shortcomings of a retrospective study design aside, it is not clear whether the re-readers used a digitized version of the original film prior to CAD and, if so, what image enhancement capabilities were available to them, which may have exaggerated the advantages of CAD. CADET 2 followed this up in 2008 with a prospective analysis, randomly allocating 31,293 mammograms to one of four arms: CAD only, single reader with CAD, double reading only, or double reading with CAD. They found the cancer detection rate of a single reader with CAD was equivalent to double reading, with a small increase in recall rate. The evidence for the benefits of CAD in mammography therefore remains equivocal, although the effectiveness of AI in imaging may well differ amongst other modalities. A literature review of 20 years of publications examining radiologists’ error rates suggests that these differ between imaging modalities, and the same may apply to CAD (although there is currently insufficient evidence to support this).
Lung Cancer and Other Imaging
The association between early detection and staging of lung cancer and better survival rates is well established. Yet, like breast lesions, lung nodules can prove difficult to detect: up to 19% of lung nodules may be missed on chest X-ray, and 38% on CT. The combination of CAD with a radiologist has consistently been found to improve nodule detection rates in both chest radiography (Kligerman et al.) and CT (Das et al., Yuan et al.) compared with either CAD or radiologist alone. A 2016 study pitted four different CAD systems against one another using 50 CT scans containing lung nodules that had previously been missed by radiologists. The CAD systems detected 56-70% of the lung lesions originally missed, including 17% of cancers under 3 mm and 69-78% of cancers between 3-6 mm, sizes that are often overlooked by expert observers [41,42]. These findings suggest not only that CAD systems can outperform human readers in recognizing difficult lesions, but also that they may prove an invaluable tool in detecting small, stage one pulmonary nodules. Interest is also growing in the AI technique of temporal subtraction, which highlights interval changes between successive images. Its use in analyzing bone scans was shown to increase the accuracy of diagnoses whilst also drastically reducing reading times, from 134 seconds to 91 seconds.
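The core idea of temporal subtraction can be sketched with a few lines of array arithmetic. This is a simplification made under a strong assumption: the two scans are taken to be already spatially registered, whereas real systems must first warp the prior image to align with the current one. The data, function name and threshold here are illustrative, not from any cited system.

```python
import numpy as np

def temporal_subtraction(current, prior, threshold=20):
    """Subtract a (pre-registered) prior scan from the current one and
    flag pixels whose intensity changed by more than `threshold`."""
    diff = current.astype(np.int16) - prior.astype(np.int16)
    mask = np.abs(diff) > threshold   # substantial interval changes
    return diff, mask

# Simulated pair of aligned 8x8 scans with one new bright 'lesion'.
prior = np.full((8, 8), 100, dtype=np.uint8)
current = prior.copy()
current[3:5, 3:5] = 160               # 4 pixels brighten between scans

diff, mask = temporal_subtraction(current, prior)
print(int(mask.sum()))  # -> 4 changed pixels, ready for highlighting
```

A reader's attention can then be drawn to the masked region, which is exactly the time-saving effect the bone-scan study above measured.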
Challenges of AI in Imaging
Whilst use of AI based CAD has shown potential in augmenting radiologist performance in mammography and lung imaging, False Positive (FP) rates remain a major barrier to its wide-scale use in health industries. Normal structures can be incorrectly highlighted as abnormal by CAD software. Distinguishing truly abnormal lesions from these false positive prompts is time consuming for radiologists and impacts recall rates, throughput and costs. The prevalence of CAD related FPs is 1 to 2.2 per examination in mammography and 4.6 to 11 in chest CT [45-50]. Such high FP rates negatively impact patient recall rates, and so contribute to the unsustainability of current CAD use in real clinical settings. In addition, increased recall rates lead to a higher number of unnecessary invasive procedures being carried out [51-54]. However, advancements in image processing techniques and CAD analysis over the last decade have successfully reduced FP rates. These recent improvements exemplify the constant and rapid development of the technology and algorithms upon which AI is built, and point to the probability that widespread imaging CAD will one day be both entirely feasible and highly effective.
Another limitation of AI based CAD software, related to its reliance on a very large number of single associations, is that it is very difficult to identify the reasoning behind any decision made by an ANN system. In fact, the system is almost completely incapable of providing an explanation for its diagnoses [3,6,22]. Though the ANN algorithm is likely to be correct, full responsibility for the patient remains with the radiologist and, as Teach and Shortliffe explain, doctors rarely follow the advice of a computer system if they cannot see the reasoning underlying that advice. It is also difficult to change or remove a connection that a CAD system has already made, and as such medical discoveries that challenge previous theories may result in the system becoming outdated. A question mark also hangs over liability in CAD imaging: with 5% to 12% of all US medical malpractice lawsuits being directed at radiology [56,57], if an error or misdiagnosis is made by a machine, does responsibility lie with the manufacturer or with the radiologist? Data on the cost effectiveness of CAD in imaging are limited. A 2011 cost-effectiveness study of CAD involvement in the UK’s national breast screening program concluded that, due to the high costs of equipment and training, and without improvements in efficacy such as a reduction in the recall rate, CAD does not appear to be a cost-effective alternative to double reading in breast screening in the NHS.
The Future of AI in Imaging
AI software has thus proven itself a competent second reader, still limited by a significant, although improving, FP rate. However, confidence in the future of imaging AI is evidenced by IBM’s $1 billion investment in its Watson Health project, an algorithm already tested in healthcare. This investment will see Watson acquire 30 billion images from which to learn, allowing its algorithm to draw upon the most extensive knowledge base to date, with the additional advantage of access to patients’ supporting information: history, blood tests, genome sequences, and previous imaging. With these data resources, future AI technology may be able to invoke algorithms that are far more precise and efficient, solving the problems of high FP and recall rates and able to detect abnormalities on any imaging modality, including diagnosis of rare and challenging cases that might otherwise be missed. This could lead to the reintroduction of the automated computerized diagnosis programs first envisioned in the 1960s, designed to replace radiologists. However, IBM’s venture is not yet fully developed and its future capabilities remain to be seen. Another recently described area of interest is artificial swarm intelligence. This technology is built upon the adage that ‘many minds are better than one’, providing a global online platform for radiologists to share and integrate decision making in complex cases and so amplify an individual’s knowledge base and problem-solving ability. Preliminary studies have shown this type of collective intelligence to be useful, and it may forge a new frontier for AI in radiology [60,61]. However, in the foreseeable future, it is likely that integration of AI based CAD with the Picture Archiving and Communication System (PACS), commonly used in western hospitals, will have the biggest impact in reinforcing AI’s role in radiology.
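The collective-intelligence idea can be illustrated with the simplest possible aggregation rule: a majority vote over independent reads. This is only a crude proxy; true swarm platforms use real-time negotiation rather than one-shot voting, and the function and example labels below are hypothetical. Still, voting shows why a group of readers whose errors are independent can outperform its average member.

```python
from collections import Counter

def majority_diagnosis(reads):
    """Return the diagnosis given by the most readers.
    Ties resolve to whichever tied label appears first in `reads`."""
    return Counter(reads).most_common(1)[0][0]

# Five independent reads of the same difficult case (illustrative).
reads = ["malignant", "benign", "malignant", "malignant", "benign"]
print(majority_diagnosis(reads))  # -> malignant
```

If each reader is right, say, 70% of the time and errs independently, the five-reader majority is right more often than any individual, which is the statistical intuition behind the preliminary swarm studies cited above.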
The integration of these two systems in the workplace would allow for the routine clinical use of CAD as an adjunct to radiologist interpretation of all modalities. This promises to help safeguard the quality of patient care as well as enhancing workflow, provided FP rates are adequately managed [62,63]. The next evolutionary step is a move from the current CAD systems, which can identify a single common disease, to a more comprehensively intelligent system that can identify multiple, challenging diagnoses using a knowledge base far beyond the average physician’s. Such an AI system, working in conjunction with PACS, could not only recognize and classify lesions but could also, for example, diagnose and quantify cardiomegaly, grade vertebral fractures and produce a differential diagnosis for interstitial lung disease. With regard to the impact this might have on the radiologist, an advanced AI, working in harmony with a human radiologist, has the potential to help manage workload, enhance individual performance and abate human error. CAD can function within the radiologist’s workflow in two ways: 1. The radiologist reads the image first without the CAD output, then requests a display of the CAD findings and makes a final diagnosis. 2. CAD displays its findings first, to be interpreted by the radiologist, who then makes the final diagnosis. Method 1 is postulated to negatively influence a reader’s initial diagnosis and substantially increase interpretation time, whilst method 2 may instill false confidence in the CAD program and so increase false negative rates. CAD-PACS integration will likely encourage radiologists to rely on CAD output as the primary diagnostic tool, in contrast to its current use as a “second opinion”. Beyond the remit of radiological diagnostician, artificial intelligence could also revolutionize the prediction of patient outcomes.
By recording the eventual fate of the patients whose images and other data they process, ANN systems may, by reference to prior similar cases, predict future patient prognosis and outcomes with increasing accuracy. This would present a paradigm shift away from prediction models that place patients within statistical groups, towards more individualized predictions based upon artificial intelligence analysis.
- Simmons AB, Chappell SG (1998) Artificial intelligence-definition and practice. IEEE Journal of Oceanic Engineering 13: 14-42.
- Minsky M (1961) Steps toward artificial intelligence. Proceedings of the IRE 49: 8-30.
- Kahn CE (1994) Artificial intelligence in radiology: decision support systems. RadioGraphics 14: 849-861.
- Miller RA (1994) Medical Diagnostic Decision Support Systems--Past, Present, And Future: A Threaded Bibliography and Brief Commentary. Journal of the American Medical Informatics Association 1: 8-27.
- Siegel E (2012) Artificial intelligence and diagnostic radiology: Not quite ready to welcome our computer overlords. Applied Radiology 41: 8.
- Amato F, López A, Peña-Méndez EM, Vaňhara P, Hampl PA, et al. (2013) Artificial neural networks in medical diagnosis. Journal of Applied Biomedicine 11: 47-58.
- Krupinski EA (2003) The Future of Image Perception in Radiology. Academic Radiology 10: s1-3.
- Garland LH (1949) On the scientific evaluation of diagnostic procedures. Radiology 52: 309-328.
- Borgstede JP, Lewis RS, Bhargavan M, Sunshine JH (2004) RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. Journal of the American College of Radiology 1: 59-65.
- Kim YW, Mansfield LT (2014) Fool Me Twice: Delayed Diagnoses in Radiology with Emphasis on Perpetuated Errors. American Journal of Roentgenology 202: 465-470.
- Robinson PJ (1997) Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image. The British Journal of Radiology 70: 1085-1098.
- Renfrew DL, Franken EA, Berbaum KS, Weigelt FH, Abu-Yousef MM (1992) Error in radiology: classification and lessons in 182 cases presented at a problem case conference. Radiology 183: 145-150.
- Pinto A (2010) Spectrum of diagnostic errors in radiology. World Journal of Radiology 2: 377.
- Wakeley CJ, Jones AM, Kabala JE, Prince D, Goddard PR (1995) Audit of the value of double reading magnetic resonance imaging films. The British Journal of Radiology 68: 358-360.
- Quekel LG, Kessels AG, Goei R, van Engelshoven JM (1999) Miss rate of lung cancer on the chest radiograph in clinical practice. Chest 115: 720-724.
- Scott WJ, Howington J, Feigenberg S, Movsas B, Pisters K (2007) Treatment of non-small cell lung cancer stage I and stage II: ACCP evidence-based clinical practice guidelines. Chest 132: 2340S-2342S.
- Lodwick GS, Haun CL, Smith WE, Keller RF, Robertson ED (1963) Computer diagnosis of primary bone tumors: A preliminary report. Radiology 80: 273-275.
- Winsberg F, Elkin M, Macy Jr J, Bordaz V, Weymouth W (1967) Detection of radiographic abnormalities in mammograms by means of optical scanning and computer analysis. Radiology 89: 211-215.
- Kruger RP, Townes JR, Hall DL, Dwyer SJ, Lodwick GS (1972) Automated radiographic diagnosis via feature extraction and classification of cardiac size and shape descriptors. IEEE Transactions on Biomedical Engineering, pp: 174-186.
- Meyers PH, Nice CM, Becker HC, Nettleton WJ, Sweeney JW, et al. (1964) Automated computer analysis of radiographic images. Radiology 83: 1029-1034.
- Ramesh AN, Kambhampati C, Monson JR, Drew PJ (2004) Artificial intelligence in medicine. Annals of the Royal College of Surgeons of England 86: 334.
- Ding S, Li H, Su C, Yu J, Jin F (2013) Evolutionary artificial neural networks: a review. Artificial Intelligence Review 39: 251-260.
- U.S. Food and Drug Administration (2012) Guidance for Industry and FDA Staff: Clinical Performance Assessment: Considerations for Computer-Assisted Detection Devices Applied to Radiology Images and Radiology Device Data; Premarket Approval (PMA) and Premarket Notification [510(k)] Submissions.
- Doi K (2007) Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized Medical Imaging and Graphics 31: 198-211.
- Castellino RA (2005) Computer aided detection (CAD): an overview. Cancer Imaging 5: 17.
- Morton MJ, Whaley DH, Brandt KR, Amrami KK (2006) Screening mammograms: interpretation with computer-aided detection-prospective evaluation. Radiology 239: 375-383.
- Dean JC, Ilvento CC (2006) Improved cancer detection using computer-aided detection with diagnostic and screening mammography: prospective study of 104 cancers. American Journal of Roentgenology 187: 20-28.
- Ko JM, Nicholas MJ, Mendel JB, Slanetz PJ (2006) Prospective assessment of computer-aided detection in interpretation of screening mammography. American Journal of Roentgenology 187: 1483-1491.
- Yu S, Guan L (2000) A CAD system for the automatic detection of clustered microcalcifications in digitized mammogram films. IEEE Transactions on Medical Imaging 19: 115-126.
- Nishikawa RM (2007) Current status and future directions of computer-aided diagnosis in mammography. Computerized Medical Imaging and Graphics 31: 224-235.
- Georgian-Smith D, Moore RH, Halpern E, Yeh ED, Rafferty EA, et al. (2007) Blinded comparison of computer-aided detection with human second reading in screening mammography. American Journal of Roentgenology 189: 1135-1141.
- Taylor P, Potts HW (2008) Computer aids and human second reading as interventions in screening mammography: two systematic reviews to compare effects on cancer detection and recall rate. European Journal of Cancer 44: 798-807.
- Gilbert FJ, Astley SM, McGee MA, Gillan MG, Boggis CR, et al. (2006) Single reading with computer-aided detection and double reading of screening mammograms in the United Kingdom National Breast Screening Program. Radiology 241: 47-53.
- Gilbert FJ, Astley SM, Gillan MG, Agbaje OF, Wallis MG, et al. (2008) CADET II group. CADET II: A prospective trial of computer-aided detection (CAD) in the UK Breast Screening Programme. Journal of Clinical Oncology 26: 508.
- Goddard P, Leslie A, Jones A, Wakeley C, Kabala J (2001) Error in radiology. The British Journal of Radiology 74: 949-951.
- Heuvers ME, Hegmans JP, Stricker BH, Aerts JG (2012) Improving lung cancer survival; time to move on. BMC Pulmonary Medicine 12: 77.
- Li F, Sone S, Abe H, MacMahon H, Armato SG, et al. (2002) Lung cancers missed at low-dose helical CT screening in a general population: comparison of clinical, histopathologic, and imaging findings. Radiology 225: 673-683.
- Kligerman S, Cai L, White CS (2013) The effect of computer-aided detection on radiologist performance in the detection of lung cancers previously missed on a chest radiograph. Journal of Thoracic Imaging 28: 244-252.
- Das M, Mühlenbruch G, Mahnken AH, Flohr TG, Gündel L, et al. (2006) Small pulmonary nodules: effect of two computer-aided detection systems on radiologist performance. Radiology 241: 564-571.
- Yuan R, Vos PM, Cooperberg PL (2006) Computer-aided detection in screening CT for pulmonary nodules. American Journal of Roentgenology 186: 1280-1287.
- Liang M, Tang W, Xu DM, Jirapatnakul AC, Reeves AP, et al. (2016) Low-dose CT screening for lung cancer: computer-aided detection of missed lung cancers. Radiology 281: 279-288.
- Sahiner B, Chan HP, Hadjiiski LM, Cascade PN, Kazerooni EA, et al. (2009) Effect of CAD on radiologists' detection of lung nodules on thoracic CT scans: analysis of an observer performance study by nodule size. Academic Radiology 16: 1518-1530.
- Shiraishi J, Li Q, Appelbaum D, Doi K (2011) Computer-aided diagnosis and artificial intelligence in clinical imaging. In Seminars in Nuclear Medicine 41: 449-462.
- Shiraishi J, Appelbaum D, Pu Y, Li Q, Pesce L, et al. (2007) Usefulness of temporal subtraction images for identification of interval changes in successive whole-body bone scans: JAFROC analysis of radiologists’ performance. Academic Radiology 14: 959-966.
- Yang SK, Moon WK, Cho N, Park JS, Cha JH, et al. (2007) Screening mammography–detected cancers: sensitivity of a computer-aided detection system applied to full-field digital mammograms. Radiology 244: 104-111.
- Friedemann B, Uwe F, Karim B, Serge M, Silivia O, et al. (2003) Computer aided detection (CAD) in direct digital full field mammography. In Digital Mammography, pp: 253-256.
- Lee N, Laine AF, Márquez G, Levsky JM, Gohagan JK (2009) Potential of computer-aided diagnosis to improve CT lung cancer screening. IEEE Reviews in Biomedical Engineering 2: 136-146.
- Armato SG, Giger ML, Moran CJ, Blackburn JT, Doi K, et al. (1999) Computerized detection of pulmonary nodules on CT scans. Radiographics 19: 1303-1311.
- Lee Y, Hara T, Fujita H, Itoh S, Ishigaki T (2001) Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Transactions on Medical Imaging 20: 595-604.
- Suzuki K, Armato SG, Li F, Sone S (2003) Massive training artificial neural network (MTANN) for reduction of false positives in computerized detection of lung nodules in low‐dose computed tomography. Medical Physics 30: 1602-1617.
- Farag AA, El-Baz A, Gimel’farb G, El-Ghar MA, Eldiasty T (2005) Quantitative nodule detection in low dose chest CT scans: new template modeling and evaluation for CAD system design. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp: 720-728.
- Messay T, Hardie RC, Rogers SK (2010) A new computationally efficient CAD system for pulmonary nodule detection in CT imagery. Medical Image Analysis 14: 390-406.
- De Hoop B, De Boo DW, Gietema HA, Van Hoorn F, Mearadji B, et al. (2010) Computer-aided detection of lung cancer on chest radiographs: effect on observer performance. Radiology 257: 532-540.
- Setio AA, Ciompi F, Litjens G, Gerke P, Jacobs C, et al. (2016) Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Transactions on Medical Imaging 35: 1160-1169.
- Teach RL, Shortliffe EH (1981) An analysis of physician attitudes regarding computer-based clinical consultation systems. In Use and Impact of Computers in Clinical Medicine, pp: 68-85.
- Pinto A, Brunese L, Pinto F, Reali R, Daniele S, et al. (2012) The concept of error and malpractice in radiology. In Seminars in Ultrasound, CT and MRI 33: 275-279.
- Guerriero C, Gillan MG, Cairns J, Wallis MG, Gilbert FJ (2011) Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study. BMC Health Services Research 11: 11.
- (2016) IBM makes a quantum processor available for use online. Physics Today.
- Doi K (2005) Current status and future potential of computer-aided diagnosis in medical imaging. The British Journal of Radiology 78: s3-s19.
- Wolf M, Krause J, Carney PA, Bogart A, Kurvers RH (2015) Collective intelligence meets medical decision-making: the collective outperforms the best radiologist. PLoS One 10: e0134269.
- Palmer DW, Piraino DW, Obuchowski NA, Bullen JA (2014) Emergent diagnoses from a collective of radiologists: algorithmic versus social consensus strategies. In International Conference on Swarm Intelligence, pp: 222-229.
- Dubey RB, Hanmandlu M (2012) Integration of CAD into PACS, 2012 2nd International Conference on Power, Control and Embedded Systems, Institute of Electrical and Electronics Engineers (IEEE).
- Le AH, Liu B, Huang HK (2009) Integration of computer-aided diagnosis/detection (CAD) results in a PACS environment using CAD–PACS toolkit and DICOM SR. International Journal of Computer Assisted Radiology and Surgery 4: 317-329.
- Dayhoff JE, DeLeo JM (2001) Artificial neural networks: opening the black box. Cancer: Interdisciplinary International Journal of the American Cancer Society 91: 1615-1635.
Citation: Basak S (2018) An Incisive Purview on the Artificial Intelligence in the Field of Imaging. J Health Educ Res Dev 6: 269. DOI: 10.4172/2380-5439.1000269
Copyright: © 2018 Basak S. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.