ISSN: 2153-0777
Journal of Bioengineering and Bioelectronics


High-Throughput Screening: What are we missing?

Mahesh Uttamchandani*
Defence Medical and Environmental Research Institute, DSO National Laboratories, 27 Medical Drive, 117510, Singapore
Corresponding Author : Dr. Mahesh Uttamchandani
Defence Medical and Environmental Research Institute
DSO National Laboratories
27 Medical Drive, 117510
Singapore, Republic of Singapore
Tel: +65 6485 7214
Fax: +65 6485 7033
E-mail: mahesh@dso.org.sg
Received November 19, 2012; Accepted November 21, 2012; Published November 23, 2012
Citation: Mahesh U (2012) High-Throughput Screening: What are we missing? J Biochips Tiss Chips 2:e120. doi:10.4172/2153-0777.1000e120
Copyright: © 2012 Mahesh U. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Small molecules have big potential. Nowhere is this metaphor more compelling than in the pharmaceutical industry. Drug discovery is guided by High-Throughput Screening (HTS) which, in turn, guides therapy. HTS has its unique challenges, as few platforms are able to meet the opposing demands of high data quality, high throughput and low cost. Increased miniaturization, through micro- or nano-chip-based approaches as well as on-bead screening, is amongst the novel ways in which data can be assimilated at an even higher rate [1]. However, these advances do not in themselves expand the yield and knowledge from screening expeditions, pointing to intrinsic factors beyond simply the throughput attainable. Reflecting on the hits and misses of HTS, this editorial offers three personal perspectives on how screening efforts can be made more fruitful.
The pharmaceutical industry is heavily reliant on HTS for identifying hits from massive compound libraries. Screening represents an important early step in the arduous journey of drug development. Every year, millions of compounds are screened against thousands of putative targets in the search for bioactive ligands, both in academia and in industry. Yet the yields from such screens are often limited, with only several hundred compounds identified as putative hits from libraries of ten thousand compounds or more. Through lead optimization, preclinical testing and phased clinical trials, only one molecule in ten thousand may actually emerge as an approved therapeutic.
The role for large-scale screening was evident from the early 1990s [2,3] and its principle has changed little since then. Essentially, the problem that HTS addresses is this: how does one find one or more compounds with desired properties for a particular target from within repertoires of thousands to millions? An analogy is to find a key, from amongst millions, that fits a single lock. Thus arose the concept of massively parallel screening, fuelled by the rise of combinatorial chemistry in creating molecular diversity. HTS and combinatorial chemistry have been the two pillars driving drug discovery over the last two decades. Yet there is a general feeling in the field that these technologies have failed to deliver on their true potential. On close examination, there are ways in which HTS can be approached differently to improve its value to drug discovery. Let us examine here how this may be achieved.
The current state of the art in screening is well represented by 384-well and 1536-well plates, offering reaction volumes of 10-20 μl and 2.5-5 μl, respectively [4]. Under this solution-phase format, virtually any conceivable assay can be monitored through luminescence, fluorescence or color changes using a diversity of plate-based instrumentation [5]. Robotic liquid handlers are purpose-built to perform micro-pipetting at blistering speeds, enabling push-button operations where a single facility can conduct over a million screening assays within 1-3 months. Throughput can be increased further by screening on solid support, using microarrays and on-bead screening to reduce reactions to the nanoscale [6,7]. Generating large datasets for HTS is therefore no longer an obstacle.
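As a back-of-envelope illustration of that capacity claim, the short Python sketch below multiplies plate density by an assumed handler workload. The plates-per-day and working-days figures are hypothetical assumptions for illustration, not data from the text or any specific facility.

```python
# Back-of-envelope HTS throughput estimate.
# WELLS_PER_PLATE reflects the 1536-well format discussed above;
# the other parameters are illustrative assumptions only.

WELLS_PER_PLATE = 1536        # high-density plate format
PLATES_PER_DAY = 30           # assumed robotic liquid-handler workload
WORKING_DAYS_PER_MONTH = 20   # assumed facility schedule

def assays_per_campaign(months: float) -> int:
    """Single-point assay wells processed over a screening campaign."""
    return int(WELLS_PER_PLATE * PLATES_PER_DAY * WORKING_DAYS_PER_MONTH * months)

for months in (1, 3):
    print(f"{months} month(s): ~{assays_per_campaign(months):,} assays")
# 1 month(s): ~921,600 assays
# 3 month(s): ~2,764,800 assays  -> consistent with "over a million in 1-3 months"
```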
While hits are the exciting output of such biological screens, rarely is the rest of the data consolidated and analyzed for the wealth of information it contains. A negative result is also information: it highlights molecular events and the chemical profiles of molecules that do not, or only minimally, perturb the target. As Thomas Edison described it, “I have not failed, I have just found 10,000 ways that do not work.” Similarly, the information yielded from screens can translate into knowledge of off-target effects, experience that other practitioners can apply or replicate. However, with the hits taking the limelight, the majority of screening datasets remain undisclosed, unreported and unpublished.
My first suggestion is that we look at results from HTS systematically and holistically. By throwing away the bathwater and keeping only what we think is the baby, we cut ourselves off from the true contribution of HTS. Even single-concentration screens performed using a thousand unpurified compounds against a single target can yield valuable insight into molecular preferences. If this is done systematically, we will reach a point where we can comprehensively map interaction data from millions of compounds against thousands of proteins. Such a matrix will prove useful when evaluating the off-target effects of small-molecule hits from across multiple screens [8]. It is considerably easier to find a ligand for a given target protein in vitro than it is to find drugs with pharmacokinetic profiles that are effective in vivo, within the human body; hence the need to understand the behavior of as many proteins as possible with small molecules. This information can be readily captured through HTS datasets.
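As a minimal sketch of what such an interaction data map could look like in practice, the snippet below pivots hypothetical single-concentration screening records into a compound-by-target matrix and queries one compound's off-target profile. All compound and target names, activity values and the 0.3 threshold are invented for illustration.

```python
import pandas as pd

# Hypothetical screening records: one row per (compound, target) assay,
# with a normalized activity readout (fraction inhibition, 0-1).
records = pd.DataFrame({
    "compound": ["cpd-001", "cpd-001", "cpd-002", "cpd-002", "cpd-003"],
    "target":   ["kinase-A", "kinase-B", "kinase-A", "kinase-B", "kinase-A"],
    "activity": [0.92, 0.10, 0.05, 0.88, 0.47],
})

# Pivot into a compound x target matrix; NaN marks untested pairs.
matrix = records.pivot(index="compound", columns="target", values="activity")

def off_target_profile(matrix, compound, primary_target, threshold=0.3):
    """Targets other than the primary that the compound perturbs above threshold."""
    row = matrix.loc[compound].drop(primary_target)
    return row[row >= threshold]

# cpd-001 hits kinase-A strongly and nothing else above threshold.
print(off_target_profile(matrix, "cpd-001", "kinase-A"))  # empty Series
```

Pooling such matrices across campaigns, rather than discarding the non-hits, is what would turn individual screens into the shared interaction map described above.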
Taking HTS a step further, improved experimental design can aid in the characterization of hit selectivity at the outset, during the screening step itself, improving the selection of hits for downstream optimization. Traditionally, it is the onward lead development phase that is tasked with conferring selectivity onto the selected hit [9]. That is to say, one looks for a potent molecule first during HTS and only then looks into how to make it selective during lead development. This convention could be revisited: potency is not usually the deciding factor; it is the molecule’s selectivity that has an equal or greater impact on its success as a drug. Why then wait to address selectivity, if it can be done at the outset, within the HTS phase itself? Accordingly, there are ways to perform screens not only with targets but also with anti-targets upfront, to ensure that the hits selected meet the criteria of potency as well as selectivity during HTS [10]. Such an approach would reduce hit attrition, even though it may come with an increased cost for the initial screening phase.
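As an illustration of how such a combined potency-and-selectivity filter might be applied to raw screen readouts, here is a minimal sketch; the 0.7 potency cutoff and five-fold selectivity ratio are assumed values, not thresholds drawn from the cited work.

```python
from dataclasses import dataclass

@dataclass
class ScreenResult:
    compound: str
    target_activity: float       # normalized potency vs. intended target (0-1)
    anti_target_activity: float  # strongest activity vs. the anti-target panel

POTENCY_CUTOFF = 0.7     # assumed minimum activity to call a hit
SELECTIVITY_RATIO = 5.0  # assumed: target signal must exceed 5x anti-target

def select_hits(results):
    """Keep compounds that are both potent and selective at the HTS stage."""
    hits = []
    for r in results:
        potent = r.target_activity >= POTENCY_CUTOFF
        selective = r.target_activity >= SELECTIVITY_RATIO * max(r.anti_target_activity, 1e-6)
        if potent and selective:
            hits.append(r.compound)
    return hits

results = [
    ScreenResult("cpd-101", 0.85, 0.05),  # potent and selective -> keep
    ScreenResult("cpd-102", 0.90, 0.60),  # potent but promiscuous -> drop
    ScreenResult("cpd-103", 0.40, 0.02),  # selective but weak    -> drop
]
print(select_hits(results))  # ['cpd-101']
```

The design trade-off is exactly the one noted above: extra anti-target wells per compound during the screen, in exchange for fewer hits lost later to selectivity failures.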
Thirdly, a lot of time and effort is spent on hit re-validation, to confirm whether the hits acquired are in fact “real” hits. Many factors can contribute to false positives in HTS. The hit may bind to the target, but may not actually modulate its biological activity. Or the hit may bind and modulate the activity of the target, but in an undesirable or suboptimal way. The challenge is thus to identify molecules that desirably perturb the activity of the target, which often depends on the design of the bioassay being used. For simplicity, screening most often relies on target binding assays, so observed binding events may be attributable to small-molecule interactions at sites not responsible for target activity. One solution to this problem is to perform screening with both the native target and a functionally crippled analogue (that is to say, a denatured or mutated version of the target protein) [11,12]. If both the crippled and the normal target bind the same hits, these are likely to be false positives, as it is unlikely that such molecules would contribute to the sought-after functional effects. This strategy would enable the isolation of only functional hits within the HTS phase, diminishing false positive rates.
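The filtering logic for such a native-versus-crippled counter-screen is straightforward, as the sketch below illustrates; the binding signals and the 0.5 calling threshold are hypothetical values chosen for illustration.

```python
BINDING_CUTOFF = 0.5  # assumed threshold for calling a binding event

# Hypothetical normalized binding signals against each form of the target.
native_signal   = {"cpd-201": 0.90, "cpd-202": 0.80, "cpd-203": 0.20}
crippled_signal = {"cpd-201": 0.10, "cpd-202": 0.85, "cpd-203": 0.15}

def functional_hits(native, crippled, cutoff=BINDING_CUTOFF):
    """Keep compounds that bind the native target but not its denatured or
    mutated analogue; binding both forms suggests a non-functional site."""
    return [c for c in native if native[c] >= cutoff and crippled[c] < cutoff]

print(functional_hits(native_signal, crippled_signal))
# ['cpd-201']  (cpd-202 binds both forms -> likely false positive)
```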
To conclude, there is great untapped potential in HTS. While technologies will continue to expand screening capacity, the guiding principles provided here should make screening efforts considerably more fulfilling and productive. As screening is a high-risk process, positive hits are never guaranteed. Even so, each HTS initiative provides useful information and screening results that should be captured and valued. Better design of the overall screening effort would enable us to prioritize and focus on the hits most likely to succeed in the subsequent pre-clinical and clinical phases. These strategies could produce worthy outcomes from screening expeditions, ultimately increasing the rates of positive results from HTS and the yields from drug discovery.
References