Remember: this is an absolutely critical step; almost everything else can be fixed later.
Proof that modulation of the identified target in a model system has the desired impact on biological activity, and that this can be linked to therapeutic utility. This might be achieved through the identification of genetic mutations in the human population (for example, CCR5 mutations and HIV), gene knock-out studies in mice, the use of antibodies, drug-resistant mutations, or siRNA experiments (for example, siRNA and neuropathic pain). It is worth noting that 51% of phase II clinical trial failures are due to lack of efficacy DOI; many of these failures can certainly be attributed to inadequate target validation. Failure at this stage of development comes after huge amounts of money and resources have been committed to the project.
The Centre for Therapeutic Target Validation platform brings together information on the relationships between potential drug targets and diseases. The core concept is to identify evidence of an association between a target and a disease from various data types. The Centre for Therapeutic Target Validation is a pre-competitive public-private venture that aims to provide evidence on the biological validity of therapeutic targets and an initial assessment of the likely effectiveness of pharmacological intervention on these targets, using genome-scale experiments and analysis. The platform currently contains 28,931 targets and 3,049,882 associations covering 10,053 diseases.
A target can be a protein, protein complex or RNA molecule, but we integrate evidence through the gene that codes for the target. In the same way, we describe diseases through a structure of relationships called the Experimental Factor Ontology (EFO), which allows us to bring together evidence across different but related diseases. The platform supports workflows starting from either a target or a disease, and presents the evidence for target–disease associations in a number of ways through association and evidence pages.
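The idea of rolling evidence up through an ontology can be illustrated with a minimal sketch. This is not the Open Targets implementation; the ontology fragment, gene names, and scoring (a simple evidence count) are invented for illustration. The point is only that evidence attached to a specific disease term also supports its more general ancestor terms.

```python
# Toy sketch (not the real platform) of ontology-aware evidence
# aggregation: evidence linking a gene to a specific disease term is
# also counted towards every ancestor (broader) term, as in EFO.

from collections import defaultdict

# Hypothetical fragment of a disease ontology: child term -> parent term
PARENT = {
    "chronic lymphocytic leukemia": "leukemia",
    "acute myeloid leukemia": "leukemia",
    "leukemia": "cancer",
}

def ancestors(term):
    """Yield the term itself, then every ancestor up to the root."""
    while term is not None:
        yield term
        term = PARENT.get(term)

def aggregate(evidence):
    """Roll per-term evidence counts up to all ancestor terms.

    evidence: list of (gene, disease_term) pairs.
    Returns {(gene, term): count} including inherited evidence.
    """
    counts = defaultdict(int)
    for gene, disease in evidence:
        for term in ancestors(disease):
            counts[(gene, term)] += 1
    return dict(counts)

# Invented example evidence items
evidence = [
    ("TP53", "chronic lymphocytic leukemia"),
    ("TP53", "acute myeloid leukemia"),
    ("BRAF", "leukemia"),
]
counts = aggregate(evidence)
# Evidence for the two specific leukemias also supports the broader
# terms: counts[("TP53", "leukemia")] == 2, counts[("TP53", "cancer")] == 2
```

A query at the level of "cancer" thus sees evidence gathered from all its descendant disease terms, which is what makes workflows starting from a broad disease useful.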
The current version of DisGeNET (v3.0) contains 429,111 associations between 17,181 genes and 14,619 diseases, disorders, and clinical or abnormal human phenotypes.
Open Targets is a public-private partnership that uses human genetics and genomics data for systematic drug target identification and prioritisation. The current focus is on oncology, immunology and neurodegeneration.
Generating and interpreting the data required to identify a good drug target demands a diverse set of skills, backgrounds, evidence types and technologies, which do not exist today in any single entity. Open Targets brings together expertise from seven complementary institutions to systematically identify and prioritise targets from which safe and effective medicines can be developed.
There may also be known small molecules that exert the desired effect but perhaps with a sub-optimal profile. It is absolutely critical to use good quality probes to avoid generating misleading results. Chemical Probes are an increasingly important resource.
A gene knockout is a genetic technique in which one of an organism's genes is made inoperative, so that the role of the gene can be inferred from the difference between the knockout and normal organisms. However, a gene knockout may have an impact on the development of the organism, and so conditional knockouts, which allow gene deletion in a tissue- or time-specific manner, have been developed. In a further refinement, a tissue-specific conditional knockout inactivates the target gene only in a specific tissue; in all other tissues, the target gene is expressed normally.
However, a publication in Immunity (Volume 43, Issue 1, pp. 200–209, 21 July 2015) DOI raises concerns about the procedure.
Targeted mutagenesis in mice is a powerful tool for functional analysis of genes. However, genetic variation between embryonic stem cells (ESCs) used for targeting (previously almost exclusively 129-derived) and recipient strains (often C57BL/6J) typically results in congenic mice in which the targeted gene is flanked by ESC-derived passenger DNA potentially containing mutations.
So we have a situation in which "passenger mutations" appear flanking the targeted gene, and these may confound interpretation of the observed phenotype.
For instance, it was thought that the gene Casp1 was the principal player that triggered an inflammatory response and cell death pathway in response to foreign organisms, a step involved in lethal shock. That's because, according to a 1995 study and subsequent work, Casp1 knockout mice did not go into septic shock when challenged with molecules signalling foreign invaders. However, in 2011, researchers at Genentech showed that many Casp1 knockout mice also harboured a mutated Casp11 gene from the 129 mouse strain. The researchers showed that this passenger mutation in Casp11 was partly responsible for the animals' resistance to shock.
There is now a web-based tool Me-PaMuFind-It that allows scientists to easily get a list of potential passenger mutations present in any 129-derived transgenic mouse.
Small interfering RNA
Small interfering RNA (siRNA) is a class of double-stranded RNA that plays a role in the RNA interference pathway, first identified by Sir David Baulcombe. siRNA can be used to interfere with the expression of specific genes with complementary nucleotide sequences, effectively knocking down the gene of interest. It should be noted, however, that genes with only incomplete complementarity can be inadvertently downregulated by the siRNA.
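The off-target risk comes down to sequence matching, which a short sketch can make concrete. The guide sequence and transcripts below are invented, and real off-target prediction is far more involved (thermodynamics, mismatch tolerance, 3' UTR context); this only illustrates that a match to the short "seed" region of the guide strand, rather than the full sequence, is enough to downregulate an unintended transcript.

```python
# Illustrative sketch only: classify how an mRNA pairs with an siRNA
# guide strand. A perfect match silences the intended gene; a match to
# just the seed region (guide positions 2-8) can cause off-target
# downregulation of other genes.

COMPLEMENT = str.maketrans("AUCG", "UAGC")

def reverse_complement(rna):
    """Reverse complement of an RNA sequence (A-U, C-G pairing)."""
    return rna.translate(COMPLEMENT)[::-1]

def matches(guide, transcript):
    """Classify how a transcript pairs with the siRNA guide strand."""
    full_site = reverse_complement(guide)       # perfect target site
    seed_site = reverse_complement(guide[1:8])  # seed, positions 2-8
    if full_site in transcript:
        return "full match: intended knockdown"
    if seed_site in transcript:
        return "seed-only match: possible off-target downregulation"
    return "no match"

guide = "UGAGGUAGUAGGUUGUAUAGUU"  # hypothetical 22-nt guide strand
on_target = "AAG" + reverse_complement(guide) + "CCA"
off_target = "AAG" + reverse_complement(guide[1:8]) + "CCA"

print(matches(guide, on_target))   # intended knockdown
print(matches(guide, off_target))  # off-target hit via the seed alone
```

The second transcript shares only seven complementary nucleotides with the guide, yet still registers as a hit, which is why siRNA screens need careful off-target controls.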
It may also be important to determine the effect of modulation of the target in healthy cells to evaluate possible mechanism-based toxicity. Owing to pleiotropic effects, the same target might have different functions in different organ systems or at different time points during development and adulthood. Further hints for potential adverse events can be obtained from data on knockout mice and from genetic deficiencies in humans.
Another way to deplete endogenous proteins is Trim-Away, a technique for acutely degrading endogenous proteins in mammalian cells without prior modification of the genome or mRNA. Trim-Away harnesses the cellular protein degradation machinery to remove unmodified native proteins within minutes of application.
We reasoned that the antibody receptor and ubiquitin ligase TRIM21 could be used as a tool to drive the degradation of endogenous proteins by using a 3-step strategy: first, the introduction of exogenous TRIM21; second, the introduction of an antibody against the protein of interest; and third, TRIM21-mediated ubiquitination followed by degradation of the antibody-bound protein of interest.
There is more information on the MRC website.
Reproducibility of target validation studies
Unfortunately, the reproducibility of some target identification/validation studies has been questioned; the presentations from a webcast devoted to a discussion of the issue of reproducibility are available here. It is difficult to estimate the number of studies that cannot be reproduced, but anecdotal reports would suggest more than half. A recent study in Science (28 August 2015, Vol. 349, no. 6251) DOI that attempted to replicate published work in psychological science found that only 39% of effects replicated the original result. Similarly, Amgen tried to replicate 53 'landmark' cancer studies and succeeded in only six cases (Nature 483, 531–533, 29 March 2012) DOI.
This quote from In the Pipeline is perhaps a useful reminder.
A robust result can probably be reproduced even if you switch to a different buffer, or if your cell lines have been passaged a different number of times, or if the concentration of the test molecule is a bit off, etc. The more persnickety and local the conditions have to be, the less robust your result is, and in general (sad to say) the lower the odds of it having a real-world impact in drug discovery. There are certainly important things that can only be demonstrated under very precise conditions, don't get me wrong – but when you're expecting umpteen thousand patients to take your drug candidate and show real effects, your underlying hypothesis needs to be able to take a good kicking and still come through.
The impact of an irreproducible finding may not be uncovered until much later; it has been suggested that this is a major factor in the failure rate of phase II clinical trials DOI.
In almost two-thirds of the projects, there were inconsistencies between published data and in-house data that either considerably prolonged the duration of the target validation process or, in most cases, resulted in termination of the projects.
There have been several studies looking at the possible causes of the failure to reproduce work. In 2011, an evaluation of 246 antibodies used in epigenetic studies found that one-quarter failed tests for specificity, meaning that they often bound to more than one target. Four antibodies were perfectly specific, but to the wrong target (Reproducibility crisis: Blame it on the antibodies). More recently, a similar issue arose in oestrogen receptor beta research DOI.
Another paper, "The antibody horror show: an introductory guide for the perplexed" DOI, addresses the same problem.
We here perform a rigorous validation of 13 anti-ERβ antibodies, using well-characterized controls and a panel of validation methods. We conclude that only one antibody, the rarely used monoclonal PPZ0506, specifically targets ERβ in immunohistochemistry….While our study focuses on ERβ, we do not think that antibodies towards ERβ are significantly poorer than those targeting other proteins, and it is not unlikely that this problem generates similar obstacles in many other fields.
Colourful as this may appear, the outcomes for the community are uniformly grim, including badly damaged scientific careers, wasted public funding, and contaminated literature. As antibodies are amongst the most important of everyday reagents in cell biology and biochemistry, I have tried here to gently suggest a few possible solutions, including: a move towards using recombinant antibodies; obligatory unique identification of antibodies, their immunogens, and their producers; centralized international banking of standard antibodies and their ligands; routine, accessible open-source documentation of user experience with antibodies; and antibody-user certification.
The misidentification of cell lines is a stubborn problem in the biomedical sciences, contributing to the growing concerns about errors, false conclusions and irreproducible experiments DOI
The problem was highlighted in a recent editorial.
Here we attempt to make a conservative estimate of this ‘contaminated’ literature. We found 32,755 articles reporting on research with misidentified cells, in turn cited by an estimated half a million other papers. The contamination of the literature is not decreasing over time and is anything but restricted to countries in the periphery of global science.
For companies like Genentech, though, the economic driver is to make sure that scientists there know exactly which cells they're using. Neve runs a massive cell bank at the Bay Area biotech giant, and like other biotech and pharmaceutical companies, it maintains strict quality controls. When scientists at his company find an intriguing result from an academic lab, the first thing they do is try to replicate the result. Neve said often they can't, and misidentified cells are a common reason.
There are also issues with fluorescent probes. Ca2+ signalling events in many different cell types are tracked with fluorescent Ca2+ indicators, such as Fluo-4, Rhod-2, and Fura-2, and can be inhibited with the Ca2+ chelator BAPTA. Smith et al. DOI found that these commonly used reagents inhibit the Na,K-ATPase, a membrane protein that exchanges intracellular Na+ for extracellular K+ and thus helps set the resting membrane potential and regulate cellular volume. This function is so critical that this enzyme alone accounts for about 40% of the ATP usage in a cell, and the fluorescent probes inhibit the ATPase's activity by 30 to 80% in various cell lines. Since calcium indicators are used in so many assays, "a critical review of data obtained with chemical Ca2+ indicators may be necessary".
Human cancer cell lines are critical components of in vitro molecular biology, and it should come as no surprise to those who work in the area that there can be significant differences between batches of cells derived from a single cell. This variability has been discussed in some detail in "Genetic and transcriptional evolution alters cancer cell line drug response" Link, which used genomic analyses of 106 human cell lines grown in two laboratories to show extensive clonal diversity.
This should not be surprising, since many cancer cell lines have disrupted DNA repair mechanisms and a loss in the fidelity of DNA replication. This feature can be used to evaluate resistance to a chemotherapeutic agent; however, it does complicate attempts to reproduce literature findings.
When the 27 MCF7 strains were tested against 321 anti-cancer compounds, we uncovered considerably different drug responses: at least 75% of compounds that strongly inhibited some strains were completely inactive in others.
The promise and peril of chemical probes, Nature Chemical Biology 11, 536–541 (2015) DOI.
Target validation using chemical probes, Nature Chemical Biology 9, 195–199 (2013) DOI.
Passenger mutations confound interpretation of all genetically modified congenic mice, Immunity 43, 200–209 (2015) DOI.
Identifying and validating novel targets with in vivo disease models: guidelines for study design, Drug Discovery Today, Volume 12, Numbers 11/12 (June 2007) DOI.
Common pitfalls in preclinical cancer target validation, Nature Reviews Cancer 17, 425–440 (2017) DOI.
Updated 28 December 2018