Tag: RNAi screening

Making sense of siGENOME deconvolution


As discussed previously, deconvoluted Dharmacon siGENOME pools often give surprising results.  (Deconvolution is the process of testing the 4 siRNAs in a pool individually.  This is usually done in the validation phase of siRNA screens.)

One way to compare the relative contribution of target gene and off-target effects is to calculate the correlation between reagents having the same target gene or the same seed sequence.  One of the first things we do when analysing single siRNA screens is to calculate a robust form of the intraclass correlation (rICC, see discussion at bottom for more about this).

Recently we were analysing deconvolution data from Adamson et al. (2012) and calculated the following rICCs.  (The phenotype measured was relative homologous recombination.)

Grouping variable   rICC    95% confidence interval

Target gene         0.040   −0.021 to 0.099
Antisense 7mer      0.383   0.357 to 0.413
Sense 7mer          0.093   0.054 to 0.129

Besides the order of magnitude difference between target gene and antisense seed correlation (which is commonly observed in RNAi screens), what stands out is the ~2-fold difference between the correlation by target gene and sense seed.

Very little of the sense strand should be loaded into RISC if the siRNAs were designed with appropriate thermodynamic considerations (the 5′ antisense end should be less stable than the 5′ sense end, to ensure that the antisense strand is preferentially loaded into RISC).

The above correlations suggest that a substantial amount of sense strand is nevertheless making it into the RISC complex.

Here is the distribution of delta-delta G for siPOOLs and siGENOME siRNAs targeting the same 500 human kinases (see bottom of post for discussion of the calculation).  A positive delta-delta G means that the sense end is more thermodynamically stable than the antisense end, favouring the loading of the antisense strand into RISC.

[Figure: delta-delta G distributions for siPOOL and siGENOME siRNAs]
This discrepancy in delta-delta G is also consistent with comparison of mRNA knockdown:

The siGENOME knockdown data comes from 774 genes analysed by qPCR in Simpson et al. (2008).  The siPOOL knockdown data is from 223 genes where we have done qPCR validation.

Of note, the siGENOME pools were tested at 100 nM, whereas siPOOLs were tested at 1 nM.

(It should be mentioned that, although consistent with the observed differences in ddG, this is only an indirect comparison, and delta-delta G is not the only determinant of functional siRNAs.)

 

Notes on intraclass correlation

Intraclass correlation measures the agreement between multiple measurements (in this case, multiple siRNAs with the same target gene, or multiple siRNAs with the same seed sequence).   One could also pair off all the repeated measures and calculate correlation using standard methods (parametrically using Pearson’s method, or non-parametrically using Spearman’s method).  The main problem with such an approach is that there is no natural way to determine which measure goes in the x or y column.  Correlations are normally between different variables (e.g. height and weight).  In a case of repeated measures, there is no natural order, so the intraclass correlation (ICC) is the more correct way to measure the similarity of within-group measurements.  As ICC depends on a normal distribution, datasets must first be examined, and if necessary, transformed beforehand.

Robust methods have the advantage of permitting the use of untransformed data, which is especially useful when running scripts across hundreds of screening dataset features.  The algorithm we use calculates a robust approximation of the ICC by combining resampling and non-parametric correlation.

Here is the algorithm, in a nutshell:

  1. Group observations (e.g. cell count) by the grouping variable (e.g. target gene or antisense seed)
  2. Randomly assign one value of each group to the x or y column (groups with only 1 observation are skipped)
    • for example, if the grouping variable is target gene and siRNAs targeting PLK1 had the values 23, 30, 37, 45, the program would randomly choose 1 of the values for the x column and another for the y column
  3. Calculate Spearman’s rho (non-parametric measure of correlation)
  4. Repeat steps 1-3 a set number of times (e.g. 300) and store the calculated rho’s
  5. Calculate mean of the rho values from 4.  This is the robust approximation of the ICC (rICC).
    • Values from 4 are also used to calculate confidence intervals.
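The steps above can be sketched in a few lines of Python.  This is a minimal sketch: the function name and the use of scipy’s `spearmanr` are our own choices, not the actual program (which is available upon request).

```python
# Sketch of the rICC resampling procedure described above.
# Function and variable names are our own; scipy is assumed.
import random
from collections import defaultdict
from scipy.stats import spearmanr

def robust_icc(values, groups, n_iter=300, seed=0):
    """Approximate the intraclass correlation by repeatedly pairing two
    random members of each group and averaging Spearman's rho."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for v, g in zip(values, groups):
        by_group[g].append(v)
    # Groups with only one observation are skipped (step 2).
    usable = [vals for vals in by_group.values() if len(vals) > 1]
    rhos = []
    for _ in range(n_iter):
        xs, ys = [], []
        for vals in usable:
            x, y = rng.sample(vals, 2)  # random x/y assignment per group
            xs.append(x)
            ys.append(y)
        rho, _ = spearmanr(xs, ys)       # step 3
        rhos.append(rho)
    return sum(rhos) / len(rhos)         # step 5: the rICC
```

In practice the per-iteration rho values are also kept, so that percentile confidence intervals can be read off directly (step 5, second bullet).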

The program that calculates this is available upon request.

Notes on calculating delta-delta G

Delta-delta G was calculated using the Vienna RNA package, as detailed here: https://www.biostars.org/p/58979/ (in answer by Brad Chapman).

The delta-delta G was calculated using the 3 terminal bps.  We found that the ddG of the terminal 3 bps had the strongest correlation with observed knockdown.  Others (e.g. Schwarz et al., 2003 and Khvorova et al., 2003) have used the terminal 4 bps.
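As a rough illustration of the terminal-bp idea (the real analysis used the Vienna RNA package, as linked above; the per-pair energies below are placeholders for illustration, not published nearest-neighbour parameters):

```python
# Illustrative sketch only: duplex-end stability is approximated here with
# crude per-base-pair values.  GC pairs are simply treated as more stable
# than AU pairs; the numbers are placeholders, not real parameters.
PAIR_DG = {'G': -3.0, 'C': -3.0, 'A': -2.0, 'U': -2.0}  # kcal/mol, illustrative

def delta_delta_g(sense, n=3):
    """ddG over the n terminal bps of a fully complementary duplex.
    `sense` is the 5'->3' sense strand.  A positive value means the sense
    5' end is more stable, favouring antisense-strand loading into RISC."""
    sense_5p = sense[:n]       # pairs at the sense 5' end
    antisense_5p = sense[-n:]  # antisense 5' end pairs with the sense 3' end
    dg_sense_end = sum(PAIR_DG[b] for b in sense_5p)
    dg_antisense_end = sum(PAIR_DG[b] for b in antisense_5p)
    return dg_antisense_end - dg_sense_end
```

A GC-rich sense 5′ end and AU-rich sense 3′ end thus give a positive ddG, the antisense-favouring situation described above.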

 

Want to receive regular blog updates? Sign up for our siTOOLs Newsletter:

Follow us or share this post:
The final RNAiL?


A recent article in The Scientist asks whether, in light of a paper by Lin et al. showing phenotypic discrepancies between RNAi and CRISPR, this is not ‘the last nail in the coffin for RNAi as a screening tool’?

The paper in question found that a gene (MELK) that had been shown by many RNAi-based studies to be critical for several cancer types shows no effect when knocked out via CRISPR.  They also report that in relevant published genome-wide screens, MELK was not at the top of the hit lists.

Does this mean that the papers that used RNAi were unlucky and off-target effects were responsible for their observed phenotypes?

Gray et al. identified MELK as a gene of interest based on microarray experiments.  They then designed RNAi experiments to test its role in proliferation.  Assuming that this study and the subsequent ones followed good RNAi experimental design (using reagents with varying seed sequences, testing the correlation between gene knockdown and phenotypic strength, etc.), we can be fairly confident that MELK is involved in proliferation.  It might not be the most essential player, which would explain why it is not at the top of screening hit lists.  And screening lists have the drawback of enriching for off-target hits.

Another possibility is that Lin et al. have observed a known complicating feature of knock-out screens: genetic compensation.  Although they undertake experiments to address this issue, it could be that compensation takes place too quickly for their experiments to rule it out.  Furthermore, they could have addressed this issue by testing knock-down reagents themselves, and checking whether genes they hypothesise as responsible for the supposed off-target effect in the published RNAi work are in fact down-regulated.  C911 reagents could also be used to test for off-target effects.  This is extra work, but given that they are disputing the results in many published studies, this seems justified.

As regards the role of RNAi in screening, The Scientist concludes with the following (suggesting that their answer to the question of whether this is the final nail is also No):

In the meantime, one obvious solution to the problem of target identification and validation is to use both CRISPR and RNAi to validate a target before it moves into clinical research, rather than relying on a single method. “We have CRISPR and short hairpin reagents for every gene in the human genome,” said Bernards. “So when we see a phenotype with CRISPR, we validate with short hairpin, and the other way around. I think that would be ideal.”

Although we agree that validating CRISPR hits with RNAi reagents is important (especially if drugability is a concern), one has to be careful with RNAi reagents, like single siRNAs/shRNAs or low-complexity pools, that are susceptible to seed-based off-target effects.  For validating CRISPR screening hits, siPOOLs provide the best protection against unwanted off-target effects, saving you time, money, and disappointment during the validation phase.

 


Where’s the beef?


In our last blog entry, we discussed a classic RNAi screening paper from 2005 that showed that the top 3 screening hits were due to off-target effects.

In this post, we analyse a more recent genome-wide RNAi screen by Hasson et al., looking in more detail at what proportion of top screening hits are due to on- vs. off-target effects.

Hasson et al. used the Silencer Select library, a second-generation siRNA library designed to optimise on-target knock down, and chemically modified to reduce off-target effects.  Each gene is covered by 3 different siRNAs.

To begin the analysis, we ranked the screened siRNAs in descending order of % Parkin translocation, the study’s main readout.

We then performed a hypergeometric test on all genes covered by the ranked siRNAs.  For example, if gene A has three siRNAs that rank 30, 44, and 60, we calculate a p-value for the likelihood of having siRNAs that rank that highly (more details provided at bottom of this post).  It’s the underlying principle of the RSA algorithm, widely used in RNAi screening hit selection.  If the 3 siRNAs for gene B have a ranking of 25, 1000, and 1500, the p-value will be higher (worse) than for gene A.

The same type of hypergeometric testing was done for the siRNA seeds in the ranked list.  For example, if the seed ATCGAA was found in siRNAs having ranks of 11, 300, 4000, and 6000, we would calculate the p-value for those rankings.  Seeds that are over-represented in siRNAs at the top of the ranked list will have lower p-values.

After doing these hypergeometric tests, we had a gene p-value and a seed p-value for each row in the ranked list.  We could then go through the ranked list and estimate whether each phenotype is due to an on- or off-target effect by comparing the gene and seed p-values.  [As a cutoff, we said that the effect is due to either gene or seed if the difference in p-value is at least two orders of magnitude.  If the difference is less than this, the cause was considered ambiguous.]
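The bracketed assignment rule can be sketched as follows (a minimal sketch; the function name and cutoff parameter are ours):

```python
# Sketch of the gene/seed/ambiguous call: the effect is assigned to
# whichever p-value is at least two orders of magnitude smaller.
import math

def classify_effect(gene_p, seed_p, log10_cutoff=2.0):
    """Return 'gene', 'seed', or 'ambiguous' for one row of the ranked list."""
    diff = math.log10(seed_p) - math.log10(gene_p)
    if diff >= log10_cutoff:
        return "gene"   # gene p-value >= 100x smaller: on-target effect
    if diff <= -log10_cutoff:
        return "seed"   # seed p-value >= 100x smaller: off-target effect
    return "ambiguous"
```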

After assigning the effect as gene/seed/ambiguous, we then calculated the cumulative percent of hits by effect at each position in the ranked list.   Those fractions were then plotted as a stacked area chart (here, looking at the top 200 siRNAs from the screen):

[Stacked area chart: cumulative percent of the top 200 siRNAs attributed to gene, seed, or ambiguous effects]

The on-target effect is sandwiched between the massive ‘bun’ of off-target effects and ambiguous cause.  We are reminded of these classic commercials from the 80s:

 


 

Note on p-value calculations:

P-values were calculated using the cumulative hypergeometric test (which gives the probability of finding that many or more members of a particular group, in our case a particular gene or seed sequence).  The p-value associated with a gene or seed is the best p-value over all the performed tests.  For example, assume a gene had siRNAs with the following ranks: 5, 20, 1000.  The first test calculates the p-value for finding 1 (of the 3) siRNAs when taking a sample of 5 siRNAs.  The next test calculates the p-value for finding 2 (of 3) siRNAs when taking a sample of 20 siRNAs.  And the last is the probability of finding 3 (of 3) siRNAs when taking a sample of 1000.  If the best p-value came from the second test (2 of 3 siRNAs found in a sample size of 20), that is the p-value the gene receives.  This is also the approach used by the RSA (redundant siRNA activity) algorithm.  One advantage of RSA is that it can compensate for variable knock down efficiency of the siRNAs covering a gene (e.g. if 1 of 3 gives little knockdown).
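The best-prefix test can be sketched with scipy’s hypergeometric distribution (a sketch of the idea, not the RSA implementation; names are ours):

```python
# Sketch of the RSA-style best-prefix hypergeometric p-value described above.
# n_total is the total number of siRNAs in the ranked list.
from scipy.stats import hypergeom

def rsa_pvalue(ranks, n_total):
    """Best cumulative hypergeometric p-value over the prefixes ending at
    each of this gene's (or seed's) 1-based ranks in the ranked list."""
    ranks = sorted(ranks)
    k = len(ranks)  # number of siRNAs belonging to this gene/seed
    best = 1.0
    for i, r in enumerate(ranks, start=1):
        # P(>= i of the k members fall within the top r positions)
        p = hypergeom.sf(i - 1, n_total, k, r)
        best = min(best, p)
    return best
```

For the worked example above (ranks 5, 20, and 1000 in a 10,000-siRNA list), the second prefix gives the best p-value, so that is the value the gene receives.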

Classic Papers Series: Lin et al. show RNAi screen dominated by seed effects


Over the coming months, we will highlight a number of seminal papers in the RNAi field.

The first such paper is from 2005 by Lin et al. of Abbott Laboratories, who showed that the top hits from their RNAi screen were due to seed-based off-target effects, rather than the intended (and at that time, rather expected) on-target effect.

The authors screened 507 human kinases with 1 siRNA per gene, using a HIF-1 reporter assay to identify genes regulating hypoxia-induced HIF-1 response.

In the validation phase of their screen, they tested new siRNAs for hit genes, but found that they failed to reproduce the observed effect, even when using siRNAs that had a better on-target knock down than the pass 1 siRNAs.

Figure 1A.  Left panel shows on-target knock down of the pass 1 siRNA for GRK4 (O) and the new design (N).  Centre panel shows a Western blot of protein levels.  Right panel shows HIF-1 reporter activity for the positive control (HIF1A) and the original (O) and new (N) siRNAs.

The on-target knock down is much improved for the new design, yet its reporter activity is indistinguishable from the negative control, whereas the pass 1 siRNA with poor knock down gives almost as strong a result as HIF1A (the positive control).

By qPCR, they then showed that GRK4(O) and another one of the top 3 siRNAs silence HIF1A (the positive control gene).  Using a number of different target constructs they also nicely show that it was due to seed-based targeting in the 3′ UTR.

Although the authors screened at a high initial concentration (100 nM), the observed off-targets persisted at 5 nM, suggesting that just screening at lower concentrations would not have improved their results.

The authors conclude:

In addition, due to the large percentage of the off-target hits generated in the screening, using a redundant library without pooling in the primary screen could significantly reduce the efforts required to eliminate off-target false positives and therefore, will be a more efficient design than using a pooled library.

This is true for low-complexity pools, but high-complexity pools can overcome this problem by providing a single reliable result for each screened gene.


The nasty, ugly fact of off-target effects


Once upon a time, it was imagined that siRNAs specifically knock down the intended target gene.

Unfortunately, this turned out to be wrong.

The disappointing results from siRNA screening following the initial high hopes bring to mind T.H. Huxley’s famous quote about a beautiful theory being killed by an ugly, nasty little fact.

As pointed out by S.J. Gould in Eight Little Piggies, the origin of the quote is given in the autobiography of Sir Francis Galton.

Galton’s autobiography is inspirational.  As one reviewer put it, “there is a feeling of calmness and awe that comes from knowing that a person of his genius, wisdom and versatility actually existed.”  He was 70, and had already made significant contributions to genetics, statistics, meteorology, and geography, when he published his first major work on fingerprints, which would become the basis for modern forensic fingerprint analysis.

His work on fingerprints also provides the context for the famous Huxley quote:

Much has been written, but the last word has not been said, on the rationale of these curious papillary ridges; why in one man and in one finger they form whorls and in another loops. I may mention a characteristic anecdote of Herbert Spencer in connection with this. He asked me to show him my Laboratory and to take his prints, which I did. Then I spoke of the failure to discover the origin of these patterns, and how the fingers of unborn children had been dissected to ascertain their earliest stages, and so forth. Spencer remarked that this was beginning in the wrong way; that I ought to consider the purpose the ridges had to fulfil, and to work backwards. Here, he said, it was obvious that the delicate mouths of the sudorific glands required the protection given to them by the ridges on either side of them, and therefrom he elaborated a consistent and ingenious hypothesis at great length.

I replied that his arguments were beautiful and deserved to be true, but it happened that the mouths of the ducts did not run in the valleys between the crests, but along the crests of the ridges themselves. He burst into a good-humoured and uproarious laugh, and told me the famous story which I have heard from each of the other two who were present on the occurrence. Huxley was one of them. Spencer, during a pause in conversation at dinner at the Athenaeum, said, "You would little think it, but I once wrote a tragedy." Huxley answered promptly, "I know the catastrophe." Spencer declared it was impossible, for he had never spoken about it before then. Huxley insisted. Spencer asked what it was. Huxley replied, "A beautiful theory, killed by a nasty, ugly little fact."

Memories of My Life, pp 257-258
Don’t swallow the fly


There was a PI who screened one s i [RNA],

Oh, I don’t know why they screened one s i …

siRNA screens have a high false positive rate, due to pervasive off-target effects.

Confirming ‘hits’ from single-siRNA screens is a lot of work.  For low-complexity pool screens, it’s even worse (and, as we will discuss in a later post, less likely to result in true genes of interest).

Progressively, one accumulates a nearly indigestible set of experiments and analyses.

On the in vitro side:

  • Screen additional siRNAs for ‘hit’ genes.
  • Do quantitative PCR.  Single siRNAs vary significantly in their effectiveness, so look for correlation between knock-down and phenotypic strength.
  • Create C911 versions of hit siRNAs as off-target controls.  To rule out a confounding on-target effect, do qPCR.
  • Screen additional siRNAs with the same seed sequence as off-target controls (as done in a recent paper).
  • For low-complexity pools, test the siRNAs individually.
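For reference, a C911 control is mechanical to construct: bases 9-11 of the guide (antisense) strand are replaced by their complements, leaving the seed region (positions 2-8) intact so that seed-based off-target effects are preserved.  A minimal sketch (the function name is ours):

```python
# Sketch of building a C911 control: positions 9-11 (1-based, from the
# 5' end) of the guide strand are swapped for their complements; the
# seed (positions 2-8) is untouched, so seed-based off-targets persist.
COMP = {'A': 'U', 'U': 'A', 'G': 'C', 'C': 'G'}

def c911(guide):
    """Return the C911 version of a 5'->3' guide strand (RNA alphabet)."""
    mid = ''.join(COMP[b] for b in guide[8:11])  # positions 9-11
    return guide[:8] + mid + guide[11:]
```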

On the in silico side:

  • For each hit siRNA, look at plots of phenotypic effects of siRNAs with same seed sequence.
  • Adjust phenotypic scores based on predicted off-target effects for seeds.
  • Run off-target hit selection tools (like Haystack or GESS), to see if hit genes also show up as strong off-targets.

Does it really have to be so complicated?

Wouldn’t you prefer being able to trust your phenotypic readout?

Better yet, how about hits that don’t turn out to be mostly false-positives?

There is a simpler, better way.

siPools maximize the separation between on-target signal and off-target noise, making interpretation of RNAi phenotypes as clear as possible.

Celebrating 11 years of off-target effects


[Image: off-target birthday cake (see Notes below)]

 

This year marks the 11th anniversary of Jackson et al.‘s seminal paper on siRNA off-target effects.

The past decade of high-throughput siRNA screening is largely a deductive footnote to their observation that “…the vast majority of the transcript expression patterns were siRNA-specific rather than target-specific“.

  • 2005, Lin et al. show that top hits from RNAi screen are due to off-target effects
  • 2009, Bushman et al. report poor overlap between hits from HIV host factor screens
  • 2012, Marine et al. show that correlation between siRNAs for same gene is near zero, while seed sequences (involved in off-target effects) account for ~50 times more screening variance

[Figure: by-gene vs. by-seed correlations from Marine et al.]

  • 2013, Hasson et al. find little overlap between hits from a mitophagy assay run in parallel with different siRNA libraries

Wouldn’t it have been a minor miracle if the phenotype from the following transcriptional profile were due to knockdown of the intended gene?  (intended gene: Scyl1; gene actually responsible for the phenotype: Mad2L1; source)

[Figure: transcriptional profile for the siRNA (sirna_MA)]

We are not saying that siRNA screens are not useful.  There is some signal amongst the off-target noise.  But luck and a lot of work are required.  Among the top genes from the resulting ‘hit’ list, one must hope that a story can be made (TOMM7, a major character in the Hasson paper, was relatively far down the hit list, and its known location in the mitochondrial outer membrane made it more than a lucky guess).

But there is a better way.  By maximising the separation between on-target signal and off-target noise, siPools can provide clearer phenotypes, thereby reducing wasted effort and dependence on luck.

[Figure: transcriptional profile for the siPOOL (siPool_MA)]

 

Notes:

Birthday cake created using Fig 2b from Jackson et al. (heat map showing deregulation of off-target transcripts by siRNAs against IGF1R).

Calculation of variance explained by genes vs. seeds (from Marine et al.):

by-gene R = 0.073; by-seed R = 0.53; 0.53^2 / 0.073^2 ≈ 52.7

 
