Author: Andrew Walsh

Clearly compensating

Genetic compensation by transcriptional adaptation is a process whereby knocking out a gene (e.g. by CRISPR or TALEN) results in the upregulation of related genes that compensate for the loss of gene function.

A 2015 study by Rossi et al. (discussed previously) alerted researchers that CRISPR/TALEN knock-out experiments may be subject to such effects.

Genetic adaptation or compensation has long been known to mouse researchers creating knock-out lines.  In fact, one of our company founders also ran into this when trying to confirm an RNAi phenotype in a knock-out mouse line.  The knock-out mice, though not completely healthy, did not confirm the RNAi phenotype.

A paper published a couple of years before the Rossi paper also showed clearly that knock-outs can create off-target effects via transcriptional adaptation.

Hall et al. showed with an siRNA screen that the centrosomal protein Azi1 was required for ciliogenesis in mouse fibroblasts, confirming previous work in zebrafish and fly.

Their Azi1 siRNA targeted the 3′ UTR, and they were able to rescue the phenotype with a plasmid expressing just the CDS (bar at far right), confirming that their phenotype was due to on-target knockdown:

However, knock-out mouse embryonic fibroblast cells (created by gene trapping) did not show any differences in the number of cilia, centrosomes, or centrioles compared to wildtype (+/+ is wild type, Gt/Gt is the homozygous knock-out):

The one phenotypic difference they observed was that male knock-out mice were infertile, due to defective formation of sperm flagella.  Female mice had normal fertility.  Both were compensating, but only one showed a visible phenotype.

The authors note the benefits of RNAi in comparison to knock-out screening:

Discrepancies between the phenotypic severity observed with siRNA knock-down versus genetic deletion has previously been attributed to the acute nature of knock-down, allowing less time for compensation to occur

The excitement surrounding CRISPR should not diminish the continued value of RNAi screening.

Pooling only 4 siRNAs increases off-target effects

In a previous post, we showed how siRNA pools with small numbers of siRNAs can exacerbate off-target effects.

Low-complexity pools (with 4 siRNAs per gene) should thus lead to overall stronger off-target effects than single siRNAs.

This phenomenon was addressed in a bioinformatics paper a few years back.  The authors created a model to predict gene phenotypes based on the combined on-target and off-target effects of siRNAs.

The siRNAs were screened either individually (Ambion and Qiagen) or in pools of four (Dharmacon siGENOME), in 3 different bacterial-infection assays (B. abortus, B. henselae, and S. typhimurium).

The model assumed that each siRNA silenced its on-target gene to the same level.  For off-target silencing, they used the predictions from TargetScan, a program for calculating seed-based knockdown by miRNAs or siRNAs.

In order to assess model quality, they checked how similar the gene phenotype predictions were when using different reagent types in the same pathogen-infection screen.

The following figure shows the rank-biased overlap (a measure of how similar lists are with regards to top- and bottom-ranked items), when estimating siGENOME off-target knockdown in one of 2 ways:

A) using the maximum TargetScan score for any of the 4 siRNAs in the siGENOME pool

B) using the mean TargetScan score for the 4 siRNAs

If low-complexity pooling increases the degree of off-target effects, we would expect the maximum TargetScan score to produce better model concordance.

And that is what the authors found.  (The two plots show the rank-biased overlap for the top and bottom of the phenotype-ranked lists, respectively.)

The off-target effect of a 4-siRNA, low-complexity pool is best described by the strongest off-target effect of any of the individual siRNAs.
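For readers who want to compute rank-biased overlap on their own hit lists, here is a minimal Python sketch of the truncated form of the measure (Webber et al. 2010).  The example gene lists and the persistence parameter p are illustrative, and the extrapolation term of the full formula is omitted:

```python
def rbo(list1, list2, p=0.9, depth=None):
    """Truncated rank-biased overlap between two ranked lists (higher = more similar)."""
    depth = depth or min(len(list1), len(list2))
    seen1, seen2, score = set(), set(), 0.0
    for d in range(1, depth + 1):
        seen1.add(list1[d - 1])
        seen2.add(list2[d - 1])
        # agreement at depth d, weighted so that top ranks count most
        score += (p ** (d - 1)) * len(seen1 & seen2) / d
    return (1 - p) * score

# example: two gene rankings obtained with different reagent types (hypothetical)
print(rbo(["GENE1", "GENE2", "GENE3", "GENE4"],
          ["GENE1", "GENE3", "GENE5", "GENE2"]))
```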

As discussed in our NAR paper, pooling a minimum of 15 siRNAs is required to reliably prevent off-target effects.

Citations of our Nucleic Acids Research Paper

Our 2014 Nucleic Acids Research paper provides an excellent overview of the siPOOL technology.  Google Scholar shows that our paper has been cited 64 times.

To put this into perspective, the 2012 PLoS One paper on C911 controls by Buehler et al. has 72 citations.  C911 controls are probably the most effective way to determine whether a single-siRNA phenotype is due to an off-target effect.

These citation numbers show that siPOOLs have good mind share when researchers consider the issue of RNAi off-target effects.

We have noticed, however, that in some cases our NAR paper is cited to justify approaches that we do not endorse.

For example, two recent papers (1, 2) cite our paper as support for the use of Dharmacon ON-TARGETplus 4-siRNA pools to reduce the potential for off-target effects.

Our paper shows, however, that high-complexity siRNA pools (> 15 siRNAs) are needed to reliably reduce off-target effects.

We have also discussed how low-complexity siRNA pools can in fact exacerbate off-target effects.

There’s an old saying that any publicity is good publicity, and we are certainly thankful that these authors have referenced our paper, even if we don’t agree with the interpretations.

And we are especially grateful to all the researchers who have purchased siPOOLs and referred to our products in their publications.

Low complexity pooling does not prevent siRNA off-targets

Summary: Low-complexity siRNA pooling (e.g. Dharmacon siGENOME SMARTpools) does not prevent siRNA off-targets.  It may in fact exacerbate off-target effects.  Only high-complexity pooling (siPOOLs) can reliably ensure on-target phenotypes.

Low-complexity pooling increases the number of siRNA off-targets

One of the claims often made in favour of low-complexity pooling (e.g. Dharmacon siGENOME SMARTpools) is that this pooling reduces the number of seed-based off-target effects compared to single siRNAs.

If this were true, we would expect different low-complexity siRNA pools for the same gene to give similar phenotypes.  But this is not the case.

Published expression data shows that low-complexity pooling actually increases the number of off-targets.

Kittler et al. (2007) looked at the effect of combining differing numbers of siRNAs in low- to medium-complexity siRNA pools (pool sizes were 1, 3, 5, 9, and 12).

Their work showed that the number of down-regulated genes (50% or greater silencing) actually increases when small numbers of siRNAs are combined.  Only when larger numbers of siRNAs are combined does the number of off-targets start to drop:

 

 

[The figure is based on data from GEO dataset GSE6807.  Down-regulated genes are those whose expression is reduced by 50% or more.  Note that the orange point is taken from our 2014 NAR paper, as we are not aware of other published expression datasets with this many pooled siRNAs.  A few caveats with combining these datasets: they use different target genes, different siRNA concentrations, and different expression platforms.]

Low-complexity pooling: a bad solution for siRNA off-targets

Low-complexity pooling does not get rid of the main problem associated with single siRNAs: seed-based off-target effects.  Based on the above analysis, it can make them even worse.  It also prevents use of the most effective computational countermeasures against seed effects.

Redundant siRNA Activity (RSA) is a common on-target hit analysis method for single-siRNA screens.  It checks how over-represented the siRNAs for a gene are at the top of a ranked screening list.  If a gene has 2 or more siRNAs near the top of the list, it will score better than a gene that only has a single siRNA near the top of the list.  This is one way to reduce the influence of strong off-target siRNAs.
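As a rough illustration of the RSA idea (a simplified sketch, not the published algorithm), one can ask how unlikely it is, by chance, to see a given number of a gene's siRNAs that high in the screen-wide ranking.  The example ranks and library size below are made up:

```python
from scipy.stats import hypergeom

def rsa_pvalue(sirna_ranks, n_total):
    """sirna_ranks: 1-based ranks of one gene's siRNAs in the full screen ranking."""
    ranks = sorted(sirna_ranks)
    n_sirnas = len(ranks)
    best_p = 1.0
    for i, r in enumerate(ranks, start=1):
        # P(at least i of the gene's n_sirnas siRNAs fall within the top r by chance)
        p = hypergeom.sf(i - 1, n_total, n_sirnas, r)
        best_p = min(best_p, p)
    return best_p

# gene with 3 siRNAs ranked 12, 85 and 40,000 out of 60,000 screened siRNAs
print(rsa_pvalue([12, 85, 40000], n_total=60000))
```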

Correcting single siRNA values by seed medians has also been shown to be an effective way to increase the on-target signal in screens.  This correction is not effective for low-complexity pools, since each pool can contain 3-4 different seeds.

Off-target-based hit detection algorithms (e.g. Haystack and GESS) are also only effective for single-siRNA screens.  The advantage of these algorithms is that they permit the detection of hit genes that were not screened with on-target siRNAs.  These algorithms are not effective for low-complexity pool screens.

Our recommendation: do not convert single siRNAs into low-complexity pools; instead, use high-complexity siPOOLs to confirm hits

We do not recommend that screeners combine their single siRNA libraries into low-complexity pools (e.g. combining 3 Silencer Select siRNAs for the same target gene).  If possible, it is better to screen the siRNAs individually and then apply seed-based correction, RSA and seed-based hit-detection algorithms.

The time saved by only screening one well per target may prove illusory when the deconvolution experiments show that the individual siRNAs have divergent phenotypes.

It is probably better to deal with off-target effects up front (by screening single siRNAs) than to be surprised by them later in the screen (during pool deconvolution).

Reliable high-complexity siPOOLs, as independent on-target reagents, can then be used to confirm screening hits.

siTOOLs also now has RNAi screening libraries available.  Please contact us for more information.

What is the probability of an siRNA off-target phenotype?

Summary:   Conventional siRNAs have a high probability of giving off-target phenotypes.  siRNA off-target effects can be reduced by using more specific reagents or narrowing the assay focus (to reduce the number of relevant genes).  Even when the assay is relatively focused, more specific reagents significantly increase the probability of observing on-target effects.

Probability of siRNA off-target phenotype depends on reagent specificity and assay biology

The probability of getting an off-target effect from an siRNA depends on several factors, the main ones being reagent specificity and assay biology.  If an siRNA down-regulates a large number of genes, or if an assay phenotype can be induced by a large number of genes, the probability of observing an off-target phenotype increases.

siRNAs can down-regulate many off-target genes

Garcia et al. (2011) compiled 164 different microarray experiments measuring gene expression following transfection with siRNAs.  The mean number of down-regulated genes in these experiments was 132 and the median was 68 (down-regulated genes were silenced by 50% or more).

As noted in earlier studies of gene expression following siRNA treatment (e.g. Jackson et al. 2003), few of the down-regulated genes are shared between siRNAs with the same target gene.  This suggests that the down-regulated genes are not the downstream result of target gene knockdown (i.e. they are mostly off-target).

High-complexity pooling of siRNAs (e.g. with siPOOLs) can reduce the number of down-regulated genes.

The following figure, based on data from Hannus et al. 2014, shows the difference between the gene expression changes caused by a single siRNA (left) and a high-complexity siRNA pool (siPOOL, right), which also includes that same single siRNA:

 

Estimating the probability of siRNA off-target phenotypes

Assuming different numbers of down-regulated genes (off-target) and different numbers of potent genes involved in assay pathways, we can try to estimate the probability of an siRNA giving an off-target effect.

The following plot shows the probability of getting an off-target effect when:

  • assuming RNAi reagents down-regulate varying numbers of off-target genes (5, 25, 50, 100)
    • down-regulated means that gene expression is reduced by 50% or more
    • in the Garcia paper dataset, the mean is 132 and median is 68
  • assuming different numbers of assay-potent genes
    • an assay-potent gene is one whose down-regulation by 50% or more is sufficient to produce a hit phenotype
    • for assays with more general phenotypes (e.g. cell count) we would expect more  assay-potent genes

 

We can see that even if there are only 20 assay-potent genes, there’s a nearly 10% chance of getting an off-target phenotype when siRNAs down-regulate 100 off-target genes (which is close to the average observed in the Garcia dataset).

In a genome-wide screen of 20,000 genes with 3 siRNAs per gene (60,000 siRNAs), we would thus expect roughly 6,000 siRNAs to produce off-target phenotypes.

In contrast, a more specific reagent that only down-regulates 5 off-target genes has only a 0.5% chance of producing an off-target phenotype.  For the above-mentioned genome-wide RNAi screen, we would expect only around 300 off-target siRNAs (a roughly 20-fold reduction).
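The calculation behind these numbers can be sketched with the hypergeometric distribution (see the postscript below).  The function name and the use of scipy are our own, but the population size of 20,000 genes matches the assumption described there:

```python
from scipy.stats import hypergeom

GENOME_SIZE = 20000   # approximate number of human protein-coding genes

def p_off_target_phenotype(n_assay_potent, n_off_targets):
    # probability that at least one assay-potent gene is among the
    # n_off_targets genes down-regulated off-target by an siRNA
    return hypergeom.sf(0, GENOME_SIZE, n_assay_potent, n_off_targets)

print(p_off_target_phenotype(20, 100))  # ~0.095, i.e. nearly 10%
print(p_off_target_phenotype(20, 5))    # ~0.005, i.e. about 0.5%
```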

The importance of RNAi reagent specificity

The above analysis demonstrates the importance of using specific siRNA reagents.

Changing an assay to make the phenotypic readout narrower (to reduce the number of genes capable of inducing a phenotype) is one way to reduce the risk of off-target phenotypes.  But this may be a lot of work and is not necessarily desirable or even possible.

A more ideal solution is the use of a specific RNAi reagent, like siPOOLs.

postscript

As the number of assay-potent genes increases, the probability of getting an off-target phenotype approaches one.

The following plot (same format as the one above) shows this trend as the number of assay-potent genes increases:

 

The p-values were calculated using the hypergeometric distribution, assuming a population size of 20,000 (the approximate number of protein-coding genes in the human genome).

Note that one of the major simplifying assumptions of the above analysis is that all siRNAs have the same number of down-regulated off-target genes.

Is it important to avoid microRNA binding sites during siRNA design?

Is it important to avoid microRNA binding sites during siRNA design?

Summary: To address the question of whether one should avoid microRNA binding sites during siRNA design, we examined whether removing siRNAs that share seeds with native microRNAs would reduce the dominance of seed-based off-target effects in RNAi screening.

siRNA design and native microRNA target sites

Recently, we discussed a review of genomics screening strategies.  The authors state:

RNAi screens are powerful and readily implemented discovery tools but suffer from shortcomings arising from their high levels of false negatives and false positives (OTEs) as can be seen when comparing the low concordance among the candidate genes detected in different screens using the same species of virus, e.g., HIV-1, HRV, or IAV (Booker et al., 2011; Bushman et al., 2009; Hao et al., 2013; Perreira et al., 2015; Zhu et al., 2014).

To address these concerns, improvements in the design and synthesis of next-gen RNAi library reagents have been implemented including the elimination of siRNAs with seed sequences that are complementary to microRNA binding sites.

Given that off-target effects via microRNA-like binding are the main source of RNAi screening phenotypes, avoiding native microRNA sites during siRNA design seems like a reasonable strategy.  But does it make much difference in actual RNAi screens?

Hasson et al. 2013 performed a mitophagy screen using the Silencer Select siRNA library.  About 12% of the ~65,000 screened siRNAs have a 7-mer seed shared by a miRBase microRNA.
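A check like this is straightforward to reproduce.  The sketch below extracts the 7-mer seed (guide-strand positions 2-8) and looks it up in a set of microRNA seeds; the guide sequences and the two example seeds are illustrative, and a real analysis would load the full set of miRBase seeds:

```python
MIRNA_SEEDS = {"GAGGUAG", "AAAGUGC"}   # illustrative 7-mers (let-7 and miR-17 family seeds)

def seed7(guide_rna):
    """Return the 7-mer seed of an siRNA guide (antisense) strand: bases 2-8."""
    return guide_rna.upper().replace("T", "U")[1:8]

guides = ["UGAGGUAGUAGGUUGUAUAGUU",   # let-7a-like guide, shares a native miRNA seed
          "ACGUACGUACGUACGUACGUA"]    # made-up guide with no miRNA seed
for g in guides:
    print(g, seed7(g), seed7(g) in MIRNA_SEEDS)
```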

The screen’s main phenotypic readout, % Parkin translocation (PPT), is strongly affected by seed effects.   The intra-class correlation for siRNAs with the same seed is ~.51 (versus ~.06 for siRNAs with the same target gene).  There appears to be no difference between how siRNAs with or without microRNA seeds behave:


The same thing is found if we look at a less specific phenotype like cell count (which should be more broadly susceptible to off-target effects, as more genes should affect this phenotype):


And if we look at seeds that are enriched at the top of the screening list (sorted by descending PPT), we also don’t see much difference between siRNAs with or without native microRNA seeds.  (Note that the seed p-value is calculated in a similar way to RSA, based on how over-represented a seed is towards the top of a ranked list)


We also examined a general phenotypic readout (cell viability) in a dozen large-scale RNAi screens.

For some screens, we do see a slight shift in the values for siRNAs with or without native microRNA seeds.

For example, a genome-wide screen of Panda et al. 2017 (also using the Silencer Select library) shows a slight decrease in viability for siRNAs with native microRNA seeds:


Removing those siRNAs does not change the dominance of seed-based off-targets.

The intra-class correlation (ICC) for siRNAs with the same 7-mer seed is ~.53, with or without the inclusion of siRNAs with native microRNA seeds, while the ICC for siRNAs with the same target gene is only ~.06.

Coming back to the quote from the review article on genomic screening, next-gen RNAi library reagents that avoid native microRNA seeds are not expected to be much better than siRNAs that include them.

The most effective way to avoid seed-based off-target effects is to use high-complexity siRNA pools (siPOOLs).

 

Correcting seed-based off-target effects in RNAi screens

Correcting seed-based off-target effects in RNAi screens

Summary: Correcting for seed-based off-targets can improve the results from RNAi screening.  However, the correlation between siRNAs for the same gene is still poor and the strongest screening hits remain difficult to interpret.

Seed-based off-target correction has little effect on reagent reproducibility

Given that seed-based off-targets are the main cause of phenotypes in RNAi screening, trying to correct for those effects makes good sense.

The dominance of seed-based off-targets means that independent siRNAs for the same gene usually show poor correlation.

If one could correct for the seed effect, the correlation between siRNAs targeting the same gene may improve.

One straightforward way to do seed correction is to subtract the ‘seed median’ from each siRNA.  (The seed median is the median for all siRNAs having the given seed.)
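In code, this simple correction looks roughly like the following sketch (the data frame and column names are made up for illustration):

```python
import pandas as pd

screen = pd.DataFrame({
    "sirna": ["s1", "s2", "s3", "s4", "s5"],
    "gene":  ["A",  "A",  "B",  "B",  "C"],
    "seed":  ["GAGGUAG", "CAUUCGA", "GAGGUAG", "AAUCGCU", "GAGGUAG"],
    "score": [2.5, 0.3, 1.9, -0.4, 2.2],
})

# subtract, from each siRNA, the median score of all siRNAs sharing its 7-mer seed
screen["seed_median"] = screen.groupby("seed")["score"].transform("median")
screen["corrected"] = screen["score"] - screen["seed_median"]
print(screen)
```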

This was the approach used by Grohar et al. in a recent genome-wide survey of EWS-FLI1 splicing (involved in Ewing sarcoma).  They used the Silencer Select library, which has 3 siRNAs per target gene.

After seed correction, there is only minor improvement in the correlation between siRNAs targeting the same gene.  The intra-class correlation (ICC) improves from 0.031 to 0.037.  The ICC for siRNAs with the same 7-mer seed decreases from 0.576 to 0.261.

Although we have reduced the seed-based signal, it has not resulted in a correspondingly large improvement in the gene-based signal.

More sophisticated seed correction can improve reagent correlation

Grohar et al. used a simple seed-median subtraction method to correct their screening results.

A more sophisticated method (scsR) was developed by Franceschini et al. for seed-based correction of screening data.  It corrects using the mean value for siRNAs with the same seed, and weights the correction by the standard deviation of the values.  This allows seeds with a more consistent effect to contribute more to the data normalisation.

Applying the scsR method to the Grohar data, ICC for siRNAs targeting the same gene increases from 0.031 to 0.041.  It is better than the increase with seed-median subtraction (0.037), but is still only a fairly minor improvement (plot created using random selection of 10,000 pairs of siRNAs that target the same gene):
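The random-pairs estimate used for that plot can be sketched as follows: draw pairs of siRNAs that share the grouping variable (target gene or 7-mer seed) and correlate them.  The data layout, column names and number of pairs are illustrative assumptions:

```python
import numpy as np

def pairwise_icc(df, group_col, score_col, n_pairs=10000, seed=0):
    """Estimate intra-class correlation by correlating random same-group siRNA pairs."""
    rng = np.random.default_rng(seed)
    # keep only groups (genes or seeds) represented by at least two siRNAs
    groups = [g[score_col].to_numpy() for _, g in df.groupby(group_col) if len(g) >= 2]
    left, right = [], []
    for _ in range(n_pairs):
        pair = rng.choice(groups[rng.integers(len(groups))], size=2, replace=False)
        left.append(pair[0])
        right.append(pair[1])
    return float(np.corrcoef(left, right)[0, 1])

# usage (hypothetical pandas data frame with per-siRNA scores):
# icc_gene = pairwise_icc(screen, "gene", "corrected")
# icc_seed = pairwise_icc(screen, "seed", "corrected")
```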

 

Off-target correction increases double-hit rate in top siRNAs of RNAi screen

The following plot shows the count for single-hit and double-hit genes as we go through the top 1000 siRNAs (of ~60K screened in total).  Double-hit means that the gene is covered by 2 (or more) hit siRNAs.

Despite the small improvement in reagent correlation, the double-hit rate is essentially the same using simple seed-median subtraction or the more advanced scsR method.

Furthermore, the number of double-hits is higher than what we’d expect by chance.
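The chance expectation can be estimated with a quick back-of-the-envelope calculation: if the top 1,000 of ~60,000 siRNAs were drawn at random, how many genes with 3 siRNAs each would be hit twice or more?  The library numbers below are rounded assumptions:

```python
from scipy.stats import hypergeom

n_sirnas, sirnas_per_gene, top_k = 60000, 3, 1000
n_genes = n_sirnas // sirnas_per_gene

# probability that a given gene has >= 2 of its 3 siRNAs in the top 1,000 by chance
p_double = hypergeom.sf(1, n_sirnas, sirnas_per_gene, top_k)
print(n_genes * p_double)   # expected number of double-hit genes, roughly 16
```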

This shows that, despite the noise from off-target effects, there is some on-target signal that can be detected.

siRNAs with the strongest phenotypes remain difficult to interpret

Despite the fact that the double-hit count is higher than expected by chance, most of the genes targeted by the strongest siRNAs are single-hits.  siRNAs with the strongest phenotypes remain difficult to interpret.

Seed correction is best suited for single-siRNA libraries.  Low-complexity pools, like siGENOME or ON-TARGETplus, are less amenable to effective seed correction since there are (usually) 4 different seeds per pool.  This reduces the effectiveness of seed-based correction, even though seed-based off-target effects remain the primary determinant of observed phenotypes (as discussed here, here, and here).

The best way to correct for seed-based off-targets is to avoid them in the first place.  Using more specific reagents, like high-complexity siPOOLs, is the key to generating interpretable RNAi screening results.

For help with seed correction or other RNAi screening data analysis with the Phenovault, contact us at info@sitools.de

Little correlation between Dharmacon siGENOME and ON-TARGETplus reagents

Little correlation between Dharmacon siGENOME and ON-TARGETplus reagents

The most common way to validate hits from Dharmacon siGENOME screens is to test the individual siRNAs from candidate pool hits (siGENOME reagents are low-complexity pools of 4 siRNAs).  In this deconvolution round, we normally see that the individual siRNAs for genes behave very differently and seed effects dominate (discussed here and here).

One could argue that deconvolution is not the correct way to validate candidate hits (even though it’s the method recommended by Dharmacon),  as testing the siRNAs individually will result in seed effects that are suppressed when the siRNAs are pooled.  One problem with this argument is that low-complexity pooling does not get rid of off-target effects (e.g. Fig 5 in this paper), something that is better done via high-complexity pooling.  But assuming it were true, validating with a second Dharmacon pool would be better.

Tejedor et al. (2015) performed a genome-wide Dharmacon siGENOME screen for regulators of Fas/CD95 alternative splicing.  ~1500 genes were identified by a deep-sequencing approach.  ~400 of those were confirmed by high-throughput capillary electrophoresis (HTCE, LabChip).  They then retested those ~400 genes (again by HTCE) using Dharmacon ON-TARGETplus pools.

The following plot shows the values for the siGENOME and ON-TARGETplus pools for the same genes (i.e. each point corresponds to 1 gene).

What’s measured is the percent of splice variants that include exon 6 following siRNA treatment.  That was compared to the values for a plate negative control (untransfected wells) and converted to a robust Z-score.  This is the main readout from the paper.
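For reference, a robust Z-score of this kind is typically computed from the median and the median absolute deviation (MAD) of the negative controls.  The sketch below uses made-up numbers and the usual 1.4826 scaling that makes the MAD comparable to a standard deviation:

```python
import numpy as np

def robust_z(values, controls):
    med = np.median(controls)
    mad = 1.4826 * np.median(np.abs(np.asarray(controls) - med))
    return (np.asarray(values) - med) / mad

controls = [48.0, 50.0, 51.0, 49.5, 50.5]   # e.g. untransfected wells, % exon 6 inclusion
samples  = [30.0, 49.0, 72.0]               # siRNA-treated wells
print(robust_z(samples, controls))
```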

The Pearson correlation improves if the strong outlier at -150 for siGENOME is removed (R = 0.25), while the Spearman correlation is unchanged.

We see that a fairly small number of genes are giving reproducibly strong phenotypes (e.g. 13 of 400 have robust Z-scores less than -15 for both siGENOME and ON-TARGETplus reagents).

If we remove those 13 strong hit genes, the correlation approaches zero:

Even if the strong outlier for siGENOME is removed, the correlation is still near zero:

Although using a second Dharmacon pool removes some of the arbitrariness of defining validated hits (e.g. saying that 3 of 4 siRNAs must exceed a Z-score cut-off of X, or 2 of 4 siRNAs must exceed a Z-score cut-off of Y), the end result is similar:  A few strong  genes show reproducible phenotypes, while many of the strongest screening hits show inconsistent results.  The main problem, off-target effects in the main screen, is not fixed.

postscript

Tejedor et al. say that 200 genes were confirmed by ON-TARGETplus validation.  They consider a gene confirmed if the absolute value of the robust Z-score is greater than 2.  The Z-score is calculated using the median for untransfected plate controls.  I suspect that a significant proportion of randomly selected genes would also have passed this cut-off.

In table S3 (which has the ON-TARGETplus validation results), there are actually only 177 genes (including 2 controls) that meet this cutoff.  The supplementary methods state: Genes for which Z was >2 or <-2 were considered as positive, and a total number of 200 genes were finally selected as high confidence hits.

This suggests that genes outside the cut-off were added to bring the number up to 200.

But if we look at the Excel sheet with the ‘200 hit genes’, it has 200 rows, but only 199 genes.  The header was included in the count.

This type of off-by-one error is probably not that uncommon.  In a case like this, it does not matter so much.

One case where it did matter was in the Duke/Potti scandal.  The forensic bioinformatics work of the heroes of the Duke scandal found that, when trying to reproduce the results from published software, one of the input files caused problems because of an off-by-one error created by a column header.  That was one of many difficulties in reproducing the Potti paper’s results which eventually led to its exposure.

Orthogonal design in software and RNAi screening

Orthogonal design in software and RNAi screening

The software engineering classic The Pragmatic Programmer popularised the benefits of orthogonality in software design.  The authors introduce the concept by describing a decidedly non-orthogonal system:

You’re on a helicopter tour of the Grand Canyon when the pilot, who made the obvious mistake of eating fish for lunch, suddenly groans and faints. Fortunately, he left you hovering 100 feet above the ground. You rationalize that the collective pitch lever [2] controls overall lift, so lowering it slightly will start a gentle descent to the ground. However, when you try it, you discover that life isn’t that simple. The helicopter’s nose drops, and you start to spiral down to the left. Suddenly you discover that you’re flying a system where every control input has secondary effects. Lower the left-hand lever and you need to add compensating backward movement to the right-hand stick and push the right pedal. But then each of these changes affects all of the other controls again. Suddenly you’re juggling an unbelievably complex system, where every change impacts all the other inputs. Your workload is phenomenal: your hands and feet are constantly moving, trying to balance all the interacting forces.

[2] Helicopters have four basic controls. The cyclic is the stick you hold in your right hand. Move it, and the helicopter moves in the corresponding direction. Your left hand holds the collective pitch lever. Pull up on this and you increase the pitch on all the blades, generating lift. At the end of the pitch lever is the throttle. Finally you have two foot pedals, which vary the amount of tail rotor thrust and so help turn the helicopter.

As the authors explain:

The basic idea of orthogonality is that things that are not related conceptually should not be related in the system. Parts of the architecture that really have nothing to do with the other, such as the database and the UI [user interface], should not need to be changed together. A change to one should not cause a change to the other.

This applies to many types of design, not just for computer systems.  The plumber should not have to depend on the electrician to fix a broken pipe.

The principle has also been applied in RNAi screening, notably by Perreira et al., who introduced the MORR (Multiple Orthologous RNAi Reagent) method to increase confidence in screening hits.  Comparing the results of siRNAs from different manufacturers is important, but because the reagents operate by the same mechanism (including the off-target mechanism), they are not truly orthogonal.  More orthogonal would be a comparison between RNAi and CRISPR experiments, which sometimes show discrepancies that point to interesting biology.

To confirm RNAi screening hits, ‘partial orthogonality’ may be preferable.  If screening hits are due to either on-target or off-target effects, confirmation with RNAi reagents that only have one or the other would be better than using CRISPR, where it is difficult to interpret the reason for discrepancies (e.g. is there no phenotype  because of genetic compensation?).

One could use C911s to create a version of the siRNA that, in theory, maintains off-target effects but eliminates on-target effects.  We have observed, however, that C911s often give substantial knockdown of the original target gene (in some ways, C911s are like very good microRNAs).  To be sure that a positive effect with C911s is not due to partial knockdown, one would also need to test that via qPCR.  C911s can create a lot of work.
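For anyone generating such controls computationally, the C911 rule itself is simple: bases 9-11 of the guide strand are replaced by their complements, leaving the seed region intact.  A minimal sketch (the example guide sequence is made up):

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def c911(guide_rna):
    """Return the C911 version of an siRNA guide strand (positions 9-11 complemented)."""
    g = list(guide_rna.upper().replace("T", "U"))
    for i in range(8, 11):            # 0-based indices for 1-based positions 9-11
        g[i] = COMPLEMENT[g[i]]
    return "".join(g)

guide = "UGAGGUAGUAGGUUGUAUAGU"        # 21-nt example guide
print(guide, "->", c911(guide))
```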

Far better would be to confirm screening results with siPOOLs, which provide robust knockdown and minimal off-target effects.

One place RNAi practitioners would hope not to find orthogonality is the relationship between on-target knockdown and phenotypic strength.

Since the early days of RNAi, a positive correlation between knockdown and phenotypic strength has been suggested as a means to confirm screening results.  Reagents with better knockdown should give a stronger phenotype.

To test this, we obtained qPCR data for over 2000 siRNAs (Neumann et al.) and checked the performance of those siRNAs against the designated hit genes from an endocytosis screen (Collinet et al.).

If the siRNAs work as expected, those siRNAs with better knockdown should give stronger phenotypes than those with weaker knockdown.

There were 100 genes from the Collinet hits for which there were 3 siRNAs with qPCR data.

For those 100 siRNA triplets, we compared the phenotypic ranks with the knockdown ranks.  (We were agnostic about the direction of phenotypic strength, and checked whether knockdown and phenotype were consistent when phenotype scores were ranked in either ascending or descending order.)  For example, if siRNAs A, B, and C have phenotypic scores of 100, 90, 70 and knockdown of 15%, 20%, 30% remaining mRNA, we would say that phenotypic strength is consistent with knockdown (and because we were agnostic about phenotypic direction, we would also say it was consistent if siRNAs A, B, and C had scores of 70, 90, 100).

The observed number of cases where knockdown rank was consistent with phenotypic rank was then compared to an empirical null distribution, obtained by first randomising the knockdown data for the siRNA triplets before comparison to phenotypic strength.  This randomisation was performed 300 times.  This provides an estimate of what level of agreement between knockdown and phenotype would be expected by chance.  The standard deviation (SD) from this null distribution was then used to convert the difference between observed and expected counts into SD units.
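The procedure can be sketched as follows.  This is our own simplified re-implementation (here the knockdown values are shuffled within each triplet), with made-up data structures:

```python
import numpy as np

rng = np.random.default_rng(1)

def consistent(phenotype, knockdown):
    """True if knockdown order matches phenotype order, in either direction."""
    ordered = np.asarray(phenotype)[np.argsort(knockdown)]
    return bool(np.all(np.diff(ordered) >= 0) or np.all(np.diff(ordered) <= 0))

def agreement_count(triplets):
    return sum(consistent(p, k) for p, k in triplets)

def null_counts(triplets, n_rounds=300):
    return np.array([agreement_count([(p, rng.permutation(k)) for p, k in triplets])
                     for _ in range(n_rounds)])

# triplets = [(phenotype_scores, percent_mrna_remaining), ...] for the 100 hit genes
# observed = agreement_count(triplets)
# null = null_counts(triplets)
# print((observed - null.mean()) / null.std())   # difference in SD units, as above
```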

The Collinet dataset provides data for 40 different features.  The above procedure was carried out for each of the 40 features.

To take one feature (Number vesicles EGF) as an example, we observed 34 cases where knockdown was consistent with phenotypic strength.  By chance, we would expect 33.4 (with a standard deviation of 4.9).  The difference in SD units is (34-33.4)/4.9 = 0.1.

As can be seen in the following box plot, the number of SD units between observed and expected counts of knockdown/phenotype agreement for the 40 features is centered near zero (median is 0.1 SD units):

This suggests that there is very little, if any, enrichment in cases where siRNA knockdown strength is correlated with phenotypic strength.  The orthogonality between knockdown and phenotype, given the poor correlation between siRNAs with the same on-target gene, is unfortunately not unexpected.

“Phenoville” – RNAi & CRISPR Screening Strategies

“Phenoville” – RNAi & CRISPR Screening Strategies

Pleasantville is a movie based on an interesting idea: two teenagers are magically transported through their TV to a town called Pleasantville set in the 1950s where everything is perfect (and also black-and-white).  As they discover the complex, imperfect emotions hidden below the idyllic surface, the black-and-white characters and objects start to gain colour.

In loss-of-function genetic screening, some reagents and screening formats may also give rise to a narrow, black-and-white view of a biological process.  A sort of “Phenoville”.  This was illustrated nicely in a recent review of screening strategies for human-virus interactions by Perreira et al. (2016).

The authors performed screens for human rhinovirus (HRV) infection using arrayed RNAi reagents (siRNAs) and pooled CRISPR reagents (sgRNAs), and then compared the resulting hit lists.

The arrayed RNAi screen produced over 160 high-confidence candidate genes, whereas the CRISPR screen only found 2.  The authors comment:

“The comparison of these two screening approaches side-by-side, using the same cells and virus, raises an interesting point. The number of host factors found for HRV14 was far greater using the MORR/RIGER approach [i.e. RNAi performed with multiple orthologous RNAi reagents and analysed by RNAi gene enrichment ranking method] and is approaching a systems level understanding based on bioinformatic analyses and the near saturation of, or enrichment for, multiple complexes and pathways (Fig. 4) (Perreira et al., 2015). By comparison our matched pooled CRISPR/Cas9 screen for HRV-HFs yielded two high-confidence candidates based on reagent redundancy, ICAM1, the known receptor for HRV14, and EXOC4, a gene involved in exocyst targeting and vesicular transport (He & Guo, 2009). Given the known role of ICAM1 as the host receptor for most HRVs, these results point to entry as the major viral lifecycle stage interrogated by a pooled functional genomic screening approach using a population of randomly biallelic null cells infected by a cytopathic virus.”

In simple terms, RNAi screening produced a richer data set that revealed system level interactions whereas CRISPR screening yielded a small number of specific hits that only affected an early-stage pathway. The ‘systems level understanding’ is nicely shown in the following diagram of the RNAi hits.  The red box at the top left is the only gene (ICAM1) that was common to the RNAi and CRISPR screens.

Perreira et al. conclude that arrayed siRNA screens permit the detection of a larger number of viral dependency factors, albeit with a significant tradeoff in a greater number of false positive hits (mainly due to off-target effects).  In contrast, pooled screens with CRISPR sgRNAs using cell survival as a readout, as also seen with most haploid cell screens, display limited sensitivity but excellent specificity in finding host genes that act early on in viral replication (e.g. ICAM1).

In Perreira et al.‘s words:

“… given the currently available functional genomic strategies if the goal is to find viral entry factors (e.g., host receptors) with high specificity its best to use a pooled survival screen, but alternatively if the aim is to obtain with relative ease a more comprehensive set of host factors, albeit with more prevalent false positives, than an arrayed siRNA screen would be the preferred method.”

Summarizing two options for genetic screeners:

  1. Arrayed RNAi screens
    • provide a richer view of the underlying biology
    • produce more false positives from OTEs
    • produce false negatives from OTEs
  2. Pooled CRISPR screens
    • provide a narrower view of the underlying biology
    • produce fewer false positives
    • produce false negatives because of genetic compensation

Off-target effects (OTEs) are the primary cause of false positives, and the resultant higher assay noise also increases the number of false negatives in arrayed RNAi screens. Reagents like siPOOLs minimize the risk of off-target effects and reduce assay noise.

One key factor not mentioned by Perreira et al. is the presence of genetic compensation in gene knockout approaches.

Putting genetic compensation in terms of human actors, imagine that you are investigating the function of bus drivers in Pleasantville.  To induce loss-of-function, assume that aliens will be abducting the bus drivers.  If the bus drivers are abducted in their sleep (equivalent to a CRISPR knock-out), you may not get a good idea of their function when you film the next day.  People may be compensating by driving, biking or staying home.  Alternatively, the bus company may have found emergency replacement drivers.

Now suppose the bus drivers are abducted in the middle of the day while driving their routes (equivalent to an RNAi knock-down).  The film will show buses crashing (hopefully without any serious injuries, since this is just a TV show!) and the public transportation system will suddenly come to a halt.

RNAi gene knockdown screens with siPOOLs  can provide a significant advantage over CRISPR gene knockout screens in obtaining a system level understanding in biological models.
