Month: June 2017

Unexpected Mutations after CRISPR in vivo editing – post-commentary

You might have heard of, or participated in, the global discussion over the recently published Nature Commentary that described >1,000 off-target mutations in CRISPR-edited mice.

The paper reported a small study involving three mice but went viral enough online to trigger a significant drop in the share prices of companies founded on CRISPR gene editing – Editas Medicine, CRISPR Therapeutics and Intellia Therapeutics.

Here is a summary of the study, with the respective concerns raised by the scientific community regarding the validity of the findings. These are marked with asterisks (*Concern 1-4) and explained further below:

  • FVB/NJ mice were used in the study. These mice are a highly inbred strain (at generation F87 as of Dec 2002), originating from the NIH but transferred to The Jackson Laboratory for maintenance and sale. They are homozygous for the Pde6brd1 allele, which causes early-onset retinal degeneration.

 

  • The same authors previously published a pretty decent paper where they functionally characterized a rescue of the retinal degeneration by correcting what was thought to be a nonsense mutation (Y347X, C>A) in exon 7 of the Pde6β subunit. The same “rescued” mice, edited by CRISPR (F03 and F05), along with a co-housed control mouse that did not undergo editing, were used in this subsequent sequencing study. *Concern 1

 

  • The CRISPR mutation was performed by introducing the sgRNA via a pX335 plasmid (which co-expresses the Cas9 D10A nickase) into FVB/NJ zygotes, alongside a single-stranded oligo acting as a donor to introduce a controlled mutation at the Pde6b locus. WT Cas9 protein was also introduced. *Concern 2

 

  • DNA was isolated from the spleens of the mice and whole-genome sequencing was performed on an Illumina HiSeq 2500 sequencer, at 50X coverage for the CRISPR-treated mice and 30X coverage for the control mouse.

 

  • The authors used three different algorithms to detect variants – MuTect, LoFreq and Strelka. The numbers of single nucleotide variants (SNVs) and insertions/deletions (indels) detected that were absent in the control mouse are shown below for the two CRISPR-edited mice.

   

Overlap of SNV/indels detected in two CRISPR-edited mice – F03 mouse (blue), F05 mouse (green).

 

  • Each variant was filtered against the FVB/NJ genome in the mouse dbSNP database (v138) and also against 36 other mouse strains from the Mouse Genomes Project (v3). As none of the variants detected was found in these database genomes, the authors concluded they had to have arisen through CRISPR editing. *Concern 3

 

  • Interestingly, the top 50 predicted off-target sites showed no mutations. And in the sites where mutations were detected, there was no significant sequence homology to the sgRNA used. The authors conclude that in silico modelling fails to predict off-target sites. *Concern 4

A number of criticisms have been raised regarding the study and the four main concerns highlighted are explained below:

Concern 1: The study involved only three mice and is therefore too underpowered to draw statistically significant conclusions. Further, the choice of a co-housed mouse as the control (with no mention of its background) may fail to capture genetic alterations induced by the experimental procedure or by genetic drift within a colony.

More appropriate controls may have included a mouse produced with a sham-injected zygote, a mouse where only Cas9 was introduced without an sgRNA, and a mouse with only sgRNA and ssDNA donor.

Parent mice should also have been sequenced to check if variants detected were already in the existing strain.

Concern 2: Cas9 was introduced both as a protein and in a plasmid. Talk about overkill! Though the plasmid form of Cas9 is the nickase version, where 2 sgRNAs are required to produce a double-strand break, having high levels of active Cas9 floating about has been demonstrated to increase the incidence of off-target effects.

Concern 3: Even though the authors filtered the variants found against mouse genome databases, this may not be sufficient to capture the extent of genetic drift that occurs over multiple generations of in-breeding.

Gaetan Burgio wrote that, in his experience, the reference genomes found in databases often fail to capture the variants that are specific to each breeding facility. Often, large numbers of reference mice (100 mouse exomes from >50 founders) have to be sequenced to determine whether SNPs are specific to the mouse strain and not induced by the test condition.
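In code terms, the database-filtering step at the centre of Concern 3 reduces to a set difference on variant keys. A minimal sketch is below; the variant key format (chrom, pos, ref, alt) and the toy data are illustrative, and a real pipeline would of course parse VCF files rather than hand-built tuples:

```python
# Hedged sketch: filtering called variants against known-SNP databases.
# Variant keys (chrom, pos, ref, alt) and example data are illustrative.

def filter_known_variants(called, known_databases):
    """Return called variants absent from every known-SNP database."""
    known = set().union(*known_databases)
    return [v for v in called if v not in known]

# Toy example: two called variants, one already present in "dbSNP".
called = [("chr4", 123456, "C", "A"), ("chr7", 987654, "G", "T")]
dbsnp = {("chr4", 123456, "C", "A")}
mgp_strains = {("chr12", 555, "T", "C")}

novel = filter_known_variants(called, [dbsnp, mgp_strains])
```

The concern, of course, is with the `known_databases` input: if the reference sets do not reflect the colony's actual drift, "novel" variants may simply be pre-existing strain-specific ones.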

Editas and George Church’s group from Harvard also highlighted the high degree of overlap in SNVs/indels between the two CRISPR-edited mice, which:

“strongly suggests the vast majority of these mutations were present in the animals of origin. The odds of the exact nucleotide changes occurring in the exact same position of the exact same gene at the exact same ratios in almost every case are effectively zero.”

Concern 4: Apart from the flaw that only one sgRNA was studied, Church’s group also claim the sgRNA studied had a high off-target profile. This sgRNA would apparently have failed their criteria for use as a therapeutic candidate. The table below shows the number of predicted off-target sites when allowing for 1-3 mismatches from the sgRNA sequence.

Predicted off-target profile of the sgRNA used in the study

Mismatches from sgRNA    Predicted off-target sites
1                        1
2                        1
3                        24
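Predicted off-target counts like these come from scanning the genome for near-matches to the sgRNA protospacer. The mismatch-counting core can be sketched as below; note this toy version ignores the PAM requirement and the reverse strand, both of which real tools (e.g. Cas-OFFinder, CRISPOR) handle:

```python
# Hedged sketch: counting candidate off-target sites by Hamming distance.
# Toy sequences only; no PAM check, no reverse-complement scan.

def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def count_off_targets(genome, protospacer, max_mismatches):
    """Count windows within max_mismatches of the protospacer.

    Exact matches (distance 0) are the on-target site and are excluded.
    """
    k = len(protospacer)
    hits = 0
    for i in range(len(genome) - k + 1):
        d = hamming(genome[i:i + k], protospacer)
        if 0 < d <= max_mismatches:
            hits += 1
    return hits
```

As the table shows, relaxing the mismatch threshold makes the candidate list grow quickly, which is why an sgRNA with this many 3-mismatch sites would be considered a poor therapeutic candidate.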

 

Surprisingly, however, despite this high off-targeting potential, mutations were not seen at the predicted off-target sites.

The consensus of both Church’s group and the authors of the study, therefore, was that one cannot rely on in silico prediction alone to account for off-target effects.

Calls are now being made to validate the study using appropriate controls, or to compare the variants obtained against more up-to-date mouse genome SNP databases. I expect we have not heard the last of this study.

The study does, however, reinforce our message in a previous blogpost about validating CRISPR experiments with other techniques to establish gene function. It also highlights the extensive genetic heterogeneity now seen not only between cell lines, but between mouse strains. As always, we recommend not being swept up in the hype, but remaining scientifically skeptical.

Want to receive regular blog updates? Sign up for our siTOOLs Newsletter:

Follow us or share this post:
Making sense of siGENOME deconvolution

As discussed previously, deconvoluted Dharmacon siGENOME pools often give surprising results.  (Deconvolution is the process of testing the 4 siRNAs in a pool individually.  This is usually done in the validation phase of siRNA screens.)

One way to compare the relative contribution of target gene and off-target effects is to calculate the correlation between reagents having the same target gene or the same seed sequence.  One of the first things we do when analysing single siRNA screens is to calculate a robust form of the intraclass correlation (rICC, see discussion at bottom for more about this).

Recently we were analysing deconvolution data from Adamson et al. (2012) and calculated the following rICCs.  (The phenotype measured was relative homologous recombination.)

Grouping variable   rICC    95% confidence interval

Target gene         0.040   -0.021 to 0.099
Antisense 7mer      0.383   0.357 to 0.413
Sense 7mer          0.093   0.054 to 0.129

Besides the order of magnitude difference between target gene and antisense seed correlation (which is commonly observed in RNAi screens), what stands out is the ~2-fold difference between the correlation by target gene and sense seed.

Very little of the sense strand should be loaded into RISC if the siRNAs were designed with appropriate thermodynamic considerations (the 5′ end of the antisense strand should be less stable than the 5′ end of the sense strand, to ensure that the antisense strand is preferentially loaded into RISC).

The above correlations suggest that a substantial amount of sense strand is making it into the RISC complex.

Here is the distribution of delta-delta-G for siPOOLs and siGENOME siRNAs targeting the same 500 human kinases (see bottom of post for discussion of calculation).  A positive delta-delta G means that the sense end is more thermodynamically stable than the antisense end, favouring the loading of the antisense strand into RISC.

This discrepancy in delta-delta G is also consistent with a comparison of mRNA knockdown:

The siGENOME knockdown data comes from 774 genes analysed by qPCR in Simpson et al. (2008).  The siPOOL knockdown data is from 223 genes where we have done qPCR validation.

Of note, the siGENOME pools were tested at 100 nM, whereas siPOOLs were tested at 1 nM.

(It should be mentioned that, although consistent with the observed differences in ddG, this is only an indirect comparison, and delta-delta G is not the only determinant of functional siRNAs.)

 

Notes on intraclass correlation

Intraclass correlation measures the agreement between multiple measurements (in this case, multiple siRNAs with the same target gene, or multiple siRNAs with the same seed sequence).   One could also pair off all the repeated measures and calculate correlation using standard methods (parametrically using Pearson’s method, or non-parametrically using Spearman’s method).  The main problem with such an approach is that there is no natural way to determine which measure goes in the x or y column.  Correlations are normally between different variables (e.g. height and weight).  In a case of repeated measures, there is no natural order, so the intraclass correlation (ICC) is the more correct way to measure the similarity of within-group measurements.  As ICC depends on a normal distribution, datasets must first be examined, and if necessary, transformed beforehand.

Robust methods have the advantage of permitting the use of untransformed data, which is especially useful when running scripts across hundreds of screening dataset features.  The algorithm we use calculates a robust approximation of the ICC by combining resampling and non-parametric correlation.

Here is the algorithm, in a nutshell:

  1. Group observations (e.g. cell count) by the grouping variable (e.g. target gene or antisense seed)
  2. Randomly assign one value from each group to the x or y column (groups with only 1 observation are skipped)
    • for example, if the grouping variable is target gene and siRNAs targeting PLK1 had the values 23, 30, 37, 45, the program would randomly choose one of the values for the x column and another for the y column
  3. Calculate Spearman’s rho (a non-parametric measure of correlation)
  4. Repeat steps 1-3 a set number of times (e.g. 300) and store the calculated rho’s
  5. Calculate the mean of the rho values from step 4.  This is the robust approximation of the ICC (rICC).
    • The rho values from step 4 are also used to calculate confidence intervals.

The program that calculates this is available upon request.
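For readers who want to experiment before requesting the program, the steps above can be sketched in plain Python. This is our own minimal reconstruction, not the actual program; the tie-averaged ranking, fixed seed, and resample count are implementation choices:

```python
# Hedged sketch of the resampling rICC described in steps 1-5 above.
import random
from statistics import mean

def rankdata(values):
    """Ranks (1-based), averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def robust_icc(groups, n_resamples=300, seed=0):
    """Mean Spearman rho over random within-group pairings (steps 1-5)."""
    rng = random.Random(seed)
    rhos = []
    for _ in range(n_resamples):
        x, y = [], []
        for values in groups.values():
            if len(values) < 2:
                continue  # groups with only 1 observation are skipped
            a, b = rng.sample(values, 2)  # one value to x, another to y
            x.append(a)
            y.append(b)
        rhos.append(spearman(x, y))
    return mean(rhos)
```

The stored `rhos` list would also supply the confidence intervals, e.g. by taking percentiles across resamples.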

Notes on calculating delta-delta G

Delta-delta G was calculated using the Vienna RNA package, as detailed here: https://www.biostars.org/p/58979/ (in answer by Brad Chapman).

The delta-delta G was calculated using the 3 terminal bps.  We found that the ddG of the terminal 3 bps had the strongest correlation with observed knockdown.  Others (e.g. Schwarz et al., 2003 and Khvorova et al., 2003) have used the terminal 4 bps.
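The end-stability comparison can be illustrated with a toy calculation. The per-pair energies below are illustrative stand-ins (GC stronger than AU/GU), not real nearest-neighbor parameters; an accurate calculation would use the ViennaRNA package as linked above. Only the sign convention matters here: positive ddG means the sense 5′ end is more stable, favouring antisense loading:

```python
# Hedged sketch of the delta-delta-G end-stability comparison.
# PAIR_ENERGY values are illustrative, not Turner nearest-neighbor parameters.

PAIR_ENERGY = {"GC": -3.0, "CG": -3.0, "AU": -1.0, "UA": -1.0}

def complement(base):
    return {"A": "U", "U": "A", "G": "C", "C": "G"}[base]

def end_stability(bases):
    """Sum of illustrative pair energies (more negative = more stable)."""
    return sum(PAIR_ENERGY[b + complement(b)] for b in bases)

def delta_delta_g(sense, n_terminal=3):
    """ddG = dG(5' antisense end) - dG(5' sense end).

    The antisense 5' end pairs with the last n bases of the sense strand.
    Positive ddG: sense 5' end is more stable (lower dG), so the
    antisense strand is preferentially loaded into RISC.
    """
    dg_sense_end = end_stability(sense[:n_terminal])
    dg_antisense_end = end_stability(sense[-n_terminal:])
    return dg_antisense_end - dg_sense_end
```

For example, a sense strand starting GGC… and ending …UAU has a strongly paired 5′ end and a weakly paired 3′ end, giving a positive ddG under this toy scheme.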

 
