Problem
------
Diagnose Targets identifies holes in the coverage of a targeted experiment, but it only reports them; it doesn't list the actual missing loci.
Solution
------
This commit implements an optional intervals file output listing the exact loci that did not pass filters
Itemized changes
--------------
* Cache callable statuses (to avoid recalculation)
* Add functionality to output missing intervals
* Implement new tool to qualify the missing intervals (QualifyMissingIntervals) by GC content, size, type of missing coverage, and origin (coding sequence, intron, ...)
Problem
-------
When an interval had no reads, it was being sent to the VCF before the intervals that had just been processed, thereby violating the sort order of the VCF.
Solution
--------
Use a linked hash map, and make insertion and removal happen in one place regardless of whether the interval has reads. Since the input is ordered, the output has to be ordered as well.
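A minimal sketch of the buffering idea, with illustrative names rather than the actual DiagnoseTargets classes:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch only: class and method names are illustrative, not the actual code.
    // The point is that insertion and removal both go through the same
    // LinkedHashMap, so intervals come back out in exactly the order they went
    // in, whether or not they ever received a read.
    class IntervalOutputBuffer<STATS> {
        private final LinkedHashMap<String, STATS> pending = new LinkedHashMap<>();

        /** Register an interval as soon as we reach it (input is sorted). */
        void add(String intervalKey, STATS emptyStats) {
            pending.put(intervalKey, emptyStats);
        }

        /** Look up the stats object so reads (if any) can update it. */
        STATS get(String intervalKey) {
            return pending.get(intervalKey);
        }

        /** Emit and remove every interval up to and including finishedKey. */
        void flushUpTo(String finishedKey, VcfWriterLike writer) {
            Iterator<Map.Entry<String, STATS>> it = pending.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<String, STATS> entry = it.next();
                writer.emit(entry.getKey(), entry.getValue());
                it.remove();
                if (entry.getKey().equals(finishedKey)) {
                    break;
                }
            }
        }

        /** Minimal stand-in for the VCF writer used in this sketch. */
        interface VcfWriterLike {
            void emit(String intervalKey, Object stats);
        }
    }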
Itemized changes
--------------
* Clean up code duplication in LocusStratification and SampleStratification
* Add number of uncovered sites and number of low covered sites to the VCF output.
* Add new VCF format fields
* Fix outputting multiple statuses when the threshold is 0 (the ratio must be GREATER THAN, not equal to, the threshold to get reported)
[fixes#48780333]
[fixes#48787311]
* This version inherits from the original SW implementation so it can use the same matrix creation method.
* A bunch of refactoring was done to the original version to clean it up a bit and to have it do the
right thing for indels at the edges of the alignments.
* Enum added for the overhang strategy to use; added an implementation for the INDEL version of this strategy (see the sketch below).
* Lots of systematic testing added for this implementation.
* NOT HOOKED UP TO HAPLOTYPE CALLER YET. Committing so that people can play around with this for now.
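A hypothetical shape for that enum (only the INDEL strategy is named above; the enum name and the other value are assumptions):

    // Hypothetical sketch of an overhang-strategy enum for the Smith-Waterman
    // aligner; only INDEL is mentioned in this commit, the rest is illustrative.
    enum OverhangStrategy {
        SOFTCLIP,   // soft-clip bases hanging off the ends of the alignment
        INDEL       // represent overhanging bases as insertions/deletions instead
    }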
-Diff engine output is now included in the actual exception message thrown as a
result of an MD5 mismatch, which allows it to be conveniently viewed on the
main page of a build in Bamboo.
Minor Additional Improvements:
-WalkerTestSpec now auto-detects the test class name via the new JVMUtils.getCallingClass()
method (see the sketch below), and the test class name is now included as a regular part of
integration test output for each test.
-Fix race condition in MD5DB.ensureMd5DbDirectory()
-integrationtests dir is now cleaned by "ant clean"
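A minimal sketch of how such a calling-class helper can be implemented from the stack trace (illustrative only; the real JVMUtils.getCallingClass() may differ):

    // Illustrative only: one common way to discover the class that invoked a
    // utility, by walking the current stack trace until we leave the utility's
    // own frames. The real JVMUtils.getCallingClass() may be implemented differently.
    final class CallingClassUtil {
        static Class<?> getCallingClass(Class<?> utilityClass) throws ClassNotFoundException {
            for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
                String name = frame.getClassName();
                if (!name.equals(Thread.class.getName())
                        && !name.equals(CallingClassUtil.class.getName())
                        && !name.equals(utilityClass.getName())) {
                    return Class.forName(name);   // first frame outside the utility
                }
            }
            throw new IllegalStateException("No calling class found");
        }
    }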
GSA-915 #resolve
* bitset could legitimately be in an unfinished state but we were trying to access it without finalizing.
* added --cancer_mode argument per Mark's suggestion to force the user to explicitly enable multi-sample mode.
* tests were easiest to implement as integration tests (this was a really complicated case).
Problem
-------
The DeBruijn assembler was too slow. The cause of the slowness was the need to construct many kmer graphs (with kmer sizes stepping from the max read length in the interval down to 11, in increments of 6 bp). We needed to build many kmer graphs because the assembler (1) needed long kmers to assemble through regions where a shorter kmer is non-unique in the reference, as we couldn't split cycles in the reference, and (2) needed shorter kmers to be sensitive to differences from the reference near the edges of reads, which would often be lost when there was a chain of longer kmers that started before and after the variant.
Solution
--------
The read threading assembler uses fixed kmer sizes; in this implementation it builds, by default, two graphs with kmer sizes of 10 and 25. The algorithm operates as follows:
identify all non-unique kmers of size K among all reads and the reference
for each sequence (ref and read):
    find a unique starting position of the sequence in the graph by matching to a unique kmer, or start a new source node if none exists
    for each base in the sequence from the starting vertex kmer:
        look at the existing outgoing vertices of the current vertex V; if the base in the sequence matches the suffix of an outgoing vertex N, thread the sequence to N and continue
        if no matching next vertex exists, find a unique vertex with kmer K; if one exists, merge the sequence into this vertex and continue
        if a merge vertex cannot be found, create a new vertex (note this vertex may have a kmer identical to another in the graph, if it is not unique), thread the sequence to this vertex, and continue
This algorithm has a key property: it can robustly use a very short kmer without introducing cycles, as we create paths through regions of the graph that aren't unique w.r.t. the sequence at the given kmer size. This allows us to assemble well with even very short kmers.
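A compact sketch of this threading loop in Java (illustrative class and method names, not the actual ReadThreadingAssembler code):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of the threading idea, not the real ReadThreadingAssembler.
    // Vertices hold kmer strings; only kmers that are unique across all input
    // sequences may be used as merge targets, which is what keeps short kmers
    // from introducing spurious cycles.
    final class ReadThreadingSketch {
        static final class Vertex {
            final String kmer;
            final Map<Character, Vertex> outgoing = new HashMap<>(); // next base -> vertex
            Vertex(String kmer) { this.kmer = kmer; }
        }

        private final int k;
        private final Set<String> nonUnique;                       // kmers seen at >1 position
        private final Map<String, Vertex> uniqueVertices = new HashMap<>();
        private final List<Vertex> allVertices = new ArrayList<>();

        ReadThreadingSketch(int k, List<String> sequences) {
            this.k = k;
            this.nonUnique = findNonUniqueKmers(sequences);
        }

        /** Step 1: identify all non-unique kmers among the reference and reads. */
        private Set<String> findNonUniqueKmers(List<String> sequences) {
            Set<String> seen = new HashSet<>(), dups = new HashSet<>();
            for (String seq : sequences) {
                for (int i = 0; i + k <= seq.length(); i++) {
                    String kmer = seq.substring(i, i + k);
                    if (!seen.add(kmer)) dups.add(kmer);
                }
            }
            return dups;
        }

        /** Steps 2-3: thread one sequence (reference or read) into the graph. */
        void thread(String seq) {
            if (seq.length() < k) return;
            Vertex v = getOrCreateVertex(seq.substring(0, k));      // starting (source) vertex
            for (int i = k; i < seq.length(); i++) {
                char base = seq.charAt(i);
                String kmer = seq.substring(i - k + 1, i + 1);
                Vertex next = v.outgoing.get(base);                 // matching outgoing vertex?
                if (next == null) {
                    next = uniqueVertices.get(kmer);                // unique merge target?
                    if (next == null) next = createVertex(kmer);    // otherwise a fresh vertex
                    v.outgoing.put(base, next);
                }
                v = next;
            }
        }

        private Vertex getOrCreateVertex(String kmer) {
            Vertex v = uniqueVertices.get(kmer);
            return v != null ? v : createVertex(kmer);
        }

        private Vertex createVertex(String kmer) {
            Vertex v = new Vertex(kmer);
            allVertices.add(v);
            if (!nonUnique.contains(kmer)) uniqueVertices.put(kmer, v);
            return v;
        }
    }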
This commit includes many critical changes to the haplotype caller to make it fast, sensitive, and accurate on deep and shallow WGS and exomes; the key changes are highlighted below:
-- The ReadThreading assembler keeps track of the maximum edge multiplicity per sample in the graph, so that we prune per sample, not across all samples. This change is essential to operate effectively when there are many deep samples (e.g., 100 exomes)
-- A new pruning algorithm that will only prune linear paths where the maximum edge weight among all edges in the path is < pruningFactor. This makes pruning more robust when you have a long chain of bases that has high multiplicity at the start but only barely makes it back into the main path in the graph.
-- We now do a global SmithWaterman to compute the cigar of a Path, instead of the previous bubble-based SmithWaterman optimization. This change is essential for us to get good variants from our paths when the kmer size is small. It also ensures that we produce a cigar from a path that depends only on the sequence of bases in the path, unlike the previous approach, which depended on both the bases and the way the path was decomposed into vertices, which in turn depended on the kmer size we used.
-- Removed MergeHeadlessIncomingSources, which was introducing problems in the graphs in some cases, and just isn't the safest operation. Since we build a kmer graph of size 10, this operation is no longer necessary as it required a perfect match of 10 bp to merge anyway.
-- The old DebruijnAssembler is still available with a command line option
-- The number of paths we take forward from each assembly graph is now capped at a per-sample factor: we allow 128 paths for a single sample, up to 10 x nSamples as necessary. This is an essential change to make the system work well for large numbers of samples.
-- Add a global mismapping parameter to the HC likelihood calculation: phredScaledGlobalReadMismappingRate reflects the average global mismapping rate of all reads, regardless of their mapping quality. This term affects the probability that a read originated from the reference haplotype, regardless of its edit distance from the reference, in that the read could have originated from the reference haplotype but from another location in the genome. Suppose a read has many mismatches from the reference, say 5, but has a very high mapping quality of 60. Without this parameter, the read would contribute 5 * Q30 of evidence in favor of its 5-mismatch haplotype compared to the reference, potentially enough to make a call off that single read for all of these events. With this parameter set to Q30, though, the maximum evidence against the reference that this (and any) read could contribute is Q30. Controllable via a command line argument, defaulting to a Q60 rate (a sketch of the capping logic appears after this list). Results from 20:10-11 mb for this branch are consistent with the previous behavior, but this does help in cases where you have rare, very divergent haplotypes
-- Reduced ActiveRegionExtension from 200 bp to 100 bp, which is a performance win; the larger extension is largely unnecessary with the short kmers used by the read threading assembler
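A sketch of the capping implied by the mismapping parameter described above (hypothetical helper, not the actual HaplotypeCaller likelihood code):

    // Hypothetical sketch of the capping described above; not the actual
    // HaplotypeCaller code. A read's log10 likelihood against any haplotype is
    // never allowed to fall more than the phred-scaled global mismapping rate
    // below its best haplotype, so a single read can contribute at most that
    // much evidence against the reference (or any other haplotype).
    final class GlobalMismappingCap {
        static double[] capReadLikelihoods(double[] log10LikelihoodsPerHaplotype,
                                           int phredScaledGlobalReadMismappingRate) {
            double best = Double.NEGATIVE_INFINITY;
            for (double l : log10LikelihoodsPerHaplotype) {
                best = Math.max(best, l);
            }

            // Q60 default => at most a 10^-6 relative penalty for any haplotype
            double maxPenalty = phredScaledGlobalReadMismappingRate / 10.0;
            double floor = best - maxPenalty;

            double[] capped = new double[log10LikelihoodsPerHaplotype.length];
            for (int i = 0; i < capped.length; i++) {
                capped[i] = Math.max(log10LikelihoodsPerHaplotype[i], floor);
            }
            return capped;
        }
    }

With the cap at Q30, the 5-mismatch / MQ60 read in the example above contributes at most 30 phred-scaled units of evidence against the reference instead of roughly 150.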
Infrastructure changes / improvements
-------------------------------------
-- Refactored BaseGraph to take a subclass of BaseEdge, so that we can use a MultiSampleEdge in the ReadThreadingAssembler
-- Refactored DeBruijnAssembler, moving common functionality into LocalAssemblyEngine, which now more directly manages the subclasses, requiring them to implement only an assemble() method that takes the ref and reads and provides a List<SeqGraph>, which the LocalAssemblyEngine takes forward to compute haplotypes and other downstream operations (see the sketch after this list). This allows us to have only a limited amount of code that differentiates the DeBruijn and ReadThreading assemblers
-- Refactored active region trimming code into ActiveRegionTrimmer class
-- Cleaned up the arguments in HaplotypeCaller, reorganizing them and making arguments @Hidden and @Advanced as appropriate. Renamed several arguments now that the read threading assembler is the default
-- LocalAssemblyEngineUnitTest reads in the reference sequence from b37 and assembles synthetic reads over intervals from 10-11 Mb, both with only the reference sequence and with artificial SNPs, deletions, and insertions.
-- Misc. updates to Smith Waterman code. Added a generic interface called, not surprisingly, SmithWaterman, making it easier to have alternative implementations.
-- Many many more unit tests throughout the entire assembler, and in random utilities
-- the Queue jobreport PDF script now provides a high-level summary of the de-scattered runtimes of each analysis, so that it's easy to see where your script is spending its time across scatters.
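The shape of that assembler refactoring, sketched with placeholder types (SeqGraph, Haplotype, and Read stand in for the real classes; signatures may differ):

    import java.util.List;

    // Sketch of the abstraction described above; the real class and method
    // signatures may differ. Subclasses only produce assembly graphs;
    // everything downstream lives in the shared engine.
    abstract class LocalAssemblyEngineSketch {
        /** The only method a concrete assembler must provide. */
        protected abstract List<SeqGraph> assemble(byte[] refSequence, List<Read> reads);

        /** Shared driver: run the subclass, then do the common downstream work. */
        final List<Haplotype> runLocalAssembly(byte[] refSequence, List<Read> reads) {
            List<SeqGraph> graphs = assemble(refSequence, reads);
            return computeHaplotypes(graphs);
        }

        private List<Haplotype> computeHaplotypes(List<SeqGraph> graphs) {
            // common path-finding / haplotype construction shared by both assemblers
            throw new UnsupportedOperationException("omitted in this sketch");
        }

        // placeholder types for the sketch
        static final class SeqGraph {}
        static final class Haplotype {}
        static final class Read {}
    }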
* This is emerging now because BWA-MEM produces lots of reads that are not primary alignments
* The ConstrainedMateFixingManager class used by IndelRealigner was mis-adjusting SAM flags because it
was getting confused by these secondary alignments
* Added unit test to cover this case
Only try to clip adaptors when both reads of the pair are on opposite strands
-- Read pairs that have unusual alignments, such as two reads both oriented like:
<-----
<-----
were previously having their adaptors clipped as though the standard calculation of the insert size was meaningful, which it is not for such oddly oriented pairs. This caused us to clip extra good bases from reads (see the sketch below).
-- Update MD5s due to the change in adaptor clipping, which adds some coverage in some places
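A sketch of the strand guard described above (method name is illustrative; assumes the htsjdk SAM flags API):

    import htsjdk.samtools.SAMRecord;

    // Sketch of the guard described above (method name is illustrative). Adaptor
    // clipping relies on the standard insert-size calculation, which is only
    // meaningful for properly oriented pairs, i.e. when the two reads of the
    // pair are on opposite strands.
    final class AdaptorClippingGuard {
        static boolean pairEligibleForAdaptorClipping(SAMRecord read) {
            return read.getReadPairedFlag()
                    && !read.getReadUnmappedFlag()
                    && !read.getMateUnmappedFlag()
                    && read.getReadNegativeStrandFlag() != read.getMateNegativeStrandFlag();
        }
    }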
-- The previous version would trim 2M2D2M down to 2M if you asked for the first 2 bases, but this can result in incorrect alignment of the bases to the reference, as the bases no longer span the full reference interval expected. Fixed and added unit tests
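A small illustration of the reference-span difference at the heart of this fix, assuming htsjdk Cigar objects (the actual trimming code is not shown here):

    import htsjdk.samtools.Cigar;
    import htsjdk.samtools.CigarElement;
    import htsjdk.samtools.CigarOperator;
    import java.util.Arrays;

    // Shows the reference-span problem described above: trimming 2M2D2M down to
    // plain 2M spans only 2 reference bases, while an alignment that still
    // accounts for the deleted reference bases (2M2D) spans 4.
    final class CigarSpanDemo {
        public static void main(String[] args) {
            Cigar dropped = new Cigar(Arrays.asList(
                    new CigarElement(2, CigarOperator.M)));
            Cigar kept = new Cigar(Arrays.asList(
                    new CigarElement(2, CigarOperator.M),
                    new CigarElement(2, CigarOperator.D)));
            System.out.println("2M   reference span: " + dropped.getReferenceLength()); // 2
            System.out.println("2M2D reference span: " + kept.getReferenceLength());    // 4
        }
    }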
The output didn't "mix up" the genotypes; it output the same HET vs HET (e.g.) 3 times rather than the combinations of HET vs {HET, HOM, HOM_REF}, etc.
This was only a problem in the text, _not_ the actual numbers, which were output correctly.
- Updated MD5's after looking at diffs to verify that the change is what I expected.
-Changes in Java 7 related to comparators / sorting produce a large number
of innocuous differences in our test output. Updating expectations now
that we've moved to using Java 7 internally.
-Also incorporate Eric's fix to the GATKSAMRecordUnitTest to prevent
intermittent failures.
This class, being unused, was no longer getting packaged into the
GATK release jar by bcel, and so attempting to run its unit test
on the release jar was producing an error.
-Even though we're no longer compiling/using contracts in tests,
we still need the cofoja jar in the classpath when testing the
release jars due to some bad behavior on the part of TestNG in
not being able to handle missing annotation classes.
-We don't need to package the cofoja classes in the actual GATK
jar, however (and we never have).
RR counts are represented as offsets from the first count, but that wasn't being done
correctly when counts are adjusted on the fly. Also, we were triggering the expensive
conversion and writing to binary tags even when we weren't going to write the read
to disk.
The code has been updated so that unconverted counts are passed to the GATKSAMRecord
and it knows how to encode the tag correctly. Also, there are now methods to write
to the reduced counts array without forcing the conversion (and methods that do force
the conversion).
Also:
1. counts are now maintained as ints whenever possible. Only the GATKSAMRecord knows
about the internal encoding.
2. as discussed in meetings today, we updated the encoding so that it can now handle
a range of values that extends to 255 instead of 127 (and is backwards compatible); a sketch of the idea follows this list.
3. tests have been moved from SyntheticReadUnitTest to GATKSAMRecordUnitTest accordingly.
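A hedged sketch of the offset encoding idea (the real GATKSAMRecord tag layout may differ):

    // Hedged sketch of the idea described above; the real GATKSAMRecord tag
    // layout may differ. Counts stay ints in memory (assumes counts.length > 0);
    // on encode we store the first count followed by offsets from it, and on
    // decode we read the first count as an unsigned byte (& 0xFF), which extends
    // the representable range to 255 while staying byte-compatible with 0..127.
    final class ReducedCountsCodec {
        static byte[] encode(int[] counts) {
            byte[] encoded = new byte[counts.length];
            encoded[0] = (byte) counts[0];
            for (int i = 1; i < counts.length; i++) {
                encoded[i] = (byte) (counts[i] - counts[0]);   // offset from first count
            }
            return encoded;
        }

        static int[] decode(byte[] encoded) {
            int[] counts = new int[encoded.length];
            counts[0] = encoded[0] & 0xFF;                     // unsigned read: 0..255
            for (int i = 1; i < encoded.length; i++) {
                counts[i] = counts[0] + encoded[i];            // offsets may be negative
            }
            return counts;
        }
    }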
Also added an R script used to process everything into a couple of ggplot-friendly data frames.
Functionality is basically the same. Enhancements:
-- Add annotation to log Axiom and Exome Chip AC along with LOF results for concordance comparisons.
-- General Cleanup.
-- Used base path for files as a variable in case directory structure in gsa-hpprojects changes again.
-- Also output per-pool data by subsetting genotypes per pool and comparing with corresponding genotypes from Axiom, exome chip, and Omni.
-- Commit R scripts that load all tables and crunch them to analyze them.