Problem:
--------
PairHMM was generating positive log likelihoods (even after the rework of the model)
Solution:
---------
The caching indices were never re-initializing the initial conditions in the first position of the deletion matrix. Also, the match matrix was being wrongly initialized (there is not necessarily a match in the first position). This commit fixes both issues in both the Logless and the Log10 versions of the PairHMM (a sketch of the corrected initialization follows the change list below).
Summarized Changes:
------------------
* Redesign the matrices to have only 1 col/row of padding instead of 2.
* PairHMM class now owns the caching of the haplotype (keeps track of the last haplotype and decides where the caching should start)
* Initial condition (in the deletionMatrix) is now updated every time the haplotypes differ in length (this was wrong in the previous version)
* Adjust the prior and probability matrices to be one-based (logless)
* Update Log10PairHMM to work with prior and probability matrices as well
* Move prior and probability matrices to parent class
* Move and rename padded lengths to parent class to simplify the interface and prevent off-by-one errors in new implementations
* Simple cleanup of PairHMMUnitTest class for a little speedup
* Updated HC and UG integration test MD5's because of the new initialization (without enforcing match on first base).
* Create static indices for the transition probabilities (for better readability)
[fixes #47399227]
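A minimal sketch of the corrected initialization, assuming hypothetical field and constant names (the real PairHMM classes differ): only the deletion matrix carries the initial condition, the match and insertion rows start at zero, and the initial condition is recomputed whenever the haplotype length changes:

    // Hypothetical sketch, not the actual GATK code.
    final class PairHMMInitSketch {
        private static final double INITIAL_CONDITION = Math.pow(2, 1020); // large constant to delay underflow

        private final double[][] matchMatrix;
        private final double[][] insertionMatrix;
        private final double[][] deletionMatrix;
        private int previousHaplotypeLength = -1;

        PairHMMInitSketch(final int maxReadLength, final int maxHaplotypeLength) {
            // one row/col of padding instead of two, per this commit
            matchMatrix     = new double[maxReadLength + 1][maxHaplotypeLength + 1];
            insertionMatrix = new double[maxReadLength + 1][maxHaplotypeLength + 1];
            deletionMatrix  = new double[maxReadLength + 1][maxHaplotypeLength + 1];
        }

        void initializeIfNeeded(final int haplotypeLength) {
            // The bug: caching skipped this step, leaving stale initial conditions behind
            // when consecutive haplotypes had different lengths.
            if (haplotypeLength != previousHaplotypeLength) {
                final double initialValue = INITIAL_CONDITION / haplotypeLength;
                for (int j = 0; j <= haplotypeLength; j++) {
                    deletionMatrix[0][j]  = initialValue; // read may start anywhere in the haplotype
                    matchMatrix[0][j]     = 0.0;          // no forced match at the first position
                    insertionMatrix[0][j] = 0.0;
                }
                previousHaplotypeLength = haplotypeLength;
            }
        }
    }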
* As reported here: http://gatkforums.broadinstitute.org/discussion/comment/4270#Comment_4270
* This was a commit into the variant.jar; the changes here are a rev of that jar and handling of errors in VF
* Added integration test to confirm failure with User Error
* Removed illegal header line in KB test VCF that was causing related tests to fail.
* Very trivial, but I happened to see this code and it drove me nuts so I felt compelled to refactor it.
* Instead of iterating over the keys in the map to get the values, just iterate over the values directly (example below).
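For illustration, the general shape of the refactor (the actual map and element types in the code differ):

    import java.util.HashMap;
    import java.util.Map;

    public class MapIterationExample {
        public static void main(String[] args) {
            final Map<String, Integer> counts = new HashMap<String, Integer>();
            counts.put("a", 1);
            counts.put("b", 2);

            // Before: iterate over the keys just to look each value back up.
            int total = 0;
            for (final String key : counts.keySet()) {
                total += counts.get(key);
            }

            // After: iterate over the values directly.
            int totalDirect = 0;
            for (final int value : counts.values()) {
                totalDirect += value;
            }

            System.out.println(total + " == " + totalDirect);
        }
    }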
-TestNG fails to report, in its XML reports, errors that occur in static initializer
blocks before any tests are run. This was causing Bamboo to claim that tests had
passed even though there were pre-test errors (illustrated below).
-This is a temporary fix until we can find a way to get TestNG to report errors that occur both
outside of test methods and outside of @Before* methods.
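A minimal illustration of the failure mode (hypothetical test class; the resource check is invented):

    import org.testng.annotations.Test;

    public class StaticInitFailureExample {
        static {
            // Throws before TestNG instantiates the class, so the error can be missing
            // from the XML report even though no test actually ran.
            if (System.getenv("REQUIRED_TEST_RESOURCE") == null) {
                throw new RuntimeException("missing test resource");
            }
        }

        @Test
        public void neverRuns() {
            // Bamboo may still report this suite as passing.
        }
    }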
-- When consecutive intervals were within the bandpass filter size, the ActiveRegion traversal engine would create
duplicate active regions.
-- Now, when flushing the activity profile after we jump to a new interval, we remove the extra states which are outside
of the current interval (sketched below).
-- Added integration test which ensures that the output VCF contains no duplicate records. Was failing test before this commit.
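A hedged sketch of the flush-time trimming; the state and profile types here are invented stand-ins for the real ActiveRegion machinery:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    final class ActivityProfileSketch {
        static final class State {
            final String contig; final int position; final double activity;
            State(final String contig, final int position, final double activity) {
                this.contig = contig; this.position = position; this.activity = activity;
            }
        }

        private final List<State> states = new ArrayList<State>();

        // Drop buffered states outside the interval we just jumped to, so they
        // cannot be emitted a second time as a duplicate active region.
        void trimToInterval(final String contig, final int start, final int stop) {
            for (final Iterator<State> it = states.iterator(); it.hasNext(); ) {
                final State s = it.next();
                if (!s.contig.equals(contig) || s.position < start || s.position > stop) {
                    it.remove();
                }
            }
        }
    }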
-A UserException is now thrown if either the fai or dict file for the
reference does not exist, with pointers to instructions for creating
these files (see the sketch below).
-Gets rid of problematic file locking that was causing intermittent
errors on our farm.
-Integration tests to verify that correct exceptions are thrown in
the case of a missing fai / dict file.
GSA-866 #resolve
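The shape of the check, sketched standalone (class and message text are illustrative; the real code throws a GATK UserException with a link to the documentation, elided here as "..."):

    import java.io.File;

    final class ReferenceValidationSketch {
        static void validateAuxiliaryFiles(final File fasta) {
            final File fai  = new File(fasta.getAbsolutePath() + ".fai");
            final File dict = new File(fasta.getAbsolutePath().replaceAll("\\.fasta$", ".dict"));
            if (!fai.exists()) {
                // stands in for a UserException pointing at the indexing instructions
                throw new RuntimeException("Fasta index file " + fai + " does not exist. See ... for help creating it.");
            }
            if (!dict.exists()) {
                throw new RuntimeException("Fasta dict file " + dict + " does not exist. See ... for help creating it.");
            }
        }
    }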
-The algorithm for finding the intersection of two sets of intervals
relies on the sortedness of the intervals within each set, but the engine
was not sorting the intervals before attempting to find the intersection.
-The result was that if one or both interval lists were unsorted / lexicographically
sorted, we would often fail to find the intersection correctly.
-Now the IntervalBinding sorts all sets of intervals before returning them,
solving the problem (illustrated below).
-Added an integration test for this case.
GSA-909 #resolve
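To illustrate why sortedness matters: a standard two-pointer sweep like the one below (a simplified stand-in for the engine's intersection logic, assuming a single contig) silently misses overlaps on unsorted input, so the fix sorts both lists first:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    final class IntervalIntersectionSketch {
        static final class Interval {
            final int start, stop;
            Interval(final int start, final int stop) { this.start = start; this.stop = stop; }
        }

        static List<Interval> intersect(final List<Interval> a, final List<Interval> b) {
            // The sweep below is only correct on sorted input -- the invariant
            // that was being violated before this commit.
            a.sort(Comparator.comparingInt(iv -> iv.start));
            b.sort(Comparator.comparingInt(iv -> iv.start));
            final List<Interval> out = new ArrayList<>();
            int i = 0, j = 0;
            while (i < a.size() && j < b.size()) {
                final Interval x = a.get(i), y = b.get(j);
                final int lo = Math.max(x.start, y.start);
                final int hi = Math.min(x.stop, y.stop);
                if (lo <= hi) out.add(new Interval(lo, hi));
                if (x.stop < y.stop) i++; else j++;
            }
            return out;
        }
    }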
-- Graphs with cycles from the bottom node to one of the middle nodes would introduce an infinite loop in the algorithm. Created a unit test that reproduced the issue, and then fixed the underlying problem.
-- Only try to genotype PASSing records in the alleles file
-- Don't attempt to genotype multiple records with the same start location. Instead take the first record and emit a warning message.
-- Sometimes it's desirable to specify a set of "good" regions and filter out everything else (e.g. an alignability mask or a "good regions" mask). By default, the -mask argument in VF only filters sites inside a particular mask. The new -filterNotInMask argument reverses the default logic and filters sites outside of a given mask (see the sketch below).
-- Added integration test, and made sure we also test with a BED rod.
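The decision, boiled down to a sketch (names are illustrative, not the walker's actual fields):

    final class MaskFilterLogicSketch {
        /**
         * By default, a site overlapping the mask is filtered; with -filterNotInMask,
         * a site NOT overlapping the mask is filtered instead.
         */
        static boolean shouldFilter(final boolean overlapsMask, final boolean filterNotInMask) {
            return filterNotInMask ? !overlapsMask : overlapsMask;
        }
    }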
* Moved to protected for packaging purposes.
* Cleaned up and removed debugging output.
* Fixed logic for epsilons so that we really only test significant differences between BAMs.
* Other small fixes (e.g. don't include low quality reduced reads in overall qual).
* Most RR integration tests now automatically run the quals test on output.
* A few are disabled because we expect them to fail in various locations (e.g. due to downsampling).
The Problem:
------------
The SAM spec does not allow multiple @PG records with the same ID. Our @PG tag writing routines were allowing that to happen via the boolean parameter "keep_all_pg_records".
How this fixes it:
------------------
This commit removes that option from all the utility functions and cleans up the code around the classes that used these methods off-spec.
Summarized changes:
-------------------
* Remove the keep_all_pg_records option from the setupWriter utility methods in Util
* Update all walkers to now replace the last @PG tag of the same walker, if it already exists (sketched below)
* Clean up NWaySamFileWriter now that it doesn't need to keep track of the keep_all_pg_records variable
* Simplify the multiple implementations to setupWriter
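A hedged sketch of the replace-instead-of-duplicate idea against the samtools header API (this is not the actual GATK utility; the lookup-by-ID convention is an assumption):

    import net.sf.samtools.SAMFileHeader;
    import net.sf.samtools.SAMProgramRecord;

    final class ProgramRecordSketch {
        static void setProgramRecord(final SAMFileHeader header, final String walkerId,
                                     final String version, final String commandLine) {
            SAMProgramRecord record = null;
            // Reuse the last existing @PG whose ID matches this walker, keeping IDs unique.
            for (final SAMProgramRecord pg : header.getProgramRecords()) {
                if (pg.getId().equals(walkerId)) {
                    record = pg;
                }
            }
            if (record == null) {
                record = new SAMProgramRecord(walkerId);
                header.addProgramRecord(record);
            }
            record.setProgramName(walkerId);
            record.setProgramVersion(version);
            record.setCommandLine(commandLine);
        }
    }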
Bamboo:
-------
http://gsabamboo.broadinstitute.org/browse/GSAUNSTABLE-PARALLEL31
Issue Tracker:
--------------
[fixes 47100885]
-- Corrected logic to pick the biallelic VC to left-align.
-- Added an integration test to make sure this feature is tested, and that the base-trimming feature is also tested.
The current implementation of the PairHMM had issues with the probabilities and the state machines. Probabilities were not adding up to one because:
# Initial conditions were not being set properly
# Emission probabilities in the last row were not adding up to 1
This commit fixes both by
# averaging all potential start locations (giving an equal prior to the state machine in its first iteration -- allowing the read to start its alignment anywhere in the haplotype with equal probability)
# discounting all paths that end in deletions by not adding the last row of the deletion matrix, and instead summing over all paths ending in matches and insertions (this saves us from a fourth matrix to represent the end state; see the sketch below)
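Both fixes, boiled down to a sketch in real (non-log) space with hypothetical names:

    final class PairHMMModelSketch {
        // Equal prior over all start positions: the read may begin its alignment
        // anywhere in the haplotype with probability 1 / haplotypeLength.
        static void initialize(final double[][] deletionMatrix, final int haplotypeLength) {
            final double initialValue = 1.0 / haplotypeLength;
            for (int j = 0; j <= haplotypeLength; j++) {
                deletionMatrix[0][j] = initialValue;
            }
        }

        // Paths ending in a deletion are discounted: sum only match + insertion
        // entries of the last row, so no fourth "end state" matrix is needed.
        static double finalLikelihood(final double[][] matchMatrix, final double[][] insertionMatrix,
                                      final int readLength, final int haplotypeLength) {
            double sum = 0.0;
            for (int j = 1; j <= haplotypeLength; j++) {
                sum += matchMatrix[readLength][j] + insertionMatrix[readLength][j];
            }
            return sum;
        }
    }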
Summarized changes:
* Fix LoglessCachingPairHMM and Log10PairHMM according to the new algorithm
* Refactor the probabilities check to throw an exception if we ever encounter probabilities greater than 1.
* Rename LoglessCachingPairHMM to LoglessPairHMM (this is the default implementation in the HC now)
* Rename matrices to matchMatrix, insertionMatrix and deletionMatrix for clarity
* Rename metric lengths to read and haplotype lengths for clarity
* Rename private methods to initializePriors (distance) and initializeProbabilities (constants) for clarity
* Eliminate first row constants (because they're not used anyway!) and directly assign initial conditions in the deletionMatrix
* Remove unnecessary parameters from updateCell()
* Fix the expected probabilities coming from the exact model in PairHMMUnitTest
* Neatify PairHMM class (removed unused methods) and PairHMMUnitTest (removed unused variables)
* Update MD5s: Probabilities have changed according to the new PairHMM model and as expected HC and UG integration tests have new MD5s.
[fix 47164949]
that are tested), resulting in slightly different numbers of calls to the RNG, and ultimately
different sets of selected variants.
This commit updates the md5 values for the validation site selector integration test to reflect
these new random subsets of variants that are selected.
-- Added the ability to trim common bases in front of indels before left-aligning. Otherwise, records may not be left-aligned if they have common leading bases, as they will be mistaken for complex records (see the sketch below).
-- Added the ability to split multiallelic records and then left-align them; otherwise we miss a lot of good left-alignable indels.
-- Motivated by this, renamed walker to LeftAlignAndTrimVariants.
-- Code refactoring, cleanup, and bringing the code up to the latest coding standards.
-- Added unit testing to make sure left alignment is performed correctly for all offsets.
-- Changed the phase 3 HC script to the new syntax. Added command line options and more memory, and reduced alt alleles because jobs kept crashing.
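An illustrative trim of the common leading bases shared by REF and ALT (the real walker generalizes this across all alternate alleles): e.g. REF=CAA / ALT=CA shares one leading base, and trimming it leaves REF=AA / ALT=A at position+1, which can then be left-aligned.

    final class AlleleTrimSketch {
        // Number of shared leading bases that can be trimmed, keeping at least
        // one base in each allele so the record remains a valid indel.
        static int commonLeadingBases(final String ref, final String alt) {
            int n = 0;
            while (n < ref.length() - 1 && n < alt.length() - 1 && ref.charAt(n) == alt.charAt(n)) {
                n++;
            }
            return n;
        }
    }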
Currently, the multi-allelic test is covering the following case:
Eval A T,C
Comp A C
This commit reciprocates it so that the reverse case is also covered:
Eval A C
Comp A T,C
And furthermore, modify ConcordanceMetrics to more properly handle the situation where multiple alternate alleles are available in the comp. It was possible for an eval C/C sample to match a comp T/T sample, so long as the C allele was also present in at least one other comp sample.
This comes from the fact that "truth" reference alleles can be paired with *any* allele also present in the truth VCF, while truth het/hom var sites are restricted to having to match only the alleles present in the genotype. The reason that truth ref alleles are a special case is as follows; imagine:
Eval: A G,T 0/0 2/0 2/2 1/1
Comp: A C,T 0/0 1/0 0/0 0/0
Even though the alt allele of the comp is a C, the assessment of genotypes should be as follows:
Sample1: ref called ref
Sample2: alleles don't match (the alt allele of the comp was not assessed in eval)
Sample3: ref called hom-var
Sample4: alleles don't match (the alt allele of the eval was not assessed in comp)
Before this change, Sample2 was evaluated as "het called het" (as the T allele in eval happens to also be in the comp record, just not in the comp sample). Thus: apply the current
logic to comp hom-refs, and the more restrictive logic ("you have to match an allele in the comp genotype") when the comp is not reference (sketched below).
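The matching rule as a hedged sketch (invented names; alleles are plain strings here rather than the real Allele objects):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    final class ConcordanceAlleleMatchSketch {
        // A hom-ref comp genotype may be paired with any allele in the comp record,
        // while a non-ref comp genotype restricts eval to the alleles it actually carries.
        static boolean allelesMatch(final List<String> evalGenotypeAlleles,
                                    final List<String> compGenotypeAlleles,
                                    final List<String> compRecordAlleles,
                                    final String refAllele) {
            boolean compIsHomRef = true;
            for (final String allele : compGenotypeAlleles) {
                if (!refAllele.equals(allele)) {
                    compIsHomRef = false;
                }
            }
            final Set<String> allowed = new HashSet<String>(
                compIsHomRef ? compRecordAlleles : compGenotypeAlleles);
            return allowed.containsAll(evalGenotypeAlleles);
        }
    }

Against the example above: Sample2's eval T is not in the comp genotype {C, A}, so the alleles don't match; Sample3's eval T/T is allowed against the hom-ref comp because T is in the comp record; Sample4's eval G appears nowhere in the comp record, so it fails either way.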
Also in this commit: major refactoring and testing for MathUtils. A large number of methods were not used at all in the codebase; these methods were removed:
- dotProduct (several types); logDotProduct is used extensively, but not the real-space version.
- vectorSum
- array shuffle, random subset
- countOccurances (general forms, the char form is used in the codebase)
- getNMaxElements
- array permutation
- sorted array permutation
- compare floats
- sum() (for integer arrays and lists).
The final keyword was added extensively throughout MathUtils.
The ratio() and percentage() methods were revised to error out on non-positive denominators, except in the case of 0/0 (which returns 0.0 for ratio() and 0.0% for percentage()); see the sketch below. Random sampling code was updated to make use of the cleaner implementations of generating permutations in MathUtils (allowing the array permutation code to be retired).
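The revised contract, sketched (not the exact MathUtils code):

    final class RatioSketch {
        static double ratio(final int numerator, final int denominator) {
            if (denominator > 0) {
                return (double) numerator / denominator;
            }
            if (denominator == 0 && numerator == 0) {
                return 0.0; // 0/0 is defined as 0.0
            }
            throw new IllegalArgumentException(
                "ratio denominator must be positive (or 0/0): " + numerator + "/" + denominator);
        }

        static double percentage(final int x, final int total) {
            return 100.0 * ratio(x, total); // 0/0 yields 0.0%
        }
    }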
The PaperGenotyper still made use of one of these array methods; since it was the only walker using it, the method was migrated into the genotyper itself.
In addition, more extensive tests were added for
- logBinomialCoefficient (Newton's identity should always hold; see the sketch below)
- logFactorial
- log10sumlog10 and its approximation
All unit tests pass
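One way such a property test can look, sketched standalone with local stand-ins for the MathUtils methods: Newton's identity for binomial coefficients, C(n,k)*C(k,j) = C(n,j)*C(n-j,k-j), becomes a plain sum in log10 space, so it should hold to within floating-point tolerance:

    final class LogBinomialPropertySketch {
        static double log10Factorial(final int n) {
            double sum = 0.0;
            for (int i = 2; i <= n; i++) sum += Math.log10(i);
            return sum;
        }

        static double log10BinomialCoefficient(final int n, final int k) {
            return log10Factorial(n) - log10Factorial(k) - log10Factorial(n - k);
        }

        public static void main(final String[] args) {
            for (int n = 2; n <= 60; n++) {
                for (int k = 1; k <= n; k++) {
                    for (int j = 0; j <= k; j++) {
                        final double lhs = log10BinomialCoefficient(n, k) + log10BinomialCoefficient(k, j);
                        final double rhs = log10BinomialCoefficient(n, j) + log10BinomialCoefficient(n - j, k - j);
                        if (Math.abs(lhs - rhs) > 1e-9) {
                            throw new AssertionError("identity violated at n=" + n + " k=" + k + " j=" + j);
                        }
                    }
                }
            }
            System.out.println("Newton's identity holds for all tested (n, k, j)");
        }
    }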
-- These new algorithms are more powerful than the restricted diamond merging algorithms, in that they can merge nodes with multiple incoming and outgoing edges. Together the splitter + merger algorithms will correctly merge many more cases than the original headless and tailless diamond merger.
-- Refactored haplotype caller infrastructure into graphs package, code cleanup
-- Cleanup new merging / splitting algorithms, with proper docs and unit tests
-- Fix bug in zipping of linear chains. Because the multiplicity can be 0, protect ourselves with a max function call
-- Fix BaseEdge.max unit test
-- Add docs and some more unit tests
-- Move error correction from DeBruijnGraph to DeBruijnAssembler
-- Replaced uses of System.out.println with logger.info
-- Don't make multiplicity == 0 nodes look like they should be pruned
-- Fix toString of Path
-- Previous algorithms were applying pruneGraph inappropriately on the raw sequence graph (where each vertex is a single base). This results in overpruning of the graph, as pruneGraph really relied on the zipping of linear chains (and the sharing of weight this provides) to avoid over-pruning the graph. Probably we should think hard about this. This commit fixes the logic so that we zip the graph between pruning passes.
-- In this process, identified a fundamental problem with how we were trimming away vertices that do not occur on a path from the reference source to sink. In fact, we were leaving in any vertex that happened to be accessible from the source, any vertices in cycles, and any vertex that wasn't the absolute end of a chain going to a sink. The new algorithm fixes all of this, using a BaseGraphIterator that provides a general approach to walking the base graph. Other routines that use the same traversal idiom were refactored to use this iterator. Added unit tests for all of these capabilities.
-- Created new BaseGraphIterator, which abstracts common access patterns to the graph, and use this where appropriate (sketched below)
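A hedged sketch of the iterator idea, over a simplified adjacency-map graph rather than the real vertex/edge classes: a breadth-first walk with a visited set, so it terminates even on graphs with cycles. Reachability from the reference source (and, run on the reversed edges, reachability to the sink) is exactly the building block the path-trimming fix above needs.

    import java.util.ArrayDeque;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    final class BaseGraphIteratorSketch<V> implements Iterator<V> {
        private final Map<V, Set<V>> outgoingEdges;
        private final Set<V> visited = new HashSet<V>();
        private final Deque<V> toVisit = new ArrayDeque<V>();

        BaseGraphIteratorSketch(final Map<V, Set<V>> outgoingEdges, final V start) {
            this.outgoingEdges = outgoingEdges;
            toVisit.add(start);
            visited.add(start); // the visited set makes the walk safe on cyclic graphs
        }

        public boolean hasNext() {
            return !toVisit.isEmpty();
        }

        public V next() {
            final V v = toVisit.poll();
            Set<V> successors = outgoingEdges.get(v);
            if (successors == null) successors = Collections.emptySet();
            for (final V succ : successors) {
                if (visited.add(succ)) {
                    toVisit.add(succ);
                }
            }
            return v;
        }
    }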