Previous fixes and tests only covered trailing soft-clips. Now that up-front
hard-clipping is working properly, though, the tool was failing on hard-clipped reads.
Added a patch for this, as well as a separate test (independent of the soft-clip tests)
to make sure it's working properly.
* Increase the memory limit for HTSLIB - BAM shuffling just eats up a ton of memory.
* Concurrent HTSLIB processes need unique temp files - the BAM shuffling step was clobbering shared temporary files and failing with a non-zero exit status. Fixed by giving each process a uniquely named temp file.
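The per-process naming fix can be sketched as follows; this is a minimal illustration, not the actual code, and the class and method names are hypothetical:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: build a temp-file name that is unique per process (PID)
// and per invocation (counter), so concurrent BAM-shuffling steps never clobber
// each other's temporary files.
public class UniqueTempNames {
    private static final AtomicLong COUNTER = new AtomicLong();

    public static String shuffleTempName(String base) {
        // RuntimeMXBean name is "pid@hostname", which identifies the process
        String pid = ManagementFactory.getRuntimeMXBean().getName().replace('@', '.');
        return base + "." + pid + "." + COUNTER.incrementAndGet() + ".tmp.bam";
    }

    public static void main(String[] args) {
        // two calls in the same process already differ; different PIDs differ too
        System.out.println(shuffleTempName("shuffle"));
        System.out.println(shuffleTempName("shuffle"));
    }
}
```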
This time we don't accidentally drop reads (phew), but this bug caused us not to
update the alignment start of the mate. Fixed, and added a unit test to cover it.
-- Currently we don't support writing a BAM file from the HaplotypeCaller when -nct is enabled. Check in initialize() whether this is the case, and throw a UserException if so.
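The fail-fast check could look roughly like this; a sketch only, with stand-in names for the walker state (the real UserException and argument fields live elsewhere in the GATK):

```java
// Hypothetical sketch: reject the unsupported combination of HaplotypeCaller
// BAM output and -nct at initialization time, before any work is done.
public class NctBamOutCheck {
    public static class UserException extends RuntimeException {
        public UserException(String msg) { super(msg); }
    }

    // stand-ins for the walker's -nct value and BAM-writer argument
    public static void initialize(int numCpuThreadsPerDataThread, boolean bamWriterRequested) {
        if (bamWriterRequested && numCpuThreadsPerDataThread > 1) {
            throw new UserException(
                "BAM output from the HaplotypeCaller is not supported with -nct; rerun with -nct 1");
        }
    }

    public static void main(String[] args) {
        initialize(1, true);  // single data thread: fine
        try {
            initialize(4, true);  // -nct 4 plus BAM output: rejected
        } catch (UserException e) {
            System.out.println("threw UserException");
        }
    }
}
```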
GitHub was intermittently rejecting large pushes that were in fact
fast-forward updates as being non-fast-forward. Try to prevent this
by ensuring that all refs are up to date and properly checked out
after branch filtering and before doing a source release.
-- Previous version emitted command lines that look like:
##HaplotypeCaller="analysis_type=HaplotypeCaller input_file=[private/testdata/reduced.readNotFullySpanningDeletion.bam] ..."
the new version additionally records when the GATK was run and the GATK version, in a nicer format:
##GATKCommandLine=<ID=HaplotypeCaller,Version=2.5-206-gbc7be2b,Date="Thu Jun 20 11:09:01 EDT 2013",Epoch=1371740941197,CommandLineOptions="analysis_type=HaplotypeCaller input_file=[private/testdata/reduced.readNotFullySpanningDeletion.bam] read_buffer_size=null phone_home=AWS ...">
-- Additionally, the command line options are emitted sequentially in the file, so you can see a running record of how a VCF was produced, such as this example from the integration test:
##GATKCommandLine=<ID=HaplotypeCaller,Version=2.5-206-gbc7be2b,Date="Thu Jun 20 11:09:01 EDT 2013",Epoch=1371740941197,CommandLineOptions="lots of stuff">
##GATKCommandLine=<ID=SelectVariants,Version=2.5-206-gbc7be2b,Date="Thu Jun 20 11:16:23 EDT 2013",Epoch=1371741383277,CommandLineOptions="lots of stuff">
-- Removed the ProtectedEngineFeaturesIntegrationTest
-- Actual unit tests for these features!
Improved AnalyzeCovariates (AC) integration test.
Renamed AC test files ending with .grp to .table
Implementation:
* Removed RECAL_PDF/CSV_FILE from RecalibrationArgumentCollection (RAC). Updated rest of the code accordingly.
* Fixed BQSRIntegrationTest to work with new changes
Implementation details:
* Added tool class *.AnalyzeCovariates
* Added a convenient addAll method to Utils to be able to add the elements of an array to a collection.
* Added parameter-comparison methods to the RecalibrationArgumentCollection class in order to verify that multiple input recalibration reports are compatible and comparable.
* Modified the BQSR.R script to handle up to 3 different recalibration tables (-BQSR, -before and -after) and removed some irrelevant arguments (or argument values) from the output.
* Added an integration test class.
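The Utils.addAll helper mentioned above presumably looks something like this; the signature is assumed, not verified against the actual Utils class:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical sketch of a convenience helper: append every element of an
// array to an existing collection, in order.
public class UtilsAddAll {
    public static <T> void addAll(Collection<? super T> dest, T[] elements) {
        for (T e : elements) {
            dest.add(e);
        }
    }

    public static void main(String[] args) {
        // example with covariate names as used in BQSR-style code
        List<String> covariates = new ArrayList<>(List.of("ReadGroup"));
        addAll(covariates, new String[]{"QualityScore", "Cycle", "Context"});
        System.out.println(covariates);
    }
}
```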
-Collapses zero-length and repeated cigar elements, neither of which
can necessarily be handled correctly by downstream code (like LIBS).
-Consolidation is done before read filters, because not all read filters
 behave correctly with non-consolidated cigars.
-Examined other uses of consolidateCigar() throughout the GATK, and
found them to not be redundant with the new engine-level consolidation
(they're all on artificially-created cigars in the HaplotypeCaller
and SmithWaterman classes)
-Improved comments in SAMDataSource.applyDecoratingIterators()
-Updated MD5s; differences were examined and found to be innocuous
-Two tests: -Unit test for ReadFormattingIterator
-Integration test for correct handling of zero-length
cigar elements by the GATK engine as a whole
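The consolidation described above amounts to the following; this sketch uses minimal stand-in types rather than the htsjdk Cigar classes, so names here are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative cigar consolidation: drop zero-length elements and merge
// adjacent elements with the same operator, e.g. 0M 2M 3M 1I -> 5M 1I.
public class CigarConsolidation {
    public record Element(int length, char op) {}

    public static List<Element> consolidate(List<Element> cigar) {
        List<Element> out = new ArrayList<>();
        for (Element e : cigar) {
            if (e.length() == 0) continue;  // zero-length element: drop it
            int last = out.size() - 1;
            if (last >= 0 && out.get(last).op() == e.op()) {
                // repeated operator: merge into the previous element
                out.set(last, new Element(out.get(last).length() + e.length(), e.op()));
            } else {
                out.add(e);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Element> raw = List.of(new Element(0, 'M'), new Element(2, 'M'),
                                    new Element(3, 'M'), new Element(1, 'I'));
        System.out.println(consolidate(raw));
    }
}
```

Note that dropping a zero-length element can make two same-operator neighbors adjacent (e.g. 2M 0I 3M), which this pass then merges into 5M in the same loop.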
-This argument was completely redundant with the engine-level -dfrac
argument.
-Could produce unintended consequences if used in conjunction with
engine-level downsampling arguments.
-- Changed default HMM model.
-- Removed check.
-- Changed MD5s: PLs in the high 100s change by a point or two due to the new implementation.
-- The resulting performance improvement is about 30-50% less runtime when using -glm INDEL.
-- numPruningSamples allows one to specify that the minPruning factor must be met by this many samples for a path to be considered good (e.g. seen twice in three samples). By default this is just one sample.
-- Added a unit test for this new functionality.
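The numPruningSamples rule can be sketched like this; names are assumed for illustration, not taken from the actual pruning code:

```java
// Hypothetical sketch of the pruning rule: a path is considered good only if
// at least numPruningSamples samples each support it with multiplicity >= minPruning.
public class PruningCheck {
    public static boolean pathIsGood(int[] perSampleSupport, int minPruning, int numPruningSamples) {
        int samplesMeetingThreshold = 0;
        for (int support : perSampleSupport) {
            if (support >= minPruning) samplesMeetingThreshold++;
        }
        return samplesMeetingThreshold >= numPruningSamples;
    }

    public static void main(String[] args) {
        // minPruning=2: seen twice in one of three samples passes with the
        // default numPruningSamples=1 ...
        System.out.println(pathIsGood(new int[]{2, 0, 1}, 2, 1));
        // ... but is pruned if two samples must meet the threshold
        System.out.println(pathIsGood(new int[]{2, 0, 1}, 2, 2));
    }
}
```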
-- It was being applied in the wrong order (after the first call to the underlying MalformedReadFilter), so if your first read was malformed you'd blow up there instead of having the read fixed properly. Added integration tests to ensure this continues to work.
-- [delivers #49538319]
-- When doing cross-species comparisons and studying population history and ancient DNA data, SOME measure of confidence that doesn't depend on the reference base is needed at every single site, even in a naive per-site SNP mode. Old versions of the GATK provided GQ and PL values at reference sites, but these were wrong. This commit addresses that need by adding a new UG command line argument, -allSitePLs, that, if enabled, will:
a) Emit all 3 ALT snp alleles in the ALT column.
b) Emit all corresponding 10 PL values.
It's up to the user to process these PL values downstream to make sense of them. Note that, in order to follow the VCF spec, the QUAL field of a reference call with non-null ALT alleles present will be zero, so QUAL will be useless and filtering will need to be done based on other fields.
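Where the 10 comes from: with the reference base plus all 3 alternate SNP bases there are 4 alleles at the site, and a diploid genotype likelihood vector has one entry per unordered allele pair, i.e. n(n+1)/2 entries (the standard VCF PL ordering):

```java
// Count of diploid genotypes (and thus PL entries) for n alleles: n(n+1)/2.
public class PlCount {
    public static int numDiploidGenotypes(int numAlleles) {
        return numAlleles * (numAlleles + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(numDiploidGenotypes(2)); // ordinary biallelic site: 3 PLs
        System.out.println(numDiploidGenotypes(4)); // -allSitePLs, REF + 3 ALTs: 10 PLs
    }
}
```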
-- Tweaks and fixes to processing pipelines for Reich lab.
-- VariantContextWriterStorage was gzipping the intermediate files that would later be merged, but the mergeInto function couldn't read those outputs, and we'd throw a very strange error. Now temporary VCFs aren't compressed, even if the final VCF is. Added an integration test to ensure this behavior keeps working going forward.
-- [delivers #47399279]
1. Have the RMSMappingQuality annotation take into account the fact that reduced reads represent multiple reads.
2. The rank sum tests should not be using reduced reads (because they do not represent distinct observations).
3. Fixed a massive bug in the BaseQualityRankSumTest annotation! It was not using the base qualities but rather
the read likelihoods?!
Added a unit test for Rank Sum Tests to prove that the distributions are correctly getting assigned appropriate p-values.
Also, and just as importantly, the test shows that using reduced reads in the rank sum tests skews the results and
makes insignificant distributions look significant (so it can falsely cause the filtering of good sites).
Also included in this commit is a massive refactor of the RankSumTest class as requested by the reviewer.
IsComplex will be used to designate calls as representing complex events which have multiple
correct allele representations. Then call sets can get points for including them but will
not get penalized for missing them (because they may have used a different representation).
Multiple valid representations are currently the biggest bane when trying to characterize FNs.
The confidence will be used to refactor the consensus making algorithm for the truth status
of the NA12878 KB. The previous version allowed for 2 tiers: reviews and everything else.
But that is problematic when some of the input sets are of higher quality than others
because when they disagree the calls become discordant and we lose that information.
The new framework will allow each call to have its own associated confidence. Then when
determining the consensus truth status we probabilistically calculate it from the
various confidences, so that nothing is hard coded in anymore.
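One possible shape of that probabilistic consensus, purely illustrative since the commit says the estimator is only a rough first version: treat each call set's confidence as an independent probability that the site is a true variant and combine them in log-odds space, so strong sets outweigh weak ones without hard-coded tiers.

```java
// Hypothetical sketch of confidence combination: sum per-call-set log-odds,
// then convert back to a probability for the consensus truth status.
public class ConsensusConfidence {
    public static double combine(double[] confidences) {
        double logOdds = 0.0;
        for (double p : confidences) {
            logOdds += Math.log(p / (1.0 - p));
        }
        double odds = Math.exp(logOdds);
        return odds / (1.0 + odds);
    }

    public static void main(String[] args) {
        // two confident call sets outweigh one doubter, rather than the
        // disagreement simply making the call discordant
        System.out.println(combine(new double[]{0.9, 0.9, 0.4}));
    }
}
```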
Note that I added some unit tests to ensure the outcome that I expect for various scenarios
and then implemented a very rough version of the estimator that successfully produced those
outcomes.
HOWEVER, THIS IS NOT COMPLETE AND NEITHER FUNCTIONALITY IS HOOKED UP AT ALL.
Rather, this is an interim commit. The only goal here is to get these fields added to the MVC
for the upcoming release so that Jacob (who prefers to work with stable) can add the
necessary functionality to IGV for us.
-- The previous version created FILTERs for each possible alt allele when a site was set to monomorphic by BEAGLE. So if you had an A/C SNP in the original file and BEAGLE thought it was AC=0, then you'd get a record with BGL_RM_WAS_A in the FILTER field. This obviously caused problems for indels, and so the tool was blowing up in this case. Now the tool sets the FILTER field to BGL_SET_TO_MONOMORPHIC and sets the INFO field annotation OriginalAltAllele to A instead. This works in general with any type of allele.
-- Here's an example output line from the previous and current versions:
old: 20 64150 rs7274499 C . 3041.68 BGL_RM_WAS_A AN=566;DB;DP=1069;Dels=0.00;HRun=0;HaplotypeScore=238.33;LOD=3.5783;MQ=83.74;MQ0=0;NumGenotypesChanged=1;OQ=1949.35;QD=10.95;SB=-6918.88
new: 20 64062 . G . 100.39 BGL_SET_TO_MONOMORPHIC AN=566;DP=1108;Dels=0.00;HRun=2;HaplotypeScore=221.59;LOD=-0.5051;MQ=85.69;MQ0=0;NumGenotypesChanged=1;OQ=189.66;OriginalAltAllele=A;QD=15.81;SB=-6087.15
-- Updated MD5s to reflect these changes
-- [delivers #50847721]
-WalkerTest now deletes *.idx files on exit
-ArtificialBAMBuilder now deletes *.bai files on exit
-VariantsToBinaryPed walker now deletes its temp files on exit