This change doesn't affect the performance of the Indel Realigner at all (as per tests).
This is just a request from the Picard side (where further testing is happening).
Make MQ threshold a parameter (compare to M1 by setting to zero)
Add logic for multiple alternate alleles in tumor
Exclude MQ0 normal reads from normal LOD calculation
Fix path errors in Dream_Evaluations.md
Move M2 eval scripts out of walkers package so they run
The previous version of OverclippedReadFilter would only filter a read if both ends of the read had a soft-clipped block.
This adds a boolean option to relax that requirement so that only one soft-clipped block is required, while also filtering on read length minus soft-clipped length.
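A minimal sketch of the relaxed check, with hypothetical names for the option and the length threshold (not the actual OverclippedReadFilter code):

```java
import htsjdk.samtools.Cigar;
import htsjdk.samtools.CigarElement;
import htsjdk.samtools.CigarOperator;

// Illustrative only: requireBothEndsSoftClipped and minAlignedLength are
// hypothetical stand-ins for the filter's option and threshold.
final class OverclippedFilterSketch {
    static boolean filterOut(final Cigar cigar, final int minAlignedLength,
                             final boolean requireBothEndsSoftClipped) {
        int softClipBlocks = 0;
        int alignedLength = 0;  // read length minus soft-clipped bases
        for (final CigarElement element : cigar.getCigarElements()) {
            if (element.getOperator() == CigarOperator.S) {
                softClipBlocks++;
            } else if (element.getOperator().consumesReadBases()) {
                alignedLength += element.getLength();
            }
        }
        final int requiredBlocks = requireBothEndsSoftClipped ? 2 : 1;
        // Filter reads that have enough soft-clip blocks and too little
        // remaining aligned sequence.
        return softClipBlocks >= requiredBlocks && alignedLength < minAlignedLength;
    }
}
```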
CRAM now requires .bai index, just like BAM.
Test updates:
- Updated existing MD5s, as TLEN has changed.
- Tests multiple contigs.
- Tests several intervals per contig.
- Tests when `.cram.bai` is missing, even when `.cram.crai` is present.
Updated gatk docs for CRAM support, including:
- Arguments that work for both BAM and CRAM listed as such.
- Arguments that don't work for CRAM either explicitly say "BAM" or "doesn't work for CRAM".
- Instructions on how to recreate a `.cram.bai` using cramtools.
Cleaned up IntelliJ IDEA warnings regarding `Arrays.asList()` -> `Collections.singletonList()`.
Changed a division by -10.0 to a multiplication by -0.1 in QualUtils (multiplication is typically faster than division).
Addresses performance issue #1081.
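A minimal sketch of the idea, with a hypothetical method name (the actual QualUtils code may differ):

```java
// qual / -10.0 and qual * -0.1 give an equivalent exponent (up to floating-point
// rounding), but the multiplication is typically cheaper.
final class QualConversionSketch {
    static double qualToErrorProb(final double qual) {
        return Math.pow(10.0, qual * -0.1);  // equivalent to Math.pow(10.0, qual / -10.0)
    }
}
```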
When using CatVariants, VCF files were being sorted solely on the base
pair position of the first record, ignoring the chromosome. This can
become problematic when merging files from different chromosomes,
especially if you have multiple VCFs per chromosome.
As an example, assume the following 3 lines are all in separate files:
1 10
1 100
2 20
The merged VCF from CatVariants (without -assumeSorted) would read:
1 10
2 20
1 100
This has the potential to break tools that expect chromosomes to be
contiguous within a VCF file.
This commit changes the comparator from one of Pair<Integer, File> to
one of Pair<VariantContext, File>. We construct a
VariantContextComparator from the provided reference, which will sort
the first record by chromosome and position properly. Additionally, if
-assumeSorted is given, we simply use a null VariantContext as the first
record for every file, so all files compare as equal (since all are null).
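A rough sketch of this ordering, assuming a simple holder for each file's first record (the real CatVariants code differs in detail):

```java
import htsjdk.variant.variantcontext.VariantContext;
import htsjdk.variant.variantcontext.VariantContextComparator;

import java.io.File;
import java.util.Comparator;
import java.util.List;

// Rough sketch only: FileAndFirstRecord is a hypothetical holder, and
// contigsInReferenceOrder would come from the reference sequence dictionary.
final class CatVariantsOrderingSketch {
    static final class FileAndFirstRecord {
        final VariantContext firstRecord;  // null when -assumeSorted is given
        final File file;
        FileAndFirstRecord(final VariantContext firstRecord, final File file) {
            this.firstRecord = firstRecord;
            this.file = file;
        }
    }

    static Comparator<FileAndFirstRecord> byFirstRecord(final List<String> contigsInReferenceOrder) {
        final VariantContextComparator vcComparator = new VariantContextComparator(contigsInReferenceOrder);
        // Sort each input file by the chromosome and position of its first record.
        // With -assumeSorted every first record is null, all entries compare
        // equal, and the original input order is preserved.
        return (a, b) -> {
            if (a.firstRecord == null || b.firstRecord == null) {
                return 0;
            }
            return vcComparator.compare(a.firstRecord, b.firstRecord);
        };
    }
}
```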
Add oxoG read count annotation and add as default annotation
Add ##SAMPLE VCF header line in accordance with the TCGA VCF spec, specifying the "File" field in the sample header with the BAM file name and "SampleName" with the BAM sample name (the sample file path is not printed when --no_cmdline_in_header is specified, to help with test consistency)
Turn on active region assembly-based physical phasing for M2
Clean up M2-related annotations so UG doesn't crash if M2 annotations are called
added "str_contraction" artifact filter (improves specificity, especially in exomes)
refactored out VCF constants and added descriptions
added "artifact detection mode" for PON creation
added "str_contraction" artifact filter (improves specificity, especially in exomes)
added new DREAM evaluation markdown
added results for SMC 4
fixed up documentation, moved location to /dsde/working/mutect/dream_smc, and checked in scala script
added "artifact detection mode" for PON creation
added "str_contraction" artifact filter (improves specificity, especially in exomes)
fixed bug which would overwrite germline_risk filter errors
updated "how to" documents and records
fixed license text
thinned down the FP regression test from 700 sites to 100. We have better ways (DREAM, NN) to check the accuracy of the method, and 100 is good enough to catch regressions
why oh why do the MD5-based unit tests produce different results on different machine architectures? I hate that :/
Thanks to GG, LDG and DR -- test should now produce the same results regardless of machine architecture
disabled downsampling... hopefully in the final attempt to make this work cross architecture!
enforced LOGLESS_CACHING... hopefully in the final final attempt to make this work cross architecture!
refactored out VCF constants and added descriptions
-We now pull htsjdk and picard from maven central.
-Updated the GATK codebase as necessary to adapt to changes in the Feature
interface.
-Since VCFHeader now requires that all header lines have unique keys, uniquified
the keys of GVCFBlock header lines by including the min/max GQ in the key.
Updated MD5s accordingly.
-Other MD5s changed as a result of an htsjdk fix to eliminate "-0" in VCF output.
Previously, if a SNP occurred in sample A at a position that was in the middle of a deletion for sample B,
sample B would be genotyped as homozygous reference there (but it's NOT reference - there's a deletion).
Now, sample B is genotyped as having a symbolic DEL allele.
Minor cleanup added. Note that I also removed Laura's previous fix for this problem.
Existing integration tests change because I've added a new header line to the VCF being output.
I also added several tests for the new functionality showing:
1. genotyping from separate and already combined gvcfs give the same output
2. genotyping over multiple spanning deletions works
3. combining works too
Existing unit tests also cover this case.
Exclude MQ0BySample
Move SD and TRA to new StandardUGAnnotation interface
There is now an annotation interface (StandardUGAnnotation) holding annotations that are standard in UG but shouldn't be used as-is with HC. This allows us to not have to exclude these annotations explicitly in HC, while still being able to use them for development purposes.
Fairly minor if plentiful fixes to various gatkdocs. Merging this without formal review since all tests pass, the gatkdocs build, and no one really wants to review corrections to grammar, typos and layout for 120+ documents. Review will be done by users in production ;-)
When -qsub-broad is specified instead of -qsub, use the "h_vmem" parameter
instead of "h_rss" to specify memory limit requests.
Also cause the GridEngine native arguments to be output by default to the logger,
instead of only when in debug mode.
The GATK command line header keys were being repeated in the VCF and
subsequently lost to a single key value by HTSJDK. This resolves
the issue by appending the name of the walker after the text
"GATKCommandLine" and a number after that if the same walker was
used more than once in the form: GATKCommandLine.(walker name) for
the first occurrence of the walker, and GATKCommandLine.(walker name).#
where # is the number of the occurrence of the walker (e.g.
GATKCommandLine.SomeWalker.2 for the second occurrence of SomeWalker).
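A minimal sketch of the key scheme described above, using a hypothetical helper (the engine's real bookkeeping differs):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating the key scheme above.
final class CommandLineHeaderKeySketch {
    private final Map<String, Integer> walkerRunCounts = new HashMap<>();

    // "GATKCommandLine.SomeWalker" for the first run of SomeWalker,
    // "GATKCommandLine.SomeWalker.2" for the second run, and so on.
    String nextKey(final String walkerName) {
        final int count = walkerRunCounts.merge(walkerName, 1, Integer::sum);
        final String base = "GATKCommandLine." + walkerName;
        return count == 1 ? base : base + "." + count;
    }
}
```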
Integration test added to EngineFeaturesIntegrationTest to verify
two runs of same walker follow expected form.
Resolves #909
See also: HTSJDK #43
Build a ReferenceContext in ActiveRegionWalkers to pass in to annotation engine so we can call the TandemRepeatAnnotator from M2
Make TandemRepeatAnnotator default annotation for M2.
Setup (but don't use yet) HC-style contamination downsampling.
New HC integration test with TandemRepeatAnnotator
- ASEReadCounter (public tool) replaces Tuuli's script to produce the input to Manny's tool.
It counts the number of reads that support the ref allele and the alt allele, filtering low-quality reads and bases and keeping only properly paired reads (a rough sketch of the counting logic follows this entry).
- ASECaller (private tool) takes both RNA and DNA, and produces contingency tables ** still under development **
Minor changes in other tools:
- update RNA HC variant calling scala script
- expose FS method pValueForContingencyTable to be able to call it from ASECaller
In ASEReadCounter:
- allow different options to deal with overlapping reads from the same fragment
- add option to ignore or include indels in the pileups
- add option to disable DuplicateRead filtering
add ASEReadCounterIntegrationTest.java and files for the test
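A hypothetical sketch of the per-site counting described above; the thresholds, method name, and argument layout are illustrative, not the actual ASEReadCounter code:

```java
import htsjdk.samtools.SAMRecord;
import java.util.List;

// Illustrative only.
final class AseCountSketch {
    // offsets.get(i) is the offset of the pileup position within reads.get(i)
    static int[] countRefAlt(final List<SAMRecord> reads, final List<Integer> offsets,
                             final byte refBase, final byte altBase,
                             final int minMappingQuality, final int minBaseQuality) {
        int refCount = 0;
        int altCount = 0;
        for (int i = 0; i < reads.size(); i++) {
            final SAMRecord read = reads.get(i);
            // keep only properly paired reads with adequate mapping quality
            if (!read.getReadPairedFlag() || !read.getProperPairFlag()
                    || read.getMappingQuality() < minMappingQuality) {
                continue;
            }
            final int offset = offsets.get(i);
            if (read.getBaseQualities()[offset] < minBaseQuality) {
                continue;  // drop low-quality bases
            }
            final byte base = read.getReadBases()[offset];
            if (base == refBase) {
                refCount++;
            } else if (base == altBase) {
                altCount++;
            }
        }
        return new int[] { refCount, altCount };
    }
}
```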
Now, instead of stripping out the GQs for mono sites, we transfer them to the RGQ.
This is extremely useful for people who want to know how confident the hom ref genotype calls are.
Perhaps this is just what CRSP needs for pertinent negatives.
Note that I also changed the tool to no longer use the GenotypeSummaries annotation by default since
it was adding some seemingly unnecessary annotations (like mean GQ now that we keep the GQ around and
number of no-calls). Let me know if this was a mistake (although Laura gave me a thumbs up).
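A minimal sketch of the GQ-to-RGQ transfer described above, assuming "RGQ" as the FORMAT key and standard htsjdk builder calls (the tool's actual code may differ):

```java
import htsjdk.variant.variantcontext.Genotype;
import htsjdk.variant.variantcontext.GenotypeBuilder;

// Illustrative only.
final class RgqTransferSketch {
    static Genotype transferGqToRgq(final Genotype genotype) {
        if (!genotype.hasGQ()) {
            return genotype;
        }
        return new GenotypeBuilder(genotype)
                .attribute("RGQ", genotype.getGQ())  // keep the hom-ref confidence
                .noGQ()                              // strip GQ as before
                .make();
    }
}
```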
* The value of this element (default true) determines whether Queue will explicitly run this walker over unmapped reads
* This patch fixes a runtime error when FindCoveredIntervals was used with Queue
* PT 81777160
* TextCigarCodec.decode() is now static, and the getSingleton() method is gone
* MergingSamRecordIterator now wants a Collection<SamReader> rather than Collection<SAMFileReader> in the constructor
* SeekableBufferedStream now correctly reads the requested number of bytes, removed workaround in GATKBAMIndex
* Removed unused annotations (CCC and HWP)
* Renamed one of the two GC annotations to "IGC" (for Interval GC)
* Revved picard & htsjdk (GATK constants are now removed from htsjdk)
* PT 82046038
-- Active Region Traversal was using per-sample limits on the number of reads that were too low, especially now that we are running one sample at a time. This caused high-confidence variants to be dropped in high-coverage data.
-- HaplotypeCallerGVCFIntegrationTest PL/annotation changes due to using more reads in those tests
-- Removed a CountReadsInActiveRegionsIntegrationTest test for excessive coverage because the read coverage no longer goes over the limits in ART
Add multi-allele test for info field annotations
Fix to process all types of INFO annotations
roll back to previous version, removes INFO and FORMAT
Correct @return for VariantAnnotatorEngine.getNonReferenceAlleles()
Enhance comments and clean up multi-allelic logic, handle header info number = R
only parse counts of A & R
Add INFO for AC
update MD5
Performance enhancement: only parse multi-allelic annotations with a count of A or R
Make argument final in getNonReferenceAlleles()
Code cleanup, add exceptions for bad expression/allele size mismatch and missing header info for an expression
Change exception to warning for expression value/number of alleles check
remove advertised exceptions
* PT 84242218
* Note that FORMAT fields behave the same as INFO fields - if the annotation has a count of A (one entry per Alt Allele), it is split across the multiple output lines. Otherwise, the entire list is output with each field
Add more logging to annotators, change loggers from info to warn
Add comments to testStrandBiasBySample()
Clarify comments in testStrandBiasBySample
remove logic for not processing an indel if strand bias (SB) was not computed
remove per variant warnings in annotate()
Log warnings if using the wrong annotator or missing a pedigree file
Log test failures once in annotate(), because HaplotypeCaller does not call initialize(). Avoid using exceptions
Fix so we only log once in annotate(); Hardy-Weinberg does not require pedigree files; fix test MD5s so they pass
Check if founderIds == null
Update MD5s from HaplotypeCaller integrations tests and clean up code
Change logic so SnpEff does not throw exceptions, change engine to utils in imports
Update test MD5s, return immediately if cannot annotate in SnpEff.initialization()
Post peer review, add more logging warnings
Update MD5 for testHaplotypeCallerMultiSampleComplex1, return null if PossibleDeNovo.annotate() is not called by VariantAnnotator
Story:
-----
https://www.pivotaltracker.com/story/show/80684230
Changes:
-------
- Corrected the bug: AlignmentUtils#createReadAlignedToRef was
not realigning against the reference but the best haplotype for
the read.
Test:
----
- Added integration test in HaplotypeCallerIntegrationTest to check
that the bug has been fixed.
- Fixed md5s modified by this change; these are caused by small changes in
the state of the random-number generator and in read-versus-variant-site
overlap.
Read the multiple GATKText files as a single stream, especially with the new top-level target executable jar files pointing to a lib folder.
Don't dirty the build with a new GATKText.properties if input files are unmodified.
Stop warning on undocumented abstract classes.
Fixed ClassNotFoundException/NoClassDefFoundError by fixing ResourceBundleExtractorDoclet artifact.
Excluding Exceptions from documentation.
Removed custom log4j dependency from ResourceBundleExtractorDoclet.
Stop generating the dependency reduced pom during shade.
Stop regenerating gsalib when the files are already up to date.
Disabled mvn site generation from external-example.
Moved top level target symlinks to package jar files to under target/package.
Executable jar files are placed under target/executable with the new target[/lib] directories.
Under top level target, symlinks to *either* the package *or* the executable jars replace what was a symlink to the package jar path.
Allow disabling of the shade package.
ant-bridge.sh now only builds executable jars by default and doesn't run packaging, just like the old ant build.xml.
Added a new package_path.sh utility script for other scripts to use instead of anything in the target folder.
Fixed off-by-one error in the size calculation in IntervalUtils.scatterContigIntervals().
In test for fewer files than intervals, adjusted expected intervals.
In test for more files than intervals, adjusted expected exception.
remove final keyword before refMap and altMap, constructHaplotype() changes their values
return ArtificialHaplotype from constructHaplotype instead of passing it as an argument
Add logic so arraycopy does not throw an IndexOutOfBoundsException, add test for a long insert
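A hypothetical illustration of the kind of guard that keeps System.arraycopy inside the destination array (e.g. when a long insertion makes the source longer than the remaining haplotype bases); not the actual constructHaplotype() code:

```java
// Illustrative only: the copy length is clamped so a long insertion cannot
// run past the end of the destination array.
final class BoundedCopySketch {
    static void copyBounded(final byte[] source, final byte[] destination, final int destPos) {
        final int length = Math.min(source.length, destination.length - destPos);
        if (length > 0) {
            System.arraycopy(source, 0, destination, destPos, length);
        }
    }
}
```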
remove TODO comment after activeProbThreshold
recover static ACTIVE_PROB_THRESHOLD for unit tests
Add min/max values for active_probability_threshold parameter
Move activeProbThreshold parameter to GATKArgumentCollection
define ACTIVE_PROB_THRESHOLD in unit tests
add construction of argCollection in ctor
Move arguments from GATKArgumentCollection to ActiveRegionWalker
Throw exception if threshold < 0 or > 1 in ActivityProfile ctor
max propagation distance parameter to ActiveRegionWalker for ActivityProfile
Use polymorphic getMaxProbPropagationDistance() so BandPassActivityProfile computes the correct region size cutoff
Get the maxProbPropagationDistance from the super class's method, instead of directly, this is safer
Removed extraneous command line imports and make maxProbPropagationDistance a hidden argument
remove limit check for activeProbThreshold, not necessary because the check is made when input as a command line arg
Remove extra 'region' in the doxygen param description for maxProbPropagationDistance
* This argument forces GATK to always write every field in the VCF FORMAT column, even if trailing fields are missing and could be trimmed
* Revved htsjdk and picard
* PT 70993484
Changes:
-------
* Updated current unit and integration tests to use the new API components.
* Added unit tests for new classes AFPriorProvider and AFCalculatorProviders.
* Added integration test for mixed ploidy GenotypeGVCFs and CombineGVCFs
Changes:
-------
* GenotypingEngine now uses an AFCalc provider instead of
its own thread-local, one-time-initialized, fixed AF calculator.
* All walkers that use a GenotypingEngine now pass
the appropriate AF calculator provider. For now most
just use a fixed calculator (FixedAFCalculatorProvider),
except GenotypeGVCFs, which can now cope with a
mixture of ploidies by falling back to a general-ploidy
calculator when the preferred implementation cannot
handle a site's analysis.
* Arguments involved are --no_cmdline_in_header, --sites_only, and --bcf for VCF files and --bam_compression, --simplifyBAM, --disable_bam_indexing, and --generate_md5 for BAM files
* PT 52740563
* Removed ReadUtils.createSAMFileWriterWithCompression(), replaced with ReadUtils.createSAMFileWriter(), which applies all appropriate engine-level arguments
* Replaced hard-coded field names in ArgumentDefinitionField (Queue extension generator) with a Reflections-based lookup that will fail noisily during extension generation if there's an error
Explicitly including gatk/queue test-jar artifacts in package test classpaths.
SelectVariantsIntegrationTest#testInvalidJexl now resets the JexlEngine silent flag that VariantFiltration.initialize() toggles.
External example no longer tries to unpack nonexistent gatk artifact jars during package tests.
The same changes fixed the problem for GenotypeGVCFs and CombineGVCFs.
Stories:
- https://www.pivotaltracker.com/story/show/77626044
- https://www.pivotaltracker.com/story/show/77626854
Changes:
- Generalized the code for the merging in GATKVariantContextUtils to cope
with ploidy != 2.
- GenotypeGVCFs now checks that the input's ploidy conforms to the '-ploidy'
argument.
- Moved the Reference Confidence VC merging code out of GATKVariantContextUtils
so that we can keep the new code in protected.
Caveats:
- GenotypeGVCFs can only deal with input files that have the same ploidy at
all positions, which the user MUST indicate in the -ploidy argument
(if different from the default of 2).
- CombineGVCFs won't necessarily complain if it is passed mixed-ploidy
inputs, but you won't be able to genotype the result with GenotypeGVCFs.
Test:
- Removed deprecated unit tests for GATKVariantContextUtils.
- Moved unit-tests regarding GVCF merging from GATKVariantContextUtilsUnitTest
to ReferenceConfidenceVariantContextUtilsUnitTest.
- Added unit test for new code for mapping genotype indices between allele
index encoding in GenotypeLikelihoodCalculator.
- The original GenotypeGVCFs and CombineGVCFs integration tests are unaffected
by the change.
- Added tetraploid run integration tests to check on non-diploid execution
of GenotypeGVCFs and CombineGVCFs.
Changed tests and scripts to use gatkdir full path instead of relative testdata/qscripts symbolic links.
Although the symlinks are no longer created, left the symlink-deletion script execution in place with a comment about future removal.
Re-enabled example UG pipeline queue test.
Replaced all hardcoded strings of {public,private}/testdata with BaseTest variables.
Refactored temp list creation method from ListFileUtilsUnitTest to BaseTest.createTempListFile.
Removed list files with hardcoded paths, now using createTempListFile instead with private test dir variable.
Story:
https://www.pivotaltracker.com/story/show/77250524
Changes:
- Remove the annotating code in the GeneralPloidyExactAFCalc (GPEAFC) class.
- Added asAlleleList to the GenotypeAlleleCounts class and changed GPEAFC to use it instead of implementing its own (nicer and more reusable code).
- Removed the explicit addition of AlleleCountBySample fields to the VCF header by the walker's initialize
- Added utility methods in Utils to efficiently wrap an int[] array into a List<Integer> and a double[] array into a List<Double> (see the sketch after this entry).
Test:
- Added unit-testing for asAlleleList in GenotypeAlleleCountsUnitTest (within testFirst and testNext).
- Added unit-testing for new methods in Utils : asList(int[]) and asList(double[])
- Changed UG General Ploidy test to add explicitly those annotations.
- Non-trivial changes in integration tests involving non-diploid runs (namely haploid and tetraploid), as they no longer show
those annotations, so the MD5s have been changed accordingly.
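A sketch in the spirit of the int[] wrapper mentioned above (the actual Utils implementation may differ): the array is wrapped rather than copied, so the list view is cheap to create.

```java
import java.util.AbstractList;
import java.util.List;

// Sketch only; the view reflects later changes to the underlying array.
final class IntArrayListSketch {
    static List<Integer> asList(final int... values) {
        return new AbstractList<Integer>() {
            @Override public Integer get(final int index) { return values[index]; }
            @Override public int size() { return values.length; }
        };
    }
}
```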
Changes in several walkers to use the new sample and allele closed lists and the new GenotypingEngine constructor signatures
Rebase adoption of new calculation system in walkers
If any pair of variants occurs on all used haplotypes together, then we propagate that information into the gVCF.
Can be enabled with the --tryPhysicalPhasing argument.
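A rough illustration of the co-occurrence test described above, using a hypothetical Haplotype stand-in rather than the actual HaplotypeCaller types:

```java
import java.util.List;
import java.util.Set;

// Illustrative only.
final class PhasingSketch {
    interface Haplotype {
        Set<Integer> variantStarts();
    }

    // True if the two variants appear together on every haplotype that carries
    // either of them, in which case they can be reported as physically phased.
    static boolean alwaysTogether(final List<Haplotype> haplotypes,
                                  final int startA, final int startB) {
        for (final Haplotype haplotype : haplotypes) {
            final boolean hasA = haplotype.variantStarts().contains(startA);
            final boolean hasB = haplotype.variantStarts().contains(startB);
            if (hasA != hasB) {
                return false;
            }
        }
        return true;
    }
}
```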
- Read groups that are excluded by sample_name, platform, or read_group arguments no longer appear in the header
- The performance penalty associated with filtering by read group has been essentially eliminated
- Partial fulfillment of PT 73075482
Stories:
https://www.pivotaltracker.com/story/show/70222086
https://www.pivotaltracker.com/story/show/67961652
Changes:
Made some changes that I missed in relation to making sure that all PairHMM implementations use the same interface; as a consequence we were always running the standard PairHMM.
Fixed some additional bugs detected when running on full WGS single-sample and exome multi-sample data sets.
Updated some integration test md5s.
Fixing GraphBased bugs with new master code
Fixed a difficult-to-spot bug in ReadLikelihoods.changeReads.
Changed PairHMM interface to fix a bug
Fixed missing changes for various PairHMM implementations to get them to use the new structure.
Fixed various bugs only detectable when running with full sample(s).
I believe this fixes the lack of annotations in UG runs
Fixed integration test MD5s
Updating some md5s
Fixed yet another md5 probably left out by mistake
The array structure should be faster to populate and query (not properly benchmarked) and reduce the memory footprint considerably.
Nevertheless, after removing the PairHMM factor (using the likelihoodEngine Random), it only achieves a speed-up of 15% on some example WGS datasets,
i.e. there are other, bigger bottlenecks in the system. Bamboo tests also seem to run significantly faster with this change.
Stories:
https://www.pivotaltracker.com/story/show/70222086
https://www.pivotaltracker.com/story/show/67961652
Changes:
- ReadLikelihoods added to replace Map<String,PerSampleReadLikelihoods> (a simplified sketch of the structure follows this entry)
- Operations that involve changes to full sets of ReadLikelihoods have been moved into that class.
- Simplified the code that handles the downsampling of reads based on contamination a bit
Caveats:
- We still keep Map<String,PerReadAlleleLikelihoodsMap> around to pass to annotators; I didn't feel like changing the interface of so many public classes in this pull request.
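A simplified illustration of an array-backed likelihood container of the kind described above: values are indexed by (sample, allele, read) rather than looked up through nested maps. Names are illustrative, not the actual GATK class.

```java
// Illustrative only.
final class ReadLikelihoodsSketch {
    // valuesBySampleIndex[sample][allele][read] -> log10 likelihood
    private final double[][][] valuesBySampleIndex;

    ReadLikelihoodsSketch(final int sampleCount, final int alleleCount, final int[] readCountPerSample) {
        valuesBySampleIndex = new double[sampleCount][][];
        for (int s = 0; s < sampleCount; s++) {
            valuesBySampleIndex[s] = new double[alleleCount][readCountPerSample[s]];
        }
    }

    double get(final int sample, final int allele, final int read) {
        return valuesBySampleIndex[sample][allele][read];
    }

    void set(final int sample, final int allele, final int read, final double log10Likelihood) {
        valuesBySampleIndex[sample][allele][read] = log10Likelihood;
    }
}
```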