To reduce merge conflicts, this commit modifies the contents of files, while the file renames are in the previous commit.
See the previous commit message for the list of changes.
To reduce merge conflicts, this commit only renames files, while the file modifications are in the next commit.
Some updates/fixes here are actually included in the next commit.
= Maven updates
Moved artifacts to new package names:
* private/queue-private -> private/gatk-queue-private
* private/gatk-private -> private/gatk-tools-private
* public/gatk-package -> protected/gatk-package-distribution
* public/queue-package -> protected/gatk-queue-package-distribution
* protected/gatk-protected -> protected/gatk-tools-protected
* public/queue-framework -> public/gatk-queue
* public/gatk-framework -> public/gatk-tools-public
New poms for new artifacts and packages:
* private/gatk-package-internal
* private/gatk-queue-package-internal
* private/gatk-queue-extensions-internal
* protected/gatk-queue-extensions-distribution
* public/gatk-engine
Updated references to StingText.properties to GATKText.properties.
Updated ant-bridge.sh to use gatk.* properties instead of sting.*.
= Engine updates
Renamed files containing engine parts from o.b.gatk.tools to o.b.gatk.engine.
Changed package references from tools to engine for CommandLineGATK, GenomeAnalysisEngine, ReadMetrics, ReadProperties, and WalkerManager.
Changed package reference tools.phonehome to engine.phonehome.
Renamed classes *Sting* to *GATK*, such as ReviewedGATKException.
= Test updates
Moved gatk example resources.
Moved test engine files from tools to engine packages.
Moved resources for phonehome to proper package.
Moved test classes under o.b.gatk into packages:
* o.b.g.utils.{BaseTest,ExampleToCopyUnitTest,GATKTextReporter,MD5DB,MD5Mismatch,TestNGTestTransformer}
* o.b.g.engine.walkers.WalkerTest
Updated package names in DependencyAnalyzerOutputLoaderUnitTest's data.
= Queue updates
Moved queue scripts to a location where generated extensions can be used.
Renamed *.q to *.scala, updating licenses previously missed by git hooks.
Moved queue extensions to new artifact gatk-queue-extensions.
Fixed import statements in FullProcessingPipeline.scala that frequently caused merge conflicts.
= BWA
Added README on how to obtain and include bwa as a library.
Updated libbwa build.
Fixed package names under the bwa/java implementation.
Updated the contents of the BWCAligner native implementation.
= Other fixes
Don't duplicate the resource bundle entries by both unpacking *and* appending.
(partial fix) Staged the engine and utils poms to build GATKText.properties once the Utils random-generator dependency on the GATK engine is fixed.
Re-enabled custom testng listeners/reporters and moved testng dependencies to the gatk-root.
Updated comments referencing Sting with GATK.
Moved a couple untangled classes from gatk-tools-public to gatk-utils and gatk-engine.
GATK classes accessing package protected htsjdk classes changed to new package names.
POMs updated to support merging of sam/tribble/variant -> htsjdk and changes to picard artifact.
RodSystemValidation outputs changed due to variant codec package changes, requiring test md5 updates.
Second of four commits for picard/htsjdk package rename.
GATK classes accessing package protected htsjdk classes will need new package names.
POMs will merge sam/tribble/variant into htsjdk.
Move only, contents updated in next commit.
First of four commits for picard/htsjdk package rename.
The JNI treats shared memory as critical memory and doesn't allow any
parallel reads or writes to it until the native code finishes. This is
not a problem *per se*; it is the right thing to do. But we need to
enable **-nct** when running the HaplotypeCaller, and with it have
multiple native PairHMMs running for each map call.
Move to copy-based memory sharing, where the JNI simply copies the
memory over to C++ and then holds no blocked critical memory while running,
allowing -nct to work.
This version is slightly (almost unnoticeably) slower with -nct 1, but
scales better with -nct 2-4 (we haven't tested anything beyond that
because we know the GATK falls apart with higher levels of parallelism).
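The copy-based design above can be sketched with a thread-based stand-in (an illustrative Python analogy, not the actual JNI code): the "critical" variant holds the shared buffer for the whole computation, while the copy variant holds it only long enough to duplicate the data, so other -nct workers can use the buffer while the computation runs.

```python
import threading

lock = threading.Lock()
shared = [1.0, 2.0, 3.0, 4.0]

def compute_critical():
    # old style: the buffer is blocked for the entire computation
    with lock:
        return sum(shared)

def compute_with_copy():
    # new style: the buffer is blocked only long enough to copy it
    with lock:
        local = list(shared)
    return sum(local)  # the computation runs with no lock held
```

Both variants compute the same result; the difference is how long the shared buffer stays locked.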
* Make VECTOR_LOGLESS_CACHING the default implementation for PairHMM.
* Changed version number in pom.xml under public/VectorPairHMM
* VectorPairHMM can now be compiled using gcc 4.8.x
* Modified define-* to get rid of gcc warnings for extra tokens after #undefs
* Added a Linux kernel version check for AVX - gcc's __builtin_cpu_supports function does not check whether the kernel supports AVX or not.
* Updated PairHMM profiling code to update and print numbers only in single-thread mode
* Edited README.md, pom.xml and Makefile for users to pass path to gcc 4.8.x if necessary
* Moved all cpuid inline assembly to a single function
* Changed info message to clog from cinfo
* Modified version in pom.xml in VectorPairHMM from 3.1 to 3.2
* Deleted some unnecessary code
* Modified C++ sandbox to print per interval timing
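One way to approximate a kernel-aware AVX check (the gap in `__builtin_cpu_supports` noted above) is to read the feature flags the kernel itself exposes: on Linux, /proc/cpuinfo omits "avx" when the kernel has not enabled AVX state saving. A minimal sketch, assuming Linux (the function name is illustrative, not GATK code):

```python
def kernel_supports_avx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the kernel-exposed CPU flags include 'avx'.

    Linux filters /proc/cpuinfo flags through its own feature table, so this
    also reflects kernel-side AVX support, unlike raw CPUID checks.
    """
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False
```

A native implementation would instead check CPUID's OSXSAVE bit plus XGETBV, but the idea is the same: ask what the OS enables, not just what the CPU has.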
Story:
https://www.pivotaltracker.com/story/show/68220438
Changes:
- PL-less input genotypes are now treated as uncalled, and therefore as non-variant sites, when combining GVCFs.
- HC GVCF/BP_RESOLUTION mode now outputs non-variant sites at positions covered by deletions.
- Fixed existing tests
Test:
- HaplotypeCallerGVCFIntegrationTest
- ReferenceConfidenceModelUnitTest
- CombineGVCFsIntegrationTest
* This addresses PT Bug 69741902
* Added a required IMR argument to FilePointer, BAMScheduler, IntervalSharder, and SAMDataSource
* This rule is used by FilePointer.combine and FilePointer.union
* Added unit and integration tests
story:
https://www.pivotaltracker.com/story/show/69648104
description:
This read transformer refactors cigar strings that contain N-D-N elements into a single N element (whose length is the total of the three refactored elements).
This is intended primarily for users of RNA-Seq data handling programs such as TopHat2.
Currently we consider the internal N-D-N motif illegal and error out when we encounter it. By refactoring the cigar strings of
those specific reads, users of TopHat and other tools can circumvent this problem without affecting the rest of their dataset.
edit: addressed review comments - changed the tool's name and made it a read transformer instead of a read filter
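The N-D-N collapse described above can be sketched as follows (a minimal illustration; the function name and the handling of repeated motifs are assumptions, not GATK's actual implementation):

```python
import re

def collapse_ndn(cigar):
    """Collapse each internal N-D-N cigar motif into one N element whose
    length is the sum of the three merged elements (illustrative sketch)."""
    elems = [[int(n), op] for n, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar)]
    out = []
    for length, op in elems:
        # when the incoming N completes an N-D-N run, merge all three
        if op == "N" and len(out) >= 2 and out[-1][1] == "D" and out[-2][1] == "N":
            d_len = out.pop()[0]   # the D element
            n_len = out.pop()[0]   # the preceding N element
            out.append([n_len + d_len + length, "N"])
        else:
            out.append([length, op])
    return "".join(f"{l}{o}" for l, o in out)
```

Because merging happens only when the second N arrives, a lone N-D pair is left untouched, and chained motifs like N-D-N-D-N collapse into a single N.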
Description:
Transforms a delegation dependency from the HC to the UG genotyping engine into reuse by inheritance, where the HC and UG engines inherit from a common superclass, GenotypingEngine,
that implements the common parts. As a side effect, some of the code is now clearer and redundant code has been removed.
The changes have a few consequences for the end user. HC now has a few more user arguments, those that control the functionality that HC was borrowing directly from UGE.
Added a -ploidy argument, although it is constrained to be 2 for now.
Added -out_mode EMIT_ALL_SITES|EMIT_VARIANTS_ONLY ...
Added -allSitePLs flag.
Stories:
https://www.pivotaltracker.com/story/show/68017394
Changes:
- Moved (HC's) GenotyperEngine to HaplotypeCallerGenotyperEngine (HCGE). Then created an engine superclass, GenotypingEngine (GE), that contains the parts common to HCGE and the UG counterpart UnifiedGenotypingEngine (UGE). Simplified the code and applied the template pattern to accommodate small differences in behaviour between the two caller
engines. (There is still room for improvement, though.)
- Moved inner classes and enums to top-level components for various reasons, including giving them shorter and simpler names to refer to.
- Created a HomoSapiens class for Human-specific constants; even if they are good defaults for most users, we need to clearly identify the human assumption across the code if we want to make
GATK work with any species in general; i.e. any reference to HomoSapiens, except as a default value for a user argument, should smell.
- Fixed a bug deep in the genotyping calculation: we were taking fixed values for SNP and indel heterozygosity as the default for Human, ignoring user arguments.
- Renamed GenotypeLikelihoodsCalculationModel.Model to Gen.*Like.*Calc.*Model.Name; not a definitive solution though, as the names are often used in conditionals that perhaps should be member methods of the
GenLikeCalc classes.
- Renamed LikelihoodCalculationEngine to ReadLikelihoodCalculationEngine to distinguish them clearly from Genotype likelihood calculation engines.
- Changed copy by explicit argument listing to a clone/reflection solution for casting between genotyper argument collection classes.
- Created GenotypeGivenAllelesUtils to collect methods needed nearly exclusively by the GGA mode.
Tests :
- StandardCallerArgumentCollectionUnitTest (checks copy by cloning/reflection).
- All existing integration and unit tests for modified classes.
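The template-pattern split described above can be sketched as follows (class and method names loosely follow the commit message; the likelihood math is purely illustrative, not the actual genotyping code):

```python
class GenotypingEngine:
    """Common superclass: implements the shared pipeline (template method)."""

    def calculate_genotypes(self, data):
        likelihoods = self.compute_likelihoods(data)  # varies per caller
        return self.assign_genotype(likelihoods)      # shared logic

    def compute_likelihoods(self, data):
        raise NotImplementedError  # the step each engine overrides

    def assign_genotype(self, likelihoods):
        return max(likelihoods, key=likelihoods.get)

class HaplotypeCallerGenotyperEngine(GenotypingEngine):
    def compute_likelihoods(self, data):
        # toy numbers standing in for HC's read-likelihood model
        return {"0/1": data.get("het", 0.0), "1/1": data.get("hom", 0.0)}

class UnifiedGenotypingEngine(GenotypingEngine):
    def compute_likelihoods(self, data):
        # a deliberately different toy model for the UG side
        return {"0/1": data.get("het", 0.0) * 0.5, "1/1": data.get("hom", 0.0)}
```

The point of the pattern is that `calculate_genotypes` is written once, and the two engines differ only in the overridden step.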
-Only works with single-sample vcfs
-As with bams, the user must provide a file mapping the absolute path to
each vcf whose samples are to be renamed to the new sample name for that
vcf. The argument is the same as for bams: --sample_rename_mapping_file,
and the mapping file may contain a mix of bam and vcf files should the
user wish.
-It's an error to attempt to remap the sample names of a multi-sample
or sites-only vcf
-Implemented at the codec level at the instant the vcf header is first
read in to minimize the chances of downstream code examining vcf
headers/records before renaming occurs.
-Integration tests are in sting, unit tests are in picard
-Rev picard et al. to 1.111.1902
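A minimal sketch of parsing such a mapping file (the whitespace-separated "path new-sample-name" line format is an assumption based on the description above, and the function name is illustrative):

```python
def parse_sample_rename_mapping(lines):
    """Map each absolute bam/vcf path to the new sample name for that file."""
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        path, new_name = line.split(None, 1)  # path, then the new sample name
        mapping[path] = new_name.strip()
    return mapping
```

Since the mapping keys are file paths, a mix of bam and vcf entries in one file falls out naturally.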
Following reviewer comments, the command-line interface has been simplified.
All extra-strict validations are performed by default (as before), and the
user has to indicate which ones he/she does not want to use with --validationTypeToExclude.
Before, he/she was able to indicate the only ones to apply with --validationType, but that has been scrapped.
Stories:
- https://www.pivotaltracker.com/story/show/68725164
Changes:
- Removed validateType argument.
- Improved documentation.
- Added some warning log messages on suspicious argument combinations.
Tests:
- ValidateVariantsIntegrationTest#*
-- This results in much more consistent distribution of LOD scores for SNPs and Indels.
-- Removing genotype summary stats since they are now produced by default.
-- Added functionality to specify certain subsets of the training data to be used in Tranche file generation, -good:tranche=true set.vcf
More concretely, Picard's strict VCF validation does not like alternative alleles that do not participate in any genotype call across samples.
This is an issue with GVCF in the single-sample pipeline, where this is certainly expected with <NON_REF> and other relatively unlikely alleles.
To solve this issue we allow the user to exclude some of the strict validations using a new argument, --validationTypeToExclude. To avoid the validation
issue with GVCF, the user needs to add the following to the command line: '--validationTypeToExclude ALLELES'
Story:
https://www.pivotaltracker.com/story/show/68725164
Changes:
- Added the validationTypeToExclude argument to the ValidateVariants walker.
- Implemented the selective exclusion of validation types.
- Added new info and improved existing documentation of the ValidateVariants walker.
Tests:
- ValidateVariantsIntegrationTest#testUnusedAlleleError
- ValidateVariantsIntegrationTest#testUnusedAlleleFix
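The selective exclusion can be sketched as simple set subtraction (the validation type names here are illustrative placeholders, except ALLELES, which the message above mentions):

```python
# hypothetical set of strict validation types; ALLELES is the one the
# GVCF single-sample pipeline needs to exclude
ALL_VALIDATION_TYPES = {"REF", "IDS", "ALLELES", "CHR_COUNTS"}

def types_to_validate(excluded):
    """Run every strict validation except those listed via
    --validationTypeToExclude."""
    return ALL_VALIDATION_TYPES - set(excluded)
```

With an empty exclusion list the behaviour is unchanged: every validation still runs by default.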
In some cases, the program records were being removed from the BAM headers by the GATK engine
before we applied the check for reduced reads (so we did not fail appropriately). Pushed up the
check to happen before the PG tags are modified and added a unit test to ensure it stays that way.
It turns out that some UG tests still used reduced bams, so I switched to using different ones.
Based on reviewer feedback, made it more generic so that it's easy to add new unsupported tools.
Previously it required you to create a single sample VCF and then to pass that in to the tool, but
Geraldine convinced me that this was a pain for users (because they usually have multi-sample VCFs).
Instead now you can pass in a multi-sample VCF and specify which sample's genotypes should be used
for the IUPAC encoding. Therefore the argument changed from '--useIUPAC' to '--use_IUPAC_sample NA12878'.
Stories:
https://www.pivotaltracker.com/story/show/66263868
Bug:
The problem was due to the way we were calculating the fixed penalty of a large deletion or insertion. In this case we calculate the alignment likelihood of the deleted portion
of the read or haplotype as the penalty of that deletion/insertion, without going through the full pair-HMM process. For large events this resulted in a 0 in
linear-scale computations that is transformed into an infinity in log scale.
Changes:
- Changed to using the log10 scale to calculate those penalties.
- Minor addition to .gitignore to hide ./public/external-example/target, which is generated by the build process.
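The underflow described above is easy to reproduce (the numbers are hypothetical, not GATK's actual penalties): multiplying a per-base penalty over a long event hits 0.0 in linear scale, while accumulating directly in log10 stays finite.

```python
import math

p = 1e-4       # hypothetical per-base gap penalty in linear scale
length = 100   # a large deletion/insertion

linear = p ** length                   # 1e-400 is below double range: underflows to 0.0
log10_scale = length * math.log10(p)   # stays finite at -400 in log10 scale

assert linear == 0.0
assert abs(log10_scale + 400.0) < 1e-6
```

Taking the log of the underflowed linear value would give -inf, which is exactly the infinity the fix avoids.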