input file. Output is an R-parseable tsv.
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5984 348d0f76-0448-11de-a6fe-93d51630548a
(Oh yes, there was a r5982, in case you were wondering. It was the first
tentative git -> svn commit, and just added an updated tribble jar. It went great except for the fact that svn didn't mark the jar as binary, so a textual diff for 500k of binary data was generated in the notification email, which very probably caused Gsa_svn_list to choke on the notification email rather than deliver it. Now let us never speak of r5982 again...)
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5983 348d0f76-0448-11de-a6fe-93d51630548a
b) Bug fix: multiallelic indel records were not being treated properly by VQSR because vc.isIndel() returns false for them. The correct general treatment for now is to check (vc.isIndel() || vc.isMixed()).
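To illustrate why an isIndel()-only check misses these records, here is a toy sketch of VariantContext-style type classification. The classification rules below are a simplified stand-in for illustration, not the actual htsjdk/GATK code:

```python
# Toy model of VariantContext type classification: a record whose alt
# alleles disagree on type (one SNP-like, one indel-like) is MIXED,
# so isIndel() alone returns false for it.

def allele_type(ref: str, alt: str) -> str:
    """Classify a single REF/ALT pair (simplified: same length => SNP)."""
    return "SNP" if len(ref) == len(alt) else "INDEL"

def vc_type(ref: str, alts: list) -> str:
    """A record is MIXED when its alt alleles disagree on type."""
    types = {allele_type(ref, a) for a in alts}
    return types.pop() if len(types) == 1 else "MIXED"

def is_indel(ref, alts):
    return vc_type(ref, alts) == "INDEL"

def is_mixed(ref, alts):
    return vc_type(ref, alts) == "MIXED"

# A multiallelic site with one deletion alt and one SNP-like alt:
site_type = vc_type("AC", ["A", "GC"])   # MIXED, not INDEL
```

With this record, `is_indel("AC", ["A", "GC"])` is false while the corrected predicate `is_indel(...) or is_mixed(...)` is true, which is exactly the VQSR fix above.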
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5973 348d0f76-0448-11de-a6fe-93d51630548a
Temporarily disabled contracts in integrationtests until we can find the cause
of the new error that's cropping up for Ryan.
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5967 348d0f76-0448-11de-a6fe-93d51630548a
playground / oneoffs to public / private. Currently implemented as an svn ->
svn merge, but will have to be tweaked to do a proper svn -> git merge.
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5964 348d0f76-0448-11de-a6fe-93d51630548a
VariantContextUtils now has a utility function that creates a sites-only VariantContext from an input VC
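Conceptually, a sites-only copy keeps the site-level fields and drops all per-sample genotype data. A minimal sketch of that idea, with illustrative field names rather than the actual VariantContextUtils API:

```python
# Sketch of a "sites-only" copy: retain site-level fields, drop genotypes.
# The dict layout and key names here are hypothetical, for illustration only.

def sites_only(vc: dict) -> dict:
    site_keys = ("chrom", "pos", "id", "ref", "alts", "qual", "filter", "info")
    return {k: vc[k] for k in site_keys if k in vc}

record = {
    "chrom": "20", "pos": 10001, "id": ".", "ref": "A", "alts": ["G"],
    "qual": 99.0, "filter": "PASS", "info": {"AC": 1},
    "genotypes": {"NA12878": "0/1"},
}
site = sites_only(record)   # same site info, no genotypes
```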
Add a complex merge test of SNPs and indels from the new batch merge wiki at:
http://www.broadinstitute.org/gsa/wiki/index.php/Merging_batched_call_sets
with multiple alleles for an indel. Created a BatchMergeIntegrationTest that uses GGA with the complex merged input alleles to genotype SNPs and Indels with multiple alleles simultaneously in NA12878. Looks great.
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5959 348d0f76-0448-11de-a6fe-93d51630548a
a) All rank sum tests now work for indels, including multiallelic sites. For the latter, the rank sum test is REF vs. the most common allele
b) Redid the computation of HaplotypeScore for indels. It's now trivially easy to do because we already compute the likelihood of each read vs. the haplotypes during GL computation, so we reuse that if available. For the multiallelic case, we score against N haplotypes, where N is the total number of called alleles.
The drawback is that all cases need the information contained in the likelihood table that stores a likelihood for each pileup element, for each allele. If this table is not available we don't annotate, so right now we can only fully annotate indels when running UG, but not when running VariantAnnotator alone.
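As a rough illustration of the REF-vs-most-common-allele rank sum scheme in a), here is a self-contained Wilcoxon rank-sum (Mann-Whitney U) sketch on per-read base qualities. The sample values, the midrank handling, and the choice to compare against the most common *non-reference* allele are illustrative assumptions, not the GATK implementation:

```python
# Rank sum comparison of a read-level annotation (e.g. base quality)
# between REF-supporting reads and reads supporting the most common
# alt allele at a multiallelic site. From-scratch illustrative sketch.
from collections import Counter

def most_common_alt(allele_per_read):
    """Pick the most frequently supported non-REF allele in the pileup."""
    alts = [a for a in allele_per_read if a != "REF"]
    return Counter(alts).most_common(1)[0][0]

def rank_sum_u(ref_vals, alt_vals):
    """Mann-Whitney U statistic for the alt sample, with midranks for ties."""
    pooled = sorted(ref_vals + alt_vals)
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0   # average of ranks i+1..j
        i = j
    r_alt = sum(ranks[v] for v in alt_vals)
    n_alt = len(alt_vals)
    return r_alt - n_alt * (n_alt + 1) / 2.0

alleles = ["REF", "REF", "A", "AT", "A", "A"]    # multiallelic pileup
quals   = {"REF": [30, 31], "A": [20, 22, 25], "AT": [28]}
target  = most_common_alt(alleles)
u = rank_sum_u(quals["REF"], quals[target])
```

Here all REF qualities exceed all "A" qualities, so U comes out at its extreme value of 0, flagging the quality skew the annotation is meant to capture.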
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5947 348d0f76-0448-11de-a6fe-93d51630548a
Using samtools to merge the low pass bams before cleaning, to avoid "Too many open files" errors with 1500+ bams.
Other minor cleanup as pointed out by the IntelliJ scala plugin.
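One general way to stay under the OS open-file limit when merging very many files is hierarchical (batched) merging. This is a hypothetical sketch of that batching logic only, not necessarily what samtools does internally; the file names are made up:

```python
# Merge files in batches of at most `limit` per merge step, repeating
# until a single output remains, so no step opens too many files at once.

def merge_batches(files, limit):
    """Return the final output name and the per-round output lists."""
    rounds = []
    while len(files) > 1:
        batch_outputs = []
        for i in range(0, len(files), limit):
            batch = files[i:i + limit]
            # here one would actually run a merge over `batch`, e.g.
            # samtools merge <output> <batch...>
            batch_outputs.append(f"merged_{len(rounds)}_{i // limit}.bam")
        rounds.append(batch_outputs)
        files = batch_outputs
    return files[0], rounds

bams = [f"sample_{i}.bam" for i in range(1500)]
final, rounds = merge_batches(bams, limit=100)   # 1500 -> 15 -> 1
```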
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5942 348d0f76-0448-11de-a6fe-93d51630548a
a) Genotype given alleles with indels
b) Genotyping and computing likelihoods of multi-allelic sites.
When the GGA option is enabled, indels will be called on regular pileups, not on extended pileups (extended pileups will be removed shortly, in a next iteration). As a result, likelihood computation is suboptimal: since we can't see reads that start with an insertion right after a position, some insertion quality information is lost and we could be missing a few marginal calls, but it makes everything else much simpler.
For multiallelic sites, we currently can't call them in discovery mode, but we can genotype them and compute/report full PLs on them (annotation support comes in the next commit). Several suboptimal approximations are made in the exact model to compute this. Ideally, the joint likelihood Pr(Data | AC1=i, AC2=j, ...) should be computed, but this is hard. Instead, marginal likelihoods Pr(Data | ACi=k) are computed for all i,k, and QUAL is based on the highest-likelihood allele.
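A toy version of that marginal approximation: instead of the joint Pr(Data | AC1=i, AC2=j, ...), compute Pr(Data | ACi=k) for each alt allele separately, treating the site as biallelic REF/ALTi for a single diploid sample, and pick the highest-likelihood allele. The per-read likelihood values below are made up for illustration; this is not the UG exact-model code:

```python
import math

def marginal_likelihoods(read_liks, alt):
    """log Pr(Data | AC=k), k=0..2, one diploid sample, biallelic REF/alt."""
    out = []
    for ac in (0, 1, 2):
        log_l = 0.0
        for liks in read_liks:   # liks maps allele -> per-read likelihood
            # mixture of ref and alt haplotypes given ac alt copies
            p = ((2 - ac) * liks["REF"] + ac * liks[alt]) / 2.0
            log_l += math.log(p)
        out.append(log_l)
    return out

reads = [
    {"REF": 0.9, "A": 0.05, "AT": 0.05},
    {"REF": 0.1, "A": 0.85, "AT": 0.05},
    {"REF": 0.1, "A": 0.80, "AT": 0.10},
]
# QUAL would be based on the allele with the best non-ref marginal likelihood
best_alt = max(("A", "AT"),
               key=lambda a: max(marginal_likelihoods(reads, a)[1:]))
```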
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5941 348d0f76-0448-11de-a6fe-93d51630548a
Upped the WGP VQSR memory to 32g to power through filtering the whole genome. TODO: figure out what the right amount is.
git-svn-id: file:///humgen/gsa-scr1/gsa-engineering/svn_contents/trunk@5940 348d0f76-0448-11de-a6fe-93d51630548a