BBMap SNP-calling pipeline from WGS or RNA-seq data
>h1tg000001l
ATCTTCAAACCAACTGCGCCCACACCCACTTTCATCATTTCTCCAGACGATAGGGCATTA
ATCATGGCACGGTTCACTGTAATTGAAAATCTTGTATCAGATGGTTTACTAGCATAAGTG
TGTGTTTAATTTTGCAATACGCTTTCCTCACATCTTTGTTCTTAGTGTTGAGGTTGCAGT
GCTCACCATACATCAATTCATAAGTGATTAGAAGAGACAGAGAACAAGGGGCAATAGCAA
AGCATTTGCTCACTTTGGCACCATTCGGACTGCACTCAATAGTTTATATGGTTTAATAAT
CATCCCTGCATATATCATGCAAGAACTCTTTGAATTTGTAATCACCGTGCATGCACACTT
TCGCCAAATTTCTATGGCAAGTCTGCATTGATTCCTTTTTGCTCAACGGATCACTAATTG
TCCTGTATAGTTAGAGCCCGTTCACAAACCTCGCACAGGCAAATCACCTGCAAAGGTCAA
CATCGTTTTCAACGTCAGCGGAGCAATGTTGAACTAAAATTAGCATACTCTAAAAGAATG
RNA-seq~SNPcall-bbmap-callvariants -c 16 -m 128 input_1/ input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta
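The wrapper drives the whole workflow visible in the trace further below: it indexes the reference with bbmap.sh (k=13), maps each paired FASTQ set (maxindel=100000 maxsites2=10000), sorts and indexes the BAMs with samtools, adds read groups with Picard, splits intron-spanning reads with GATK SplitNCigarReads, and then calls variants. A minimal manual sketch of the same steps for a single sample follows; the bbmap.sh, samtools, Picard, and GATK calls mirror the logged commands, while the final callvariants.sh invocation and all file names are illustrative assumptions (the trace in this excerpt ends before that step).

# Sketch of the underlying steps for one sample; file names are illustrative.
ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta

# 1) Build the BBMap index once (k=13, as in the trace).
bbmap.sh threads=16 k=13 ref="$ref"

# 2) Map a read pair (parameters as passed by the wrapper).
bbmap.sh ref="$ref" threads=14 \
    in1=input_1/Le1-1-501-701_1.fastq.gz in2=input_1/Le1-1-501-701_2.fastq.gz \
    out=output/Le1-1-501-701.temp.bam maxindel=100000 maxsites2=10000

# 3) Sort and index the alignment.
samtools sort -@ 14 -o output/Le1-1-501-701.bam output/Le1-1-501-701.temp.bam
samtools index output/Le1-1-501-701.bam

# 4) Reference index, sequence dictionary, and read groups (required by GATK).
samtools faidx "$ref"
picard CreateSequenceDictionary R="$ref" O="${ref%.fasta}.dict"
picard AddOrReplaceReadGroups I=output/Le1-1-501-701.bam \
    O=output.temp/Le1-1-501-701_addrg.bam SO=coordinate \
    RGID=Le1-1-501-701 RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-1-501-701

# 5) RNA-seq only: split reads that span introns (N operators in the CIGAR).
gatk SplitNCigarReads -R "$ref" -I output.temp/Le1-1-501-701_addrg.bam \
    -O output.temp/Le1-1-501-701_split.bam

# 6) Call variants with BBTools callvariants.sh (assumed invocation; the
#    ploidy/rarity/minallelefraction values match the VCF header below).
callvariants.sh in=output.temp/Le1-1-501-701_split.bam ref="$ref" vcf=output.vcf \
    ploidy=2 rarity=1.0 minallelefraction=0.1

The VCF header produced for this run begins as follows, recording the calling parameters and a mapping summary: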
##fileformat=VCFv4.2
##BBMapVersion=38.96
##ploidy=2
##rarity=1.00000
##minallelefraction=0.10000
##reads=703054
##pairedReads=703054
##properlyPairedReads=624842
##properPairRate=0.8888
##readLengthAvg=140.87
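The ## lines above are run metadata written by the variant caller rather than standard INFO/FORMAT definitions: ploidy, rarity, and minallelefraction record the calling parameters, while the read counts summarize the merged alignments. The pairing rate can be reproduced directly from the counts shown; a quick check, assuming bc is available:

echo "scale=6; 624842 / 703054" | bc   # prints .888753, reported as ##properPairRate=0.8888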
pp RNA-seq~SNPcall-bbmap-callvariants -c 16 -m 128 input_1/ input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta Checking the realpath of input files. 0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta script: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants "$scriptdir"/mapping-illumina~bbmap broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail ++ date +%s + time0=1677603411 + echo start at 1677603411 start at 1677603411 + bash /yoshitake/PortablePipeline/PortablePipeline/scripts/mapping-illumina~bbmap -c 16 -m 128 -i '' -j 'maxindel=100000 maxsites2=10000' -b ON -o '' input_1/ input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta Checking the realpath of input files. 0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail + exec ++ tee log.txt + LANG=C + threads=16 + three=16 ++ expr 16 / 2 + threads2=8 ++ expr 16 - 2 + threads1=14 + '[' 14 -lt 1 ']' ++ free -g ++ grep Mem ++ sed -e 's/Mem: *\([0-9]*\) .*/\1/' + memG=251 ++ expr 251 '*' 3 / 4 + memG3=188 + echo ' #####SYSTEM ENVIRONMENT##### threads=16 memory=251G ############################ ' #####SYSTEM ENVIRONMENT##### threads=16 memory=251G ############################ ++ date +%s + time0=1677603411 + echo start at 1677603411 start at 1677603411 + echo -e 'Checking paramter settings...\n' Checking paramter settings... 
+ indexing_param=k=13 + mapping_param='maxindel=100000 maxsites2=10000' + out_files= + for i in '$opt_o' + out_files+=' rpkm=rpkm.tsv' + for i in '$opt_o' + out_files+=' covstats=covstats.tsv' + mapped_only_bam=ON ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh threads=16 k=13 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_015651_639_11352 + echo pp20230301_015651_639_11352 ++ id -u ++ id -g + docker run --name pp20230301_015651_639_11352 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh threads=16 k=13 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta java -ea -Xmx109177m -Xms109177m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 threads=16 k=13 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, threads=16, k=13, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta] Version 38.96 Set threads to 16 No output file. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.393 seconds. Loading index for chunk 1-1, build 1 Generated Index: 1.168 seconds. No reads to process; quitting. Total time: 1.686 seconds. 
+ cat + pppair1=() + pppair2=() + ppsingle=() + IFS= + read i ++ find input_1// ++ egrep '(_R1.*|_1)[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ echo input_1//Le1-1-501-701_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-1-501-701_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-1-501-701_2.fastq.gz + '[' -e input_1//Le1-1-501-701_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i ++ echo input_1//Le1-12-501-708_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-12-501-708_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-12-501-708_2.fastq.gz + '[' -e input_1//Le1-12-501-708_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i ++ echo input_1//Le1-17-502-703_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-17-502-703_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-17-502-703_2.fastq.gz + '[' -e input_1//Le1-17-502-703_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i ++ echo input_1//Le1-13-502-701_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-13-502-701_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-13-502-701_2.fastq.gz + '[' -e input_1//Le1-13-502-701_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i + IFS= + read i ++ find input_1// ++ egrep '[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-1-501-701_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-12-501-708_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-12-501-708_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = 
input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_2.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_2.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_2.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-17-502-703_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-17-502-703_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-17-502-703_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = 
input_1//Le1-13-502-701_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + mkdir -p output + (( i = 0 )) + xargs '-d\n' -I '{}' -P 1 bash -c '{}' + (( i < 4 )) ++ basename input_1//Le1-1-501-701_1.fastq.gz + prefix=output/Le1-1-501-701_1.fastq.gz + local_out_files=' rpkm=output/Le1-1-501-701_1.fastq.gz_rpkm.tsv covstats=output/Le1-1-501-701_1.fastq.gz_covstats.tsv' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-1-501-701_1.fastq.gz" in2="input_1//Le1-1-501-701_2.fastq.gz" out="output/Le1-1-501-701_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 rpkm=output/Le1-1-501-701_1.fastq.gz_rpkm.tsv covstats=output/Le1-1-501-701_1.fastq.gz_covstats.tsv; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-1-501-701_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) ++ basename input_1//Le1-12-501-708_1.fastq.gz + prefix=output/Le1-12-501-708_1.fastq.gz + local_out_files=' rpkm=output/Le1-12-501-708_1.fastq.gz_rpkm.tsv covstats=output/Le1-12-501-708_1.fastq.gz_covstats.tsv' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-12-501-708_1.fastq.gz" in2="input_1//Le1-12-501-708_2.fastq.gz" out="output/Le1-12-501-708_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 rpkm=output/Le1-12-501-708_1.fastq.gz_rpkm.tsv covstats=output/Le1-12-501-708_1.fastq.gz_covstats.tsv; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-12-501-708_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) ++ basename input_1//Le1-17-502-703_1.fastq.gz + prefix=output/Le1-17-502-703_1.fastq.gz + local_out_files=' rpkm=output/Le1-17-502-703_1.fastq.gz_rpkm.tsv covstats=output/Le1-17-502-703_1.fastq.gz_covstats.tsv' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-17-502-703_1.fastq.gz" 
in2="input_1//Le1-17-502-703_2.fastq.gz" out="output/Le1-17-502-703_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 rpkm=output/Le1-17-502-703_1.fastq.gz_rpkm.tsv covstats=output/Le1-17-502-703_1.fastq.gz_covstats.tsv; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-17-502-703_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) ++ basename input_1//Le1-13-502-701_1.fastq.gz + prefix=output/Le1-13-502-701_1.fastq.gz + local_out_files=' rpkm=output/Le1-13-502-701_1.fastq.gz_rpkm.tsv covstats=output/Le1-13-502-701_1.fastq.gz_covstats.tsv' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-13-502-701_1.fastq.gz" in2="input_1//Le1-13-502-701_2.fastq.gz" out="output/Le1-13-502-701_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 rpkm=output/Le1-13-502-701_1.fastq.gz_rpkm.tsv covstats=output/Le1-13-502-701_1.fastq.gz_covstats.tsv; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-13-502-701_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) java -ea -Xmx109179m -Xms109179m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta threads=14 in1=input_1//Le1-1-501-701_1.fastq.gz in2=input_1//Le1-1-501-701_2.fastq.gz out=output/Le1-1-501-701_1.fastq.gz.temp.bam maxindel=100000 maxsites2=10000 rpkm=output/Le1-1-501-701_1.fastq.gz_rpkm.tsv covstats=output/Le1-1-501-701_1.fastq.gz_covstats.tsv Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta, threads=14, in1=input_1//Le1-1-501-701_1.fastq.gz, in2=input_1//Le1-1-501-701_2.fastq.gz, out=output/Le1-1-501-701_1.fastq.gz.temp.bam, maxindel=100000, maxsites2=10000, rpkm=output/Le1-1-501-701_1.fastq.gz_rpkm.tsv, covstats=output/Le1-1-501-701_1.fastq.gz_covstats.tsv] Version 38.96 Set threads to 14 Retaining first best site only for ambiguous mappings. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.393 seconds. Loading index for chunk 1-1, build 1 Generated Index: 1.164 seconds. Analyzed Index: 3.715 seconds. Found samtools 1.15 Started output stream: 0.078 seconds. Cleared Memory: 0.213 seconds. Processing reads in paired-ended mode. Started read stream. Started 14 mapping threads. Checking the realpath of input files. 
0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta script: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants "$scriptdir"/mapping-illumina~bbmap broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail ++ date +%s + time0=1677603490 + echo start at 1677603490 start at 1677603490 + bash /yoshitake/PortablePipeline/PortablePipeline/scripts/mapping-illumina~bbmap -c 16 -m 128 -i '' -j 'maxindel=100000 maxsites2=10000' -b ON -o ' ' input_1/ input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta Checking the realpath of input files. 0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail + exec ++ tee log.txt + LANG=C + threads=16 + three=16 ++ expr 16 / 2 + threads2=8 ++ expr 16 - 2 + threads1=14 + '[' 14 -lt 1 ']' ++ free -g ++ grep Mem ++ sed -e 's/Mem: *\([0-9]*\) .*/\1/' + memG=251 ++ expr 251 '*' 3 / 4 + memG3=188 + echo ' #####SYSTEM ENVIRONMENT##### threads=16 memory=251G ############################ ' #####SYSTEM ENVIRONMENT##### threads=16 memory=251G ############################ ++ date +%s + time0=1677603491 + echo start at 1677603491 start at 1677603491 + echo -e 'Checking paramter settings...\n' Checking paramter settings... 
+ indexing_param=k=13 + mapping_param='maxindel=100000 maxsites2=10000' + out_files= + mapped_only_bam=ON ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh threads=16 k=13 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_015811_642_8631 + echo pp20230301_015811_642_8631 ++ id -u ++ id -g + docker run --name pp20230301_015811_642_8631 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh threads=16 k=13 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta java -ea -Xmx109180m -Xms109180m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 threads=16 k=13 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, threads=16, k=13, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta] Version 38.96 Set threads to 16 No output file. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.388 seconds. Loading index for chunk 1-1, build 1 Generated Index: 1.167 seconds. No reads to process; quitting. Total time: 1.679 seconds. + cat + pppair1=() + pppair2=() + ppsingle=() + IFS= + read i ++ find input_1// ++ egrep '(_R1.*|_1)[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ echo input_1//Le1-1-501-701_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-1-501-701_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-1-501-701_2.fastq.gz + '[' -e input_1//Le1-1-501-701_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i ++ echo input_1//Le1-12-501-708_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-12-501-708_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-12-501-708_2.fastq.gz + '[' -e input_1//Le1-12-501-708_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i ++ echo input_1//Le1-17-502-703_1.fastq.gz ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' ++ wc -l + '[' 1 = 1 ']' ++ echo input_1//Le1-17-502-703_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-17-502-703_2.fastq.gz + '[' -e input_1//Le1-17-502-703_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i ++ echo input_1//Le1-13-502-701_1.fastq.gz ++ wc -l ++ egrep '_1[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' + '[' 1 = 1 ']' ++ echo input_1//Le1-13-502-701_1.fastq.gz ++ sed 's/_1[.]\(f\(\(ast\|\)\(q\|a\)\|na\|sa\)\(\|[.]gz\)\)$/_2.\1/' + temppair2=input_1//Le1-13-502-701_2.fastq.gz + '[' -e input_1//Le1-13-502-701_2.fastq.gz ']' + pppair1+=("$i") + pppair2+=("$temppair2") + IFS= + read i + IFS= + read i ++ find input_1// ++ egrep '[.]f((ast|)(q|a)|na|sa)(|[.]gz)$' + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = 
input_1//Le1-1-501-701_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-1-501-701_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-12-501-708_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-12-501-708_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_2.fastq.gz = input_1//Le1-12-501-708_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_2.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_2.fastq.gz = input_1//Le1-17-502-703_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' 
'${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_2.fastq.gz = input_1//Le1-13-502-701_2.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-17-502-703_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-17-502-703_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-17-502-703_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + ppinputcheck=0 + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-1-501-701_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-12-501-708_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-17-502-703_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + for j in '${pppair1[@]:-}' '${pppair2[@]:-}' '${ppsingle[@]:-}' + '[' input_1//Le1-13-502-701_1.fastq.gz = input_1//Le1-13-502-701_1.fastq.gz ']' + ppinputcheck=1 + break + '[' 1 = 0 ']' + IFS= + read i + mkdir -p output + (( i = 0 )) + (( i < 4 )) + xargs '-d\n' -I '{}' -P 1 bash -c '{}' ++ basename input_1//Le1-1-501-701_1.fastq.gz + prefix=output/Le1-1-501-701_1.fastq.gz + local_out_files= + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-1-501-701_1.fastq.gz" in2="input_1//Le1-1-501-701_2.fastq.gz" out="output/Le1-1-501-701_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 ; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-1-501-701_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) ++ basename input_1//Le1-12-501-708_1.fastq.gz + prefix=output/Le1-12-501-708_1.fastq.gz + local_out_files= + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v 
$PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-12-501-708_1.fastq.gz" in2="input_1//Le1-12-501-708_2.fastq.gz" out="output/Le1-12-501-708_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 ; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-12-501-708_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) ++ basename input_1//Le1-17-502-703_1.fastq.gz + prefix=output/Le1-17-502-703_1.fastq.gz + local_out_files= + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-17-502-703_1.fastq.gz" in2="input_1//Le1-17-502-703_2.fastq.gz" out="output/Le1-17-502-703_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 ; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-17-502-703_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) ++ basename input_1//Le1-13-502-701_1.fastq.gz + prefix=output/Le1-13-502-701_1.fastq.gz + local_out_files= + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bbmap.sh ref="input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" threads="14" in1="input_1//Le1-13-502-701_1.fastq.gz" in2="input_1//Le1-13-502-701_2.fastq.gz" out="output/Le1-13-502-701_1.fastq.gz".temp.bam maxindel=100000 maxsites2=10000 ; PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 bash run-samtools.sh "output/Le1-13-502-701_1.fastq.gz".temp.bam "14" "ON" "6553"' + (( i++ )) + (( i < 4 )) java -ea -Xmx109193m -Xms109193m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta threads=14 in1=input_1//Le1-1-501-701_1.fastq.gz in2=input_1//Le1-1-501-701_2.fastq.gz out=output/Le1-1-501-701_1.fastq.gz.temp.bam maxindel=100000 maxsites2=10000 Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta, threads=14, in1=input_1//Le1-1-501-701_1.fastq.gz, in2=input_1//Le1-1-501-701_2.fastq.gz, out=output/Le1-1-501-701_1.fastq.gz.temp.bam, maxindel=100000, maxsites2=10000] Version 38.96 Set threads to 14 Retaining first best site only for ambiguous mappings. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.390 seconds. 
Loading index for chunk 1-1, build 1 Generated Index: 1.137 seconds. Analyzed Index: 3.456 seconds. Found samtools 1.15 Started output stream: 0.077 seconds. Cleared Memory: 0.216 seconds. Processing reads in paired-ended mode. Started read stream. Started 14 mapping threads. Detecting finished threads: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ------------------ Results ------------------ Genome: 1 Key Length: 13 Max Indel: 100000 Minimum Score Ratio: 0.56 Mapping Mode: normal Reads Used: 2000000 (302000000 bases) Mapping: 196.157 seconds. Reads/sec: 10195.91 kBases/sec: 1539.58 Pairing data: pct pairs num pairs pct bases num bases mated pairs: 6.0242% 60242 6.0242% 18193084 bad pairs: 0.4714% 4714 0.4714% 1423628 insert size avg: 292.18 Read 1 data: pct reads num reads pct bases num bases mapped: 6.9196% 69196 6.9196% 10448596 unambiguous: 5.0213% 50213 5.0213% 7582163 ambiguous: 1.8983% 18983 1.8983% 2866433 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 3.6137% 36137 3.6137% 5456687 semiperfect site: 3.6238% 36238 3.6238% 5471938 rescued: 0.4217% 4217 Match Rate: NA NA 36.7648% 10022613 Error Rate: 47.7687% 33054 63.2324% 17238085 Sub Rate: 45.2584% 31317 0.9840% 268259 Del Rate: 10.4096% 7203 61.6727% 16812873 Ins Rate: 17.3464% 12003 0.5757% 156953 N Rate: 0.0361% 25 0.0028% 771 Read 2 data: pct reads num reads pct bases num bases mapped: 6.8722% 68722 6.8722% 10377022 unambiguous: 4.9803% 49803 4.9803% 7520253 ambiguous: 1.8919% 18919 1.8919% 2856769 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 3.2269% 32269 3.2269% 4872619 semiperfect site: 3.2366% 32366 3.2366% 4887266 rescued: 0.5981% 5981 Match Rate: NA NA 37.3713% 9913213 Error Rate: 53.0179% 36436 62.6258% 16612276 Sub Rate: 50.7450% 34874 1.1370% 301606 Del Rate: 10.2773% 7063 60.8802% 16149229 Ins Rate: 17.2429% 11850 0.6086% 161441 N Rate: 0.0757% 52 0.0029% 762 Total time: 201.581 seconds. calulating flagstat... Sorting and indexing bam [bam_sort_core] merging from 0 files and 14 in-memory blocks... Counting mapped reads... java -ea -Xmx109193m -Xms109193m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta threads=14 in1=input_1//Le1-12-501-708_1.fastq.gz in2=input_1//Le1-12-501-708_2.fastq.gz out=output/Le1-12-501-708_1.fastq.gz.temp.bam maxindel=100000 maxsites2=10000 Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta, threads=14, in1=input_1//Le1-12-501-708_1.fastq.gz, in2=input_1//Le1-12-501-708_2.fastq.gz, out=output/Le1-12-501-708_1.fastq.gz.temp.bam, maxindel=100000, maxsites2=10000] Version 38.96 Set threads to 14 Retaining first best site only for ambiguous mappings. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.376 seconds. Loading index for chunk 1-1, build 1 Generated Index: 1.131 seconds. Analyzed Index: 3.415 seconds. Found samtools 1.15 Started output stream: 0.057 seconds. Cleared Memory: 0.214 seconds. Processing reads in paired-ended mode. Started read stream. Started 14 mapping threads. Detecting finished threads: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ------------------ Results ------------------ Genome: 1 Key Length: 13 Max Indel: 100000 Minimum Score Ratio: 0.56 Mapping Mode: normal Reads Used: 2000000 (302000000 bases) Mapping: 58.933 seconds. 
Reads/sec: 33936.77 kBases/sec: 5124.45 Pairing data: pct pairs num pairs pct bases num bases mated pairs: 0.6166% 6166 0.6166% 1862132 bad pairs: 0.0829% 829 0.0829% 250358 insert size avg: 417.26 Read 1 data: pct reads num reads pct bases num bases mapped: 0.8171% 8171 0.8171% 1233821 unambiguous: 0.5003% 5003 0.5003% 755453 ambiguous: 0.3168% 3168 0.3168% 478368 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 0.4606% 4606 0.4606% 695506 semiperfect site: 0.4628% 4628 0.4628% 698828 rescued: 0.0489% 489 Match Rate: NA NA 15.3850% 1187753 Error Rate: 43.6299% 3565 84.6150% 6532460 Sub Rate: 38.7590% 3167 0.3571% 27571 Del Rate: 10.8799% 889 84.0183% 6486393 Ins Rate: 18.6146% 1521 0.2396% 18496 N Rate: 0.0122% 1 0.0000% 1 Read 2 data: pct reads num reads pct bases num bases mapped: 0.8057% 8057 0.8057% 1216607 unambiguous: 0.4938% 4938 0.4938% 745638 ambiguous: 0.3119% 3119 0.3119% 470969 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 0.4006% 4006 0.4006% 604906 semiperfect site: 0.4035% 4035 0.4035% 609285 rescued: 0.0759% 759 Match Rate: NA NA 17.3427% 1166567 Error Rate: 50.2668% 4050 82.6568% 5559955 Sub Rate: 46.2083% 3723 0.4822% 32433 Del Rate: 10.6739% 860 81.9134% 5509951 Ins Rate: 17.4755% 1408 0.2612% 17571 N Rate: 0.0621% 5 0.0005% 36 Total time: 64.253 seconds. calulating flagstat... Sorting and indexing bam [bam_sort_core] merging from 0 files and 14 in-memory blocks... Counting mapped reads... java -ea -Xmx109181m -Xms109181m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta threads=14 in1=input_1//Le1-17-502-703_1.fastq.gz in2=input_1//Le1-17-502-703_2.fastq.gz out=output/Le1-17-502-703_1.fastq.gz.temp.bam maxindel=100000 maxsites2=10000 Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta, threads=14, in1=input_1//Le1-17-502-703_1.fastq.gz, in2=input_1//Le1-17-502-703_2.fastq.gz, out=output/Le1-17-502-703_1.fastq.gz.temp.bam, maxindel=100000, maxsites2=10000] Version 38.96 Set threads to 14 Retaining first best site only for ambiguous mappings. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.373 seconds. Loading index for chunk 1-1, build 1 Generated Index: 1.079 seconds. Analyzed Index: 3.421 seconds. Found samtools 1.15 Started output stream: 0.072 seconds. Cleared Memory: 0.188 seconds. Processing reads in paired-ended mode. Started read stream. Started 14 mapping threads. Detecting finished threads: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ------------------ Results ------------------ Genome: 1 Key Length: 13 Max Indel: 100000 Minimum Score Ratio: 0.56 Mapping Mode: normal Reads Used: 2000000 (302000000 bases) Mapping: 807.637 seconds. 
Reads/sec: 2476.36 kBases/sec: 373.93 Pairing data: pct pairs num pairs pct bases num bases mated pairs: 28.3931% 283931 28.3931% 85747162 bad pairs: 2.5956% 25956 2.5956% 7838712 insert size avg: 339.04 Read 1 data: pct reads num reads pct bases num bases mapped: 32.5483% 325483 32.5483% 49147933 unambiguous: 21.6019% 216019 21.6019% 32618869 ambiguous: 10.9464% 109464 10.9464% 16529064 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 20.6655% 206655 20.6655% 31204905 semiperfect site: 20.7156% 207156 20.7156% 31280556 rescued: 1.7673% 17673 Match Rate: NA NA 31.7415% 48059882 Error Rate: 36.4949% 118786 68.2563% 103347167 Sub Rate: 34.2766% 111566 0.4938% 747709 Del Rate: 7.8980% 25707 67.5399% 102262431 Ins Rate: 9.4446% 30741 0.2226% 337027 N Rate: 0.0418% 136 0.0022% 3315 Read 2 data: pct reads num reads pct bases num bases mapped: 32.2561% 322561 32.2561% 48706711 unambiguous: 21.3931% 213931 21.3931% 32303581 ambiguous: 10.8630% 108630 10.8630% 16403130 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 17.8848% 178848 17.8848% 27006048 semiperfect site: 17.9266% 179266 17.9266% 27069166 rescued: 2.1757% 21757 Match Rate: NA NA 32.4895% 47428610 Error Rate: 44.5292% 143636 67.5075% 98548430 Sub Rate: 42.5829% 137358 0.6448% 941299 Del Rate: 7.7231% 24912 66.6350% 97274658 Ins Rate: 9.4052% 30338 0.2278% 332473 N Rate: 0.0741% 239 0.0030% 4329 Total time: 812.905 seconds. calulating flagstat... Sorting and indexing bam [bam_sort_core] merging from 0 files and 14 in-memory blocks... Counting mapped reads... java -ea -Xmx109194m -Xms109194m -cp /usr/local/opt/bbmap-38.96-1/current/ align2.BBMap build=1 overwrite=true fastareadlen=500 ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta threads=14 in1=input_1//Le1-13-502-701_1.fastq.gz in2=input_1//Le1-13-502-701_2.fastq.gz out=output/Le1-13-502-701_1.fastq.gz.temp.bam maxindel=100000 maxsites2=10000 Executing align2.BBMap [build=1, overwrite=true, fastareadlen=500, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta, threads=14, in1=input_1//Le1-13-502-701_1.fastq.gz, in2=input_1//Le1-13-502-701_2.fastq.gz, out=output/Le1-13-502-701_1.fastq.gz.temp.bam, maxindel=100000, maxsites2=10000] Version 38.96 Set threads to 14 Retaining first best site only for ambiguous mappings. NOTE: Ignoring reference file because it already appears to have been processed. NOTE: If you wish to regenerate the index, please manually delete ref/genome/1/summary.txt Set genome to 1 Loaded Reference: 0.382 seconds. Loading index for chunk 1-1, build 1 Generated Index: 1.153 seconds. Analyzed Index: 3.522 seconds. Found samtools 1.15 Started output stream: 0.066 seconds. Cleared Memory: 0.195 seconds. Processing reads in paired-ended mode. Started read stream. Started 14 mapping threads. Detecting finished threads: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ------------------ Results ------------------ Genome: 1 Key Length: 13 Max Indel: 100000 Minimum Score Ratio: 0.56 Mapping Mode: normal Reads Used: 2000000 (302000000 bases) Mapping: 250.440 seconds. 
Reads/sec: 7985.95 kBases/sec: 1205.88 Pairing data: pct pairs num pairs pct bases num bases mated pairs: 10.0927% 100927 10.0927% 30479954 bad pairs: 0.8867% 8867 0.8867% 2677834 insert size avg: 381.70 Read 1 data: pct reads num reads pct bases num bases mapped: 11.6839% 116839 11.6839% 17642689 unambiguous: 8.1950% 81950 8.1950% 12374450 ambiguous: 3.4889% 34889 3.4889% 5268239 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 7.5404% 75404 7.5404% 11386004 semiperfect site: 7.5537% 75537 7.5537% 11406087 rescued: 0.6799% 6799 Match Rate: NA NA 33.2878% 17247522 Error Rate: 35.4462% 41415 66.7094% 34564390 Sub Rate: 32.7006% 38207 0.4974% 257723 Del Rate: 7.6969% 8993 65.9495% 34170663 Ins Rate: 10.3048% 12040 0.2625% 136004 N Rate: 0.0462% 54 0.0028% 1440 Read 2 data: pct reads num reads pct bases num bases mapped: 11.5805% 115805 11.5805% 17486555 unambiguous: 8.1197% 81197 8.1197% 12260747 ambiguous: 3.4608% 34608 3.4608% 5225808 low-Q discards: 0.0000% 0 0.0000% 0 perfect best site: 6.2792% 62792 6.2792% 9481592 semiperfect site: 6.2914% 62914 6.2914% 9500014 rescued: 0.8684% 8684 Match Rate: NA NA 34.6408% 17006518 Error Rate: 45.7507% 52982 65.3551% 32085334 Sub Rate: 43.3466% 50198 0.6998% 343567 Del Rate: 7.5782% 8776 64.3814% 31607304 Ins Rate: 10.1117% 11710 0.2739% 134463 N Rate: 0.0786% 91 0.0041% 2007 Total time: 255.895 seconds. calulating flagstat... Sorting and indexing bam [bam_sort_core] merging from 0 files and 14 in-memory blocks... Counting mapped reads... ++ date + echo completion at Wed Mar 1 02:20:44 JST 2023 completion at Wed Mar 1 02:20:44 JST 2023 ++ date +%s + time_fin=1677604844 ++ echo 'scale=2; (1677604844 - 1677603491)/60' ++ bc + echo -e 'Total running time is 22.55 min' Total running time is 22.55 min + echo 'Run completed!' Run completed! 
+ post_processing + '[' 2 = 1 ']' + exit ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ sed 's/[.]\(fa\|fasta\|fsa\|fna\)$//' + refbase=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_022044_781_15034 + echo pp20230301_022044_781_15034 ++ id -u ++ id -g + docker run --name pp20230301_022044_781_15034 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + FUNC_RUN_DOCKER quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + PP_RUN_IMAGE=quay.io/biocontainers/picard:2.18.27--0 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_022045_717_10950 + echo pp20230301_022045_717_10950 ++ id -u ++ id -g + docker run --name pp20230301_022045_717_10950 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 17:20:46 CreateSequenceDictionary ********** NOTE: Picard's command line syntax is changing. ********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** CreateSequenceDictionary -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -O input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict ********** 17:20:47.401 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 17:20:47 GMT 2023] CreateSequenceDictionary OUTPUT=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict REFERENCE=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta TRUNCATE_NAMES_AT_WHITESPACE=true NUM_SEQUENCES=2147483647 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 17:20:47 GMT 2023] Executing as ?@49b6953b65e9 on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT [Tue Feb 28 17:20:47 GMT 2023] picard.sam.CreateSequenceDictionary done. Elapsed time: 0.01 minutes. 
Runtime.totalMemory()=2147483648 + cat + mkdir -p output.temp output2 + read i + xargs '-d\n' -I '{}' -P 1 bash -c '{}' + ls output/Le1-1-501-701_1.fastq.gz.bam output/Le1-12-501-708_1.fastq.gz.bam output/Le1-13-502-701_1.fastq.gz.bam output/Le1-17-502-703_1.fastq.gz.bam ++ basenmae output/Le1-1-501-701_1.fastq.gz.bam .bam /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants: 行 56: basenmae: コマンドが見つかりません + j= ++ onerror 61 ++ status=1 ++ script=/yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants ++ line=61 ++ shift ++ set +x ------------------------------------------------------------ Error occured on /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants [Line 61]: Status 1 PID: 348967 User: yoshitake.kazutoshi Current directory: /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants Command line: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants ------------------------------------------------------------ PID: 348965 pp runtime error. Checking the realpath of input files. 0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta script: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants "$scriptdir"/mapping-illumina~bbmap broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail ++ date +%s + time0=1677627175 + echo start at 1677627175 start at 1677627175 ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ sed 's/[.]\(fa\|fasta\|fsa\|fna\)$//' + refbase=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_083255_301_249 + echo pp20230301_083255_301_249 ++ id -u ++ id -g + docker run --name pp20230301_083255_301_249 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + FUNC_RUN_DOCKER quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + PP_RUN_IMAGE=quay.io/biocontainers/picard:2.18.27--0 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + 
PPDOCNAME=pp20230301_083256_208_845 + echo pp20230301_083256_208_845 ++ id -u ++ id -g + docker run --name pp20230301_083256_208_845 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:32:57 CreateSequenceDictionary ********** NOTE: Picard's command line syntax is changing. ********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** CreateSequenceDictionary -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -O input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict ********** 23:32:57.543 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:32:57 GMT 2023] CreateSequenceDictionary OUTPUT=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict REFERENCE=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta TRUNCATE_NAMES_AT_WHITESPACE=true NUM_SEQUENCES=2147483647 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:32:57 GMT 2023] Executing as ?@02a181fd478b on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT [Tue Feb 28 23:32:57 GMT 2023] picard.sam.CreateSequenceDictionary done. Elapsed time: 0.00 minutes. Runtime.totalMemory()=2147483648 To get help, see http://broadinstitute.github.io/picard/index.html#GettingHelp Exception in thread "main" picard.PicardException: /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict already exists. Delete this file and try again, or specify a different output file. at picard.sam.CreateSequenceDictionary.doWork(CreateSequenceDictionary.java:209) at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:295) at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:103) at picard.cmdline.PicardCommandLine.main(PicardCommandLine.java:113) PID: 403851 pp runtime error. Checking the realpath of input files. 
0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta script: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants "$scriptdir"/mapping-illumina~bbmap broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail ++ date +%s + time0=1677627548 + echo start at 1677627548 start at 1677627548 ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ sed 's/[.]\(fa\|fasta\|fsa\|fna\)$//' + refbase=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_083908_407_13623 + echo pp20230301_083908_407_13623 ++ id -u ++ id -g + docker run --name pp20230301_083908_407_13623 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + rm -f input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + FUNC_RUN_DOCKER quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + PP_RUN_IMAGE=quay.io/biocontainers/picard:2.18.27--0 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_083909_334_23984 + echo pp20230301_083909_334_23984 ++ id -u ++ id -g + docker run --name pp20230301_083909_334_23984 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:39:10 CreateSequenceDictionary ********** NOTE: Picard's command line syntax is changing. 
********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** CreateSequenceDictionary -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -O input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict ********** 23:39:10.680 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:39:10 GMT 2023] CreateSequenceDictionary OUTPUT=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict REFERENCE=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta TRUNCATE_NAMES_AT_WHITESPACE=true NUM_SEQUENCES=2147483647 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:39:10 GMT 2023] Executing as ?@dfee3dc54eee on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT [Tue Feb 28 23:39:11 GMT 2023] picard.sam.CreateSequenceDictionary done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=2147483648 + cat + mkdir -p output.temp output2 + read i + xargs '-d\n' -I '{}' -P 1 bash -c '{}' + ls output/Le1-1-501-701_1.fastq.gz.bam output/Le1-12-501-708_1.fastq.gz.bam output/Le1-13-502-701_1.fastq.gz.bam output/Le1-17-502-703_1.fastq.gz.bam ++ basename output/Le1-1-501-701_1.fastq.gz.bam .bam + j=Le1-1-501-701_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-1-501-701_1.fastq.gz.bam" O=output.temp/"Le1-1-501-701_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-1-501-701_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-1-501-701_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-1-501-701_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam; ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R 
"input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-1-501-701_1.fastq.gz".bam' + read i ++ basename output/Le1-12-501-708_1.fastq.gz.bam .bam + j=Le1-12-501-708_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-12-501-708_1.fastq.gz.bam" O=output.temp/"Le1-12-501-708_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-12-501-708_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-12-501-708_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-12-501-708_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam; ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-12-501-708_1.fastq.gz".bam' + read i ++ basename output/Le1-13-502-701_1.fastq.gz.bam .bam bash: -c: 行 1: 構文エラー: 予期しないファイル終了 (EOF) です + j=Le1-13-502-701_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-13-502-701_1.fastq.gz.bam" O=output.temp/"Le1-13-502-701_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-13-502-701_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-13-502-701_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-13-502-701_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam; ' + echo -n 
'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-13-502-701_1.fastq.gz".bam' + read i ++ basename output/Le1-17-502-703_1.fastq.gz.bam .bam bash: -c: line 1: syntax error: unexpected end of file (EOF) + j=Le1-17-502-703_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-17-502-703_1.fastq.gz.bam" O=output.temp/"Le1-17-502-703_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-17-502-703_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-17-502-703_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-17-502-703_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam; ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-17-502-703_1.fastq.gz".bam' + read i bash: -c: line 1: syntax error: unexpected end of file (EOF) bash: -c: line 1: syntax error: unexpected end of file (EOF) ++ onerror 62 ++ status=123 ++ script=/yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants ++ line=62 ++ shift ++ set +x ------------------------------------------------------------ Error occured on /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants [Line 62]: Status 123 PID: 405362 User: yoshitake.kazutoshi Current directory: /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants Command line: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants
------------------------------------------------------------ PID: 405360 pp runtime error. Checking the realpath of input files. 0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta script: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants "$scriptdir"/mapping-illumina~bbmap broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail ++ date +%s + time0=1677627617 + echo start at 1677627617 start at 1677627617 ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ sed 's/[.]\(fa\|fasta\|fsa\|fna\)$//' + refbase=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_084017_338_885 + echo pp20230301_084017_338_885 ++ id -u ++ id -g + docker run --name pp20230301_084017_338_885 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + rm -f input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + FUNC_RUN_DOCKER quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + PP_RUN_IMAGE=quay.io/biocontainers/picard:2.18.27--0 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_084018_231_32683 + echo pp20230301_084018_231_32683 ++ id -u ++ id -g + docker run --name pp20230301_084018_231_32683 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:40:19 CreateSequenceDictionary ********** NOTE: Picard's command line syntax is changing. 
********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** CreateSequenceDictionary -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -O input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict ********** 23:40:20.001 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:40:20 GMT 2023] CreateSequenceDictionary OUTPUT=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict REFERENCE=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta TRUNCATE_NAMES_AT_WHITESPACE=true NUM_SEQUENCES=2147483647 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:40:20 GMT 2023] Executing as ?@a530dca5357e on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT [Tue Feb 28 23:40:20 GMT 2023] picard.sam.CreateSequenceDictionary done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=2147483648 + cat + mkdir -p output.temp output2 + ls output/Le1-1-501-701_1.fastq.gz.bam output/Le1-12-501-708_1.fastq.gz.bam output/Le1-13-502-701_1.fastq.gz.bam output/Le1-17-502-703_1.fastq.gz.bam + read i + xargs '-d\n' -I '{}' -P 1 bash -c '{}' ++ basename output/Le1-1-501-701_1.fastq.gz.bam .bam + j=Le1-1-501-701_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-1-501-701_1.fastq.gz.bam" O=output.temp/"Le1-1-501-701_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-1-501-701_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-1-501-701_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-1-501-701_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R 
"input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-1-501-701_1.fastq.gz".bam' + read i ++ basename output/Le1-12-501-708_1.fastq.gz.bam .bam + j=Le1-12-501-708_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-12-501-708_1.fastq.gz.bam" O=output.temp/"Le1-12-501-708_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-12-501-708_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-12-501-708_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-12-501-708_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-12-501-708_1.fastq.gz".bam' + read i ++ basename output/Le1-13-502-701_1.fastq.gz.bam .bam + j=Le1-13-502-701_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-13-502-701_1.fastq.gz.bam" O=output.temp/"Le1-13-502-701_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-13-502-701_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-13-502-701_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-13-502-701_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date 
+%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-13-502-701_1.fastq.gz".bam' + read i ++ basename output/Le1-17-502-703_1.fastq.gz.bam .bam + j=Le1-17-502-703_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-17-502-703_1.fastq.gz.bam" O=output.temp/"Le1-17-502-703_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-17-502-703_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-17-502-703_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-17-502-703_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-17-502-703_1.fastq.gz".bam' + read i /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:40:21 AddOrReplaceReadGroups ********** NOTE: Picard's command line syntax is changing. 
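Note: the "syntax error: unexpected end of file (EOF)" failures in the previous attempt (exit status 123 from xargs) came from the generated samtools command string: the second subshell, the one running samtools view -Sb -o ..._addrg_repN.bam, was opened with '(' but never closed, so bash -c reached the end of the command while still inside the parentheses. In the rerun above the generated string ends with _addrg_repN.bam); ' and parses cleanly. Reduced to its shape (docker wrapping and real file names omitted, so this is only an illustration):

  # broken: the second subshell is never closed
  (samtools view -h sample_addrg.bam) | bash run-awk-replace.sh | (samtools view -Sb -o sample_addrg_repN.bam;
  # fixed: close the subshell before the trailing semicolon
  (samtools view -h sample_addrg.bam) | bash run-awk-replace.sh | (samtools view -Sb -o sample_addrg_repN.bam);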
********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** AddOrReplaceReadGroups -I output/Le1-1-501-701_1.fastq.gz.bam -O output.temp/Le1-1-501-701_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-1-501-701_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-1-501-701_1.fastq.gz ********** 23:40:22.113 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:40:22 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-1-501-701_1.fastq.gz.bam OUTPUT=output.temp/Le1-1-501-701_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-1-501-701_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-1-501-701_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:40:22 GMT 2023] Executing as ?@1869a7c02967 on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT INFO 2023-02-28 23:40:22 AddOrReplaceReadGroups Created read-group ID=Le1-1-501-701_1.fastq.gz PL=Illumina LB=library SM=Le1-1-501-701_1.fastq.gz [Tue Feb 28 23:40:25 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.05 minutes. Runtime.totalMemory()=2583691264
[GATK then printed its full usage banner and complete tool listing (Base Calling; Copy Number Variant Discovery; Coverage Analysis; Diagnostics and Quality Control; Example Tools; Flow Based Tools; Genotyping Arrays Manipulation; Intervals Manipulation; Metagenomics; Methylation-Specific Tools; Other; Read Data Manipulation; Reference; Short Variant Discovery; Structural Variant Discovery; Variant Evaluation and Refinement; Variant Filtering; Variant Manipulation); the listing is omitted here.]
[0m [32m MakeSitesOnlyVcf (Picard) [36mCreates a VCF that contains all the site-level information for all records in the input VCF but no genotype information.[0m [32m MakeVcfSampleNameMap (Picard) [36mCreates a TSV from sample name to VCF/GVCF path, with one line per input.[0m [32m MergeVcfs (Picard) [36mCombines multiple variant files into a single variant file[0m [32m PrintVariantsSpark [36mPrints out variants from the input VCF.[0m [32m RemoveNearbyIndels [36m(Internal) Remove indels from the VCF file that are close to each other.[0m [32m RenameSampleInVcf (Picard) [36mRenames a sample within a VCF or BCF.[0m [32m SelectVariants [36mSelect a subset of variants from a VCF file[0m [32m SortVcf (Picard) [36mSorts one or more VCF files. [0m [32m SplitVcfs (Picard) [36mSplits SNPs and INDELs into separate files. [0m [32m UpdateVCFSequenceDictionary [36mUpdates the sequence dictionary in a variant file.[0m [32m UpdateVcfSequenceDictionary (Picard) [36mTakes a VCF and a second file that contains a sequence dictionary and updates the VCF with the new sequence dictionary.[0m [32m VariantAnnotator [36mTool for adding annotations to VCF files[0m [32m VcfFormatConverter (Picard) [36mConverts VCF to BCF or BCF to VCF. [0m [32m VcfToIntervalList (Picard) [36mConverts a VCF or BCF file to a Picard Interval List[0m [37m-------------------------------------------------------------------------------------- [0m *********************************************************************** A USER ERROR has occurred: '-Xmx104857M' is not a valid command. *********************************************************************** Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace. Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /gatk/gatk-package-4.3.0.0-local.jar -Xmx104857M SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-1-501-701_1.fastq.gz_addrg_repN.bam -O output2/Le1-1-501-701_1.fastq.gz.bam /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:40:33 AddOrReplaceReadGroups ********** NOTE: Picard's command line syntax is changing. 
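This is the failure that aborts the SplitNCigarReads step: in the java command above, -Xmx104857M is placed after "-jar /gatk/gatk-package-4.3.0.0-local.jar", so the JVM never sees it and GATK instead receives it as its first positional argument, rejects it as an unknown tool name, and prints the full usage listing. With the gatk launcher, JVM options are passed through --java-options, which puts them in front of -jar. A minimal corrected invocation for this sample might look like the sketch below (paths copied from the log; running it inside the same gatk:4.3.0.0 container is assumed and not shown):

    # sketch: pass the heap size (and, optionally, the stack-trace switch the
    # error message suggests) to the JVM via --java-options
    gatk --java-options "-Xmx104857M -DGATK_STACKTRACE_ON_USER_EXCEPTION=true" \
        SplitNCigarReads \
        -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta \
        -I output.temp/Le1-1-501-701_1.fastq.gz_addrg_repN.bam \
        -O output2/Le1-1-501-701_1.fastq.gz.bam

Equivalently, when calling the jar directly, the option has to come before -jar: java -Xmx104857M -jar /gatk/gatk-package-4.3.0.0-local.jar SplitNCigarReads ...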
/usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
INFO 2023-02-28 23:40:33 AddOrReplaceReadGroups

********** NOTE: Picard's command line syntax is changing.
********** For more information, please see:
********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition)
**********
********** The command line looks like this in the new syntax:
**********    AddOrReplaceReadGroups -I output/Le1-12-501-708_1.fastq.gz.bam -O output.temp/Le1-12-501-708_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-12-501-708_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-12-501-708_1.fastq.gz
**********

23:40:34.049 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so
[Tue Feb 28 23:40:34 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-12-501-708_1.fastq.gz.bam OUTPUT=output.temp/Le1-12-501-708_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-12-501-708_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-12-501-708_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false
[Tue Feb 28 23:40:34 GMT 2023] Executing as ?@04a734825eac on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT
INFO 2023-02-28 23:40:34 AddOrReplaceReadGroups Created read-group ID=Le1-12-501-708_1.fastq.gz PL=Illumina LB=library SM=Le1-12-501-708_1.fastq.gz
[Tue Feb 28 23:40:34 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.01 minutes.
Runtime.totalMemory()=2147483648
[... GATK 4.3.0.0 usage output (the same full tool listing) printed again ...]

***********************************************************************
A USER ERROR has occurred: '-Xmx104857M' is not a valid command.
***********************************************************************
Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace.
Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar
Running:
    java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /gatk/gatk-package-4.3.0.0-local.jar -Xmx104857M SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-12-501-708_1.fastq.gz_addrg_repN.bam -O output2/Le1-12-501-708_1.fastq.gz.bam
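The same user error repeats for every library in the run, because the pipeline walks each FASTQ-derived BAM through the same AddOrReplaceReadGroups → SplitNCigarReads chain (Le1-1, Le1-12, Le1-13 and Le1-17 in this dataset). A hypothetical per-sample loop with the corrected memory option is sketched below; the loop, the picard wrapper call, and the step that produces the *_addrg_repN.bam intermediates are assumptions modeled on the file names in this log, not the pipeline's actual script:

    # sketch of the per-sample chain implied by the log (not the real
    # RNA-seq~SNPcall-bbmap-callvariants script)
    for bam in output/*.fastq.gz.bam; do
        s=$(basename "$bam" .bam)          # e.g. Le1-12-501-708_1.fastq.gz
        picard AddOrReplaceReadGroups \
            INPUT="$bam" OUTPUT=output.temp/"$s"_addrg.bam SORT_ORDER=coordinate \
            RGID="$s" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="$s"
        # (the pipeline step that writes output.temp/${s}_addrg_repN.bam is omitted here)
        gatk --java-options "-Xmx104857M" SplitNCigarReads \
            -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta \
            -I output.temp/"$s"_addrg_repN.bam \
            -O output2/"$s".bam
    done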
/usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
INFO 2023-02-28 23:40:41 AddOrReplaceReadGroups

********** NOTE: Picard's command line syntax is changing.
********** For more information, please see:
********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition)
**********
********** The command line looks like this in the new syntax:
**********    AddOrReplaceReadGroups -I output/Le1-13-502-701_1.fastq.gz.bam -O output.temp/Le1-13-502-701_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-13-502-701_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-13-502-701_1.fastq.gz
**********

23:40:41.719 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so
[Tue Feb 28 23:40:41 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-13-502-701_1.fastq.gz.bam OUTPUT=output.temp/Le1-13-502-701_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-13-502-701_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-13-502-701_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false
[Tue Feb 28 23:40:41 GMT 2023] Executing as ?@9dff24c3b9b6 on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT
INFO 2023-02-28 23:40:41 AddOrReplaceReadGroups Created read-group ID=Le1-13-502-701_1.fastq.gz PL=Illumina LB=library SM=Le1-13-502-701_1.fastq.gz
[Tue Feb 28 23:40:47 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.09 minutes.
Runtime.totalMemory()=2600468480
[... GATK 4.3.0.0 usage output (the same full tool listing) printed again for the next sample ...]
[0m [32m RevertSamSpark [31m(BETA Tool) [36mReverts SAM, BAM or CRAM files to a previous state.[0m [32m SamFormatConverter (Picard) [36mConvert a BAM file to a SAM file, or a SAM to a BAM[0m [32m SamToFastq (Picard) [36mConverts a SAM/BAM/CRAM file to FASTQ.[0m [32m SamToFastqWithTags (Picard) [36mConverts a SAM or BAM file to FASTQ alongside FASTQs created from tags.[0m [32m SetNmAndUqTags (Picard) [36mDEPRECATED: Use SetNmMdAndUqTags instead.[0m [32m SetNmMdAndUqTags (Picard) [36mFixes the NM, MD, and UQ tags in a SAM/BAM/CRAM file [0m [32m SimpleMarkDuplicatesWithMateCigar (Picard) [31m(EXPERIMENTAL Tool) [36mExamines aligned records in the supplied SAM or BAM file to locate duplicate molecules.[0m [32m SortSam (Picard) [36mSorts a SAM, BAM or CRAM file. [0m [32m SortSamSpark [31m(BETA Tool) [36mSortSam on Spark (works on SAM/BAM/CRAM)[0m [32m SplitNCigarReads [36mSplit Reads with N in Cigar[0m [32m SplitReads [36mOutputs reads from a SAM/BAM/CRAM by read group, sample and library name[0m [32m SplitSamByLibrary (Picard) [36mSplits a SAM/BAM/CRAM file into individual files by library[0m [32m SplitSamByNumberOfReads (Picard) [36mSplits a SAM/BAM/CRAM file to multiple files.[0m [32m TransferReadTags [31m(EXPERIMENTAL Tool) [36mIncorporate read tags in a SAM file to that of a matching SAM file[0m [32m UmiAwareMarkDuplicatesWithMateCigar (Picard) [31m(EXPERIMENTAL Tool) [36mIdentifies duplicate reads using information from read positions and UMIs. [0m [32m UnmarkDuplicates [36mClears the 0x400 duplicate SAM flag[0m [37m-------------------------------------------------------------------------------------- [0m[31mReference: Tools that analyze and manipulate FASTA format references[0m [32m BaitDesigner (Picard) [36mDesigns oligonucleotide baits for hybrid selection reactions.[0m [32m BwaMemIndexImageCreator [36mCreate a BWA-MEM index image file for use with GATK BWA tools[0m [32m CheckReferenceCompatibility [31m(EXPERIMENTAL Tool) [36mCheck a BAM/VCF for compatibility against specified references.[0m [32m CompareReferences [31m(EXPERIMENTAL Tool) [36mDisplay reference comparison as a tab-delimited table and summarize reference differences.[0m [32m ComposeSTRTableFile [36mComposes a genome-wide STR location table used for DragSTR model auto-calibration[0m [32m CountBasesInReference [36mCount the numbers of each base in a reference file[0m [32m CreateSequenceDictionary (Picard) [36mCreates a sequence dictionary for a reference sequence. 
[0m [32m ExtractSequences (Picard) [36mSubsets intervals from a reference sequence to a new FASTA file.[0m [32m FastaAlternateReferenceMaker [36mCreate an alternative reference by combining a fasta with a vcf.[0m [32m FastaReferenceMaker [36mCreate snippets of a fasta file[0m [32m FindBadGenomicKmersSpark [31m(BETA Tool) [36mIdentifies sequences that occur at high frequency in a reference[0m [32m NonNFastaSize (Picard) [36mCounts the number of non-N bases in a fasta file.[0m [32m NormalizeFasta (Picard) [36mNormalizes lines of sequence in a FASTA file to be of the same length.[0m [32m ScatterIntervalsByNs (Picard) [36mWrites an interval list created by splitting a reference at Ns.[0m [32m ShiftFasta [31m(BETA Tool) [36mCreates a shifted fasta file and shift_back file[0m [37m-------------------------------------------------------------------------------------- [0m[31mShort Variant Discovery: Tools that perform variant calling and genotyping for short variants (SNPs, SNVs and Indels)[0m [32m CalibrateDragstrModel [36mestimates the parameters for the DRAGstr model[0m [32m CombineGVCFs [36mMerges one or more HaplotypeCaller GVCF files into a single GVCF with appropriate annotations[0m [32m GenomicsDBImport [36mImport VCFs to GenomicsDB[0m [32m GenotypeGVCFs [36mPerform joint genotyping on one or more samples pre-called with HaplotypeCaller[0m [32m GnarlyGenotyper [31m(BETA Tool) [36mPerform "quick and dirty" joint genotyping on one or more samples pre-called with HaplotypeCaller[0m [32m HaplotypeBasedVariantRecaller [31m(EXPERIMENTAL Tool) [36mCalculate likelihood matrix for each Allele in VCF against a set of Reads limited by a set of Haplotypes[0m [32m HaplotypeCaller [36mCall germline SNPs and indels via local re-assembly of haplotypes[0m [32m HaplotypeCallerSpark [31m(BETA Tool) [36mHaplotypeCaller on Spark[0m [32m LearnReadOrientationModel [36mGet the maximum likelihood estimates of artifact prior probabilities in the orientation bias mixture model filter[0m [32m MergeMutectStats [36mMerge the stats output by scatters of a single Mutect2 job[0m [32m Mutect2 [36mCall somatic SNVs and indels via local assembly of haplotypes[0m [32m RampedHaplotypeCaller [31m(EXPERIMENTAL Tool) [36mCall germline SNPs and indels via local re-assembly of haplotypes (ramped version)[0m [32m ReadsPipelineSpark [31m(BETA Tool) [36mRuns BWA (if specified), MarkDuplicates, BQSR, and HaplotypeCaller on unaligned or aligned reads to generate a VCF.[0m [37m-------------------------------------------------------------------------------------- [0m[31mStructural Variant Discovery: Tools that detect structural variants [0m [32m CollectSVEvidence [31m(BETA Tool) [36mGathers paired-end and split read evidence files for use in the GATK-SV pipeline.[0m [32m CondenseDepthEvidence [31m(EXPERIMENTAL Tool) [36mMerges adjacent DepthEvidence records.[0m [32m CpxVariantReInterpreterSpark [31m(BETA Tool) [36m(Internal) Tries to extract simple variants from a provided GATK-SV CPX.vcf[0m [32m DiscoverVariantsFromContigAlignmentsSAMSpark [31m(BETA Tool) [36m(Internal) Examines aligned contigs from local assemblies and calls structural variants[0m [32m ExtractSVEvidenceSpark [31m(BETA Tool) [36m(Internal) Extracts evidence of structural variations from reads[0m [32m FindBreakpointEvidenceSpark [31m(BETA Tool) [36m(Internal) Produces local assemblies of genomic regions that may harbor structural variants[0m [32m JointGermlineCNVSegmentation [31m(BETA Tool) [36mCombine segmented gCNV VCFs.[0m [32m PrintReadCounts [31m(EXPERIMENTAL Tool) 
[36mPrints count files for CNV determination.[0m [32m PrintSVEvidence [31m(EXPERIMENTAL Tool) [36mMerges SV evidence records.[0m [32m SVAnnotate [36mAdds gene overlap and variant consequence annotations to SV VCF from GATK-SV pipeline[0m [32m SVCluster [31m(BETA Tool) [36mClusters structural variants[0m [32m SiteDepthtoBAF [31m(EXPERIMENTAL Tool) [36mConvert SiteDepth to BafEvidence[0m [32m StructuralVariantDiscoverer [31m(BETA Tool) [36m(Internal) Examines aligned contigs from local assemblies and calls structural variants or their breakpoints[0m [32m StructuralVariationDiscoveryPipelineSpark [31m(BETA Tool) [36mRuns the structural variation discovery workflow on a single sample[0m [32m SvDiscoverFromLocalAssemblyContigAlignmentsSpark [31m(BETA Tool) [36m(Internal) Examines aligned contigs from local assemblies and calls structural variants or their breakpoints[0m [37m-------------------------------------------------------------------------------------- [0m[31mVariant Evaluation and Refinement: Tools that evaluate and refine variant calls, e.g. with annotations not offered by the engine[0m [32m AlleleFrequencyQC [31m(BETA Tool) [36mGeneral-purpose tool for variant evaluation (% in dbSNP, genotype concordance, Ti/Tv ratios, and a lot more)[0m [32m AnnotateVcfWithBamDepth [36m(Internal) Annotate a vcf with a bam's read depth at each variant locus[0m [32m AnnotateVcfWithExpectedAlleleFraction [36m(Internal) Annotate a vcf with expected allele fractions in pooled sequencing[0m [32m CalculateGenotypePosteriors [36mCalculate genotype posterior probabilities given family and/or known population genotypes[0m [32m CalculateMixingFractions [36m(Internal) Calculate proportions of different samples in a pooled bam[0m [32m Concordance [36mEvaluate concordance of an input VCF against a validated truth VCF[0m [32m CountFalsePositives [31m(BETA Tool) [36mCount PASS variants[0m [32m CountVariants [36mCounts variant records in a VCF file, regardless of filter status.[0m [32m CountVariantsSpark [36mCountVariants on Spark[0m [32m EvaluateInfoFieldConcordance [31m(BETA Tool) [36mEvaluate concordance of info fields in an input VCF against a validated truth VCF[0m [32m FilterFuncotations [31m(EXPERIMENTAL Tool) [36mFilter variants based on clinically-significant Funcotations.[0m [32m FindMendelianViolations (Picard) [36mFinds mendelian violations of all types within a VCF[0m [32m FuncotateSegments [31m(BETA Tool) [36mFunctional annotation for segment files. The output formats are not well-defined and subject to change.[0m [32m Funcotator [36mFunctional Annotator[0m [32m FuncotatorDataSourceDownloader [36mData source downloader for Funcotator.[0m [32m GenotypeConcordance (Picard) [36mCalculates the concordance between genotype data of one sample in each of two VCFs - truth (or reference) vs. calls.[0m [32m MergeMutect2CallsWithMC3 [31m(EXPERIMENTAL Tool) [36mUNSUPPORTED. FOR EVALUATION ONLY. 
Merge M2 calls with MC[0m [32m ReferenceBlockConcordance [36mEvaluate GVCF reference block concordance of an input GVCF against a truth GVCF[0m [32m ValidateBasicSomaticShortMutations [31m(EXPERIMENTAL Tool) [36mCheck variants against tumor-normal bams representing the same samples, though not the ones from the actual calls.[0m [32m ValidateVariants [36mValidate VCF[0m [32m VariantEval [31m(BETA Tool) [36mGeneral-purpose tool for variant evaluation (% in dbSNP, genotype concordance, Ti/Tv ratios, and a lot more)[0m [32m VariantsToTable [36mExtract fields from a VCF file to a tab-delimited table[0m [37m-------------------------------------------------------------------------------------- [0m[31mVariant Filtering: Tools that filter variants by annotating the FILTER column[0m [32m ApplyVQSR [36m Apply a score cutoff to filter variants based on a recalibration table[0m [32m CNNScoreVariants [36mApply a Convolutional Neural Net to filter annotated variants[0m [32m CNNVariantTrain [31m(EXPERIMENTAL Tool) [36mTrain a CNN model for filtering variants[0m [32m CNNVariantWriteTensors [31m(EXPERIMENTAL Tool) [36mWrite variant tensors for training a CNN to filter variants[0m [32m CreateSomaticPanelOfNormals [31m(BETA Tool) [36mMake a panel of normals for use with Mutect2[0m [32m ExtractVariantAnnotations [31m(BETA Tool) [36mExtracts site-level variant annotations, labels, and other metadata from a VCF file to HDF5 files[0m [32m FilterAlignmentArtifacts [31m(EXPERIMENTAL Tool) [36mFilter alignment artifacts from a vcf callset.[0m [32m FilterMutectCalls [36mFilter somatic SNVs and indels called by Mutect2[0m [32m FilterVariantTranches [36mApply tranche filtering[0m [32m FilterVcf (Picard) [36mHard filters a VCF.[0m [32m MTLowHeteroplasmyFilterTool [36mIf too many low het sites, filter all low het sites[0m [32m NuMTFilterTool [36mUses the median autosomal coverage and the allele depth to determine whether the allele might be a NuMT[0m [32m ScoreVariantAnnotations [31m(BETA Tool) [36mScores variant calls in a VCF file based on site-level annotations using a previously trained model[0m [32m TrainVariantAnnotationsModel [31m(BETA Tool) [36mTrains a model for scoring variant calls based on site-level annotations[0m [32m VariantFiltration [36mFilter variant calls based on INFO and/or FORMAT annotations[0m [32m VariantRecalibrator [36mBuild a recalibration model to score variant quality for filtering purposes[0m [37m-------------------------------------------------------------------------------------- [0m[31mVariant Manipulation: Tools that manipulate variant call format (VCF) data[0m [32m FixVcfHeader (Picard) [36mReplaces or fixes a VCF header.[0m [32m GatherVcfs (Picard) [36mGathers multiple VCF files from a scatter operation into a single VCF file[0m [32m GatherVcfsCloud [31m(BETA Tool) [36mGathers multiple VCF files from a scatter operation into a single VCF file[0m [32m LeftAlignAndTrimVariants [36mLeft align and trim vairants[0m [32m LiftoverVcf (Picard) [36mLifts over a VCF file from one reference build to another. 
***********************************************************************
A USER ERROR has occurred: '-Xmx104857M' is not a valid command.
***********************************************************************
Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace.
Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar
Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /gatk/gatk-package-4.3.0.0-local.jar -Xmx104857M SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-13-502-701_1.fastq.gz_addrg_repN.bam -O output2/Le1-13-502-701_1.fastq.gz.bam
(The -Xmx104857M option reached GATK as a positional argument instead of going through --java-options, so the wrapper placed it after the jar and GATK tried to interpret it as a tool name, printing the usage listing above instead of running SplitNCigarReads.)
/usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory
INFO 2023-02-28 23:40:56 AddOrReplaceReadGroups
********** NOTE: Picard's command line syntax is changing.
**********
********** For more information, please see:
********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition)
**********
********** The command line looks like this in the new syntax:
**********
********** AddOrReplaceReadGroups -I output/Le1-17-502-703_1.fastq.gz.bam -O output.temp/Le1-17-502-703_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-17-502-703_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-17-502-703_1.fastq.gz
**********
23:40:57.347 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so
[Tue Feb 28 23:40:57 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-17-502-703_1.fastq.gz.bam OUTPUT=output.temp/Le1-17-502-703_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-17-502-703_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-17-502-703_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false
[Tue Feb 28 23:40:57 GMT 2023] Executing as ?@4855180c684a on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT
INFO 2023-02-28 23:40:57 AddOrReplaceReadGroups Created read-group ID=Le1-17-502-703_1.fastq.gz PL=Illumina LB=library SM=Le1-17-502-703_1.fastq.gz
[Tue Feb 28 23:41:10 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.22 minutes. Runtime.totalMemory()=2583691264
[GATK then printed its full usage/tool listing a second time, identical to the listing above, while failing on the next sample (Le1-17-502-703); it is omitted here.]
***********************************************************************
A USER ERROR has occurred: '-Xmx104857M' is not a valid command.
***********************************************************************
Set the system property GATK_STACKTRACE_ON_USER_EXCEPTION (--java-options '-DGATK_STACKTRACE_ON_USER_EXCEPTION=true') to print the stack trace.
Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar
Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -jar /gatk/gatk-package-4.3.0.0-local.jar -Xmx104857M SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-17-502-703_1.fastq.gz_addrg_repN.bam -O output2/Le1-17-502-703_1.fastq.gz.bam
++ onerror 62
++ status=123
++ script=/yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants
++ line=62
++ shift
++ set +x
------------------------------------------------------------
Error occured on /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants [Line 62]: Status 123
PID: 406241
User: yoshitake.kazutoshi
Current directory: /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants
Command line: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants
------------------------------------------------------------
PID: 406239
pp runtime error.
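The cause is the same in both failed SplitNCigarReads calls: the JVM heap flag has to be handed to GATK through --java-options, not passed as a positional argument where GATK parses it as a (non-existent) tool name. A minimal sketch of the corrected call, with file names taken from the log above and assuming the gatk wrapper is available (for example inside the broadinstitute/gatk:4.3.0.0 container used here):

# pass the heap size via --java-options so it goes to the JVM, not to GATK's tool parser
gatk --java-options "-Xmx104857M" SplitNCigarReads \
  -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta \
  -I output.temp/Le1-17-502-703_1.fastq.gz_addrg_repN.bam \
  -O output2/Le1-17-502-703_1.fastq.gz.bam

The re-run recorded below builds its per-sample commands in exactly this form (gatk --java-options -Xmx104857M SplitNCigarReads ...).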
0 input_1/ 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-1-501-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-12-501-708_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_2.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-17-502-703_1.fastq.gz 1 /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/input_1/Le1-13-502-701_1.fastq.gz 0 input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta script: /yoshitake/PortablePipeline/PortablePipeline/scripts/RNA-seq~SNPcall-bbmap-callvariants "$scriptdir"/mapping-illumina~bbmap broadinstitute/gatk:4.3.0.0 centos:centos6 quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 quay.io/biocontainers/picard:2.18.27--0 using docker + set -o pipefail ++ date +%s + time0=1677628606 + echo start at 1677628606 start at 1677628606 ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ grep '[.]gz$' ++ wc -l ++ true + '[' 0 = 1 ']' + ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ echo input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta ++ sed 's/[.]\(fa\|fasta\|fsa\|fna\)$//' + refbase=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_085646_962_10777 + echo pp20230301_085646_962_10777 ++ id -u ++ id -g + docker run --name pp20230301_085646_962_10777 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools faidx input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta + rm -f input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + FUNC_RUN_DOCKER quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict + PP_RUN_IMAGE=quay.io/biocontainers/picard:2.18.27--0 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_085647_842_32484 + echo pp20230301_085647_842_32484 ++ id -u ++ id -g + docker run --name pp20230301_085647_842_32484 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M CreateSequenceDictionary R=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:56:48 CreateSequenceDictionary ********** NOTE: Picard's command line syntax is changing. 
********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** CreateSequenceDictionary -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -O input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict ********** 23:56:49.237 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:56:49 GMT 2023] CreateSequenceDictionary OUTPUT=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict REFERENCE=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta TRUNCATE_NAMES_AT_WHITESPACE=true NUM_SEQUENCES=2147483647 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:56:49 GMT 2023] Executing as ?@1efc84b0078e on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT [Tue Feb 28 23:56:49 GMT 2023] picard.sam.CreateSequenceDictionary done. Elapsed time: 0.01 minutes. Runtime.totalMemory()=2147483648 + cat + mkdir -p output.temp output2 + ls output/Le1-1-501-701_1.fastq.gz.bam output/Le1-12-501-708_1.fastq.gz.bam output/Le1-13-502-701_1.fastq.gz.bam output/Le1-17-502-703_1.fastq.gz.bam + read i + xargs '-d\n' -I '{}' -P 1 bash -c '{}' ++ basename output/Le1-1-501-701_1.fastq.gz.bam .bam + j=Le1-1-501-701_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-1-501-701_1.fastq.gz.bam" O=output.temp/"Le1-1-501-701_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-1-501-701_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-1-501-701_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-1-501-701_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk --java-options -Xmx104857M SplitNCigarReads -R 
"input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-1-501-701_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-1-501-701_1.fastq.gz".bam' + read i ++ basename output/Le1-12-501-708_1.fastq.gz.bam .bam + j=Le1-12-501-708_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-12-501-708_1.fastq.gz.bam" O=output.temp/"Le1-12-501-708_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-12-501-708_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-12-501-708_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-12-501-708_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk --java-options -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-12-501-708_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-12-501-708_1.fastq.gz".bam' + read i ++ basename output/Le1-13-502-701_1.fastq.gz.bam .bam + j=Le1-13-502-701_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-13-502-701_1.fastq.gz.bam" O=output.temp/"Le1-13-502-701_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-13-502-701_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-13-502-701_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-13-502-701_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date 
+%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk --java-options -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-13-502-701_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-13-502-701_1.fastq.gz".bam' + read i ++ basename output/Le1-17-502-703_1.fastq.gz.bam .bam + j=Le1-17-502-703_1.fastq.gz + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/picard:2.18.27--0 picard -Xmx104857M AddOrReplaceReadGroups I="output/Le1-17-502-703_1.fastq.gz.bam" O=output.temp/"Le1-17-502-703_1.fastq.gz"_addrg.bam SO=coordinate RGID="Le1-17-502-703_1.fastq.gz" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="Le1-17-502-703_1.fastq.gz"; ' + echo -n '(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -h output.temp/"Le1-17-502-703_1.fastq.gz"_addrg.bam)|bash run-awk-replace.sh|(PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools view -Sb -o output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam); ' + echo -n 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 samtools index output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam; ' + echo 'PPDOCNAME=pp`date +%Y%m%d_%H%M%S_%3N`_$RANDOM; echo $PPDOCNAME >> /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants/pp-docker-list; docker run --name ${PPDOCNAME} -v $PWD:$PWD -w $PWD -u 2007:600 -i --rm broadinstitute/gatk:4.3.0.0 gatk --java-options -Xmx104857M SplitNCigarReads -R "input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta" -I output.temp/"Le1-17-502-703_1.fastq.gz"_addrg_repN.bam -O output2/"Le1-17-502-703_1.fastq.gz".bam' + read i /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:56:50 AddOrReplaceReadGroups ********** NOTE: Picard's command line syntax is changing. 
********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** AddOrReplaceReadGroups -I output/Le1-1-501-701_1.fastq.gz.bam -O output.temp/Le1-1-501-701_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-1-501-701_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-1-501-701_1.fastq.gz ********** 23:56:51.350 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:56:51 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-1-501-701_1.fastq.gz.bam OUTPUT=output.temp/Le1-1-501-701_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-1-501-701_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-1-501-701_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:56:51 GMT 2023] Executing as ?@a7e757820c00 on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT INFO 2023-02-28 23:56:51 AddOrReplaceReadGroups Created read-group ID=Le1-1-501-701_1.fastq.gz PL=Illumina LB=library SM=Le1-1-501-701_1.fastq.gz [Tue Feb 28 23:56:54 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.06 minutes. Runtime.totalMemory()=2617245696 23:57:02.083 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/gatk/gatk-package-4.3.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so 23:57:02.265 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:02.266 INFO SplitNCigarReads - The Genome Analysis Toolkit (GATK) v4.3.0.0 23:57:02.266 INFO SplitNCigarReads - For support and documentation go to https://software.broadinstitute.org/gatk/ 23:57:02.267 INFO SplitNCigarReads - Executing as ?@190270ec2003 on Linux v3.10.0-1160.36.2.el7.x86_64 amd64 23:57:02.267 INFO SplitNCigarReads - Java runtime: OpenJDK 64-Bit Server VM v1.8.0_242-8u242-b08-0ubuntu3~18.04-b08 23:57:02.267 INFO SplitNCigarReads - Start Date/Time: February 28, 2023 11:57:02 PM GMT 23:57:02.268 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:02.268 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:02.269 INFO SplitNCigarReads - HTSJDK Version: 3.0.1 23:57:02.269 INFO SplitNCigarReads - Picard Version: 2.27.5 23:57:02.269 INFO SplitNCigarReads - Built for Spark Version: 2.4.5 23:57:02.270 INFO SplitNCigarReads - HTSJDK Defaults.COMPRESSION_LEVEL : 2 23:57:02.270 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 23:57:02.270 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 23:57:02.270 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 23:57:02.270 INFO SplitNCigarReads - Deflater: IntelDeflater 23:57:02.271 INFO SplitNCigarReads - Inflater: IntelInflater 23:57:02.271 INFO SplitNCigarReads - GCS max retries/reopens: 20 23:57:02.271 INFO SplitNCigarReads - Requester pays: disabled 23:57:02.271 INFO SplitNCigarReads - Initializing engine 23:57:02.636 INFO SplitNCigarReads - 
Done initializing engine 23:57:02.682 INFO ProgressMeter - Starting traversal 23:57:02.682 INFO ProgressMeter - Current Locus Elapsed Minutes Reads Processed Reads/Minute 23:57:04.595 WARN IntelInflater - Zero Bytes Written : 0 23:57:04.598 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:57:04.599 INFO OverhangFixingManager - Overhang Fixing Manager saved 512 reads in the first pass 23:57:04.601 INFO SplitNCigarReads - Starting traversal pass 2 23:57:06.700 WARN IntelInflater - Zero Bytes Written : 0 23:57:06.701 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:57:06.702 INFO ProgressMeter - h1tg000096l:21503 0.1 291848 4357024.1 23:57:06.702 INFO ProgressMeter - Traversal complete. Processed 291848 total reads in 0.1 minutes. 23:57:07.655 INFO SplitNCigarReads - Shutting down engine [February 28, 2023 11:57:07 PM GMT] org.broadinstitute.hellbender.tools.walkers.rnaseq.SplitNCigarReads done. Elapsed time: 0.09 minutes. Runtime.totalMemory()=2649751552 Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx104857M -jar /gatk/gatk-package-4.3.0.0-local.jar SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-1-501-701_1.fastq.gz_addrg_repN.bam -O output2/Le1-1-501-701_1.fastq.gz.bam /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:57:09 AddOrReplaceReadGroups ********** NOTE: Picard's command line syntax is changing. ********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** AddOrReplaceReadGroups -I output/Le1-12-501-708_1.fastq.gz.bam -O output.temp/Le1-12-501-708_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-12-501-708_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-12-501-708_1.fastq.gz ********** 23:57:09.474 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:57:09 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-12-501-708_1.fastq.gz.bam OUTPUT=output.temp/Le1-12-501-708_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-12-501-708_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-12-501-708_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:57:09 GMT 2023] Executing as ?@8ed77b40bd2d on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT INFO 2023-02-28 23:57:09 AddOrReplaceReadGroups Created read-group ID=Le1-12-501-708_1.fastq.gz PL=Illumina LB=library SM=Le1-12-501-708_1.fastq.gz [Tue Feb 28 23:57:10 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.01 minutes. 
Runtime.totalMemory()=2147483648 23:57:15.597 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/gatk/gatk-package-4.3.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so 23:57:15.754 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:15.755 INFO SplitNCigarReads - The Genome Analysis Toolkit (GATK) v4.3.0.0 23:57:15.755 INFO SplitNCigarReads - For support and documentation go to https://software.broadinstitute.org/gatk/ 23:57:15.755 INFO SplitNCigarReads - Executing as ?@bb566256d80e on Linux v3.10.0-1160.36.2.el7.x86_64 amd64 23:57:15.756 INFO SplitNCigarReads - Java runtime: OpenJDK 64-Bit Server VM v1.8.0_242-8u242-b08-0ubuntu3~18.04-b08 23:57:15.756 INFO SplitNCigarReads - Start Date/Time: February 28, 2023 11:57:15 PM GMT 23:57:15.756 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:15.756 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:15.757 INFO SplitNCigarReads - HTSJDK Version: 3.0.1 23:57:15.757 INFO SplitNCigarReads - Picard Version: 2.27.5 23:57:15.757 INFO SplitNCigarReads - Built for Spark Version: 2.4.5 23:57:15.757 INFO SplitNCigarReads - HTSJDK Defaults.COMPRESSION_LEVEL : 2 23:57:15.757 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 23:57:15.757 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 23:57:15.758 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 23:57:15.758 INFO SplitNCigarReads - Deflater: IntelDeflater 23:57:15.758 INFO SplitNCigarReads - Inflater: IntelInflater 23:57:15.758 INFO SplitNCigarReads - GCS max retries/reopens: 20 23:57:15.758 INFO SplitNCigarReads - Requester pays: disabled 23:57:15.758 INFO SplitNCigarReads - Initializing engine 23:57:16.123 INFO SplitNCigarReads - Done initializing engine 23:57:16.170 INFO ProgressMeter - Starting traversal 23:57:16.171 INFO ProgressMeter - Current Locus Elapsed Minutes Reads Processed Reads/Minute 23:57:16.973 WARN IntelInflater - Zero Bytes Written : 0 23:57:16.976 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:57:16.976 INFO OverhangFixingManager - Overhang Fixing Manager saved 54 reads in the first pass 23:57:16.979 INFO SplitNCigarReads - Starting traversal pass 2 23:57:17.468 WARN IntelInflater - Zero Bytes Written : 0 23:57:17.469 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:57:17.470 INFO ProgressMeter - h1tg000069l:128641 0.0 36932 1707180.3 23:57:17.471 INFO ProgressMeter - Traversal complete. Processed 36932 total reads in 0.0 minutes. 23:57:17.720 INFO SplitNCigarReads - Shutting down engine [February 28, 2023 11:57:17 PM GMT] org.broadinstitute.hellbender.tools.walkers.rnaseq.SplitNCigarReads done. Elapsed time: 0.04 minutes. 
Runtime.totalMemory()=2528641024 Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx104857M -jar /gatk/gatk-package-4.3.0.0-local.jar SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-12-501-708_1.fastq.gz_addrg_repN.bam -O output2/Le1-12-501-708_1.fastq.gz.bam /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:57:18 AddOrReplaceReadGroups ********** NOTE: Picard's command line syntax is changing. ********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** AddOrReplaceReadGroups -I output/Le1-13-502-701_1.fastq.gz.bam -O output.temp/Le1-13-502-701_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-13-502-701_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-13-502-701_1.fastq.gz ********** 23:57:19.401 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:57:19 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-13-502-701_1.fastq.gz.bam OUTPUT=output.temp/Le1-13-502-701_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-13-502-701_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-13-502-701_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:57:19 GMT 2023] Executing as ?@8b08e65aee13 on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT INFO 2023-02-28 23:57:19 AddOrReplaceReadGroups Created read-group ID=Le1-13-502-701_1.fastq.gz PL=Illumina LB=library SM=Le1-13-502-701_1.fastq.gz [Tue Feb 28 23:57:24 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.09 minutes. 
Runtime.totalMemory()=2650800128 23:57:33.470 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/gatk/gatk-package-4.3.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so 23:57:33.619 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:33.619 INFO SplitNCigarReads - The Genome Analysis Toolkit (GATK) v4.3.0.0 23:57:33.619 INFO SplitNCigarReads - For support and documentation go to https://software.broadinstitute.org/gatk/ 23:57:33.619 INFO SplitNCigarReads - Executing as ?@d4eecf8c91c6 on Linux v3.10.0-1160.36.2.el7.x86_64 amd64 23:57:33.620 INFO SplitNCigarReads - Java runtime: OpenJDK 64-Bit Server VM v1.8.0_242-8u242-b08-0ubuntu3~18.04-b08 23:57:33.620 INFO SplitNCigarReads - Start Date/Time: February 28, 2023 11:57:33 PM GMT 23:57:33.620 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:33.620 INFO SplitNCigarReads - ------------------------------------------------------------ 23:57:33.621 INFO SplitNCigarReads - HTSJDK Version: 3.0.1 23:57:33.621 INFO SplitNCigarReads - Picard Version: 2.27.5 23:57:33.621 INFO SplitNCigarReads - Built for Spark Version: 2.4.5 23:57:33.621 INFO SplitNCigarReads - HTSJDK Defaults.COMPRESSION_LEVEL : 2 23:57:33.621 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 23:57:33.621 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 23:57:33.621 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 23:57:33.621 INFO SplitNCigarReads - Deflater: IntelDeflater 23:57:33.621 INFO SplitNCigarReads - Inflater: IntelInflater 23:57:33.621 INFO SplitNCigarReads - GCS max retries/reopens: 20 23:57:33.621 INFO SplitNCigarReads - Requester pays: disabled 23:57:33.621 INFO SplitNCigarReads - Initializing engine 23:57:33.964 INFO SplitNCigarReads - Done initializing engine 23:57:34.001 INFO ProgressMeter - Starting traversal 23:57:34.002 INFO ProgressMeter - Current Locus Elapsed Minutes Reads Processed Reads/Minute 23:57:37.245 WARN IntelInflater - Zero Bytes Written : 0 23:57:37.247 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:57:37.248 INFO OverhangFixingManager - Overhang Fixing Manager saved 1984 reads in the first pass 23:57:37.250 INFO SplitNCigarReads - Starting traversal pass 2 23:57:40.982 WARN IntelInflater - Zero Bytes Written : 0 23:57:40.983 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:57:40.984 INFO ProgressMeter - unmapped 0.1 491400 4223463.7 23:57:40.984 INFO ProgressMeter - Traversal complete. Processed 491400 total reads in 0.1 minutes. 23:57:42.424 INFO SplitNCigarReads - Shutting down engine [February 28, 2023 11:57:42 PM GMT] org.broadinstitute.hellbender.tools.walkers.rnaseq.SplitNCigarReads done. Elapsed time: 0.15 minutes. 
Runtime.totalMemory()=2900885504 Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx104857M -jar /gatk/gatk-package-4.3.0.0-local.jar SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-13-502-701_1.fastq.gz_addrg_repN.bam -O output2/Le1-13-502-701_1.fastq.gz.bam /usr/local/bin/picard: line 5: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8): No such file or directory INFO 2023-02-28 23:57:43 AddOrReplaceReadGroups ********** NOTE: Picard's command line syntax is changing. ********** ********** For more information, please see: ********** https://github.com/broadinstitute/picard/wiki/Command-Line-Syntax-Transition-For-Users-(Pre-Transition) ********** ********** The command line looks like this in the new syntax: ********** ********** AddOrReplaceReadGroups -I output/Le1-17-502-703_1.fastq.gz.bam -O output.temp/Le1-17-502-703_1.fastq.gz_addrg.bam -SO coordinate -RGID Le1-17-502-703_1.fastq.gz -RGLB library -RGPL Illumina -RGPU Illumina -RGSM Le1-17-502-703_1.fastq.gz ********** 23:57:44.226 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/usr/local/share/picard-2.18.27-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so [Tue Feb 28 23:57:44 GMT 2023] AddOrReplaceReadGroups INPUT=output/Le1-17-502-703_1.fastq.gz.bam OUTPUT=output.temp/Le1-17-502-703_1.fastq.gz_addrg.bam SORT_ORDER=coordinate RGID=Le1-17-502-703_1.fastq.gz RGLB=library RGPL=Illumina RGPU=Illumina RGSM=Le1-17-502-703_1.fastq.gz VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json USE_JDK_DEFLATER=false USE_JDK_INFLATER=false [Tue Feb 28 23:57:44 GMT 2023] Executing as ?@71e2a21c6f0b on Linux 3.10.0-1160.36.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 11.0.1+13-LTS; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: 2.18.27-SNAPSHOT INFO 2023-02-28 23:57:44 AddOrReplaceReadGroups Created read-group ID=Le1-17-502-703_1.fastq.gz PL=Illumina LB=library SM=Le1-17-502-703_1.fastq.gz [Tue Feb 28 23:57:58 GMT 2023] picard.sam.AddOrReplaceReadGroups done. Elapsed time: 0.23 minutes. 
Runtime.totalMemory()=2583691264 23:58:14.002 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/gatk/gatk-package-4.3.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so 23:58:14.183 INFO SplitNCigarReads - ------------------------------------------------------------ 23:58:14.184 INFO SplitNCigarReads - The Genome Analysis Toolkit (GATK) v4.3.0.0 23:58:14.184 INFO SplitNCigarReads - For support and documentation go to https://software.broadinstitute.org/gatk/ 23:58:14.184 INFO SplitNCigarReads - Executing as ?@acb80e86b8fa on Linux v3.10.0-1160.36.2.el7.x86_64 amd64 23:58:14.184 INFO SplitNCigarReads - Java runtime: OpenJDK 64-Bit Server VM v1.8.0_242-8u242-b08-0ubuntu3~18.04-b08 23:58:14.184 INFO SplitNCigarReads - Start Date/Time: February 28, 2023 11:58:13 PM GMT 23:58:14.185 INFO SplitNCigarReads - ------------------------------------------------------------ 23:58:14.185 INFO SplitNCigarReads - ------------------------------------------------------------ 23:58:14.185 INFO SplitNCigarReads - HTSJDK Version: 3.0.1 23:58:14.186 INFO SplitNCigarReads - Picard Version: 2.27.5 23:58:14.186 INFO SplitNCigarReads - Built for Spark Version: 2.4.5 23:58:14.186 INFO SplitNCigarReads - HTSJDK Defaults.COMPRESSION_LEVEL : 2 23:58:14.186 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false 23:58:14.186 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true 23:58:14.186 INFO SplitNCigarReads - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false 23:58:14.186 INFO SplitNCigarReads - Deflater: IntelDeflater 23:58:14.186 INFO SplitNCigarReads - Inflater: IntelInflater 23:58:14.186 INFO SplitNCigarReads - GCS max retries/reopens: 20 23:58:14.186 INFO SplitNCigarReads - Requester pays: disabled 23:58:14.186 INFO SplitNCigarReads - Initializing engine 23:58:14.557 INFO SplitNCigarReads - Done initializing engine 23:58:14.607 INFO ProgressMeter - Starting traversal 23:58:14.607 INFO ProgressMeter - Current Locus Elapsed Minutes Reads Processed Reads/Minute 23:58:24.475 WARN IntelInflater - Zero Bytes Written : 0 23:58:24.477 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:58:24.478 INFO OverhangFixingManager - Overhang Fixing Manager saved 5569 reads in the first pass 23:58:24.479 INFO SplitNCigarReads - Starting traversal pass 2 23:58:24.616 INFO ProgressMeter - h1tg000001l:746357 0.2 695000 4167083.0 23:58:37.192 INFO ProgressMeter - h1tg000051l:135989 0.4 1240000 3294367.7 23:58:39.277 WARN IntelInflater - Zero Bytes Written : 0 23:58:39.278 INFO SplitNCigarReads - 0 read(s) filtered by: AllowAllReadsReadFilter 23:58:39.278 INFO ProgressMeter - h1tg000097l:38077 0.4 1352628 3289731.7 23:58:39.278 INFO ProgressMeter - Traversal complete. Processed 1352628 total reads in 0.4 minutes. 23:58:42.243 INFO SplitNCigarReads - Shutting down engine [February 28, 2023 11:58:42 PM GMT] org.broadinstitute.hellbender.tools.walkers.rnaseq.SplitNCigarReads done. Elapsed time: 0.47 minutes. 
Runtime.totalMemory()=5519179776 Using GATK jar /gatk/gatk-package-4.3.0.0-local.jar Running: java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=2 -Xmx104857M -jar /gatk/gatk-package-4.3.0.0-local.jar SplitNCigarReads -R input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta -I output.temp/Le1-17-502-703_1.fastq.gz_addrg_repN.bam -O output2/Le1-17-502-703_1.fastq.gz.bam + inputbams= + multiflag= ++ ls output2/Le1-1-501-701_1.fastq.gz.bam output2/Le1-12-501-708_1.fastq.gz.bam output2/Le1-13-502-701_1.fastq.gz.bam output2/Le1-17-502-703_1.fastq.gz.bam + for i in '`ls output2/*.bam`' + '[' '' = '' ']' + inputbams=output2/Le1-1-501-701_1.fastq.gz.bam + for i in '`ls output2/*.bam`' + '[' output2/Le1-1-501-701_1.fastq.gz.bam = '' ']' + multiflag=multisample=t + inputbams+=,output2/Le1-12-501-708_1.fastq.gz.bam + for i in '`ls output2/*.bam`' + '[' output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam = '' ']' + multiflag=multisample=t + inputbams+=,output2/Le1-13-502-701_1.fastq.gz.bam + for i in '`ls output2/*.bam`' + '[' output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam,output2/Le1-13-502-701_1.fastq.gz.bam = '' ']' + multiflag=multisample=t + inputbams+=,output2/Le1-17-502-703_1.fastq.gz.bam + FUNC_RUN_DOCKER quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 callvariants.sh -Xmx104857M in=output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam,output2/Le1-13-502-701_1.fastq.gz.bam,output2/Le1-17-502-703_1.fastq.gz.bam ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta vcf=output.vcf multisample=t ploidy=2 + PP_RUN_IMAGE=quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 + shift + PP_RUN_DOCKER_CMD=("${@}") ++ date +%Y%m%d_%H%M%S_%3N + PPDOCNAME=pp20230301_085842_649_26537 + echo pp20230301_085842_649_26537 ++ id -u ++ id -g + docker run --name pp20230301_085842_649_26537 -v /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants:/yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -w /yoshitake/test/RNA-seq~SNPcall-bbmap-callvariants -u 2007:600 -i --rm quay.io/biocontainers/bbmap:38.96--h5c4e2a8_1 callvariants.sh -Xmx104857M in=output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam,output2/Le1-13-502-701_1.fastq.gz.bam,output2/Le1-17-502-703_1.fastq.gz.bam ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta vcf=output.vcf multisample=t ploidy=2 java -ea -Xmx104857M -Xms104857M -cp /usr/local/opt/bbmap-38.96-1/current/ var2.CallVariants -Xmx104857M in=output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam,output2/Le1-13-502-701_1.fastq.gz.bam,output2/Le1-17-502-703_1.fastq.gz.bam ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta vcf=output.vcf multisample=t ploidy=2 Executing var2.CallVariants2 [-Xmx104857M, in=output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam,output2/Le1-13-502-701_1.fastq.gz.bam,output2/Le1-17-502-703_1.fastq.gz.bam, ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta, vcf=output.vcf, multisample=t, ploidy=2] Calculating which variants pass filters. Processing sample Le1-1-501-701_1. Loading variants. Could not find sambamba. Found samtools 1.15 Time: 1.830 seconds. Processing variants. Time: 0.234 seconds. Counting nearby variants. Time: 0.065 seconds. Processing sample Le1-12-501-708_1. Loading variants. Time: 0.087 seconds. Processing variants. Time: 0.020 seconds. Counting nearby variants. Time: 0.003 seconds. Processing sample Le1-13-502-701_1. Loading variants. 
Time: 0.484 seconds. Processing variants. Time: 0.047 seconds. Counting nearby variants. Time: 0.032 seconds. Processing sample Le1-17-502-703_1. Loading variants. Time: 1.155 seconds. Processing variants. Time: 0.182 seconds. Counting nearby variants. Time: 0.107 seconds. 100676 variants passed filters. 4.370 seconds. Processing second pass. Processing sample Le1-1-501-701_1. Loading variants. Time: 0.366 seconds. Processing variants. Time: 0.195 seconds. Processing sample Le1-12-501-708_1. Loading variants. Time: 0.115 seconds. Processing variants. Time: 0.085 seconds. Processing sample Le1-13-502-701_1. Loading variants. Time: 0.435 seconds. Processing variants. Time: 0.083 seconds. Processing sample Le1-17-502-703_1. Loading variants. Time: 1.151 seconds. Processing variants. Time: 0.118 seconds. Finished second pass. Writing output. Merging [(Le1-1-501-701_1, individual_Le1-1-501-701_1.vcf.gz), (Le1-12-501-708_1, individual_Le1-12-501-708_1.vcf.gz), (Le1-13-502-701_1, individual_Le1-13-502-701_1.vcf.gz), (Le1-17-502-703_1, individual_Le1-17-502-703_1.vcf.gz)] Time: 1.531 seconds. 100676 of 1912649 variants passed filters (5.2637%). Substitutions: 90317 89.7% Deletions: 5743 5.7% Insertions: 4616 4.6% Variation Rate: 1/539 Time: 10.320 seconds. Reads Processed: 1034k 100.27k reads/sec Bases Processed: 156m 15.14m bases/sec ++ date + echo completion at Wed Mar 1 08:58:54 JST 2023 completion at Wed Mar 1 08:58:54 JST 2023 ++ date +%s + time_fin=1677628734 ++ echo 'scale=2; (1677628734 - 1677628606)/60' ++ bc + echo -e 'Total running time is 2.13 min' Total running time is 2.13 min + echo 'Run completed!' Run completed! + post_processing + '[' 1 = 1 ']' + echo 0 + exit PID: 412203
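The traces above wrap every step in docker run commands that are assembled on the fly, which makes the underlying workflow hard to read. Stripped of the container plumbing, the reference preparation and per-sample post-mapping stage that was just executed amounts to the sketch below. This is an editor's summary, not the pipeline script itself: it assumes samtools, picard, and gatk are on PATH, uses run-awk-replace.sh (the pipeline's own helper seen in the trace), and takes all file names and memory settings from the log.

ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta
samtools faidx "$ref"                                  # .fai index for the assembly
picard -Xmx104857M CreateSequenceDictionary R="$ref" O=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.dict

for bam in output/*.bam; do                            # one BBMap-produced BAM per sample
    j=$(basename "$bam" .bam)
    # add the read groups GATK requires
    picard -Xmx104857M AddOrReplaceReadGroups I="$bam" O=output.temp/"$j"_addrg.bam \
        SO=coordinate RGID="$j" RGLB=library RGPL=Illumina RGPU=Illumina RGSM="$j"
    # rewrite the records with the pipeline's awk helper, then re-compress and index
    samtools view -h output.temp/"$j"_addrg.bam | bash run-awk-replace.sh \
        | samtools view -Sb -o output.temp/"$j"_addrg_repN.bam
    samtools index output.temp/"$j"_addrg_repN.bam
    # split reads whose CIGAR contains N (spliced RNA-seq alignments) before variant calling
    gatk --java-options "-Xmx104857M" SplitNCigarReads -R "$ref" \
        -I output.temp/"$j"_addrg_repN.bam -O output2/"$j".bam
done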
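The variant-calling step itself is a single BBMap callvariants.sh invocation over the four SplitNCigarReads outputs; the command below is the one shown inside the docker wrapper in the log. The merged multi-sample calls are written to output.vcf, and per-sample calls are also written as individual_<sample>.vcf.gz, as the Merging line above indicates.

# Multi-sample variant calling with BBMap, as invoked in the log (docker wrapper omitted):
callvariants.sh -Xmx104857M \
    in=output2/Le1-1-501-701_1.fastq.gz.bam,output2/Le1-12-501-708_1.fastq.gz.bam,output2/Le1-13-502-701_1.fastq.gz.bam,output2/Le1-17-502-703_1.fastq.gz.bam \
    ref=input_2/Shiitake_XR1.asm.bp.hap1.p_ctg.fasta \
    vcf=output.vcf multisample=t ploidy=2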