[ https://issues.apache.org/jira/browse/SPARK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Asmaa Ali updated SPARK-16819:
-------------------------------
Description:

What is the reason for this exception?

cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class SparkBWA --master yarn-cluster --deploy-mode cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589

Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=10000
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=558
Adding default property: spark.yarn.am.memory=5586m
Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-client
Adding default property: spark.executor.memory=5586m
Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=2
Adding default property: spark.yarn.executor.memoryOverhead=558
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=10000
Adding default property: spark.akka.frameSize=512

Parsed arguments:
  master                  yarn-cluster
  deployMode              cluster
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                file:/home/cancerdetector/SparkBWA/build/./bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
  spark.yarn.am.memoryOverhead -> 558
  spark.driver.memory -> 1500m
  spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
  spark.executor.memory -> 5586m
  spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
  spark.eventLog.enabled -> true
  spark.scheduler.minRegisteredResourcesRatio -> 0.0
  spark.dynamicAllocation.maxExecutors -> 10000
  spark.akka.frameSize -> 512
  spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.sql.parquet.cacheMetadata -> false
  spark.shuffle.service.enabled -> true
  spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.dynamicAllocation.initialExecutors -> 10000
  spark.dynamicAllocation.minExecutors -> 1
  spark.yarn.executor.memoryOverhead -> 558
  spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.yarn.am.memory -> 5586m
  spark.driver.maxResultSize -> 1920m
  spark.master -> yarn-client
  spark.dynamicAllocation.enabled -> true
  spark.executor.cores -> 2

Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--name SparkBWA --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip --jar file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar --class SparkBWA --arg -algorithm --arg mem --arg -reads --arg paired --arg -index --arg /Data/HumanBase/hg38 --arg -partitions --arg 32 --arg ERR000589_1.filt.fastq --arg ERR000589_2.filt.fastqhb --arg Output_ERR000589

System properties:
  spark.yarn.am.memoryOverhead -> 558
  spark.driver.memory -> 1500m
  spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
  spark.executor.memory -> 1500m
  spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
  spark.eventLog.enabled -> true
  spark.scheduler.minRegisteredResourcesRatio -> 0.0
  SPARK_SUBMIT -> true
  spark.dynamicAllocation.maxExecutors -> 10000
  spark.akka.frameSize -> 512
  spark.sql.parquet.cacheMetadata -> false
  spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.app.name -> SparkBWA
  spark.shuffle.service.enabled -> true
  spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.dynamicAllocation.initialExecutors -> 10000
  spark.dynamicAllocation.minExecutors -> 1
  spark.yarn.executor.memoryOverhead -> 558
  spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.submit.deployMode -> cluster
  spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.yarn.am.memory -> 5586m
  spark.driver.maxResultSize -> 1920m
  spark.master -> yarn-cluster
  spark.dynamicAllocation.enabled -> true
  spark.executor.cores -> 1

Classpath elements:

spark.yarn.am.memory is set but does not apply in cluster mode.
spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.

16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0106
Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0106 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

When I tried to check the AM and executor logs, the command to fetch them didn't work (I have set yarn.log-aggregation-enable to true), so I tried to manually look in the NM's log directory to see the detailed application logs.
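For reference, the settings I used for log aggregation look like the sketch below. This is a minimal yarn-site.xml fragment, not my exact cluster config; both property names are standard Hadoop settings, and /tmp/logs is the Hadoop default for the remote log directory. Note that NodeManagers must be restarted after changing these, and aggregation only covers applications started after the restart:

```xml
<!-- yarn-site.xml: minimal log-aggregation sketch (standard Hadoop properties).
     Restart the NodeManagers for changes to take effect. -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- HDFS directory where aggregated container logs land (Hadoop default) -->
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
</property>
```

With aggregation active, the container logs should normally be retrievable after the application finishes with `yarn logs -applicationId application_1467990031555_0106`.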
Here are the application logs from the NM's log file: 2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-31 01:12:40,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/SparkBWA.jar is closed by DFSClient_NONMAPREDUCE_-762268348_1 2016-07-31 01:12:40,419 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip 2016-07-31 01:12:40,445 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-31 01:12:40,446 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added 
to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-31 01:12:40,448 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1 2016-07-31 01:12:40,495 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip 2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-31 01:12:40,509 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1 2016-07-31 
01:12:44,720 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_1.inprogress 2016-07-31 01:12:44,877 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_1.inprogress for DFSClient_NONMAPREDUCE_-1111833453_14 2016-07-31 01:12:45,373 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231 2016-07-31 01:12:45,375 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231 2016-07-31 01:12:45,379 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_1.inprogress is closed by DFSClient_NONMAPREDUCE_-1111833453_14 2016-07-31 01:12:45,843 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7989393-f278-477c-8e83-ff5da9079e8a is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:12:49,914 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_2.inprogress 2016-07-31 01:12:50,100 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_2.inprogress for DFSClient_NONMAPREDUCE_378341726_14 2016-07-31 01:12:50,737 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231 2016-07-31 01:12:50,738 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231 2016-07-31 01:12:50,742 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_2.inprogress is closed by DFSClient_NONMAPREDUCE_378341726_14 2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742335_1511 10.132.0.3:50010 10.132.0.4:50010 2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742337_1513 10.132.0.3:50010 10.132.0.4:50010 2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742336_1512 10.132.0.3:50010 10.132.0.4:50010 2016-07-31 01:12:51,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.3:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511] 2016-07-31 01:12:54,804 INFO 
BlockStateChange: BLOCK* BlockManager: ask 10.132.0.4:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511] 2016-07-31 01:12:55,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.46380a1f-b5fd-4924-96aa-f59dcae0cbec is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:13:05,882 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 244 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 234 SyncTimes(ms): 221 2016-07-31 01:13:05,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7273ee28-eb1c-4fe2-98d2-c5a20ebe4ffa is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:13:15,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f640743-d06c-4583-ac95-9d520dc8f301 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:13:25,902 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.bc63864c-0267-47b5-bcc1-96ba81d6c9a5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:13:35,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93557793-2ba2-47e8-b54c-234c861b6e6c is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:13:45,918 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0fdf083c-3c53-4051-af16-d579f700962e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:13:55,927 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.834632f1-d9c6-4e14-9354-72f8c18f66d0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:14:05,933 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 262 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 252 SyncTimes(ms): 236 2016-07-31 01:14:05,936 INFO org.apache.hadoop.hdfs.StateChange: 
DIR* completeFile: /user/spark/eventlog/.d06ef3b4-873f-464d-9cd0-e360da48e194 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:14:15,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.32ccba74-5f6c-45fc-b5db-26efb1b840e2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:14:25,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fef919cd-9952-4af8-a49a-e6dd2aa032f1 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:14:35,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.77ffdf36-8e42-43d8-9c1f-df6f3d11700d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:14:45,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c31cfcbb-b47c-4169-ab0f-7ae87d4f815d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:14:55,976 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6429570d-fb0a-4117-bb12-127a67e0a0b7 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:15:05,981 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 280 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 270 SyncTimes(ms): 253 2016-07-31 01:15:05,984 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8030b18d-05f2-4520-b5c4-2fe42338b92b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:15:15,991 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f608a0f4-e730-43cd-a19d-da57caac346e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:15:25,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9d5a1f80-2f2a-43a7-84f1-b26a8c90a98f is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:15:36,007 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.279e96fc-180c-47a5-a3ba-cfda581eedad is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:15:46,015 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a85bbf52-61f4-4899-98b1-23615a549774 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:15:56,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.80613e8e-7015-4aeb-81df-49884bd0eb5e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:16:06,028 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 298 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 288 SyncTimes(ms): 267 2016-07-31 01:16:06,031 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2be7fc48-bd1c-4042-88e4-239b1c630458 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:16:16,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.40fc68a6-f003-4e35-b4b3-50bd3c4a0c82 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:16:26,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.97e7d15c-4d28-4089-b4a5-9f0935a72589 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:16:36,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.84d8e78d-90fd-419f-9000-fa04ab56955e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:16:46,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6691cc3e-6969-4a8f-938f-272d1c96701d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:16:56,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.077143b6-281a-468c-8b2c-bcb6cd3bc27a is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:17:06,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 316 Total time for 
transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 306 SyncTimes(ms): 284 2016-07-31 01:17:06,073 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.817d1886-aea2-450a-a586-08677dc18d60 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:17:16,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.abd46886-1359-4c5e-8276-ea4f2969411f is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:17:26,087 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.24625260-59be-4a9b-b47b-b8d5b76cb789 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:17:36,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.11630782-e50e-4260-a0da-99845bc3f1db is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:17:46,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.16cdd027-f1b8-4cbf-a30c-2f1712f4abb5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:17:56,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93fb2e86-2fec-4069-b73b-632750fda603 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:18:06,116 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 334 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 324 SyncTimes(ms): 300 2016-07-31 01:18:06,119 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b19fddda-ea90-49ab-b44d-434cce28cb67 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:18:16,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d81ab189-bde5-4878-b82b-903983466f86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:18:26,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.e5b51632-f714-4814-b896-59bba137b42d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:18:36,144 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.39791121-9399-4a22-a50c-90eaddf31ffb is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:18:46,153 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.861c269b-5466-4855-84fd-587ed3306012 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:18:56,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8a9ff721-bd56-4bea-b399-31bfaabe8c7c is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:19:06,168 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 352 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 342 SyncTimes(ms): 313 2016-07-31 01:19:06,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.492bf987-4991-4533-80e2-678efa843cb9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:19:16,178 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9294c0c6-43db-4f6d-9d31-f493143b6baf is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:19:26,187 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.341dd131-c14c-4147-bcbc-849d1d6bba8c is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:19:36,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.56f92e8e-ef93-4279-a57f-472dd5d8f399 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:19:46,204 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5ddcda82-b501-4043-bb54-a29902d9d234 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:19:56,212 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.31e7517b-2ef3-458c-9979-324d7a96302f is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:20:06,218 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 370 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 360 SyncTimes(ms): 329 2016-07-31 01:20:06,220 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5251f5df-0957-4008-b664-8d82eaa9789e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:20:16,229 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.3320b948-2478-4807-9ab3-d23e4945765e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:20:26,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0928c940-e57d-4a34-a7dc-53dade7ff909 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:20:36,246 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6240fcdf-696e-49c4-a883-3eda5ab89b4d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:20:46,254 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5622850e-b7b0-458a-9ffa-89e134fa3fda is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:20:56,262 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.faa076e8-490c-489f-8183-778325e0b144 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:21:06,268 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 388 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 378 SyncTimes(ms): 347 2016-07-31 01:21:06,270 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.18b2464e-9d14-4bae-95d9-f261edbdee1b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:21:16,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.6c53dd52-3996-4541-b368-e8406f99f68e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:21:26,287 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8b5ac93c-b268-432d-9236-48c004c33743 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:21:36,303 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.22a03e6f-4531-466c-af28-e0797d6b803e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:21:46,311 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1df0d173-6432-481f-af97-6632660700b0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:21:56,319 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.4095d5a1-ba2d-4966-ad13-99843c51ee91 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:22:06,325 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 406 Total time for transactions(ms): 8 Number of transactions batched in Syncs: 0 Number of syncs: 396 SyncTimes(ms): 362 2016-07-31 01:22:06,328 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f35e73f9-842d-4fc2-96b3-9b70df17e7e3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:22:16,337 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fdfa32ef-5c3c-48a3-9d15-0edc1b9d5072 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:22:26,345 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0315d9f7-ea5c-4a58-ad68-3f942d97676a is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:22:36,353 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.eecbddee-6bfb-44b6-97ef-1b5eece8a982 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:22:46,362 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.2c363a5b-bd43-47c5-9050-b15f1f6ade77 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-31 01:22:56,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6f4e8030-39a7-4fc8-b551-5b1d88e0885e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
-> cluster spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog spark.yarn.am.memory -> 5586m spark.driver.maxResultSize -> 1920m spark.master -> yarn-cluster spark.dynamicAllocation.enabled -> true spark.executor.cores -> 1 Classpath elements: spark.yarn.am.memory is set but does not apply in cluster mode. spark.yarn.am.memoryOverhead is set but does not apply in cluster mode. 16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032 16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0106 Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0106 finished with failed status at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034) at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081) at org.apache.spark.deploy.yarn.Client.main(Client.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) When I tried to check the AM and executor logs, the command didn't work (I have set yarn.log-aggregation-enable to true), so I tried to manually access the NM's log dir to see the detailed application logs.
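Since log aggregation was reportedly enabled, the AM and executor logs should normally be retrievable with the `yarn logs` CLI once the application has finished. A minimal sketch, assuming aggregation is actually active on the cluster and using the application id from the submission output above:

```shell
# Fetch the aggregated AM and executor logs for the failed run.
# Requires yarn.log-aggregation-enable=true in yarn-site.xml and a finished
# application; add -appOwner <user> when fetching another user's logs.
yarn logs -applicationId application_1467990031555_0106

# If aggregation is not picking the logs up, the raw container logs remain
# on each NodeManager under the directories configured by
# yarn.nodemanager.log-dirs until they are deleted or uploaded.
```

If `yarn logs` reports that aggregation has not completed, checking the HDFS path configured by `yarn.nodemanager.remote-app-log-dir` can show whether any container logs were uploaded at all.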
Here are the application logs from the NM's log file: 2016-07-30 19:37:23,620 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW], ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar 2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW], ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]} size 0 2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW], ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]} size 0 2016-07-30 19:37:23,812 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar is closed by DFSClient_NONMAPREDUCE_606595546_1 2016-07-30 19:37:23,843 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip 2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: 
blockMap updated: 10.132.0.4:50010 is added to blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-30 19:37:23,864 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip is closed by DFSClient_NONMAPREDUCE_606595546_1 2016-07-30 19:37:23,911 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip 2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0 2016-07-30 19:37:23,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip is closed by DFSClient_NONMAPREDUCE_606595546_1 2016-07-30 19:37:26,235 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742332_1508 10.132.0.3:50010 10.132.0.4:50010 2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742334_1510 10.132.0.3:50010 10.132.0.4:50010 2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742333_1509 10.132.0.3:50010 10.132.0.4:50010 2016-07-30 19:37:26,961 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.3:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, blk_1073742334_1510] 2016-07-30 19:37:28,791 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1b2f4ed4-0992-4bf3-a453-4c02e9ce00fe is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:37:29,961 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.4:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, blk_1073742334_1510] 2016-07-30 19:37:38,799 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a0ca1b29-3022-4d1c-a868-4710d56903f9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:37:48,806 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fa70676f-ce52-4ddf-8fb6-1649284f5da0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:37:58,814 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7550f1fe-81e1-4a4f-9a72-5210dbae1a31 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:38:08,819 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of 
transactions: 674 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 0 Number of syncs: 668 SyncTimes(ms): 628 2016-07-30 19:38:08,822 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f6d27b3c-f60d-4c70-b9eb-9a682c783cf9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:38:18,830 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.33f22e09-343f-4192-b194-a4617ba6fde5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:38:28,838 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9a90102c-bb41-42e8-ab5f-285e74f14388 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:38:38,846 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f9a82533-de04-4da8-9054-f7f74f781351 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:38:48,854 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.96d8dfad-bcfa-4116-b159-62caa493208d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:38:58,862 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2c24d60a-c76e-4c6e-a6f2-868b6f7d746b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:39:08,867 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 692 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 686 SyncTimes(ms): 643 2016-07-30 19:39:08,870 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.200cfa9e-9429-4c9f-9227-aad743d833d7 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:39:18,878 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b2c007fb-0334-4539-b83f-152069a0cde9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:39:28,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.c5cc9039-11de-4a18-aa1d-95d16db8dcf9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:39:38,893 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5b18a8cc-18d2-404e-aed4-799257e460d2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:39:48,901 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.82de795e-9c85-4b03-b596-d6dcdee6eaa3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:39:58,909 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c724a7b0-722b-4207-b946-f859fe2f10cc is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:40:08,914 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 710 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 704 SyncTimes(ms): 659 2016-07-30 19:40:08,917 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.46ce84b2-885c-497a-8b9f-8f3202a317c2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:40:18,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5fa59a96-cda0-4820-b1ec-38d120ff5dca is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:40:29,006 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f45738e-9626-4713-b39d-3883f0408146 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:40:39,014 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.005ce47c-ef57-4d4c-9a2f-57c32927aca1 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:40:49,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1f889794-c1e6-4054-a533-7f43ee06966b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:40:59,029 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.bc953f0d-287e-4745-b862-cfdd713e3777 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:41:09,034 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 728 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 722 SyncTimes(ms): 675 2016-07-30 19:41:09,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5129bf62-08d3-4171-9591-57a5b004bb34 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:41:19,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2f78852f-309c-45ef-ae9e-38b46c705e98 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:41:29,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b0dc7906-651d-4b26-b683-1799b325ba8d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:41:39,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.cbcca99f-bedc-43d8-a890-a69c18b29b43 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:41:49,067 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7ea8f3d6-dfd8-4080-8a45-a42419303fa0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:41:59,074 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1f22a8fd-ccb7-4138-b9f9-ab1ff1963b02 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:42:09,078 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 746 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 740 SyncTimes(ms): 691 2016-07-30 19:42:09,081 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6b4b0b45-00bf-47d6-bc2b-9dc149e10f01 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:42:19,089 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.2a1a8c1e-1b8b-485d-a108-41ea8087bafe is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:42:29,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8c1b7511-83b2-4584-ab14-408a9e85d0c4 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:42:39,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.216d4363-b070-47c3-97ac-f0eac64ed411 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:42:49,110 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5d5cbb0a-8cad-41be-ba17-388b9fc955c4 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:42:59,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c805224b-1833-4dba-8cf5-80164b3ecd7b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:43:09,121 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 764 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 758 SyncTimes(ms): 707 2016-07-30 19:43:09,125 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.51715f99-5d67-4fa7-907b-7522fcca03c2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:43:19,132 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.95920ee2-d9e2-41f4-a9f6-a495560af73f is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:43:29,141 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0d4d5099-21d1-4e3f-84e0-7623511c542c is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:43:39,148 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b6c93d4f-040c-4b9e-a89e-15313efd13ce is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:43:49,157 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.35ed35e6-2c7d-4a45-ae4f-afaf538afc78 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:43:59,164 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.49c44bf3-ea11-4df1-ac71-a26203e9abba is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:44:09,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 782 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 776 SyncTimes(ms): 725 2016-07-30 19:44:09,173 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.060f7d11-d341-4cab-8925-9b6203316744 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:44:19,181 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.666c8d61-405e-49bd-b2d0-939c920b6cd2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:44:29,188 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.433f0daa-3386-44a6-b6b1-0285e9f5b176 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:44:39,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1e840f6a-999b-4e1d-8eda-a95c409e351c is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:44:49,206 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c0df4079-d352-4aae-8392-9596f355c408 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:44:59,215 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.df28952f-5a2a-411b-b72d-49380b1ac88e is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:45:09,221 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 800 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 794 SyncTimes(ms): 743 2016-07-30 19:45:09,224 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: 
/user/spark/eventlog/.5515b3ca-de5d-46df-a49c-c07d5c09969d is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:45:19,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.3991cd72-3fb2-48a4-8083-5327d82be73b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:45:29,243 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5233a5f3-e15a-4bae-9baa-5d81b5da0459 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:45:39,252 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6097d0dd-d3c4-482e-8fb7-baa22602fb53 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:45:49,261 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f94ba7f7-313b-4387-a447-59214ddf6ecc is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:45:59,269 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.864e23f9-2b2a-44f4-b11d-e4d48249c7f3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:46:09,275 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 818 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 812 SyncTimes(ms): 761 2016-07-30 19:46:09,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.96da4a90-afeb-4fe9-84eb-2d759785d428 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:46:19,288 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a5003566-95ab-4a4b-a1f3-7de6302d26a0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:46:29,296 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.65acb8c8-9f16-49cc-951c-01dadd298e86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:46:39,306 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a3281e71-6713-422e-b43f-5cd9500f8dd2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:49,314 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.343366ba-49fc-4b9a-b4d8-7a6b6c8683e0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:46:59,323 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.88395bcf-4668-4a9e-8586-69de43e7e0b9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:47:09,328 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 836 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 830 SyncTimes(ms): 779 2016-07-30 19:47:09,331 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c3c51904-dda5-4e25-af9c-6188615063d5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:47:19,338 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.163b0948-28a7-4a1e-9a09-a8de560d0200 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:47:29,346 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b726de12-176b-47c0-964c-0fa5196f626f is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:47:39,354 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8eddb5b2-746c-4852-8b15-f15c7a7068f0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:47:49,364 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f467a89-42d3-4b2e-ae9e-36714bd417f3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:47:59,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.074a8130-f4ba-4be6-a614-e39a252cd57b is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:48:09,380 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 854 Total time for transactions(ms): 
14 Number of transactions batched in Syncs: 0 Number of syncs: 848 SyncTimes(ms): 797 2016-07-30 19:48:09,383 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7e19c12-827d-4ca7-87e9-1e3b8ab01c01 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:48:19,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a06d3b48-6a08-4294-9b6a-7c8ffcebef52 is closed by DFSClient_NONMAPREDUCE_-1615501432_1 2016-07-30 19:48:29,401 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.89fc2955-136b-400f-a78c-c439726e4964 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
> Exception in thread "main" org.apache.spark.SparkException: Application application finished with failed status
> ---------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-16819
> URL: https://issues.apache.org/jira/browse/SPARK-16819
> Project: Spark
> Issue Type: Question
> Components: Streaming, YARN
> Reporter: Asmaa Ali
> Labels: beginner
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> What is the reason of this exception ?!
> cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit > --class SparkBWA --master yarn-cluster --deploy-mode cluster --conf > spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m > --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose > ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 > -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb > Output_ERR000589 > Using properties file: /usr/lib/spark/conf/spark-defaults.conf > Adding default property: > spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > Adding default property: > spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog > Adding default property: spark.eventLog.enabled=true > Adding default property: spark.driver.maxResultSize=1920m > Adding default property: spark.shuffle.service.enabled=true > Adding default property: > spark.yarn.historyServer.address=cluster-cancerdetector-m:18080 > Adding default property: spark.sql.parquet.cacheMetadata=false > Adding default property: spark.driver.memory=3840m > Adding default property: spark.dynamicAllocation.maxExecutors=10000 > Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0 > Adding default property: spark.yarn.am.memoryOverhead=558 > Adding default property: spark.yarn.am.memory=5586m > Adding default property: > spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > Adding default property: spark.master=yarn-client > Adding default property: spark.executor.memory=5586m > Adding default property: > spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog > Adding default property: spark.dynamicAllocation.enabled=true > Adding default property: spark.executor.cores=2 > Adding default property: spark.yarn.executor.memoryOverhead=558 > Adding default property: 
spark.dynamicAllocation.minExecutors=1 > Adding default property: spark.dynamicAllocation.initialExecutors=10000 > Adding default property: spark.akka.frameSize=512 > Parsed arguments: > master yarn-cluster > deployMode cluster > executorMemory 1500m > executorCores 1 > totalExecutorCores null > propertiesFile /usr/lib/spark/conf/spark-defaults.conf > driverMemory 1500m > driverCores null > driverExtraClassPath null > driverExtraLibraryPath null > driverExtraJavaOptions > -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > supervise false > queue null > numExecutors null > files null > pyFiles null > archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip > mainClass SparkBWA > primaryResource > file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar > name SparkBWA > childArgs [-algorithm mem -reads paired -index > /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq > ERR000589_2.filt.fastqhb Output_ERR000589] > jars null > packages null > packagesExclusions null > repositories null > verbose true > Spark properties used, including those specified through > --conf and those from the properties file > /usr/lib/spark/conf/spark-defaults.conf: > spark.yarn.am.memoryOverhead -> 558 > spark.driver.memory -> 1500m > spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar > spark.executor.memory -> 5586m > spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080 > spark.eventLog.enabled -> true > spark.scheduler.minRegisteredResourcesRatio -> 0.0 > spark.dynamicAllocation.maxExecutors -> 10000 > spark.akka.frameSize -> 512 > spark.executor.extraJavaOptions -> > -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > spark.sql.parquet.cacheMetadata -> false > spark.shuffle.service.enabled -> true > spark.history.fs.logDirectory -> > hdfs://cluster-cancerdetector-m/user/spark/eventlog > spark.dynamicAllocation.initialExecutors -> 10000 > spark.dynamicAllocation.minExecutors -> 1 > 
spark.yarn.executor.memoryOverhead -> 558 > spark.driver.extraJavaOptions -> > -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog > spark.yarn.am.memory -> 5586m > spark.driver.maxResultSize -> 1920m > spark.master -> yarn-client > spark.dynamicAllocation.enabled -> true > spark.executor.cores -> 2 > > Main class: > org.apache.spark.deploy.yarn.Client > Arguments: > --name > SparkBWA > --driver-memory > 1500m > --executor-memory > 1500m > --executor-cores > 1 > --archives > file:/home/cancerdetector/SparkBWA/build/./bwa.zip > --jar > file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar > --class > SparkBWA > --arg > -algorithm > --arg > mem > --arg > -reads > --arg > paired > --arg > -index > --arg > /Data/HumanBase/hg38 > --arg > -partitions > --arg > 32 > --arg > ERR000589_1.filt.fastq > --arg > ERR000589_2.filt.fastqhb > --arg > Output_ERR000589 > System properties: > spark.yarn.am.memoryOverhead -> 558 > spark.driver.memory -> 1500m > spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar > spark.executor.memory -> 1500m > spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080 > spark.eventLog.enabled -> true > spark.scheduler.minRegisteredResourcesRatio -> 0.0 > SPARK_SUBMIT -> true > spark.dynamicAllocation.maxExecutors -> 10000 > spark.akka.frameSize -> 512 > spark.sql.parquet.cacheMetadata -> false > spark.executor.extraJavaOptions -> > -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > spark.app.name -> SparkBWA > spark.shuffle.service.enabled -> true > spark.history.fs.logDirectory -> > hdfs://cluster-cancerdetector-m/user/spark/eventlog > spark.dynamicAllocation.initialExecutors -> 10000 > spark.dynamicAllocation.minExecutors -> 1 > spark.yarn.executor.memoryOverhead -> 558 > spark.driver.extraJavaOptions -> > -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar > spark.submit.deployMode 
-> cluster > spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog > spark.yarn.am.memory -> 5586m > spark.driver.maxResultSize -> 1920m > spark.master -> yarn-cluster > spark.dynamicAllocation.enabled -> true > spark.executor.cores -> 1 > Classpath elements: > spark.yarn.am.memory is set but does not apply in cluster mode. > spark.yarn.am.memoryOverhead is set but does not apply in cluster mode. > 16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to > ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032 > 16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: > Submitted application application_1467990031555_0106 > Exception in thread "main" org.apache.spark.SparkException: Application > application_1467990031555_0106 finished > with failed status > at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034) > at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081) > at org.apache.spark.deploy.yarn.Client.main(Client.scala) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) > at > org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) > at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) > at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) > at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) > When I tried to check the AM and executor logs, the command didn't work (I > have set the yarn.log-aggregation-enable to true), so I tried to manually > access the NM's log dir to see the detailed application logs. 
Here are the application logs from the NM's log file:

2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/SparkBWA.jar is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:40,419 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip
2016-07-31 01:12:40,445 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,446 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,448 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:40,495 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip
2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,509 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:44,720 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_1.inprogress
2016-07-31 01:12:44,877 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_1.inprogress for DFSClient_NONMAPREDUCE_-1111833453_14
2016-07-31 01:12:45,373 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:45,375 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:45,379 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_1.inprogress is closed by DFSClient_NONMAPREDUCE_-1111833453_14
2016-07-31 01:12:45,843 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7989393-f278-477c-8e83-ff5da9079e8a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:12:49,914 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_2.inprogress
2016-07-31 01:12:50,100 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_2.inprogress for DFSClient_NONMAPREDUCE_378341726_14
2016-07-31 01:12:50,737 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:50,738 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:50,742 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_2.inprogress is closed by DFSClient_NONMAPREDUCE_378341726_14
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742335_1511 10.132.0.3:50010 10.132.0.4:50010
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742337_1513 10.132.0.3:50010 10.132.0.4:50010
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742336_1512 10.132.0.3:50010 10.132.0.4:50010
2016-07-31 01:12:51,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.3:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
2016-07-31 01:12:54,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.4:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
2016-07-31 01:12:55,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.46380a1f-b5fd-4924-96aa-f59dcae0cbec is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:05,882 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 244 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 234 SyncTimes(ms): 221
2016-07-31 01:13:05,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7273ee28-eb1c-4fe2-98d2-c5a20ebe4ffa is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:15,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f640743-d06c-4583-ac95-9d520dc8f301 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:25,902 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.bc63864c-0267-47b5-bcc1-96ba81d6c9a5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:35,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93557793-2ba2-47e8-b54c-234c861b6e6c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:45,918 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0fdf083c-3c53-4051-af16-d579f700962e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:55,927 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.834632f1-d9c6-4e14-9354-72f8c18f66d0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:05,933 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 262 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 252 SyncTimes(ms): 236
2016-07-31 01:14:05,936 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d06ef3b4-873f-464d-9cd0-e360da48e194 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:15,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.32ccba74-5f6c-45fc-b5db-26efb1b840e2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:25,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fef919cd-9952-4af8-a49a-e6dd2aa032f1 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:35,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.77ffdf36-8e42-43d8-9c1f-df6f3d11700d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:45,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c31cfcbb-b47c-4169-ab0f-7ae87d4f815d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:55,976 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6429570d-fb0a-4117-bb12-127a67e0a0b7 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:05,981 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 280 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 270 SyncTimes(ms): 253
2016-07-31 01:15:05,984 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8030b18d-05f2-4520-b5c4-2fe42338b92b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:15,991 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f608a0f4-e730-43cd-a19d-da57caac346e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:25,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9d5a1f80-2f2a-43a7-84f1-b26a8c90a98f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:36,007 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.279e96fc-180c-47a5-a3ba-cfda581eedad is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:46,015 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a85bbf52-61f4-4899-98b1-23615a549774 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:56,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.80613e8e-7015-4aeb-81df-49884bd0eb5e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:06,028 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 298 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 288 SyncTimes(ms): 267
2016-07-31 01:16:06,031 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2be7fc48-bd1c-4042-88e4-239b1c630458 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:16,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.40fc68a6-f003-4e35-b4b3-50bd3c4a0c82 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:26,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.97e7d15c-4d28-4089-b4a5-9f0935a72589 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:36,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.84d8e78d-90fd-419f-9000-fa04ab56955e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:46,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6691cc3e-6969-4a8f-938f-272d1c96701d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:56,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.077143b6-281a-468c-8b2c-bcb6cd3bc27a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:06,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 316 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 306 SyncTimes(ms): 284
2016-07-31 01:17:06,073 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.817d1886-aea2-450a-a586-08677dc18d60 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:16,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.abd46886-1359-4c5e-8276-ea4f2969411f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:26,087 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.24625260-59be-4a9b-b47b-b8d5b76cb789 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:36,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.11630782-e50e-4260-a0da-99845bc3f1db is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:46,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.16cdd027-f1b8-4cbf-a30c-2f1712f4abb5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:56,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93fb2e86-2fec-4069-b73b-632750fda603 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:06,116 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 334 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 324 SyncTimes(ms): 300
2016-07-31 01:18:06,119 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b19fddda-ea90-49ab-b44d-434cce28cb67 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:16,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d81ab189-bde5-4878-b82b-903983466f86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:26,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.e5b51632-f714-4814-b896-59bba137b42d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:36,144 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.39791121-9399-4a22-a50c-90eaddf31ffb is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:46,153 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.861c269b-5466-4855-84fd-587ed3306012 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:56,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8a9ff721-bd56-4bea-b399-31bfaabe8c7c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:06,168 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 352 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 342 SyncTimes(ms): 313
2016-07-31 01:19:06,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.492bf987-4991-4533-80e2-678efa843cb9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:16,178 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9294c0c6-43db-4f6d-9d31-f493143b6baf is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:26,187 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.341dd131-c14c-4147-bcbc-849d1d6bba8c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:36,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.56f92e8e-ef93-4279-a57f-472dd5d8f399 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:46,204 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5ddcda82-b501-4043-bb54-a29902d9d234 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:56,212 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.31e7517b-2ef3-458c-9979-324d7a96302f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:06,218 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 370 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 360 SyncTimes(ms): 329
2016-07-31 01:20:06,220 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5251f5df-0957-4008-b664-8d82eaa9789e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:16,229 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.3320b948-2478-4807-9ab3-d23e4945765e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:26,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0928c940-e57d-4a34-a7dc-53dade7ff909 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:36,246 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6240fcdf-696e-49c4-a883-3eda5ab89b4d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:46,254 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5622850e-b7b0-458a-9ffa-89e134fa3fda is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:56,262 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.faa076e8-490c-489f-8183-778325e0b144 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:06,268 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 388 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 378 SyncTimes(ms): 347
2016-07-31 01:21:06,270 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.18b2464e-9d14-4bae-95d9-f261edbdee1b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:16,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6c53dd52-3996-4541-b368-e8406f99f68e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:26,287 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8b5ac93c-b268-432d-9236-48c004c33743 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:36,303 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.22a03e6f-4531-466c-af28-e0797d6b803e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:46,311 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1df0d173-6432-481f-af97-6632660700b0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:56,319 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.4095d5a1-ba2d-4966-ad13-99843c51ee91 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:06,325 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 406 Total time for transactions(ms): 8 Number of transactions batched in Syncs: 0 Number of syncs: 396 SyncTimes(ms): 362
2016-07-31 01:22:06,328 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f35e73f9-842d-4fc2-96b3-9b70df17e7e3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:16,337 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fdfa32ef-5c3c-48a3-9d15-0edc1b9d5072 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:26,345 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0315d9f7-ea5c-4a58-ad68-3f942d97676a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:36,353 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.eecbddee-6bfb-44b6-97ef-1b5eece8a982 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:46,362 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2c363a5b-bd43-47c5-9050-b15f1f6ade77 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:56,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6f4e8030-39a7-4fc8-b551-5b1d88e0885e is closed by DFSClient_NONMAPREDUCE_-1615501432_1

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org