[jira] [Commented] (TEZ-3478) Cleanup fetcher data for failing task attempts (Unordered fetcher)

2016-12-28 Thread Zhiyuan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/TEZ-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15784016#comment-15784016
 ] 

Zhiyuan Yang commented on TEZ-3478:
---

Mostly looks good to me. Minor comments:  

1. SimpleFetchedInputAllocator.allocate()
{code}
Path diskFetchPath = diskFetchedInput.getInputPath().getParent();
{code}
This makes the on-disk layout of fetched files known to 
SimpleFetchedInputAllocator. We might want to hide this from 
SimpleFetchedInputAllocator in case the layout changes in the future.
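
For illustration, a minimal sketch of one way to hide it (hypothetical names, 
not the actual Tez API; assumes Hadoop's FileSystem/Path): let the fetched 
input own the layout knowledge and expose the directory and cleanup itself.
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: the input encapsulates where its data lives on disk,
// so SimpleFetchedInputAllocator never computes getParent() on its own.
class DiskFetchedInputSketch {
  private final FileSystem localFs;
  private final Path inputPath;

  DiskFetchedInputSketch(FileSystem localFs, Path inputPath) {
    this.localFs = localFs;
    this.inputPath = inputPath;
  }

  // The allocator asks the input for its fetch directory; the layout can
  // change later without touching the allocator.
  Path getFetchDir() {
    return inputPath.getParent();
  }

  // Best-effort removal of everything this input wrote locally.
  void cleanupLocalData() throws IOException {
    localFs.delete(getFetchDir(), true /* recursive */);
  }
}
{code}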

2. ShuffleManager::FetchFutureCallback.onFailure()   
{code}
shutdown();
{code}
This call is not necessary because shutdown() will be invoked eventually 
during task cleanup.
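
As a sketch of the suggested shape (hypothetical names; assumes Guava's 
FutureCallback, which FetchFutureCallback implements), onFailure() would only 
report the error and leave teardown to the task-level cleanup path:
{code}
import com.google.common.util.concurrent.FutureCallback;

// Hypothetical sketch: the per-fetch callback only reports failures;
// shutdown() is left to task cleanup, which runs exactly once.
class FetchCallbackSketch implements FutureCallback<Object> {
  private final Runnable reportFailure; // e.g. ShuffleManager error reporting

  FetchCallbackSketch(Runnable reportFailure) {
    this.reportFailure = reportFailure;
  }

  @Override
  public void onSuccess(Object fetchResult) {
    // register the completed input with the ShuffleManager
  }

  @Override
  public void onFailure(Throwable t) {
    reportFailure.run();
    // no shutdown() here: task cleanup invokes shutdown() eventually,
    // so calling it per failed fetch is redundant
  }
}
{code}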

> Cleanup fetcher data for failing task attempts (Unordered fetcher)
> --
>
> Key: TEZ-3478
> URL: https://issues.apache.org/jira/browse/TEZ-3478
> Project: Apache Tez
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: TEZ-3479.1.patch, TEZ-3479.branch-0.7.001.patch
>
>
> Env: 3-node AWS cluster with the entire dataset in S3. Since the data is in 
> S3, there is no additional storage for HDFS (it uses the existing space 
> available in the VMs). Tez version is 0.7.
> With some workloads (e.g. q29 in TPC-DS), unordered fetchers download data in 
> parallel for different vertices and run out of disk space. However, the 
> downloaded data related to these failed task attempts is not cleared, so 
> subsequent task attempts hit the same situation and fail with a "No space" 
> exception. Example stack trace:
> {noformat}
> , errorMessage=Fetch failed:org.apache.hadoop.fs.FSError: 
> java.io.IOException: No space left on device
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:261)
> at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
> at java.io.DataOutputStream.write(DataOutputStream.java:107)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.writeChunk(ChecksumFileSystem.java:426)
> at 
> org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:206)
> at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:124)
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:110)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
> at java.io.DataOutputStream.write(DataOutputStream.java:107)
> at 
> org.apache.tez.runtime.library.common.shuffle.ShuffleUtils.shuffleToDisk(ShuffleUtils.java:146)
> at 
> org.apache.tez.runtime.library.common.shuffle.Fetcher.fetchInputs(Fetcher.java:771)
> at 
> org.apache.tez.runtime.library.common.shuffle.Fetcher.doHttpFetch(Fetcher.java:497)
> at 
> org.apache.tez.runtime.library.common.shuffle.Fetcher.doHttpFetch(Fetcher.java:396)
> at 
> org.apache.tez.runtime.library.common.shuffle.Fetcher.callInternal(Fetcher.java:195)
> at 
> org.apache.tez.runtime.library.common.shuffle.Fetcher.callInternal(Fetcher.java:70)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: No space left on device
> at java.io.FileOutputStream.writeBytes(Native Method)
> at java.io.FileOutputStream.write(FileOutputStream.java:345)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSy
> {noformat}
> This would also affect any other job running in the cluster at the same time. 
> It would be helpful to clean up the data downloaded for the failed task 
> attempts.
> Creating this ticket mainly for the unordered fetcher case, though a similar 
> issue likely exists for the ordered shuffle case as well.
> Example files:
> {noformat}
> 17M   
> /hadoopfs/fs1/yarn/nodemanager/usercache/cloudbreak/appcache/application_1476667862449_0043/attempt_1476667862449_0043_1_07_28_0_10023_src_62_spill_-1.out
> 18M   
> /hadoopfs/fs1/yarn/nodemanager/usercache/cloudbreak/appcache/application_1476667862449_0043/attempt_1476667862449_0043_1_07_28_0_10023_src_63_spill_-1.out
> 16M   
> /hadoopfs/fs1/yarn/nodemanager/usercache/cloudbreak/appcache/application_1476667862449_0043/attempt_1476667862449_0043_1_07_28_0_10023_src_64_spill_-1.out
> ..
> ..
> 18M   
> /hadoopfs/fs1/yarn/nodemanager/usercache/cloudbreak/appcache/application_1476667862449_0043/attempt_1476667862449_0043_1_07_28_2_10003_src_0_spill_-1.out
> 17M   
> /hadoopfs/fs1/yarn/nodemanager/usercache/cloudbreak/appcache/application_1476667862449_0043/attempt_1476667862449_0043_1_07_28_2_10003_src_13_spill_-1.out
> {noformat}
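
A minimal sketch of the requested behavior (hypothetical names; the actual 
patch may hook this elsewhere in the fetcher): remove the partially written 
spill file when a disk fetch fails, before propagating the error.
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: wrap the disk fetch so a failure (e.g. "No space
// left on device") deletes the partial *_spill_-1.out file instead of
// leaving it behind to starve later task attempts.
final class FetchCleanupSketch {
  interface DiskFetch {
    void fetchTo(Path target) throws IOException;
  }

  static void fetchWithCleanup(FileSystem localFs, Path spillPath,
      DiskFetch fetch) throws IOException {
    try {
      fetch.fetchTo(spillPath);
    } catch (IOException e) {
      localFs.delete(spillPath, false); // best-effort cleanup of partial data
      throw e;
    }
  }
}
{code}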

[jira] [Commented] (TEZ-3478) Cleanup fetcher data for failing task attempts (Unordered fetcher)

2016-10-19 Thread TezQA (JIRA)

[ 
https://issues.apache.org/jira/browse/TEZ-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15588825#comment-15588825
 ] 

TezQA commented on TEZ-3478:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment
  http://issues.apache.org/jira/secure/attachment/12834130/TEZ-3479.1.patch
  against master revision 8033e3d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 3.0.1) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-TEZ-Build/2046//testReport/
Console output: https://builds.apache.org/job/PreCommit-TEZ-Build/2046//console

This message is automatically generated.

[jira] [Commented] (TEZ-3478) Cleanup fetcher data for failing task attempts (Unordered fetcher)

2016-10-18 Thread Rajesh Balamohan (JIRA)

[ 
https://issues.apache.org/jira/browse/TEZ-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15586993#comment-15586993
 ] 

Rajesh Balamohan commented on TEZ-3478:
---

Haven't checked the ordered case yet, but the issue should be present there as 
well. Created this ticket to handle cleanup of the unordered data; will file a 
subsequent JIRA for the ordered case.

[jira] [Commented] (TEZ-3478) Cleanup fetcher data for failing task attempts (Unordered fetcher)

2016-10-18 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/TEZ-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15586962#comment-15586962
 ] 

Hitesh Shah commented on TEZ-3478:
--

Is this only an issue with unordered data? 
