[jira] [Created] (YARN-2790) NM can't aggregate logs past HDFS delegation token expiry.

2014-10-31 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-2790:
-

 Summary: NM can't aggregate logs past HDFS delegation token expiry.
 Key: YARN-2790
 URL: https://issues.apache.org/jira/browse/YARN-2790
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Tassapol Athiapinya


We shorten the HDFS delegation token lifetime and set the RM to act as a proxy 
user so that it can renew the token on behalf of the user submitting the 
application. Still, NM log aggregation fails with a token expiry error.

{code}
2014-10-31 00:11:56,579 INFO  logaggregation.AppLogAggregatorImpl 
(AppLogAggregatorImpl.java:finishLogAggregation(481)) - Application just 
finished : application_1414705229376_0004
2014-10-31 00:11:56,588 WARN  ipc.Client (Client.java:run(675)) - Exception 
encountered while connecting to the server : 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
 token (HDFS_DELEGATION_TOKEN token 75 for hrt_qa) is expired
2014-10-31 00:11:56,589 ERROR logaggregation.AppLogAggregatorImpl 
(AppLogAggregatorImpl.java:uploadLogsForContainers(233)) - Cannot create writer 
for app application_1414705229376_0004. Skip log upload this time.
{code}
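
For reference, a minimal sketch of the settings used to reproduce this. The 
property names are standard HDFS/YARN keys, but the values and the assumption 
that the RM runs as user "yarn" are illustrative:

{code:title=repro settings (illustrative values)}
# hdfs-site.xml -- shorten delegation token lifetime so expiry happens quickly
#   dfs.namenode.delegation.token.renew-interval = 120000   (2 minutes, in ms)
#   dfs.namenode.delegation.token.max-lifetime   = 360000   (6 minutes, in ms)
# core-site.xml -- let the RM (assumed to run as user "yarn") proxy the submitter
#   hadoop.proxyuser.yarn.hosts  = *
#   hadoop.proxyuser.yarn.groups = *
# yarn-site.xml
#   yarn.resourcemanager.proxy-user-privileges.enabled = true
{code}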



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2790) NM can't aggregate logs past HDFS delegation token expiry.

2014-10-31 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-2790:
--
Description: 
We shorten the HDFS delegation token lifetime and set the RM to act as a proxy 
user so that it can renew the token on behalf of the user submitting the 
application. Still, NM log aggregation fails with a token expiry error.

{code}
2014-10-31 00:11:56,579 INFO  logaggregation.AppLogAggregatorImpl 
(AppLogAggregatorImpl.java:finishLogAggregation(481)) - Application just 
finished : application_1414705229376_0004
2014-10-31 00:11:56,588 WARN  ipc.Client (Client.java:run(675)) - Exception 
encountered while connecting to the server : 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
 token (HDFS_DELEGATION_TOKEN token 75 for user1) is expired
2014-10-31 00:11:56,589 ERROR logaggregation.AppLogAggregatorImpl 
(AppLogAggregatorImpl.java:uploadLogsForContainers(233)) - Cannot create writer 
for app application_1414705229376_0004. Skip log upload this time.
{code}

  was:
We shorten the HDFS delegation token lifetime and set the RM to act as a proxy 
user so that it can renew the token on behalf of the user submitting the 
application. Still, NM log aggregation fails with a token expiry error.

{code}
2014-10-31 00:11:56,579 INFO  logaggregation.AppLogAggregatorImpl 
(AppLogAggregatorImpl.java:finishLogAggregation(481)) - Application just 
finished : application_1414705229376_0004
2014-10-31 00:11:56,588 WARN  ipc.Client (Client.java:run(675)) - Exception 
encountered while connecting to the server : 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
 token (HDFS_DELEGATION_TOKEN token 75 for hrt_qa) is expired
2014-10-31 00:11:56,589 ERROR logaggregation.AppLogAggregatorImpl 
(AppLogAggregatorImpl.java:uploadLogsForContainers(233)) - Cannot create writer 
for app application_1414705229376_0004. Skip log upload this time.
{code}


 NM can't aggregate logs past HDFS delegation token expiry.
 --

 Key: YARN-2790
 URL: https://issues.apache.org/jira/browse/YARN-2790
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Tassapol Athiapinya

 We shorten the HDFS delegation token lifetime and set the RM to act as a 
 proxy user so that it can renew the token on behalf of the user submitting 
 the application. Still, NM log aggregation fails with a token expiry error.
 {code}
 2014-10-31 00:11:56,579 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:finishLogAggregation(481)) - Application just 
 finished : application_1414705229376_0004
 2014-10-31 00:11:56,588 WARN  ipc.Client (Client.java:run(675)) - Exception 
 encountered while connecting to the server : 
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
  token (HDFS_DELEGATION_TOKEN token 75 for user1) is expired
 2014-10-31 00:11:56,589 ERROR logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainers(233)) - Cannot create 
 writer for app application_1414705229376_0004. Skip log upload this time.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2790) NM can't aggregate logs past HDFS delegation token expiry.

2014-10-31 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192804#comment-14192804
 ] 

Tassapol Athiapinya commented on YARN-2790:
---

+1. Tested end-to-end on a real cluster. The old error goes away and log 
aggregation succeeds.

 NM can't aggregate logs past HDFS delegation token expiry.
 --

 Key: YARN-2790
 URL: https://issues.apache.org/jira/browse/YARN-2790
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
Reporter: Tassapol Athiapinya
Assignee: Jian He
 Attachments: YARN-2790.1.patch


 We shorten the HDFS delegation token lifetime and set the RM to act as a 
 proxy user so that it can renew the token on behalf of the user submitting 
 the application. Still, NM log aggregation fails with a token expiry error.
 {code}
 2014-10-31 00:11:56,579 INFO  logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:finishLogAggregation(481)) - Application just 
 finished : application_1414705229376_0004
 2014-10-31 00:11:56,588 WARN  ipc.Client (Client.java:run(675)) - Exception 
 encountered while connecting to the server : 
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
  token (HDFS_DELEGATION_TOKEN token 75 for user1) is expired
 2014-10-31 00:11:56,589 ERROR logaggregation.AppLogAggregatorImpl 
 (AppLogAggregatorImpl.java:uploadLogsForContainers(233)) - Cannot create 
 writer for app application_1414705229376_0004. Skip log upload this time.
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-2561) MR job client cannot reconnect to AM after NM restart.

2014-09-16 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-2561:
-

 Summary: MR job client cannot reconnect to AM after NM restart.
 Key: YARN-2561
 URL: https://issues.apache.org/jira/browse/YARN-2561
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Tassapol Athiapinya
Priority: Critical


Work-preserving NM restart is disabled.
Submit a job, then restart the NM while the AM is running. The job client 
cannot connect to the new AM attempt and hangs in connect retries.
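
A rough repro sketch (the jar path, daemon-script location, and job parameters 
below are assumptions; any long-running MR job should do):

{code:title=repro sketch}
# precondition: work-preserving NM restart off (yarn.nodemanager.recovery.enabled=false)
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 0 -mt 600000 &
# restart the NM hosting the AM while the job runs (script location varies by install)
$ sbin/yarn-daemon.sh stop nodemanager && sbin/yarn-daemon.sh start nodemanager
# the job client above now hangs in connect retries to the old AM address
{code}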



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2285) Capacity scheduler root queue usage can show above 100% due to reserved container.

2014-07-16 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-2285:
--

Summary: Capacity scheduler root queue usage can show above 100% due to 
reserved container.  (was: Preemption can cause capacity scheduler to show 
5,000% queue capacity.)

 Capacity scheduler root queue usage can show above 100% due to reserved 
 container.
 --

 Key: YARN-2285
 URL: https://issues.apache.org/jira/browse/YARN-2285
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.5.0
 Environment: Turn on CS Preemption.
Reporter: Tassapol Athiapinya
Assignee: Wangda Tan
Priority: Minor
 Attachments: preemption_5000_percent.png


 I configure queues A and B to have 1% and 99% capacity respectively, with no 
 max capacity on either queue and a high user limit factor.
 Submit app 1 to queue A: its AM container takes 50% of cluster memory and its 
 task containers take the other 50%. Submit app 2 to queue B, which preempts 
 app 1's task containers. Queue B's usage rises to 99%, but queue A shows 
 5000% used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2297) Preemption can hang in corner case by not allowing any task container to proceed.

2014-07-15 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063099#comment-14063099
 ] 

Tassapol Athiapinya commented on YARN-2297:
---

bq. Are there realistic configurations where this creates a problem? If a queue 
is configured with less than a container's capacity, what is the intent?
Sorry, I don't have a real-world scenario yet. The issue was found during a 
system test for the case where the AM is preempted ([YARN-2074]). This 
configuration can simulate that scenario in a controllable manner.

 Preemption can hang in corner case by not allowing any task container to 
 proceed.
 -

 Key: YARN-2297
 URL: https://issues.apache.org/jira/browse/YARN-2297
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacityscheduler
Affects Versions: 2.5.0
Reporter: Tassapol Athiapinya
Assignee: Wangda Tan
Priority: Critical

 Preemption can cause a hang in a single-node cluster: only AMs run, and no 
 task container can run.
 h3. queue configuration
 Queues A and B have 1% and 99% capacity respectively. 
 No max capacity.
 h3. scenario
 Turn on preemption. Configure 1 NM with 4 GB of memory. Use only 2 apps and 
 1 user.
 Submit app 1 to queue A. Its AM needs 2 GB, and there is 1 task that needs 
 2 GB; it occupies the entire cluster.
 Submit app 2 to queue B. Its AM needs 2 GB, and there are 3 tasks that need 
 2 GB each.
 Instead of app 1 being preempted entirely, app 1's AM stays and app 2's AM 
 launches. No task of either app can proceed. 
 h3. commands
 /usr/lib/hadoop/bin/hadoop jar 
 /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomtextwriter 
 -Dmapreduce.map.memory.mb=2000 
 -Dyarn.app.mapreduce.am.command-opts=-Xmx1800M 
 -Dmapreduce.randomtextwriter.bytespermap=2147483648 
 -Dmapreduce.job.queuename=A -Dmapreduce.map.maxattempts=100 
 -Dmapreduce.am.max-attempts=1 -Dyarn.app.mapreduce.am.resource.mb=2000 
 -Dmapreduce.map.java.opts=-Xmx1800M 
 -Dmapreduce.randomtextwriter.mapsperhost=1 
 -Dmapreduce.randomtextwriter.totalbytes=2147483648 dir1
 /usr/lib/hadoop/bin/hadoop jar 
 /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep 
 -Dmapreduce.map.memory.mb=2000 
 -Dyarn.app.mapreduce.am.command-opts=-Xmx1800M 
 -Dmapreduce.job.queuename=B -Dmapreduce.map.maxattempts=100 
 -Dmapreduce.am.max-attempts=1 -Dyarn.app.mapreduce.am.resource.mb=2000 
 -Dmapreduce.map.java.opts=-Xmx1800M -m 1 -r 0 -mt 4000  -rt 0



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2285) Preemption can cause capacity scheduler to show 5,000% queue absolute used capacity.

2014-07-15 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14062693#comment-14062693
 ] 

Tassapol Athiapinya commented on YARN-2285:
---

After a closer look, 5000% is a valid number: it means 5000% of queue A's 
guaranteed capacity (about 50% absolute used capacity). I am changing the jira 
title accordingly, and making this an improvement jira instead of a bug.

The remaining question is whether the web UI text should be relabeled to 
better reflect its meaning: the % used shown next to a queue is a percentage 
of guaranteed queue capacity, not absolute used capacity.
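
A quick sanity check of that arithmetic (numbers from this report):

{code:title=relative vs. absolute usage}
# queue A is guaranteed 1% of the cluster but is absolutely using ~50% of it;
# the UI shows usage as a percentage of the guarantee:
$ echo "50 / 1 * 100" | bc
5000
{code}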

 Preemption can cause capacity scheduler to show 5,000% queue absolute used 
 capacity.
 

 Key: YARN-2285
 URL: https://issues.apache.org/jira/browse/YARN-2285
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.5.0
 Environment: Turn on CS Preemption.
Reporter: Tassapol Athiapinya
 Attachments: preemption_5000_percent.png


 I configure queues A and B to have 1% and 99% capacity respectively, with no 
 max capacity on either queue and a high user limit factor.
 Submit app 1 to queue A: its AM container takes 50% of cluster memory and its 
 task containers take the other 50%. Submit app 2 to queue B, which preempts 
 app 1's task containers. Queue B's usage rises to 99%, but queue A shows 
 5000% used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2285) Preemption can cause capacity scheduler to show 5,000% queue capacity.

2014-07-15 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-2285:
--

  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)
   Summary: Preemption can cause capacity scheduler to show 5,000% queue 
capacity.  (was: Preemption can cause capacity scheduler to show 5,000% queue 
absolute used capacity.)

 Preemption can cause capacity scheduler to show 5,000% queue capacity.
 --

 Key: YARN-2285
 URL: https://issues.apache.org/jira/browse/YARN-2285
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.5.0
 Environment: Turn on CS Preemption.
Reporter: Tassapol Athiapinya
Priority: Minor
 Attachments: preemption_5000_percent.png


 I configure queues A and B to have 1% and 99% capacity respectively, with no 
 max capacity on either queue and a high user limit factor.
 Submit app 1 to queue A: its AM container takes 50% of cluster memory and its 
 task containers take the other 50%. Submit app 2 to queue B, which preempts 
 app 1's task containers. Queue B's usage rises to 99%, but queue A shows 
 5000% used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2285) Preemption can cause capacity scheduler to show 5,000% queue capacity.

2014-07-15 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14062697#comment-14062697
 ] 

Tassapol Athiapinya commented on YARN-2285:
---

Also, it is not major, but the percentage shown is not right either: in the 
attached screenshot, root queue used is 146.5%.

 Preemption can cause capacity scheduler to show 5,000% queue capacity.
 --

 Key: YARN-2285
 URL: https://issues.apache.org/jira/browse/YARN-2285
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.5.0
 Environment: Turn on CS Preemption.
Reporter: Tassapol Athiapinya
Priority: Minor
 Attachments: preemption_5000_percent.png


 I configure queues A and B to have 1% and 99% capacity respectively, with no 
 max capacity on either queue and a high user limit factor.
 Submit app 1 to queue A: its AM container takes 50% of cluster memory and its 
 task containers take the other 50%. Submit app 2 to queue B, which preempts 
 app 1's task containers. Queue B's usage rises to 99%, but queue A shows 
 5000% used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2297) Preemption can hang in corner case by not allowing any task container to proceed.

2014-07-15 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-2297:
-

 Summary: Preemption can hang in corner case by not allowing any 
task container to proceed.
 Key: YARN-2297
 URL: https://issues.apache.org/jira/browse/YARN-2297
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.5.0
Reporter: Tassapol Athiapinya
Priority: Critical


Preemption can cause a hang in a single-node cluster: only AMs run, and no 
task container can run.

h3. queue configuration
Queues A and B have 1% and 99% capacity respectively. 
No max capacity.

h3. scenario
Turn on preemption. Configure 1 NM with 4 GB of memory. Use only 2 apps and 1 
user.
Submit app 1 to queue A. Its AM needs 2 GB, and there is 1 task that needs 
2 GB; it occupies the entire cluster.
Submit app 2 to queue B. Its AM needs 2 GB, and there are 3 tasks that need 
2 GB each.
Instead of app 1 being preempted entirely, app 1's AM stays and app 2's AM 
launches. No task of either app can proceed.
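
Back-of-envelope memory accounting for why this hangs:

{code:title=memory accounting on the 4 GB NM}
# app 1 AM:            2 GB  (survives preemption)
# app 2 AM:            2 GB  (launched into the preempted space)
# AMs alone:           4 GB  = the entire node
# any task container:  2 GB  -> can never be allocated; both apps hang
{code}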

h3. commands
/usr/lib/hadoop/bin/hadoop jar 
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomtextwriter 
-Dmapreduce.map.memory.mb=2000 
-Dyarn.app.mapreduce.am.command-opts=-Xmx1800M 
-Dmapreduce.randomtextwriter.bytespermap=2147483648 
-Dmapreduce.job.queuename=A -Dmapreduce.map.maxattempts=100 
-Dmapreduce.am.max-attempts=1 -Dyarn.app.mapreduce.am.resource.mb=2000 
-Dmapreduce.map.java.opts=-Xmx1800M 
-Dmapreduce.randomtextwriter.mapsperhost=1 
-Dmapreduce.randomtextwriter.totalbytes=2147483648 dir1

/usr/lib/hadoop/bin/hadoop jar 
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep 
-Dmapreduce.map.memory.mb=2000 
-Dyarn.app.mapreduce.am.command-opts=-Xmx1800M -Dmapreduce.job.queuename=B 
-Dmapreduce.map.maxattempts=100 -Dmapreduce.am.max-attempts=1 
-Dyarn.app.mapreduce.am.resource.mb=2000 
-Dmapreduce.map.java.opts=-Xmx1800M -m 1 -r 0 -mt 4000  -rt 0




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2285) Preemption can cause capacity scheduler to show 5,000% queue absolute used capacity.

2014-07-14 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-2285:
-

 Summary: Preemption can cause capacity scheduler to show 5,000% 
queue absolute used capacity.
 Key: YARN-2285
 URL: https://issues.apache.org/jira/browse/YARN-2285
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.5.0
 Environment: Turn on CS Preemption.
Reporter: Tassapol Athiapinya
 Attachments: preemption_5000_percent.png

I configure queues A and B to have 1% and 99% capacity respectively, with no 
max capacity on either queue and a high user limit factor.
Submit app 1 to queue A: its AM container takes 50% of cluster memory and its 
task containers take the other 50%. Submit app 2 to queue B, which preempts 
app 1's task containers. Queue B's usage rises to 99%, but queue A shows 
5000% used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-2285) Preemption can cause capacity scheduler to show 5,000% queue absolute used capacity.

2014-07-14 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-2285:
--

Attachment: preemption_5000_percent.png

 Preemption can cause capacity scheduler to show 5,000% queue absolute used 
 capacity.
 

 Key: YARN-2285
 URL: https://issues.apache.org/jira/browse/YARN-2285
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Affects Versions: 2.5.0
 Environment: Turn on CS Preemption.
Reporter: Tassapol Athiapinya
 Attachments: preemption_5000_percent.png


 I configure queues A and B to have 1% and 99% capacity respectively, with no 
 max capacity on either queue and a high user limit factor.
 Submit app 1 to queue A: its AM container takes 50% of cluster memory and its 
 task containers take the other 50%. Submit app 2 to queue B, which preempts 
 app 1's task containers. Queue B's usage rises to 99%, but queue A shows 
 5000% used.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-2144) Add logs when preemption occurs

2014-06-16 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14032603#comment-14032603
 ] 

Tassapol Athiapinya commented on YARN-2144:
---

[~leftnoteasy] Can you please clarify these points for me?
- On the AM page, does "Resource Preempted from Current Attempt" mean "Total 
Resource Preempted from Latest AM Attempt"? Can it show only the data point 
from the current (is it the latest?) attempt?
- Can you change "#Container Preempted from Current Attempt:" to "Number of 
Containers Preempted from Current (Latest) Attempt"? The "#" syntax may be 
hard to comprehend for a wider group of users.

 Add logs when preemption occurs
 ---

 Key: YARN-2144
 URL: https://issues.apache.org/jira/browse/YARN-2144
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.5.0
Reporter: Tassapol Athiapinya
Assignee: Wangda Tan
 Attachments: AM-page-preemption-info.png, YARN-2144.patch, 
 YARN-2144.patch, YARN-2144.patch


 There should be easy-to-read logs when preemption does occur. 
 1. For debugging purposes, the RM should log this.
 2. For administrative purposes, the RM web UI should have a page showing 
 recent preemption events.
 RM logs should have the following properties:
 * Logs are flushed often and retrievable while an application is still running.
 * They distinguish between AM container preemption and task container 
 preemption, with the container ID shown.
 * They should be INFO-level logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (YARN-2150) Add CLI to list applications by final-status

2014-06-11 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-2150.
---

Resolution: Duplicate

Closed as duplicate of YARN-1480. Thanks for pointing this out.

 Add CLI to list applications by final-status
 

 Key: YARN-2150
 URL: https://issues.apache.org/jira/browse/YARN-2150
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Affects Versions: 2.5.0
Reporter: Tassapol Athiapinya

 yarn application -list supports filtering by app state, but filtering by app 
 final-status is currently missing. There should be a way to list applications 
 by application-specific status. For example, a cluster admin might want to 
 find failed MR jobs; specifically, applications with FINISHED state and 
 FAILED final-status. There is currently no way to list these without manually 
 parsing CLI output.
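
 For illustration, today's closest workaround, i.e. the manual parsing this 
 jira wants to avoid (-appStates is an existing filter; final-status is not):

{code:title=current workaround}
# list FINISHED apps, then grep the Final-State column for FAILED
$ yarn application -list -appStates FINISHED | grep FAILED
{code}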



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-2144) Add logs when preemption occurs

2014-06-10 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-2144:
-

 Summary: Add logs when preemption occurs
 Key: YARN-2144
 URL: https://issues.apache.org/jira/browse/YARN-2144
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler
Affects Versions: 2.5.0
Reporter: Tassapol Athiapinya


There should be easy-to-read logs when preemption does occur. 
1. For debugging purposes, the RM should log this.
2. For administrative purposes, the RM web UI should have a page showing 
recent preemption events.

RM logs should have the following properties:
* Logs are flushed often and retrievable while an application is still running.
* They distinguish between AM container preemption and task container 
preemption, with the container ID shown.
* They should be INFO-level logs.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1621) Add CLI to list states of yarn container-IDs/hosts

2014-06-10 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027284#comment-14027284
 ] 

Tassapol Athiapinya commented on YARN-1621:
---

yarn applicationattempt lists only AM attempts, so this is a different command.

 Add CLI to list states of yarn container-IDs/hosts
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.5.0


 As more applications are moved to YARN, we need a generic CLI to list the 
 states of yarn containers and their hosts. Today, if a YARN application 
 running in a container hangs, there is no way to deal with it other than 
 manually killing its process.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers. 
 {code:title=proposed yarn cli}
 $ yarn application -list-containers <appId> <status>
 where <status> is one of running/succeeded/killed/failed/all
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1621) Add CLI to list states of yarn container-IDs/hosts

2014-06-10 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1621:
--

Description: 
As more applications are moved to YARN, we need a generic CLI to list rows of 
task attempt ID, container ID, container host, and container state. Today, if 
a YARN application running in a container hangs, there is no way to find out 
more because a user does not know where each attempt is running.

For each running application, it is useful to differentiate between 
running/succeeded/failed/killed containers.

{code:title=proposed yarn cli}
$ yarn application -list-containers -applicationId <appId> [-containerState 
<state of container>]
where -containerState is an optional filter to list only containers in the 
given state. Container state can be running/succeeded/killed/failed/all.
A user can specify more than one container state at once, e.g. KILLED,FAILED.

<task attempt ID> <container ID> <host of container> <state of container>
{code}


  was:
As more applications are moved to YARN, we need a generic CLI to list the 
states of yarn containers and their hosts. Today, if a YARN application 
running in a container hangs, there is no way to deal with it other than 
manually killing its process.

For each running application, it is useful to differentiate between 
running/succeeded/failed/killed containers. 
{code:title=proposed yarn cli}
$ yarn application -list-containers <appId> <status>
where <status> is one of running/succeeded/killed/failed/all
{code}


 Add CLI to list states of yarn container-IDs/hosts
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.5.0


 As more applications are moved to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, container host, and container state. Today, if 
 a YARN application running in a container hangs, there is no way to find out 
 more because a user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState 
 <state of container>]
 where -containerState is an optional filter to list only containers in the 
 given state. Container state can be running/succeeded/killed/failed/all.
 A user can specify more than one container state at once, e.g. KILLED,FAILED.
 <task attempt ID> <container ID> <host of container> <state of container>
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1621) Add CLI to list rows of task attempt ID, container ID, host of container, state of container

2014-06-10 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1621:
--

Summary: Add CLI to list rows of task attempt ID, container ID, host of 
container, state of container  (was: Add CLI to list states of yarn 
container-IDs/hosts)

 Add CLI to list rows of task attempt ID, container ID, host of container, 
 state of container
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.5.0


 As more applications are moved to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, container host, and container state. Today, if 
 a YARN application running in a container hangs, there is no way to find out 
 more because a user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState 
 <state of container>]
 where -containerState is an optional filter to list only containers in the 
 given state. Container state can be running/succeeded/killed/failed/all.
 A user can specify more than one container state at once, e.g. KILLED,FAILED.
 <task attempt ID> <container ID> <host of container> <state of container>
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (YARN-1621) Add CLI to list rows of task attempt ID, container ID, host of container, state of container

2014-06-10 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027291#comment-14027291
 ] 

Tassapol Athiapinya commented on YARN-1621:
---

I updated the jira info to not only list container ID and host; it is better 
to list attempt ID and container state too.

 Add CLI to list rows of task attempt ID, container ID, host of container, 
 state of container
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.5.0


 As more applications are moved to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, container host, and container state. Today, if 
 a YARN application running in a container hangs, there is no way to find out 
 more because a user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState 
 <state of container>]
 where -containerState is an optional filter to list only containers in the 
 given state. Container state can be running/succeeded/killed/failed/all.
 A user can specify more than one container state at once, e.g. KILLED,FAILED.
 <task attempt ID> <container ID> <host of container> <state of container>
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1621) Add CLI to list rows of task attempt ID, container ID, host of container, state of container

2014-06-10 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1621:
--

Description: 
As more applications are moved to YARN, we need a generic CLI to list rows of 
task attempt ID, container ID, container host, and container state. Today, if 
a YARN application running in a container hangs, there is no way to find out 
more because a user does not know where each attempt is running.

For each running application, it is useful to differentiate between 
running/succeeded/failed/killed containers.

{code:title=proposed yarn cli}
$ yarn application -list-containers -applicationId <appId> [-containerState 
<state of container>]
where -containerState is an optional filter to list only containers in the 
given state. Container state can be running/succeeded/killed/failed/all.
A user can specify more than one container state at once, e.g. KILLED,FAILED.

<task attempt ID> <container ID> <host of container> <state of container>
{code}

The CLI should work with both running and completed applications. If a 
container runs many task attempts, all attempts should be shown. That will 
likely be the case for Tez container-reuse applications.

  was:
As more applications are moved to YARN, we need a generic CLI to list rows of 
task attempt ID, container ID, container host, and container state. Today, if 
a YARN application running in a container hangs, there is no way to find out 
more because a user does not know where each attempt is running.

For each running application, it is useful to differentiate between 
running/succeeded/failed/killed containers.

{code:title=proposed yarn cli}
$ yarn application -list-containers -applicationId <appId> [-containerState 
<state of container>]
where -containerState is an optional filter to list only containers in the 
given state. Container state can be running/succeeded/killed/failed/all.
A user can specify more than one container state at once, e.g. KILLED,FAILED.

<task attempt ID> <container ID> <host of container> <state of container>
{code}



 Add CLI to list rows of task attempt ID, container ID, host of container, 
 state of container
 --

 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.5.0


 As more applications are moved to YARN, we need a generic CLI to list rows of 
 task attempt ID, container ID, container host, and container state. Today, if 
 a YARN application running in a container hangs, there is no way to find out 
 more because a user does not know where each attempt is running.
 For each running application, it is useful to differentiate between 
 running/succeeded/failed/killed containers.
 {code:title=proposed yarn cli}
 $ yarn application -list-containers -applicationId <appId> [-containerState 
 <state of container>]
 where -containerState is an optional filter to list only containers in the 
 given state. Container state can be running/succeeded/killed/failed/all.
 A user can specify more than one container state at once, e.g. KILLED,FAILED.
 <task attempt ID> <container ID> <host of container> <state of container>
 {code}
 The CLI should work with both running and completed applications. If a 
 container runs many task attempts, all attempts should be shown. That will 
 likely be the case for Tez container-reuse applications.
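
 A hypothetical invocation of the proposed CLI, for illustration only (these 
 options do not exist as of this jira):

{code:title=hypothetical usage of the proposal}
$ yarn application -list-containers -applicationId application_1400000000000_0001 -containerState KILLED,FAILED
{code}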



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-1839) Capacity scheduler preempts an AM out. AM attempt 2 fails to launch task container with SecretManager$InvalidToken: No NMToken sent

2014-03-14 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1839:
-

 Summary: Capacity scheduler preempts an AM out. AM attempt 2 fails 
to launch task container with SecretManager$InvalidToken: No NMToken sent
 Key: YARN-1839
 URL: https://issues.apache.org/jira/browse/YARN-1839
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications, capacityscheduler
Affects Versions: 2.3.0
Reporter: Tassapol Athiapinya
Priority: Critical


Use a single-node cluster with capacity scheduler preemption turned on. Run an 
MR sleep job as app 1 and let it take the entire cluster. Run an MR sleep job 
as app 2, which preempts app 1. Wait until app 2 finishes. App 1's AM attempt 
2 then starts, but it cannot launch a task container, failing with this stack 
trace in the AM logs:

{code}
2014-03-13 20:13:50,254 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report 
from attempt_1394741557066_0001_m_00_1009: Container launch failed for 
container_1394741557066_0001_02_21 : 
org.apache.hadoop.security.token.SecretManager$InvalidToken: No NMToken sent 
for host:45454
at 
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:206)
at 
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.init(ContainerManagementProtocolProxy.java:196)
at 
org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
at 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
at 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
at 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{code}
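
For reference, a hedged sketch of the two sleep-job submissions in this repro 
(the jar path, queue names, and durations are assumptions):

{code:title=repro sketch}
# app 1: fills the single node
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep -Dmapreduce.job.queuename=A -m 1 -r 0 -mt 600000
# app 2: triggers preemption of app 1, including its AM
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep -Dmapreduce.job.queuename=B -m 1 -r 0 -mt 600000
{code}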





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-1792) Add a CLI to kill yarn container

2014-03-06 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1792:
-

 Summary: Add a CLI to kill yarn container
 Key: YARN-1792
 URL: https://issues.apache.org/jira/browse/YARN-1792
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Affects Versions: 2.4.0
Reporter: Tassapol Athiapinya


One of my teammates saw an issue where there is a dangling container. The 
cause could have been a bug in the YARN application or an unexpected 
environment failure. It would be nice if YARN could handle this within the 
framework, so I suggest YARN provide a CLI to kill container(s).

Security should be obeyed: in a first phase, we could allow only the YARN 
admin to kill container(s). 

The method should also work on both Linux and Windows platforms.
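
A hypothetical shape for the proposed command, for illustration only (as far 
as I know, yarn container supports -list/-status today but has no -kill):

{code:title=hypothetical CLI}
$ yarn container -kill container_1400000000000_0001_01_000002
{code}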



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (YARN-1792) Add a CLI to kill yarn container

2014-03-06 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1792:
--

Description: 
One of my teammates saw an issue where there was a dangling container. The 
cause could have been a bug in the YARN application or an unexpected 
environment failure. It would be nice if YARN could handle this within the 
framework, so I suggest YARN provide a CLI to kill container(s).

Security should be obeyed: in a first phase, we could allow only the YARN 
admin to kill container(s). 

The method should also work on both Linux and Windows platforms.

  was:
One of my teammates saw an issue where there is a dangling container. The 
cause could have been a bug in the YARN application or an unexpected 
environment failure. It would be nice if YARN could handle this within the 
framework, so I suggest YARN provide a CLI to kill container(s).

Security should be obeyed: in a first phase, we could allow only the YARN 
admin to kill container(s). 

The method should also work on both Linux and Windows platforms.


 Add a CLI to kill yarn container
 

 Key: YARN-1792
 URL: https://issues.apache.org/jira/browse/YARN-1792
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Affects Versions: 2.4.0
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong

 One of my teammates saw an issue where there was a dangling container. The 
 cause could have been a bug in the YARN application or an unexpected 
 environment failure. It would be nice if YARN could handle this within the 
 framework, so I suggest YARN provide a CLI to kill container(s).
 Security should be obeyed: in a first phase, we could allow only the YARN 
 admin to kill container(s). 
 The method should also work on both Linux and Windows platforms.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-1788) AppsCompleted/AppsKilled metric is incorrect when MR job is killed with yarn application -kill

2014-03-05 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1788:
-

 Summary: AppsCompleted/AppsKilled metric is incorrect when MR job 
is killed with yarn application -kill
 Key: YARN-1788
 URL: https://issues.apache.org/jira/browse/YARN-1788
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.3.0
Reporter: Tassapol Athiapinya


Run an MR sleep job and kill the application while it is in RUNNING state, 
then observe the RM metrics.
Expected: AppsCompleted = 0, AppsKilled = 1.
Actual: AppsCompleted = 1, AppsKilled = 0.
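
Repro sketch (the jar path below is an assumption; any long-running MR job 
works, and the metrics can be read from the RM web UI or JMX):

{code:title=repro sketch}
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep -m 1 -r 0 -mt 600000 &
$ yarn application -list          # note the application ID while it is RUNNING
$ yarn application -kill <application ID>
# then compare AppsCompleted / AppsKilled in the RM metrics
{code}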



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (YARN-1673) Valid yarn kill application prints out help message.

2014-01-29 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1673:
-

 Summary: Valid yarn kill application prints out help message.
 Key: YARN-1673
 URL: https://issues.apache.org/jira/browse/YARN-1673
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.4.0
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.4.0


yarn application -kill <application ID> 
used to work previously. In 2.4.0 it prints the help message and does not kill 
the application.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1661) AppMaster logs says failing even if an application does succeed.

2014-01-27 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1661:
-

 Summary: AppMaster logs says failing even if an application does 
succeed.
 Key: YARN-1661
 URL: https://issues.apache.org/jira/browse/YARN-1661
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Affects Versions: 2.4.0
Reporter: Tassapol Athiapinya
 Fix For: 2.4.0


Run:
/usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
<distributed shell jar> -shell_command ls

Open the AM logs. The last line indicates AM failure even though the container 
logs show a good ls result.

{code}
2014-01-24 21:45:29,592 INFO  [main] distributedshell.ApplicationMaster 
(ApplicationMaster.java:finish(599)) - Application completed. Signalling finish 
to RM
2014-01-24 21:45:29,612 INFO  [main] impl.AMRMClientImpl 
(AMRMClientImpl.java:unregisterApplicationMaster(315)) - Waiting for 
application to be successfully unregistered.
2014-01-24 21:45:29,816 INFO  [main] distributedshell.ApplicationMaster 
(ApplicationMaster.java:main(267)) - Application Master failed. exiting
{code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1620) Add cli to simulate app failure in yarn container

2014-01-21 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1620:
-

 Summary: Add cli to simulate app failure in yarn container
 Key: YARN-1620
 URL: https://issues.apache.org/jira/browse/YARN-1620
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.4.0


It would be useful to have a generic CLI tool to simulate application failure 
at the YARN container level. An application running in a container can throw 
an Error/Exception. With such a CLI, anyone using YARN could test 
application-induced failures in various scenarios.
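
Until such a CLI exists, the closest approximation I know of is a 
distributed-shell command that fails on purpose (a sketch; the jar-path 
placeholder is an assumption):

{code:title=failure simulation today}
$ /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar <distributed shell jar> -shell_command "exit 1"
{code}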



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (YARN-1621) Add CLI to list states of yarn container-IDs/hosts

2014-01-21 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1621:
-

 Summary: Add CLI to list states of yarn container-IDs/hosts
 Key: YARN-1621
 URL: https://issues.apache.org/jira/browse/YARN-1621
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Tassapol Athiapinya
 Fix For: 2.4.0


As more applications are moved to YARN, we need a generic CLI to list the 
states of yarn containers and their hosts. Today, if a YARN application 
running in a container hangs, there is no way to deal with it other than 
manually killing its process.

For each running application, it is useful to differentiate between 
running/succeeded/failed/killed containers. 
{code:title=proposed yarn cli}
$ yarn application -list-containers <appId> <status>
where <status> is one of running/succeeded/killed/failed/all
{code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (YARN-1435) Distributed Shell should not run other commands except sh, and run the custom script at the same time.

2013-12-03 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13838349#comment-13838349
 ] 

Tassapol Athiapinya commented on YARN-1435:
---

Is it safe to presume /bin/bash exists on any Linux machine?

 Distributed Shell should not run other commands except sh, and run the 
 custom script at the same time.
 

 Key: YARN-1435
 URL: https://issues.apache.org/jira/browse/YARN-1435
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Affects Versions: 2.3.0
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Attachments: YARN-1435.1.patch, YARN-1435.1.patch, YARN-1435.2.patch


 Currently, if we want to run a custom script in DS, we can do it like this:
 --shell_command sh --shell_script custom_script.sh
 But it may be better to separate running shell_command and shell_script.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1435) Custom script cannot be run because it lacks the executable bit at container level

2013-11-21 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1435:
-

 Summary: Custom script cannot be run because it lacks the 
executable bit at container level
 Key: YARN-1435
 URL: https://issues.apache.org/jira/browse/YARN-1435
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Affects Versions: 2.2.1
Reporter: Tassapol Athiapinya
 Fix For: 2.2.1


Create a custom shell script and use -shell_command to point to it. The 
uploaded shell script cannot execute at the container level because the 
executable bit is missing when the container fetches the script from HDFS. 
Distributed shell should grant the executable bit in this case.
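
A rough sketch of the failing flow (the jar-path placeholder is an assumption, 
and the exact option combination may differ from what was used):

{code:title=repro sketch}
$ printf '#!/bin/bash\necho hello\n' > custom_script.sh
$ /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar <distributed shell jar> -shell_command ./custom_script.sh
# custom_script.sh is uploaded to HDFS and localized at the container without
# its executable bit, so the container fails to execute it
{code}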



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1334) YARN should give more info on errors when running failed distributed shell command

2013-10-21 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1334:
-

 Summary: YARN should give more info on errors when running failed 
distributed shell command
 Key: YARN-1334
 URL: https://issues.apache.org/jira/browse/YARN-1334
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: applications/distributed-shell
Affects Versions: 2.2.1
Reporter: Tassapol Athiapinya
 Fix For: 2.2.1


Run an incorrect command such as:
/usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
<distributedshell jar> -shell_command ./test1.sh -shell_script ./

It shows a shell exit code exception with no useful message. It should print 
out the stdout/stderr of the containers/AM to explain why it is failing.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1334) YARN should give more info on errors when running failed distributed shell command

2013-10-21 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13801319#comment-13801319
 ] 

Tassapol Athiapinya commented on YARN-1334:
---

Can you please check whether this part is correct? 
   -nodeAddress  = getConf().get(YarnConfiguration.NM_ADDRESS))
Does it dynamically retrieve the NM address based on the container ID? If not, 
you can safely remove this part.

 YARN should give more info on errors when running failed distributed shell 
 command
 --

 Key: YARN-1334
 URL: https://issues.apache.org/jira/browse/YARN-1334
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: applications/distributed-shell
Affects Versions: 2.2.1
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1334.1.patch


 Run an incorrect command such as:
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributedshell jar> -shell_command ./test1.sh -shell_script ./
 It shows a shell exit code exception with no useful message. It should print 
 out the stdout/stderr of the containers/AM to explain why it is failing.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1320) Custom log4j properties does not work properly.

2013-10-18 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1320:
-

 Summary: Custom log4j properties does not work properly.
 Key: YARN-1320
 URL: https://issues.apache.org/jira/browse/YARN-1320
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.2.1


Distributed shell cannot pick up custom log4j properties (specified with 
-log_properties). It always uses the default log4j properties.
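
Repro sketch (-log_properties is the option named in this jira; the jar-path 
placeholder and the properties file name are assumptions):

{code:title=repro sketch}
$ /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar <distributed shell jar> -shell_command ls -log_properties my-log4j.properties
# output still follows the default log4j properties, not my-log4j.properties
{code}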



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command

2013-10-16 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797423#comment-13797423
 ] 

Tassapol Athiapinya commented on YARN-1314:
---

[~xgong] can you please give an example of a command that passes multiple 
arguments? Ideally each argument should also allow embedded spaces.

As an example, similar to a regular shell, we can do:
cp "my file  1.txt" "my file  2.txt"
This can be more complex by allowing quotes inside each argument in addition 
to spaces:
cp "my\"file  1.txt" "my\"file  2.txt"
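
To make the expected behavior concrete, this is what a regular shell does with 
such arguments (plain bash, nothing DS-specific):

{code:title=regular shell: two arguments with embedded spaces}
$ printf '[%s]\n' "my file  1.txt" "my file  2.txt"
[my file  1.txt]
[my file  2.txt]
{code}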

 Cannot pass more than 1 argument to shell command
 -

 Key: YARN-1314
 URL: https://issues.apache.org/jira/browse/YARN-1314
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1314.1.patch


 Distributed shell cannot accept more than one parameter in the arguments 
 part. All of these commands are treated as one parameter:
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args 'My   name
 is  Teddy'
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args ''My   name'
 'is  Teddy''
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args 'My   name' 
'is  Teddy'



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command

2013-10-16 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797448#comment-13797448
 ] 

Tassapol Athiapinya commented on YARN-1314:
---

Can each argument be as complex as I describe above? That would match regular 
shell behavior.

 Cannot pass more than 1 argument to shell command
 -

 Key: YARN-1314
 URL: https://issues.apache.org/jira/browse/YARN-1314
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1314.1.patch


 Distributed shell cannot accept more than one parameter in the arguments 
 part. All of these commands are treated as one parameter:
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args 'My   name
 is  Teddy'
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args ''My   name'
 'is  Teddy''
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args 'My   name' 
'is  Teddy'



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command

2013-10-16 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797631#comment-13797631
 ] 

Tassapol Athiapinya commented on YARN-1314:
---

Looking good. Thanks for the patch!

 Cannot pass more than 1 argument to shell command
 -

 Key: YARN-1314
 URL: https://issues.apache.org/jira/browse/YARN-1314
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1

 Attachments: YARN-1314.1.patch, YARN-1314.1.patch, YARN-1314.2.patch


 Distributed shell cannot accept more than one parameter in the arguments 
 part. All of these commands are treated as one parameter:
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args 'My   name
 is  Teddy'
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args ''My   name'
 'is  Teddy''
 /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar <distributed shell jar> -shell_command echo -shell_args 'My   name' 
'is  Teddy'



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (YARN-1303) Allow multiple commands separating with ;

2013-10-14 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1303:
-

 Summary: Allow multiple commands separating with ;
 Key: YARN-1303
 URL: https://issues.apache.org/jira/browse/YARN-1303
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.2.1


In a shell, we can do "ls; ls" to run two commands at once. 

In distributed shell, this does not work. We should improve it to allow this. 
There are practical use cases that I know of for running multiple commands or 
for setting environment variables before a command.
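
For illustration, the regular-shell behavior and a hedged sketch of the 
distributed-shell equivalent this asks for (the jar-path placeholder is an 
assumption):

{code:title=multiple commands}
# regular shell: two commands in one line
$ ls; ls
# desired distributed-shell equivalent once this is fixed (sketch):
$ /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar <distributed shell jar> -shell_command 'export FOO=bar; echo done'
{code}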



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1303) Allow multiple commands separating with ;

2013-10-14 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13794300#comment-13794300
 ] 

Tassapol Athiapinya commented on YARN-1303:
---

The script option can be substituted in all cases. I think this is for 
convenience and to keep behavior as similar to a normal shell as possible.

 Allow multiple commands separating with ;
 -

 Key: YARN-1303
 URL: https://issues.apache.org/jira/browse/YARN-1303
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.2.1


 In a shell, we can do "ls; ls" to run two commands at once. 
 In distributed shell, this does not work. We should improve it to allow this. 
 There are practical use cases that I know of for running multiple commands 
 or for setting environment variables before a command.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (YARN-1168) Cannot run echo \"Hello World\"

2013-10-07 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788445#comment-13788445
 ] 

Tassapol Athiapinya commented on YARN-1168:
---

Using a double-quote wrapped in single-quotes instead of a bare double-quote 
will work:

/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar 
<jar> -shell_command echo -shell_args '"hello world"'

 Cannot run echo \"Hello World\"
 -

 Key: YARN-1168
 URL: https://issues.apache.org/jira/browse/YARN-1168
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.2.0


 Run
 $ ssh localhost echo \"Hello World\"
 with bash, and it succeeds: Hello World is shown in stdout.
 Run distributed shell with a similar echo command, that is, either
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args \"Hello World\"
 or
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args "Hello World"
 {code:title=yarn logs -- only hello is shown}
 LogType: stdout
 LogLength: 6
 Log Contents:
 hello
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (YARN-1168) Cannot run echo \"Hello World\"

2013-10-07 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1168.
---

Resolution: Not A Problem

 Cannot run echo \"Hello World\"
 -

 Key: YARN-1168
 URL: https://issues.apache.org/jira/browse/YARN-1168
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.2.0


 Run
 $ ssh localhost echo \"Hello World\"
 with bash succeeds. Hello World is shown in stdout.
 Run distributed shell with a similar echo command. That is either
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args \"Hello World\"
 or
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args Hello World
 {code:title=yarn logs -- only hello is shown}
 LogType: stdout
 LogLength: 6
 Log Contents:
 hello
 {code}





[jira] [Reopened] (YARN-1276) Secure cluster can have random failure to launch a container.

2013-10-05 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya reopened YARN-1276:
---


 Secure cluster can have random failure to launch a container.
 -

 Key: YARN-1276
 URL: https://issues.apache.org/jira/browse/YARN-1276
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Resolved] (YARN-1276) Secure cluster can have random failure to launch a container.

2013-10-05 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1276.
---

Resolution: Duplicate

Correction: closed as duplicate of YARN-1274

 Secure cluster can have random failure to launch a container.
 -

 Key: YARN-1276
 URL: https://issues.apache.org/jira/browse/YARN-1276
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Resolved] (YARN-1276) Secure cluster can have random failure to launch a container.

2013-10-05 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1276.
---

Resolution: Invalid

 Secure cluster can have random failure to launch a container.
 -

 Key: YARN-1276
 URL: https://issues.apache.org/jira/browse/YARN-1276
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Created] (YARN-1275) Distributed shell in secure cluster can randomly fail.

2013-10-04 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1275:
-

 Summary: Distributed shell in secure cluster can randomly fail.
 Key: YARN-1275
 URL: https://issues.apache.org/jira/browse/YARN-1275
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Created] (YARN-1276) Secure cluster can have random failure to launch a container.

2013-10-04 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1276:
-

 Summary: Secure cluster can have random failure to launch a 
container.
 Key: YARN-1276
 URL: https://issues.apache.org/jira/browse/YARN-1276
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Resolved] (YARN-1275) Distributed shell in secure cluster can randomly fail.

2013-10-04 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1275.
---

Resolution: Invalid

Closing as invalid.

 Distributed shell in secure cluster can randomly fail.
 --

 Key: YARN-1275
 URL: https://issues.apache.org/jira/browse/YARN-1275
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Resolved] (YARN-1275) Distributed shell in secure cluster can randomly fail.

2013-10-04 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1275.
---

Resolution: Duplicate

 Distributed shell in secure cluster can randomly fail.
 --

 Key: YARN-1275
 URL: https://issues.apache.org/jira/browse/YARN-1275
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Commented] (YARN-1275) Distributed shell in secure cluster can randomly fail.

2013-10-04 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786907#comment-13786907
 ] 

Tassapol Athiapinya commented on YARN-1275:
---

Correction: Closing as duplicate of YARN-1274.

 Distributed shell in secure cluster can randomly fail.
 --

 Key: YARN-1275
 URL: https://issues.apache.org/jira/browse/YARN-1275
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Reopened] (YARN-1275) Distributed shell in secure cluster can randomly fail.

2013-10-04 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya reopened YARN-1275:
---


 Distributed shell in secure cluster can randomly fail.
 --

 Key: YARN-1275
 URL: https://issues.apache.org/jira/browse/YARN-1275
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.2-beta








[jira] [Updated] (YARN-1157) ResourceManager UI has invalid tracking URL link for distributed shell application

2013-09-25 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1157:
--

Priority: Critical  (was: Major)

 ResourceManager UI has invalid tracking URL link for distributed shell 
 application
 --

 Key: YARN-1157
 URL: https://issues.apache.org/jira/browse/YARN-1157
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
Priority: Critical
 Fix For: 2.1.2-beta

 Attachments: YARN-1157.1.patch, YARN-1157.2.patch, YARN-1157.2.patch, 
 YARN-1157.3.patch, YARN-1157.4.patch, YARN-1157.5.patch, YARN-1157.6.patch


 Submit a YARN distributed shell application and go to the ResourceManager web 
 UI. The application appears. In the Tracking UI column there is a History 
 link. Clicking that link shows HTTP error 500 instead of the application 
 master web UI.



[jira] [Updated] (YARN-1157) ResourceManager UI has invalid tracking URL link for distributed shell application

2013-09-25 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1157:
--

Priority: Major  (was: Critical)

 ResourceManager UI has invalid tracking URL link for distributed shell 
 application
 --

 Key: YARN-1157
 URL: https://issues.apache.org/jira/browse/YARN-1157
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Tassapol Athiapinya
Assignee: Xuan Gong
 Fix For: 2.1.2-beta

 Attachments: YARN-1157.1.patch, YARN-1157.2.patch, YARN-1157.2.patch, 
 YARN-1157.3.patch, YARN-1157.4.patch, YARN-1157.5.patch, YARN-1157.6.patch, 
 YARN-1157.7.patch


 Submit a YARN distributed shell application and go to the ResourceManager web 
 UI. The application appears. In the Tracking UI column there is a History 
 link. Clicking that link shows HTTP error 500 instead of the application 
 master web UI.



[jira] [Created] (YARN-1173) Run -shell_command echo Hello has empty stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1173:
-

 Summary: Run -shell_command echo Hello has empty stdout
 Key: YARN-1173
 URL: https://issues.apache.org/jira/browse/YARN-1173
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


Run:
$ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
-jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar 
-shell_command echo Hello

Get logs with YARN logs:
{panel}
-bash-4.1$ yarn logs -applicationId application_1378424977532_0071


Container: container_1378424977532_0071_01_02 on myhost
===
LogType: stderr
LogLength: 0
Log Contents:

LogType: stdout
LogLength: 1
Log Contents:
{panel}



[jira] [Commented] (YARN-1175) LogLength shown in $ yarn logs is 1 character longer than actual stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762375#comment-13762375
 ] 

Tassapol Athiapinya commented on YARN-1175:
---

I understand that point. As an end user, it feels a little bit weird. On a Unix 
filesystem, an empty file has a size of 0 bytes. In this case, even if we run 
-shell_command echo, LogLength shows up as 1.
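
A minimal sketch of where that one byte comes from: echo appends a trailing 
newline even when it prints nothing else.

{code}
-bash-4.1$ printf '' | wc -c    # truly empty output
0
-bash-4.1$ echo | wc -c         # echo with no arguments still emits a newline
1
-bash-4.1$ echo hello | wc -c   # 5 characters plus the trailing newline
6
{code}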

 LogLength shown in $ yarn logs is 1 character longer than actual stdout
 ---

 Key: YARN-1175
 URL: https://issues.apache.org/jira/browse/YARN-1175
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


 Run distributed shell with -shell_command pwd, then run $ yarn logs on that 
 application.
 Count the number of characters in the Log Contents field. The count will be 
 smaller than the LogLength field by one.
 {code:title=mock-up yarn logs output}
 $ /usr/bin/yarn logs -applicationId application_1378424977532_0088
 ...
 LogType: stdout
 LogLength: 87
 Log Contents:
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 {code}
 
 {panel}
 The length of 
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 is 86.
 {panel}



[jira] [Commented] (YARN-1175) LogLength shown in $ yarn logs is 1 character longer than actual stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762379#comment-13762379
 ] 

Tassapol Athiapinya commented on YARN-1175:
---

Also, for a distributed shell run of echo, AppMaster.stdout, which does not 
print anything, has a LogLength of 0. There is an inconsistency between the app 
master, which never calls a print command, and the echo stdout, which prints 
only an empty string followed by a trailing newline.

 LogLength shown in $ yarn logs is 1 character longer than actual stdout
 ---

 Key: YARN-1175
 URL: https://issues.apache.org/jira/browse/YARN-1175
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


 Run distributed shell with -shell_command pwd, then run $ yarn logs on that 
 application.
 Count the number of characters in the Log Contents field. The count will be 
 smaller than the LogLength field by one.
 {code:title=mock-up yarn logs output}
 $ /usr/bin/yarn logs -applicationId application_1378424977532_0088
 ...
 LogType: stdout
 LogLength: 87
 Log Contents:
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 {code}
 
 {panel}
 The length of 
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 is 86.
 {panel}



[jira] [Resolved] (YARN-1175) LogLength shown in $ yarn logs is 1 character longer than actual stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1175.
---

Resolution: Not A Problem

It works as intended.

 LogLength shown in $ yarn logs is 1 character longer than actual stdout
 ---

 Key: YARN-1175
 URL: https://issues.apache.org/jira/browse/YARN-1175
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


 Run distributed shell with -shell_command pwd, then run $ yarn logs on that 
 application.
 Count the number of characters in the Log Contents field. The count will be 
 smaller than the LogLength field by one.
 {code:title=mock-up yarn logs output}
 $ /usr/bin/yarn logs -applicationId application_1378424977532_0088
 ...
 LogType: stdout
 LogLength: 87
 Log Contents:
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 {code}
 
 {panel}
 The length of 
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 is 86.
 {panel}



[jira] [Commented] (YARN-1175) LogLength shown in $ yarn logs is 1 character longer than actual stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762391#comment-13762391
 ] 

Tassapol Athiapinya commented on YARN-1175:
---

Thank you, [~jlowe], for the clarification. I will close this issue.

 LogLength shown in $ yarn logs is 1 character longer than actual stdout
 ---

 Key: YARN-1175
 URL: https://issues.apache.org/jira/browse/YARN-1175
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


 Run distributed shell with -shell_command pwd, then run $ yarn logs on that 
 application.
 Count the number of characters in the Log Contents field. The count will be 
 smaller than the LogLength field by one.
 {code:title=mock-up yarn logs output}
 $ /usr/bin/yarn logs -applicationId application_1378424977532_0088
 ...
 LogType: stdout
 LogLength: 87
 Log Contents:
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 {code}
 
 {panel}
 The length of 
 /mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
 is 86.
 {panel}



[jira] [Created] (YARN-1175) LogLength shown in $ yarn logs is 1 character longer than actual stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1175:
-

 Summary: LogLength shown in $ yarn logs is 1 character longer than 
actual stdout
 Key: YARN-1175
 URL: https://issues.apache.org/jira/browse/YARN-1175
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


Run distributed shell with -shell_command pwd, then run $ yarn logs on that 
application.

Count the number of characters in the Log Contents field. The count will be 
smaller than the LogLength field by one.

{code:title=mock-up yarn logs output}
$ /usr/bin/yarn logs -applicationId application_1378424977532_0088
...
LogType: stdout
LogLength: 87
Log Contents:
/mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
{code}


{panel}
The length of 
/mypath/appcache/application_1378424977532_0088/container_1378424977532_0088_01_02
is 86.
{panel}



[jira] [Resolved] (YARN-1173) Run -shell_command echo Hello has empty stdout

2013-09-09 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya resolved YARN-1173.
---

Resolution: Invalid

This is invalid usage. The distributed shell user has to separate the shell 
command from the shell arguments. To run echo hello, the command has to be:

$ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
-jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar 
-shell_command echo -shell_args hello
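
For reference, a sketch of the expected container stdout once the command and 
argument are separated; the LogLength of 6 counts hello plus echo's trailing 
newline:

{code}
LogType: stdout
LogLength: 6
Log Contents:
hello
{code}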

 Run -shell_command echo Hello has empty stdout
 

 Key: YARN-1173
 URL: https://issues.apache.org/jira/browse/YARN-1173
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


 Run:
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-*.jar 
 -shell_command echo Hello
 Get logs with YARN logs:
 {panel}
 -bash-4.1$ yarn logs -applicationId application_1378424977532_0071
 Container: container_1378424977532_0071_01_02 on myhost
 ===
 LogType: stderr
 LogLength: 0
 Log Contents:
 LogType: stdout
 LogLength: 1
 Log Contents:
 {panel}



[jira] [Updated] (YARN-1168) Cannot run echo \"Hello World\"

2013-09-09 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1168:
--

Description: 
Run
$ ssh localhost echo \"Hello World\"
with bash succeeds. Hello World is shown in stdout.

Run distributed shell with a similar echo command. That is either
$ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
-jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
-shell_command echo -shell_args \"Hello World\"
or
$ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
-jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
-shell_command echo -shell_args Hello World

{code:title=yarn logs -- only hello is shown}
LogType: stdout
LogLength: 6
Log Contents:
hello
{code}

  was:
Run
$ ssh localhost echo \"Hello World\"
with bash succeeds. Hello World is shown in stdout.

Run distributed shell with a similar echo command. That is
$ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
-jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
-shell_command echo \"Hello World\"

{code:title=partial console logs}
distributedshell.Client: Completed setting up app master command 
$JAVA_HOME/bin/java -Xmx10m 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster 
--container_memory 10 --num_containers 1 --priority 0 --shell_command echo 
Hello World 1><LOG_DIR>/AppMaster.stdout 2><LOG_DIR>/AppMaster.stderr
...
line 28: syntax error: unexpected end of file

at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
at org.apache.hadoop.util.Shell.run(Shell.java:373)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:258)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:74)
...
distributedshell.Client: Application failed to complete successfully
{code}


 Cannot run echo \"Hello World\"
 -

 Key: YARN-1168
 URL: https://issues.apache.org/jira/browse/YARN-1168
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.1.1-beta


 Run
 $ ssh localhost echo \"Hello World\"
 with bash succeeds. Hello World is shown in stdout.
 Run distributed shell with a similar echo command. That is either
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args \"Hello World\"
 or
 $ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
 -jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
 -shell_command echo -shell_args Hello World
 {code:title=yarn logs -- only hello is shown}
 LogType: stdout
 LogLength: 6
 Log Contents:
 hello
 {code}



[jira] [Created] (YARN-1168) Cannot run echo \"Hello World\"

2013-09-07 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1168:
-

 Summary: Cannot run echo \"Hello World\"
 Key: YARN-1168
 URL: https://issues.apache.org/jira/browse/YARN-1168
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
Priority: Critical
 Fix For: 2.1.1-beta


Run
$ ssh localhost echo \"Hello World\"
with bash succeeds. Hello World is shown in stdout.

Run distributed shell with a similar echo command. That is
$ /usr/bin/yarn  org.apache.hadoop.yarn.applications.distributedshell.Client 
-jar /usr/lib/hadoop-yarn/hadoop-yarn-applications-distributedshell-2.*.jar 
-shell_command echo \"Hello World\"

{code:title=partial console logs}
distributedshell.Client: Completed setting up app master command 
$JAVA_HOME/bin/java -Xmx10m 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster 
--container_memory 10 --num_containers 1 --priority 0 --shell_command echo 
Hello World 1><LOG_DIR>/AppMaster.stdout 2><LOG_DIR>/AppMaster.stderr
...
line 28: syntax error: unexpected end of file

at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
at org.apache.hadoop.util.Shell.run(Shell.java:373)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:258)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:74)
...
distributedshell.Client: Application failed to complete successfully
{code}



[jira] [Created] (YARN-1167) Submitted distributed shell application shows appMasterHost = empty

2013-09-06 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1167:
-

 Summary: Submitted distributed shell application shows 
appMasterHost = empty
 Key: YARN-1167
 URL: https://issues.apache.org/jira/browse/YARN-1167
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications/distributed-shell
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


Submit a distributed shell application. Once the application reaches the RUNNING 
state, the app master host should not be empty. In reality, it is empty.

==console logs==
distributedshell.Client: Got application report from ASM for, appId=12, 
clientToAMToken=null, appDiagnostics=, appMasterHost=, appQueue=default, 
appMasterRpcPort=0, appStartTime=1378505161360, yarnAppState=RUNNING, 
distributedFinalState=UNDEFINED, 




[jira] [Created] (YARN-1157) ResourceManager UI has invalid tracking URL link for distributed shell application

2013-09-05 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1157:
-

 Summary: ResourceManager UI has invalid tracking URL link for 
distributed shell application
 Key: YARN-1157
 URL: https://issues.apache.org/jira/browse/YARN-1157
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


Submit a YARN distributed shell application and go to the ResourceManager web 
UI. The application appears. In the Tracking UI column there is a History link. 
Clicking that link shows HTTP error 500 instead of the application master web 
UI.



[jira] [Created] (YARN-1158) ResourceManager UI has application stdout missing if application stdout is not in the same directory as AppMaster stdout

2013-09-05 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1158:
-

 Summary: ResourceManager UI has application stdout missing if 
application stdout is not in the same directory as AppMaster stdout
 Key: YARN-1158
 URL: https://issues.apache.org/jira/browse/YARN-1158
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Tassapol Athiapinya
 Fix For: 2.1.1-beta


Configure yarn-site.xml's yarn.nodemanager.local-dirs to multiple directories 
and turn on log aggregation. Run a distributed shell application. If the 
application writes AppMaster.stdout in one directory and stdout in another, go 
to the ResourceManager web UI and open the container logs: only AppMaster.stdout 
would appear.
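
A sketch of the precondition, with hypothetical paths, showing yarn-site.xml 
pointing the NodeManager at two local directories:

{code}
-bash-4.1$ grep -A 1 'yarn.nodemanager.local-dirs' yarn-site.xml
    <name>yarn.nodemanager.local-dirs</name>
    <value>/grid/0/yarn/local,/grid/1/yarn/local</value>
{code}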



[jira] [Created] (YARN-1131) $ yarn logs should return a message that log aggregation is in progress if the YARN application is running

2013-08-30 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1131:
-

 Summary: $ yarn logs should return a message that log aggregation is 
in progress if the YARN application is running
 Key: YARN-1131
 URL: https://issues.apache.org/jira/browse/YARN-1131
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
Priority: Minor
 Fix For: 2.1.1-beta


In the case when log aggregation is enabled, if a user submits a MapReduce job 
and runs $ yarn logs -applicationId <app ID> while the YARN application is 
running, the command returns no message and drops the user back to the shell. It 
would be nice to tell the user that log aggregation is in progress.

{code}
-bash-4.1$ /usr/bin/yarn logs -applicationId application_1377900193583_0002
-bash-4.1$
{code}

At the same time, if an invalid application ID is given, the YARN CLI should say 
that the application ID is incorrect rather than throw a NoSuchElementException.
{code}
$ /usr/bin/yarn logs -applicationId application_0
Exception in thread "main" java.util.NoSuchElementException
at com.google.common.base.AbstractIterator.next(AbstractIterator.java:75)
at 
org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:124)
at 
org.apache.hadoop.yarn.util.ConverterUtils.toApplicationId(ConverterUtils.java:119)
at org.apache.hadoop.yarn.logaggregation.LogDumper.run(LogDumper.java:110)
at org.apache.hadoop.yarn.logaggregation.LogDumper.main(LogDumper.java:255)

{code}




[jira] [Created] (YARN-1118) Improve help message for $ yarn node

2013-08-28 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1118:
-

 Summary: Improve help message for $ yarn node
 Key: YARN-1118
 URL: https://issues.apache.org/jira/browse/YARN-1118
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya


There is a standardization of the help message in YARN-1080. It would be nice to 
have similar changes for $ yarn node; a sketch follows.
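
A sketch of what the standardized usage line might look like, mirroring the 
YARN-1080 proposal; the exact option set shown is an assumption for 
illustration, not the actual CLI output:

{code}
-bash-4.1$ yarn node
usage: yarn node -list [OPTIONS] | yarn node -status <node id>
{code}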



[jira] [Created] (YARN-1117) Improve help message for $ yarn applications

2013-08-28 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1117:
-

 Summary: Improve help message for $ yarn applications
 Key: YARN-1117
 URL: https://issues.apache.org/jira/browse/YARN-1117
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya


There is a standardization of the help message in YARN-1080. It would be nice to 
have similar changes for $ yarn applications.



[jira] [Updated] (YARN-1094) RM restart throws Null pointer Exception in Secure Env

2013-08-24 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1094:
--

Attachment: YARN-1094-20130824.1.txt

Attaching YARN-1094-20130824.1.txt again (no changes) to kick off another 
Jenkins build

 RM restart throws Null pointer Exception in Secure Env
 --

 Key: YARN-1094
 URL: https://issues.apache.org/jira/browse/YARN-1094
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: secure env
Reporter: yeshavora
Assignee: Vinod Kumar Vavilapalli
Priority: Blocker
 Attachments: YARN-1094-20130824.1.txt, YARN-1094-20130824.1.txt, 
 YARN-1094-20130824.txt


 Enable the RM restart feature and restart the ResourceManager while a job is 
 running. The ResourceManager fails to start with the error below:
 2013-08-23 17:57:40,705 INFO  resourcemanager.RMAppManager 
 (RMAppManager.java:recover(370)) - Recovering application 
 application_1377280618693_0001
 2013-08-23 17:57:40,763 ERROR resourcemanager.ResourceManager 
 (ResourceManager.java:serviceStart(617)) - Failed to load/recover state
 java.lang.NullPointerException
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.setTimerForTokenRenewal(DelegationTokenRenewer.java:371)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.addApplication(DelegationTokenRenewer.java:307)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:291)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:371)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:819)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:613)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:832)
 2013-08-23 17:57:40,766 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
 Exiting with status 1
   
   



[jira] [Created] (YARN-1080) Standardize help message for required parameter of $ yarn logs

2013-08-19 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1080:
-

 Summary: Standardize help message for required parameter of $ yarn 
logs
 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
 Fix For: 2.1.0-beta


YARN CLI has a logs command ($ yarn logs). The command always requires the 
parameter -applicationId <arg>. However, the help message of the command does 
not make this clear: it lists -applicationId as an optional parameter. If I 
don't set it, the YARN CLI complains that it is missing. It is better to use the 
standard required-parameter notation that other Linux commands use in their help 
messages; any user familiar with that convention can more easily see that this 
parameter is required.

{code:title=current help message}
-bash-4.1$ yarn logs
usage: general options are:
 -applicationId <arg>   ApplicationId (required)
 -appOwner <arg>        AppOwner (assumed to be current user if not
                        specified)
 -containerId <arg>     ContainerId (must be specified if node address is
                        specified)
 -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                        specified if container id is specified)
{code}

{code:title=proposed help message}
-bash-4.1$ yarn logs
usage: yarn logs -applicationId <application ID> [OPTIONS]
general options are:
 -appOwner <arg>        AppOwner (assumed to be current user if not
                        specified)
 -containerId <arg>     ContainerId (must be specified if node address is
                        specified)
 -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                        specified if container id is specified)
{code}



[jira] [Updated] (YARN-1080) Improve help message for $ yarn logs

2013-08-19 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1080:
--

Description: 
There are 2 parts I am proposing in this jira. They can be fixed together in 
one patch.

1. Standardize the help message for the required parameter of $ yarn logs
YARN CLI has a logs command ($ yarn logs). The command always requires the 
parameter -applicationId <arg>. However, the help message of the command does 
not make this clear: it lists -applicationId as an optional parameter. If I 
don't set it, the YARN CLI complains that it is missing. It is better to use the 
standard required-parameter notation that other Linux commands use in their help 
messages; any user familiar with that convention can more easily see that this 
parameter is required.

{code:title=current help message}
-bash-4.1$ yarn logs
usage: general options are:
 -applicationId <arg>   ApplicationId (required)
 -appOwner <arg>        AppOwner (assumed to be current user if not
                        specified)
 -containerId <arg>     ContainerId (must be specified if node address is
                        specified)
 -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                        specified if container id is specified)
{code}

{code:title=proposed help message}
-bash-4.1$ yarn logs
usage: yarn logs -applicationId <application ID> [OPTIONS]
general options are:
 -appOwner <arg>        AppOwner (assumed to be current user if not
                        specified)
 -containerId <arg>     ContainerId (must be specified if node address is
                        specified)
 -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                        specified if container id is specified)
{code}

2. Add a description to the help command. As far as I know, a user cannot get 
logs for a running job. Since I spent some time trying to get logs of running 
applications, it would be nice to say this in the command description.
{code:title=proposed help}
Retrieve logs for completed/killed YARN application
usage: general options are...
{code}


  was:
YARN CLI has a logs command ($ yarn logs). The command always requires the 
parameter -applicationId <arg>. However, the help message of the command does 
not make this clear: it lists -applicationId as an optional parameter. If I 
don't set it, the YARN CLI complains that it is missing. It is better to use the 
standard required-parameter notation that other Linux commands use in their help 
messages; any user familiar with that convention can more easily see that this 
parameter is required.

{code:title=current help message}
-bash-4.1$ yarn logs
usage: general options are:
 -applicationId <arg>   ApplicationId (required)
 -appOwner <arg>        AppOwner (assumed to be current user if not
                        specified)
 -containerId <arg>     ContainerId (must be specified if node address is
                        specified)
 -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                        specified if container id is specified)
{code}

{code:title=proposed help message}
-bash-4.1$ yarn logs
usage: yarn logs -applicationId <application ID> [OPTIONS]
general options are:
 -appOwner <arg>        AppOwner (assumed to be current user if not
                        specified)
 -containerId <arg>     ContainerId (must be specified if node address is
                        specified)
 -nodeAddress <arg>     NodeAddress in the format nodename:port (must be
                        specified if container id is specified)
{code}

Summary: Improve help message for $ yarn logs  (was: Standardize help 
message for required parameter of $ yarn logs)

 Improve help message for $ yarn logs
 

 Key: YARN-1080
 URL: https://issues.apache.org/jira/browse/YARN-1080
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
 Fix For: 2.1.0-beta


 There are 2 parts I am proposing in this jira. They can be fixed together in 
 one patch.
 1. Standardize the help message for the required parameter of $ yarn logs
 YARN CLI has a logs command ($ yarn logs). The command always requires the 
 parameter -applicationId <arg>. However, the help message of the command does 
 not make this clear: it lists -applicationId as an optional parameter. If I 
 don't set it, the YARN CLI complains that it is missing. It is better to use 
 the standard required-parameter notation that other Linux commands use; any 
 user familiar with that convention can more easily see that this parameter is 
 required.
 {code:title=current help message}
 -bash-4.1$ yarn logs
 usage: general options are:
  -applicationId <arg>   ApplicationId (required)
  -appOwner <arg>        AppOwner (assumed to be current user if not
                         specified)
  -containerId <arg>     ContainerId (must be specified if node 

[jira] [Created] (YARN-1081) Minor improvement to output header for $ yarn node -list

2013-08-19 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1081:
-

 Summary: Minor improvement to output header for $ yarn node -list
 Key: YARN-1081
 URL: https://issues.apache.org/jira/browse/YARN-1081
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: client
Reporter: Tassapol Athiapinya
 Fix For: 2.1.0-beta


Output of $ yarn node -list shows the number of running containers at each 
node. I found a case where a new YARN user thought this was a container ID, used 
it later in other YARN commands, and got an error due to the misunderstanding.

{code:title=current output}
2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id Node-State  
Node-Http-Address   Running-Containers
2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454   RUNNING  
myhost:50060   2
{code}

{code:title=proposed output}
2013-07-31 04:00:37,814|beaver.machine|INFO|RUNNING: /usr/bin/yarn node -list
2013-07-31 04:00:38,746|beaver.machine|INFO|Total Nodes:1
2013-07-31 04:00:38,747|beaver.machine|INFO|Node-Id Node-State  
Node-Http-Address   Number-of-Running-Containers
2013-07-31 04:00:38,747|beaver.machine|INFO|myhost:45454   RUNNING  
myhost:50060   2
{code}



[jira] [Updated] (YARN-1074) Provides command line to clean up application list

2013-08-16 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-1074:
--

Description: 
Once a user brings up the YARN daemons and runs jobs, the jobs stay in the 
output returned by $ yarn application -list even after they complete. We want 
the YARN command line to clean up this list. Specifically, we want to remove 
applications in the FINISHED state (not Final-State) or KILLED state from the 
result.

{code}
[user1@host1 ~]$ yarn application -list
Total Applications:150
Application-Id  Application-NameApplication-Type
  User   Queue   State   Final-State   
ProgressTracking-URL
application_1374638600275_0109 Sleep job   MAPREDUCE
user1  default  KILLEDKILLED
   100%host1:54059
application_1374638600275_0121 Sleep job   MAPREDUCE
user1  defaultFINISHED SUCCEEDED
   100% host1:19888/jobhistory/job/job_1374638600275_0121
application_1374638600275_0020 Sleep job   MAPREDUCE
user1  defaultFINISHED SUCCEEDED
   100% host1:19888/jobhistory/job/job_1374638600275_0020
application_1374638600275_0038 Sleep job   MAPREDUCE
user1  default  

{code}


  was:
Once a user brings up the YARN daemons and runs jobs, the jobs stay in the 
output returned by $ yarn application -list even after they complete. We want 
the YARN command line to clean up this list. Specifically, we want to remove 
applications in the FINISHED state (not Final-State) from the result.

{code}
[user1@host1 ~]$ yarn application -list
Total Applications:150
Application-Id  Application-NameApplication-Type
  User   Queue   State   Final-State   
ProgressTracking-URL
application_1374638600275_0109 Sleep job   MAPREDUCE
user1  default  KILLEDKILLED
   100%host1:54059
application_1374638600275_0121 Sleep job   MAPREDUCE
user1  defaultFINISHED SUCCEEDED
   100% host1:19888/jobhistory/job/job_1374638600275_0121
application_1374638600275_0020 Sleep job   MAPREDUCE
user1  defaultFINISHED SUCCEEDED
   100% host1:19888/jobhistory/job/job_1374638600275_0020
application_1374638600275_0038 Sleep job   MAPREDUCE
user1  default  

{code}



 Provides command line to clean up application list
 --

 Key: YARN-1074
 URL: https://issues.apache.org/jira/browse/YARN-1074
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: client
Reporter: Tassapol Athiapinya
 Fix For: 2.1.0-beta


 Once a user brings up the YARN daemons and runs jobs, the jobs stay in the 
 output returned by $ yarn application -list even after they complete. We want 
 the YARN command line to clean up this list. Specifically, we want to remove 
 applications in the FINISHED state (not Final-State) or KILLED state from the 
 result.
 {code}
 [user1@host1 ~]$ yarn application -list
 Total Applications:150
 Application-IdApplication-Name
 Application-Type  User   Queue   State   
 Final-State   ProgressTracking-URL
 application_1374638600275_0109   Sleep job   
 MAPREDUCEuser1  default  KILLED
 KILLED   100%host1:54059
 application_1374638600275_0121   Sleep job   
 MAPREDUCEuser1  defaultFINISHED 
 SUCCEEDED   100% host1:19888/jobhistory/job/job_1374638600275_0121
 application_1374638600275_0020   Sleep job   
 MAPREDUCEuser1  defaultFINISHED 
 SUCCEEDED   100% host1:19888/jobhistory/job/job_1374638600275_0020
 application_1374638600275_0038   Sleep job   
 MAPREDUCEuser1  default  
 
 {code}



[jira] [Created] (YARN-1074) Provides command line to clean up application list

2013-08-16 Thread Tassapol Athiapinya (JIRA)
Tassapol Athiapinya created YARN-1074:
-

 Summary: Provides command line to clean up application list
 Key: YARN-1074
 URL: https://issues.apache.org/jira/browse/YARN-1074
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: client
Reporter: Tassapol Athiapinya
 Fix For: 2.1.0-beta


Once a user brings up the YARN daemons and runs jobs, the jobs stay in the 
output returned by $ yarn application -list even after they complete. We want 
the YARN command line to clean up this list. Specifically, we want to remove 
applications in the FINISHED state (not Final-State) from the result.

{code}
[user1@host1 ~]$ yarn application -list
Total Applications:150
Application-Id  Application-NameApplication-Type
  User   Queue   State   Final-State   
ProgressTracking-URL
application_1374638600275_0109 Sleep job   MAPREDUCE
user1  default  KILLEDKILLED
   100%host1:54059
application_1374638600275_0121 Sleep job   MAPREDUCE
user1  defaultFINISHED SUCCEEDED
   100% host1:19888/jobhistory/job/job_1374638600275_0121
application_1374638600275_0020 Sleep job   MAPREDUCE
user1  defaultFINISHED SUCCEEDED
   100% host1:19888/jobhistory/job/job_1374638600275_0020
application_1374638600275_0038 Sleep job   MAPREDUCE
user1  default  

{code}
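
As an aside, a sketch of narrowing the listing by state instead of purging it; 
the -appStates filter is assumed here from later Hadoop releases and is not the 
cleanup command this JIRA requests:

{code}
-bash-4.1$ yarn application -list -appStates RUNNING
{code}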




[jira] [Updated] (YARN-890) The roundup for memory values on resource manager UI is misleading

2013-07-10 Thread Tassapol Athiapinya (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tassapol Athiapinya updated YARN-890:
-

Attachment: Screen Shot 2013-07-10 at 10.43.34 AM.png

Attached screenshot. The memory shown in the UI is 5 GB. In fact, the memory 
given in the configuration is a little above 4 GB; that is why the original 
reporter says the rounding up is misleading. It would be better to round it 
down to 4 GB.
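
A minimal sketch of the arithmetic behind the complaint, assuming the UI rounds 
the configured 4192 MB up to whole gigabytes:

{code}
-bash-4.1$ echo $((4192 / 1024))           # round-down view: 4 GB
4
-bash-4.1$ echo $(((4192 + 1023) / 1024))  # round-up view shown by the UI: 5 GB
5
{code}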

 The roundup for memory values on resource manager UI is misleading
 --

 Key: YARN-890
 URL: https://issues.apache.org/jira/browse/YARN-890
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Trupti Dhavle
 Attachments: Screen Shot 2013-07-10 at 10.43.34 AM.png


 From the yarn-site.xml, I see the following values:
 <property>
   <name>yarn.nodemanager.resource.memory-mb</name>
   <value>4192</value>
 </property>
 <property>
   <name>yarn.scheduler.maximum-allocation-mb</name>
   <value>4192</value>
 </property>
 <property>
   <name>yarn.scheduler.minimum-allocation-mb</name>
   <value>1024</value>
 </property>
 However, the ResourceManager UI shows total memory as 5 GB.
