[jira] [Commented] (YARN-9977) Support monitor threads number in ContainersMonitorImpl

2019-12-04 Thread yehuanhuan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988416#comment-16988416
 ] 

yehuanhuan commented on YARN-9977:
--

Hi zhoukang,
Currently we monitor the number of threads by reading the /proc/pid/status file 
of the process.
Do you have a better idea?
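
For reference, a minimal sketch of reading the thread count from that file (illustrative only, not the code proposed in this JIRA; the class name and error handling are assumptions, while the "Threads:" field is part of the standard /proc/<pid>/status layout):
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcThreadCounter {

  /**
   * Returns the number of threads of the given process by parsing the
   * "Threads:" line of /proc/<pid>/status, or -1 if it cannot be read.
   */
  public static int getThreadCount(int pid) {
    try {
      for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/status"))) {
        if (line.startsWith("Threads:")) {
          return Integer.parseInt(line.substring("Threads:".length()).trim());
        }
      }
    } catch (IOException | NumberFormatException e) {
      // The /proc entry may disappear if the process exits while we read it.
    }
    return -1;
  }

  public static void main(String[] args) {
    System.out.println(getThreadCount(Integer.parseInt(args[0])));
  }
}
{code}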

> Support monitor threads number in ContainersMonitorImpl
> ---
>
> Key: YARN-9977
> URL: https://issues.apache.org/jira/browse/YARN-9977
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
>
> In this JIRA we want to add a feature to monitor the number of threads of a given 
> container.






[jira] [Comment Edited] (YARN-6492) Generate queue metrics for each partition

2019-12-04 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988370#comment-16988370
 ] 

Jonathan Hung edited comment on YARN-6492 at 12/5/19 1:36 AM:
--

A couple more high level comments:
 * In places like
{noformat}
public void allocateResources(String partition, String user, int containers,
    Resource res, boolean decrPending) {
{noformat}

should we only call _allocateResources if the partition is null or empty? Otherwise 
metrics for the default partition will be updated when this is called for a non-null 
partition.

The same comment applies to other places like reserveResource, incrPendingResources, 
decrPendingResources, etc.
 * Related to the above, in getPartitionQueueMetrics, can we just return a null 
QueueMetrics object if the partition is null or empty? With the change described 
above, the outer functions which call getPartitionQueueMetrics (e.g. 
allocateResources) should already update the default partition's metrics. Then 
getPartitionQueueMetrics only returns a non-null PartitionQueueMetrics object 
if the partition is non-null, so we don't have to maintain a duplicate 
PartitionQueueMetrics object for the default partition.
 * I see that currently PartitionQueueMetrics#getPartitionQueueMetrics keys by 
partition, while QueueMetrics#getPartitionQueueMetrics keys by partition + 
queue. Can we just remove the logic in 
PartitionQueueMetrics#getPartitionQueueMetrics? I don't think we need to 
maintain a separate QueueMetrics object for the entire partition.
 * In QueueMetrics#getPartitionQueueMetrics, can we add the partition + 
queueName key to a separate map instead of adding it to QUEUE_METRICS? Like a 
PARTITION_QUEUE_METRICS cache. We could use a nested map, with 
partition -> queue -> QueueMetrics object (a rough sketch follows below). I feel 
it's weird to add both queue metrics and partition queue metrics to the same map. 
It also avoids the metricName = partition + this.queueName concatenation logic, 
which seems unintuitive.
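
A rough sketch of the nested-map cache idea from the last point (the 
PARTITION_QUEUE_METRICS name and the simplified QueueMetrics class below are 
illustrative assumptions, not the actual patch):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the real QueueMetrics class.
class QueueMetrics {
  final String partition;
  final String queueName;

  QueueMetrics(String partition, String queueName) {
    this.partition = partition;
    this.queueName = queueName;
  }
}

class PartitionQueueMetricsCache {
  // partition -> (queue -> QueueMetrics), kept separate from QUEUE_METRICS.
  private static final Map<String, Map<String, QueueMetrics>> PARTITION_QUEUE_METRICS =
      new ConcurrentHashMap<>();

  static QueueMetrics getPartitionQueueMetrics(String partition, String queueName) {
    if (partition == null || partition.isEmpty()) {
      // The default partition is already covered by the plain QueueMetrics object.
      return null;
    }
    return PARTITION_QUEUE_METRICS
        .computeIfAbsent(partition, p -> new ConcurrentHashMap<>())
        .computeIfAbsent(queueName, q -> new QueueMetrics(partition, q));
  }
}
{code}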


was (Author: jhung):
A couple more high level comments:
 * In places like
{noformat}
public void allocateResources(String partition, String user, int containers,
    Resource res, boolean decrPending) {
{noformat}

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Jonathan Hung
>Assignee: Manikandan R
>Priority: Major
> Attachments: PartitionQueueMetrics_default_partition.txt, 
> PartitionQueueMetrics_x_partition.txt, PartitionQueueMetrics_y_partition.txt, 
> YARN-6492.001.patch, YARN-6492.002.patch, YARN-6492.003.patch, 
> YARN-6492.004.patch, YARN-6492.005.WIP.patch, YARN-6492.006.WIP.patch, 
> YARN-6492.007.WIP.patch, partition_metrics.txt
>
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in default 
> partition or across all partitions. (After YARN-6467 it will be in default 
> partition)
> But having the partition metrics would be very useful.






[jira] [Commented] (YARN-6492) Generate queue metrics for each partition

2019-12-04 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988370#comment-16988370
 ] 

Jonathan Hung commented on YARN-6492:
-

A couple more high level comments:
 * In places like
{noformat}
public void allocateResources(String partition, String user, int containers,
    Resource res, boolean decrPending) {
{noformat}

> Generate queue metrics for each partition
> -
>
> Key: YARN-6492
> URL: https://issues.apache.org/jira/browse/YARN-6492
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Jonathan Hung
>Assignee: Manikandan R
>Priority: Major
> Attachments: PartitionQueueMetrics_default_partition.txt, 
> PartitionQueueMetrics_x_partition.txt, PartitionQueueMetrics_y_partition.txt, 
> YARN-6492.001.patch, YARN-6492.002.patch, YARN-6492.003.patch, 
> YARN-6492.004.patch, YARN-6492.005.WIP.patch, YARN-6492.006.WIP.patch, 
> YARN-6492.007.WIP.patch, partition_metrics.txt
>
>
> We are interested in having queue metrics for all partitions. Right now each 
> queue has one QueueMetrics object which captures metrics either in default 
> partition or across all partitions. (After YARN-6467 it will be in default 
> partition)
> But having the partition metrics would be very useful.






[jira] [Commented] (YARN-10009) In Capacity Scheduler, DRC can treat minimum user limit percent as a max when custom resource is defined

2019-12-04 Thread Wangda Tan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988283#comment-16988283
 ] 

Wangda Tan commented on YARN-10009:
---

+1 from my side, except one comment:
{quote}
[^YARN-10009.001.patch]
+ // allocate 5 containers for app1 with 1GB memory, 1 vcore, 5 res_1s
{quote}
The above comment in the test case is not accurate.

[~sunilg], do you want to take a look?

> In Capacity Scheduler, DRC can treat minimum user limit percent as a max when 
> custom resource is defined
> 
>
> Key: YARN-10009
> URL: https://issues.apache.org/jira/browse/YARN-10009
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.10.0, 3.3.0, 3.2.1, 3.1.3, 2.11.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Critical
> Attachments: YARN-10009.001.patch, YARN-10009.UT.patch
>
>
> | |Memory|Vcores|res_1|
> |Queue1 Totals|20GB|100|80|
> |Resources requested by App1 in Queue1|8GB (40% of total)|8 (8% of total)|80 (100% of total)|
> In the previous use case:
>  - Queue1 has a value of 25 for {{minimum-user-limit-percent}}
>  - User1 has requested 8 containers with {{}} each
>  - {{res_1}} will be the dominant resource in this case.
> All 8 containers should be assigned by the capacity scheduler, but with min 
> user limit pct set to 25, only 2 containers are assigned.






[jira] [Updated] (YARN-10009) In Capacity Scheduler, DRC can treat minimum user limit percent as a max when custom resource is defined

2019-12-04 Thread Wangda Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-10009:
--
Priority: Critical  (was: Major)

> In Capacity Scheduler, DRC can treat minimum user limit percent as a max when 
> custom resource is defined
> 
>
> Key: YARN-10009
> URL: https://issues.apache.org/jira/browse/YARN-10009
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.10.0, 3.3.0, 3.2.1, 3.1.3, 2.11.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Critical
> Attachments: YARN-10009.001.patch, YARN-10009.UT.patch
>
>
> | |Memory|Vcores|res_1|
> |Queue1 Totals|20GB|100|80|
> |Resources requested by App1 in Queue1|8GB (40% of total)|8 (8% of total)|80 (100% of total)|
> In the previous use case:
>  - Queue1 has a value of 25 for {{minimum-user-limit-percent}}
>  - User1 has requested 8 containers with {{}} each
>  - {{res_1}} will be the dominant resource in this case.
> All 8 containers should be assigned by the capacity scheduler, but with min 
> user limit pct set to 25, only 2 containers are assigned.






[jira] [Comment Edited] (YARN-10009) In Capacity Scheduler, DRC can treat minimum user limit percent as a max when custom resource is defined

2019-12-04 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16986411#comment-16986411
 ] 

Eric Payne edited comment on YARN-10009 at 12/4/19 6:12 PM:


The root cause is here:
{code:title=UsersManager#computeUserLimit}
/*
 * User limit resource is determined by: max(currentCapacity / #activeUsers,
 * currentCapacity * user-limit-percentage%)
 */
Resource userLimitResource = Resources.max(resourceCalculator,
    partitionResource,
    Resources.divideAndCeil(resourceCalculator, resourceUsed,
        usersSummedByWeight),
    Resources.divideAndCeil(resourceCalculator,
        Resources.multiplyAndRoundDown(currentCapacity, getUserLimit()),
        100));
{code}
When calculating the user resource limit, {{divideAndCeil}} is used to take the 
max of either (queue capacity / # of active users) or (queue capacity / min 
user limit pct). However, they are not the same divideAndCeil methods. The 
first takes a {{Resource}} and a {{float}} and the second takes a {{Resource}} 
and an {{int}}. The method with the {{Resource}} {{float}} signature was never 
updated to handle custom resources.

The only place that calls {{divideAndCeil(Resource, float)}} is here in 
{{UsersManager#computeUserLimit}}.
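
To illustrate what handling custom resources means here, a conceptual sketch (the 
SimpleResource type and method bodies below are made-up stand-ins, not the actual 
Resources/ResourceCalculator code): an overload that only divides the well-known 
resources leaves a custom resource such as res_1 unscaled, while a fixed version has 
to iterate every resource type.
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Made-up stand-in for a Resource: name -> value,
// e.g. {"memory-mb": 8192, "vcores": 8, "res_1": 80}
class SimpleResource {
  final Map<String, Long> values = new LinkedHashMap<>();
}

class DivideAndCeilSketch {

  // Shape of the bug: only memory and vcores are divided, so a dominant
  // custom resource is never scaled down by the user limit.
  static SimpleResource divideAndCeilLegacy(SimpleResource r, float by) {
    SimpleResource out = new SimpleResource();
    out.values.put("memory-mb", ceilDiv(r.values.getOrDefault("memory-mb", 0L), by));
    out.values.put("vcores", ceilDiv(r.values.getOrDefault("vcores", 0L), by));
    return out;
  }

  // Shape of the fix: every resource type present is divided.
  static SimpleResource divideAndCeilAll(SimpleResource r, float by) {
    SimpleResource out = new SimpleResource();
    for (Map.Entry<String, Long> e : r.values.entrySet()) {
      out.values.put(e.getKey(), ceilDiv(e.getValue(), by));
    }
    return out;
  }

  private static long ceilDiv(long value, float by) {
    return (long) Math.ceil(value / by);
  }
}
{code}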


was (Author: eepayne):
The root cause is here:
{code:UsersManager#computeUserLimit}
/*
 * User limit resource is determined by: max(currentCapacity / #activeUsers,
 * currentCapacity * user-limit-percentage%)
 */
Resource userLimitResource = Resources.max(resourceCalculator,
partitionResource,
Resources.divideAndCeil(resourceCalculator, resourceUsed,
usersSummedByWeight),
Resources.divideAndCeil(resourceCalculator,
Resources.multiplyAndRoundDown(currentCapacity, getUserLimit()),
100));
{code}
When calculating the user resource limit, {{divideAndCeil}} is used to take the 
max of either (queue capacity / # of active users) or (queue capacity / min 
user limit pct). However, they are not the same divideAndCeil methods. The 
first takes a {{Resource}} and a {{float}} and the second takes a {{Resource}} 
and an {{int}}. The method with the {{Resource}} {{float}} signature was never 
updated to handle custom resources.

The only place that calls {{divideAndCeil(Resource, float)}} is here in 
{{UsersManager#computeUserLimit}}.

> In Capacity Scheduler, DRC can treat minimum user limit percent as a max when 
> custom resource is defined
> 
>
> Key: YARN-10009
> URL: https://issues.apache.org/jira/browse/YARN-10009
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.10.0, 3.3.0, 3.2.1, 3.1.3, 2.11.0
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-10009.001.patch, YARN-10009.UT.patch
>
>
> | |Memory|Vcores|res_1|
> |Queue1 Totals|20GB|100|80|
> |Resources requested by App1 in Queue1|8GB (40% of total)|8 (8% of total)|80 (100% of total)|
> In the previous use case:
>  - Queue1 has a value of 25 for {{minimum-user-limit-percent}}
>  - User1 has requested 8 containers with {{}} each
>  - {{res_1}} will be the dominant resource in this case.
> All 8 containers should be assigned by the capacity scheduler, but with min 
> user limit pct set to 25, only 2 containers are assigned.






[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder

2019-12-04 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988021#comment-16988021
 ] 

Sunil G commented on YARN-9052:
---

Looks fine to me. Let's get this in tomorrow if there are no objections.

cc [~rohithsharmaks] [~prabhujoseph]

> Replace all MockRM submit method definitions with a builder
> ---
>
> Key: YARN-9052
> URL: https://issues.apache.org/jira/browse/YARN-9052
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: 
> YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs003-justfailed.txt, 
> YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, 
> YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, 
> YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, 
> YARN-9052.007.patch, YARN-9052.008.patch, YARN-9052.009.patch, 
> YARN-9052.009.patch, YARN-9052.testlogs.002.patch, 
> YARN-9052.testlogs.002.patch, YARN-9052.testlogs.003.patch, 
> YARN-9052.testlogs.patch
>
>
> MockRM has 31 definitions of submitApp, most of them with more than an 
> acceptable number of parameters, ranging from 2 to as many as 22, which 
> makes the code completely unreadable.
> On top of the unreadability, it is very hard to follow which RMApp will be 
> produced for a test, as callers often pass a lot of empty / null values as 
> parameters.
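
For illustration, a minimal sketch of the builder-style submission this change is 
aiming for (the class and method names below are hypothetical, not the API introduced 
by the attached patches):
{code:java}
// Hypothetical builder sketch; only the parameters a test cares about are spelled
// out, everything else falls back to a sensible default instead of a null.
class AppSubmissionParams {
  private final int memoryMb;
  private final String queue;
  private final String user;
  private final int maxAppAttempts;

  private AppSubmissionParams(Builder b) {
    this.memoryMb = b.memoryMb;
    this.queue = b.queue;
    this.user = b.user;
    this.maxAppAttempts = b.maxAppAttempts;
  }

  static Builder builder() {
    return new Builder();
  }

  static class Builder {
    private int memoryMb = 1024;
    private String queue = "default";
    private String user = "nobody";
    private int maxAppAttempts = 2;

    Builder memoryMb(int v) { this.memoryMb = v; return this; }
    Builder queue(String v) { this.queue = v; return this; }
    Builder user(String v) { this.user = v; return this; }
    Builder maxAppAttempts(int v) { this.maxAppAttempts = v; return this; }

    AppSubmissionParams build() { return new AppSubmissionParams(this); }
  }
}

// Usage in a test:
//   AppSubmissionParams params = AppSubmissionParams.builder()
//       .memoryMb(2048)
//       .queue("root.test")
//       .build();
{code}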






[jira] [Assigned] (YARN-10012) Guaranteed and max capacity queue metrics for custom resources

2019-12-04 Thread Manikandan R (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R reassigned YARN-10012:
---

Assignee: Manikandan R

> Guaranteed and max capacity queue metrics for custom resources
> --
>
> Key: YARN-10012
> URL: https://issues.apache.org/jira/browse/YARN-10012
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Manikandan R
>Priority: Major
>
> YARN-9085 adds support for guaranteed/maxcapacity MB/vcores. We should add 
> the same for custom resources.






[jira] [Assigned] (YARN-9892) Capacity scheduler: support DRF ordering policy on queue level

2019-12-04 Thread Manikandan R (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R reassigned YARN-9892:
--

Assignee: Manikandan R  (was: Peter Bacsko)

> Capacity scheduler: support DRF ordering policy on queue level
> --
>
> Key: YARN-9892
> URL: https://issues.apache.org/jira/browse/YARN-9892
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Peter Bacsko
>Assignee: Manikandan R
>Priority: Major
>
> Capacity scheduler does not support the DRF (Dominant Resource Fairness) ordering 
> policy at the queue level. Only "fifo" and "fair" are accepted for 
> {{yarn.scheduler.capacity..ordering-policy}}.
> DRF can only be used globally if 
> {{yarn.scheduler.capacity.resource-calculator}} is set to 
> DominantResourceCalculator.
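
For context, a small conceptual sketch of what DRF ordering means (this is not the 
Capacity Scheduler's OrderingPolicy code; the example resources and numbers are made 
up): entities are ordered by their dominant share, i.e. the largest used-to-total 
ratio across all resource types.
{code:java}
import java.util.Arrays;
import java.util.Comparator;

class DrfOrderingSketch {

  // Dominant share = max over resource types of used/total.
  static double dominantShare(long[] used, long[] clusterTotal) {
    double max = 0.0;
    for (int i = 0; i < used.length; i++) {
      if (clusterTotal[i] > 0) {
        max = Math.max(max, (double) used[i] / clusterTotal[i]);
      }
    }
    return max;
  }

  public static void main(String[] args) {
    long[] total = {100, 100};            // e.g. memory units, vcores
    long[][] apps = {{30, 10}, {5, 40}};  // per-app usage
    Integer[] order = {0, 1};
    // The app with the smallest dominant share (app 0: 0.3 vs app 1: 0.4)
    // is scheduled first.
    Arrays.sort(order, Comparator.comparingDouble(
        i -> dominantShare(apps[i], total)));
    System.out.println(Arrays.toString(order)); // [0, 1]
  }
}
{code}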






[jira] [Assigned] (YARN-10006) IOException used in place of YARNException in CapacityScheduler

2019-12-04 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal reassigned YARN-10006:
-

Assignee: Adam Antal

> IOException used in place of YARNException in CapacityScheduler
> -
>
> Key: YARN-10006
> URL: https://issues.apache.org/jira/browse/YARN-10006
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Adam Antal
>Priority: Minor
>
> IOException is used in place of YarnException in CapacityScheduler. As per the 
> YarnException doc:
> {code:java}
> /**
>  * YarnException indicates exceptions from yarn servers. On the other hand,
>  * IOExceptions indicates exceptions from RPC layer.
>  */
> {code}
> The methods below throw IOException but are supposed to throw YarnException:
> CapacitySchedulerQueueManager#parseQueue <- initializeQueues <- 
> CapacityScheduler#initializeQueues <- initScheduler <- serviceInit
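
A minimal sketch of the kind of change implied, assuming the usual pattern of wrapping 
the lower-level exception (the helper methods below are hypothetical stand-ins for 
parseQueue/initializeQueues, not the actual CapacitySchedulerQueueManager code):
{code:java}
import java.io.IOException;
import org.apache.hadoop.yarn.exceptions.YarnException;

class QueueInitSketch {

  // Hypothetical stand-in for CapacityScheduler#initializeQueues.
  static void initializeQueues(String configPath) throws YarnException {
    try {
      loadQueueConfig(configPath);
    } catch (IOException e) {
      // Surface configuration/parsing problems as a YARN-level exception
      // instead of leaking an RPC-layer IOException to callers.
      throw new YarnException("Failed to initialize queues from " + configPath, e);
    }
  }

  // Hypothetical stand-in for the parsing logic that currently throws IOException.
  private static void loadQueueConfig(String path) throws IOException {
    // ... real parsing would happen here ...
  }
}
{code}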






[jira] [Commented] (YARN-4993) Refactor ContainersLogsBlock, AggregatedLogsBlock and container log webservice introduced in AHS to minimize the duplication.

2019-12-04 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987857#comment-16987857
 ] 

Adam Antal commented on YARN-4993:
--

Hi [~xgong],
have you been working on this recently?

> Refactor ContainersLogsBlock, AggregatedLogsBlock and container log 
> webservice introduced in AHS to minimize the duplication.
> --
>
> Key: YARN-4993
> URL: https://issues.apache.org/jira/browse/YARN-4993
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>Priority: Major
>
> There is a lot of duplicated code in ContainersLogsBlock, AggregatedLogsBlock and 
> the container log webservice introduced by YARN-4920. We should move the 
> duplicated logic to a common web utility class.






[jira] [Commented] (YARN-9877) Intermittent TIME_OUT of LogAggregationReport

2019-12-04 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987735#comment-16987735
 ] 

Adam Antal commented on YARN-9877:
--

[~pbacsko] I'll take a look at this option.

> Intermittent TIME_OUT of LogAggregationReport
> -
>
> Key: YARN-9877
> URL: https://issues.apache.org/jira/browse/YARN-9877
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation, resourcemanager, yarn
>Affects Versions: 3.0.3, 3.3.0, 3.2.1, 3.1.3
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9877.001.patch
>
>
> I noticed some intermittent TIME_OUT in some downstream log-aggregation based 
> tests.
> Steps to reproduce:
> - Let's run a MR job
> {code}
> hadoop jar hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.job.queuename=root.default -m 10 -r 10 -mt 5000 -rt 5000
> {code}
> - Suppose the AM is requesting more containers, but as soon as they're 
> allocated - the AM realizes it doesn't need them. The container's state 
> changes are: ALLOCATED -> ACQUIRED -> RELEASED. 
> Let's suppose these extra containers are allocated in a different node from 
> the other 21 (AM + 10 mapper + 10 reducer) containers' node.
> - All the containers finish successfully and the app finishes successfully 
> as well, yet the log aggregation status for the whole app appears to be stuck in 
> the RUNNING state.
> - After a while the final log aggregation status for the app changes to 
> TIME_OUT.
> Root cause:
> - As unused containers are getting through the state transition in the RM's 
> internal representation, {{RMAppImpl$AppRunningOnNodeTransition}}'s 
> transition function is called. This calls the 
> {{RMAppLogAggregation$addReportIfNecessary}} which forcefully adds the 
> "NOT_START" LogAggregationStatus associated with this NodeId for the app, 
> even though it does not have any running container on it.
> - The node's LogAggregationStatus is never updated to "SUCCEEDED" by the 
> NodeManager because it does not have any running container on it (Note that 
> the AM immediately released them after acquisition). The LogAggregationStatus 
> remains NOT_START until time out is reached. After that point the RM 
> aggregates the LogAggregationReports for all the nodes, and though all the 
> containers have SUCCEEDED state, one particular node has NOT_START, so the 
> final log aggregation will be TIME_OUT.
> (I crawled the RM UI for the log aggregation statuses, and it was always 
> NOT_START for this particular node).
> This situation is highly unlikely, but it has an estimated ~0.8% failure rate 
> based on roughly 1500 runs over a year on an unstressed cluster.
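
To make the failure mode concrete, a simplified sketch of the roll-up rule described 
above (not the actual RM implementation; the method and the example node names are 
made up): as long as one node's report stays at NOT_START, the app-level status 
remains RUNNING until the timeout flips it to TIME_OUT.
{code:java}
import java.util.Map;

enum LogAggregationStatus { NOT_START, RUNNING, SUCCEEDED, TIME_OUT }

class LogAggregationRollUpSketch {

  // Simplified app-level roll-up of per-node log aggregation reports.
  static LogAggregationStatus rollUp(Map<String, LogAggregationStatus> perNode,
      boolean timeoutReached) {
    boolean allSucceeded = perNode.values().stream()
        .allMatch(s -> s == LogAggregationStatus.SUCCEEDED);
    if (allSucceeded) {
      return LogAggregationStatus.SUCCEEDED;
    }
    // A node whose containers were only ACQUIRED and then RELEASED keeps
    // reporting NOT_START, so the app stays RUNNING and eventually times out.
    return timeoutReached ? LogAggregationStatus.TIME_OUT
        : LogAggregationStatus.RUNNING;
  }

  public static void main(String[] args) {
    Map<String, LogAggregationStatus> reports = Map.of(
        "node1", LogAggregationStatus.SUCCEEDED,   // ran the AM and the tasks
        "node2", LogAggregationStatus.NOT_START);  // only released containers
    System.out.println(rollUp(reports, false)); // RUNNING
    System.out.println(rollUp(reports, true));  // TIME_OUT
  }
}
{code}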






[jira] [Created] (YARN-10013) sqoop import data failed: creating symbolic link `jobSubmitDir/job.splitmetainfo': No such file or directory

2019-12-04 Thread ningjie (Jira)
ningjie created YARN-10013:
--

 Summary: sqoop import data failed: creating symbolic link 
`jobSubmitDir/job.splitmetainfo': No such file or directory
 Key: YARN-10013
 URL: https://issues.apache.org/jira/browse/YARN-10013
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: CDH-6.1.1
Reporter: ningjie


Application application_1575352411787_0268 failed 2 times due to AM Container 
for appattempt_1575352411787_0268_02 exited with exitCode: 1
Failing this attempt.Diagnostics: [2019-12-04 14:43:01.809]Exception from 
container-launch.
Container id: container_1575352411787_0268_02_01
Exit code: 1
 
[2019-12-04 14:43:01.810]Container exited with a non-zero exit code 1. Error 
file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
ln: creating symbolic link `jobSubmitDir/job.splitmetainfo': No such file or 
directory
 
[2019-12-04 14:43:01.810]Container exited with a non-zero exit code 1. Error 
file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
ln: creating symbolic link `jobSubmitDir/job.splitmetainfo': No such file or 
directory
 
For more detailed output, check the application tracking page: 
http://cloud243:8088/cluster/app/application_1575352411787_0268 Then click on 
links to logs of each attempt.
. Failing the application.


