RE: Word Count examples run failed with Tez 0.8.4

2016-08-03 Thread HuXi
The default configuration was used, with yarn.resourcemanager.hostname set to 
0.0.0.0 and yarn.resourcemanager.address set to 0.0.0.0:8032.
If what you mentioned is really the cause, what should I do to fix it? 

> Date: Wed, 3 Aug 2016 20:41:31 -0700
> Subject: Re: Word Count examples run failed with Tez 0.8.4
> From: gop...@apache.org
> To: user@tez.apache.org
> 
> 
> > 16/08/04 09:36:00 INFO client.TezClient: The url to track the Tez AM:
> >http://iZ25f2qedc7Z:8088/proxy/application_1470148111230_0014/
> > 16/08/04 09:36:05 INFO client.RMProxy: Connecting to ResourceManager at
> >/0.0.0.0:8032
> 
> That sounds very strange - is the resource manager really running on
> localhost, but that resolves back to that strange hostname?
> 
> Cheers,
> Gopal
> 

Re: Word Count examples run failed with Tez 0.8.4

2016-08-03 Thread Gopal Vijayaraghavan

> 16/08/04 09:36:00 INFO client.TezClient: The url to track the Tez AM:
>http://iZ25f2qedc7Z:8088/proxy/application_1470148111230_0014/
> 16/08/04 09:36:05 INFO client.RMProxy: Connecting to ResourceManager at
>/0.0.0.0:8032

That sounds very strange - is the resource manager really running on
localhost, but that resolves back to that strange hostname?

Cheers,
Gopal








Word Count examples run failed with Tez 0.8.4

2016-08-03 Thread HuXi
Hi,


I am using Tez 0.8.4 integrated into my Apache Hadoop 2.7.2 cluster. When 
running the word count example with "hadoop jar tez-examples-0.8.4.jar 
orderedwordcount /apps/tez/NOTICE.txt /out", the run fails with the log output 
below:


SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/mnt/disk/huxi/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/mnt/disk/huxi/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/08/04 09:35:58 INFO shim.HadoopShimsLoader: Trying to locate 
HadoopShimProvider for hadoopVersion=2.7.2, majorVersion=2, minorVersion=7
16/08/04 09:35:58 INFO shim.HadoopShimsLoader: Picked HadoopShim 
org.apache.tez.hadoop.shim.HadoopShim26, 
providerName=org.apache.tez.hadoop.shim.HadoopShim25_26_27Provider, 
overrideProviderViaConfig=null, hadoopVersion=2.7.2, majorVersion=2, 
minorVersion=7
16/08/04 09:35:58 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/08/04 09:35:58 INFO client.TezClient: Tez Client Version: [ 
component=tez-api, version=0.8.4, 
revision=ef70407682918c022dffea86d6fa0571ccebcd8b, 
SCM-URL=scm:git:https://git-wip-us.apache.org/repos/asf/tez.git, 
buildTime=20160705-1449 ]
16/08/04 09:35:59 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
16/08/04 09:36:00 INFO examples.OrderedWordCount: Running OrderedWordCount
16/08/04 09:36:00 INFO client.TezClient: Submitting DAG application with id: 
application_1470148111230_0014
16/08/04 09:36:00 INFO client.TezClientUtils: Using tez.lib.uris value from 
configuration: hdfs://localhost:8500/apps/tez/tez.tar.gz
16/08/04 09:36:00 INFO client.TezClientUtils: Using tez.lib.uris.classpath 
value from configuration: null
16/08/04 09:36:00 INFO client.TezClient: Tez system stage directory 
hdfs://localhost:8500/tmp/work/tez/staging/.tez/application_1470148111230_0014 
doesn't exist and is created
16/08/04 09:36:00 INFO client.TezClient: Submitting DAG to YARN, 
applicationId=application_1470148111230_0014, dagName=OrderedWordCount, 
callerContext={ context=TezExamples, callerType=null, callerId=null }
16/08/04 09:36:00 INFO impl.YarnClientImpl: Submitted application 
application_1470148111230_0014
16/08/04 09:36:00 INFO client.TezClient: The url to track the Tez AM: 
http://iZ25f2qedc7Z:8088/proxy/application_1470148111230_0014/
16/08/04 09:36:05 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
16/08/04 09:36:06 INFO client.DAGClientImpl: DAG initialized: 
CurrentState=Running
16/08/04 09:36:06 INFO client.DAGClientImpl: DAG: State: RUNNING Progress: 0% 
TotalTasks: 3 Succeeded: 0 Running: 0 Failed: 0 Killed: 0
16/08/04 09:36:06 INFO client.DAGClientImpl:VertexStatus: VertexName: 
Tokenizer Progress: 0% TotalTasks: 1 Succeeded: 0 Running: 0 Failed: 0 Killed: 0
16/08/04 09:36:06 INFO client.DAGClientImpl:VertexStatus: VertexName: 
Summation Progress: 0% TotalTasks: 1 Succeeded: 0 Running: 0 Failed: 0 Killed: 0
16/08/04 09:36:06 INFO client.DAGClientImpl:VertexStatus: VertexName: 
Sorter Progress: 0% TotalTasks: 1 Succeeded: 0 Running: 0 Failed: 0 Killed: 0
16/08/04 09:36:08 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:09 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:10 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:11 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 3 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:12 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 4 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:13 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 5 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:14 INFO ipc.Client: Retrying connect to server: 
iZ25f2qedc7Z/10.171.49.61:45880. Already tried 6 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/08/04 09:36:15 INFO ipc.Client: Retrying conn

Re: hung AM due to timeline timeout

2016-08-03 Thread Hitesh Shah
It might be worth filing a YARN jira to get it backported to 2.6.x and 2.7.x. At 
the very least, that will simplify rebuilding the timeline-server jar against the 
CDH version that you are running. 

— Hitesh

> On Aug 3, 2016, at 4:42 PM, Slava Markeyev  wrote:
> 
> Thanks for the info Hitesh. Unfortunately it seems that RollingLevelDB is 
> only in trunk. I may have to backport it to 2.6.2 (version I use). I did 
> notice that the leveldb does grow to tens of gb which may be an indication of 
> pruning not happening often enough (or at all?). I also need to fix the 
> logging as the logs for the timeline server don't seem to be very active 
> beyond it starting up.
> 
> For the job I posted before here is the associated eventQueueBacklog log line.
> 2016-08-03 19:23:27,932 [INFO] [AMShutdownThread] 
> |ats.ATSHistoryLoggingService|: Stopping ATSService, eventQueueBacklog=17553
> I'll look into lowering tez.yarn.ats.event.flush.timeout.millis while trying 
> to look into the timelineserver.
> 
> Thanks for your help,
> Slava
> 
> On Wed, Aug 3, 2016 at 2:45 PM, Hitesh Shah  wrote:
> Hello Slava,
> 
> Can you check for a log line along the lines of "Stopping ATSService, 
> eventQueueBacklog=" to see how backed up the event queue to YARN timeline is?
> 
> I have noticed this in quite a few installs with YARN Timeline where YARN 
> Timeline is using the simple Level DB impl and not the RollingLevelDB storage 
> class. The YARN timeline ends up hitting some bottlenecks around the time 
> when the data purging happens ( takes a global lock on level db ). The 
> Rolling level db storage impl solved this problem by using separate level DBs 
> for different time intervals and just throwing out the level db instead of 
> trying to do a full scan+purge.
> 
> Another workaround, though not a great one, is to set 
> "tez.yarn.ats.event.flush.timeout.millis" to a value such as 60000, i.e. 1 min. 
> This implies that the Tez AM will try for at most 1 min to flush the queue to 
> YARN timeline before giving up and shutting down the Tez AM.
> 
> A longer term option is the YARN Timeline version 1.5 work currently slated 
> to be released in hadoop 2.8.0 which uses HDFS for writes instead of the 
> current web service based approach. This has a far better perf throughput for 
> writes albeit with a delay on the read path as the Timeline server scans HDFS 
> for new updates. The tez changes for this are already available in the source 
> code under the hadoop28 profile though the documentation for this is still 
> pending.
> 
> thanks
> — Hitesh
> 
> 
> 
> 
> 
> > On Aug 3, 2016, at 2:02 PM, Slava Markeyev  
> > wrote:
> >
> > I'm running into an issue that occurs fairly often (but not consistently 
> > reproducible) where yarn reports a negative value for memory allocation eg 
> > (-2048) and a 0 vcore allocation despite the AM actually running. For 
> > example the AM reports a runtime of 1hrs, 29mins, 40sec while the dag took only 
> > 880 seconds.
> >
> > After some investigating I've noticed that the AM has repeated issues 
> > contacting the timeline server after the dag is complete (error trace 
> > below). This seems to be delaying the shutdown sequence. It seems to retry 
> > every minute before either giving up or succeeding but I'm not sure which. 
> > What's the best way to debug why this would be happening and potentially 
> > shortening the timeout retry period as I'm more concerned with job 
> > completion than logging it to the timeline server. This doesn't seem to be 
> > happening consistently to all tez jobs only some.
> >
> > I'm using hive 1.1.0 and tez 0.7.1 on cdh5.4.10 (hadoop 2.6).
> >
> > 2016-08-03 19:18:22,881 [INFO] [ContainerLauncher #112] 
> > |impl.ContainerManagementProtocolProxy|: Opening proxy : node:45454
> > 2016-08-03 19:18:23,292 [WARN] [HistoryEventHandlingThread] 
> > |security.UserGroupInformation|: PriviledgedActionException as:x 
> > (auth:SIMPLE) cause:java.net.SocketTimeoutException: Read timed out
> > 2016-08-03 19:18:23,292 [ERROR] [HistoryEventHandlingThread] 
> > |impl.TimelineClientImpl|: Failed to get the response from the timeline 
> > server.
> > com.sun.jersey.api.client.ClientHandlerException: 
> > java.net.SocketTimeoutException: Read timed out
> > at 
> > com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
> > at 
> > org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:226)
> > at 
> > org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:162)
> > at 
> > org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:237)
> > at com.sun.jersey.api.client.Client.handle(Client.java:648)
> > at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
> > at com.sun.jersey.api.client.WebResource.access$200(WebR

Re: hung AM due to timeline timeout

2016-08-03 Thread Slava Markeyev
Thanks for the info Hitesh. Unfortunately it seems that RollingLevelDB is
only in trunk. I may have to backport it to 2.6.2 (the version I use). I did
notice that the leveldb does grow to tens of GB, which may be an indication
of pruning not happening often enough (or at all?). I also need to fix the
logging, as the logs for the timeline server don't seem to be very active
beyond startup.
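(A hedged sketch, in case purging really isn't keeping up: retention for the simple
leveldb timeline store is controlled by properties along these lines in yarn-site.xml;
the values below are the usual defaults and are illustrative only, not recommendations:)

<!-- yarn-site.xml (illustrative values): retention for the leveldb timeline store -->
<property>
  <name>yarn.timeline-service.ttl-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.timeline-service.ttl-ms</name>
  <value>604800000</value>   <!-- keep roughly 7 days of entities -->
</property>
<property>
  <name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name>
  <value>300000</value>      <!-- run the purge every 5 minutes -->
</property>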

For the job I posted before, here is the associated eventQueueBacklog log
line:

2016-08-03 19:23:27,932 [INFO] [AMShutdownThread]
|ats.ATSHistoryLoggingService|: Stopping ATSService,
eventQueueBacklog=17553

I'll look into lowering tez.yarn.ats.event.flush.timeout.millis while
investigating the timeline server.

Thanks for your help,
Slava

On Wed, Aug 3, 2016 at 2:45 PM, Hitesh Shah  wrote:

> Hello Slava,
>
> Can you check for a log line along the lines of "Stopping ATSService,
> eventQueueBacklog=" to see how backed up the event queue to YARN
> timeline is?
>
> I have noticed this in quite a few installs with YARN Timeline where YARN
> Timeline is using the simple Level DB impl and not the RollingLevelDB
> storage class. The YARN timeline ends up hitting some bottlenecks around
> the time when the data purging happens ( takes a global lock on level db ).
> The Rolling level db storage impl solved this problem by using separate
> level DBs for different time intervals and just throwing out the level db
> instead of trying to do a full scan+purge.
>
> Another workaround, though not a great one, is to set
> "tez.yarn.ats.event.flush.timeout.millis" to a value such as 60000, i.e. 1 min.
> This implies that the Tez AM will try for at most 1 min to flush the queue
> to YARN timeline before giving up and shutting down the Tez AM.
>
> A longer term option is the YARN Timeline version 1.5 work currently
> slated to be released in hadoop 2.8.0 which uses HDFS for writes instead of
> the current web service based approach. This has a far better perf
> throughput for writes albeit with a delay on the read path as the Timeline
> server scans HDFS for new updates. The tez changes for this are already
> available in the source code under the hadoop28 profile though the
> documentation for this is still pending.
>
> thanks
> — Hitesh
>
>
>
>
>
> > On Aug 3, 2016, at 2:02 PM, Slava Markeyev 
> wrote:
> >
> > I'm running into an issue that occurs fairly often (but not consistently
> reproducible) where yarn reports a negative value for memory allocation eg
> (-2048) and a 0 vcore allocation despite the AM actually running. For
> example the AM reports a runtime of 1hrs, 29mins, 40sec while the dag took only
> 880 seconds.
> >
> > After some investigating I've noticed that the AM has repeated issues
> contacting the timeline server after the dag is complete (error trace
> below). This seems to be delaying the shutdown sequence. It seems to retry
> every minute before either giving up or succeeding but I'm not sure which.
> What's the best way to debug why this would be happening and potentially
> shortening the timeout retry period as I'm more concerned with job
> completion than logging it to the timeline server. This doesn't seem to be
> happening consistently to all tez jobs only some.
> >
> > I'm using hive 1.1.0 and tez 0.7.1 on cdh5.4.10 (hadoop 2.6).
> >
> > 2016-08-03 19:18:22,881 [INFO] [ContainerLauncher #112]
> |impl.ContainerManagementProtocolProxy|: Opening proxy : node:45454
> > 2016-08-03 19:18:23,292 [WARN] [HistoryEventHandlingThread]
> |security.UserGroupInformation|: PriviledgedActionException as:x
> (auth:SIMPLE) cause:java.net.SocketTimeoutException: Read timed out
> > 2016-08-03 19:18:23,292 [ERROR] [HistoryEventHandlingThread]
> |impl.TimelineClientImpl|: Failed to get the response from the timeline
> server.
> > com.sun.jersey.api.client.ClientHandlerException:
> java.net.SocketTimeoutException: Read timed out
> > at
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
> > at
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:226)
> > at
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:162)
> > at
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:237)
> > at com.sun.jersey.api.client.Client.handle(Client.java:648)
> > at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
> > at
> com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
> > at
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
> > at
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.doPostingObject(TimelineClientImpl.java:472)
> > at
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.doPosting(TimelineClientImpl.java:321)
> > at
> org.apache.hadoop.yarn.client.api.impl.Timelin

Re: hung AM due to timeline timeout

2016-08-03 Thread Hitesh Shah
Hello Slava, 

Can you check for a log line along the lines of "Stopping ATSService, 
eventQueueBacklog=" to see how backed up the event queue to YARN Timeline is? 

I have noticed this in quite a few installs with YARN Timeline where YARN 
Timeline is using the simple LevelDB impl and not the RollingLevelDB storage 
class. The YARN Timeline server ends up hitting some bottlenecks around the time when 
the data purging happens (it takes a global lock on the level db). The rolling 
level db storage impl solved this problem by using separate level DBs for different 
time intervals and just throwing out an entire level db instead of trying to do a 
full scan+purge.
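(A hedged yarn-site.xml sketch of switching the store class, assuming the
RollingLevelDBTimelineStore class is available on the timeline server's classpath;
the class name below is the one used in Apache Hadoop trunk and may differ in a
backported build:)

<!-- yarn-site.xml: use the rolling leveldb timeline store instead of the simple one -->
<property>
  <name>yarn.timeline-service.store-class</name>
  <value>org.apache.hadoop.yarn.server.timeline.RollingLevelDBTimelineStore</value>
</property>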

Another workaround, though not a great one, is to set 
"tez.yarn.ats.event.flush.timeout.millis" to a value such as 60000, i.e. 1 min. This 
implies that the Tez AM will try for at most 1 min to flush the queue to YARN 
Timeline before giving up and shutting down the Tez AM. 
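(A minimal tez-site.xml sketch of that workaround; the 60000 value is just the
1-minute example above, not a recommendation:)

<!-- tez-site.xml: cap how long the AM waits to flush history events to YARN Timeline -->
<property>
  <name>tez.yarn.ats.event.flush.timeout.millis</name>
  <value>60000</value>   <!-- give up after 1 minute instead of blocking AM shutdown -->
</property>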

A longer-term option is the YARN Timeline version 1.5 work, currently slated to 
be released in Hadoop 2.8.0, which uses HDFS for writes instead of the current 
web-service-based approach. This has far better write throughput, 
albeit with a delay on the read path as the Timeline server scans HDFS for new 
updates. The Tez changes for this are already available in the source code 
under the hadoop28 profile, though the documentation for this is still pending. 

thanks
— Hitesh





> On Aug 3, 2016, at 2:02 PM, Slava Markeyev  wrote:
> 
> I'm running into an issue that occurs fairly often (but not consistently 
> reproducible) where yarn reports a negative value for memory allocation eg 
> (-2048) and a 0 vcore allocation despite the AM actually running. For example 
> the AM reports a runtime of 1hrs, 29mins, 40sec while the dag took only 880 
> seconds.
> 
> After some investigating I've noticed that the AM has repeated issues 
> contacting the timeline server after the dag is complete (error trace below). 
> This seems to be delaying the shutdown sequence. It seems to retry every 
> minute before either giving up or succeeding but I'm not sure which. What's 
> the best way to debug why this would be happening and potentially shortening 
> the timeout retry period as I'm more concerned with job completion than 
> logging it to the timeline server. This doesn't seem to be happening 
> consistently to all tez jobs only some.
> 
> I'm using hive 1.1.0 and tez 0.7.1 on cdh5.4.10 (hadoop 2.6).
> 
> 2016-08-03 19:18:22,881 [INFO] [ContainerLauncher #112] 
> |impl.ContainerManagementProtocolProxy|: Opening proxy : node:45454
> 2016-08-03 19:18:23,292 [WARN] [HistoryEventHandlingThread] 
> |security.UserGroupInformation|: PriviledgedActionException as:x 
> (auth:SIMPLE) cause:java.net.SocketTimeoutException: Read timed out
> 2016-08-03 19:18:23,292 [ERROR] [HistoryEventHandlingThread] 
> |impl.TimelineClientImpl|: Failed to get the response from the timeline 
> server.
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
> at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:226)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:162)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:237)
> at com.sun.jersey.api.client.Client.handle(Client.java:648)
> at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
> at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
> at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.doPostingObject(TimelineClientImpl.java:472)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.doPosting(TimelineClientImpl.java:321)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:301)
> at 
> org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService.handleEvents(ATSHistoryLoggingService.java:349)
> at 
> org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService.access$700(ATSHistoryLoggingService.java:53)
> at 
> org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService$1.run(ATSHistoryLoggingService.java:190)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:152)
> at java.net.SocketInputStream.read(SocketInputStream.java:122)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read1(BufferedInputStr

hung AM due to timeline timeout

2016-08-03 Thread Slava Markeyev
I'm running into an issue that occurs fairly often (but is not consistently
reproducible) where YARN reports a negative value for memory allocation, e.g.
(-2048), and a 0 vcore allocation despite the AM actually running. For
example, the AM reports a runtime of 1 hr, 29 min, 40 sec while the DAG took
only 880 seconds.

After some investigation I've noticed that the AM has repeated issues
contacting the timeline server after the DAG is complete (error trace
below). This seems to be delaying the shutdown sequence. It seems to retry
every minute before either giving up or succeeding, but I'm not sure which.
What's the best way to debug why this is happening, and potentially to
shorten the timeout retry period? I'm more concerned with job completion
than with logging to the timeline server. This doesn't happen consistently;
it affects only some Tez jobs, not all.

I'm using hive 1.1.0 and tez 0.7.1 on cdh5.4.10 (hadoop 2.6).

2016-08-03 19:18:22,881 [INFO] [ContainerLauncher #112]
|impl.ContainerManagementProtocolProxy|: Opening proxy : node:45454
2016-08-03 19:18:23,292 [WARN] [HistoryEventHandlingThread]
|security.UserGroupInformation|: PriviledgedActionException as:x
(auth:SIMPLE) cause:java.net.SocketTimeoutException: Read timed out
2016-08-03 19:18:23,292 [ERROR] [HistoryEventHandlingThread]
|impl.TimelineClientImpl|: Failed to get the response from the timeline
server.
com.sun.jersey.api.client.ClientHandlerException:
java.net.SocketTimeoutException: Read timed out
at
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:226)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:162)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:237)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at
com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.doPostingObject(TimelineClientImpl.java:472)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.doPosting(TimelineClientImpl.java:321)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:301)
at
org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService.handleEvents(ATSHistoryLoggingService.java:349)
at
org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService.access$700(ATSHistoryLoggingService.java:53)
at
org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService$1.run(ATSHistoryLoggingService.java:190)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:689)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1324)
at
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at
org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
at
org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:127)
at
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory$1.run(TimelineClientImpl.java:501)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory$1.run(TimelineClientImpl.java:498)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
at
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory.getHttpURLConnection(TimelineClie