[ 
https://issues.apache.org/jira/browse/HIVE-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060175#comment-15060175
 ] 

rohit garg commented on HIVE-12683:
-----------------------------------

Thanks for your inputs. I will try these changes and see if they give me any 
performance boost over the Hive query engine.

This was the OOM error I was getting before I tweaked the memory settings (a sketch 
of the kind of change I made follows the log):

0 FATAL [Socket Reader #1 for port 55739] 
org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[Socket 
Reader #1 for port 55739,5,main] threw an Error.  Shutting down now...
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
        at 
org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1510)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:750)
        at 
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:624)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:595)
2015-12-07 20:31:32,859 FATAL [AsyncDispatcher event handler] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
java.lang.OutOfMemoryError: GC overhead limit exceeded
2015-12-07 20:31:30,590 WARN [IPC Server handler 0 on 55739] 
org.apache.hadoop.ipc.Server: IPC Server handler 0 on 55739, call heartbeat({  
containerId=container_1449516549171_0001_01_000100, requestId=10184, 
startIndex=0, maxEventsToGet=0, taskAttemptId=null, eventCount=0 }), rpc 
version=2, client version=19, methodsFingerPrint=557389974 from 
10.10.30.35:47028 Call#11165 Retry#0: error: java.lang.OutOfMemoryError: GC 
overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at 
javax.security.auth.SubjectDomainCombiner.optimize(SubjectDomainCombiner.java:464)
        at 
javax.security.auth.SubjectDomainCombiner.combine(SubjectDomainCombiner.java:267)
        at 
java.security.AccessControlContext.goCombiner(AccessControlContext.java:499)
        at 
java.security.AccessControlContext.optimize(AccessControlContext.java:407)
        at java.security.AccessController.getContext(AccessController.java:501)
        at javax.security.auth.Subject.doAs(Subject.java:412)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
2015-12-07 20:32:53,495 INFO [Thread-60] amazon.emr.metrics.MetricsSaver: Saved 
4:3 records to /mnt/var/em/raw/i-782f08c8_20151207_7921_07921_raw.bin
2015-12-07 20:32:53,495 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
2015-12-07 20:32:50,435 INFO [IPC Server handler 20 on 55739] 
org.apache.hadoop.ipc.Server: IPC Server handler 20 on 55739, call 
getTask(org.apache.tez.common.ContainerContext@409a6aa9), rpc version=2, client 
version=19, methodsFingerPrint=557389974 from 10.10.30.33:33644 Call#11094 
Retry#0: error: java.io.IOException: java.lang.OutOfMemoryError: GC overhead 
limit exceeded
java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded
2015-12-07 20:32:29,117 WARN [IPC Server handler 23 on 55739] 
org.apache.hadoop.ipc.Server: IPC Server handler 23 on 55739, call 
getTask(org.apache.tez.common.ContainerContext@7c7e6992), rpc version=2, client 
version=19, methodsFingerPrint=557389974 from 10.10.30.38:44218 Call#11260 
Retry#0: error: java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
2015-12-07 20:32:53,497 INFO [Thread-60] amazon.emr.metrics.MetricsSaver: Saved 
1:1 records to /mnt/var/em/raw/i-782f08c8_20151207_7921_07921_raw.bin
2015-12-07 20:32:53,498 INFO [Thread-61] amazon.emr.metrics.MetricsSaver: Saved 
1:1 records to /mnt/var/em/raw/i-782f08c8_20151207_7921_07921_raw.bin
2015-12-07 20:32:53,498 INFO [Thread-2] org.apache.tez.dag.app.DAGAppMaster: 
DAGAppMaster received a signal. Signaling TaskScheduler
2015-12-07 20:32:53,498 INFO [Thread-2] 
org.apache.tez.dag.app.rm.TaskSchedulerEventHandler: TaskScheduler notified 
that iSignalled was : true
2015-12-07 20:32:53,499 INFO [Thread-2] 
org.apache.tez.dag.history.HistoryEventHandler: Stopping HistoryEventHandler
2015-12-07 20:32:53,499 INFO [Thread-2] 
org.apache.tez.dag.history.recovery.RecoveryService: Stopping RecoveryService
2015-12-07 20:32:53,499 INFO [Thread-2] 
org.apache.tez.dag.history.recovery.RecoveryService: Closing Summary Stream
2015-12-07 20:32:53,499 INFO [LeaseRenewer:[email protected]:9000] 
org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException
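
The log above is from the Tez AM (DAGAppMaster), so the relevant knobs are the AM 
and container memory settings quoted in the issue description below 
(tez.am.resource.memory.mb, tez.am.launch.cmd-opts, hive.tez.container.size, 
hive.tez.java.opts). A trimmed-down sketch of that kind of change; the numbers here 
are illustrative only, the actual values for our cluster are in the description:

    -- illustrative values only; heap (-Xmx) sized to roughly 80% of the container
    set tez.am.resource.memory.mb=8192;     -- Tez ApplicationMaster container size (MB)
    set tez.am.launch.cmd-opts=-Xmx6554m;   -- AM JVM heap, ~80% of the AM container
    set hive.tez.container.size=8192;       -- task container size (MB)
    set hive.tez.java.opts=-Xmx6554m;       -- task JVM heap, ~80% of the task container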

> Does Tez run slower than hive on larger dataset (~2.5 TB)?
> ----------------------------------------------------------
>
>                 Key: HIVE-12683
>                 URL: https://issues.apache.org/jira/browse/HIVE-12683
>             Project: Hive
>          Issue Type: Bug
>            Reporter: rohit garg
>
> We have started to look into testing the Tez query engine. From initial results, 
> we are getting a 30% performance boost over Hive on smaller data sets (1-10 GB), 
> but Hive starts to perform better than Tez as data size increases. For example, 
> when we run a Hive query with Tez on about 2.3 TB of data, it performs worse 
> than Hive alone (~20% lower performance). Details are in the post below.
> On a cluster with 1.3 TB of RAM, I set the following properties:
> set tez.task.resource.memory.mb=10000;
> set tez.am.resource.memory.mb=59205;
> set tez.am.launch.cmd-opts=-Xmx47364m;
> set hive.tez.container.size=59205;
> set hive.tez.java.opts=-Xmx47364m;
> set tez.am.grouping.max-size=36700160000;
> Is this normal, or am I missing some property / not configuring something 
> properly? Also, I am using an older version of Tez for now. Could that be 
> the issue too? I still have to bootstrap the latest version of Tez on EMR and 
> test whether it does any better.
> I thought of asking here too:
> http://www.jwplayer.com/blog/hive-with-tez-on-emr/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)