What about the case where even restarting the HS2 service does not help? That is my situation: after restarting HS2 the health shows good (green), but I still cannot query the data.
Any quick thoughts/input? Thanks!

On Sun, Oct 11, 2015 at 9:58 PM, Edward Capriolo <[email protected]> wrote:

Dynamic service discovery is an overkill solution. It should just work.

On Sun, Oct 11, 2015 at 11:46 AM, Benjamin Kim <[email protected]> wrote:

We are also facing the same problem. We get a crash at least once a week. I am wondering when Cloudera will support Dynamic Service Discovery so we can load balance multiple HiveServer2 instances. We wouldn't have to do our own load-balancing setup if it were built in.

https://issues.apache.org/jira/browse/HIVE-8376

On Sunday, October 11, 2015 at 6:11:00 AM UTC-7, Edward Capriolo wrote:

There is no doubt HS2 is better than HS1, but HS2 is still unreliable. I have run 3 versions of Cloudera in the last 2 years. We are using HS2 for Hue and for running about 8 queries an hour.

HS2 typically lasts 2 or 3 days before it fails mid-job with no real error message. The best case is when it shuts down, because then it gets restarted; more typically it ends up alive but doing nothing.

On the other hand, the CLI never fails.

So we run multiple copies of HS2 with a load balancer in front and do application retries to deal with HS2 failures. This is a lot of engineering for something that does 8 queries an hour.

If I could start over again I would just have our app create a .q file and launch the query from a CLI subshell, because that never fails.
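For illustration, a minimal sketch of the CLI-subshell fallback described above: write the statement to a .q file and shell out to the Hive CLI with hive -f. The class name, query, and table below are placeholders, not anything from this thread.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class HiveCliRunner {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Placeholder statement; substitute the real query.
            String query = "SELECT COUNT(*) FROM mydb.some_table;";

            // Write the statement to a temporary .q file.
            Path qFile = Files.createTempFile("query", ".q");
            Files.write(qFile, query.getBytes("UTF-8"));

            // Run it through the Hive CLI; this bypasses HiveServer2 entirely.
            ProcessBuilder pb = new ProcessBuilder("hive", "-f", qFile.toString());
            pb.inheritIO();   // stream the CLI's stdout/stderr to ours
            int exitCode = pb.start().waitFor();
            System.out.println("hive -f exited with " + exitCode);
        }
    }

A non-zero exit code is the only failure signal here, so the caller still has to check it (or parse the output) instead of relying on a JDBC exception.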
On Saturday, October 10, 2015, Vineet Mishra <[email protected]> wrote:

Hi Edward,

I am not sure whether you meant HiveServer2 or HS1, since HS2 is documented as preferable over HS1:

https://cwiki.apache.org/confluence/display/Hive/HiveServer

and I had not heard about any such reliability issue with HS2. Moreover, I am making multiple JDBC connections to fire parallel queries, which barely consumes cluster resources while the MR jobs are running.

I would also like to understand: if HS2 is unreliable, how should we go about firing parallel queries for a process? Is running multiple Hive CLI instances the only way to issue queries? If yes, then why do the other client APIs (i.e. Thrift, JDBC, etc.) exist? If no, what are the other possible alternatives/solutions?

Hi @(Rajat and Manoj),

I have recently increased the heap size for HS2; the current settings are below:

Client Java Heap Size = 1 GB
Java Heap Size of WebHCat Server = 512 MB

I feel that, as a developer, one's perspective should be to look forward and fix the issue rather than simply relying on restarting the system/service to get rid of it temporarily.

Any suggestions/input would be highly appreciated.

Thanks!

On Sat, Oct 10, 2015 at 10:22 PM, manoj donga <[email protected]> wrote:

We have been facing the same issue for a long time; the only reliable solution we have found is to restart all HiveServer2 instances :)

On 10 Oct 2015 10:06 pm, "Rajat Dua" <[email protected]> wrote:

Increase the heap size of HiveServer2; it might help.

On Saturday 10 October 2015, Edward Capriolo <[email protected]> wrote:

HiveServer2 goes down all the time. Do yourself a favor: go into Cloudera Manager and check the box to restart it if it goes down. Everyone is in denial about HiveServer2 reliability; I run load balancers and run check jobs all day to make sure the thing is serviceable (barely). It is a bad situation that Hive talks about deprecating the CLI in favor of a CLI that works with HiveServer2, yet no one seems to run it for multiple days in a row to see how unreliable it is. You're never going to get anyone on this list to admit to any problems either...

On Sat, Oct 10, 2015 at 11:42 AM, Vineet Mishra <[email protected]> wrote:

Any update on this?

URGENT CALL!

On Oct 10, 2015 9:25 AM, "Vineet Mishra" <[email protected]> wrote:

Hi Jimmy,

Is there any obvious reason for it to run out of worker threads when the maximum number of queries fired at a time is only 8?

I am trying to open multiple sessions on different connections, so each thread corresponds to one connection.

Thanks!

On Sat, Oct 10, 2015 at 1:22 AM, Jimmy Xiang <[email protected]> wrote:

Could it be because HS2 runs out of worker threads? Are you trying to open multiple sessions on the same connection?

Thanks,
Jimmy
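For illustration, a minimal sketch of the pattern being discussed here: one HiveServer2 JDBC connection (and therefore one session) per worker thread, rather than multiple sessions multiplexed over a single connection. The JDBC URL is taken from the connection string in the log below; the table, user, password, and class name are placeholders. With only 8 concurrent queries this stays far below the server-side Thrift worker cap (hive.server2.thrift.max.worker.threads).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelHiveQueries {
        private static final String URL = "jdbc:hive2://hadoop-hs2:10000/mydb";

        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            ExecutorService pool = Executors.newFixedThreadPool(8);   // 8 queries at a time
            for (int i = 0; i < 8; i++) {
                final int part = i;
                pool.submit(() -> {
                    // One connection per thread; never share a connection across threads.
                    try (Connection conn = DriverManager.getConnection(URL, "hive", "");
                         Statement stmt = conn.createStatement();
                         ResultSet rs = stmt.executeQuery(
                                 "SELECT COUNT(*) FROM some_table WHERE part = " + part)) {
                        while (rs.next()) {
                            System.out.println("part " + part + ": " + rs.getLong(1));
                        }
                    } catch (Exception e) {
                        // With several HS2 instances behind a load balancer, a retry would go here.
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }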
On Fri, Oct 9, 2015 at 12:34 PM, Vineet Mishra <[email protected]> wrote:

Any idea about this? Frequent connectivity issue to HiveServer2:

2015-10-10 00:23:11,070 [main] ERROR (HiveConnection.java:439) - Error opening session
org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:355)
    at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:432)
    at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:414)
    at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:160)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:147)
    at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:429)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at com.sd.dwh.sc.tungsten.custom.HiveRunnable.<init>(HiveRunnable.java:42)
    at com.sd.dwh.sc.tungsten.custom.HiveInvoker.main(HiveInvoker.java:62)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
2015-10-10 00:23:11,071 [main] ERROR (HiveRunnable.java:48) - Could not establish connection to jdbc:hive2://hadoop-hs2:10000/mydb: null

URGENT CALL!

Thanks!
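For illustration, a minimal sketch of the application-level retry mentioned earlier in the thread, wrapped around the connection open that fails in the log above. The retry count, back-off, credentials, and class name are placeholders; this works around a flaky HS2, it does not explain why the Thrift transport is being dropped.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class HiveConnectRetry {

        // Try to open a HiveServer2 connection, retrying a few times with a
        // fixed back-off before giving up.
        public static Connection connectWithRetry(String url, int maxAttempts, long backoffMs)
                throws SQLException, InterruptedException {
            SQLException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return DriverManager.getConnection(url, "hive", "");
                } catch (SQLException e) {
                    last = e;   // e.g. "Error opening session" wrapping a TTransportException
                    System.err.println("Connect attempt " + attempt + " failed: " + e.getMessage());
                    Thread.sleep(backoffMs);
                }
            }
            throw last;
        }

        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = connectWithRetry("jdbc:hive2://hadoop-hs2:10000/mydb", 3, 5000)) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }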
On Fri, Oct 9, 2015 at 2:42 PM, Vineet Mishra <[email protected]> wrote:

This looks like it could be the issue:

https://issues.apache.org/jira/browse/HIVE-2314

Is there any workaround or resolution for it?

Thanks!

On Fri, Oct 9, 2015 at 1:24 PM, Vineet Mishra <[email protected]> wrote:

Hi All,

I am trying to connect to HiveServer2 to query some data in parallel and am running into a strange exception; the stack trace is below:

java.sql.SQLException: Error while cleaning up the server resources
    at org.apache.hive.jdbc.HiveConnection.close(HiveConnection.java:569)
    at com.sd.dwh.sc.tungsten.custom.HiveRunnable.mergeJDBC(HiveRunnable.java:93)
    at com.sd.dwh.sc.tungsten.custom.HiveRunnable.run(HiveRunnable.java:55)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:355)
    at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:432)
    at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:414)
    at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_CloseSession(TCLIService.java:183)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.CloseSession(TCLIService.java:170)
    at org.apache.hive.jdbc.HiveConnection.close(HiveConnection.java:567)
    ... 5 more

Any quick reply would be highly appreciated!

Thanks!
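The stack trace above is thrown from HiveConnection.close() when the session's Thrift transport is already gone. For illustration, a minimal defensive sketch (plain JDBC, names and credentials are placeholders): treat a failure during close() as a warning so it does not hide the result or error of the query itself.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class QuietClose {

        // Run a count and return it; a broken close() is logged, not rethrown.
        public static long countRows(String url, String table) throws SQLException {
            Connection conn = DriverManager.getConnection(url, "hive", "");
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
                rs.next();
                return rs.getLong(1);
            } finally {
                try {
                    conn.close();
                } catch (SQLException e) {
                    // Session already torn down on the server (e.g. HS2 restarted mid-run):
                    // log it, but do not let it mask the query's own outcome.
                    System.err.println("close() failed, ignoring: " + e.getMessage());
                }
            }
        }
    }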
