If people are using it, then it makes sense to keep it functional.

Feel free to open a PR and add the missing dependency(ies) in the
appropriate place; just make sure they are not propagated more widely
than necessary.
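
For illustration, a sketch of what that could look like: a dependency
declared with test scope stays on that module's own test classpath and is
not pulled in transitively by consumers. (The placement and the version
below are assumptions for the example, not taken from the Hive build.)

<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <version>42.7.3</version> <!-- placeholder version -->
  <scope>test</scope>
</dependency>

For a compile-scoped dependency, <optional>true</optional> achieves a
similar effect of stopping transitive propagation.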

Best,
Stamatis

On Mon, Aug 21, 2023 at 12:04 PM Zoltán Rátkai
<zrat...@cloudera.com.invalid> wrote:
>
> Hi Team,
>
> I agree with Laszlo Bodor, it is a good tool for developing Hive. It worked
> properly before this PR.
> My question is: how do you provide the missing database jar (e.g. the
> Postgres driver) for StartMiniHS2Cluster, which is run by Maven? If this
> change means the pom file has to be modified, then that is no better than
> just reverting the PR mentioned earlier, and it carries the risk that the
> modification gets committed together with a developer's actual work. I
> think modifying a pom or Java file just to be able to run the tool is the
> wrong approach. We must be able to run it without modifying anything.
>
> Regards,
>
> Zoltan Ratkai
>
> On Mon, Aug 21, 2023 at 10:44 AM László Bodor <bodorlaszlo0...@gmail.com>
> wrote:
>
> > Hey!
> >
> > I tend to return to StartMiniHS2Cluster from time to time. It's very good
> > for the "change code - compile - run - debug - repeat" way of doing things.
> > From this point of view, a docker image is not an alternative to it. Also,
> > StartMiniHS2Cluster just works, always; moreover, it uses the same
> > minicluster architecture as our qtests (I mean the way it obtains a mini
> > mr/tez/llap/whatever cluster from the hadoop shim). Qtests are also not an
> > alternative: I often need the ability to run queries in random order or in
> > an ad-hoc way without editing a qfile.
> >
> > I feel that HIVE-27338 <https://issues.apache.org/jira/browse/HIVE-27338>
> > was meant to solve licensing problems, and it achieved that, but maybe went
> > too far: I believe we should be able to provide jars to the test classpath,
> > e.g. jars that were downloaded beforehand, even manually.
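> >
> > For illustration, one way to feed a pre-downloaded jar to the test
> > classpath without declaring a new <dependency> is Surefire's
> > additionalClasspathElements configuration (the jar path below is purely
> > hypothetical):
> >
> > <plugin>
> >   <groupId>org.apache.maven.plugins</groupId>
> >   <artifactId>maven-surefire-plugin</artifactId>
> >   <configuration>
> >     <additionalClasspathElements>
> >       <additionalClasspathElement>/path/to/postgresql.jar</additionalClasspathElement>
> >     </additionalClasspathElements>
> >   </configuration>
> > </plugin>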
> >
> > Regards,
> > Laszlo Bodor
> >
> >
> > Stamatis Zampetakis <zabe...@gmail.com> ezt írta (időpont: 2023. aug. 18.,
> > P, 13:08):
> >
> > > Hey Zsolt,
> > >
> > > I would divide this discussion into three topics:
> > >
> > > 1. What are the benefits of using the StartMiniHS2Cluster?
> > > 2. What other alternatives are there for testing HS2 with different
> > > metastore DBMS?
> > > 3. How can we make StartMiniHS2Cluster work as before?
> > >
> > > Regarding point 1, I don't have an answer because I have never used
> > > StartMiniHS2Cluster myself. Obviously, other people here are using it,
> > > so I would be curious to know in which cases it is useful.
> > >
> > > There are various alternatives for testing HS2 with different metastore
> > > DBMS.
> > >
> > > The first, and in my opinion the easiest to use, is the classic qtest
> > > framework with the various CLI drivers. Basically, with the work that
> > > Laszlo started in HIVE-21954 it is pretty easy to run any kind of test
> > > over any metastore just by setting the respective system property.
> > >
> > > mvn test -Dtest=TestMiniLlapLocalCliDriver
> > > -Dqfile=partition_params_postgres.q -Dtest.metastore.db=postgres
> > >
> > > The second is to use our brand new and shiny docker images contributed by
> > > Zhihua in HIVE-26400 and get the real feel of HS2 and HMS in a prod-like
> > > setup. I haven't played much with this in apache/master, but I did use
> > > some similar images in our internal forks and it's pretty easy to get it
> > > up and running.
> > >
> > > The third is the old and classic hive-dev-box [1] started by Zoltan. With
> > > two or three commands you have a Hive cluster-like environment, and of
> > > course you can choose which DBMS you want for the metastore.
> > >
> > > Regarding point 3, I assume that it is pretty easy to fix by adding the
> > > postgresql (or other JDBC driver) dependency inside the hive-it-unit
> > > module.
> > >
> > > <dependency>
> > >   <groupId>org.postgresql</groupId>
> > >   <artifactId>postgresql</artifactId>
> > >   <optional>true</optional>
> > > </dependency>
> > >
> > > Given that we have other alternatives, do we really need to go in this
> > > direction? In fact, do we really need the StartMiniHS2Cluster class?
> > >
> > > Best,
> > > Stamatis
> > >
> > > [1] https://github.com/kgyrtkirk/hive-dev-box
> > >
> > > On Tue, Aug 15, 2023, 6:08 PM Zsolt Miskolczi <zsolt.miskol...@gmail.com>
> > > wrote:
> > >
> > > > Hey there!
> > > >
> > > > Do you know how it is possible to use MiniHS2 with an embedded
> > > > metastore service but PostgreSQL as the metastore database?
> > > >
> > > > I'm pretty sure it broke with this change,
> > > > https://github.com/apache/hive/pull/4317, which changed how drivers
> > > > are handled: we no longer bundle the PostgreSQL driver with the Hive
> > > > build. When I copy the PostgreSQL driver jar into the
> > > > packaging/target/apache-hive-4.0.0-beta-1-SNAPSHOT-bin/apache-hive-4.0.0-beta-1-SNAPSHOT-bin/lib
> > > > directory, I can run the schematool, as it sees the driver.
> > > > The real issue is with MiniHS2Cluster: it is basically a Maven test,
> > > > and I have really no idea how I can pass the driver to the test. If it
> > > > were a simple java command, I would try to pass the -cp argument, but
> > > > I see no other way to add an extra jar file to the test execution.
> > > > I want to run MiniHS2 with this command:
> > > >
> > > > mvn test -Dtest=StartMiniHS2Cluster -DminiHS2.run=true
> > > > -DminiHS2.usePortsFromConf=true -DminiHS2.clusterType=LLAP
> > > > -DminiHS2.conf="../../data/conf/llap/hive-site.xml"
> > > > -Dpackaging.minimizeJar=false -DskipShade -Dremoteresources.skip=true
> > > > -Dmaven.javadoc.skip=true -Denforcer.skip=true
> > > >
> > > > And I get this error (it seems the metastore client cannot find the
> > > > proper driver):
> > > >
> > > > java.lang.RuntimeException: Error applying authorization policy on hive configuration: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> > > >   at org.apache.hive.service.cli.CLIService.init(CLIService.java:122)
> > > >   at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
> > > >   at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:243)
> > > >   at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:394)
> > > >   at org.apache.hive.jdbc.miniHS2.StartMiniHS2Cluster.testRunCluster(StartMiniHS2Cluster.java:78)
> > > >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >   at java.lang.reflect.Method.invoke(Method.java:498)
> > > >   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > >   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > >   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > >   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> > > >   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > >   at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> > > >   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> > > >   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> > > >   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> > > >   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> > > >   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> > > >   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> > > >   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> > > >   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> > > >   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > >   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > >   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > >   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > >   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > >   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > >   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > >   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > >   at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > >   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> > > >   at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:1001)
> > > >   at org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:2003)
> > > >   at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:135)
> > > >   at org.apache.hive.service.cli.CLIService.init(CLIService.java:119)
> > > >   ... 32 more
> > > > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> > > >   at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:1033)
> > > >   at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:993)
> > > >   ... 35 more
> > > > Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
> > > >   at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:88)
> > > >   at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:96)
> > > >   at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:149)
> > > >   at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:120)
> > > >   at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:5769)
> > > >   at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:5847)
> > > >   at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:5827)
> > > >   at org.apache.hadoop.hive.ql.session.SessionState.setAuthorizerV2Config(SessionState.java:1029)
> > > >   ... 36 more
> > > > Caused by: java.lang.reflect.InvocationTargetException
> > > >   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > > >   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> > > >   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > > >   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> > > >   at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:86)
> > > >   ... 43 more
> > > > Caused by: MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException java.sql.SQLException: No suitable driver)
> > > >   at org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.throwMetaException(MetaStoreUtils.java:150)
> > > >   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.callEmbeddedMetastore(HiveMetaStoreClient.java:309)
> > > >   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:220)
> > > >   at org.apache.hadoop.hive.ql.metadata.HiveMetaStoreClientWithLocalCache.<init>(HiveMetaStoreClientWithLocalCache.java:120)
> > > >   at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:154)
> > > >   ... 48 more
> > > >
> > > >
> > > > Any help is appreciated,
> > > > Thank you
> > > >
> > >
> >
