Hi all,

I am using PXF and HCatalog to query Hive. I have two Hive tables, t1 and t2; t1 is a large table (about 680 million rows). HAWQ is running in YARN resource manager mode.


[hive@master ~]$ hive
hive> select count(1) from t1;
OK
680852926
Time taken: 0.721 seconds, Fetched: 1 row(s)
hive> exit;


When I query these tables in HAWQ, the small table t2 returns quickly, but t1 is very, very slow:


[gpadmin@master ~]$ 
[gpadmin@master ~]$ psql -U gpadmin -d gpadmin 
gpadmin=# set pxf_service_address to 'master:51200';
SET
Time: 0.410 ms
gpadmin=# select count(*) from hcatalog.default.t2;
 count 
-------
  1000
(1 row)


Time: 910.853 ms


gpadmin=# explain select count(*) from hcatalog.default.t1;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Aggregate  (cost=0.00..431.00 rows=1 width=8)
   ->  Gather Motion 1:1  (slice1; segments: 1)  (cost=0.00..431.00 rows=1 width=8)
         ->  Aggregate  (cost=0.00..431.00 rows=1 width=8)
               ->  External Scan on t1  (cost=0.00..431.00 rows=1 width=1)
 Optimizer status: PQO version 1.627
(5 rows)


Time: 1388.073 ms
gpadmin=# 
gpadmin=# select count(1) from hcatalog.default.t1;


It waits a long time and never returns a result. The master log shows:


2016-09-27 09:46:25.816366 CST,"gpadmin","gpadmin",p764498,th-1935386496,"10.0.230.20","16234",2016-09-27 09:31:13 CST,90355,con51,cmd20,seg-1,,,x90355,sx1,"LOG","00000","ConnID 5. Registered in HAWQ resource manager (By OID)",,,,,,"select count(*) from hcatalog.default.t1;",0,,"rmcomm_QD2RM.c",609,
2016-09-27 09:46:25.816508 CST,,,p760393,th-1935386496,,,,0,con4,,seg-10000,,,,,"LOG","00000","ConnID 5. Expect query resource (256 MB, 0.022727 CORE) x 1 ( MIN 1 ) resource after adjusting based on queue NVSEG limits.",,,,,,,0,,"resqueuemanager.c",1913,
2016-09-27 09:46:25.816603 CST,,,p760393,th-1935386496,,,,0,con4,,seg-10000,,,,,"LOG","00000","Latency of getting resource allocated is 138us",,,,,,,0,,"resqueuemanager.c",4375,
2016-09-27 09:46:25.816743 CST,"gpadmin","gpadmin",p764498,th-1935386496,"10.0.230.20","16234",2016-09-27 09:31:13 CST,90355,con51,cmd20,seg-1,,,x90355,sx1,"LOG","00000","ConnID 5. Acquired resource from resource manager, (256 MB, 0.022727 CORE) x 1.",,,,,,"select count(*) from hcatalog.default.t1;",0,,"rmcomm_QD2RM.c",868,
2016-09-27 09:46:25.816868 CST,"gpadmin","gpadmin",p764498,th-1935386496,"10.0.230.20","16234",2016-09-27 09:31:13 CST,90355,con51,cmd20,seg-1,,,x90355,sx1,"LOG","00000","data locality ratio: 0.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (0.000 B/0 B/0 B); segment size with penalty(avg/min/max): (0.000 B/0 B/0 B); continuity(avg/min/max): (0.000/0.000/0.000).",,,,,,"select count(*) from hcatalog.default.t1;",0,,"cdbdatalocality.c",3396,




I don't understand why HAWQ allocates so few resources for this query. The EXPLAIN plan shows "segments: 1", and the log shows "virtual segment number: 1" with only (256 MB, 0.022727 CORE) x 1 acquired, so the entire external scan of 680 million rows runs on a single virtual segment. Are there any parameters I can set so that querying Hive through PXF and HCatalog is as fast as querying Hive directly?
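For example, would a statement-level resource quota help? Based on the HAWQ documentation, setting hawq_rm_stmt_nvseg together with hawq_rm_stmt_vseg_memory should force a fixed number of virtual segments for the next statement. The values below are just guesses for my cluster; I have not verified them:

gpadmin=# SET hawq_rm_stmt_nvseg = 8;           -- run the next statement on 8 virtual segments (guess)
gpadmin=# SET hawq_rm_stmt_vseg_memory = '1GB'; -- memory quota per virtual segment (guess)
gpadmin=# select count(1) from hcatalog.default.t1;

Or is a single virtual segment expected for PXF external tables, and the bottleneck is somewhere else?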



My hawq-site.xml:

  <configuration>
    
    <property>
      <name>hawq_dfs_url</name>
      <value>master:8020/hawq_default</value>
    </property>
    
    <property>
      <name>hawq_global_rm_type</name>
      <value>yarn</value>
    </property>
    
    <property>
      <name>hawq_master_address_host</name>
      <value>master</value>
    </property>
    
    <property>
      <name>hawq_master_address_port</name>
      <value>6432</value>
    </property>
    
    <property>
      <name>hawq_master_directory</name>
      <value>/data/hawq/master</value>
    </property>
    
    <property>
      <name>hawq_master_temp_directory</name>
      <value>/tmp</value>
    </property>
    
    <property>
      <name>hawq_re_cgroup_hierarchy_name</name>
      <value>hadoop-yarn</value>
    </property>
    
    <property>
      <name>hawq_re_cgroup_mount_point</name>
      <value>/sys/fs/cgroup</value>
    </property>
    
    <property>
      <name>hawq_re_cleanup_period</name>
      <value>180</value>
    </property>
    
    <property>
      <name>hawq_re_cpu_enable</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hawq_re_cpu_weight</name>
      <value>1024.0</value>
    </property>
    
    <property>
      <name>hawq_re_vcore_pcore_ratio</name>
      <value>1.0</value>
    </property>
    
    <property>
      <name>hawq_resourcemanager_master_address_domainsocket_port</name>
      <value>5436</value>
    </property>
    
    <property>
      <name>hawq_rm_master_port</name>
      <value>5437</value>
    </property>
    
    <property>
      <name>hawq_rm_memory_limit_perseg</name>
      <value>240GB</value>
    </property>
    
    <property>
      <name>hawq_rm_nvcore_limit_perseg</name>
      <value>16</value>
    </property>
    
    <property>
      <name>hawq_rm_nvseg_perquery_perseg_limit</name>
      <value>18</value>
    </property>
    
    <property>
      <name>hawq_rm_segment_port</name>
      <value>5438</value>
    </property>
    
    <property>
      <name>hawq_rm_yarn_address</name>
      <value>worker1:8050</value>
    </property>
    
    <property>
      <name>hawq_rm_yarn_app_name</name>
      <value>hawq</value>
    </property>
    
    <property>
      <name>hawq_rm_yarn_queue_name</name>
      <value>default</value>
    </property>
    
    <property>
      <name>hawq_rm_yarn_scheduler_address</name>
      <value>worker1:8030</value>
    </property>
    
    <property>
      <name>hawq_segment_address_port</name>
      <value>40000</value>
    </property>
    
    <property>
      <name>hawq_segment_directory</name>
      <value>/data/hawq/segment</value>
    </property>
    
    <property>
      <name>hawq_segment_temp_directory</name>
      <value>/tmp</value>
    </property>
    
    <property>
      <name>hawq_standby_address_host</name>
      <value>worker1</value>
    </property>
    
  </configuration>
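For reference, the per-query virtual segment limits from the file above can also be confirmed from psql with the standard SHOW command:

gpadmin=# SHOW hawq_rm_nvseg_perquery_perseg_limit;
gpadmin=# SHOW hawq_rm_nvseg_perquery_limit;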
And my hive-site.xml:

  <configuration>
    
    <property>
      <name>ambari.hive.db.schema.name</name>
      <value>hive</value>
    </property>
    
    <property>
      <name>atlas.hook.hive.maxThreads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>atlas.hook.hive.minThreads</name>
      <value>1</value>
    </property>
    
    <property>
      <name>datanucleus.autoCreateSchema</name>
      <value>false</value>
    </property>
    
    <property>
      <name>datanucleus.cache.level2.type</name>
      <value>none</value>
    </property>
    
    <property>
      <name>datanucleus.fixedDatastore</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.join.noconditionaltask.size</name>
      <value>5368709120</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.auto.convert.sortmerge.join.to.mapjoin</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cbo.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.cli.print.header</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.class</name>
      <value>org.apache.hadoop.hive.thrift.ZooKeeperTokenStore</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
      <value>worker1.bigdata:2181,master.bigdata:2181,worker2.bigdata:2181</value>
    </property>
    
    <property>
      <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
      <value>/hive/cluster/delegation</value>
    </property>
    
    <property>
      <name>hive.compactor.abortedtxn.threshold</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.compactor.check.interval</name>
      <value>300L</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.num.threshold</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.compactor.delta.pct.threshold</name>
      <value>0.1f</value>
    </property>
    
    <property>
      <name>hive.compactor.initiator.on</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.threads</name>
      <value>0</value>
    </property>
    
    <property>
      <name>hive.compactor.worker.timeout</name>
      <value>86400L</value>
    </property>
    
    <property>
      <name>hive.compute.query.using.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.conf.restricted.list</name>
      <value>hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role</value>
    </property>
    
    <property>
      <name>hive.convert.join.bucket.mapjoin.tez</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.default.fileformat</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.default.fileformat.managed</name>
      <value>TextFile</value>
    </property>
    
    <property>
      <name>hive.enforce.bucketing</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.enforce.sorting</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.enforce.sortmergebucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.compress.intermediate</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.compress.output</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.dynamic.partition.mode</name>
      <value>strict</value>
    </property>
    
    <property>
      <name>hive.exec.failure.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.max.created.files</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions</name>
      <value>5000</value>
    </property>
    
    <property>
      <name>hive.exec.max.dynamic.partitions.pernode</name>
      <value>2000</value>
    </property>
    
    <property>
      <name>hive.exec.orc.compression.strategy</name>
      <value>SPEED</value>
    </property>
    
    <property>
      <name>hive.exec.orc.default.compress</name>
      <value>ZLIB</value>
    </property>
    
    <property>
      <name>hive.exec.orc.default.stripe.size</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.orc.encoding.strategy</name>
      <value>SPEED</value>
    </property>
    
    <property>
      <name>hive.exec.parallel</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.exec.parallel.thread.number</name>
      <value>8</value>
    </property>
    
    <property>
      <name>hive.exec.post.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.pre.hooks</name>
      <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.bytes.per.reducer</name>
      <value>67108864</value>
    </property>
    
    <property>
      <name>hive.exec.reducers.max</name>
      <value>1009</value>
    </property>
    
    <property>
      <name>hive.exec.scratchdir</name>
      <value>/tmp/hive</value>
    </property>
    
    <property>
      <name>hive.exec.submit.local.task.via.child</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.exec.submitviachild</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.execution.engine</name>
      <value>mr</value>
    </property>
    
    <property>
      <name>hive.fetch.task.aggr</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion</name>
      <value>more</value>
    </property>
    
    <property>
      <name>hive.fetch.task.conversion.threshold</name>
      <value>1073741824</value>
    </property>
    
    <property>
      <name>hive.limit.optimize.enable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.limit.pushdown.memory.usage</name>
      <value>0.04</value>
    </property>
    
    <property>
      <name>hive.map.aggr</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.force.flush.memory.threshold</name>
      <value>0.9</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.min.reduction</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.map.aggr.hash.percentmemory</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.mapjoin.bucket.cache.size</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.mapjoin.optimized.hashtable</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.mapred.reduce.tasks.speculative.execution</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.mapfiles</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.mapredfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.merge.orcfile.stripe.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.rcfile.block.level</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.merge.size.per.task</name>
      <value>256000000</value>
    </property>
    
    <property>
      <name>hive.merge.smallfiles.avgsize</name>
      <value>16000000</value>
    </property>
    
    <property>
      <name>hive.merge.tezfiles</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.authorization.storage.checks</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.cache.pinobjtypes</name>
      <value>Table,Database,Type,FieldSchema,Order</value>
    </property>
    
    <property>
      <name>hive.metastore.client.connect.retry.delay</name>
      <value>5s</value>
    </property>
    
    <property>
      <name>hive.metastore.client.socket.timeout</name>
      <value>1800s</value>
    </property>
    
    <property>
      <name>hive.metastore.connect.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.execute.setugi</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.metastore.failure.retries</name>
      <value>24</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.keytab.file</name>
      <value>/etc/security/keytabs/hive.service.keytab</value>
    </property>
    
    <property>
      <name>hive.metastore.kerberos.principal</name>
      <value>hive/[email protected]</value>
    </property>
    
    <property>
      <name>hive.metastore.pre.event.listeners</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
    </property>
    
    <property>
      <name>hive.metastore.sasl.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.metastore.server.max.threads</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.metastore.uris</name>
      <value>thrift://worker1.bigdata:9083</value>
    </property>
    
    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/apps/hive/warehouse</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.bucketmapjoin.sortedmerge</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.optimize.constant.propagation</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.index.filter</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.metadataonly</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.null.scan</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.optimize.reducededuplication.min.reducer</name>
      <value>4</value>
    </property>
    
    <property>
      <name>hive.optimize.sort.dynamic.partition</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.orc.compute.splits.num.threads</name>
      <value>10</value>
    </property>
    
    <property>
      <name>hive.orc.splits.include.file.footer</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.prewarm.numcontainers</name>
      <value>3</value>
    </property>
    
    <property>
      <name>hive.security.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.authorization.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.security.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authenticator.manager</name>
      <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.auth.reads</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.security.metastore.authorization.manager</name>
      <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
    </property>
    
    <property>
      <name>hive.server2.allow.user.substitution</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.authentication</name>
      <value>NONE</value>
    </property>
    
    <property>
      <name>hive.server2.authentication.spnego.keytab</name>
      <value>HTTP/[email protected]</value>
    </property>
    
    <property>
      <name>hive.server2.authentication.spnego.principal</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>
    
    <property>
      <name>hive.server2.enable.doAs</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.logging.operation.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.logging.operation.log.location</name>
      <value>/tmp/hive/operation_logs</value>
    </property>
    
    <property>
      <name>hive.server2.support.dynamic.service.discovery</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.server2.table.type.mapping</name>
      <value>CLASSIC</value>
    </property>
    
    <property>
      <name>hive.server2.tez.default.queues</name>
      <value>default</value>
    </property>
    
    <property>
      <name>hive.server2.tez.initialize.default.sessions</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.tez.sessions.per.default.queue</name>
      <value>1</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.http.path</name>
      <value>cliservice</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.http.port</name>
      <value>10001</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.max.worker.threads</name>
      <value>500</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.server2.thrift.sasl.qop</name>
      <value>auth</value>
    </property>
    
    <property>
      <name>hive.server2.transport.mode</name>
      <value>binary</value>
    </property>
    
    <property>
      <name>hive.server2.use.SSL</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.server2.zookeeper.namespace</name>
      <value>hiveserver2</value>
    </property>
    
    <property>
      <name>hive.smbjoin.cache.rows</name>
      <value>10000</value>
    </property>
    
    <property>
      <name>hive.stats.autogather</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.stats.dbclass</name>
      <value>fs</value>
    </property>
    
    <property>
      <name>hive.stats.fetch.column.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.stats.fetch.partition.stats</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.support.concurrency</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.tez.auto.reducer.parallelism</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.container.size</name>
      <value>15360</value>
    </property>
    
    <property>
      <name>hive.tez.cpu.vcores</name>
      <value>-1</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning.max.data.size</name>
      <value>104857600</value>
    </property>
    
    <property>
      <name>hive.tez.dynamic.partition.pruning.max.event.size</name>
      <value>1048576</value>
    </property>
    
    <property>
      <name>hive.tez.input.format</name>
      <value>org.apache.hadoop.hive.ql.io.HiveInputFormat</value>
    </property>
    
    <property>
      <name>hive.tez.java.opts</name>
      <value>-server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps</value>
    </property>
    
    <property>
      <name>hive.tez.log.level</name>
      <value>INFO</value>
    </property>
    
    <property>
      <name>hive.tez.max.partition.factor</name>
      <value>2.0</value>
    </property>
    
    <property>
      <name>hive.tez.min.partition.factor</name>
      <value>0.25</value>
    </property>
    
    <property>
      <name>hive.tez.smb.number.waves</name>
      <value>0.5</value>
    </property>
    
    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager</value>
    </property>
    
    <property>
      <name>hive.txn.max.open.batch</name>
      <value>1000</value>
    </property>
    
    <property>
      <name>hive.txn.timeout</name>
      <value>300</value>
    </property>
    
    <property>
      <name>hive.user.install.directory</name>
      <value>/user/</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.enabled</name>
      <value>true</value>
    </property>
    
    <property>
      <name>hive.vectorized.execution.reduce.enabled</name>
      <value>false</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.checkinterval</name>
      <value>4096</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.flush.percent</name>
      <value>0.1</value>
    </property>
    
    <property>
      <name>hive.vectorized.groupby.maxentries</name>
      <value>100000</value>
    </property>
    
    <property>
      <name>hive.zookeeper.client.port</name>
      <value>2181</value>
    </property>
    
    <property>
      <name>hive.zookeeper.namespace</name>
      <value>hive_zookeeper_namespace</value>
    </property>
    
    <property>
      <name>hive.zookeeper.quorum</name>
      <value>worker1.bigdata:2181,master.bigdata:2181,worker2.bigdata:2181</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://worker1.bigdata/hive?createDatabaseIfNotExist=true</value>
    </property>
    
    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
    </property>
    
  </configuration>
