Re: how hawq use HA hdfs

2018-05-30 Thread xiang . dai
Change the following parameter in the $GPHOME/etc/hawq-site.xml file:
  hawq_dfs_url = hdpcluster/hawq_default (URL for accessing HDFS)
In the listing above:
* Replace hdpcluster with the actual service ID that is configured in HDFS.
* Replace /hawq_default with the directory
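The flattened listing above corresponds to a standard Hadoop-style XML property block. A sketch of what it would look like in $GPHOME/etc/hawq-site.xml (hdpcluster and /hawq_default are the placeholders from the post, to be replaced with the actual HDFS service ID and directory):

```xml
<property>
    <name>hawq_dfs_url</name>
    <value>hdpcluster/hawq_default</value>
    <description>URL for accessing HDFS.</description>
</property>
```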

Re: how hawq use HA hdfs

2018-05-30 Thread xiang . dai
I have; here are the related configs in hdfs-client.xml: dfs.nameservices = skydata; dfs.ha.namenodes.skydata = nn1,nn2; dfs.namenode.rpc-address.skydata.nn1 = 192.168.60.24:8020; dfs.namenode.rpc-address.skydata.nn2 = 192.168.60.32:8020; dfs.namenode.http-address.skydata.nn1
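Reconstructed as Hadoop-style XML, the poster's hdfs-client.xml HA block would look roughly like the sketch below (only the properties visible in the preview are shown; the truncated http-address value is left out):

```xml
<property>
    <name>dfs.nameservices</name>
    <value>skydata</value>
</property>
<property>
    <name>dfs.ha.namenodes.skydata</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.skydata.nn1</name>
    <value>192.168.60.24:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.skydata.nn2</name>
    <value>192.168.60.32:8020</value>
</property>
```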

Re: how hawq use HA hdfs

2018-05-30 Thread xiang . dai
Thank you very much. From: "Radar Lei" To: "user" Cc: "dev" Sent: Thursday, May 31, 2018 11:46:09 AM Subject: Re: how hawq use HA hdfs Seems like you set the 'dfs.nameservices' as skydata, but not 'dx' which you defined in hawq-site.xml. Regards, Radar On Thu, May 31, 2018 at

Re: how hawq use HA hdfs

2018-05-30 Thread Radar Lei
It seems you set 'dfs.nameservices' to skydata, but not the 'dx' you defined in hawq-site.xml. Regards, Radar On Thu, May 31, 2018 at 11:22 AM, wrote: > I have; here are the related configs in hdfs-client.xml: > dfs.nameservices > skydata
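The mismatch Radar points out can be sketched concretely: the service ID in hawq_dfs_url must be the same string as dfs.nameservices in hdfs-client.xml. A sketch assuming the poster keeps the skydata service ID (the /hawq_default directory here is illustrative, not from the poster's actual config):

```xml
<!-- hdfs-client.xml: the HA service ID -->
<property>
    <name>dfs.nameservices</name>
    <value>skydata</value>
</property>

<!-- hawq-site.xml: hawq_dfs_url must reference the SAME service ID,
     not an unrelated name like dx -->
<property>
    <name>hawq_dfs_url</name>
    <value>skydata/hawq_default</value>
</property>
```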

Re: how hawq use HA hdfs

2018-05-30 Thread Radar Lei
Have you made changes to the HAWQ configuration file 'hdfs-client.xml'? Regards, Radar On Thu, May 31, 2018 at 10:07 AM, wrote: > Change the following parameter in the $GPHOME/etc/hawq-site.xml file: > > > hawq_dfs_url > hdpcluster/hawq_default > URL for accessing HDFS. > > In the

Re: how hawq use HA hdfs

2018-05-30 Thread Radar Lei
If you are installing a new HAWQ cluster, the filespace move is not required. HAWQ will treat the host string as a plain URL unless you have configured HAWQ HDFS HA correctly, so please verify that you have not missed any other steps. Regards, Radar On Thu, May 31, 2018 at 9:40 AM, wrote: > I think that move

Questions for HAWQ dev community: Pluggable storage formats and files systems vs. PXF profiles

2018-05-30 Thread Ed Espino
As HAWQ has matured, the introduction of pluggable storage formats has given users two methods of accessing external data. Is the project at a point where we can advise the community that we recommend the pluggable storage formats instead of PXF? I believe this was discussed in a

Re: how hawq auto active standby

2018-05-30 Thread Radar Lei
There is no command-line tool to activate the standby automatically yet. You could write a script to do it yourself; it just needs to be done carefully. Regards, Radar On Wed, May 30, 2018 at 8:22 PM, wrote: > Hi! > > I have added a standby node for HAWQ, and it can be activated to master by hand if
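The script Radar suggests could be sketched as a small watchdog that probes the master and, after repeated failures, promotes the standby with the HAWQ CLI. This is a hypothetical sketch, not a supported tool: the hostname, port, and thresholds are assumptions, and a real deployment would need fencing to avoid a split-brain promotion while the master is merely slow.

```shell
#!/usr/bin/env bash
# Hypothetical failover watchdog (a sketch, not a supported tool).
# 'hawq activate standby' is the HAWQ CLI command that promotes the
# standby master; everything else below is an assumption.

MASTER_HOST="${MASTER_HOST:-mdw}"     # assumed master hostname
MASTER_PORT="${MASTER_PORT:-5432}"    # assumed master Postgres port
CHECK_INTERVAL="${CHECK_INTERVAL:-30}" # seconds between health checks
MAX_FAILURES="${MAX_FAILURES:-3}"     # consecutive failures before failover

# Return 0 if something is listening on the master's port.
master_alive() {
  timeout 1 bash -c "exec 3<>/dev/tcp/${MASTER_HOST}/${MASTER_PORT}" 2>/dev/null
}

watchdog() {
  local failures=0
  while true; do
    if master_alive; then
      failures=0
    else
      failures=$((failures + 1))
      if [ "$failures" -ge "$MAX_FAILURES" ]; then
        echo "master unreachable ${failures} times, activating standby" >&2
        hawq activate standby -a    # promote the standby master
        return
      fi
    fi
    sleep "$CHECK_INTERVAL"
  done
}

# To use on the standby host, run the loop in the background:
#   watchdog &
```

The caution in the reply matters: the watchdog should only ever run on the standby host, and a too-low MAX_FAILURES risks promoting the standby during a transient network hiccup while the original master is still alive.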

[jira] [Created] (HAWQ-1620) Push Down Target List Information To Parquet Scan For Bloomfilter

2018-05-30 Thread Lin Wen (JIRA)
Lin Wen created HAWQ-1620: - Summary: Push Down Target List Information To Parquet Scan For Bloomfilter Key: HAWQ-1620 URL: https://issues.apache.org/jira/browse/HAWQ-1620 Project: Apache HAWQ Issue

[jira] [Created] (HAWQ-1619) Fix bugs which evaluate in feature test

2018-05-30 Thread WANG Weinan (JIRA)
WANG Weinan created HAWQ-1619: - Summary: Fix bugs which evaluate in feature test Key: HAWQ-1619 URL: https://issues.apache.org/jira/browse/HAWQ-1619 Project: Apache HAWQ Issue Type: Sub-task

[jira] [Created] (HAWQ-1618) Segment panic at workfile_mgr_close_file() when transaction ROLLBACK

2018-05-30 Thread Hongxu Ma (JIRA)
Hongxu Ma created HAWQ-1618: --- Summary: Segment panic at workfile_mgr_close_file() when transaction ROLLBACK Key: HAWQ-1618 URL: https://issues.apache.org/jira/browse/HAWQ-1618 Project: Apache HAWQ