Change the following parameter in the $GPHOME/etc/hawq-site.xml file:

<property>
    <name>hawq_dfs_url</name>
    <value>hdpcluster/hawq_default</value>
    <description>URL for accessing HDFS.</description>
</property>

In the listing above:
* Replace hdpcluster with the actual service ID that is configured in HDFS.
* Replace /hawq_default with the directory you want to use for storing data on HDFS.
I had; here are the related configs in hdfs-client.xml:

<property>
    <name>dfs.nameservices</name>
    <value>skydata</value>
</property>
<property>
    <name>dfs.ha.namenodes.skydata</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.skydata.nn1</name>
    <value>192.168.60.24:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.skydata.nn2</name>
    <value>192.168.60.32:8020</value>
</property>
<property>
    <name>dfs.namenode.http-address.skydata.nn1</name>
Thank you very much.
From: "Radar Lei"
To: "user"
Cc: "dev"
Sent: Thursday, May 31, 2018 11:46:09 AM
Subject: Re: how hawq use HA hdfs
Seems like you set the 'dfs.nameservices' as skydata, but not 'dx' which you
defined in hawq-site.xml.
Regards,
Radar
On Thu, May 31, 2018 at 11:22 AM, wrote:
> I had; here are the related configs in hdfs-client.xml:
>
> <property>
>     <name>dfs.nameservices</name>
>     <value>skydata</value>
> </property>
>
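To make the two files agree, the hawq_dfs_url value in hawq-site.xml has to use the same nameservice ID as dfs.nameservices. A sketch, reusing the skydata service from the quoted configs (/hawq_default is just the docs' example directory):

```xml
<property>
    <name>hawq_dfs_url</name>
    <value>skydata/hawq_default</value>
    <description>URL for accessing HDFS.</description>
</property>
```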
Have you made changes to HAWQ configuration file 'hdfs-client.xml'?
Regards,
Radar
On Thu, May 31, 2018 at 10:07 AM, wrote:
> Change the following parameter in the $GPHOME/etc/hawq-site.xml file:
>
> <property>
>     <name>hawq_dfs_url</name>
>     <value>hdpcluster/hawq_default</value>
>     <description>URL for accessing HDFS.</description>
> </property>
>
> In the
If you are installing a new HAWQ, then a file space move is not required.
I think HAWQ will treat the host string as a URL unless you have configured
HAWQ HDFS HA correctly, so please verify whether you missed any other steps.
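For reference, wiring HAWQ to an HA HDFS means pointing hawq_dfs_url at the nameservice and mirroring the HA properties in HAWQ's hdfs-client.xml. A sketch with placeholder names (mynameservice, nnhost1, and nnhost2 are hypothetical):

```xml
<!-- hdfs-client.xml (sketch; nameservice, hosts, and ports are placeholders) -->
<property>
    <name>dfs.nameservices</name>
    <value>mynameservice</value>
</property>
<property>
    <name>dfs.ha.namenodes.mynameservice</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mynameservice.nn1</name>
    <value>nnhost1:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mynameservice.nn2</name>
    <value>nnhost2:8020</value>
</property>
```

With that in place, hawq_dfs_url in hawq-site.xml would be mynameservice/hawq_default rather than a host:port pair.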
Regards,
Radar
On Thu, May 31, 2018 at 9:40 AM, wrote:
> I think that move
As HAWQ has matured, the introduction of the Pluggable storage formats has
given users two methods to access external data. Is the project at a point
where we can advise the community that we recommend using the Pluggable
storage format instead of PXF? I believe this was discussed in a
There are no command-line tools to activate the standby automatically yet.
Maybe you can write a script to do it yourself; you just need to be more
careful.
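Since there is no official tool, such a script is left to the operator. A hedged sketch of the kind of watchdog one might write, run on the standby host: MASTER_HOST and the health-check command are placeholders, and the actual `hawq activate standby` invocation is left commented out so nothing fails over without careful testing.

```shell
#!/bin/sh
# Sketch of a standby-activation watchdog (assumptions: MASTER_HOST is a
# placeholder; 'hawq activate standby' is the HAWQ CLI command, commented out).

should_failover() {
    # $1 = health-check command, $2 = number of attempts.
    # Returns 0 (failover) only if every attempt fails.
    check="$1"; attempts="$2"; i=0
    while [ "$i" -lt "$attempts" ]; do
        if sh -c "$check" >/dev/null 2>&1; then
            return 1            # master is healthy; no failover
        fi
        sleep 1
        i=$((i + 1))
    done
    return 0                    # all checks failed
}

if should_failover "ping -c 1 -W 2 ${MASTER_HOST:-hawq-master}" 3; then
    echo "master unreachable, activating standby"
    # hawq activate standby    # run on the standby host; test carefully first
fi
```

The decision logic is kept in a function so it can be exercised with harmless commands (true/false) before pointing it at a real master.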
Regards,
Radar
On Wed, May 30, 2018 at 8:22 PM, wrote:
> Hi!
>
> I have added a standby node for HAWQ, and it can be activated to master by hand if
Lin Wen created HAWQ-1620:
-
Summary: Push Down Target List Information To Parquet Scan For
Bloomfilter
Key: HAWQ-1620
URL: https://issues.apache.org/jira/browse/HAWQ-1620
Project: Apache HAWQ
Issue
WANG Weinan created HAWQ-1619:
-
Summary: Fix bugs which evaluate in feature test
Key: HAWQ-1619
URL: https://issues.apache.org/jira/browse/HAWQ-1619
Project: Apache HAWQ
Issue Type: Sub-task
Hongxu Ma created HAWQ-1618:
---
Summary: Segment panic at workfile_mgr_close_file() when
transaction ROLLBACK
Key: HAWQ-1618
URL: https://issues.apache.org/jira/browse/HAWQ-1618
Project: Apache HAWQ