Thanks for the reply.

I'm using HAWQ 2.0.1.0 installed on a single host (40 CPUs, over 100 GB of
RAM). I wonder how to configure a single-host HAWQ instance to make full use
of the host's resources.
At the moment I'm using HAWQ without YARN; when I set hawq_rm_stmt_nvseg=X,
only 6 of the X virtual segments spawned by HAWQ use 100% CPU, and the rest
are idle.

My resource manager settings:
show hawq_rm_memory_limit_perseg -> 128GB
show hawq_rm_nvcore_limit_perseg -> 40
show default_hash_table_bucket_number -> 12
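For reference, this is a sketch of how I'm setting the per-statement virtual segment count in a session (the values here are illustrative, not a recommendation; my understanding from the HAWQ docs is that hawq_rm_stmt_nvseg only takes effect when a statement-level memory quota is also set):

```sql
-- Sketch only: illustrative values, assuming HAWQ 2.0.x GUC behavior.
-- Force statements in this session to run with 40 virtual segments:
SET hawq_rm_stmt_nvseg = 40;

-- hawq_rm_stmt_nvseg is documented to take effect only together with a
-- statement-level virtual segment memory quota:
SET hawq_rm_stmt_vseg_memory = '2gb';

-- For hash-distributed tables, the bucket number caps parallelism, so the
-- cluster-level default may also need raising (set in hawq-site.xml):
--   default_hash_table_bucket_number = 40
```

With 40 cores on one host, I expected something close to 40 busy virtual segments, which is why the 6-segment ceiling surprises me.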




2016-09-29 15:44 GMT+02:00 Vineet Goel <[email protected]>:

> Dominik,
>
> What version are you using? In the latest version, hawq expand (gpexpand)
> is not required and the cluster can be expanded dynamically in just a few
> easy steps.
>
> If using Ambari, see: http://hdb.docs.pivotal.io/20/admin/ambari-admin.html#amb-expand
>
> Otherwise, see: http://hdb.docs.pivotal.io/20/admin/ClusterExpansion.html
>
> - Vineet
>
>
> On Thu, Sep 29, 2016 at 8:12 AM Dominik Choma <[email protected]>
> wrote:
>
>> Hi all,
>>
>> I want to increase the parallelism level and hardware utilization by
>> increasing the number of segments per host. In PHD this can be done via
>> gpexpand:
>> http://pivotalhd-210.docs.pivotal.io/doc/2010/ExpandingtheHAWQSystem.html#ExpandingtheHAWQSystem-IncreasingSegmentsPerHost
>> Does HAWQ have a utility similar to gpexpand?
>>
>> Dominik
>>
>>
