In an ongoing evaluation of HAWQ in Azure, we've encountered some
sub-optimal network performance. It would be great to get some additional
information about a few server parameters related to the network:

- gp_max_packet_size
   The default is documented as 8192. Why was this number chosen? Should
this value be aligned with the network infrastructure's configured MTU,
accounting for the packet header overhead of the chosen interconnect type?
(Azure only supports an MTU of 1500, and we have seen better reliability
using TCP in Greenplum.) The arithmetic I have in mind is sketched below,
after this list.

- gp_interconnect_type
    The docs claim UDPIFC is the default, but UDP is the default we observe.
Do the recommendations around which setting to use change in an IaaS
environment (AWS or Azure)?

- gp_interconnect_queue_depth
   My naive read is that performance can be traded off against (potentially
significant) RAM utilization. Is there additional detail around tuning this
knob? How does the interaction between this and the underlying NIC queue
depth affect performance? As an example, in Azure, disabling TX queuing on
the virtual NIC (ifconfig eth0 txqueuelen 0, sketched in the second snippet
below) improved benchmark performance, as the underlying Hyper-V host is
doing its own queuing anyway.
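To make the gp_max_packet_size question concrete, here is roughly the
arithmetic I have in mind for Azure's MTU of 1500 (assuming plain IPv4 with
no VLAN or other encapsulation overhead), along with how we are checking the
current values; the gpconfig invocation at the end is only a guess at how one
would apply the change:

    # Largest UDP payload that fits in one 1500-byte frame:
    #   1500 (MTU) - 20 (IPv4 header) - 8 (UDP header) = 1472 bytes
    # Largest TCP segment (MSS) for the same MTU:
    #   1500 (MTU) - 20 (IPv4 header) - 20 (TCP header) = 1460 bytes
    ip link show eth0 | grep -o 'mtu [0-9]*'   # confirm the NIC MTU
    psql -c "SHOW gp_max_packet_size;"         # documented default: 8192
    psql -c "SHOW gp_interconnect_type;"       # reports UDP, not UDPIFC
    # If MTU alignment is the right approach, presumably something like:
    gpconfig -c gp_max_packet_size -v 1472     # or the HAWQ equivalent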

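For completeness, the TX queue change on the Azure VMs amounts to the
following (eth0 is the virtual NIC in our images; the iproute2 form and the
verification step are shown only for reference):

    ifconfig eth0 txqueuelen 0            # disable the software TX queue
    ip link set dev eth0 txqueuelen 0     # iproute2 equivalent
    ifconfig eth0 | grep -i txqueuelen    # verify the new queue length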

Thanks,
Kyle
-- 
Kyle Dunn | Data Engineering | Pivotal
Direct: 303.905.3171 | Email: kd...@pivotal.io
