Some background info: HAWQ 2.0 currently doesn't do dynamic resource allocation for PXF (external table) queries. This was a compromise we made because PXF used to have its own allocation logic, and we didn't get a chance to converge that logic with HAWQ 2.0. So, to keep performance compatible with HAWQ 1.x, the current logic assumes external table queries need 8 segments per node to execute (e.g., with 3 nodes in the cluster, it will request 24 segments). If that allocation fails, the query fails and the user sees an error message like "do not have sufficient resources" or "segments" to execute the query.
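To make the arithmetic concrete, here is a minimal sketch of that fixed-allocation check. The names and error wording are hypothetical, not HAWQ's actual resource-manager code; only the 8-per-node rule and the failure behavior come from the description above:

```python
# Minimal sketch of HAWQ 2.0's fixed allocation for external table (PXF)
# queries, as described above. All names here are hypothetical.

SEGMENTS_PER_NODE_FOR_EXTERNAL = 8  # fixed assumption carried over from 1.x


def segments_required(num_nodes: int) -> int:
    """An external table query requests a fixed 8 segments per node."""
    return SEGMENTS_PER_NODE_FOR_EXTERNAL * num_nodes


def check_allocation(num_nodes: int, available_segments: int) -> None:
    """Fail the query outright if the fixed request cannot be met."""
    needed = segments_required(num_nodes)
    if available_segments < needed:
        raise RuntimeError(
            f"do not have sufficient segments to execute the query "
            f"(need {needed}, have {available_segments})"
        )


check_allocation(num_nodes=3, available_segments=24)  # 8 * 3 = 24, succeeds
check_allocation(num_nodes=3, available_segments=20)  # raises RuntimeError
```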
As I understand it, the 1st call is to get the fragment info; the 2nd call optimizes the allocation of fragments to segments based on the info from the 1st call and generates the optimized plan (a rough sketch follows below the quoted message).

-Goden

On Mon, Aug 29, 2016 at 10:31 AM Kavinder Dhaliwal <[email protected]> wrote:

> Hi,
>
> Recently I was looking into the issue of PXF receiving multiple REST
> requests to the fragmenter. Based on offline discussions I have got a rough
> idea that this is happening because HAWQ plans every query twice on the
> master. I understand that this is to allow resource negotiation that was a
> feature of HAWQ 2.0. I'd like to know if anyone on the mailing list can
> give any more background into the history of the decision making behind
> this change for HAWQ 2.0 and whether this is only a short term solution
>
> Thanks,
> Kavinder
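For illustration, here is a toy, self-contained sketch of that two-pass flow. The function names and data shapes are made up (they are not the actual planner entry points or the PXF REST API); it only models why the fragmenter sees two requests per query:

```python
# Hypothetical sketch of the two-pass planning flow described above.
# Both passes hit the PXF fragmenter, which is why PXF sees two REST
# requests for a single query.

CALLS_TO_FRAGMENTER = 0  # counts requests the fragmenter would see


def get_fragment_info(table: str) -> list[dict]:
    """Stand-in for the PXF fragmenter call: returns which fragments
    (e.g. HDFS blocks) back the external table and the hosts holding
    their replicas."""
    global CALLS_TO_FRAGMENTER
    CALLS_TO_FRAGMENTER += 1
    return [
        {"fragment": 0, "hosts": ["node1", "node2"]},
        {"fragment": 1, "hosts": ["node2", "node3"]},
    ]


def allocate_segments(fragments: list[dict]) -> list[str]:
    """Stand-in for resource negotiation; for external tables this is
    the fixed 8-segments-per-node request described earlier."""
    return ["seg0", "seg1"]  # pretend two segments were acquired


def plan_query(table: str) -> dict:
    # Pass 1: plan once to learn the fragment layout.
    fragments = get_fragment_info(table)
    segments = allocate_segments(fragments)
    # Pass 2: plan again, mapping fragments onto the acquired segments
    # to produce the optimized plan. This second pass is what sends the
    # second REST request to the fragmenter.
    fragments = get_fragment_info(table)
    return {f["fragment"]: segments[i % len(segments)]
            for i, f in enumerate(fragments)}


plan = plan_query("pxf_example_table")
print(plan)                 # {0: 'seg0', 1: 'seg1'}
print(CALLS_TO_FRAGMENTER)  # 2 -> the duplicate request Kavinder observed
```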
