[
https://issues.apache.org/jira/browse/HAWQ-1231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15770364#comment-15770364
]
Jon Roberts commented on HAWQ-1231:
-----------------------------------
I don't understand this suggested change. On the one hand, it sounds like you
are referencing a GUC that controls the maximum number of vsegs per segment,
but I don't believe such a GUC exists.
The summary suggests that you want to overload
hawq_rm_nvseg_perquery_perseg_limit so that it applies not only to a session
but also sets the maximum for the cluster. I don't agree with this at all.
There is another GUC, hawq_rm_nvseg_perquery_limit, which sets the maximum
number of vsegs for the entire cluster. This is rather confusing, especially
for a cluster where compute and storage are separate and the use case is a
cluster that grows and shrinks. I would rather have a GUC for the maximum
number of vsegs per segment. That is something you referenced, but again, I
don't think such a GUC exists.
> RM should error out when nvseg in statement level is greater than the guc for
> nvseg per seg.
> --------------------------------------------------------------------------------------------
>
> Key: HAWQ-1231
> URL: https://issues.apache.org/jira/browse/HAWQ-1231
> Project: Apache HAWQ
> Issue Type: Bug
> Components: Resource Manager
> Reporter: Xiang Sheng
> Assignee: Xiang Sheng
> Fix For: backlog
>
>
> If a user sets a large nvseg at the statement level that is greater than the
> per-segment nvseg GUC, the RM won't report an error. It will allocate
> according to the per-segment limit, which cannot satisfy the statement-level
> nvseg, and the RM will eventually time out. This behavior is not reasonable.
> We should error out directly and tell the user that if they want the
> statement-level nvseg, they must change the GUC (
> hawq_rm_nvseg_perquery_perseg_limit ) accordingly.
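A minimal sketch of the scenario described above, assuming a hypothetical
4-segment cluster; the GUC names are HAWQ's, but the table name and the
specific values are illustrative:

```sql
-- Per-segment vseg ceiling at its default of 6, so the cluster can grant
-- at most 4 * 6 = 24 vsegs to any one query on a 4-segment cluster.
SET hawq_rm_nvseg_perquery_perseg_limit = 6;

-- Statement-level request for a fixed 100 vsegs (setting a statement-level
-- vseg memory quota alongside it):
SET hawq_rm_stmt_vseg_memory = '128mb';
SET hawq_rm_stmt_nvseg = 100;

-- 100 > 24, so the RM can never satisfy the request; per the report it
-- currently waits and times out instead of erroring out immediately.
SELECT COUNT(*) FROM my_table;
```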
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)