[
https://issues.apache.org/jira/browse/KUDU-2639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16720932#comment-16720932
]
Adar Dembo commented on KUDU-2639:
--
If I'm understanding you correctly, you're asking what will happen if you try
to insert 10G of data when your limit is configured to 8G? Will the limit be
exceeded?
It's a difficult question to answer definitively. As Kudu approaches the memory
limit, there will be more and more backpressure on incoming writes. When the
memory limit is reached, pretty much all writes will fail. In theory this
allows Kudu to flush to disk without accumulating additional data and to bring
the memory consumption back down. I say "in theory" because it's not an
airtight system. As I mentioned before, scans aren't accounted for in memory
consumption, and there's no scan-side backpressure when memory consumption is
high. So it's possible for consumption to sit at the limit while the server
still accepts new scans that push it further over.
But, in most cases, your 10G insert workload will receive backpressure and will
slow down so that the server can flush enough data to stay under its 8G memory
limit.
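The flush-plus-backpressure feedback loop above can be sketched with a toy model. This is a hypothetical illustration, not the actual Kudu implementation: the server class, the fixed flush amount per tick, and the client-side retry policy are all assumptions made for the sketch (the real Kudu clients handle backpressure retries internally).

```python
import time

class MockTabletServer:
    """Toy model of a tablet server under a hard memory limit.

    Writes are rejected once consumption would exceed the limit; each
    tick() 'flushes' some data to disk, freeing memory, loosely mimicking
    Kudu flushing in-memory row sets under pressure. All sizes and the
    flush rate are made-up numbers for illustration.
    """
    def __init__(self, limit_bytes=8 << 30, flush_per_tick=1 << 30):
        self.limit = limit_bytes
        self.used = 0
        self.flush_per_tick = flush_per_tick
        self.rejections = 0

    def tick(self):
        # Background maintenance: flush data to disk, freeing memory.
        self.used = max(0, self.used - self.flush_per_tick)

    def write(self, nbytes):
        if self.used + nbytes > self.limit:
            self.rejections += 1
            return False  # backpressure: reject the write
        self.used += nbytes
        return True

def insert_with_backoff(server, chunks, base_delay=0.001, max_retries=50):
    """Retry each rejected write with exponential backoff, giving the
    server time to flush (hypothetical client-side policy)."""
    for chunk in chunks:
        delay = base_delay
        for _ in range(max_retries):
            if server.write(chunk):
                break
            time.sleep(delay)   # back off while the server flushes
            server.tick()       # advance the toy model's background flush
            delay = min(delay * 2, 0.1)
        else:
            raise RuntimeError("write failed after retries")

# Insert 10 GiB in 1 GiB chunks against an 8 GiB limit: the first 8 chunks
# fill memory, then each later write is rejected once and succeeds after a
# flush frees room. The workload completes, just more slowly.
srv = MockTabletServer()
insert_with_backoff(srv, [1 << 30] * 10)
```

The point of the sketch is the answer's claim in miniature: the 10 GiB workload does finish under an 8 GiB limit, because rejected writes slow the client down until flushes catch up, rather than the limit being silently exceeded.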
> How do I clear kudu memory or how do I ensure that kudu memory does not
> exceed the limit
>
>
> Key: KUDU-2639
> URL: https://issues.apache.org/jira/browse/KUDU-2639
> Project: Kudu
> Issue Type: Bug
> Components: master, metrics, server
> Affects Versions: 1.7.0
> Reporter: wangkang
>Priority: Major
> Fix For: n/a
>
> Attachments: 1544690968288.jpg, 1544691002343.jpg
>
>
> When I insert 1.2 gigabytes of data, the server's memory usage keeps
> increasing, reaching a peak of 3.2 gigabytes (48% utilization). If I want to
> insert more, is it possible to hit the memory usage limit? How can I avoid
> this situation? Can the memory used by the server be cleared manually?
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)