Hi,
I'm having trouble understanding your question. Can you give an example of
the operations you are trying and why you believe data is being lost?
-Todd
On Thu, Jun 14, 2018 at 8:24 PM, 秦坤 wrote:
> Hello,
> I use the Java scan API to operate on Kudu in large batches.
> If a session contains
The op seen in the logs is a rowset compaction, which takes existing
diskrowsets and rewrites them. It's not a flush, which writes in-memory
data to disk, so I don't think flush_threshold_mb is relevant. Rowset
compaction is done to reduce the amount of overlap between rowsets in
primary key space,
Hi all,
I'm running Kudu 1.6.0-cdh5.14.2. Looking at the tablet server logs,
I see that most compactions are compacting small files (~40 MB each). For
example:
I0615 07:22:42.637351 30614 tablet.cc:1661] T 6bdefb8c27764a0597dcf98ee1b450ba P 70f3e54fe0f3490cbf0371a6830a33a7: