I guess the point about low-latency requests was that long RPC queues
might add extra latency to request handling, and that latency might be
unpredictably long. E.g., if the queue is almost full and a new RPC
request is added, it has to wait behind everything already queued before
it's dispatched to one of the available service threads.
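To put rough numbers on it (purely illustrative, assuming a FIFO queue
drained by a fixed worker pool): with a 50-deep queue, 20 service threads,
and ~20 ms of handler time per request, a request landing in a nearly full
queue waits on the order of 50 * 20 / 20 = 50 ms before a thread even
picks it up, on top of its own handling time. Any slow requests already
queued stretch that wait further, which is where the unpredictability
comes from.
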
Thanks Todd. Better late than never indeed, appreciate it very much.
Yes, precisely, we are dealing with very spiky ingest.
The immediate issue has been addressed, though: we extended the Spark
KuduContext so we could build our own AsyncKuduClient and increase
defaultOperationTimeoutMs from its 30s default.
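In case it helps anyone else, the core of it looks roughly like this (a
sketch from memory, so check the names against your client version; the
master addresses and timeout value are placeholders, and the wiring into
our KuduContext subclass is omitted):

  import org.apache.kudu.client.AsyncKuduClient

  // Placeholder master addresses for the cluster.
  val masters = "kudu-master-1:7051,kudu-master-2:7051,kudu-master-3:7051"

  // Build our own async client with a larger operation timeout
  // (120s here, up from the 30s default; the value is illustrative).
  val asyncClient = new AsyncKuduClient.AsyncKuduClientBuilder(masters)
    .defaultOperationTimeoutMs(120000L)
    .build()
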
Hi Mauricio,
Sorry for the late reply on this one. Hope "better late than never" is the
case here :)
As you implied in your email, the main issue with increasing the queue
length to deal with queue overflows is that it only helps with momentary
spikes. According to queueing theory (and intuition), if the arrival rate
exceeds the service rate for any sustained period, the queue will fill up
no matter how long it is; a bigger queue just buys a little more time
before it overflows.
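To make that concrete with the textbook M/M/1 model (an illustration, not
a claim about Kudu's exact queueing behavior): with arrival rate λ and
service rate μ, the mean time a request waits in the queue is

  W_q = λ / (μ × (μ − λ))

which blows up as λ approaches μ, and once λ > μ the backlog grows at
λ − μ per unit time, so a longer queue only changes when you overflow,
not whether you do.
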
Hello all.
We're dealing with some regular (~daily) client timeouts and resulting
ingest job failures. Reviewing the logs, it all points to slow disks (the
hosts are getting old). Sadly, the Spark connector doesn't expose
defaultOperationTimeoutMs on AsyncKuduClient, so we're stuck with the 30s
default.
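For context, our write path is the stock connector, roughly like this
(master address and table name are placeholders, and spark / df are the
usual SparkSession and ingest DataFrame; signatures from memory, so
verify against your kudu-spark version):

  import org.apache.kudu.spark.kudu.KuduContext

  // The stock KuduContext builds its own client internally, so every
  // operation inherits the client's default 30s operation timeout.
  val kuduContext = new KuduContext("kudu-master-1:7051", spark.sparkContext)
  kuduContext.insertRows(df, "impala::db.events")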