mohit.kaushik wrote:
On 12/16/2015 09:07 PM, Eric Newton wrote:
I was making the huge assumption that your client runs with the
accumulo scripts and it is not one of the accumulo known start points:
in this case, it is given the JVM parameters of ACCUMULO_OTHER_OPTS.
Regardless of how you are running your client, it is running out of
memory. You can adjust your Java options to give the JVM more memory.
Alternatively, you can change how you use the client API to reduce memory
usage.
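A minimal sketch of the first suggestion, assuming the client is launched through the Accumulo scripts so that conf/accumulo-env.sh applies to it; the 4 GB heap is an illustrative assumption, not a tuned value:

```shell
# In conf/accumulo-env.sh: give "other" Accumulo processes (including
# clients started via the accumulo script) a larger JVM heap.
# The 4g figure is an illustrative assumption, not a recommendation.
export ACCUMULO_OTHER_OPTS="-Xmx4g ${ACCUMULO_OTHER_OPTS}"
```

The second suggestion, reducing client memory use, would instead mean lowering the memory buffered by the ingest code, for example via the batch writer's maximum-memory setting.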
I just map the hostnames of the servers in the client's /etc/hosts file and use
> t
Perhaps I am asking something very obvious but I
I would need more details to break down this question:

> why is CentOS caching 21 GB

What leads you to believe this?

> Is it expected to fill all available memory?

The OS is expected to use all memory. We have found the OS's aggressive
use of disk caching swipes memory from large processes.
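The point about the OS using all memory can be checked directly: on Linux, the page cache is counted under "Cached" and is reclaimable on demand, so a large value there is normal and is not memory taken away from any process. A quick way to see the breakdown on a standard Linux box:

```shell
# Show total, free, and page-cache memory (in kB) from /proc/meminfo.
# A large "Cached" value is expected on a busy CentOS host and is
# reclaimed automatically when processes need the memory.
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```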
Thanks Eric, but one doubt still left unclear is that when all the
processes have their own memory limits, why is CentOS caching 21 GB?
Is it expected to fill all available memory? And how does
ACCUMULO_OTHER_OPTS help with ingestion when I am using native memory maps?
On 12/15/2015 09:21 PM, Eric Newton wrote:
This is actually a client issue, and not related to the server or its
performance.

The code sending updates to the server is spending so much time in Java GC
that it has decided to kill itself.

You may want to increase the size of the JVM used for ingest, probably by
using a larger value in ACCUMULO_OTHER_OPTS.