Thanks for your suggestion. I did not follow this:

"For this use-case I would suggest using single cache puts (the same way you insert data to Oracle) and combine it with write-behind store writing to HDFS, this should give you better latencies."

Are you suggesting not using IGFS and using
Kobe,
I am not sure this is a fair comparison, because writing a file to IGFS involves three operations: updating the metadata cache (empty file creation), the actual file write, and then updating the metadata cache again (to record the file size).
For this use-case I would suggest using single cache puts
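For reference, the write-behind setup suggested in this thread might be configured along these lines (a sketch only; `HdfsCacheStore` is a hypothetical CacheStore implementation writing to HDFS, and the cache name is illustrative):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: single cache puts combined with an asynchronous
// write-behind store that persists entries to HDFS.
CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("blobCache");

cfg.setWriteThrough(true);               // route updates to the store...
cfg.setWriteBehindEnabled(true);         // ...but batch them asynchronously
cfg.setWriteBehindFlushFrequency(5000);  // flush pending writes every 5 s
// HdfsCacheStore is a placeholder for your own CacheStore implementation.
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(HdfsCacheStore.class));
```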
Paolo,
I think you can create a replicated cache and store such metrics there. When a job starts, it increments the number of used CPUs, and decrements it when it finishes execution. The load balancer can then make decisions based on the data in the cache.
Will this work?
-Val
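Val's idea can be illustrated with a plain-Java stand-in for the replicated cache (a simulation only; a real deployment would use a REPLICATED `IgniteCache` with an atomic update such as `cache.invoke()` instead of the local map used here):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simulates the replicated "used CPUs" cache: node id -> CPUs in use.
class CpuLoadTracker {
    private final Map<String, Integer> usedCpus = new ConcurrentHashMap<>();

    // A job starting on a node increments its counter atomically.
    void jobStarted(String nodeId, int cpus) {
        usedCpus.merge(nodeId, cpus, Integer::sum);
    }

    // A finishing job decrements the counter.
    void jobFinished(String nodeId, int cpus) {
        usedCpus.merge(nodeId, -cpus, Integer::sum);
    }

    // The load balancer picks the node with the fewest CPUs in use.
    String leastLoadedNode() {
        return usedCpus.entrySet().stream()
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElseThrow(() -> new IllegalStateException("no nodes"));
    }
}
```

In Ignite the same pattern would keep the counters in a replicated cache so every node sees the load figures locally.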
Hello,
I am comparing the persistence of large blobs (megabytes) in an Oracle relational database vs. IGFS (1.5.0.final, DUAL_ASYNC, backed by a secondary filesystem) on a 64-bit, 16GB RAM, 8-core RHEL6 VM.
I notice that, with the other parameters remaining constant, the time to persist the same blob in
Hi Amit,
As a workaround I can suggest the following:
1. In CacheStore.write, do not write to the database; instead, add the key-value pair to some collection that is maintained on a per-session basis. You can use CacheStoreSession.attachment() for this.
2. Implement your own
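The buffering idea in step 1 can be sketched in plain Java (a simulation only; a real implementation would extend Ignite's `CacheStoreAdapter` and keep the buffer in `CacheStoreSession.attachment()` so it lives for exactly one session):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simulates a CacheStore that buffers writes for the current session
// and flushes them to the database in one batch when the session ends.
class BufferingStore {
    // Stands in for CacheStoreSession.attachment(): one buffer per session.
    private final Map<String, String> sessionBuffer = new LinkedHashMap<>();
    private final Map<String, String> database = new LinkedHashMap<>();

    // Step 1: write() only records the pair; the database is untouched.
    void write(String key, String value) {
        sessionBuffer.put(key, value);
    }

    // Step 2: on session end, push the whole batch in one operation.
    void sessionEnd(boolean commit) {
        if (commit) {
            database.putAll(sessionBuffer); // single batched write
        }
        sessionBuffer.clear();
    }

    Map<String, String> databaseContents() {
        return database;
    }
}
```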
Hi Mateusz,
Please see inline
On 2/17/2016 11:30 AM, mp wrote:
Denis,
Please see below for answers.
Cheers,
-Mateusz
On Tue, Feb 16, 2016 at 10:18 PM, Denis Magda wrote:
Hi Mateusz,
I've revisited the whole discussion from the
Thanks Pavel I will check this out.
Regards,
Dor Ben Dov
From: Pavel Tupitsyn [mailto:ptupit...@gridgain.com]
Sent: Wednesday, 24 February 2016 15:49
To: user@ignite.apache.org
Subject: Re: c++ native client
Hi,
We have C++ support, please see https://apacheignite-cpp.readme.io/
On Wed, Feb 24, 2016 at 4:40 PM, Dor Ben Dov wrote:
> Hi,
>
> Any possible way that you are working on or having in your road map to
> develop C++ client?
>
> Regards,
>
> Dor Ben Dov
Hi,
Is there any chance you are working on, or have on your road map, a C++ client?
Regards,
Dor Ben Dov
Looks like the same problem was reported here -
http://apache-ignite-users.70518.x6.nabble.com/Exception-on-Ignite-cluster-shutdown-td3129.html.
I started another thread on dev list regarding exceptions in method
signatures.
--Yakov
2016-02-23 4:47 GMT+03:00 vkulichenko
Hi Steve!
It seems you close the cache when your service stops. Since the service is being stopped because of the node shutdown, the cache close request cannot be processed and an exception is thrown.
I think for now you should just catch this exception in your service close() method.
I will start discussion on
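The suggested stopgap, tolerating the shutdown-time failure inside the service's close() method, can be sketched like this (illustrative only; the cache field and the exact exception type are assumptions based on the description above):

```java
// Sketch of a service whose close() tolerates a cache that can no
// longer be closed because the node itself is already shutting down.
class MyService {
    private final AutoCloseable cache;

    MyService(AutoCloseable cache) {
        this.cache = cache;
    }

    public void close() {
        try {
            cache.close();
        } catch (Exception e) {
            // Node is stopping: the cache close request cannot be
            // processed, so log and ignore rather than fail the stop.
            System.err.println("Ignoring cache close failure on shutdown: " + e);
        }
    }
}
```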