Re: Blob persistence performance: IGFS vs Oracle

2016-02-24 Thread Alexey Goncharuk
> Thanks for your suggestion. I did not follow this: "For this use-case I would suggest using single cache puts (the same way you insert data to Oracle) and combine it with write-behind store writing to HDFS, this should give you better latencies." Are you suggesting not using

Re: Blob persistence performance: IGFS vs Oracle

2016-02-24 Thread Kobe
Thanks for your suggestion. I did not follow this: "For this use-case I would suggest using single cache puts (the same way you insert data to Oracle) and combine it with write-behind store writing to HDFS, this should give you better latencies." Are you suggesting not using IGFS and using

Re: Blob persistence performance: IGFS vs Oracle

2016-02-24 Thread Alexey Goncharuk
Kobe, I am not sure this is a fair comparison, because writing a file to IGFS involves three operations: an update to the metadata cache (creating the empty file), the actual file write, and another update to the metadata cache (recording the file size). For this use-case I would suggest using single cache puts
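A minimal sketch of that write-behind setup, assuming a hypothetical HdfsBlobStore (a CacheStore<String, byte[]> that writes each blob out as an HDFS file); the cache name and tuning values are illustrative, not from this thread:

    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    try (Ignite ignite = Ignition.start()) {
        CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("blobs");

        cfg.setWriteThrough(true);              // route updates through the CacheStore
        cfg.setWriteBehindEnabled(true);        // buffer updates, flush asynchronously
        cfg.setWriteBehindFlushFrequency(1000); // flush at most once per second

        // HdfsBlobStore is hypothetical: a CacheStore<String, byte[]>
        // that persists each blob as a file in HDFS.
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(HdfsBlobStore.class));

        IgniteCache<String, byte[]> cache = ignite.getOrCreateCache(cfg);

        // One plain put per blob; the put returns as soon as the entry is in
        // the cache, and the store writes it to HDFS in the background.
        cache.put("blob-1", new byte[4 * 1024 * 1024]);
    }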

Re: Active jobs from LoadBalancingSpi

2016-02-24 Thread vkulichenko
Paolo, I think you can create a replicated cache and store such metrics there. When a job starts, it increments the number of used CPUs, and decrements it when it finishes execution. The load balancer can then make decisions based on the data in the cache. Will this work? -Val
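A rough sketch of that idea (the cache name, keying by node id, and the entry processor are my assumptions): each job bumps a per-node counter in a replicated cache when it starts and drops it when it finishes, so every node sees the same load picture.

    import java.util.UUID;
    import javax.cache.processor.MutableEntry;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.CacheEntryProcessor;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;

    // Serializable processor that atomically adds 'delta' to a node's counter.
    class Adjust implements CacheEntryProcessor<UUID, Integer, Object> {
        private final int delta;
        Adjust(int delta) { this.delta = delta; }

        @Override public Object process(MutableEntry<UUID, Integer> e, Object... args) {
            Integer cur = e.getValue();
            e.setValue(Math.max(0, (cur == null ? 0 : cur) + delta));
            return null;
        }
    }

    CacheConfiguration<UUID, Integer> cfg = new CacheConfiguration<>("usedCpus");
    cfg.setCacheMode(CacheMode.REPLICATED);
    IgniteCache<UUID, Integer> metrics = ignite.getOrCreateCache(cfg);

    UUID nodeId = ignite.cluster().localNode().id();
    metrics.invoke(nodeId, new Adjust(+1)); // job started
    metrics.invoke(nodeId, new Adjust(-1)); // job finished

A custom LoadBalancingSpi could then read metrics.get(node.id()) for each candidate node and pick the least-loaded one.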

Blob persistence performance: IGFS vs Oracle

2016-02-24 Thread Kobe
Hello. I am comparing the persistence of large blobs (megabytes) in an Oracle relational database vs. IGFS (1.5.0.final, DUAL_ASYNC, backed by a secondary filesystem) on a 64-bit, 16 GB RAM, 8-core RHEL6 VM. I notice that, all other parameters remaining constant, the time to persist the same blob in
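For context, the IGFS side of such a measurement might look roughly like this (the IGFS name and path are made up; the Oracle side would be a plain BLOB insert timed the same way):

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.ignite.IgniteFileSystem;
    import org.apache.ignite.igfs.IgfsPath;

    IgniteFileSystem igfs = ignite.fileSystem("igfs"); // name from the IGFS config
    byte[] blob = new byte[8 * 1024 * 1024];           // sample 8 MB blob

    long start = System.nanoTime();
    try (OutputStream out = igfs.create(new IgfsPath("/blobs/test-1"), true)) {
        out.write(blob); // file create + data write + metadata update, per above
    }
    catch (IOException e) {
        throw new RuntimeException(e);
    }
    System.out.println("IGFS write: " + (System.nanoTime() - start) / 1_000_000 + " ms");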

Re: Cache Updates Order

2016-02-24 Thread vkulichenko
Hi Amit, As a workaround I can suggest the following: 1. In CacheStore.write, do not write to the database; instead, add the key-value pair to some collection that is maintained on a per-session basis. You can use CacheStoreSession.attachment() for this. 2. Implement your own
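A minimal sketch of step 1 (the class and field names are mine, and the flush in sessionEnd() is just one way the idea could be finished; it is not necessarily what step 2 had in mind):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import javax.cache.Cache;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.apache.ignite.cache.store.CacheStoreSession;
    import org.apache.ignite.resources.CacheStoreSessionResource;

    public class BufferingStore extends CacheStoreAdapter<Object, Object> {
        @CacheStoreSessionResource
        private CacheStoreSession ses;

        @Override public void write(Cache.Entry<? extends Object, ? extends Object> e) {
            // Do not touch the database here; buffer the pair in the session.
            Map<Object, Object> buf = ses.attachment();
            if (buf == null)
                ses.attach(buf = new LinkedHashMap<>()); // preserves update order
            buf.put(e.getKey(), e.getValue());
        }

        @Override public void sessionEnd(boolean commit) {
            Map<Object, Object> buf = ses.attachment();
            if (commit && buf != null) {
                // Write the whole buffered batch to the database, in order, here.
            }
            ses.attach(null); // reset for the next session
        }

        @Override public Object load(Object key) { return null; /* omitted */ }
        @Override public void delete(Object key) { /* omitted */ }
    }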

Re: Distributed queue problem with peerClassLoading enabled

2016-02-24 Thread Denis Magda
Hi Mateusz, Please see inline. On 2/17/2016 11:30 AM, mp wrote: Denis, Please see below for answers. Cheers, -Mateusz On Tue, Feb 16, 2016 at 10:18 PM, Denis Magda wrote: Hi Mateusz, I've revisited the whole discussion from the

RE: c++ native client

2016-02-24 Thread Dor Ben Dov
Thanks Pavel, I will check this out. Regards, Dor Ben Dov From: Pavel Tupitsyn [mailto:ptupit...@gridgain.com] Sent: Wednesday, 24 February 2016 15:49 To: user@ignite.apache.org Subject: Re: c++ native client Hi, We have C++ support, please see https://apacheignite-cpp.readme.io/ On Wed, Feb 24, 2016

Re: c++ native client

2016-02-24 Thread Pavel Tupitsyn
Hi, We have C++ support, please see https://apacheignite-cpp.readme.io/ On Wed, Feb 24, 2016 at 4:40 PM, Dor Ben Dov wrote: > Hi, is there any chance that you are working on, or have on your road map, a C++ client? Regards, Dor Ben Dov

c++ native client

2016-02-24 Thread Dor Ben Dov
Hi, Is there any chance that you are working on, or have on your road map, a C++ client? Regards, Dor Ben Dov

Re: Exception on Ignite cluster shutdown

2016-02-24 Thread Yakov Zhdanov
Looks like the same problem was reported here - http://apache-ignite-users.70518.x6.nabble.com/Exception-on-Ignite-cluster-shutdown-td3129.html. I started another thread on the dev list regarding exceptions in method signatures. --Yakov 2016-02-23 4:47 GMT+03:00 vkulichenko

Re: Exception on Ignite cluster shutdown

2016-02-24 Thread Yakov Zhdanov
Hi Steve! It seems you close the cache when your service stops. Since the service is being stopped because of the node shutdown, the cache close request cannot be processed and an exception is thrown. I think for now you should just catch this exception in your service close() method (see the sketch below). I will start discussion on
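Something along these lines, assuming the service holds a cache reference and closes it on stop (the cache field and the exception types are a guess based on the description, not from the thread):

    import javax.cache.CacheException;
    import org.apache.ignite.services.ServiceContext;

    // Inside the Service implementation:
    @Override public void cancel(ServiceContext ctx) {
        try {
            cache.close(); // fails if the node itself is already shutting down
        }
        catch (IllegalStateException | CacheException e) {
            // Node is stopping, so the cache is going away anyway; ignore.
        }
    }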