Re: Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-18 Thread pragmaticbigdata
Please see my comments below

> Currently you have to make a copy of BinaryObject for each cache operation
> because it's not immutable and internally caches some information for
> performance reasons.

Isn't the BinaryObject not bound to any cache?
Also note that adding the same binary object to two caches works if the
synchronization mode of the replicated cache is PRIMARY_SYNC rather than
FULL_SYNC. Why would that work?

> Do you have a real case then you need to put a lot of binary object keys
> to multiple caches? 

I was trying to simulate a workaround for IGNITE-1897
(https://issues.apache.org/jira/browse/IGNITE-1897), where I maintain a
replicated cache for all the entries that are added/updated in a
transaction. Hence I am adding the same key/value pair to two caches.

> BTW, if you are using BinaryObject key with only single standard java type
> it's simpler to just use the type as a cache key.

No, I do have multiple fields as part of the BinaryObject. It's just that, for
reproducing the issue, I added only one field.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Adding-a-binary-object-to-two-caches-fails-with-FULL-SYNC-write-mode-configured-for-the-replicated-ce-tp6343p6366.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re: argument type mismatch of oracle TIMESTAMP field when call loadCache

2016-07-18 Thread Vasiliy Sisko
Hello Bob,

We fixed the import of Oracle TIMESTAMP in this issue:
https://issues.apache.org/jira/browse/IGNITE-3394

The fix is available in the master branch.
It will also be included in the next nightly build, which you can take from here:
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/argument-type-mismatch-of-oracle-TIMESTAMP-field-when-call-loadCache-tp5935p6365.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is there a way to get keyset except SQL query(select _key from ...)?

2016-07-18 Thread vkulichenko
Hi,

Right now there is no other way except iterating through the entries and getting
the keys from them, but in this case you will fetch the values as well. Later we
will add an optional transformer to the SCAN query [1], which will enable this
functionality.
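
For reference, a minimal sketch of the entry-iteration approach described above
(the cache name and key/value types are placeholders, and note that the values
are transferred as well):

// Assumes javax.cache.Cache.Entry and java.util.* imports.
IgniteCache<String, String> cache = ignite.cache("myCache");
Set<String> keys = new HashSet<>();
for (Cache.Entry<String, String> e : cache) { // iterating also fetches the values
    keys.add(e.getKey());
}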

What is the reason for not using SQL query?

[1] https://issues.apache.org/jira/browse/IGNITE-2546

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-there-a-way-to-get-keyset-except-SQL-query-select-key-from-tp6363p6364.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Is there a way to get keyset except SQL query(select _key from ...)?

2016-07-18 Thread bluehu
I want to scan only the keys from an Ignite cache (not the values), to reduce
traffic and improve performance, along these lines:

Iterator it = cache.keySetIterator(); // desired API -- does not exist today
while (it.hasNext()) {
    ...
}

But I do not want to use an SQL query (select _key from ...) for some reason.
How can I achieve this with minimal modification? Do you have any
suggestions?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-there-a-way-to-get-keyset-except-SQL-query-select-key-from-tp6363.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How about adding kryo or protostuff as an optional marshaller?

2016-07-18 Thread Lin
Hi Val,


I posted the code on GitHub at https://github.com/jackeylu/marshaller-cmp, so you
can run it and compare the two.


I would be glad if you could help me choose the right serializer. I am not sure
whether my test cases are fair or not.


From my tests I found that:
1. In most cases involving primitive types or jdk.* types, protostuff does not
work better than the Ignite binary marshaller, but I think this doesn't matter in
the real world.
2. For user-defined objects, protostuff saves on average 40% of the space compared
to the Ignite binary marshaller. Here the custom objects are
MEDIA_CONTENT_1 and MEDIA_CONTENT_2, which are taken from
https://github.com/eishay/jvm-serializers/blob/master/tpc/data/media.1.cks and
https://github.com/eishay/jvm-serializers/blob/master/tpc/data/media.2.cks






-- Original --
From: "valentin.kulichenko"
Date: Tue, Jul 19, 2016 06:01 AM
To: "user"

Subject: Re: How about adding kryo or protostuff as an optional marshaller?



Hi Lin,

Do you have a GitHub project that I can run to compare these two
marshallers? From these snippets it's not very clear what is actually being
serialized.

Generally, Ignite's binary format does add minimal overhead, mainly
to allow field lookups without deserialization, which is crucial for SQL
queries, for example. However, even with this overhead, there is not much
difference in the numbers. I believe that in most real use cases the difference
will be negligible.

However, you can always try to introduce a custom serialization protocol.
Simply implement the Marshaller interface and provide the implementation in
IgniteConfiguration.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-about-adding-kryo-or-protostuff-as-an-optional-marshaller-tp6309p6361.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: How about adding kryo or protostuff as an optional marshaller?

2016-07-18 Thread vkulichenko
Hi Lin,

Do you have a GitHub project that I can run to compare these two
marshallers? From these snippets it's not very clear what is actually being
serialized.

Generally, Ignite's binary format does add minimal overhead, mainly
to allow field lookups without deserialization, which is crucial for SQL
queries, for example. However, even with this overhead, there is not much
difference in the numbers. I believe that in most real use cases the difference
will be negligible.

However, you can always try to introduce a custom serialization protocol.
Simply implement the Marshaller interface and provide the implementation in
IgniteConfiguration.
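
For example, a minimal sketch of plugging in a custom marshaller
(MyProtostuffMarshaller is a hypothetical class implementing
org.apache.ignite.marshaller.Marshaller):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setMarshaller(new MyProtostuffMarshaller()); // hypothetical Marshaller implementation
Ignite ignite = Ignition.start(cfg);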

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-about-adding-kryo-or-protostuff-as-an-optional-marshaller-tp6309p6361.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Off-heap issue

2016-07-18 Thread Peter Schmitt
Hi Val,

it's like:

public class LoaderThread extends Thread {
    private static final Logger LOG = createProprietaryLogger();

    private Ignite ignite;
    private IgniteCache cache;

    public LoaderThread(Ignite ignite) {
        this.ignite = ignite;
        this.cache = ignite.getOrCreateCache("userCache");
    }

    @Override
    public void run() {
        // WILL cause the issue:  LOG.info("Import data into Ignite");
        // WON'T cause the issue: createProprietaryLogger().info("Import data into Ignite");

        Map dataToCache = loadDataToCache();

        try (IgniteDataStreamer stream = ignite.dataStreamer(cache.getName())) {
            stream.allowOverwrite(false);
            stream.addData(dataToCache); // issue during this call -- happens just once
        }

        dataToCache = loadDataToCache();

        try (IgniteDataStreamer stream = ignite.dataStreamer(cache.getName())) {
            stream.allowOverwrite(false);
            stream.addData(dataToCache); // NO issue during this call
        }
    }

    private Map loadDataToCache() {
        // ...
    }
    // ...
}

User is not Serializable and there is no logger in the User class.
The loaded test-data is always the same.
The source of the logger is pretty complex and I'm not allowed to share it,
but what I can say is that it replaces JUL.
However, the first line in the run method leads either to a broken run (with
LOG.info) or to a working run (with createProprietaryLogger().info).
I couldn't believe my eyes, however, I can reproduce it over and over again
just by changing this one line.

Kind regards,
Peter



2016-07-18 22:50 GMT+02:00 vkulichenko :

> Hi Peter,
>
> I don't see your code, so can hardly suggest anything else. According to
> your description, I'm pretty sure the logger instance is being serialized
> as
> a part of key or value object, but I have no idea why. I would start with
> running the app in debug and checking if my assumption is right or not.
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Off-heap-issue-tp6327p6356.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Update performance

2016-07-18 Thread vkulichenko
BTW, here is the ticket for DML that you can follow:
https://issues.apache.org/jira/browse/IGNITE-2294

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Update-performance-tp6214p6359.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Update performance

2016-07-18 Thread vkulichenko
Hi Ionut,

Yes, SQL updates are on the roadmap.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Update-performance-tp6214p6358.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: mybatis-ignite

2016-07-18 Thread vkulichenko
Hi,

First of all, please properly subscribe to the mailing list so that the
community can receive email notifications for your messages. Here are the
instructions:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1

As for the issue, it looks like the IgniteCacheAdapter provided by MyBatis always
starts the Ignite instance, which is not really good. But in your case you can
simply remove Ignite's ServletContextListenerStartup and let
IgniteCacheAdapter do its job.

Let me know if this helps.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/mybatis-ignite-tp6332p6357.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Off-heap issue

2016-07-18 Thread vkulichenko
Hi Peter,

I don't see your code, so I can hardly suggest anything else. According to
your description, I'm pretty sure the logger instance is being serialized as
a part of the key or value object, but I have no idea why. I would start by
running the app in debug mode and checking whether my assumption is right.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Off-heap-issue-tp6327p6356.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread vkulichenko
Hi,

The near cache is always consistent with the server cache, i.e., if you update
the near cache, the update is propagated to the server nodes. Also, server
nodes know about all near caches, so even if there are two clients with near
caches, an update on one of them will trigger the update on the second one as
well.
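
For illustration, a minimal sketch of a client-side near cache over the cache from
this thread (the key type and the removed key are placeholders); removals and
updates made through it go to the server nodes, not only to the local near cache:

NearCacheConfiguration<String, String> nearCfg = new NearCacheConfiguration<>();
IgniteCache<String, String> cache =
    ignite.createNearCache("sandbox.sensorsToSessions", nearCfg);
cache.remove("50397783"); // propagated to the server cache and other near caches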

In your particular case, I would expect removeAll() to remove the provided
keys from all caches and from the persistence store, so the behavior you're
describing is weird. A reproducible example would be really useful.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6355.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Any plans to enhance support for subselects on partitioned caches?

2016-07-18 Thread Dmitriy Setrakyan
On Mon, Jul 18, 2016 at 10:44 PM, Sergi Vladykin 
wrote:

> Subquery in FROM clause should work with distributed joins enabled.
> Subquery expressions (in SELECT, WHERE, etc...) must always be collocated.
>

Thanks, Sergi! This definitely helps. Is it going to be possible to support
non-collocated joins in subqueries in future releases? What are the
challenges there?

>
>
> Sergi
>
> On Mon, Jul 18, 2016 at 7:09 PM, Cristi C  wrote:
>
>> Thanks for your reply, Alexei.
>>
>> So, considering users will be able to use the distributed join workaround,
>> you're not planning on making any enhancements regarding the distributed
>> subselect in the near future, correct?
>>
>> Thanks,
>>Cristi
>>
>>
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Any-plans-to-enhance-support-for-subselects-on-partitioned-caches-tp6344p6350.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Any plans to enhance support for subselects on partitioned caches?

2016-07-18 Thread Sergi Vladykin
Subquery in FROM clause should work with distributed joins enabled.
Subquery expressions (in SELECT, WHERE, etc...) must always be collocated.
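
For illustration, a rough sketch of a FROM-clause subquery (the table and column
names are made up, and setDistributedJoins(true) is assumed to be the flag exposed
by the distributed-joins support):

SqlFieldsQuery qry = new SqlFieldsQuery(
    "select p.name " +
    "from Person p join (select id from Organization where active = true) o " +
    "  on p.orgId = o.id");
qry.setDistributedJoins(true); // assumed switch enabling non-collocated joins
List<List<?>> rows = cache.query(qry).getAll();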

Sergi

On Mon, Jul 18, 2016 at 7:09 PM, Cristi C  wrote:

> Thanks for your reply, Alexei.
>
> So, considering users will be able to use the distributed join workaround,
> you're not planning on making any enhancements regarding the distributed
> subselect in the near future, correct?
>
> Thanks,
>Cristi
>
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Any-plans-to-enhance-support-for-subselects-on-partitioned-caches-tp6344p6350.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-18 Thread Alexei Scherbakov
Hi,

Currently you have to make a copy of BinaryObject for each cache operation
because it's not immutable and internally caches some information for
performance reasons.
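
For example, a minimal sketch of that copy-per-operation workaround, assuming
BinaryObject.toBuilder() is used to rebuild the object before the second put:

BinaryObject key = keyBuilder.build();
BinaryObject value = valueBuilder.build();

cache1.put(key, value);
// Rebuild fresh copies so the internally cached state is not reused across caches:
cache2.put(key.toBuilder().build(), value.toBuilder().build());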

Do you have a real case where you need to put a lot of binary object keys into
multiple caches?

BTW, if your BinaryObject key contains only a single standard Java type,
it's simpler to just use that type as the cache key.

2016-07-18 16:07 GMT+03:00 pragmaticbigdata :

> I am using ignite version 1.6. In my use case I have two caches with the
> below configuration
>
> CacheConfiguration cfg1 = new
> CacheConfiguration<>("Cache 1");
> cfg1.setCacheMode(CacheMode.PARTITIONED);
> cfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
> IgniteCache cache1 =
> ignite.getOrCreateCache(cfg1).withKeepBinary();
>
> CacheConfiguration cfg2 = new
> CacheConfiguration<>("Cache 2");
> cfg2.setCacheMode(CacheMode.REPLICATED);
>
> cfg2.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> //using the default PRIMARY_SYNC write synchronization works fine
> IgniteCache cache2 =
> ignite.getOrCreateCache(cfg2);
>
>
> When adding a BinaryObject to the second cache, Ignite *fails when calling
> cache2.put()*. The code to add data to the cache is
>
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder("keyType")
> .setField("F1", "V1").hashCode("V1".hashCode());
>
> BinaryObjectBuilder valueBuilder =
> ignite.binary().builder("valueType")
> .setField("F2", "V2")
> .setField("F3", "V3");
>
> BinaryObject key = keyBuilder.build();
> BinaryObject value = valueBuilder.build();
>
> cache1.put(key, value);
> cache2.put(key, value);
>
> If FULL_SYNC write synchronization is turned off (default PRIMARY_SYNC),
> the
> write works fine. Also if a copy of the BinaryObject is made before adding
> to cache2, the put method succeeds. Can someone have a look and let me know
> what could be missing?
>
> The exception is as below.
>
> java.lang.AssertionError: Affinity partition is out of range [part=667,
> partitions=512]
> at
>
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignment.get(GridAffinityAssignment.java:149)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.nodes(GridDhtPartitionTopologyImpl.java:827)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapKey(GridNearAtomicUpdateFuture.java:1031)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapUpdate(GridNearAtomicUpdateFuture.java:867)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:689)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:544)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:202)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1007)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1005)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:703)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1005)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:475)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2506)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:452)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2180)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1165)
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Adding-a-binary-object-to-two-caches-fails-with-FULL-SYNC-write-mode-configured-for-the-replicated-ce-tp6343.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Can it really be because of the near cache? The near cache section of the Ignite
documentation doesn't say much and doesn't explain the correct usage/caveats of
the near cache: https://apacheignite.readme.io/docs/near-caches. Is there more I
can read somewhere?

If I remove the values from the near cache, are those changes propagated to
the server cache? Or will the next read of the value from the near cache
fetch the one from the server cache, which was not notified about the
removal?

How are operations on the near cache synced with the server cache?
Is there a configuration option to modify this behavior?

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6351.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Any plans to enhance support for subselects on partitioned caches?

2016-07-18 Thread Alexei Scherbakov
Hi Cristi,

There is work in progress on supporting distributed joins without requiring
data collocation, but it will work only for top-level joins.

This means you'll need to rewrite a query containing a subselect into the
identical query expressed with a join.
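
As an illustrative rewrite (the table and column names are made up):

// The subselect form:
//   select p.name from Person p
//   where p.orgId in (select o.id from Organization o where o.active = true)
// would need to become the equivalent top-level join:
SqlFieldsQuery joined = new SqlFieldsQuery(
    "select p.name from Person p join Organization o on p.orgId = o.id " +
    "where o.active = true");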



2016-07-18 17:01 GMT+03:00 Cristi C :

> Hello,
>
> I see that subselects on partitioned caches don't return the correct
> results
> in the current version (the subselect is ran only on the partition on the
> current node)[1]. Are there any plans to enhance this support in the near
> future?
>
> Thanks,
>Cristi
>
> [1]
>
> http://apache-ignite-users.70518.x6.nabble.com/Does-Ignite-support-nested-SQL-Queries-td1714.html#d1445954868220-448
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Any-plans-to-enhance-support-for-subselects-on-partitioned-caches-tp6344.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Yes, Pavel, I will try to come up with something reproducible. Not sure whether
it is going to be minimal or just a complete app setup (which is a chunk of work,
but I will look into it).

Any ideas/theories on what I should look at in the meanwhile?

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6348.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread Pavel Tupitsyn
Can you please attach a minimal code sample that reproduces the issue,
including config file and cache store code?

On Mon, Jul 18, 2016 at 10:32 AM, zshamrock 
wrote:

> /Hi, you have readThrough enabled and a cache store defined, which is
> causing
> this behavior:
> when readThrough is enabled, each cache.get() causes cacheStore.load call./
>
> But, only if the value is not in the cache (i.e. no entry for the key in
> the
> cache). Also, please, see my other messages above.
>
> Thank you.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6340.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Store-by-reference

2016-07-18 Thread Alexei Scherbakov
Hi Andi,

Ignite doesn't support storing values by reference without serializing
them first, because of its distributed nature.

So setting the storeByValue property has no effect.

To avoid deserialization on every read, you should use:

CacheConfiguration.setCopyOnRead(false)
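
For example, a minimal sketch (the cache name and value type are placeholders):

CacheConfiguration<String, MyImmutableValue> cfg =
    new CacheConfiguration<>("smallReadMostlyCache");
cfg.setCopyOnRead(false); // reads return the cached instance instead of a fresh copy
IgniteCache<String, MyImmutableValue> cache = ignite.getOrCreateCache(cfg);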

2016-07-18 11:20 GMT+03:00 AHof :

> Hi,
>
> is there any way in Ignite, to store values by reference instead of always
> serializing cached values directly on insertion?
>
> When setting CacheConfiguration.storeByValue(false) and creating a new
> cache, nothing happens. I looked a bit through the JCache documentation and
> it does not say what should happen when trying to activate the optional
> store-by-reference feature when it is not implemented by the current cache
> provider.
>
> We need store-by-reference mostly because of performance reasons. Quite
> small caches with immutable objects that are accessed very often. Always
> deserializing the values has a significant performance impact on the
> application.
>
> Kind Regards
>
> -Andi
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Store-by-reference-tp6341.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Any plans to enhance support for subselects on partitioned caches?

2016-07-18 Thread Cristi C
Hello,

I see that subselects on partitioned caches don't return the correct results
in the current version (the subselect is run only on the partition of the
current node) [1]. Are there any plans to enhance this support in the near
future?

Thanks,
   Cristi

[1]
http://apache-ignite-users.70518.x6.nabble.com/Does-Ignite-support-nested-SQL-Queries-td1714.html#d1445954868220-448



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Any-plans-to-enhance-support-for-subselects-on-partitioned-caches-tp6344.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-18 Thread pragmaticbigdata
I am using Ignite version 1.6. In my use case I have two caches with the
configuration below:

CacheConfiguration cfg1 = new CacheConfiguration<>("Cache 1");
cfg1.setCacheMode(CacheMode.PARTITIONED);
cfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

IgniteCache cache1 = ignite.getOrCreateCache(cfg1).withKeepBinary();

CacheConfiguration cfg2 = new CacheConfiguration<>("Cache 2");
cfg2.setCacheMode(CacheMode.REPLICATED);
cfg2.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
// using the default PRIMARY_SYNC write synchronization works fine
IgniteCache cache2 = ignite.getOrCreateCache(cfg2);


When adding a BinaryObject to the second cache, Ignite *fails when calling
cache2.put()*. The code to add data to the cache is 

BinaryObjectBuilder keyBuilder = ignite.binary().builder("keyType")
    .setField("F1", "V1").hashCode("V1".hashCode());

BinaryObjectBuilder valueBuilder = ignite.binary().builder("valueType")
    .setField("F2", "V2")
    .setField("F3", "V3");

BinaryObject key = keyBuilder.build();
BinaryObject value = valueBuilder.build();

cache1.put(key, value);
cache2.put(key, value);

If FULL_SYNC write synchronization is turned off (default PRIMARY_SYNC), the
write works fine. Also if a copy of the BinaryObject is made before adding
to cache2, the put method succeeds. Can someone have a look and let me know
what could be missing?

The exception is as below.

java.lang.AssertionError: Affinity partition is out of range [part=667, partitions=512]
    at org.apache.ignite.internal.processors.affinity.GridAffinityAssignment.get(GridAffinityAssignment.java:149)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.nodes(GridDhtPartitionTopologyImpl.java:827)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapKey(GridNearAtomicUpdateFuture.java:1031)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapUpdate(GridNearAtomicUpdateFuture.java:867)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:689)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:544)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:202)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1007)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1005)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:703)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1005)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:475)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2506)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:452)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2180)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1165)





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Adding-a-binary-object-to-two-caches-fails-with-FULL-SYNC-write-mode-configured-for-the-replicated-ce-tp6343.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Store-by-reference

2016-07-18 Thread AHof
Hi,

Is there any way in Ignite to store values by reference instead of always
serializing cached values directly on insertion?

When setting CacheConfiguration.storeByValue(false) and creating a new
cache, nothing happens. I looked a bit through the JCache documentation and
it does not say what should happen when trying to activate the optional
store-by-reference feature when it is not implemented by the current cache
provider.

We need store-by-reference mostly for performance reasons: quite
small caches with immutable objects that are accessed very often. Always
deserializing the values has a significant performance impact on the
application.

Kind Regards

-Andi



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Store-by-reference-tp6341.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Another question: how can I configure the logger used from the cache store?
I use Logback, so

private static final Logger logger = LoggerFactory.getLogger(SensorsToSessionsCacheStore.class);

and I make a debug call:

@Override
public final V load(final K key) throws CacheLoaderException {
    if (logger.isDebugEnabled()) {
        logger.debug(String.format("Loading %s for %s",
            this.getClass().getSimpleName(), key));
    }
    final V value = doLoad(key);
    if (logger.isDebugEnabled()) {
        logger.debug(String.format("Loaded value %s for %s in %s",
            value, key, this.getClass().getSimpleName()));
    }
    return value;
}

And I run Ignite with -v (so not in quiet mode), but I don't see any debug
statements related to the store in the log. Do I need to (and can I?)
configure it somehow to show debug logs for my cache store classes (which
are packaged in a jar placed in the ignite/libs directory)?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6339.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
So, we have a REST endpoint which we can call with a session id to stop a
session, which as a result clears the cache. Visor shows that it works
(the cache is empty now).

So, regarding the cache store (and read-through), my expectations were:
- because the cache is empty now,
- the next call to get() will read the value from the database,
- and it will be the latest value I have there.

But what I see is that `get(sensorId)` still returns the previous session id (the
one created before the current one), which, as I've mentioned before, is
no longer even in the database (as we overwrite the session id association
with the sensor id every time we start a new session).

Could it be because of the near cache?

Also, if it helps, Ignite is running as a standalone process and the application
is running from Docker (with network mode = host, so sharing the network with
the host system), all on AWS EC2.

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6338.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Thank you, Pavel. 

This is fine. But the problem is that in the database I have the latest
values, the ones I see in Visor, yet the cache still returns the old value (which
is no longer in the database). How can that be?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6337.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread Pavel Tupitsyn
Hi, you have readThrough enabled and a cache store defined, which is
causing this behavior:
when readThrough is enabled, each cache.get() causes a cacheStore.load() call.
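
For reference, a minimal sketch of that wiring (assuming the javax.cache
FactoryBuilder and the SensorsToSessionsCacheStore class from your setup):

CacheConfiguration<String, String> cfg =
    new CacheConfiguration<>("sandbox.sensorsToSessions");
cfg.setReadThrough(true);
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(SensorsToSessionsCacheStore.class));
// With this in place, cache.get(key) delegates to SensorsToSessionsCacheStore.load(key)
// whenever the key is not already in the cache.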

Pavel.

On Sun, Jul 17, 2016 at 6:03 PM, zshamrock 
wrote:

> I have an interesting problem.
>
> I have 1 cache sensorsToSessions, mapping String to String, i.e. sensor id
> -> session id.
>
> When session is started I overwrite whatever is in the cache by the sensor
> ids used in the current session, i.e.:
>
> /sensorsToSessionsCache.putAll(sensorsToSessions);/
>
> Also when the session is stopped I remove the items from the cache, i.e.:
>
> /sensorsToSessionsCache.removeAll(sensorsIds);/
>
> Connecting to the Ignite using Visor shows that it works (/cache -scan
> -c=@c4/), i.e.
>
> - after putting the items it prints:
>
> visor> cache -scan -c=@c4
> Entries in cache: sandbox.sensorsToSessions
>
> +=======================================================================================+
> | Key Class        | Key      | Value Class      | Value                                |
> +=======================================================================================+
> | java.lang.String | 50397794 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397793 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397783 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397776 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397846 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397828 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397817 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397812 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397811 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> | java.lang.String | 50397801 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
> +---------------------------------------------------------------------------------------+
>
> - after removing the items it prints:
>
> visor> cache -scan -c=@c4
> Cache: sandbox.sensorsToSessions is empty
>
> Although, this is where the issue is, doing /final String sessionId =
> sensorsToSessionsCache.get(sensorId);/ always returns the previous session
> id, i.ie from the log:
>
> /Found sessionId df6f12a0-4c28-11e6-a169-667afaa8cd5d for sensorId
> 50397783/
>
> So, no matter whether I overwrite the items, or remove them completely, and
> even so, Visor proves it works, getting the item from the cache in the code
> always returns previous value.
>
> How it could be? Have I misconfigured the Ignite cluster? I am using
> near-cache, btw, for the sensorsToSessionsCache.
>
> There are 1 server node, and 2 clients nodes in the topology.
>
> Also, if it helps, I have never seen the cache store was triggered for this
> cache (at least I don't see anything in the log).
>
> This is the cache config I am using:
>
>  p:name="sandbox.sensorsToSessions"
> p:backups="0"
> p:statisticsEnabled="true"
> p:readThrough="true">
> 
>  factory-method="factoryOf">
> 
>  c:timeUnit="MINUTES"
> c:durationAmount="120"/>
> 
> 
> 
> 
> 
> c:clazz="com.application.grid.cache.store.SensorsToSessionsCacheStore"/>
> 
> 
>
> And this is how I @Inject/instantiate the cache instance into the
> application components (so, essentially it creates the near cache for the
> same server cache for the client nodes):
>
> @Bean
> @Qualifier("sensorsToSessionsCache")
> public IgniteCache sensorsToSessionsCache() {
> final NearCacheConfiguration nearCacheConfiguration
> =
> new NearCacheConfiguration()
> .setNearEvictionPolicy(
> new
>
> LruEvictionPolicy<>(cacheEnvironment.sensorsToSessionsNearCacheEvictionSize()))
>
> .setNearStartSize(cacheEnvironment.sensorsToSessionsNearCacheStartSize());
> return
> ignite().createNearCache(cacheEnvironment.sensorsToSessionsCacheName(),
> nearCacheConfiguration);
> }
>
> I am running Ignite 1.6.0 on Linux.
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Update performance

2016-07-18 Thread ionut_s
Val,

Thank you for your input. I don't understand what the significant overhead
would be compared with what the "invoke" method is already doing, but I'm looking
forward to seeing how the "update" operation will be implemented in Ignite. I saw
that there is such an item on the roadmap.
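
For context, a minimal sketch of the invoke-based update being discussed (the
Person value type and its field are placeholders):

CacheEntryProcessor<String, Person, Void> proc = (entry, args) -> {
    Person p = entry.getValue();
    p.setSalary(p.getSalary() * 1.1);
    entry.setValue(p); // write the modified value back into the cache
    return null;
};
cache.invoke("personKey", proc);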

Thanks,
Ionut



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Update-performance-tp6214p6335.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.