Re: Call for presentations for ApacheCon North America 2020 now open

2020-08-04 Thread Prasad Bhalerao
Hi Saikat,
Can you please share the slides for both presentations, streaming as well
as Ignite internals?
Thanks,
Prasad

On Wed 5 Aug, 2020, 7:10 AM Saikat Maitra wrote:

> Hi,
>
> I learned that my proposal for a talk about Data Streaming using Apache Ignite
> and Apache Flink has been accepted.
>
> I have not received the schedule yet. I will share it as soon as I have
> it.
>
> Regards,
> Saikat
>
> On Wed, Jan 22, 2020 at 8:09 PM Saikat Maitra 
> wrote:
>
>> Hi Denis,
>>
>> Thank you for your email. I have submitted a talk about Data Streaming
>> using Apache Ignite and Apache Flink.
>>
>> I would also like to co-present on an Ignite internals and use-cases deep
>> dive.
>>
>> Regards,
>> Saikat
>>
>> On Wed, Jan 22, 2020 at 6:22 PM Denis Magda  wrote:
>>
>>> Igniters,
>>>
>>> I was submitting an Ignite talk to the conference and found out that
>>> this time ASF created a separate category for Ignite-specific proposals.
>>> You can see an abstract submitted by me:
>>> https://drive.google.com/file/d/1woaEOWaIFxN8UIJ7nvFbYoc53mYsUsSN/view?usp=sharing
>>>
>>> Who else is ready to submit? It can be a deep-dive about Ignite
>>> internals or Ignite use case overview. We can co-present. Also, GridGain is
>>> ready to sponsor your trip if the talk is accepted.
>>>
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Tue, Jan 21, 2020 at 5:22 PM Denis Magda  wrote:
>>>
 Ignite folks, just bringing this to your attention and encouraging anybody
 interested to submit an Ignite-related session. Both Alex Zinoviev and I
 presented at ApacheCons the previous year -- the events are worth attending
 and speaking at.

 -
 Denis


 -- Forwarded message -
 From: Rich Bowen 
 Date: Tue, Jan 21, 2020 at 7:06 AM
 Subject: Call for presentations for ApacheCon North America 2020 now
 open
 To: plann...@apachecon.com 


 Dear Apache enthusiast,

 (You’re receiving this message because you are subscribed to one or
 more
 project mailing lists at the Apache Software Foundation.)

 The call for presentations for ApacheCon North America 2020 is now open
 at https://apachecon.com/acna2020/cfp

 ApacheCon will be held at the Sheraton, New Orleans, September 28th
 through October 2nd, 2020.

 As in past years, ApacheCon will feature tracks focusing on the various
 technologies within the Apache ecosystem, and so the call for
 presentations will ask you to select one of those tracks, or “General”
 if the content falls outside of one of our already-organized tracks.
 These tracks are:

 Karaf
 Internet of Things
 Fineract
 Community
 Content Delivery
 Solr/Lucene (Search)
 Gobblin/Big Data Integration
 Ignite
 Observability
 Cloudstack
 Geospatial
 Graph
 Camel/Integration
 Flagon
 Tomcat
 Cassandra
 Groovy
 Web/httpd
 General/Other

 The CFP will close Friday, May 1, 2020 8:00 AM (America/New_York time).

 Submit early, submit often, at https://apachecon.com/acna2020/cfp

 Rich, for the ApacheCon Planners

>>>


Re: CountDownLatch issue in Ignite 2.6 version

2020-06-08 Thread Prasad Bhalerao
I just checked the Ignite doc for atomic configuration.
But it doesn't say that it is applicable to distributed data structures.

Is it really applicable to distributed data structures like a countdown latch?
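For reference, a minimal sketch (the backup count and latch name are assumptions)
of the IgniteConfiguration.setAtomicConfiguration approach that Evgenii suggests
below:

  import org.apache.ignite.Ignite;
  import org.apache.ignite.Ignition;
  import org.apache.ignite.configuration.AtomicConfiguration;
  import org.apache.ignite.configuration.IgniteConfiguration;

  public class AtomicBackupsExample {
      public static void main(String[] args) {
          // Raise the backup count of the internal cache that backs atomic data
          // structures (countdown latches, sequences, ...) so the latch state can
          // survive the loss of several nodes. The value 3 is an assumption.
          AtomicConfiguration atomicCfg = new AtomicConfiguration().setBackups(3);

          IgniteConfiguration cfg = new IgniteConfiguration()
              .setAtomicConfiguration(atomicCfg);

          try (Ignite ignite = Ignition.start(cfg)) {
              // The latch below is stored in the atomics cache configured above.
              ignite.countDownLatch("cacheLoadLatch", 4, false, true).countDown();
          }
      }
  }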

On Tue 9 Jun, 2020, 7:26 AM Prasad Bhalerao wrote:

> Hi,
> I was under the impression that the countdown latch is implemented in a
> replicated cache, so when any number of nodes go down it does not lose
> its state.
>
> Can you please explain why atomic data structures use only 1 backup when
> their state is very important?
>
> Can we force atomic data structures to use a replicated cache?
>
> Which cache does Ignite use to store atomic data structures?
>
> Thanks
> Prasad
>
> On Mon 8 Jun, 2020, 11:58 PM Evgenii Zhuravlev  wrote:
>
>> Hi,
>>
>> By default, the cache that stores all atomic structures has only 1 backup,
>> so after losing all data for this particular latch, it is recreated. To
>> change the default atomic configuration use
>> IgniteConfiguration.setAtomicConfiguration.
>>
>> Evgenii
>>
>> Sat, 6 Jun 2020 at 06:20, Akash Shinde :
>>
>>> *Issue:* The countdown latch gets reinitialized to its original value (4) when
>>> one or more (but not all) nodes go down. *(Partition loss happened)*
>>>
>>> We are using Ignite's distributed countdown latch to make sure that cache
>>> loading is completed on all server nodes. We do this to make sure that our
>>> Kafka consumers start only after cache loading is complete on all server
>>> nodes. This is the basic criterion which needs to be fulfilled before actual
>>> processing starts.
>>>
>>>
>>>  We have 4 server nodes and the countdown latch is initialized to 4. We use
>>> the cache.loadCache method to start the cache loading. When each server
>>> completes cache loading it reduces the count by 1 using the countDown method.
>>> So when all the nodes complete cache loading, the count reaches zero.
>>> When this count reaches zero we start Kafka consumers on all server
>>> nodes.
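
A minimal sketch of the coordination pattern described above (latch/cache names
and the consumer-start hook are assumptions):

  import org.apache.ignite.Ignite;
  import org.apache.ignite.IgniteCountDownLatch;

  public class CacheLoadCoordinator {
      // Runs on every server node after startup.
      static void loadAndWait(Ignite ignite) {
          IgniteCountDownLatch latch = ignite.countDownLatch("cacheLoadLatch", 4, false, true);

          ignite.cache("NETWORK_CACHE").loadCache(null); // start cache loading, as described above
          latch.countDown();                             // this node has finished loading

          latch.await();                                 // block until all 4 nodes have counted down
          // startKafkaConsumers();  <-- assumed application-specific hook
      }
  }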
>>>
>>>  But we saw weird behavior in the prod env. 3 of the server nodes were shut
>>> down at the same time, but 1 node was still alive. When this happened the
>>> countdown was reinitialized to its original value, i.e. 4. I am not able to
>>> reproduce this in the dev env.
>>>
>>>  Is this a bug, that when one or more (but not all) nodes go down the
>>> count reinitializes back to its original value?
>>>
>>> Thanks,
>>> Akash
>>>
>>


Re: Read through not working as expected in case of Replicated cache

2020-03-02 Thread Prasad Bhalerao
Hi Ivan,

Thank you for the clarification.

So the behavior is the same for REPLICATED as well as PARTITIONED caches.

1) Can we please have this behavior documented on the Ignite web page? This
will help users avoid confusion and design their caches effectively.

2) You said "You can check it using IgniteCache.localPeek method (ask if
more details how to do it are needed)". Can you please explain this in
detail?
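
For anyone following along, here is a minimal sketch (cache name and key type
are assumptions) of the IgniteCache.localPeek check Ivan refers to below;
localPeek never triggers read-through, so it shows what is physically present
on each node:

  import org.apache.ignite.Ignite;
  import org.apache.ignite.IgniteCache;
  import org.apache.ignite.cache.CachePeekMode;

  public class LocalPeekCheck {
      // Run on each server node, e.g. via ignite.compute().broadcast(...).
      static void peek(Ignite ignite, Long key) {
          IgniteCache<Long, Object> cache = ignite.cache("NETWORK_CACHE"); // assumed cache name

          Object primaryCopy = cache.localPeek(key, CachePeekMode.PRIMARY);
          Object backupCopy  = cache.localPeek(key, CachePeekMode.BACKUP);

          System.out.println("node=" + ignite.cluster().localNode().id()
              + " primary=" + primaryCopy + " backup=" + backupCopy);
      }
  }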


Regards,
Prasad

On Mon, Mar 2, 2020 at 2:45 PM Ivan Pavlukhin  wrote:

> Hi Prasad,
>
> AFAIK, when a value is read through it is not sent to backup nodes. You
> can check it using the IgniteCache.localPeek method (ask if more details
> on how to do it are needed).
>
> I usually think about a read-through cache for the following case. There
> is an underlying storage with the "real" data, and the cache is used to
> speed up access. Some kind of invalidation mechanism might be used, but it
> is assumed fine to read values from the cache which are not consistent
> with the backing storage at some point.
>
> Consequently, it seems there is no need to distribute values from an
> underlying storage over all replicas, because if a value is absent a
> reader will receive the actual value from the underlying storage.
>
> Best regards,
> Ivan Pavlukhin
>
> Mon, 2 Mar 2020 at 10:41, Prasad Bhalerao  >:
> >
> > Hi Ivan/Denis,
> >
> > Are you saying that when a value is loaded into the cache from an underlying
> > storage using the read-through approach, the value is loaded only on the
> > primary node and does not get replicated to its backup nodes?
> >
> > I am under the impression that when a value is loaded into a cache using the
> > read-through approach, this key/value pair gets replicated on all backup
> > nodes as well, irrespective of REPLICATED or PARTITIONED cache.
> > Please correct me if I am wrong.
> >
> > I think the key/value must get replicated on all backup nodes when it is
> > read through the underlying storage, otherwise the user will have to add the
> > same key/value explicitly using the cache.put(key, value) operation so that
> > it gets replicated on all of its backup nodes. This is what I am doing right
> > now as a workaround to solve this issue.
> >
> > I will try to explain my use case again.
> >
> > I have a few replicated caches for which read-through is enabled but
> > write-through is disabled. The underlying tables for these caches are
> > updated by different systems. Whenever these tables are updated by a 3rd
> > party system I want to reload the "cache entries".
> >
> > I achieve this using the steps given below:
> > 1) The 3rd party system sends an update message (which contains the key) to
> > our service by invoking our REST API.
> > 2) Delete the entry from the cache using the cache().remove(key) method. (The
> > entry is just removed from the cache but is present in the DB as
> > write-through is false.)
> > 3) Invoke the cache().get(key) method for the same key as in step 2 to reload
> > the entry.
> >
> > Thanks,
> > Prasad
> >
> > On Sat, Feb 29, 2020 at 4:49 AM Denis Magda  wrote:
> >
> > > Ivan, thanks for stepping in.
> > >
> > > Prasad, is Ivan's assumption correct that you query the data with SQL
> under
> > > the observed circumstances? My guess is that you were referring to the
> > > key-value APIs as long as the issue is gone when the write-through is
> > > enabled.
> > >
> > > -
> > > Denis
> > >
> > >
> > > On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin 
> > > wrote:
> > >
> > > > As I understand the thing here is in combination of read-through and
> > > > SQL. SQL queries do not read from underlying storage when
> read-through
> > > > is configured. And an observed result happens because query from a
> > > > client node over REPLICATED cache picks random server node (kind of
> > > > load-balancing) to retrieve data. Following happens in the described
> > > > case:
> > > > 1. Value is loaded to a cache from an underlying storage on a primary
> > > > node when cache.get is called.
> > > > 2. Query is executed multiple times and when the chosen node is the
> > > > primary node then the value is observed. On other nodes the value is
> > > > absent.
> > > >
> > > > Actually, behavior for PARTITIONED cache is similar, but an
> > >

Re: Read through not working as expected in case of Replicated cache

2020-03-01 Thread Prasad Bhalerao
Hi Ivan/Denis,

Are you saying that when a value is loaded into the cache from an underlying
storage using the read-through approach, the value is loaded only on the
primary node and does not get replicated to its backup nodes?

I am under the impression that when a value is loaded into a cache using the
read-through approach, this key/value pair gets replicated on all backup
nodes as well, irrespective of REPLICATED or PARTITIONED cache.
Please correct me if I am wrong.

I think the key/value must get replicated on all backup nodes when it is
read through the underlying storage, otherwise the user will have to add the
same key/value explicitly using the cache.put(key, value) operation so that
it gets replicated on all of its backup nodes. This is what I am doing right
now as a workaround to solve this issue.

I will try to explain my use case again.

I have a few replicated caches for which read-through is enabled but
write-through is disabled. The underlying tables for these caches are
updated by different systems. Whenever these tables are updated by a 3rd
party system I want to reload the "cache entries".

I achieve this using the steps given below:
1) The 3rd party system sends an update message (which contains the key) to
our service by invoking our REST API.
2) Delete the entry from the cache using the cache().remove(key) method. (The
entry is just removed from the cache but is present in the DB as write-through
is false.)
3) Invoke the cache().get(key) method for the same key as in step 2 to reload
the entry.
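
A minimal sketch of these steps plus the explicit put workaround mentioned
earlier (cache name and types are assumptions):

  import org.apache.ignite.Ignite;
  import org.apache.ignite.IgniteCache;

  public class CacheReloader {
      // Hypothetical reload flow: remove the stale entry, read it back through the
      // store, then re-put it so the fresh value reaches all backup/replica nodes.
      static void reload(Ignite ignite, Long key) {
          IgniteCache<Long, Object> cache = ignite.cache("NETWORK_CACHE"); // assumed cache name

          cache.remove(key);             // step 2: drop the stale entry (write-through is off, DB untouched)
          Object fresh = cache.get(key); // step 3: read-through loads the value from the DB
          if (fresh != null)
              cache.put(key, fresh);     // workaround: explicit put propagates the value to backups
      }
  }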

Thanks,
Prasad

On Sat, Feb 29, 2020 at 4:49 AM Denis Magda  wrote:

> Ivan, thanks for stepping in.
>
> Prasad, is Ivan's assumption correct that you query the data with SQL under
> the observed circumstances? My guess is that you were referring to the
> key-value APIs as long as the issue is gone when the write-through is
> enabled.
>
> -
> Denis
>
>
> On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin 
> wrote:
>
> > As I understand the thing here is in combination of read-through and
> > SQL. SQL queries do not read from underlying storage when read-through
> > is configured. And an observed result happens because query from a
> > client node over REPLICATED cache picks random server node (kind of
> > load-balancing) to retrieve data. Following happens in the described
> > case:
> > 1. Value is loaded to a cache from an underlying storage on a primary
> > node when cache.get is called.
> > 2. Query is executed multiple times and when the chosen node is the
> > primary node then the value is observed. On other nodes the value is
> > absent.
> >
> > Actually, behavior for PARTITIONED cache is similar, but an
> > inconsistency is not observed because SQL queries read data from the
> > primary node there. If the primary node leaves a cluster then an SQL
> > query will not see the value anymore. So, the same inconsistency will
> > appear.
> >
> > Best regards,
> > Ivan Pavlukhin
> >
> > Fri, 28 Feb 2020 at 13:23, Prasad Bhalerao <
> > prasadbhalerao1...@gmail.com>:
> > >
> > > Can someone please comment on this?
> > >
> > > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda  wrote:
> > >
> > > > Ignite Dev team,
> > > >
> > > > This sounds like an issue in our replicated cache implementation
> rather
> > > > than an expected behavior. Especially, if partitioned caches don't
> have
> > > > such a specificity.
> > > >
> > > > Who can explain why write-through needs to be enabled for replicated
> > caches
> > > > to reload an entry from an underlying database properly/consistently?
> > > >
> > > > -
> > > > Denis
> > > >
> > > >
> > > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
> > ilya.kasnach...@gmail.com
> > > > >
> > > > wrote:
> > > >
> > > > > Hello!
> > > > >
> > > > > I think this is by design. You may suggest edits on readme.io.
> > > > >
> > > > > Regards,
> > > > > --
> > > > > Ilya Kasnacheev
> > > > >
> > > > >
> > > > > Mon, 24 Feb 2020 at 17:28, Prasad Bhalerao <
> > > > > prasadbhalerao1...@gmail.com>:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> Is this a bug or the cache is designed to work this way?
> > > > >>
> > > > >> If it is as-designed, can this behavior be updated in ignite
> > > > >> documentation?
> > > > >>
> >

Re: Read through not working as expected in case of Replicated cache

2020-02-28 Thread Prasad Bhalerao
Can someone please comment on this?

On Wed, Feb 26, 2020 at 6:04 AM Denis Magda  wrote:

> Ignite Dev team,
>
> This sounds like an issue in our replicated cache implementation rather
> than an expected behavior. Especially, if partitioned caches don't have
> such a specificity.
>
> Who can explain why write-through needs to be enabled for replicated caches
> to reload an entry from an underlying database properly/consistently?
>
> -
> Denis
>
>
> On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev  >
> wrote:
>
> > Hello!
> >
> > I think this is by design. You may suggest edits on readme.io.
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Mon, 24 Feb 2020 at 17:28, Prasad Bhalerao <
> > prasadbhalerao1...@gmail.com>:
> >
> >> Hi,
> >>
> >> Is this a bug or the cache is designed to work this way?
> >>
> >> If it is as-designed, can this behavior be updated in ignite
> >> documentation?
> >>
> >> Thanks,
> >> Prasad
> >>
> >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> >> ilya.kasnach...@gmail.com> wrote:
> >>
> >>> Hello!
> >>>
> >>> I have discussed this with fellow Ignite developers, and they say read
> >>> through for replicated cache would work where there is either:
> >>>
> >>> - writeThrough enabled and all changes go through it.
> >>> - database contents do not change for already read keys.
> >>>
> >>> I can see that neither is met in your case, so you can expect the
> >>> behavior that you are seeing.
> >>>
> >>> Regards,
> >>> --
> >>> Ilya Kasnacheev
> >>>
> >>>
> >>> Tue, 29 Oct 2019 at 18:18, Akash Shinde :
> >>>
> >>>> I am using Ignite 2.6 version.
> >>>>
> >>>> I am starting 3 server nodes with a replicated cache and 1 client
> node.
> >>>> Cache configuration is as follows.
> >>>> Read-through is true but write-through is false. Load data by key is
> >>>> implemented as given below in cache-loader.
> >>>>
> >>>> Steps to reproduce issue:
> >>>> 1) Delete an entry from cache using IgniteCache.remove() method.
> (Entry
> >>>> is just removed from cache but present in DB as write-through is
> false)
> >>>> 2) Invoke IgniteCache.get() method for the same key in step 1.
> >>>> 3) Now query the cache from client node. Every invocation returns
> >>>> different results.
> >>>> Sometimes it returns the reloaded entry, sometimes the results
> >>>> without the reloaded entry.
> >>>>
> >>>> Looks like read-through is not replicating the reloaded entry on all
> >>>> nodes in case of REPLICATED cache.
> >>>>
> >>>> So to investigate further I changed the cache mode to PARTITIONED and
> >>>> set the backup count to 3 i.e. total number of nodes present in
> cluster (to
> >>>> mimic REPLICATED behavior).
> >>>> This time it worked as expected.
> >>>> Every invocation returned the same result with reloaded entry.
> >>>>
> >>>> private CacheConfiguration networkCacheCfg() {
> >>>>   CacheConfiguration networkCacheCfg = new
> >>>> CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> >>>>   networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> >>>>   networkCacheCfg.setWriteThrough(false);
> >>>>   networkCacheCfg.setReadThrough(true);
> >>>>   networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> >>>>   networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> >>>>   //networkCacheCfg.setBackups(3);
> >>>>   networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> >>>>   Factory storeFactory =
> >>>> FactoryBuilder.fa

Re: NodeOrder in GridCacheVersion

2020-02-28 Thread Prasad Bhalerao
Hi,

 * How do you ensure that there are no concurrent updates on the keys?
[Prasad]: The cache for which it is failing is a kind of bootstrap cache
which changes very rarely. I made sure that I was the only one working on
this system while debugging the issue.
The cache for which it is failing is a REPLICATED cache. Read-through is
enabled and write-through is disabled. Whenever I get an update message for
these caches from a different system, I update the entry in my cache using
the following steps:
1. First remove the entry from the cache using the cache.remove() method.
2. Read the entry from the cache using the cache().get method, which reads the
data from the Oracle DB using the read-through approach.

 * How many retry attempts did you run?
[Prasad] I retried the transaction 8-10 times.

 * Are your caches in-memory with eviction policy?
[Prasad] Yes, caches are in-memory but without an eviction policy. I am using
Oracle DB as third-party persistence. Ignite native persistence is
disabled.

 * Do you have TTL enabled for either of the caches?
[Prasad]: No, I have not set TTL.

 * Do you have a 3rd-party persistence and read-through and write-through
enabled for either of the caches?
[Prasad]: Yes, I have 3rd-party persistence. I have read-through caches, but
for all read-through caches write-through is disabled. The write-through is
disabled for some caches as I am not the owner of those tables. I also have
write-through caches, but for all such caches read-through is disabled. At this
moment I do not have any cache where both read-through and write-through
are enabled. I reload all my caches using cache loaders.

 * Can you check if the issue reproduces if you set
-DIGNITE_READ_LOAD_BALANCING=false system property?
[Prasad]: Sure, I will try to reproduce this using this parameter. But the
problem is that this happens intermittently.

As per the following code, serReadVer is the GridCacheVersion of the
transaction coordinator node, which it compares with the GridCacheVersion of
other nodes.
So as per your explanation, nodeOrder is a unique number assigned to each
node that joins the grid. So each node in the cluster will have a different
nodeOrder. If this is the case then "serReadVer.equals(ver)" will always
return false.
Please correct me if I am wrong. I am just trying to understand the code.
This will help me to identify the issue.






public boolean checkSerializableReadVersion(GridCacheVersion serReadVer)
    throws GridCacheEntryRemovedException {
    lockEntry();

    try {
        checkObsolete();

        if (!serReadVer.equals(ver)) {
            boolean empty = isStartVersion() || deletedUnlocked();

            if (serReadVer.equals(IgniteTxEntry.SER_READ_EMPTY_ENTRY_VER))
                return empty;
            else if (serReadVer.equals(IgniteTxEntry.SER_READ_NOT_EMPTY_VER))
                return !empty;

            return false;
        }

        return true;
    }
    finally {
        unlockEntry();
    }
}

Thanks,
Prasad



On Fri, Feb 28, 2020 at 2:24 PM Alexey Goncharuk 
wrote:

> Prasad,
>
>
>> Can you please answer following questions?
>> 1) The significance of the nodeOrder w.r.t Grid and cache?
>>
> Node order is a unique integer assigned to a node when the node joins the
> grid. The node order is included into GridCacheVersion to disambiguate
> versions generated on different nodes that happen to have the same local
> version order.
>
>> 2) When does it change?
>>
> Node order does not change during the node lifetime. If two versions have
> different node order, it means that the versions were generated on
> different nodes.
>
>> 3) How it is important w.r.t. transaction?
>>
> GridCacheVersion is used to detect concurrent read-write conflicts as I
> described in the previous message, as well as for data rebalance.
>
>> 4) Inside transaction I am reading and modifying Replicated as well as
>> Partitioned cache. What I observed is this fails for Replicated cache. As
>> workaround, I have moved the code which reads Replicated cache out of
>> transaction block. Is it allowed to read and modify both replicated and
>> Partitioned cache i.e. use both Replicated and Partitioned?
>>
> Yes, it is perfectly fine to update both replicated and partitioned caches
> inside one transaction.
>
> From the debug output that you provided we can infer that the versions of
> both entries have changed for both caches before the transaction prepare phase.
> I would back up Alexei here:
>  * How do you ensure that there are no concurrent updates on the keys?
>  * How many retry attempts did you run?
>  * Are your caches in-memory with eviction policy?
>  * Do you have TTL enabled for either of the caches?
>  * Do you have a 3rd-party persistence and read-through and write-through
> enabled for either of the caches?
>  * Can you check if the issue reproduces if you set
> -DIGNITE_READ_LOAD_BALANCING=false system property?
>
> --AG
>


Re: NodeOrder in GridCacheVersion

2020-02-27 Thread Prasad Bhalerao
Hi Alexey,

The key value is not getting changed concurrently, I am sure about it. The
cache for which I am getting the exception is a kind of bootstrap data and it
changes very rarely. I have added retry logic in my code and it failed
every time, giving the same error.

Every time it fails in GridDhtTxPrepareFuture.checkReadConflict ->
GridCacheEntryEx.checkSerializableReadVersion, and I think it fails
due to a change in the value of nodeOrder. This is what I observed while
debugging the method.
This happens intermittently.

I got following values while inspecting GridCacheVersion object on
different nodes.

Cache : Addons (Node 2)
serReadVer of entry read inside Transaction: GridCacheVersion
[topVer=194120123, order=4, nodeOrder=2]
version on node3: GridCacheVersion [topVer=194120123, order=4, nodeOrder=1]

Cache : Subscription  (Node 3)
serReadVer of entry read inside Transaction:  GridCacheVersion
[topVer=194120123, order=1, nodeOrder=2]
version on node2:  GridCacheVersion [topVer=194120123, order=1, nodeOrder
=10]

Can you please answer the following questions?
1) What is the significance of nodeOrder w.r.t. the grid and cache?
2) When does it change?
3) How is it important w.r.t. transactions?
4) Inside a transaction I am reading and modifying a Replicated as well as a
Partitioned cache. What I observed is that this fails for the Replicated cache.
As a workaround, I have moved the code which reads the Replicated cache out of
the transaction block. Is it allowed to read and modify both replicated and
partitioned caches, i.e. use both Replicated and Partitioned?

Thanks,
Prasad

On Thu, Feb 27, 2020 at 6:01 PM Alexey Goncharuk 
wrote:

> Prasad,
>
> Since optimistic transactions do not acquire key locks until the prepare
> phase, it is possible that the key value is concurrently changed before the
> prepare commences. An optimistic exception is thrown exactly in this case and
> suggests that the user should retry the transaction.
>
> Consider the following example:
> Thread 1: Start tx 1, Read (key1) -> val1
> Thread 2: Start tx 2, Read (key1) -> val1
>
> Thread 1: Write (key1, val2)
> Thread 1: Commit
>
> Thread 2: Write (key1, val3)
> Thread 2: Commit *Optimistic exception is thrown here since current value
> of key1 is not val1 anymore*
>
> When optimistic transactions are used, a user is expected to have retry
> logic. Alternatively, a pessimistic repeatable_read transaction can be used
> (one should remember that in pessimistic mode locks are acquired on first
> key access and released only on transaction commit).
>
> Hope this helps,
> --AG
>
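
For completeness, a minimal retry-loop sketch for the optimistic/serializable
pattern Alexey describes above (cache name, key and retry count are assumptions):

  import org.apache.ignite.Ignite;
  import org.apache.ignite.IgniteCache;
  import org.apache.ignite.transactions.Transaction;
  import org.apache.ignite.transactions.TransactionOptimisticException;

  import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
  import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

  public class OptimisticRetry {
      static void updateWithRetry(Ignite ignite, Long key) {
          IgniteCache<Long, String> cache = ignite.cache("SOME_CACHE"); // assumed cache name

          for (int attempt = 1; attempt <= 5; attempt++) {
              try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                  String val = cache.get(key);       // read version is validated at prepare time
                  cache.put(key, val + "-updated");
                  tx.commit();
                  return;                            // success
              }
              catch (TransactionOptimisticException e) {
                  // read/write conflict detected during prepare -- retry the whole transaction
              }
          }
          throw new IllegalStateException("Gave up after 5 optimistic retries");
      }
  }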


Fwd: NodeOrder in GridCacheVersion

2020-02-27 Thread Prasad Bhalerao
Hi Ilya, I didn't get what you are trying to say.

The problem I am facing is that my transaction is failing with a
TransactionOptimisticException. I do not have a reproducer project for this
and it does not happen frequently.
The transaction is failing during the prepare phase. I had to open a debug
port on all grid nodes to do remote debugging in order to debug this issue.
What I observed is that the transaction fails because the check in
GridCacheMapEntry.checkSerializableReadVersion fails, as the nodeOrder in the
GridCacheVersion in the serialized version is different from the actual
nodeOrder in the GridCacheVersion of the respective node. This method returns
false on 2 nodes out of 4, and this is happening for a Replicated cache.

This is the reason I asked what nodeOrder in GridCacheVersion is and why it
is important while checking read entries in a transaction context.

I tried to debug nodeOrder in the Ignite code but could not understand it.

Inside the transaction I am reading and modifying a Replicated as well as a
Partitioned cache. What I observed is that this fails for the Replicated cache.
As a workaround, I have moved the code which reads the Replicated cache out of
the transaction block.
Is it allowed to read and modify both replicated and partitioned caches, i.e.
use both Replicated and Partitioned?

Complete exception can be found here
<https://gist.github.com/61979329224e23dbaef2f63976a87a14.git>.

Thanks,
Prasad

On Thu, Feb 27, 2020 at 1:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I don't think this is a user-list discussion; this logging is not aimed at
> the end user and you are not supposed to act on it.
>
> Do you have any context for us, such as reproducer project or complete
> logs?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 26 Feb 2020 at 19:13, Prasad Bhalerao <
> prasadbhalerao1...@gmail.com>:
>
>> Can someone please advise?
>>
>> On Wed 26 Feb, 2020, 12:23 AM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com wrote:
>>
>>> Hi,
>>>
>>>> Ignite Version: 2.6
>>>> No of nodes: 4
>>>>
>>>> I am getting following exception while committing transaction.
>>>>
>>>> Although I am just reading the value from this cache inside the
>>>> transaction, and I am sure that the cache and the "cache entry" read are
>>>> not being modified outside this transaction on any other node.
>>>>
>>>> So I debugged the code and found out that it fails in the following code
>>>> on 2 nodes out of 4 nodes.
>>>>
>>>> GridDhtTxPrepareFuture#checkReadConflict -
>>>> GridCacheEntryEx#checkSerializableReadVersion
>>>>
>>>> The GridCacheVersion values failing the equals check are given below for 2
>>>> different caches. I can see that it is failing because of a change in the
>>>> nodeOrder of the cache.
>>>>
>>>> 1) Can someone please explain the significance of the nodeOrder w.r.t. the
>>>> grid and cache? When does it change?
>>>> 2) How to solve this problem?
>>>>
>>>> Cache : Addons (Node 2)
>>>> serReadVer of entry read inside Transaction: GridCacheVersion
>>>> [topVer=194120123, order=4, nodeOrder=2]
>>>> version on node3: GridCacheVersion [topVer=194120123, order=4,
>>>> nodeOrder=1]
>>>>
>>>> Cache : Subscription  (Node 3)
>>>> serReadVer of entry read inside Transaction:  GridCacheVersion
>>>> [topVer=194120123, order=1, nodeOrder=2]
>>>> version on node2:  GridCacheVersion [topVer=194120123, order=1,
>>>> nodeOrder=10]
>>>>
>>>>
>>>> *EXCEPTION:*
>>>>
>>>> Caused by:
>>>> org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException:
>>>> Failed to prepare transaction, read/write conflict
>>>>
>>>
>>>
>>>>
>>>> Thanks,
>>>> Prasad
>>>>
>>>


Re: Ignite 2.8 documentation

2020-02-25 Thread Prasad Bhalerao
Hi,

Can we have this behavior documented? This will help users design their
caches appropriately.

*For Replicated Cache:*

Reference mail thread:
http://apache-ignite-users.70518.x6.nabble.com/Read-through-not-working-as-expected-in-case-of-Replicated-cache-td29990.html

 Read-through for a replicated cache would work where there is either:
- writeThrough enabled and all changes go through it, or
- database contents that do not change for already-read keys.
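
For illustration, a minimal configuration sketch matching the first condition
above, i.e. write-through enabled so all changes go through the store (cache
name is an assumption; the CacheStore implementation is omitted):

  import org.apache.ignite.Ignite;
  import org.apache.ignite.cache.CacheMode;
  import org.apache.ignite.configuration.CacheConfiguration;

  public class ReplicatedReadThroughConfig {
      static void create(Ignite ignite) {
          CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("NETWORK_CACHE"); // assumed name

          cfg.setCacheMode(CacheMode.REPLICATED);
          cfg.setReadThrough(true);
          cfg.setWriteThrough(true); // condition 1: all changes go through the store
          // A CacheStore must also be registered via cfg.setCacheStoreFactory(...)
          // for read-/write-through to have any effect (store class omitted here).

          ignite.getOrCreateCache(cfg);
      }
  }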

Thanks,
Prasad

On Mon, Feb 24, 2020 at 7:31 PM Alexey Zinoviev 
wrote:

> Please, could you post in this thread a few examples of the documentation
> tickets in JIRA for the current release, to create them correctly?
>
> Mon, 24 Feb 2020 at 14:53, Alexey Zinoviev :
>
> > Ok, will make ticket, no problemo
> >
> > Sun, 23 Feb 2020, 23:28 Denis Magda :
> >
> >> Alex, thanks for helping with the documentation. Frankly, the tickets
> >> will be useful to get a complete list of all the updates pages with the
> >> goal of extracting info for blog post(s) - we'll be preparing at least
> one
> >> blog for Ignite 2.8 and can create an ML specific blog as well. Also,
> the
> >> tickets might simplify the review process between you and Artem.
> >>
> >> -
> >> Denis
> >>
> >>
> >> On Sat, Feb 22, 2020 at 2:18 AM Alexey Zinoviev  >
> >> wrote:
> >>
> >>> I've created draft pages on apache.readme.io and will continue my
> >>> work there during the next 2 weeks.
> >>> Should I create any tickets for that? Or could miss that step?
> >>>
> >>> Will notify in this thread when the work is done!
> >>>
> >>> Thu, 20 Feb 2020 at 12:16, Alexey Zinoviev  >:
> >>>
>  Yes, there are a lot of changes in ML from 2.7, I'm going to prepare
>  new documentation  and create documentation related tickets for the ML
>  component.
>  After some consultation and review from Artem side I'll add new
>  documentation on readme.io.
> 
> 
> 
 Thu, 20 Feb 2020 at 02:34, Denis Magda :
> 
> > Artem,
> >
> > Thanks for stepping in and preparing the list of top priority
> > documentation tasks! How about labeling those tickets somehow and
> creating
> > a filter similar to this one [1] but for "Required & Unresolved
> > Documentation Tasks"? I would simply add this as a new section to the
> > Ignite 2.8 release wiki page for ease of tracking and start working
> with
> > the guys contributed improvements directly. Will see the names of the
> > authors who need to be involved ;)
> >
> > *Alexey Zinoviev*, there are many ML related changes coming in the
> > release. Could you check existing ML docs and suggest any changes?
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Unresolveddocumentationtasks
> >
> > -
> > Denis
> >
> >
> > On Wed, Feb 19, 2020 at 11:14 AM Artem Budnikov <
> > a.budnikov.ign...@gmail.com> wrote:
> >
> >> Maxim,
> >>
> >> One note from my side, I think we can move disk page compression [1]
> >> > to the 2-nd priority, but definitely must document WAL page
> >> > compression first [2]
> >>
> >>
> >> OK, good to know.
> >>
> >> On Wed, Feb 19, 2020 at 6:48 PM Maxim Muzafarov 
> >> wrote:
> >>
> >> > Artem,
> >> >
> >> >
> >> > Thank you for starting this thread.
> >> > One note from my side, I think we can move disk page compression
> [1]
> >> > to the 2-nd priority, but definitely must document WAL page
> >> > compression first [2]
> >> >
> >> >
> >> > The list of important tasks [3].
> >> > The list of documentation tasks [4].
> >> >
> >> > [1] https://issues.apache.org/jira/browse/IGNITE-10330
> >> > [2] https://issues.apache.org/jira/browse/IGNITE-11336
> >> > [3]
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Themostimportantreleasetasks
> >> > [4]
> >> >
> >>
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Unresolveddocumentationtasks
> >> >
> >> > On Wed, 19 Feb 2020 at 18:15, Artem Budnikov
> >> >  wrote:
> >> > >
> >> > > Hi everyone,
> >> > >
> >> > > As the release of Ignite 2.8 is getting closer, let's discuss
> >> which
> >> > features should be documented. I created a list of features based
> >> on the
> >> > release notes and the documentation tickets in jira (see below).
> >> Much more
> >> > has been added, but these seemed to have first priority. It's not
> >> to say
> >> > that other features are not important, but given the limited
> >> resources a
> >> > list of high-priority task would help to schedule the time of
> those
> >> who
> >> > will help with the docs.
> >> > >
> >> > > Here is the list of features:
> >> > >
> >> > > Disk page compression
> >> 

Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Prasad Bhalerao
Hi,

We found 3 vulnerabilities while scanning the GridGain Web Console application.

We are using HTTP and not HTTPS due to some issues on our side. Although the
vulnerabilities are of lower severity, we thought of reporting them here.

1) HTTP TRACE / TRACK Methods Enabled. (CVE-2004-2320
<https://nvd.nist.gov/vuln/detail/CVE-2004-2320>, CVE-2010-0386
<https://nvd.nist.gov/vuln/detail/CVE-2010-0386>, CVE-2003-1567
<https://nvd.nist.gov/vuln/detail/CVE-2003-1567>)
2) Session Cookie Does Not Contain the "Secure" Attribute.
3) Web Server HTTP Trace/Track Method Support Cross-Site Tracing
Vulnerability. (CVE-2004-2320
<https://nvd.nist.gov/vuln/detail/CVE-2004-2320>, CVE-2007-3008
<https://nvd.nist.gov/vuln/detail/CVE-2007-3008>)

Can these be fixed?

Thanks,
Prasad


On Tue, Dec 10, 2019 at 4:39 PM Denis Magda  wrote:

> It's free software without limitations. Just download and use it.
>
> -
> Denis
>
>
> On Tue, Dec 10, 2019 at 1:21 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>> Hi,
>>
>> Can Apache Ignite users use it for free in their production environments?
>> What license does it fall under?
>>
>> Thanks,
>> Prasad
>>
>> On Fri, Oct 4, 2019 at 5:33 AM Denis Magda  wrote:
>>
>>> Igniters,
>>>
>>> There is good news. GridGain made its distribution of Web Console
>>> completely free. It goes with advanced monitoring and management
>>> dashboard
>>> and other handy screens. More details are here:
>>>
>>> https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite
>>>
>>> -
>>> Denis
>>>
>>


Re: GridGain Web Console is available free of charge for Apache Ignite

2019-12-10 Thread Prasad Bhalerao
Hi,

Can Apache Ignite users use it for free in their production environments?
What license does it fall under?

Thanks,
Prasad

On Fri, Oct 4, 2019 at 5:33 AM Denis Magda  wrote:

> Igniters,
>
> There is good news. GridGain made its distribution of Web Console
> completely free. It goes with advanced monitoring and management dashboard
> and other handy screens. More details are here:
>
> https://www.gridgain.com/resources/blog/gridgain-road-simplicity-new-docs-and-free-tools-apache-ignite
>
> -
> Denis
>


Native memory tracking of an Ignite Java 1.8 Application

2019-02-18 Thread Prasad Bhalerao
Hi,

I have set the off-heap size to 500 MB and the max heap size to 512 MB.

My process is taking around 1.7 GB on Windows 10 as per the task manager.
So I decided to track the memory distribution using jcmd to find out if
there are any memory leaks in non-heap space.

After pushing the data into the cache I took the native memory summary using
the jcmd tool.

I am trying to understand under which of the following sections the allocated
off-heap memory goes.

Does off-heap come under the "Internal" category? Can any Ignite memory
expert help me with this?

The Oracle documentation does not clearly talk about it. I am also attaching
the native memory detail file.
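
For context, a minimal sketch of one way the 500 MB off-heap region can be
configured (sizes taken from the figures above; the DataStorageConfiguration
API and the JVM flags -Xmx512m -XX:NativeMemoryTracking=summary are assumptions):

  import org.apache.ignite.configuration.DataRegionConfiguration;
  import org.apache.ignite.configuration.DataStorageConfiguration;
  import org.apache.ignite.configuration.IgniteConfiguration;

  public class MemoryConfig {
      static IgniteConfiguration config() {
          DataRegionConfiguration region = new DataRegionConfiguration()
              .setName("Default_Region")
              .setMaxSize(500L * 1024 * 1024); // 500 MB off-heap cap, as described above

          return new IgniteConfiguration()
              .setDataStorageConfiguration(new DataStorageConfiguration()
                  .setDefaultDataRegionConfiguration(region));
      }
  }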

C:\Java64\jdk1.8.0_144\bin>jcmd.exe 16956 VM.native_memory summary

16956:
 Total: reserved=3513712KB, committed=2249108KB

- Java Heap (reserved=524288KB, committed=524288KB)
            (mmap: reserved=524288KB, committed=524288KB)

- Class (reserved=1127107KB, committed=86507KB)
        (classes #13259)
        (malloc=10947KB #17120)
        (mmap: reserved=1116160KB, committed=75560KB)

- Thread (reserved=89748KB, committed=89748KB)
         (thread #88)
         (stack: reserved=89088KB, committed=89088KB)
         (malloc=270KB #454)
         (arena=391KB #175)

- Code (reserved=254854KB, committed=30930KB)
       (malloc=5254KB #8013)
       (mmap: reserved=249600KB, committed=25676KB)

- GC (reserved=29656KB, committed=29576KB)
     (malloc=10392KB #385)
     (mmap: reserved=19264KB, committed=19184KB)

- Compiler (reserved=188KB, committed=188KB)
           (malloc=57KB #243)
           (arena=131KB #3)

- Internal (reserved=1464736KB, committed=1464736KB)
           (malloc=1464672KB #40848)
           (mmap: reserved=64KB, committed=64KB)

- Symbol (reserved=18973KB, committed=18973KB)
         (malloc=15353KB #152350)
         (arena=3620KB #1)

- Native Memory Tracking (reserved=3450KB, committed=3450KB)
                         (malloc=14KB #167)
                         (tracking overhead=3436KB)

- Arena Chunk (reserved=712KB, committed=712KB)
              (malloc=712KB)


Re: [IGNITE-10925] After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-14 Thread Prasad Bhalerao
Hi,

I am able to create a reproducer for this issue. I have also created a JIRA
IGNITE-10925 <https://issues.apache.org/jira/browse/IGNITE-10925>  for this
issue.

Reproducer: https://github.com/prasadbhalerao1983/IgniteIssueReproducer.git

Steps to reproduce:

1) First run the com.example.demo.Server class as a Java program.

2) Then run com.example.demo.Client as a Java program.

Thanks,
Prasad

On Sat, Jan 12, 2019 at 11:17 AM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> No, I am not using ZooKeeper discovery.
> I am using TcpDiscoveryVmIpFinder.
>
> Can someone please explain on what event cacheMetrics in TcpDiscoveryNode
> gets populated. It is not getting populated in a standalone program.
>
> If it gets populated then I might be able to reproduce this case.
>
> On Fri 11 Jan, 2019, 8:28 PM Ilya Kasnacheev  wrote:
>
>> Hello!
>>
>> Have you tried enabling Zookeeper in your reproducer? I have a hunch that
>> they are linked: this behavior is affected by zookeeper discovery.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пт, 11 янв. 2019 г. в 17:44, Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com>:
>>
>>> I tried to reproduce this in a standalone program. But the thing is the
>>> cache metrics map in TcpDiscoveryNode is empty even after setting
>>> statisticsEnabled to true on all caches.
>>> So the flow does not enter the serialize/deserialize cacheMetrics block.
>>>
>>> Any idea how the cacheMetrics gets populated. On which event?
>>>
>>>
>>> Thanks,
>>> Prasad
>>>
>>> On Fri 11 Jan, 2019, 7:55 PM ilya.kasnacheev >> wrote:
>>>
>>>> Hello!
>>>>
>>>> I think the problem was introduced by
>>>> https://issues.apache.org/jira/browse/IGNITE-6846 which does look very
>>>> suspicious, however it is strange that it does not reproduce right away.
>>>>
>>>> I could try and devise a fix but I could not reproduce this behavior in
>>>> any
>>>> of tests. If you could do a reproducer project that would be awesome.
>>>>
>>>> Regards,
>>>>
>>>>
>>>>
>>>> --
>>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>>
>>>


Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-11 Thread Prasad Bhalerao
Resending


I am not able to reproduce this issue in a small reproducer project, but it is
consistently happening in my project. So I debugged the issue and attached
the screenshots to this mail.



*NOTE:* This issue occurs if statistics are enabled at the cache
configuration level [cacheCfg.setStatisticsEnabled(true)].
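
For reference, a minimal sketch (cache name and types are assumptions) of the
triggering configuration described in the note above:

  import org.apache.ignite.Ignite;
  import org.apache.ignite.configuration.CacheConfiguration;

  public class StatisticsEnabledCache {
      static void create(Ignite ignite) {
          CacheConfiguration<Long, Object> cacheCfg = new CacheConfiguration<>("MY_CACHE"); // assumed name

          // With this flag set, CacheMetricsSnapshot instances are marshalled into
          // the responses discussed in this thread and must unmarshal cleanly.
          cacheCfg.setStatisticsEnabled(true);

          ignite.getOrCreateCache(cacheCfg);
      }
  }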



As shown in the screenshots there are 22 cache metrics in the cacheMetrics
hashmap. All these cache metrics get serialized successfully. But at the time
of deserialization on the client node only the first metric gets de-serialized
successfully; all other metrics up to iteration count 13 are de-serialized as
null values, and on iteration 14 the "ref" byte value in the
OptimizedObjectInputStream.readObject0() method is read as 81 and the code
throws an exception.



I think this is where it is going wrong. The object copy at the time of
serialization and de-serialization should be the same, but that's not happening
in Ignite 2.7. So I debugged this on Ignite 2.6. On 2.6 all
22 cacheMetrics are de-serialized successfully.



AffinityJob result being serialized on server:

[image: server.png]

AffinityJob result being de-serialized on client:

[image: client.jpg]


Thanks,
Prasad

On Wed, Jan 9, 2019 at 6:41 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Do you have a reproducer project to reliably confirm this issue?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 9 Jan 2019 at 12:39, Akash Shinde :
>
>> Added  dev@ignite.apache.org.
>>
>> Should I log Jira for this issue?
>>
>> Thanks,
>> Akash
>>
>>
>>
>> On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde 
>> wrote:
>>
>> > Hi,
>> >
>> > No both nodes, client and server are running on Ignite 2.7 version. I am
>> > starting both server and client from Intellij IDE.
>> >
>> > Version printed in Server node log:
>> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>> >
>> > Version in client node log:
>> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>> >
>> > Thanks,
>> > Akash
>> >
>> > On Tue, Jan 8, 2019 at 5:18 PM Mikael 
>> wrote:
>> >
>> >> Hi!
>> >>
>> >> Any chance you might have one node running 2.6 or something like that ?
>> >>
>> >> It looks like it get a different object that does not match the one
>> >> expected in 2.7
>> >>
>> >> Mikael
>> >> Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
>> >>
>> >> Before submitting the affinity task ignite first gets the affinity
>> cached
>> >> function (AffinityInfo) by submitting the cluster wide task
>> "AffinityJob".
>> >> But while in the process of retrieving the output of this AffinityJob,
>> >> ignite deserializes this output. I am getting exception while
>> deserailizing
>> >> this output.
>> >> In TcpDiscoveryNode.readExternal() method while deserailizing the
>> >> CacheMetrics object from input stream on 14th iteration I am getting
>> >> following exception. Complete stack trace is given in this mail chain.
>> >>
>> >> Caused by: java.io.IOException: Unexpected error occurred during
>> >> unmarshalling of an instance of the class:
>> >> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
>> >>
>> >> This is working fine on Ignite 2.6 version but giving problem on 2.7.
>> >>
>> >> Is this a bug or am I doing something wrong?
>> >>
>> >> Can someone please help?
>> >>
>> >> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
>> >> wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> When execute affinity.partition(key), I am getting following exception
>> >>> on Ignite  2.7.
>> >>>
>> >>> Stacktrace:
>> >>>
>> >>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
>> >>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
>> >>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
>> >>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
>> >>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
>> >>> isCancelled=false, retry=null]
>> >>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object
>> >>> with optimized marshaller
>> >>>  at
>> >>>
>> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>> >>>  at
>> >>>
>> 

Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-11 Thread Prasad Bhalerao
Resending with screenshots.


> I am not able to reproduce this issue in a small reproducer project, but it is
> consistently happening in my project. So I debugged the issue and attached
> the screenshots to this mail.
>
> *NOTE:* This issue occurs if statistics are enabled at the cache
> configuration level [cacheCfg.setStatisticsEnabled(true)].
>
> As shown in the screenshots there are 22 cache metrics in the cacheMetrics
> hashmap. All these cache metrics get serialized successfully. But at the time
> of deserialization on the client node only the first metric gets de-serialized
> successfully; all other metrics up to iteration count 13 are de-serialized as
> null values, and on iteration 14 the "ref" byte value in the
> OptimizedObjectInputStream.readObject0() method is read as 81 and the code
> throws an exception.
>
> I think this is where it is going wrong. The object copy at the time of
> serialization and de-serialization should be the same, but that's not
> happening in Ignite 2.7. So I debugged this on Ignite 2.6. On 2.6 all
> 22 cacheMetrics are de-serialized successfully.
>
>
>
> AffinityJob result being serialized on server:
>
> [image: server.png]
>
> AffinityJob result being de-serialized on client:
>
> [image: client.jpg]
>
> Thanks,
> Prasad
>
>
> On Wed, Jan 9, 2019 at 6:41 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Do you have a reproducer project to reliably confirm this issue?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, 9 Jan 2019 at 12:39, Akash Shinde :
>>
>>> Added  dev@ignite.apache.org.
>>>
>>> Should I log Jira for this issue?
>>>
>>> Thanks,
>>> Akash
>>>
>>>
>>>
>>> On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde 
>>> wrote:
>>>
>>> > Hi,
>>> >
>>> > No both nodes, client and server are running on Ignite 2.7 version. I
>>> am
>>> > starting both server and client from Intellij IDE.
>>> >
>>> > Version printed in Server node log:
>>> > Ignite ver.
>>> 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>>> >
>>> > Version in client node log:
>>> > Ignite ver.
>>> 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>>> >
>>> > Thanks,
>>> > Akash
>>> >
>>> > On Tue, Jan 8, 2019 at 5:18 PM Mikael 
>>> wrote:
>>> >
>>> >> Hi!
>>> >>
>>> >> Any chance you might have one node running 2.6 or something like that
>>> ?
>>> >>
>>> >> It looks like it get a different object that does not match the one
>>> >> expected in 2.7
>>> >>
>>> >> Mikael
>>> >> Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
>>> >>
>>> >> Before submitting the affinity task ignite first gets the affinity
>>> cached
>>> >> function (AffinityInfo) by submitting the cluster wide task
>>> "AffinityJob".
>>> >> But while in the process of retrieving the output of this AffinityJob,
>>> >> ignite deserializes this output. I am getting exception while
>>> deserailizing
>>> >> this output.
>>> >> In TcpDiscoveryNode.readExternal() method while deserailizing the
>>> >> CacheMetrics object from input stream on 14th iteration I am getting
>>> >> following exception. Complete stack trace is given in this mail chain.
>>> >>
>>> >> Caused by: java.io.IOException: Unexpected error occurred during
>>> >> unmarshalling of an instance of the class:
>>> >> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
>>> >>
>>> >> This is working fine on Ignite 2.6 version but giving problem on 2.7.
>>> >>
>>> >> Is this a bug or am I doing something wrong?
>>> >>
>>> >> Can someone please help?
>>> >>
>>> >> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
>>> >> wrote:
>>> >>
>>> >>> Hi,
>>> >>>
>>> >>> When execute affinity.partition(key), I am getting following
>>> exception
>>> >>> on Ignite  2.7.
>>> >>>
>>> >>> Stacktrace:
>>> >>>
>>> >>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
>>> >>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
>>> >>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
>>> >>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
>>> >>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
>>> >>> isCancelled=false, retry=null]
>>> >>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object
>>> >>> with optimized marshaller
>>> >>>  at
>>> >>>
>>> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
>>> >>>  at
>>> >>>
>>> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>>> >>>  at
>>> >>>
>>> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>>> >>>  at
>>> >>>
>>> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>>> >>>  at
>>> >>>
>>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>>> >>>  at
>>> >>>
>>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)

Re: After upgrading 2.7 getting error during unmarshalling (Works fine on 2.6)

2019-01-11 Thread Prasad Bhalerao
Hi Ilya,

I am not able to reproduce this issue in a small reproducer project, but it is
consistently happening in my project. So I debugged the issue and attached
the screenshots to this mail.

*NOTE:* This issue occurs if statistics are enabled at the cache
configuration level [cacheCfg.setStatisticsEnabled(true)].

As shown in the screenshots there are 22 cache metrics in the cacheMetrics
hashmap. All these cache metrics get serialized successfully on the server
node. But at the time of deserialization on the client node only the first
metric gets de-serialized successfully; all other metrics up to iteration
count 13 are de-serialized as null values, and on iteration 14 the "ref" byte
value in the OptimizedObjectInputStream.readObject0() method is read as 81 and
the code throws an exception.

I think this is where it is going wrong. The object copy at the time of
serialization and de-serialization should be the same, but that's not
happening in Ignite 2.7.

So I debugged this on Ignite 2.6. On 2.6 all 22 cacheMetrics are de-serialized
successfully.

This looks like a bug to me in the serializer/deserializer code.



AffinityJob result being serialized on server: [screenshot not included in the archive]

AffinityJob result being de-serialized on client: [screenshot not included in the archive]




Thanks,
Prasad
On Wed, Jan 9, 2019 at 6:41 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Do you have a reproducer project to reliably confirm this issue?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 9 Jan 2019 at 12:39, Akash Shinde :
>
>> Added  dev@ignite.apache.org.
>>
>> Should I log Jira for this issue?
>>
>> Thanks,
>> Akash
>>
>>
>>
>> On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde 
>> wrote:
>>
>> > Hi,
>> >
>> > No both nodes, client and server are running on Ignite 2.7 version. I am
>> > starting both server and client from Intellij IDE.
>> >
>> > Version printed in Server node log:
>> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>> >
>> > Version in client node log:
>> > Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>> >
>> > Thanks,
>> > Akash
>> >
>> > On Tue, Jan 8, 2019 at 5:18 PM Mikael 
>> wrote:
>> >
>> >> Hi!
>> >>
>> >> Any chance you might have one node running 2.6 or something like that ?
>> >>
>> >> It looks like it get a different object that does not match the one
>> >> expected in 2.7
>> >>
>> >> Mikael
>> >> Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
>> >>
>> >> Before submitting the affinity task ignite first gets the affinity
>> cached
>> >> function (AffinityInfo) by submitting the cluster wide task
>> "AffinityJob".
>> >> But while in the process of retrieving the output of this AffinityJob,
>> >> ignite deserializes this output. I am getting exception while
>> deserailizing
>> >> this output.
>> >> In TcpDiscoveryNode.readExternal() method while deserailizing the
>> >> CacheMetrics object from input stream on 14th iteration I am getting
>> >> following exception. Complete stack trace is given in this mail chain.
>> >>
>> >> Caused by: java.io.IOException: Unexpected error occurred during
>> >> unmarshalling of an instance of the class:
>> >> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
>> >>
>> >> This is working fine on Ignite 2.6 version but giving problem on 2.7.
>> >>
>> >> Is this a bug or am I doing something wrong?
>> >>
>> >> Can someone please help?
>> >>
>> >> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
>> >> wrote:
>> >>
>> >>> Hi,
>> >>>
>> >>> When execute affinity.partition(key), I am getting following exception
>> >>> on Ignite  2.7.
>> >>>
>> >>> Stacktrace:
>> >>>
>> >>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
>> >>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
>> >>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
>> >>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
>> >>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
>> >>> isCancelled=false, retry=null]
>> >>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object
>> >>> with optimized marshaller
>> >>>  at
>> >>>
>> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>> >>>  at
>> >>>
>> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>> >>>  at
>> >>>
>> 

Fwd: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-05 Thread Prasad Bhalerao
Can someone please explain if anything has changed in Ignite 2.7?

I started getting this exception after upgrading to 2.7.


-- Forwarded message -
From: Prasad Bhalerao 
Date: Fri 4 Jan, 2019, 8:41 PM
Subject: Re: Getting javax.cache.CacheException after upgrading to Ignite
2.7
To: 


Can someone please help me with this?

On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao wrote:

> Hi,
>
> After upgrading to the 2.7 version I am getting the following exception. I am
> executing a SELECT SQL inside an optimistic transaction with SERIALIZABLE
> isolation level.
>
> 1) Has anything changed from 2.6 to 2.7? This worked fine prior to
> the 2.7 version.
>
> After changing the concurrency to PESSIMISTIC and the isolation level to
> REPEATABLE_READ it works fine.
>
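For reference, a minimal sketch of the workaround described above (cache name,
SQL text and argument are assumptions):

  import java.util.List;

  import org.apache.ignite.Ignite;
  import org.apache.ignite.IgniteCache;
  import org.apache.ignite.cache.query.SqlFieldsQuery;
  import org.apache.ignite.transactions.Transaction;

  import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
  import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

  public class SqlInTransaction {
      static List<List<?>> select(Ignite ignite) {
          IgniteCache<Long, Object> cache = ignite.cache("SOME_CACHE"); // assumed cache name

          // Run the SELECT inside a PESSIMISTIC / REPEATABLE_READ transaction instead
          // of OPTIMISTIC / SERIALIZABLE, which triggered the exception below on 2.7.
          try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
              List<List<?>> rows = cache.query(
                  new SqlFieldsQuery("SELECT id FROM SomeType WHERE subscriptionId = ?") // assumed SQL
                      .setArgs(42L))
                  .getAll();

              tx.commit();
              return rows;
          }
      }
  }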
>
> javax.cache.CacheException: Only pessimistic repeatable read transactions are supported at the moment.
>     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
>     at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)
>     at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)
>     at com.qualys.agms.grid.dao.AbstractDataGridDAO.getFieldResultsByCriteria(AbstractDataGridDAO.java:85)
>
> Thanks,
> Prasad
>


Query Execution is very slow (Can I create a Jira for this )

2018-12-28 Thread Prasad Bhalerao
Hi,

I am executing a SQL query with a TEMP table join for the IN clause. The SQL is
shown below.
This SQL is taking 20-30 seconds to execute. My cache has only 1.39 million
entries, but in the real scenario I will have around 40 million entries.

I have created a reproducer and uploaded it to GitHub. I have created 3
cases to test the SQL execution time.

Please run the *IgniteQueryTester_4* class to check the issue.

Can someone please help with this case?
Can I create a JIRA for this issue?

GitHub project: https://github.com/prasadbhalerao1983/IgniteTestPrj.git


  SELECT ipv4agd.id,
ipv4agd.assetGroupId,
ipv4agd.ipStart,
ipv4agd.ipEnd
  FROM IpV4AssetGroupData ipv4agd
  JOIN TABLE (assetGroupId bigint = ? ) temp
  ON ipv4agd.assetGroupId = temp.assetGroupId
  WHERE subscriptionId= ?
  AND (ipStart   <= ? AND ipEnd  >= ?)
  ORDER BY ipv4agd.assetGroupId
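
For context, a minimal sketch (cache name, argument values and types are
assumptions) of how this SQL is executed from Java, with the array of
assetGroupIds bound to the TABLE(...) parameter:

  import java.util.List;

  import org.apache.ignite.Ignite;
  import org.apache.ignite.cache.query.SqlFieldsQuery;

  public class InClauseQuery {
      static List<List<?>> run(Ignite ignite) {
          SqlFieldsQuery qry = new SqlFieldsQuery(
              "SELECT ipv4agd.id, ipv4agd.assetGroupId, ipv4agd.ipStart, ipv4agd.ipEnd " +
              "FROM IpV4AssetGroupData ipv4agd " +
              "JOIN TABLE (assetGroupId bigint = ?) temp ON ipv4agd.assetGroupId = temp.assetGroupId " +
              "WHERE subscriptionId = ? AND (ipStart <= ? AND ipEnd >= ?) " +
              "ORDER BY ipv4agd.assetGroupId");

          // The first argument is the array expanded into the TABLE(...) rows; the rest
          // are the subscriptionId and the IP range bounds (all values assumed).
          qry.setArgs(new Long[] {101L, 102L, 103L}, 1L, 3232235776L, 3232235520L);

          return ignite.cache("IPV4_ASSET_GROUP_DATA_CACHE").query(qry).getAll(); // assumed cache name
      }
  }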

I am also attaching a JProfiler snapshot for this query run on a 40 million
entry load.

[image: image.png]
Thanks,
Prasad