RE: Ignite as Cache and Query layer for Cassandra.

2017-08-31 Thread Roger Fischer (CW)
Hi Mark,

I recently did a Proof-of-Concept with Ignite and Cassandra. There are a number 
of challenges you need to look out for:

1) In order to do SQL queries, you need to have all the data in Ignite. It is 
not practical to include other persistence layers in the query process.
The exception is Ignite Native Persistence.

2) You need to deploy the POJO for the objects (and keys if they are not basic 
types) in Ignite. This may become unnecessary in a near-future release.

3) Preloading data is not particularly efficient: each Ignite node sends the 
preload query to Cassandra and then discards the rows it is not primary or backup 
for. It may be possible to optimize this for the case where the partitioning is 
the same in Ignite and Cassandra.
We wanted to dynamically load data from Cassandra, and thus were sensitive to 
this point.

4) Things get a bit trickier when the primary keys (partitioning) are not the 
same in Ignite and Cassandra. Only one key POJO can be deployed in Ignite, and 
it needs to handle the Ignite and Cassandra use cases.

Data Source is configured at the top level:
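(A minimal sketch only, using the standard Ignite Cassandra module classes; host 
names and consistency levels are illustrative.)

<bean id="cassandraDataSource"
      class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
    <!-- Cassandra nodes the driver should contact (illustrative hosts). -->
    <property name="contactPoints">
        <list>
            <value>cassandra-host-1</value>
            <value>cassandra-host-2</value>
        </list>
    </property>
    <property name="readConsistency" value="ONE"/>
    <property name="writeConsistency" value="ONE"/>
</bean>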

Persistence settings for each cache are also at the top level. Here is an 
example that has a composite primary key and some field name mappings. Note the 
CDATA section.
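(Again a minimal sketch along those lines; keyspace, table, POJO classes and 
column names are illustrative. The settings XML is embedded as a string inside 
CDATA; a classpath Resource constructor-arg works as well.)

<bean id="myCachePersistenceSettings"
      class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
    <constructor-arg type="java.lang.String">
        <value><![CDATA[
            <persistence keyspace="my_keyspace" table="sensor_reading">
                <!-- Composite primary key: partition key plus a clustering column. -->
                <keyPersistence class="com.example.SensorReadingKey" strategy="POJO">
                    <partitionKey>
                        <field name="sensorId" column="sensor_id"/>
                    </partitionKey>
                    <clusterKey>
                        <field name="readingTime" column="reading_time" sort="desc"/>
                    </clusterKey>
                </keyPersistence>
                <!-- Field name mappings between the value POJO and the Cassandra columns. -->
                <valuePersistence class="com.example.SensorReading" strategy="POJO">
                    <field name="value" column="reading_value"/>
                    <field name="unit" column="unit"/>
                </valuePersistence>
            </persistence>
        ]]></value>
    </constructor-arg>
</bean>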

Lastly, in the cache config, you need to reference the persistence settings. We 
did not use write-through or write-behind.
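(A minimal sketch; the cache name is illustrative, and write-through/write-behind 
are left disabled, as noted above.)

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="sensorReadingCache"/>
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="false"/>
    <property name="cacheStoreFactory">
        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
            <!-- References the data source and persistence settings beans by name. -->
            <property name="dataSourceBean" value="cassandraDataSource"/>
            <property name="persistenceSettingsBean" value="myCachePersistenceSettings"/>
        </bean>
    </property>
</bean>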

Hope this helps.

Roger

From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: Thursday, August 31, 2017 8:56 AM
To: user@ignite.apache.org
Subject: Re: Ignite as Cache and Query layer for Cassandra.

Hello Mark,

I have found some documentation on how to do what the article describes:

"Apache Ignite can be inserted between Apache Cassandra and an existing 
application layer with no changes to the Cassandra data and only minimal 
changes to the application"

https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra
On the left you can see there are config examples, tests, AWS configuration, 
etc.
Hope it's good enough.


--
Ilya Kasnacheev

2017-08-31 16:15 GMT+03:00 Mark Farnan 
>:
Howdy,

I'm currently evaluating Ignite for a project, and am looking for some guidance 
and project sample for using Ignite over Cassandra as a cache and query layer.
Note: The Cassandra database and schema already exists and is mature.

In a Recent Infoworld post,   
(https://www.infoworld.com/article/3191895/application-development/light-a-fire-under-cassandra-with-apache-ignite.html)
 it stated

"No remodeling of Cassandra data
Apache Ignite reads from Apache Cassandra and other NoSQL databases, so moving 
Cassandra data into Ignite requires no data modification. The data schema can 
also be migrated directly into Ignite as is. "

However, I can't find guidance in any documentation or online articles on just 
how to do this, and there are no links in the article.
  - The Ignite docs' Cassandra section appears to only cover using Cassandra as 
a persistence layer, not for layering it over and caching / querying an existing 
schema (unless I'm missing something obvious!).
  - The YouTube video cuts out to blank at the end, so no links are shown.
    The links in the slideshare document appear to be broken.
  - Can't find any GitHub examples of how to do this (especially not for the 
couple of YouTube videos).

Can anyone please point me to some working sample code and/or documentation on 
how to do this ?

I'm especially interested in not having to migrate any schemas, nor having to 
pre-load the data into the cache (lazy load), which the articles imply is possible.
Also some of the data would be too large to load it all into Ram anyway.


Any assistance greatly appreciated.


Regards

Mark.



Re: Ignite as Cache and Query layer for Cassandra.

2017-08-31 Thread Denis Magda
Came across this technical getting-started guide:
https://t.co/KboV0LtPmv?ssr=true

Check it out and share whether it helps you more than the existing docs.

Denis

On Thursday, August 31, 2017, Denis Magda  wrote:

> Hello Mark,
>
> The article is not technical; it covers only the tip of the iceberg, and this
> is probably why it confused you.
>
> In short, the conclusion you got
>
> I'm especially interested in not having to migrate any schema's,  nor
> having to pre-load the data into the cache.  (lazy load), which the
> articles imply.
>
>
> is not 100% correct.
>
> Cassandra is a 3rd party store for Ignite (like any RDBMS or NoSQL database),
> so you should create a corresponding Ignite schema and fully pre-load the data
> into RAM if you plan to use SQL. The only case where that is not required is
> when Ignite Native Persistence is used as the disk store. See the differences here:
> https://apacheignite.readme.io/v2.1/docs/distributed-persistent-store#section-ignite-native-vs-3rd-party-persistence
>
> Basically, if you follow the Cassandra integration doc provided earlier, you
> will be able to stick Ignite and Cassandra together.
>
> P.S.
> I’ve updated the broken links. Thanks for reporting!
>
> —
> Denis
>
> On Aug 31, 2017, at 12:29 PM, Mark Farnan  > wrote:
>
> Hi Ilya,
>
> That documentation I saw,  however it seems to describe only using
> Cassandra as a Persistant store for  Ignite Caches,   i.e.
>
> "Ignite Cassandra module implements persistent store
>  for Ignite caches
> by utilizing Cassandra as a persistent
> storage for expired cache records.
> It functions pretty much the same way like CacheJdbcBlobStore
> 
>  and CacheJdbcPojoStore
> and
> provides such benefits:"
>
> It dosn't seem to describe anywhere, how to set it up over an existing
> large cassandra table structure to act as a cache and SQL query mechanism
> for it.   Nor how to deal with the fact you can't have it all in memory and
> how Writes and Reads would be handled etc.
> Also several of the links on the page are broken.
>
> Ideally want to be able to run ad-hoc SQL queries over the data, including
> joins etc. \
>
> I agree there are config examples, AWS etc,   but  they all seem to me, to
> be for persistance of general cache,  BLOB or POJO,   not for what the
> InfoWorld article described.
>
> Unless I'm missing something obvious !.
>
>
> Regards
>
> Mark.
>
>
>
>
>
> From:Ilya Kasnacheev  >
> To:user@ignite.apache.org
> ,
> Date:08/31/2017 06:56 PM
> Subject:Re: Ignite as Cache and Query layer for Cassandra.
> --
>
>
>
> Hello Mark,
>
> I have found some documentation on how to do what the article describes:
>
> "Apache Ignite can be inserted between Apache Cassandra and an existing
> application layer with no changes to the Cassandra data and only minimal
> changes to the application"
>
> *https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra*
> 
>
> On the left you can see there are config examples, tests, AWS
> configuration, etc.
>
> Hope it's good enough.
>
>
> --
> Ilya Kasnacheev
>
> 2017-08-31 16:15 GMT+03:00 Mark Farnan <*mark.far...@petrolink.com*
> >:
> Howdy,
>
> I'm currently evaluating Ignite for a project, and am looking for some
> guidance and project sample for using Ignite over Cassandra as a cache and
> query layer.
> Note: The Cassandra database and schema already exists and is mature.
>
> In a Recent Infoworld post,   (
> *https://www.infoworld.com/article/3191895/application-development/light-a-fire-under-cassandra-with-apache-ignite.html*
> )
> it stated
>
> "*No remodeling of Cassandra data*
> Apache Ignite reads from Apache Cassandra and other NoSQL databases, so
> moving Cassandra data into Ignite requires no data modification. The data
> schema can also be migrated directly into Ignite as is. "
>
> However, I can't find in *any* documentation,  or online articles,
>  guidance on just how to do this and there are no links in the article.
>   - The Ignite docs /cassandra section appear to only cover using
> Cassandra as a persistence layer,  not for going over and caching /
> querying an existing schema.   (unless I'm missing something obvious !)
>   - The Youtube video cuts out to blank at the end, so no links are shown.
>The  links on the 

Cache Events for a specific cache.

2017-08-31 Thread Chris Berry
Hi, 

I need to watch a simple flag in an “AdminCache” that tells me when the
caches are finished priming.
NOTE: the AdminCache is one of many caches.

This flag is written by one of the Nodes in the Cluster, and other Nodes
should see it change and then react to it.
This AdminCache is very small (a handful of simple flags) and REPLICATED.

I was hoping to use a CacheEvent to do this job.
But that seems impossible now.

Even though I register for this cache like this:

public void registerCacheSpecificEventListener(String cacheName,
        IgnitePredicate<CacheEvent> listenerPredicate, int... eventTypes) {
    ignite.events(ignite.cluster().forCacheNodes(cacheName))
        .localListen(listenerPredicate, eventTypes);
}

Calling it like this:

managedIgnite.registerCacheSpecificEventListener(ManagedIgnite.ADMIN_DATA_CACHE,
    this, EventType.EVT_CACHE_OBJECT_PUT);

But my logging proves that my Listener `apply` is called for EVERY PUT that
occurs on this Node

Is there any way that I can listen to ONLY changes for a specific cache??
It cannot be performant to fire the Event for every PUT in every cache.

If not, I guess I need to use a ContinuousQuery?
Are there any other options??

Thanks, 
-- Chris 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Memory is not going down after cache.clean()

2017-08-31 Thread davida
Apache Ignite.NET  v2.2. 

The memory consumption stays high even after all entries are deleted from the
cache. I was expecting it to drop, but in practice that is not happening (i.e.
after populating the data, the process uses ~5GB of memory; after deletion it
only went down to ~4.89GB).

Is this expected behavior? Are there any ways to reduce memory consumption
(other than what is described in the Performance Tips section of the official docs)?

Thanks.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite as Cache and Query layer for Cassandra.

2017-08-31 Thread Denis Magda
Hello Mark,

The article is not technical; it covers only the tip of the iceberg, and this is 
probably why it confused you.

In short, the conclusion you got

> I'm especially interested in not having to migrate any schema's,  nor having 
> to pre-load the data into the cache.  (lazy load), which the articles imply. 

is not 100% correct.

Cassandra is a 3rd party store for Ignite (like any RDBMS or NoSQL database), so 
you should create a corresponding Ignite schema and fully pre-load the data into 
RAM if you plan to use SQL. The only case where that is not required is when 
Ignite Native Persistence is used as the disk store. See the differences here:
https://apacheignite.readme.io/v2.1/docs/distributed-persistent-store#section-ignite-native-vs-3rd-party-persistence

Basically, if you follow the Cassandra integration doc provided earlier, you will 
be able to stick Ignite and Cassandra together.
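
For reference, the pre-load itself is a single loadCache call per cache. A minimal 
sketch (the cache name and key/value types are placeholders):

IgniteCache<PersonKey, Person> cache = ignite.cache("PersonCache");

// Pulls the data from the underlying Cassandra table into Ignite through the cache store.
cache.loadCache(null);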
 
P.S.
I’ve updated the broken links. Thanks for reporting!

—
Denis

> On Aug 31, 2017, at 12:29 PM, Mark Farnan  wrote:
> 
> Hi Ilya, 
> 
> That documentation I saw,  however it seems to describe only using Cassandra 
> as a Persistant store for  Ignite Caches,   i.e.   
> 
> "Ignite Cassandra module implements persistent store 
>  for Ignite caches by 
> utilizing Cassandra as a persistent storage for 
> expired cache records. 
> It functions pretty much the same way like CacheJdbcBlobStore 
>  and 
> CacheJdbcPojoStore 
> and 
> provides such benefits:"
> 
> It dosn't seem to describe anywhere, how to set it up over an existing large 
> cassandra table structure to act as a cache and SQL query mechanism for it.   
> Nor how to deal with the fact you can't have it all in memory and how Writes 
> and Reads would be handled etc. 
> Also several of the links on the page are broken. 
> 
> Ideally want to be able to run ad-hoc SQL queries over the data, including 
> joins etc. \ 
> 
> I agree there are config examples, AWS etc,   but  they all seem to me, to be 
> for persistance of general cache,  BLOB or POJO,   not for what the InfoWorld 
> article described. 
> 
> Unless I'm missing something obvious !. 
> 
> 
> Regards 
> 
> Mark. 
> 
> 
> 
> 
> 
> From:Ilya Kasnacheev  
> To:user@ignite.apache.org, 
> Date:08/31/2017 06:56 PM 
> Subject:Re: Ignite as Cache and Query layer for Cassandra. 
> 
> 
> 
> Hello Mark,
> 
> I have found some documentation on how to do what the article describes:
> 
> "Apache Ignite can be inserted between Apache Cassandra and an existing 
> application layer with no changes to the Cassandra data and only minimal 
> changes to the application"
> 
> https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra 
> 
> 
> On the left you can see there are config examples, tests, AWS configuration, 
> etc.
> 
> Hope it's good enough. 
> 
> 
> -- 
> Ilya Kasnacheev 
> 
> 2017-08-31 16:15 GMT+03:00 Mark Farnan  >: 
> Howdy, 
> 
> I'm currently evaluating Ignite for a project, and am looking for some 
> guidance and project sample for using Ignite over Cassandra as a cache and 
> query layer.   
> Note: The Cassandra database and schema already exists and is mature.   
> 
> In a Recent Infoworld post,   
> (https://www.infoworld.com/article/3191895/application-development/light-a-fire-under-cassandra-with-apache-ignite.html
>  
> )
>  it stated 
> 
> "No remodeling of Cassandra data 
> Apache Ignite reads from Apache Cassandra and other NoSQL databases, so 
> moving Cassandra data into Ignite requires no data modification. The data 
> schema can also be migrated directly into Ignite as is. " 
> 
> However, I can't find in any documentation,  or online articles,  guidance on 
> just how to do this and there are no links in the article. 
>   - The Ignite docs /cassandra section appear to only cover using Cassandra 
> as a persistence layer,  not for going over and caching / querying an 
> existing schema.   (unless I'm missing something obvious !) 
>   - The Youtube video cuts out to blank at the end, so no links are shown.
> The  links on the slideshare document appear to be broken. 
>   -  Can't find any git hub for examples on how to do this (especially not 
> for the couple you tube vids) 
> 
> Can anyone please point me to some working sample code and/or documentation 
> on how to do this ? 
> 
> I'm especially interested in not having to migrate any schema's,  nor having 
> to pre-load the data into the cache.  (lazy load), which the articles imply. 

Re: Ignite Persistence on Kubernetes.

2017-08-31 Thread Denis Magda
Created a feature request for Ignite:
https://issues.apache.org/jira/browse/IGNITE-6241 


—
Denis

> On Aug 31, 2017, at 5:05 AM, takumi  wrote:
> 
> Thank you for your answer.
> I choose MongoDB with StatefulSets, and I try to try it.
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite as Cache and Query layer for Cassandra.

2017-08-31 Thread Mark Farnan
Hi Ilya, 

I had seen that documentation; however, it seems to describe only using 
Cassandra as a persistent store for Ignite caches, i.e.:

"Ignite Cassandra module implements persistent store for Ignite caches by 
utilizing Cassandra as a persistent storage for expired cache records.
It functions pretty much the same way like CacheJdbcBlobStore and 
CacheJdbcPojoStore and provides such benefits:"

It doesn't seem to describe anywhere how to set it up over an existing 
large Cassandra table structure to act as a cache and SQL query mechanism 
for it, nor how to deal with the fact that you can't have it all in memory 
and how writes and reads would be handled, etc.
Also, several of the links on the page are broken.

Ideally I want to be able to run ad-hoc SQL queries over the data, including 
joins etc.

I agree there are config examples, AWS etc., but they all seem to me to 
be for persistence of a general cache, BLOB or POJO, not for what the 
InfoWorld article described.

Unless I'm missing something obvious!


Regards

Mark. 





From:   Ilya Kasnacheev 
To: user@ignite.apache.org, 
Date:   08/31/2017 06:56 PM
Subject:Re: Ignite as Cache and Query layer for Cassandra.



Hello Mark,

I have found some documentation on how to do what the article describes:

"Apache Ignite can be inserted between Apache Cassandra and an existing 
application layer with no changes to the Cassandra data and only minimal 
changes to the application"

https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra

On the left you can see there are config examples, tests, AWS 
configuration, etc.

Hope it's good enough.


-- 
Ilya Kasnacheev

2017-08-31 16:15 GMT+03:00 Mark Farnan :
Howdy, 

I'm currently evaluating Ignite for a project, and am looking for some 
guidance and project sample for using Ignite over Cassandra as a cache and 
query layer.   
Note: The Cassandra database and schema already exists and is mature.   

In a Recent Infoworld post,   (
https://www.infoworld.com/article/3191895/application-development/light-a-fire-under-cassandra-with-apache-ignite.html
) it stated 

"No remodeling of Cassandra data 
Apache Ignite reads from Apache Cassandra and other NoSQL databases, so 
moving Cassandra data into Ignite requires no data modification. The data 
schema can also be migrated directly into Ignite as is. " 

However, I can't find in any documentation,  or online articles,  guidance 
on just how to do this and there are no links in the article. 
  - The Ignite docs /cassandra section appear to only cover using 
Cassandra as a persistence layer,  not for going over and caching / 
querying an existing schema.   (unless I'm missing something obvious !) 
  - The Youtube video cuts out to blank at the end, so no links are shown. 
   The  links on the slideshare document appear to be broken. 
  -  Can't find any git hub for examples on how to do this (especially not 
for the couple you tube vids) 

Can anyone please point me to some working sample code and/or 
documentation on how to do this ? 

I'm especially interested in not having to migrate any schema's,  nor 
having to pre-load the data into the cache.  (lazy load), which the 
articles imply. 
Also some of the data would be too large to load it all into Ram anyway. 


Any assistance greatly appreciated.   


Regards 

Mark. 



Apache Ignite Community Update (August 2017 Issue)

2017-08-31 Thread Dieds
Igniters, here are some community highlights for August. If I missed anything,
please share it here. Special note: Did you know that Apache Ignite experts are
available to speak at your meetup? We also have spots open for YOU to speak at
the following meetups that some of us co-organize:

>  Apache Ignite London
>  Bay Area In-Memory Computing Meetup (next meetup: Sept. 21)
>  NYC In-Memory Computing Meetup (next meetup: Sept. 26)

If you are interested, just let me know here and I'll connect with you
offline to set it up. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: NullPointerException at unwrapBinariesIfNeeded(GridCacheContext.java:1719)

2017-08-31 Thread zbyszek
Thank you Andrey,
I will upgrade asap.

regards,
zbyszek



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite as Cache and Query layer for Cassandra.

2017-08-31 Thread Ilya Kasnacheev
Hello Mark,

I have found some documentation on how to do what the article describes:

"Apache Ignite can be inserted between Apache Cassandra and an existing
application layer with no changes to the Cassandra data and only minimal
changes to the application"

https://apacheignite-mix.readme.io/docs/ignite-with-apache-cassandra

On the left you can see there are config examples, tests, AWS
configuration, etc.

Hope it's good enough.


-- 
Ilya Kasnacheev

2017-08-31 16:15 GMT+03:00 Mark Farnan :

> Howdy,
>
> I'm currently evaluating Ignite for a project, and am looking for some
> guidance and project sample for using Ignite over Cassandra as a cache and
> query layer.
> Note: The Cassandra database and schema already exists and is mature.
>
> In a Recent Infoworld post,   (https://www.infoworld.com/
> article/3191895/application-development/light-a-fire-
> under-cassandra-with-apache-ignite.html) it stated
>
> "*No remodeling of Cassandra data*
> Apache Ignite reads from Apache Cassandra and other NoSQL databases, so
> moving Cassandra data into Ignite requires no data modification. The data
> schema can also be migrated directly into Ignite as is. "
>
> However, I can't find in *any* documentation,  or online articles,
>  guidance on just how to do this and there are no links in the article.
>   - The Ignite docs /cassandra section appear to only cover using
> Cassandra as a persistence layer,  not for going over and caching /
> querying an existing schema.   (unless I'm missing something obvious !)
>   - The Youtube video cuts out to blank at the end, so no links are shown.
>The  links on the slideshare document appear to be broken.
>   -  Can't find any git hub for examples on how to do this (especially not
> for the couple you tube vids)
>
> Can anyone please point me to some working sample code and/or
> documentation on how to do this ?
>
> I'm especially interested in not having to migrate any schema's,  nor
> having to pre-load the data into the cache.  (lazy load), which the
> articles imply.
> Also some of the data would be too large to load it all into Ram anyway.
>
>
> Any assistance greatly appreciated.
>
>
> Regards
>
> Mark.
>


Re: Task management - MapReduce & ForkJoin performance penalty

2017-08-31 Thread ihorps
hello

So here are the results for NoOpTask + NoOpJob on two different hosts (hardware
spec. is the same as mentioned above):
1. 1 Task - 100 Jobs -> ~0.1 sec 
2. 1 Task - 1000 Jobs -> ~4 sec 
3. 1 Task - 2000 Jobs -> ~15 sec 
4. 1 Task - 3000 Jobs -> ~36 sec 
5. 1 Task - 5000 Jobs -> ~96 sec 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: In Partitioned Mode, backup node is not restored.

2017-08-31 Thread Konstantin Dudkov
Sounds strange. Could you provide a reproducer?

Please also let me see your grid config.

Regards,
Konstantin



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-08-31 Thread Kenan Dalley
Trying this again...
OUTPUT
Output From Not Preloading the Cache:
Output From Preloading the Cache:
CODE
cassandra-ignite.xml
persistence-settings.xml
persistence-settings.full.xml (Tried in case I needed to fully define in
thexml)
connection-settings.xml
TestResponseKey
TestResponse
ApplicationConfig
Application
DATA
Table Info
Inserts




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-08-31 Thread Kenan Dalley
Starting this thread over.  I think it will help to actually mark the thread
as "HTML Format" to get all of my changes.  (I accidentally deleted the last
thread when I was trying to just delete my last reply)
OUTPUT
Output From Not Preloading the Cache:
Output From Preloading the Cache:
CODE
cassandra-ignite.xml
persistence-settings.xml
persistence-settings.full.xml (Tried in case I needed to fully define in
thexml)
connection-settings.xml
TestResponseKey
TestResponse
ApplicationConfig
Application
DATA
Table Info
Inserts




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread takumi
This is a part of the real code that I wrote.

-
  List entities = new ArrayList<>();
  QueryEntity qe = new QueryEntity(String.class.getName(), "cache");
  qe.addQueryField("attribute.prop1", Double.class.getName(), "prop3");
  qe.addQueryField("attribute.prop2", String.class.getName(), "prop4");
  qe.addQueryField("attribute.prop.prop1", Double.class.getName(), "prop5");
  qe.addQueryField("attribute.prop.prop2", String.class.getName(), "prop6");

  BinaryObject bo  =ib.builder("cache").setField("attribute",
ib.builder("cache.attribute")
  .setField("prop",
ib.builder("cache.attribute.prop")
   .setField("prop1", 50.0, Double.class)
   .setField("prop2", "old", String.class))
  .setField("prop1", 50.0, Double.class)
  .setField("prop2", "old", String.class)).build();

  cache.put("key1", bo);
  cache.query(new SqlFieldsQuery("update cache set prop4 = 'new'  where
prop3 >= 20.0"));//OK
  cache.query(new SqlFieldsQuery("update cache set prop6 = 'new'  where
prop5 >= 20.0"));//NG
-

I can update 'prop4' by SQL, but I do not update 'prop6' by SQL. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-08-31 Thread Kenan Dalley
Ok, I'm going to try this again...

OUTPUT

Output From Not Preloading the Cache:


Output From Preloading the Cache:


CODE

cassandra-ignite.xml


persistence-settings.xml


persistence-settings.full.xml (Tried in case I needed to fully define in the
xml)


connection-settings.xml


TestResponseKey


TestResponse


ApplicationConfig


Application


DATA

Table Info


Inserts






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread afedotov
Hi,

Currently, referencing a nested object's fields from SQL isn't supported,
either for regular Java objects or for BinaryObject-s.

In other words, having 

class B {
private String field1;
}

class A {
private B bField;
}

you cannot update it like `update A set bField.field1=?`

Kind regards,
Alex.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: GC stalling all the activity

2017-08-31 Thread dkarachentsev
Hi,

By "batch" I mean the key-value map that you pass to putAll().

Thanks!
-Dmitry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread Andrey Mashenkov
Does this work for regular objects?


BTW: it looks like a typo that you set a builder to the field instead of the binary
object itself ("bb.build()").

On Thu, Aug 31, 2017 at 4:32 PM, takumi  wrote:

> I use SqlQuery for nested BinaryObject.
> The BinaryObject instance is following structure.
>
>BinaryObjectBuilder bb2 = binary.builder("nested2
> hoge").setField("field2", "old", String.class);
>BinaryObjectBuilder bb = binary.builder("nested
> hoge").setField("field1",
> bb2);
>binary.builder("hoge").setField("field0", bb);
>
> The sample SQL to throw an exception is "update " + CACHE_NAME + " set
> field2 = 'new'".
>
> The cause to throw an exception to is that I update a field of BinaryObject
> which is child of nested BinaryObject .
> When I update a field of BinaryObject which is nested BinaryObject, it do
> not throw an exception.
>
> Should I not use this SQL for child of nested BinaryObject?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-08-31 Thread Evgenii Zhuravlev
Great news! Thank you for the update.

Evgenii

2017-08-31 16:47 GMT+03:00 Kenan Dalley :

> Wow.  Ok.  After I spent about 30-45 minutes getting it put together and
> looking pretty, too.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-08-31 Thread Kenan Dalley
Wow.  Ok.  After I spent about 30-45 minutes getting it put together and
looking pretty, too.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread takumi
I use SqlQuery for nested BinaryObject.
The BinaryObject instance is following structure.

   BinaryObjectBuilder bb2 = binary.builder("nested2
hoge").setField("field2", "old", String.class);
   BinaryObjectBuilder bb = binary.builder("nested hoge").setField("field1",
bb2);
   binary.builder("hoge").setField("field0", bb);

The sample SQL that throws an exception is "update " + CACHE_NAME + " set
field2 = 'new'".

The exception is thrown when I update a field of a BinaryObject which is a
child of a nested BinaryObject.
When I update a field of a BinaryObject which is the nested BinaryObject
itself, it does not throw an exception.

Should I not use this kind of SQL for a child of a nested BinaryObject?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite as Cache and Query layer for Cassandra.

2017-08-31 Thread Mark Farnan
Howdy, 

I'm currently evaluating Ignite for a project, and am looking for some 
guidance and project sample for using Ignite over Cassandra as a cache and 
query layer. 
Note: The Cassandra database and schema already exists and is mature. 

In a Recent Infoworld post,   (
https://www.infoworld.com/article/3191895/application-development/light-a-fire-under-cassandra-with-apache-ignite.html
) it stated 

"No remodeling of Cassandra data
Apache Ignite reads from Apache Cassandra and other NoSQL databases, so 
moving Cassandra data into Ignite requires no data modification. The data 
schema can also be migrated directly into Ignite as is. "

However, I can't find guidance in any documentation or online articles on 
just how to do this, and there are no links in the article. 
  - The Ignite docs' Cassandra section appears to only cover using 
Cassandra as a persistence layer, not for layering it over and caching / 
querying an existing schema (unless I'm missing something obvious!). 
  - The YouTube video cuts out to blank at the end, so no links are shown. 
    The links in the slideshare document appear to be broken. 
  - Can't find any GitHub examples of how to do this (especially not 
for the couple of YouTube videos).

Can anyone please point me to some working sample code and/or 
documentation on how to do this?

I'm especially interested in not having to migrate any schemas, nor 
having to pre-load the data into the cache (lazy load), which the 
articles imply is possible. 
Also some of the data would be too large to load it all into Ram anyway. 


Any assistance greatly appreciated. 


Regards

Mark. 


Re: Ignite 2.1.0 - Thread deadlock on DefaultSingletonBeanRegistry.getSingleton

2017-08-31 Thread afedotov
Denis, yes, I can confirm that the issue does not occur on 2.0.
I'll drill down into the issue and will file a ticket soon.

Kind regards,
Alex.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Schema vs CacheName

2017-08-31 Thread afedotov
Hi,

Schema and cache names match by default when a cache is configured via
either annotations or QueryEntity-es.
Please refer to the corresponding documentation section.

Let me clarify your questions:
Q1: You can set a schema name distinct from the default (which matches the cache
name) by calling CacheConfiguration#setSqlSchema.
Tables created with CREATE TABLE will end up in the PUBLIC schema; there is no
option to change that.
Q2: You can use setSchema at the query level to avoid specifying a particular
schema in the SQL text.
Q3: There is no such thing as a "PUBLIC cache".
Q4: It's good practice to specify all the schema names in cross-cache
queries, but, of course, you can omit the schema belonging to the cache from
which you are running the query, or a schema that has been set at the query
level.
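
For example, a minimal sketch (it uses CacheConfiguration#setSqlSchema and
SqlFieldsQuery#setSchema; the cache, schema and type names are illustrative):

// Cache-level schema instead of the default schema named after the cache:
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("PersonCache");
ccfg.setSqlSchema("PEOPLE");
ccfg.setIndexedTypes(Long.class, Person.class);

IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);

// Query-level schema, so unqualified table names resolve against PEOPLE:
SqlFieldsQuery qry = new SqlFieldsQuery("select name from Person").setSchema("PEOPLE");
cache.query(qry).getAll();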

Kind regards,
Alex.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: We want to configure near cache on client so that we can handle high TPS items and avoid network call to server

2017-08-31 Thread afedotov
Hi,

1. You can set an eviction policy distinct from the one of the underlying
server cache, but you cannot set a distinct expiration policy for a near cache
as of now; there is, actually, no NearCacheConfiguration#setExpiryPolicyFactory
method at all.
2. For example, if you have the following configuration for a cache

CacheConfiguration cacheConf = ...;
...
cacheConf.setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(new
Duration(TimeUnit.SECONDS,5)));

NearCacheConfiguration nearConf = new
NearCacheConfiguration<>(); 
nearConf.setNearEvictionPolicy(new LruEvictionPolicy<>(1000)); 

then if an entry which exists in the near cache is updated in the underlying
server cache then
near cache will be notified about it and the expiration policy will take
that fact into account.

If you access 2000 entities within 2 seconds then due to the LRU eviction
policy 1000 of them will be evicted.
If you have 100 entities in the near cache that were accessed 2 seconds ago
and read 500 entities within 3 seconds
then those 100 entities will expire and get removed from the near cache.

Kind regards,
Alex.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Configuring path of binary_meta, db directories etc

2017-08-31 Thread afedotov
Please ignore.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Configuring path of binary_meta, db directories etc

2017-08-31 Thread Evgenii Zhuravlev
Hi,

Please
check org.apache.ignite.configuration.IgniteConfiguration#setWorkDirectory.
As of now, there is no option to specify a custom path only for the binary
metadata.
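
For example (the path is illustrative):

IgniteConfiguration cfg = new IgniteConfiguration();

// binary_meta, marshaller and db/wal folders are then created under this
// directory instead of {IGNITE_HOME}/work.
cfg.setWorkDirectory("/opt/ignite/work");

Ignite ignite = Ignition.start(cfg);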

Kind regards,
Evgenii

2017-08-31 15:17 GMT+03:00 userx :

> Hi,
>
> How can we configure a customized path for meta information which by
> default
> gets created in work folder under ignite_home ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Transaction Speed

2017-08-31 Thread Andrey Mashenkov
Ricky,

Do you mean you try to update the same two entries in all 50,000 transactions?
Is it possible to collocate these entries?


On Thu, Aug 31, 2017 at 5:56 AM, rnauv  wrote:

> I want to have 2 clients concurrently calculate some values from a cache that
> reads from a database, and store the result in the database. I can't use the
> ATOMIC cache atomicity mode because, without locks, the calculation might
> read an old value. I also need a backup to avoid losing cached data when a
> node fails.
> Let's say I run 2 servers, and 2 clients. Right now, I'm able to do 50_000
> transactions for two rows in 70 seconds (around 1400 TPS). If it's
> possible,
> I want to increase the TPS.
>
> My initial configuration is like this:
> Server configuration:
> 1. CacheAtomicityMode = transactional
> 2. Read Through, Write Through, and Write Behind = true
> 3. Cache Mode = partitioned
> 4. Backup = 1
> 5. Query entities for my table
> 6. cacheStoreFactory for my CacheStore
> 7. DiscoverySPI which I set to my localhost
>
> Client configuration have the same configuration as the server excluding
> ReadThrough, WriteThrough, WriteBehind, QueryEntities, and
> CacheStoreFactory.
>
> The Transaction Concurrency is pessimistic while transaction isolation is
> repeatable_read.
> I didn't set Write synchronization mode (which by default will be
> PRIMARY_SYNC).
>
> The transaction procedure is similar with the one in Ignite Documentation
> [1].
>
> Any suggestions?
>
> Thanks,
> Ricky
>
> [1] https://apacheignite.readme.io/docs/transactions
>
>
>
> -
> -- Ricky
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Configuring path of binary_meta, db directories etc

2017-08-31 Thread userx
Hi,

How can we configure a customized path for meta information which by default
gets created in work folder under ignite_home ?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Please ignore

2017-08-31 Thread Alexander Fedotov
For test purposes only.

Kind regards,
Alex.


Re: Ignite Persistence on Kubernetes.

2017-08-31 Thread takumi
Thank you for your answer.
I chose MongoDB with StatefulSets, and I will try it.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


In Partitioned Mode, backup node is not restored.

2017-08-31 Thread takumi
I restart the Ignite server with Ignite Persistence enabled.
The Ignite server restores the primary copies, but it does not restore the backups.

After restarting the Ignite servers:
(i) When I call get() on the Ignite server which holds the primary data, get()
returns the restored data.
(ii) When I call get() on an Ignite server which holds no data at all (neither
primary nor backup), get() returns the restored data.
(iii) When I call get() on the Ignite server which holds the backup data, get()
returns null.

I conclude that the server which holds the backup returns a null value.

I call get() directly on each server node, not via a client node.
To check whether a given node holds the primary or backup copy, I use the
AffinityFunction.

I want to know whether this result is correct.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Serialization exception in Ignite compute job

2017-08-31 Thread ezhuravlev
Hi,

Non-static nested classes that implement Serializable must be defined in an
enclosing class that is also serializable. Non-static nested classes retain
an implicit reference to an instance of their enclosing class. If the
enclosing class is not serializable, the Java serialization mechanism fails
with a java.io.NotSerializableException. 

These are Java rules; you can read about them here:
https://docs.oracle.com/javase/tutorial/java/javaOO/nested.html
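
For example, declaring the job as a static nested class (or a top-level class)
removes that implicit reference; a minimal sketch using the class names from
your example:

import org.apache.ignite.lang.IgniteRunnable;

public class EnclosingFile {

    /** Static nested class: no hidden reference to an EnclosingFile instance,
     *  so only the job's own fields have to be serializable. */
    public static class CreateConfigItemsRunnable implements IgniteRunnable {
        @Override public void run() {
            // job logic goes here
        }
    }
}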

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: GC stalling all the activity

2017-08-31 Thread userx
Hi,

I am working on using the DataStreamer option, but just for information: how
can we *configure* the batch size for an atomic cache?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cassandra failing to ReadThrough using Cache.get(key) without preloading

2017-08-31 Thread Evgenii Zhuravlev
Hi,

Looks like the config files and code were not attached. Please add them to the
thread.

Evgenii

2017-08-30 23:47 GMT+03:00 Kenan Dalley :

> According to everything that I've read, hooking up Cassandra with Ignite
> should allow for doing a lazy load of the Cache using the Cache.get(key)
> without using "loadCache" beforehand.  However, using Ignite v2.1, I'm not
> seeing that occur.  If I use "loadCache", then my "get" returns values
> appropriately, but if I don't pre-load the Cache, I just get null values as
> my result.  It's really difficult to understand what's going on behind the
> scenes to see if I've configured something incorrectly (don't think so) or
> why it wouldn't go to Cassandra directly if there's a Cache-miss.  I'm
> including my code & config below.
>
>
> OUTPUT
>
> Output From Not Preloading the Cache:
>
>
> Output From Preloading the Cache:
>
>
>
> CODE
>
> cassandra-ignite.xml
>
>
> persistence-settings.xml
>
>
> persistence-settings.full.xml (Tried in case I needed to fully define in
> the
> xml)
>
>
> connection-settings.xml
>
>
> TestResponseKey
>
>
> TestResponse
>
>
> Spring Config
>
>
> Application
>
>
>
> DATA
>
> Table Info
>
>
> Inserts
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Serialization exception in Ignite compute job

2017-08-31 Thread begineer
Hi,
I am submitting a compute job to the Ignite compute grid. It worked one time, but
now it fails every time.

My compute task is an IgniteRunnable, so it is serializable, but the enclosing class
is not. I thought the outer class does not need to be serializable because the inner
class is not anonymous.


/** this class is not serializable **/
public class EnclosingFile {

    public class CreateConfigItemsRunnable implements IgniteRunnable {
        // ...
    }
}

I found this thread, which suggests the inner class should be static. Could you
please explain why it will work after being marked as static?
http://apache-ignite-users.70518.x6.nabble.com/Re-Serialization-exception-on-Ignite-class-td920.html


Error logs:

org.apache.ignite.IgniteException: Remote job threw user exception (override
or implement ComputeTask.result(..) method if you would like to have
automatic failover for this exception).
at
org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
~[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1031)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1024)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6637)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1024)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:842)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskWorker.processDelayedResponses(GridTaskWorker.java:691)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:542)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:679)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:403)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor.runAsync(GridClosureProcessor.java:233)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor.runAsync(GridClosureProcessor.java:207)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.IgniteComputeImpl.run(IgniteComputeImpl.java:417)
[ignite-core-1.8.2.jar:1.8.2]
at
package{edited}.ignite.WorkflowIgniteComputeImpl.run(WorkflowIgniteComputeImpl.java:22)
[workflow-17.10.34.jar:17.10.34]
at
package{edited}.AdHocDeliveryConfigServiceImpl.create(AdHocDeliveryConfigServiceImpl.java:89)
[app-workflow-17.10.34.jar:17.10.34]
at
package{edited}.AdHocDeliveryConfigFacadeImpl.create(AdHocDeliveryConfigFacadeImpl.java:30)
[app-workflow-17.10.34.jar:17.10.34]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.8.0_71]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
~[na:1.8.0_71]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.8.0_71]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_71]
at
org.apache.ignite.internal.processors.service.GridServiceProxy$ServiceProxyCallable.call(GridServiceProxy.java:406)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2V2.execute(GridClosureProcessor.java:2056)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:560)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6605)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:554)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:483)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1180)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1894)
[ignite-core-1.8.2.jar:1.8.2]
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1215)
[ignite-core-1.8.2.jar:1.8.2]
at

Re: Accessing array elements within items cached in Ignite without deserialising the entire item

2017-08-31 Thread Pavel Tupitsyn
A copy is made on every public API cache entry access (ICache.Get, etc).

> randomly access items within that structure without deserialising the
entire cached item
IBinaryObject API is for that:
https://apacheignite-net.readme.io/docs/binary-mode

// Get a copy of serialized cache data (basically memcpy)
// (key and field types below are illustrative)
IBinaryObject obj = cache.WithKeepBinary<int, IBinaryObject>().Get(1);

// Deserialize a single field
var foo = obj.GetField<string>("foo");

// Access other fields, modify fields, etc.

This way the copy is made only once.

On Thu, Aug 31, 2017 at 4:35 AM, Raymond Wilson 
wrote:

> I agree on correctness being the first priority always J
>
>
>
> Is there documentation on when that copy is made? For instance, is a copy
> made for every invocation of a BinarySerializer, or are there some
> additional caching semantics that mean the copy is made once and stays
> around for a while so subsequent invocations don’t have additional overhead
> recopying the cached item from the unmanaged context?
>
>
>
> The context here would be  a cache item with potentially significant
> internal structure where you might want to randomly access items within
> that structure without deserialising the entire cached item.
>
>
>
> *From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
> *Sent:* Friday, August 4, 2017 10:10 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Accessing array elements within items cached in Ignite
> without deserialising the entire item
>
>
>
> >  git refused to clone the repo in GitExtensions
>
> "git clone https://github.com/apache/ignite.git; in console should work
>
>
>
> > Is the ‘data’ pointer actually a pointer to the unmanaged memory in the
> off heap cache containing this element
>
> It is just a pointer, it can point both to managed or unmanaged memory.
>
> Yes, in some cases we have a pointer to unmanaged memory that comes from
> Java side, but it is always a copy of the actual cache data.
>
> Otherwise it would be quite difficult to maintain atomicity and the like.
>
>
>
> Generally, I don't think we should introduce pointers and other unsafe
> stuff in the public API.
>
> Performance is important, but correctness is always a priority.
>
>
>
> On Fri, Aug 4, 2017 at 5:25 AM, Raymond Wilson 
> wrote:
>
> I had not seen that page yet – very useful.
>
>
>
> There’s a few moving parts to getting it working, so not sure I will get
> time to really dig into, but will have a look for sure.
>
>
>
> I did pull a static copy of the source (after git refused to clone the
> repo in GitExtensions) and started looking at the code. It does seem
> relatively simple to add appropriate methods to the appropriate interface
> and implementation classes.
>
>
>
> Question: When I see a method like this in BinaryStreamBase.cs:
>
>
>
> /// 
>
> /// Read byte array.
>
> /// 
>
> /// Count.
>
> /// 
>
> /// Byte array.
>
> /// 
>
> public abstract byte[] ReadByteArray(int cnt);
>
>
>
> protected static byte[] ReadByteArray0(int len, byte* data)
>
> {
>
> byte[] res = new byte[len];
>
>
>
> fixed (byte* res0 = res)
>
> {
>
> CopyMemory(data, res0, len);
>
> }
>
>
>
> return res;
>
> }
>
>
>
> Is the ‘data’ pointer actually a pointer to the unmanaged memory in the
> off heap cache containing this element? If so, would this permit ‘user
> defined’ operations to performed, something like this? [or does Ignite.Net
> Linq already support this]?
>
>
>
> /// 
>
> /// Perform action on a range of byte array elements with a
> delegate
>
> /// 
>
> /// Start at.
>
> /// Count.
>
> /// 
>
> /// Nothing
>
> /// 
>
> protected static void PerformByteArrayOperation(int index, int
> len, Action action, byte* data)
>
> {
>
> fixed (byte* res0 = [index])
>
> {
>
> for (int i = 0; I < len; i++)
>
> {
>
>  action(res0++);
>
> }
>
> }
>
> }
>
>
>
> There’s probably a nice way to genericize this across multiple array
> types, but it’s useful as an example.
>
>
>
> In this way you can operate on the data without the need to move it around
> all the time between unmanaged and managed contexts.
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
> *Sent:* Thursday, August 3, 2017 7:21 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Accessing array elements within items cached in Ignite
> without deserialising the entire item
>
>
>
> Great!
>
>
>
> Here's .NET development page, in case you haven't seen it yet:
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite.NET+Development
>
> Let me know if you need any assistance.
>
>
>
> Pavel
>
>
>
> On Thu, Aug 3, 2017 at 5:28 AM, Raymond Wilson 
> 

Re: GC stalling all the activity

2017-08-31 Thread dkarachentsev
Hi,

putAll() on an atomic cache forces it to lock all the keys and update the whole
entry group atomically. I think it would be enough to reduce the batch size, or
to use IgniteDataStreamer if this is an initial data load.
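
A minimal streamer sketch (assuming "myCache" is your cache, the key/value types
are placeholders, and "data" is the map you currently pass to putAll()):

try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("myCache")) {
    // The streamer batches and buffers entries per node itself, so no manual
    // batching is needed.
    for (Map.Entry<Long, Person> e : data.entrySet())
        streamer.addData(e.getKey(), e.getValue());
}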

No need to change direct memory size, because Ignite off-heap doesn't rely
on it.

Thanks!
-Dmitry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/