Re: Ignite Events

2018-10-11 Thread Павлухин Иван
Hi drosso,

Luckily there is no mystery. By the way, what version of Ignite do you use?

The clue to the strange behavior here is a topology change during your test
execution. As I see it, the Putter node is a server (data) node as well, so it
holds some data partitions and consequently receives some OBJECT_PUT events.
The second seemingly strange thing is observing events for the same keys on
different nodes. It is explained by the so-called "late affinity assignment".
When Putter enters the cluster, some partitions have to be moved to it from
other nodes. But Putter becomes usable before all that data is actually loaded:
instead of waiting for the data and freezing the cluster for a possibly long
time, Ignite creates temporary backup partitions on the Putter node while the
primary partitions stay on the ServerNodes from your example (once Putter has
loaded all the data from the other nodes, its partitions become primary and the
previous primary partitions on the other nodes are destroyed). Events like
OBJECT_PUT are fired on backup partitions as well, and that explains why you
observe events for the same keys on different nodes. If you make Putter a
non-data node for the target cache (e.g. by starting it as a client node) then
you will see events only on the ServerNodes.
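
For reference, a minimal sketch of starting Putter as a client node (it reuses
the same config file as the example; the key/value used in put() are illustrative):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class ClientPutter {
        public static void main(String[] args) {
            // Mark this JVM as a client before starting: the node will hold no
            // data partitions, so no EVT_CACHE_OBJECT_PUT events are fired on it.
            Ignition.setClientMode(true);

            try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
                ignite.getOrCreateCache("MyCache").put(1, "value");
            }
        }
    }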

Thu, 11 Oct 2018 at 11:20, drosso:

> Hi Ivan,
> thank you for your interest! here below you can find the code for the 2
> sample programs:
>
> *** ServerNode.java **
>
> package TestATServerMode;
>
> import javax.cache.Cache;
> import javax.cache.event.CacheEntryEvent;
> import javax.cache.event.CacheEntryUpdatedListener;
> import javax.cache.event.EventType;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteException;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
> import org.apache.ignite.cache.query.ContinuousQuery;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.ScanQuery;
> import org.apache.ignite.events.*;
> import org.apache.ignite.lang.IgniteBiPredicate;
> import org.apache.ignite.lang.IgnitePredicate;
>
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ;
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED;
>
> import java.util.UUID;
>
> /**
>  * Starts up an empty node with example compute configuration.
>  */
> public class ServerNode {
> /**
>  * Start up an empty node with example compute configuration.
>  *
>  * @param args
>  *Command line arguments, none required.
>  * @throws IgniteException
>  * If failed.
>  */
> private static final String CACHE_NAME = "MyCache";
>
> @SuppressWarnings("deprecation")
> public static void main(String[] args) throws IgniteException {
> Ignition.start("config/example-ignite.xml");
>
> Ignite ignite = Ignition.ignite();
>
> // Get an instance of named cache.
> final IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME);
>
> // Sample local listener.
> IgnitePredicate<CacheEvent> locLsnr = new IgnitePredicate<CacheEvent>() {
> @Override
> public boolean apply(CacheEvent evt) {
> System.out.println("LOCAL cache event [evt=" + evt.name() +
> ", cacheName=" + evt.cacheName() + ", key=" + evt.key() + ']');
>
> return true; // Return true to continue listening.
> }
> };
>
> // Register event listener for all local cache put events.
> ignite.events().localListen(locLsnr, EVT_CACHE_OBJECT_PUT);
>
>
> }
> }
>
>
>  Putter.java *
>
> package TestATServerMode;
>
> import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
>
> import java.sql.Time;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.events.CacheEvent;
> import org.apache.ignite.lang.IgnitePredicate;
>
>
> @SuppressWarnings("TypeMayBeWeakened")
> public class Putter {
> /** Cache name. */
> private static final String CACHE_NAME = "MyCache";
>
> /**
>  * Executes example.
>  *
>  * @param args Command line arguments, none required.
>  * @throws InterruptedException
>  */
> public static void main(String[] args) {
>
> // Mark this cluster member as client.
> //Ignition.setClientMode(true);
>
> try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
> System.out.println();
>  

Remote Listen Events are available in C#.Net

2018-10-11 Thread Hemasundara Rao
Hi All,
 Are remote listen events available in Ignite C#/.NET?

I can see they are available in Java. If they are available in C#/.NET,
could you please provide sample code for listening to cache expire events?
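
For reference, the Java form being referred to looks roughly like this (a
hedged sketch of the Java API only, not the .NET one; EVT_CACHE_OBJECT_EXPIRED
must also be enabled via IgniteConfiguration.setIncludeEventTypes(...)):

    // Local callback: runs on the node that called remoteListen().
    IgniteBiPredicate<UUID, CacheEvent> locLsnr = (nodeId, evt) -> {
        System.out.println("Expired key: " + evt.key());
        return true; // keep listening
    };

    // Remote filter: runs on every node where the event is fired.
    IgnitePredicate<CacheEvent> rmtFilter = evt -> true;

    ignite.events().remoteListen(locLsnr, rmtFilter,
        EventType.EVT_CACHE_OBJECT_EXPIRED);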


Thanks and Regards,
Hemasundara Rao Pottangi  | Senior Project Leader

HotelHub LLP
Phone: +91 80 6741 8700
Cell: +91 99 4807 7054
Email: hemasundara@hotelhub.com
Website: www.hotelhub.com 


RE: WAL iteration exceptions

2018-10-11 Thread Raymond Wilson
I think you are saying that if I had a persistent store created with Ignite 2.5
and upgraded to 2.6, I might see those errors?



In my case the grid in question was first created with 2.6.



*From:* Dmitriy Pavlov 
*Sent:* Friday, October 5, 2018 9:46 AM
*To:* user@ignite.apache.org
*Subject:* Re: WAL iteration exceptions



Hi,



It is totally OK for some WAL segment to cause such an exception when the end
of the file is reached.



The messages may disappear because newer versions write special records marking
the end of a segment, so the WAL reader almost always knows that the end of the
file has been reached.



Sincerely,

Dmitriy Pavlov



Wed, 3 Oct 2018 at 23:38, Mikael:

Hi!

I got a few of them before with 2.5 and have not seen any yet with 2.6. It was
actually a warning before but was changed to info. I was told it's not a
problem and nothing to worry about, and as far as I can tell no data was ever
lost.

Mikael



On 2018-10-03 at 21:48, Raymond Wilson wrote:

We see instances of the below error appearing in our logs (using Ignite.Net
v2.6 with persistent storage) on server node startup:



2018-10-04 08:38:24:0079 28279832018-10-04
08:38:24,046 [1] INFO  MutableCacheComputeServer. Stopping WAL iteration
due to an exception: Failed to read WAL record at position: 31062042,
ptr=FileWALPointer [idx=33, fileOff=31062042, len=0]



The type of the log message is INFO, but it appears to indicate an issue
with the WAL files in our persistent store. All our data regions are
configured with WalMode = Fsync, so I’m a little surprised to be seeing
what looks like WAL integrity errors.



The next message in the log is this, which indicates Ignite was not
affected:



2018-10-04 08:39:02,116 [6] INFO  MutableCacheComputeServer. Resuming
logging to WAL segment [file=[… very long path…]\0003.wal,
offset=31062042, ver=2]

A few lines later there is a repeat of the initial exception related
message:



2018-10-04 08:39:03,142 [6] INFO  MutableCacheComputeServer. Stopping WAL
iteration due to an exception: Failed to read WAL record at position:
31190597, ptr=FileWALPointer [idx=33, fileOff=31190597, len=0]

Then this line appears:



2018-10-04 08:39:03,813 [6] INFO  MutableCacheComputeServer. Finished
applying WAL changes [updatesApplied=146, time=1272ms]

The node appears to initialize correctly after this logging.



Does anyone else see this behaviour? Is it something to be concerned about
or normal logging to be expected after termination of running server
instances?




Thanks,

Raymond.


Re: Heap size

2018-10-11 Thread Michael Cherkasov
Hi Prasad,

G1 is a good choice for big heaps. If you see long GC pauses you need to
investigate them: maybe GC tuning is required, maybe you just need to increase
the heap size; it depends on your case and is very specific.
Also, please make sure that you use a heap of at most 31 GB or more than about
40 GB; sizes between 31 and 40 GB are wasteful because compressed object
pointers get disabled past ~32 GB. The following article explains why:
https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-java-jvm-memory-oddities/
Also, I want to clarify that Ignite stores data off-heap; heap is usually
required for SQL (Ignite brings data from off-heap to heap), for internal
structures and, of course, for your compute tasks.
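
For illustration, a minimal sketch of sizing the off-heap data region in code
(the 64 GB figure is arbitrary; the JVM heap itself is still set with -Xmx):

    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();

    // Off-heap region where Ignite keeps the actual cache data.
    storageCfg.getDefaultDataRegionConfiguration()
        .setMaxSize(64L * 1024 * 1024 * 1024);

    cfg.setDataStorageConfiguration(storageCfg);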

Thanks,
Mike.

Thu, 11 Oct 2018 at 10:57, Prasad Bhalerao:

> Hi,
>
> Is anyone using on heap cache with more than 30 gb jvm heap size in
> production?
>
> Which gc algorithm are you using?
>
> If yes have you faced any issues relates to long gc pauses?
>
>
> Thanks,
> Prasad
>


Re: Query 3x slower with index

2018-10-11 Thread Dave Harvey
"Ignite will only use one index per table"

I assume you mean "Ignite will only use one index per table per query"?

On Thu, Oct 11, 2018 at 1:55 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> It is a rather lengthy thread and I can’t dive into details right now,
>
> but AFAICS the issue now is making affinity key index to work with a
> secondary index.
>
> The important things to understand are
>
>1. Ignite will only use one index per table
>2. In case of a composite index, it will apply the columns one by one
>3. The affinity key index should always go first as the first step is
>splitting the query by affinity key values
>
>
>
> So, to use index over the affinity key (customer_id) and a secondary index
> (category_id) one needs to create an index
>
> like (customer_id, category_id), in that order, with no columns in between.
>
> Note that index (customer_id, dt, category_id) can’t be used instead of it.
>
> On the other hand, (customer_id, category_id, dt) can - the last part of
> the index will be left unused.
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *eugene miretsky 
> *Sent: *9 October 2018 19:40
> *To: *user@ignite.apache.org
> *Subject: *Re: Query 3x slower with index
>
>
>
> Hi Ilya,
>
>
>
> I have tried it, and got the same performance as when forcing the category
> index in my initial benchmark - the query is 3x slower and uses only one
> thread.
>
>
>
> From my experiments so far it seems like Ignite can either (a) use the
> affinity key and run queries in parallel, or (b) use the index but run the
> query on only one thread.
>
>
>
> Has anybody been able to run OLAP-like queries while using an index?
>
>
>
> Cheers,
>
> Eugene
>
>
>
> On Mon, Sep 24, 2018 at 10:55 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
> Hello!
>
>
>
> I guess that using AFFINITY_KEY as index have something to do with the
> fact that GROUP BY really wants to work per-partition.
>
>
>
> I have the following query for you:
>
>
>
> 1: jdbc:ignite:thin://localhost> explain Select count(*) FROM( Select
> customer_id from (Select customer_id, product_views_app, product_clict_app
> from GA_DATA ga join table(category_id int = ( 117930, 175930,
> 175940,175945,101450)) cats on cats.category_id = ga.category_id) data
> group by customer_id having SUM(product_views_app) > 2 OR
> SUM(product_clict_app) > 1);
> PLAN  SELECT
> DATA__Z2.CUSTOMER_ID AS __C0_0,
> SUM(DATA__Z2.PRODUCT_VIEWS_APP) AS __C0_1,
> SUM(DATA__Z2.PRODUCT_CLICT_APP) AS __C0_2
> FROM (
> SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
> ) DATA__Z2
> /* SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> /++ function ++/
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> /++ PUBLIC.GA_CATEGORY_ID: CATEGORY_ID = CATS__Z1.CATEGORY_ID ++/
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
>  */
> GROUP BY DATA__Z2.CUSTOMER_ID
>
> PLAN  SELECT
> COUNT(*)
> FROM (
> SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
> ) _18__Z3
> /* SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> /++ PUBLIC."merge_scan" ++/
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
>  */
>
>
>
> However, I'm not sure it is "optimal" or not since I have no idea if it
> will perform better or worse on real data. That's why I need a subset of
> data which will make query execution speed readily visible. Unfortunately,
> I can't deduce that from query plan alone.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Mon, 24 Sep 2018 at 16:14, eugene miretsky:
>
> An easy way to reproduce would be to
>
>
>
> 1. Create table
>
> CREATE TABLE GA_DATA (
>
> customer_id bigint,
>
> dt timestamp,
>
> category_id int,
>
> product_views_app int,
>
> product_clict_app int,
>
> product_clict_web int,
>
> PRIMARY KEY (customer_id, dt, category_id)
>
> ) WITH "template=ga_template, backups=0, affinityKey=customer_id";
>
>
>
> 2. Create indexes
>
> · CREATE INDEX ga_customer_id ON GA_Data (customer_id)
>
> · CREATE INDEX ga_pKey ON GA_Data (customer_id, dt, category_id)
>
> · CREATE INDEX ga_category_and_customer_id ON GA_Data
> (category_id, customer_id)
>
> · CREATE INDEX ga_category_id ON GA_Data (category_id)
>
> 3. Run Explain on the following queries while trying forcing using
> different indexes
>
> · Select count(*) FROM(
>
> Select customer_id from GA_DATA  use index 

Re: Role of H2 datbase in Apache Ignite

2018-10-11 Thread eugene miretsky
Thanks!

So does it mean that CacheConfiguration.queryParallelism is really an H2
setting?


On Tue, Oct 9, 2018 at 4:27 PM Stanislav Lukyanov 
wrote:

> In short, Ignite replaces H2’s storage level with its own.
>
> For example, Ignite implements H2’s Index interface with its own off-heap
> data structures underneath.
>
> When Ignite executes an SQL query, it will ask H2 to process it, then H2
> will callback to Ignite’s implementations
>
> of H2’s interfaces (such as Index) to actually retrieve the data.
>
> I guess the on-heap processing is mostly H2, although there is a lot of
> work done by Ignite to make the distributed
>
> map-reduce work like creating temporary tables for intermediate results.
>
>
>
> Stan
>
>
>
> *From: *eugene miretsky 
> *Sent: *9 October 2018 21:52
> *To: *user@ignite.apache.org
> *Subject: *Re: Role of H2 datbase in Apache Ignite
>
>
>
> Hello,
>
>
>
> I have been struggling with this question myself for a while now too.
>
> I think the documents are very ambiguous on how exactly H2 is being used.
>
>
>
> The document that you linked say
>
> "Apache Ignite leverages from H2's SQL query parser and optimizer as well
> as the execution planner. Lastly, *H2 executes a query locally* on a
> particular node and passes a local result to a distributed Ignite SQL
> engine for further processing."
>
>
>
> And
>
> "However, *the data, as well as the indexes, are always stored in the
> Ignite that executes queries* in a distributed and fault-tolerant manner
> which is not supported by H2."
>
>
>
> To me, this leaves a lot of ambiguity on how H2 is leveraged on a single
> Ignite node.  (I get that the Reduce stage, as well as distributed
> transactions, are handled by Ignite, but how about the 'map' stage on a
> single node).
>
>
>
> How is a query executed on a single node?
>
> Example query: Select count(customer_id) from user where (age > 20) group
> by customer_id
>
>
>
> What steps are taken?
>
>1. execution plan: H2 creates an execution plan
>2. data retrieval:  Since data is stored off-heap, it has to be
>brought into heap.  Does H2 have anything to do with this step, or is it
>only Ignite? When are indexes used for that?
>3. Query execution: Once the data is on heap, what executes the Query
>(the group_by, aggregations, filters that were not handled by indexes,
>etc.)? H2 or Ignite?
>
>
>
>
>
>
>
> On Fri, Sep 21, 2018 at 9:27 AM Mikhail 
> wrote:
>
> Hi,
>
> Could you please formulate your question? Because right not your message
> looks like a request for google.
> I think the following article has answer for your question:
> https://apacheignite-sql.readme.io/docs/how-ignite-sql-works
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Heap size

2018-10-11 Thread Prasad Bhalerao
Hi,

Is anyone using on heap cache with more than 30 gb jvm heap size in
production?

Which gc algorithm are you using?

If yes have you faced any issues relates to long gc pauses?


Thanks,
Prasad


Re: Query 3x slower with index

2018-10-11 Thread eugene miretsky
Thanks!

Could you please clarify "In case of a composite index, it will apply the
columns one by one"?

Ignite (or rather H2?) needs to load the data into heap in order to do the
groupBy & aggregations. We were hoping that only data that matches the
category filter would be loaded.
What does "one by one" mean (assuming an index (customer_id, category_id)) when:

   1. The filter is on both customer and category. What data will be
   loaded into heap?
   2. The filter is only on category, and the customer is just used for
   groupBy. Will Ignite
  1. load one customer with all the rows, and apply the category
  filter in heap,
  2. load one customer, but load only the rows that pass the category
  filter in heap, or
  3. load all the events that pass the category filter, and then group
  them by customer?

From our benchmarking so far it seems like 1 is happening.

On Thu, Oct 11, 2018 at 1:28 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> It is a rather lengthy thread and I can’t dive into details right now,
>
> but AFAICS the issue now is making affinity key index to work with a
> secondary index.
>
> The important things to understand are
>
>1. Ignite will only use one index per table
>2. In case of a composite index, it will apply the columns one by one
>3. The affinity key index should always go first as the first step is
>splitting the query by affinity key values
>
>
>
> So, to use index over the affinity key (customer_id) and a secondary index
> (category_id) one needs to create an index
>
> like (customer_id, category_id), in that order, with no columns in between.
>
> Note that index (customer_id, dt, category_id) can’t be used instead of it.
>
> On the other hand, (customer_id, category_id, dt) can - the last part of
> the index will be left unused.
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *eugene miretsky 
> *Sent: *9 October 2018 19:40
> *To: *user@ignite.apache.org
> *Subject: *Re: Query 3x slower with index
>
>
>
> Hi Ilya,
>
>
>
> I have tried it, and got the same performance as when forcing the category
> index in my initial benchmark - the query is 3x slower and uses only one
> thread.
>
>
>
> From my experiments so far it seems like Ignite can either (a) use the
> affinity key and run queries in parallel, or (b) use the index but run the
> query on only one thread.
>
>
>
> Has anybody been able to run OLAP-like queries while using an index?
>
>
>
> Cheers,
>
> Eugene
>
>
>
> On Mon, Sep 24, 2018 at 10:55 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
> Hello!
>
>
>
> I guess that using AFFINITY_KEY as index have something to do with the
> fact that GROUP BY really wants to work per-partition.
>
>
>
> I have the following query for you:
>
>
>
> 1: jdbc:ignite:thin://localhost> explain Select count(*) FROM( Select
> customer_id from (Select customer_id, product_views_app, product_clict_app
> from GA_DATA ga join table(category_id int = ( 117930, 175930,
> 175940,175945,101450)) cats on cats.category_id = ga.category_id) data
> group by customer_id having SUM(product_views_app) > 2 OR
> SUM(product_clict_app) > 1);
> PLAN  SELECT
> DATA__Z2.CUSTOMER_ID AS __C0_0,
> SUM(DATA__Z2.PRODUCT_VIEWS_APP) AS __C0_1,
> SUM(DATA__Z2.PRODUCT_CLICT_APP) AS __C0_2
> FROM (
> SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
> ) DATA__Z2
> /* SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> /++ function ++/
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> /++ PUBLIC.GA_CATEGORY_ID: CATEGORY_ID = CATS__Z1.CATEGORY_ID ++/
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
>  */
> GROUP BY DATA__Z2.CUSTOMER_ID
>
> PLAN  SELECT
> COUNT(*)
> FROM (
> SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
> ) _18__Z3
> /* SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> /++ PUBLIC."merge_scan" ++/
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
>  */
>
>
>
> However, I'm not sure it is "optimal" or not since I have no idea if it
> will perform better or worse on real data. That's why I need a subset of
> data which will make query execution speed readily visible. Unfortunately,
> I can't deduce that from query plan alone.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Mon, 24 Sep 2018 at 16:14, eugene miretsky:
>
> An easy way to reproduce would be to
>
>
>
> 1. Create table
>
> CREATE TABLE GA_DATA (
>
> customer_id 

RE: Ignite on Spring Boot 2.0

2018-10-11 Thread Stanislav Lukyanov
Uhm, don’t have a tested example but it seems pretty trivial.
It would be something like
@Bean
public SpringCacheManager springCacheManager() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("WebGrid");
    // set more Ignite parameters if needed

    SpringCacheManager springCacheManager = new SpringCacheManager();
    springCacheManager.setConfiguration(cfg);
    return springCacheManager;
}

Stan

From: ignite_user2016
Sent: 11 October 2018 20:13
To: user@ignite.apache.org
Subject: RE: Ignite on Spring Boot 2.0

Do you have any sample here? The bean defined in the Ignite configuration would
not work with the Spring Boot context; we need to instantiate the Spring cache
manager within the Spring Boot context.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Query 3x slower with index

2018-10-11 Thread Stanislav Lukyanov
Hi,

It is a rather lengthy thread and I can’t dive into details right now, 
but AFAICS the issue now is making affinity key index to work with a secondary 
index.
The important things to understand are
1) Ignite will only use one index per table
2) In case of a composite index, it will apply the columns one by one
3) The affinity key index should always go first as the first step is splitting 
the query by affinity key values

So, to use index over the affinity key (customer_id) and a secondary index 
(category_id) one needs to create an index 
like (customer_id, category_id), in that order, with no columns in between.
Note that index (customer_id, dt, category_id) can’t be used instead of it.
On the other hand, (customer_id, category_id, dt) can - the last part of the 
index will be left unused.
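
For illustration, assuming `cache` is an IgniteCache bound to the GA_DATA table
from the example below, the suggested index could be created through
SqlFieldsQuery (org.apache.ignite.cache.query.SqlFieldsQuery) like this:

    // Composite index: affinity key (customer_id) first, then the secondary
    // column (category_id), with no other columns in between.
    cache.query(new SqlFieldsQuery(
        "CREATE INDEX ga_customer_and_category_id " +
        "ON GA_Data (customer_id, category_id)")).getAll();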

Thanks,
Stan

From: eugene miretsky
Sent: 9 October 2018 19:40
To: user@ignite.apache.org
Subject: Re: Query 3x slower with index

Hi Ilya, 

I have tried it, and got the same performance as when forcing the category index
in my initial benchmark - the query is 3x slower and uses only one thread.

From my experiments so far it seems like Ignite can either (a) use the affinity
key and run queries in parallel, or (b) use the index but run the query on only
one thread.

Has anybody been able to run OLAP-like queries while using an index?

Cheers,
Eugene

On Mon, Sep 24, 2018 at 10:55 AM Ilya Kasnacheev  
wrote:
Hello!

I guess that using AFFINITY_KEY as index have something to do with the fact 
that GROUP BY really wants to work per-partition.

I have the following query for you:

1: jdbc:ignite:thin://localhost> explain Select count(*) FROM( Select 
customer_id from (Select customer_id, product_views_app, product_clict_app from 
GA_DATA ga join table(category_id int = ( 117930, 175930, 
175940,175945,101450)) cats on cats.category_id = ga.category_id) data group by 
customer_id having SUM(product_views_app) > 2 OR  SUM(product_clict_app) > 1);
PLAN  SELECT
    DATA__Z2.CUSTOMER_ID AS __C0_0,
    SUM(DATA__Z2.PRODUCT_VIEWS_APP) AS __C0_1,
    SUM(DATA__Z2.PRODUCT_CLICT_APP) AS __C0_2
FROM (
    SELECT
    GA__Z0.CUSTOMER_ID,
    GA__Z0.PRODUCT_VIEWS_APP,
    GA__Z0.PRODUCT_CLICT_APP
    FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945, 101450)) 
CATS__Z1
    INNER JOIN PUBLIC.GA_DATA GA__Z0
    ON 1=1
    WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
) DATA__Z2
    /* SELECT
    GA__Z0.CUSTOMER_ID,
    GA__Z0.PRODUCT_VIEWS_APP,
    GA__Z0.PRODUCT_CLICT_APP
    FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945, 101450)) 
CATS__Z1
    /++ function ++/
    INNER JOIN PUBLIC.GA_DATA GA__Z0
    /++ PUBLIC.GA_CATEGORY_ID: CATEGORY_ID = CATS__Z1.CATEGORY_ID ++/
    ON 1=1
    WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
 */
GROUP BY DATA__Z2.CUSTOMER_ID

PLAN  SELECT
    COUNT(*)
FROM (
    SELECT
    __C0_0 AS CUSTOMER_ID
    FROM PUBLIC.__T0
    GROUP BY __C0_0
    HAVING (SUM(__C0_1) > 2)
    OR (SUM(__C0_2) > 1)
) _18__Z3
    /* SELECT
    __C0_0 AS CUSTOMER_ID
    FROM PUBLIC.__T0
    /++ PUBLIC."merge_scan" ++/
    GROUP BY __C0_0
    HAVING (SUM(__C0_1) > 2)
    OR (SUM(__C0_2) > 1)
 */

However, I'm not sure it is "optimal" or not since I have no idea if it will 
perform better or worse on real data. That's why I need a subset of data which 
will make query execution speed readily visible. Unfortunately, I can't deduce 
that from query plan alone.

Regards,
-- 
Ilya Kasnacheev


Mon, 24 Sep 2018 at 16:14, eugene miretsky:
An easy way to reproduce would be to 

1. Create table
CREATE TABLE GA_DATA (
    customer_id bigint,
    dt timestamp,
    category_id int,
    product_views_app int,
    product_clict_app int,
    product_clict_web int,
    PRIMARY KEY (customer_id, dt, category_id)
) WITH "template=ga_template, backups=0, affinityKey=customer_id";

2. Create indexes
• CREATE INDEX ga_customer_id ON GA_Data (customer_id)
• CREATE INDEX ga_pKey ON GA_Data (customer_id, dt, category_id)
• CREATE INDEX ga_category_and_customer_id ON GA_Data (category_id, customer_id)
• CREATE INDEX ga_category_id ON GA_Data (category_id)
3. Run Explain on the following queries while trying forcing using different 
indexes
• Select count(*) FROM( 
Select customer_id from GA_DATA  use index (ga_category_id)
where category_id in (117930, 175930, 175940,175945,101450) 
group by customer_id having SUM(product_views_app) > 2 OR  
SUM(product_clicks_app) > 1 )

• Select count(*) FROM( 
    Select customer_id from GA_DATA ga use index (ga_pKey)
    join table(category_id int = ( 117930, 175930, 175940,175945,101450)) cats 
on cats.category_id = ga.category_id   
    group by customer_id having SUM(product_views_app) > 2 OR  
SUM(product_clicks_app) > 1 
) 

The execution plans will be similar to what I have posted earler. In 
particular, only on of (a) affinty key index, (b) category_id index will be 
used.

On 

RE: Ignite on Spring Boot 2.0

2018-10-11 Thread ignite_user2016
Do you have any sample here? The bean defined in the Ignite configuration would
not work with the Spring Boot context; we need to instantiate the Spring cache
manager within the Spring Boot context.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Determine Node Health?

2018-10-11 Thread Stanislav Lukyanov
You're creating a new cache on each health check call and never destroying
them; of course that leads to a memory leak, and it's also awful for
performance.

Don't create a new cache each time. If you really want to check that cache
operations work, use the same cache every time.
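
A minimal sketch of that approach (the cache name and types are illustrative,
and `ignite` is assumed to be the already-started Ignite instance):

    // Create (or get) the probe cache once at startup and reuse it for every check.
    private final IgniteCache<String, String> monitorCache =
        ignite.getOrCreateCache("health_monitor");

    public boolean healthCheck() {
        monitorCache.put("test", "test");
        return "test".equals(monitorCache.get("test"));
    }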

Thanks,
Stan


From: Jason.G
Sent: 10 October 2018 8:49
To: user@ignite.apache.org
Subject: Re: Determine Node Health?

Hi vgrigorev,

I used your suggestion to do a health check for each node, but I got a memory
leak issue and exited with an OOM error: Java heap space.

Below is my example code: 

// I create one bean to collect the info I want (IP, hostname, create time)
// and then return a JSON string.
IgniteHealthCheckEntity healthCheck = new IgniteHealthCheckEntity();
ClusterNode node = ignite.cluster().localNode();
List<String> addresses = (List<String>) node.addresses();
String ip = addresses.get(0);

List<String> hostnames = (List<String>) node.hostNames();
String hostname = hostnames.get(0);

healthCheck.setServerIp(ip);
healthCheck.setStatus(0);
healthCheck.setServerHostname(hostname);
healthCheck.setMonitorTime(monitorTime);
healthCheck.setClientIp(clientIp);
String cacheName = "test_monitor_" + ipStr + "_" + new Date().getTime();

IgniteCache<String, String> putCache = ignite.createCache(cacheName);
putCache.put("test", "test");
String value = putCache.get("test");
if (!"test".equals(value)) {
    message = "Ignite (" + ip + ") get/put value failed";
    healthCheck.setMessage(message);
    return JSONObject.fromObject(healthCheck).toString();
} else {
    message = "OKOKOK";
    healthCheck.setMessage(message);
    return JSONObject.fromObject(healthCheck).toString();
}





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Can I use limit and offset these two h2 features for pagination

2018-10-11 Thread Stanislav Lukyanov
Yep, `order by` is usually needed for `limit`, otherwise you’ll get random rows 
of the dataset
as by default there is no ordering.

If I'm not mistaken, in Ignite these options work on both the map and reduce
steps. E.g. `limit 100` will first take 100 rows from each node, then combine
them in a temp table on the reducer node and take the first 100 rows from the
combined set.
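
For illustration, a minimal sketch using a hypothetical Person table (`cache`
here is any IgniteCache used to run the query; the page size and offset are
arbitrary). ORDER BY gives the paging a stable order:

    SqlFieldsQuery qry = new SqlFieldsQuery(
        "SELECT id, name FROM Person ORDER BY id LIMIT ? OFFSET ?")
        .setArgs(100, 200); // page size 100, skipping the first 200 rows

    List<List<?>> page = cache.query(qry).getAll();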

Stan

From: kcheng.mvp
Sent: 11 October 2018 19:32
To: user@ignite.apache.org
Subject: RE: Can I use limit and offset these two h2 features for pagination

Thank you very much for the reply!


In standalone H2 mode, if you want `limit` and `offset` to work correctly,
the SQL must include an `order by` clause.

I can't find documentation addressing how H2 works in an Ignite cluster (I just
know it works something like map-reduce).

So I guess that if `limit`, `offset` and `order by` happen in the `reduce`
phase, then it should work.

Not sure whether my understanding is correct or not.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Configuration does not guarantee strict consistencybetweenCacheStore and Ignite data storage upon restarts.

2018-10-11 Thread Stanislav Lukyanov
Refer to this: 
https://apacheignite.readme.io/docs/baseline-topology#section-usage-scenarios
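
For a programmatic route, a minimal sketch (assuming the cluster is already
active and all currently alive server nodes should be in the baseline):

    Ignite ignite = Ignition.ignite();

    // Set the baseline to the current topology version, i.e. include every
    // server node that is online right now (e.g. a newly joined one).
    ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());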

Stan

From: the_palakkaran
Sent: 9 October 2018 22:59
To: user@ignite.apache.org
Subject: Re: Configuration does not guarantee strict 
consistencybetweenCacheStore and Ignite data storage upon restarts.

How do I tell Ignite to include a node in the baseline topology? Isn't it the
hosts that I set in the TCPIpFinder? Also, is it possible to dynamically add a
new node to the cluster?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Can I use limit and offset these two h2 features for pagination

2018-10-11 Thread kcheng.mvp
Thank you very much for the reply!


In standalone H2 mode, if you want `limit` and `offset` to work correctly,
the SQL must include an `order by` clause.

I can't find documentation addressing how H2 works in an Ignite cluster (I just
know it works something like map-reduce).

So I guess that if `limit`, `offset` and `order by` happen in the `reduce`
phase, then it should work.

Not sure whether my understanding is correct or not.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Can I use limit and offset these two h2 features for pagination

2018-10-11 Thread Stanislav Lukyanov
`limit` and `offset` should work, with usual semantics. 

Thanks,
Stan

From: kcheng.mvp
Sent: 11 October 2018 18:59
To: user@ignite.apache.org
Subject: Can I use limit and offset these two h2 features for pagination

I know that in H2 standalone mode we can use the *limit* and *offset*
features (functions) for pagination. I am not sure whether in Ignite I can
still use these two functions for the same purpose?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Can I use limit and offset these two h2 features for pagination

2018-10-11 Thread kcheng.mvp
I know that in H2 standalone mode we can use the *limit* and *offset*
features (functions) for pagination. I am not sure whether in Ignite I can
still use these two functions for the same purpose?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Using annotations to remap fields when deserializing BinaryObjects

2018-10-11 Thread DanieleBosetti
Hi,

Well one (weak) reason is that somebody else could already have put data to
a cache using the "dash" convention, and in that case we are unable to
retrieve this data (eg. how can we (automatically) deserialize the
"user-name" field?)

Also, allowing this mapping would be in line with Hibernate/JPA
(javax.persistence.Column annotation), where you can have whatever naming
convention for database tables and can then freely map to client POJOs (the DB
data comes first, the client POJOs come later).


anyway; what you say makes perfectly sense so, thanks for clarifying this!
(and we'll go with the single name convention).




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: client probe cache metadata

2018-10-11 Thread aealexsandrov
Hi,

You can get any cache's configuration from the Java API:

IgniteCache cache = dr1Hub.getOrCreateCache("CACHE");

CacheConfiguration conf = cache.getConfiguration(CacheConfiguration.class);

System.out.println("");
System.out.println("Cache name " + conf.getName());
System.out.println("Atomicity mode " + conf.getAtomicityMode());
System.out.println("Backups " + conf.getBackups());
System.out.println("");

Output:


Cache name CACHE
Atomicity mode TRANSACTIONAL
Backups 1


Also possible to get it from Mbeans:

1)Start jconsole
2)choose Mbeans tab
3)org.apache->id->gridname->cache_groups->cache_name->attributes

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: client probe cache metadata

2018-10-11 Thread Stanislav Lukyanov
No, seems there are no methods for that.
I assume it could be added to the metadata response. But why do you need it?

Stan

From: wt
Sent: 11 октября 2018 г. 15:12
To: user@ignite.apache.org
Subject: client probe cache metadata

Hi

The REST service has a meta method that returns fields and index info. I
can't see another REST method that would show what a cache's config is like
(backups, atomic or transactional, whether metrics tracking is enabled, and
whatever other relevant cache config there might be). Can I get this from the
client?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: 2 questions about eviction and transactions

2018-10-11 Thread wt
This is brilliant. Thanks for this information.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: 2 questions about eviction and transactions

2018-10-11 Thread Stanislav Lukyanov
Hi,

// Sidenote: better not to ask two unrelated questions in a single email. It 
complicates things if the threads grow.

Roughly speaking, REPLICATED cache is the same as PARTITIONED with an infinite 
number of backups.
The behavior is supposed to always be the same. Some components cut corners 
when using REPLICATED,
but that’s just about performance, functionally you shouldn’t see the 
difference.

On evictions: maxSize in eviction policies is per node, but an entry evicted
on one node will be removed from all nodes.
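
For illustration, a per-node on-heap eviction limit is configured roughly like
this (the cache name, types and maxSize are arbitrary):

    CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

    // On-heap entries are the ones the eviction policy applies to.
    ccfg.setOnheapCacheEnabled(true);

    // maxSize is enforced per node, not across the whole cluster.
    LruEvictionPolicy<Integer, String> plc = new LruEvictionPolicy<>();
    plc.setMaxSize(100_000);
    ccfg.setEvictionPolicy(plc);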

On transactions:
SQL isn't transactional prior to the (upcoming) Ignite 2.7; it effectively
ignores whether a cache is transactional or atomic.
Since 2.7 there will be a new TRANSACTIONAL_SNAPSHOT atomicity mode that
enables transactions in SQL.
TRANSACTIONAL_SNAPSHOT caches can't be joined with caches in other modes; such
a query will throw an exception.

Stan

From: wt
Sent: 11 October 2018 11:47
To: user@ignite.apache.org
Subject: 2 questions about eviction and transactions

hi

I can't find additional information on evictions and transactions that I
need to document.

1) Do the eviction policies apply at a node level or a cluster level? I
understand that with a replicated cache it is probably at a node level, but how
does it work for a partitioned cache?

2) If I have 4 tables and 1 of them is transactional and the other 3 atomic,
and a query that loads the transactional table joins to any of the other
tables, will the transaction fail?


Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite TcpDiscoveryS3IpFinder and AWS VPC

2018-10-11 Thread aealexsandrov
Hi,

Regarding '127.0.0.1#47500', '0:0:0:0:0:0:0:1/0#47500':

These are just different views of the same address (localhost). For example,
Ignite started on 10.0.75.1 could be available using next addresses from the
same node:

>>> Local node addresses: [LAPTOP-I5CE4BEI/0:0:0:0:0:0:0:1,
>>> LAPTOP-I5CE4BEI.mshome.net/10.0.75.1, LAPTOP-I5CE4BEI/127.0.0.1,
>>> LAPTOP-I5CE4BEI.gridgain.local/172.18.24.193, /172.25.4.187,
>>> /192.168.56.1]

0:0:0:0:0:0:0:1/0 is a CIDR notation. You can read more about it here:

https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

Regarding VPC peering and Ignite: I don't think that this scenario was tested.
However, you can try to follow the common suggestions about how it should be
set up:

https://docs.aws.amazon.com/vpc/latest/peering/peering-scenarios.html

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


client probe cache metadata

2018-10-11 Thread wt
Hi

The REST service has a meta method that returns fields and index info. I
can't see another REST method that would show what a cache's config is like
(backups, atomic or transactional, whether metrics tracking is enabled, and
whatever other relevant cache config there might be). Can I get this from the
client?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Using annotations to remap fields when deserializing BinaryObjects

2018-10-11 Thread Stanislav Lukyanov
Hi,

Nope, it doesn’t work like that.
Names of fields in the Java class are always the same as the names of the 
fields in BinaryObject.
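
If the data was already written with dash-style names, a hedged workaround is
to read those fields directly from the binary form instead of deserializing
into a class (the field and type names here come from your example):

    // Read the dash-named field straight from the BinaryObject.
    String name = binaryObject.field("user-name");

    // Writing side, if needed: build the object field by field.
    BinaryObject user = ignite.binary().builder("domain.User")
        .setField("user-name", "Paul")
        .build();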

Frankly, I don't really see a strong use case for the customization you're
asking for.
If it is just to support different naming conventions in different places
(Java, SQL, other clients),
then I'd much prefer simplicity to flexibility and force a single naming
convention across all contexts.

Stan

From: DanieleBosetti
Sent: 11 October 2018 12:31
To: user@ignite.apache.org
Subject: Using annotations to remap fields when deserializing BinaryObjects

Hi,

is it possible to deserialize cache entries to Java objects, and specify
field names mapping using annotations?

For example, let's say cache entries follow a "with-dashes" naming
convention:

binaryObject.toString() = "domain.User[id=... idHash=... user-name=Paul ]"

And I deserialize it with

User user = binaryObject.deserialize();

class User {
  @MapToFieldName("user-name")
  private String name;
}


What can we put in place of "MapToFieldName" so that the "user-name" value
from the binary object is placed into the "name" field?

(I know the @QuerySqlField annotation, but that maps sql field names to
cache field names)


thanks!





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Load data into particular Ignite cache from HDFS

2018-10-11 Thread Stanislav Lukyanov
I think this describes what you want: 
https://apacheignite-fs.readme.io/docs/secondary-file-system

Stan

From: Divya Darshan DD
Sent: 26 September 2018 13:51
To: user@ignite.apache.org
Subject: Load data into particular Ignite cache from HDFS

Can you tell me a method to load files from HDFS into particular cache region 
of Ignite.

I know loadCache can be used but it is expecting parameters in the form of Key 
value pair so not able to understand how to use this method for loading 
complete files

Apologies for any usage of wrong technical terms. 

My use case is simple: Keep data in Ignite cache from HDFS and evict the data 
based on LRU
Any examples with Ignite-Hadoop data eviction with files residing in HDFS or 
any other system will be highly appreciated



RE: Working example for HDFS-Ignite data eviction

2018-10-11 Thread Stanislav Lukyanov
It looks like your use case is having Ignite as a cache for HDFS, as described 
here https://ignite.apache.org/use-cases/hadoop/hdfs-cache.html.
Try using this guide 
https://apacheignite-fs.readme.io/docs/secondary-file-system.

Stan

From: Divya Darshan DD
Sent: 26 September 2018 9:19
To: user@ignite.apache.org
Subject: Working example for HDFS-Ignite data eviction

Is there some working example where data is pulled from HDFS and saved into 
Ignite cache and based on some eviction policy, data is being swapped between 
HDFS and Ignite



Thanks and Regards,
Divya Bamotra



RE: BufferUnderflowException on GridRedisProtocolParser

2018-10-11 Thread Stanislav Lukyanov
Well, you need to wait for the IGNITE-7153 fix then.
Or contribute it! :)
I checked the code, and it seems to be a relatively easy fix. One needs to 
alter the GridRedisProtocolParser 
to use ParserState in the way GridTcpRestParser::parseMemcachePacket does.

Stan

From: Michael Fong
Sent: 11 October 2018 7:15
To: user@ignite.apache.org
Subject: Re: BufferUnderflowException on GridRedisProtocolParser

Hi, 

The symptom looks very similar to IGNITE-7153; the default TCP send buffer
on that environment happens to be 4096:
[root]# cat /proc/sys/net/ipv4/tcp_wmem 
4096 

Regards,

On Thu, Oct 11, 2018 at 10:56 AM Michael Fong  wrote:
Hi, 

Thank you for your response. Not on every request; we only see this for some
specific ones - when elCnt (4270) > buf.limit (4096). We are trying to
narrow down the data set to find the root cause.

Thanks.

Regards,

On Thu, Oct 11, 2018 at 12:08 AM Ilya Kasnacheev  
wrote:
Hello!

Do you see this error on every request, or on some specific ones?

Regards,
-- 
Ilya Kasnacheev


Tue, 9 Oct 2018 at 15:54, Michael Fong:
Hi, all

We are evaluating Ignite compatibility with Redis protocol, and we hit an issue 
as the following:
Does the stacktrace look a bit like IGNITE-7153?

java.nio.BufferUnderflowException
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
        at 
org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
        at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
        at 
org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
        at java.lang.Thread.run(Thread.java:745)
[2018-10-09 
12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol] 
Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
        at java.lang.Thread.run(Thread.java:745)




Using annotations to remap fields when deserializing BinaryObjects

2018-10-11 Thread DanieleBosetti
Hi,

is it possible to deserialize cache entries to Java objects, and specify
field names mapping using annotations?

In example, let's say cache entries follow a "with-dashes" naming
convention:

binaryObject.toString() = "domain.User[id=... idHash=... user-name=Paul ]"

And I deserialize it with 

User user = binaryObject.deserialize();

class User {
  *@MapToFieldName{"user-name"}*
  private String name;
}


What can we put in place of "MapToFieldName" so that the "user-name" value
from the binary object is placed into the "name" field?

(I know the @QuerySqlField annotation, but that maps sql field names to
cache field names)


thanks!





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


2 questions about eviction and transactions

2018-10-11 Thread wt
hi

I can't find additional information on evictions and transactions that I
need to document.

1) Do the eviction policies apply at a node level or a cluster level? I
understand that with a replicated cache it is probably at a node level, but how
does it work for a partitioned cache?

2) If I have 4 tables and 1 of them is transactional and the other 3 atomic,
and a query that loads the transactional table joins to any of the other
tables, will the transaction fail?


Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Is ID generator split brain compliant?

2018-10-11 Thread abatra
It worked. The directory set for IGNITE_HOME was getting cleaned up because it
wasn't on a mount point or Docker volume. Once I set IGNITE_HOME to a directory
that persisted between container stop and start, the ID generator worked as
expected.
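
For reference, the ID generator in question is presumably an atomic sequence,
obtained roughly like this (the sequence name and initial value are
illustrative):

    // create=true creates the sequence if it does not exist yet; with
    // persistence enabled its state lives under the Ignite work directory,
    // which is why a non-persistent IGNITE_HOME reset it.
    IgniteAtomicSequence seq = ignite.atomicSequence("idGenerator", 0, true);

    long nextId = seq.incrementAndGet();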



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Events

2018-10-11 Thread drosso
Hi Ivan,
thank you for your interest! here below you can find the code for the 2
sample programs:

*** ServerNode.java **

package TestATServerMode;

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryUpdatedListener;
import javax.cache.event.EventType;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.events.*;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.lang.IgnitePredicate;

import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;
import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ;
import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED;

import java.util.UUID;

/**
 * Starts up an empty node with example compute configuration.
 */
public class ServerNode {
/**
 * Start up an empty node with example compute configuration.
 *
 * @param args
 *Command line arguments, none required.
 * @throws IgniteException
 * If failed.
 */
private static final String CACHE_NAME = "MyCache";

@SuppressWarnings("deprecation")
public static void main(String[] args) throws IgniteException {
Ignition.start("config/example-ignite.xml");

Ignite ignite = Ignition.ignite();

// Get an instance of named cache.
final IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME);

// Sample local listener.
IgnitePredicate<CacheEvent> locLsnr = new IgnitePredicate<CacheEvent>() {
@Override
public boolean apply(CacheEvent evt) {
System.out.println("LOCAL cache event [evt=" + evt.name() +
", cacheName=" + evt.cacheName() + ", key=" + evt.key() + ']');

return true; // Return true to continue listening.
}
};

// Register event listener for all local cache put events.
ignite.events().localListen(locLsnr, EVT_CACHE_OBJECT_PUT);


}
}


 Putter.java *

package TestATServerMode;

import static org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT;

import java.sql.Time;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.lang.IgnitePredicate;


@SuppressWarnings("TypeMayBeWeakened")
public class Putter {
/** Cache name. */
private static final String CACHE_NAME = "MyCache";

/**
 * Executes example.
 *
 * @param args Command line arguments, none required.
 * @throws InterruptedException 
 */
public static void main(String[] args) {

// Mark this cluster member as client.
//Ignition.setClientMode(true);

try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Myexample started.");

CacheConfiguration cfg = new CacheConfiguration<>();

//cfg.setCacheMode(CacheMode.REPLICATED);
cfg.setName(CACHE_NAME);
//cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

IgnitePredicate<CacheEvent> lsnr = new IgnitePredicate<CacheEvent>() {
@Override public boolean apply(CacheEvent evt) {
System.out.println("Received cache event [evt=" + evt.name() +
", cacheName=" + evt.cacheName() + ", key=" + evt.key() + ']');

return true; // Return true to continue listening.
}
};

try (IgniteCache cache = ignite.getOrCreateCache(cfg)) {
if (ignite.cluster().forDataNodes(cache.getName()).nodes().isEmpty()) {
System.out.println();
System.out.println(">>> This example requires remote cache nodes to be started.");
System.out.println(">>> Please start at least 1 remote cache node.");
System.out.println(">>> Refer to example's javadoc for details on configuration.");
System.out.println();

return;
}

// Register event listener for all local cache put events.
ignite.events().localListen(lsnr,