Re: Ignite Write Behind performance

2016-06-06 Thread amitpa
I did test this. For us, write-behind gets called fine.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Write-Behind-performance-tp5385p5475.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to retrieve data from Collocated Cache with Simple Key

2016-06-06 Thread Denis Magda
Hi Kamal,

There is no need to use any workaround like ScanQueries or iterators. You just 
need to use a valid key to retrieve the data.
The valid key in your example is new AffinityKey<>(key, affKey). It means that 
every time you need to put or get a Person from the cache, you need to use this 
kind of key, where “key” and “affKey” will vary.

Also, it’s not required to use an AffinityKey instance all the time. You are free 
to create your own key implementation:

class PersonKey {

    private int id;

    @AffinityKeyMapped
    private int orgId;

    // hashCode() and equals() implementations go here
}
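
Putting and getting must then always use the full composite key. A minimal sketch of the idea, assuming an already running Ignite instance and a cache named "persons" (both illustrative, not from the thread):

```java
// Composite-key access sketch; cache name and key types are assumptions.
IgniteCache<AffinityKey<Integer>, Person> cache = ignite.cache("persons");

AffinityKey<Integer> key = new AffinityKey<>(personId, orgId);
cache.put(key, person);          // store under the full composite key
Person p = cache.get(key);       // read back with the same composite key

// cache.get(new AffinityKey<>(personId)) would leave the affinity part
// null, which is what triggers the NPE from this thread.
```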

A ticket has been created for the NPE you got when using a wrong key:
https://issues.apache.org/jira/browse/IGNITE-3263

—
Denis

> On Jun 7, 2016, at 7:40 AM, Kamal C  wrote:
> 
> Thanks for your response Vladislav. 
> 
> Both ScanQuery and Iterator traverse the whole cache to find the value. 
> It may not be suitable in my environment as there can be huge number of
> hits.
> 
> I understand that for fast retrieval `key-to-partition` mapping is done. 
> But, In AffinityKey documentation, it's specified that hashcode and equals
> methods are implemented based on simple key. 
> 
> * 
>  * Note that the {@link #equals(Object)} and {@link #hashCode()} methods
>  * delegate directly to the wrapped cache key provided by {@link #key()}
>  * method.
>  * 
> 
> On Mon, Jun 6, 2016 at 10:44 PM, Vladislav Pyatkov  > wrote:
> I am sorry for mistake Kamal...
> 
> On Jun 6, 2016 3:34 PM, "Kamal"  > wrote:
> Hi,
> 
> I've gone through the affinity collocation[1] example to understand how
> data gets collocated across caches. In my example, I found that I'm not able
> to retrieve data from collocated cache with simple key.
> 
> I mean.
> 
> Cache<AffinityKey<Integer>, Person> personCache = ...;
> personCache.get(new AffinityKey<>(key, affKey)); // returns value
> personCache.get(new AffinityKey<>(key)); // throws NPE
> 
> Exception in thread "main" java.lang.NullPointerException
> at
> org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction.partition(RendezvousAffinityFunction.java:428)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.partition(GridCacheAffinityManager.java:206)
> at
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1801)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get(GridDhtAtomicCache.java:339)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4650)
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1391)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:907)
> at
> my.apache.ignite.examples.collocation.CacheCollocationExample.main(CacheCollocationExample.java:69)
> 
> In some scenarios, I have to fetch data from cache by simple key.
> 
> [1]: https://apacheignite.readme.io/docs/affinity-collocation 
> 
> CacheCollocationExample.java
> Company.java
> Person.java
> 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-retrieve-data-from-Collocated-Cache-with-Simple-Key-tp5452.html
>  
> 
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 



Re: Self Join Query As An Alternative To IN clause

2016-06-06 Thread pragmaticbigdata
Great. The query works now, and it is 50% faster than the IN-clause query.
Could you explain the internals of why passing the object array directly
didn't work, and how an array inside an array worked?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Self-Join-Query-As-An-Alternative-To-IN-clause-tp5448p5473.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to retrieve data from Collocated Cache with Simple Key

2016-06-06 Thread Kamal C
Thanks for your response Vladislav.

Both ScanQuery and Iterator traverse the whole cache to find the value.
It may not be suitable in my environment as there can be huge number of
hits.

I understand that for fast retrieval a `key-to-partition` mapping is done.
But, in the AffinityKey documentation, it's specified that the hashCode and
equals methods are implemented based on the simple key.

* 
>  * Note that the {@link #equals(Object)} and {@link #hashCode()} methods
>  * delegate directly to the wrapped cache key provided by {@link #key()}
>  * method.
>  * 
>

On Mon, Jun 6, 2016 at 10:44 PM, Vladislav Pyatkov 
wrote:

> I am sorry for the mistake, Kamal...
> On Jun 6, 2016 3:34 PM, "Kamal"  wrote:
>
>> Hi,
>>
>> I've gone through the affinity collocation[1] example to understand
>> how
>> data gets collocated across caches. In my example, I found that I'm not
>> able
>> to retrieve data from collocated cache with simple key.
>>
>> I mean.
>>
>> Cache<AffinityKey<Integer>, Person> personCache = ...;
>> personCache.get(new AffinityKey<>(key, affKey)); // returns value
>> personCache.get(new AffinityKey<>(key)); // throws NPE
>>
>> Exception in thread "main" java.lang.NullPointerException
>> at
>>
>> org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction.partition(RendezvousAffinityFunction.java:428)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.partition(GridCacheAffinityManager.java:206)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1801)
>> at
>>
>> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get(GridDhtAtomicCache.java:339)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4650)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1391)
>> at
>>
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:907)
>> at
>>
>> my.apache.ignite.examples.collocation.CacheCollocationExample.main(CacheCollocationExample.java:69)
>>
>> In some scenarios, I have to fetch data from cache by simple key.
>>
>> [1]: https://apacheignite.readme.io/docs/affinity-collocation
>> CacheCollocationExample.java
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/n5452/CacheCollocationExample.java
>> >
>> Company.java
>> 
>> Person.java
>> 
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/How-to-retrieve-data-from-Collocated-Cache-with-Simple-Key-tp5452.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>


Re: Ignite Write Behind performance

2016-06-06 Thread bintisepaha
amitpa, we would be interested in learning how this performed for you.
We have implemented a Spring transaction to insert into the database for
write-behind. However, we see that sometimes write-behind is not even called
for some objects that we are certain were just updated in the cache. Have you
noticed that?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Write-Behind-performance-tp5385p5470.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


cluster node attribute based authorization

2016-06-06 Thread Anand Kumar Sankaran
All

I implemented a ClusterNode.userAttributes() based authorization mechanism.  I 
used a JWT Token to establish trust (passed via the userAttribute). This works 
fine.

The problem I have is that the JWT Token expires shortly after initial use.  
Now, if the node leaves the cluster and joins it again, the JWT Token in that 
node would be invalid.

How should I fix this?  Is there a callback I can implement when a node leaves 
a cluster that I can use to create a new JWT token and attach to it?

Any guidance would be appreciated.

--
anand


Re: How to retrieve data from Collocated Cache with Simple Key

2016-06-06 Thread Vladislav Pyatkov
Hello Jamal,

I don't think it is possible, because the data is stored in a particular
partition, which is determined only by the affinity key.

A more useful approach, when you don't know the full key, is to use a
ScanQuery (1) or the cache iterator.

(1): https://apacheignite.readme.io/docs/cache-queries
On Jun 6, 2016 3:34 PM, "Kamal"  wrote:

> Hi,
>
> I've gone through the affinity collocation[1] example to understand how
> data gets collocated across caches. In my example, I found that I'm not
> able
> to retrieve data from collocated cache with simple key.
>
> I mean.
>
> Cache<AffinityKey<Integer>, Person> personCache = ...;
> personCache.get(new AffinityKey<>(key, affKey)); // returns value
> personCache.get(new AffinityKey<>(key)); // throws NPE
>
> Exception in thread "main" java.lang.NullPointerException
> at
>
> org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction.partition(RendezvousAffinityFunction.java:428)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.partition(GridCacheAffinityManager.java:206)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1801)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get(GridDhtAtomicCache.java:339)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4650)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1391)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:907)
> at
>
> my.apache.ignite.examples.collocation.CacheCollocationExample.main(CacheCollocationExample.java:69)
>
> In some scenarios, I have to fetch data from cache by simple key.
>
> [1]: https://apacheignite.readme.io/docs/affinity-collocation
> CacheCollocationExample.java
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/n5452/CacheCollocationExample.java
> >
> Company.java
> 
> Person.java
> 
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/How-to-retrieve-data-from-Collocated-Cache-with-Simple-Key-tp5452.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Running an infinite job? (use case inside) or alternatives

2016-06-06 Thread Alexei Scherbakov
Hi,

What about the following solution:

- Create a cache whose key is a growing long (e.g. IgniteCache<Long, Event>).
- Assign keys using IgniteAtomicSequence [1].
- Listen for cache put events.
- When a put is done and the event's group id is "next", process all entries
  from the cache where id < event.key.

[1] https://apacheignite.readme.io/docs/id-generator
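
A minimal sketch of these steps, with cache and sequence names chosen for illustration:

```java
// Sketch: growing keys from a cluster-wide sequence (names are illustrative).
IgniteAtomicSequence seq = ignite.atomicSequence("eventSeq", 0, true);
IgniteCache<Long, Event> events = ignite.cache("events");

// Producer side: every event gets a strictly growing key.
long id = seq.incrementAndGet();
events.put(id, event);

// Consumer side (on a put event whose group id is "next"): process all
// entries with key < the triggering key, e.g. via a SQL or scan query.
```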

2016-06-05 15:19 GMT+03:00 zshamrock :

> Are there features in Ignite which would support running an infinite (while
> the cluster is up and running) job? For example, continuously reading
> values
> from the distributed queue? So to implement producer/consumer pattern,
> where
> there could be multiple producers, but I want to limit number of consumers,
> ideally per specific key/group or if it is not possible, just to have one
> consumer per queue.
>
> If I asynchronously submit affinity ignite job with `queue.affinityRun()`
> what is the implication of the this job never to finish? Will it consume
> the
> thread from the ExecutorService thread pool on the running node forever
> then?
>
> To give a better a context, this is the problem I am trying to solve (maybe
> there are even other approaches to  solve it, and I am looking into the
> completely wrong direction?):
> - there are application events coming periodically (based on the
> application
> state changes)
> - I have to accumulate these events until the block of the events is
> "complete" (completion is defined by an application rule), as until the
> group is complete nothing can be done/processed
> - when the group is complete I have to process all of the events in the
> group (as one complete chunk), while still accepting new events coming for
> now another "incomplete" group
> - and repeat since the beginning
>
> So, far I came with the following solution:
> - collect and keep all the events in the distributed IgniteQueue
> - when the application notifies the completion of the group, I trigger
> `queue.affinityRun()` (as I have to do a peek before removing the event
> from
> the queue, so I want to run the execution logic on the node where the queue
> is stored, they are small and run in collocated mode, and so peek will not
> do an unnecessary network call)
> [the reason for a peek, is that even if I receive the application event of
> the group completion, due to the way events are stored (in the queue), I
> don't know where the group ends, only where it starts (head of the queue),
> but looking into the event itself, I can detect whether it is still from
> the
> same group, or already from a new incomplete group, this is why I have to
> do
> peek, as if I do poll/take first then I have to the put the element back
> into the head of the queue (which obviously is not possible, as it is a
> queue and not a dequeue), then I have to store this element/event somewhere
> else, and on the next job submitted start with this stored event as a
> "head"
> of the queue, and only then switch back to the real queue. As I don't want
> this extra complexity, I am ready to pay a price for an extra peek before
> the take]
> - implement custom CollisionSpi which will understand whether there is
> already a running job for the given queue, and if so, keeps the newly
> submitted job in the waiting list
> [here again due to the fact how events are stored (in the queue) I don't
> allow multiple jobs running against same queue at the same time, as taking
> the element from the middle of one group already processing group is
> obviously an error, so I have to limit (to 1) the number of parallel jobs
> against the given queue]
> - it also requires to submit a new ignite job (distributed closure) on the
> queue every time the application triggers/generates a completion group
> event, which requires/should schedule a queue processing (also see above on
> the overall number of the simultaneous jobs)
>
> I thought about other alternative solutions, but all of them turned out to
> be more complex, and involve more moving parts (as for example, for the
> distributed queue Ignite manages atomicity, and consistency, with other
> approaches I have to do it all manually, which I just want to minimize) and
> more logic to maintain and ensure correctness.
>
> Is there any other suitable alternative for the problem described above?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Running-an-infinite-job-use-case-inside-or-alternatives-tp5430.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Ignite for Spark on YARN Deployment

2016-06-06 Thread Hongmei Zong
Hi there,

I would like to use "Ignite for Spark" to save the state of Spark jobs in
memory, so that state can be used by later jobs. For shared deployment,
the documentation only offers two ways to deploy an Ignite cluster:
standalone deployment and Mesos deployment. But our Spark clusters
are running on YARN. My question is: is it possible to run Ignite for Spark
on a YARN deployment?

I downloaded and installed Ignite on my machine. Next, I referenced the link
below for YARN Deployment.
http://apacheignite.gridgain.org/docs/yarn-deployment

I created the cluster.properties file and ran the application using the
command:hadoop jar
/u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
/u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
/u/hongmei/apache-ignite/config/cluster.properties 

From the YARN console, the YARN Ignite application looks OK. It shows
running, and 16 containers are allocated for the Ignite application.

After this step, what should I do in order to run Spark with Ignite on a YARN
deployment?

Many thanks!

Mei



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp5465.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Self Join Query As An Alternative To IN clause

2016-06-06 Thread Alexei Scherbakov
Hello,

You should pass the whole array of values as a single first argument:

Object[] params = new Object[] { new Object[] {"p1", "p2", ...} };
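
The reason is that setArgs(Object... args) is a varargs method: an Object[] passed directly is spread over several parameter slots, while the query has only one '?'. Wrapping the array makes it one argument. A sketch, assuming `values` holds the IN-values:

```java
// One '?' in the query, so the whole value list must be ONE argument:
Object[] values = new Object[] { "phone10", "phone100", "phone1000" };
fieldsQuery.setArgs(new Object[] { values });  // array inside an array

// fieldsQuery.setArgs(values) would instead bind "phone10" to parameter 1
// and fail on parameter 2, since the statement has only one parameter.
```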

2016-06-06 13:19 GMT+03:00 pragmaticbigdata :

> I am using apache ignite 1.6. Executing an in clause query on a cache
> containing 1 mil entries took around 1.5 seconds. As a performance
> optimization suggested  here
>   , I tried out a join
> clause query but query binding fails.
>
> SqlFieldsQuery fieldsQuery = new SqlFieldsQuery("select p.name,
> p.price, p.volume, p.discount, p.baseLine, p.uplift, p.FINAL_PRICE,
> p.SALE_PRICE from ProductDetails p join table(" +
> searchColumn.getColumnName() + " char = ?) i on " +
> "p." + searchColumn.getColumnName() + " = i." +
> searchColumn.getColumnName());
>
> fieldsQuery.setArgs(values);  //values is of type Object[]
> List> productDetails =
> Lists.newArrayList();
> Collection> res =
> productCache.query(fieldsQuery).getAll();
>
> The above query fails with an JdbcSQLException: Invalid value "2" for
> parameter "parameterIndex" [90008-175]. The complete trace is
>
> Caused by: class org.apache.ignite.IgniteException: Failed to bind
> parameters: [qry=select p.name, p.price, p.volume, p.discount, p.baseLine,
> p.uplift, p.FINAL_PRICE, p.SALE_PRICE from ProductDetails p join table(NAME
> char = ?) i on p.NAME = i.NAME, params=[phone10, phone100, phone101,
> phone102, phone103, phone104, phone105, phone106, phone107, phone108,
> phone109, phone1000]]
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:811)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:691)
> ... 37 more
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to bind
> parameters: [qry=select p.name, p.price, p.volume, p.discount, p.baseLine,
> p.uplift, p.FINAL_PRICE, p.SALE_PRICE from ProductDetails p join table(NAME
> char = ?) i on p.NAME = i.NAME, params=[phone10, phone100, phone101,
> phone102, phone103, phone104, phone105, phone106, phone107, phone108,
> phone109, phone1000]]
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1787)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:804)
> ... 38 more
> Caused by: javax.cache.CacheException: Failed to bind parameters:
> [qry=select p.name, p.price, p.volume, p.discount, p.baseLine, p.uplift,
> p.FINAL_PRICE, p.SALE_PRICE from ProductDetails p join table(NAME char = ?)
> i on p.NAME = i.NAME, params=[phone10, phone100, phone101, phone102,
> phone103, phone104, phone105, phone106, phone107, phone108, phone109,
> phone1000]]
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1083)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:806)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:804)
> at
>
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1769)
> ... 39 more
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to bind
> parameter [idx=2, obj=phone100, stmt=prep3: select p.name, p.price,
> p.volume, p.discount, p.baseLine, p.uplift, p.FINAL_PRICE, p.SALE_PRICE
> from
> ProductDetails p join table(NAME char = ?) i on p.NAME = i.NAME {1:
> 'phone10'}]
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindObject(IgniteH2Indexing.java:505)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindParameters(IgniteH2Indexing.java:930)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1080)
> ... 43 more
> Caused by: org.h2.jdbc.JdbcSQLException: Invalid value "2" for parameter
> "parameterIndex" [90008-175]
> at
> org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
> at org.h2.message.DbException.get(DbException.java:172)
> at
> org.h2.message.DbException.getInvalidValueException(DbException.java:218)
> at
>
> org.h2.jdbc.JdbcPreparedStatement.setParameter(JdbcPreparedStatement.java:1338)
> at
> org.h2.jdbc.JdbcPreparedStatement.setObject(JdbcPreparedStatement.java:451)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindObject(IgniteH2Indexing.java:502)
> ... 45 more
>
> What could be missing?
>
> Thanks.
>
>
>
> --
> View this message in context:
> 

Re: Ignite : IgniteDataStreamer question about units/valid ranges for perNodeBufferSize, autoFlushFrequency

2016-06-06 Thread Vladislav Pyatkov
Hello,

I do not see any restrictions other than reasonable ones:

perNodeBufferSize: any positive value (a count of entries, not bytes).
autoFlushFrequency: any positive value in milliseconds, or 0 when auto flush
is disabled.
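
A minimal configuration sketch (cache name and values are illustrative):

```java
// Both knobs in context; the streamer also flushes on close().
try (IgniteDataStreamer<Integer, String> streamer =
         ignite.dataStreamer("myCache")) {
    streamer.perNodeBufferSize(1024);    // entries per node buffer, not bytes
    streamer.autoFlushFrequency(2000L);  // flush every 2000 ms; 0 disables
    streamer.addData(1, "value");
}
```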

On Mon, Jun 6, 2016 at 11:33 AM, M Singh  wrote:

> Is there any valid range for these attributes ?
>
>
> On Monday, June 6, 2016 1:31 AM, M Singh  wrote:
>
>
> Thanks Vladislav for the clarification.
>
>
> On Monday, June 6, 2016 12:45 AM, Vladislav Pyatkov 
> wrote:
>
>
> Hello,
>
> 1) perNodeBufferSize - is the number of entries in the buffer.
> 2) autoFlushFrequency - in milliseconds
>
> On Sun, Jun 5, 2016 at 8:14 PM, M Singh  wrote:
>
> Hi:
>
> I was looking at the javadoc for some of the methods in this interface and
> am not sure of units as well as the ranges for these allowed values.  The
> impl class just checks for positive argument.
>
> If anyone has any pointers, please let me know.  Thanks
>
>
> /**
>  * Gets size of per node key-value pairs buffer.
>  *
>  * @return Per node buffer size.
>  */
> public int perNodeBufferSize();
>
> Is buffer size in bytes or count of items ?
>
>   /**
> * Sets automatic flush frequency. Essentially, this is the time after
> which the
> * streamer will make an attempt to submit all data added so far to
> remote nodes.
>  * Note that there is no guarantee that data will be delivered after
> this concrete
>  * attempt (e.g., it can fail when topology is changing), but it won't
> be lost anyway.
>  * 
>  * If set to {@code 0}, automatic flush is disabled.
>  * 
>  * Automatic flush is disabled by default (default value is {@code 0}).
>  *
>  * @param autoFlushFreq Flush frequency or {@code 0} to disable
> automatic flush.
>  * @see #flush()
>  */
> public void autoFlushFrequency(long autoFlushFreq);
>
> Is the freq in times/sec or every millis etc ?
>
>
>
>
>
> --
> Vladislav Pyatkov
>
>
>
>
>


-- 
Vladislav Pyatkov


Is it possible to disable TcpRestProtocol?

2016-06-06 Thread Dave
Hi,

I've recently started using Apache Ignite in a project and am looking to put
it into production in the next 4 weeks. Our production environment has
tight constraints on port usage. So far, following the manuals, I have
remapped TcpDiscovery and TcpCommunication to allowed ports, but I have
noticed that a TcpRestProtocol is being started on port 11211.

I was wondering if this is needed for normal operation and if not can it be
disabled?

Apologies if this is answered somewhere else; I have been looking but have
not been able to find any information on this.

Many Thanks,

Dave



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-it-possible-to-disable-TcpRestProtocol-tp5461.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: listening for events on transactions

2016-06-06 Thread limabean
Hi Alexey,

I did poke around on continuous queries before.
Based on your recommendation I will take another look to see if they solve
my architecture pattern.
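
For reference, a continuous query registers a listener that fires on each subsequent cache update, which is the closest built-in hook to a "transaction committed" notification. A sketch with illustrative key/value types:

```java
// Sketch: react to cache updates as they happen (types are illustrative).
ContinuousQuery<Long, Trade> qry = new ContinuousQuery<>();
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends Long, ? extends Trade> e : events)
        System.out.println("updated key: " + e.getKey());
});
QueryCursor<?> cur = cache.query(qry);  // keep the cursor; closing it
                                        // deregisters the listener
```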

Transaction listeners/messages are a common pattern with other technologies.

Here is a suggestion of how Ignite might evolve to be better in this area:

I envision an Ignite server in the grid acting as a transaction coordinator
(I have Cassandra's behavior in mind when thinking through this).

The client code may still issue the final "commit" API call, but this would
simply send an indicator to the coordinator in the grid, which would handle
the commit across the grid with the data owners. Then this coordinator could
generate a transaction message to any registered listeners AND return the
transaction status back to the client.

Guessing about how Ignite works today, there appears to be an assumption
that the client, even the end-user client code, has to be the coordinator of
the transaction. But I don't think it has to remain this way. A model more
similar to Cassandra's (I'm not talking about Cassandra transactions, which
are only lightweight and query-specific, but rather the way a client
interacts with a Cassandra cluster) might be an interesting direction for
Ignite to evolve towards.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/listening-for-events-on-transactions-tp5346p5460.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite faster with startup order

2016-06-06 Thread amitpa
Hi,

I have an application which embeds an Apache Ignite instance in a TCP server.
I have another process which starts another Ignite instance.

All client requests go to the TCP server, which starts an Ignite transaction
and does some inserts.

I have observed a peculiar thing: when I start the TCP process first and then
the other Ignite process, the application does the puts 80% faster.

The other way round makes the application 80% slower.

Why is this so?




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-faster-with-startup-order-tp5459.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: listening for events on transactions

2016-06-06 Thread limabean
Hey Denis,

Thank you for this suggestion. I plan to take a serious look at what you
suggest, to see if it will work for me, and will let you know.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/listening-for-events-on-transactions-tp5346p5458.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread pragmaticbigdata
Great. Works now. Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-connect-monitor-ignite-server-through-jmx-client-tp5420p5456.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread Denis Magda
Got you. If you start the node this way (programmatically), then IGNITE_JMX_PORT 
won’t work regardless of the operating system.

You need to pass the following system properties to your Java process (the one 
that calls Ignition.start):
-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.port={preferred_port}
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false
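
For example, a full launch command might look like this (port, classpath and main class are placeholders):

```shell
# Illustrative launch; replace the port, classpath and main class with yours.
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=49112 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -cp app.jar com.example.IgniteStarter
```

After that, a JMX client such as JConsole can attach to the chosen port.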

—
Denis

> On Jun 6, 2016, at 3:25 PM, pragmaticbigdata  wrote:
> 
> How do you start ignite in verbose mode programmatically? I am starting
> ignite with Ignition.start("conf.xml");
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-connect-monitor-ignite-server-through-jmx-client-tp5420p5454.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread Denis Magda
Start the node with “-v” flag like this “ignite.bat -v" and share the logs.

—
Denis

> On Jun 6, 2016, at 3:18 PM, pragmaticbigdata  wrote:
> 
> echo %IGNITE_JMX_PORT% gives the port that I have set. I haven't set it
> through the registry. I have set it from the UI (Advanced Settings ->
> Environment Variables -> Add).
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-connect-monitor-ignite-server-through-jmx-client-tp5420p5451.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread Denis Magda
How do you set this variable on the Windows machine? What does “echo 
%IGNITE_JMX_PORT%” return?

If you set it in the Windows registry, don’t forget to restart your 
command-line terminal.

—
Denis

> On Jun 6, 2016, at 3:09 PM, pragmaticbigdata  wrote:
> 
> Hi Denis,
> 
> Setting an environment variable worked out when starting ignite from command
> line on a linux machine but when I set the environment variable on my
> windows desktop and start ignite programmatically, it doesn't work. 
> 
> What are the alternatives?
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-connect-monitor-ignite-server-through-jmx-client-tp5420p5449.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Self Join Query As An Alternative To IN clause

2016-06-06 Thread pragmaticbigdata
I am using Apache Ignite 1.6. Executing an IN-clause query on a cache
containing 1 million entries took around 1.5 seconds. As a performance
optimization suggested here, I tried out a join query instead, but query
binding fails.

SqlFieldsQuery fieldsQuery = new SqlFieldsQuery("select p.name,
p.price, p.volume, p.discount, p.baseLine, p.uplift, p.FINAL_PRICE,
p.SALE_PRICE from ProductDetails p join table(" +
searchColumn.getColumnName() + " char = ?) i on " +
"p." + searchColumn.getColumnName() + " = i." +
searchColumn.getColumnName());

fieldsQuery.setArgs(values);  //values is of type Object[]
List> productDetails =
Lists.newArrayList();
Collection> res =
productCache.query(fieldsQuery).getAll();

The above query fails with a JdbcSQLException: Invalid value "2" for
parameter "parameterIndex" [90008-175]. The complete trace is:

Caused by: class org.apache.ignite.IgniteException: Failed to bind
parameters: [qry=select p.name, p.price, p.volume, p.discount, p.baseLine,
p.uplift, p.FINAL_PRICE, p.SALE_PRICE from ProductDetails p join table(NAME
char = ?) i on p.NAME = i.NAME, params=[phone10, phone100, phone101,
phone102, phone103, phone104, phone105, phone106, phone107, phone108,
phone109, phone1000]]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:811)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:691)
... 37 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to bind
parameters: [qry=select p.name, p.price, p.volume, p.discount, p.baseLine,
p.uplift, p.FINAL_PRICE, p.SALE_PRICE from ProductDetails p join table(NAME
char = ?) i on p.NAME = i.NAME, params=[phone10, phone100, phone101,
phone102, phone103, phone104, phone105, phone106, phone107, phone108,
phone109, phone1000]]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1787)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.queryTwoStep(GridQueryProcessor.java:804)
... 38 more
Caused by: javax.cache.CacheException: Failed to bind parameters:
[qry=select p.name, p.price, p.volume, p.discount, p.baseLine, p.uplift,
p.FINAL_PRICE, p.SALE_PRICE from ProductDetails p join table(NAME char = ?)
i on p.NAME = i.NAME, params=[phone10, phone100, phone101, phone102,
phone103, phone104, phone105, phone106, phone107, phone108, phone109,
phone1000]]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1083)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:806)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:804)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1769)
... 39 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to bind
parameter [idx=2, obj=phone100, stmt=prep3: select p.name, p.price,
p.volume, p.discount, p.baseLine, p.uplift, p.FINAL_PRICE, p.SALE_PRICE from
ProductDetails p join table(NAME char = ?) i on p.NAME = i.NAME {1:
'phone10'}]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindObject(IgniteH2Indexing.java:505)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindParameters(IgniteH2Indexing.java:930)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryTwoStep(IgniteH2Indexing.java:1080)
... 43 more
Caused by: org.h2.jdbc.JdbcSQLException: Invalid value "2" for parameter
"parameterIndex" [90008-175]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:332)
at org.h2.message.DbException.get(DbException.java:172)
at
org.h2.message.DbException.getInvalidValueException(DbException.java:218)
at
org.h2.jdbc.JdbcPreparedStatement.setParameter(JdbcPreparedStatement.java:1338)
at
org.h2.jdbc.JdbcPreparedStatement.setObject(JdbcPreparedStatement.java:451)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindObject(IgniteH2Indexing.java:502)
... 45 more

What could be missing?
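A plausible cause (an assumption inferred from the error, not confirmed in this thread): `setArgs(Object... args)` is a varargs method, so passing the `Object[]` directly spreads it into one positional parameter per element, while the query text contains only a single `?` - hence H2 rejecting parameter index 2. The spreading behavior is plain Java varargs:

```java
public class VarargsSpread {
    // Same shape as SqlFieldsQuery.setArgs(Object... args): a varargs parameter.
    static int argCount(Object... args) {
        return args.length;
    }

    public static void main(String[] argv) {
        Object[] values = {"phone10", "phone100", "phone101"};

        // Passing the array directly spreads it: one positional arg per element.
        System.out.println(argCount(values));                 // 3

        // Wrapping it passes the whole array as a single argument,
        // which is what a query with a single '?' placeholder needs.
        System.out.println(argCount(new Object[]{values}));   // 1
    }
}
```

If that is indeed the cause, wrapping the array - `fieldsQuery.setArgs(new Object[]{values})` - would bind `values` as one array-valued argument for the single `?`, assuming H2's table() construct accepts an array parameter there.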

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Self-Join-Query-As-An-Alternative-To-IN-clause-tp5448.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite : Slow Client

2016-06-06 Thread M Singh
Hi:
I have a few questions about slow clients:
1. If a slow client is disconnected, what happens to its event queue?
2. If there are multiple clients using the same query - do they share the same queue, and if
so, does each client get all the events, or are the events shared across all clients of that
queue (just like a JMS queue)?
3. When a slow client reconnects - does it get events from its previous queue?
4. If #3 is true, then while the slow client is disconnected, is its queue still gathering
events while the client is trying to reconnect?
Thanks

re: ignite in-memory sql query performance issue

2016-06-06 Thread Zhengqingzheng
Hi Vladimir,
I have tried to reset the group index definition.
Using gId and oId as the group index, the query time dropped to 16 ms.

In order to speed up the SQL queries, do I need to set all the possible group
indexes?

Best regards,
Kevin

发件人: Vladimir Ozerov [mailto:voze...@gridgain.com]
发送时间: 2016年6月6日 16:10
收件人: user@ignite.apache.org
主题: Re: ignite in-memory sql query performance issue

Hi Kevin,

Could you please provide the source code of SelectedClass and estimate number 
of entries in the cache? As Vladislav mentioned, most probably this is a matter 
of setting indexes on relevant fields. If you provide the source code, we will 
be able to give you exact example on how to do that.

Vladimir.

On Mon, Jun 6, 2016 at 5:56 AM, Zhengqingzheng 
> wrote:
Hi there,
When using a SQL query to get a list of objects, I find that the performance is
really slow. I am wondering, is this normal?
I tried to call a sql query as follows:
String qryStr = "select * from SelectedClass where  field1= ? and field2=?";
SqlQuery qry = new SqlQuery(SelectedClass.class, qryStr);
qry.setArgs( "97901336", "a88");

If I call getAll() method like this:
List<Cache.Entry<BinaryObject, BinaryObject>> result =
cache.withKeepBinary().query(qry).getAll();
It took 160ms to get all the objects (only two objects inside the list)

it takes 1ms to get a querycursor object, like this:
 QueryCursor qc = cache.withKeepBinary().query(qry);
But it still needs 160 ms to put the objects into a list and return.
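One plausible reading of these timings (an assumption about the mechanics, not a measurement from this thread): the cursor is returned quickly because results are fetched lazily, and the full cost is only paid when the rows are actually drained - whether by `getAll()` or by iterating. A minimal plain-Java sketch of that lazy-vs-eager difference:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class LazyVsEager {
    // Simulate a result source where each row costs work to fetch;
    // fetchCount[0] records how many rows have actually been pulled.
    static Iterator<String> expensiveRows(int n, long[] fetchCount) {
        return new Iterator<String>() {
            int i = 0;
            public boolean hasNext() { return i < n; }
            public String next() { fetchCount[0]++; return "row-" + i++; }
        };
    }

    public static void main(String[] args) {
        long[] fetches = {0};

        // "Opening the cursor" does no row work yet (cheap, like the 1 ms case).
        Iterator<String> cursor = expensiveRows(1000, fetches);
        System.out.println("after open: " + fetches[0]);   // 0

        // Draining into a list pays the full cost (like the 160 ms getAll case).
        List<String> all = new ArrayList<>();
        cursor.forEachRemaining(all::add);
        System.out.println("after drain: " + fetches[0]);  // 1000
    }
}
```

So the 1 ms vs 160 ms gap would not mean the cursor path avoids the work - it only defers it until the rows are consumed.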

Best regards,
Kevin






re: ignite in-memory sql query performance issue

2016-06-06 Thread Zhengqingzheng
Hi Vladimir,
I did define a group index using the orderedGroups annotation.
My real query string is : "select * from UniqueField where  gId= ? and oId=?";
And there is no group index defined for gId and oId.


I have 12 million (actually 11,770,000) records in the cache.

My SelectedClass is defined as follows:
package com.huawei.soa.ignite.test;

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class UniqueField implements Serializable
{

@QuerySqlField
private String orgId;

@QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
name="groupIdx", order=0, descending = true)})
private String oId;

@QuerySqlField(index=true)
private String gId;

@QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
name="groupIdx", order=1, descending = true)})
private int fNum;

@QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
name="groupIdx", order=2, descending = true)})
private String msg;

@QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
name="groupIdx", order=3, descending = true)})
private BigDecimal num;

@QuerySqlField(index=true, orderedGroups={@QuerySqlField.Group(
name="groupIdx", order=4, descending = true)})
private Date date;

public UniqueField() {}

public UniqueField(
String orgId,
String oId,
String gId,
int fNum,
String msg,
BigDecimal num,
Date date
){
this.orgId=orgId;
this.oId=oId;
this.gId = gId;
this.fNum = fNum;
this.msg = msg;
this.num = num;
this.date = date;
}

public String getOrgId()
{
return orgId;
}

public void setOrgId(String orgId)
{
this.orgId = orgId;
}

public String getOId()
{
return oId;
}

public void setOId(String oId)
{
this.oId = oId;
}

public String getGId()
{
return gId;
}

public void setGId(String gId)
{
this.gId = gId;
}

public int getFNum()
{
return fNum;
}

public void setFNum(int fNum)
{
this.fNum = fNum;
}

public String getMsg()
{
return msg;
}

public void setMsg(String msg)
{
this.msg = msg;
}

public BigDecimal getNum()
{
return num;
}

public void setNum(BigDecimal num)
{
this.num = num;
}

public Date getDate()
{
return date;
}

public void setDate(Date date)
{
this.date = date;
}

}

发件人: Vladimir Ozerov [mailto:voze...@gridgain.com]
发送时间: 2016年6月6日 16:10
收件人: user@ignite.apache.org
主题: Re: ignite in-memory sql query performance issue

Hi Kevin,

Could you please provide the source code of SelectedClass and estimate number 
of entries in the cache? As Vladislav mentioned, most probably this is a matter 
of setting indexes on relevant fields. If you provide the source code, we will 
be able to give you exact example on how to do that.

Vladimir.

On Mon, Jun 6, 2016 at 5:56 AM, Zhengqingzheng 
> wrote:
Hi there,
When using sql query to get a list of objects, I find that the performance is 
really slow. I am wondering, is this normal?
I tried to call a sql query as follows:
String qryStr = "select * from SelectedClass where  field1= ? and field2=?";
SqlQuery qry = new SqlQuery(SelectedClass.class, qryStr);
qry.setArgs( "97901336", "a88");

If I call getAll() method like this:
List<Cache.Entry<BinaryObject, BinaryObject>> result =
cache.withKeepBinary().query(qry).getAll();
It took 160ms to get all the objects (only two objects inside the list)

it takes 1ms to get a querycursor object, like this:
 QueryCursor qc = cache.withKeepBinary().query(qry);
But still need 160ms to put the objects into a list and return;

Best regards,
Kevin






Re: Ignite : IgniteDataStreamer question about units/valid ranges for perNodeBufferSize, autoFlushFrequency

2016-06-06 Thread M Singh
Is there any valid range for these attributes ? 

On Monday, June 6, 2016 1:31 AM, M Singh  wrote:
 

 Thanks Vladislav for the clarification. 

On Monday, June 6, 2016 12:45 AM, Vladislav Pyatkov  
wrote:
 

 Hello,
1) perNodeBufferSize - the number of entries in the buffer.
2) autoFlushFrequency - in milliseconds.
On Sun, Jun 5, 2016 at 8:14 PM, M Singh  wrote:

Hi:
I was looking at the javadoc for some of the methods in this interface and am 
not sure of units as well as the ranges for these allowed values.  The impl 
class just checks for positive argument.  
If anyone has any pointers, please let me know.  Thanks

    /**
     * Gets size of per node key-value pairs buffer.
     *
     * @return Per node buffer size.
     */
    public int perNodeBufferSize();

Is buffer size in bytes or count of items?

    /**
     * Sets automatic flush frequency. Essentially, this is the time after which the
     * streamer will make an attempt to submit all data added so far to remote nodes.
     * Note that there is no guarantee that data will be delivered after this concrete
     * attempt (e.g., it can fail when topology is changing), but it won't be lost anyway.
     *
     * If set to {@code 0}, automatic flush is disabled.
     *
     * Automatic flush is disabled by default (default value is {@code 0}).
     *
     * @param autoFlushFreq Flush frequency or {@code 0} to disable automatic flush.
     * @see #flush()
     */
    public void autoFlushFrequency(long autoFlushFreq);

Is the freq in times/sec or every millis etc?





-- 
Vladislav Pyatkov

   

  

Re: Ignite : IgniteDataStreamer question about units/valid ranges for perNodeBufferSize, autoFlushFrequency

2016-06-06 Thread M Singh
Thanks Vladislav for the clarification. 

On Monday, June 6, 2016 12:45 AM, Vladislav Pyatkov  
wrote:
 

 Hello,
1) perNodeBufferSize - the number of entries in the buffer.
2) autoFlushFrequency - in milliseconds.
On Sun, Jun 5, 2016 at 8:14 PM, M Singh  wrote:

Hi:
I was looking at the javadoc for some of the methods in this interface and am 
not sure of units as well as the ranges for these allowed values.  The impl 
class just checks for positive argument.  
If anyone has any pointers, please let me know.  Thanks

    /**
     * Gets size of per node key-value pairs buffer.
     *
     * @return Per node buffer size.
     */
    public int perNodeBufferSize();

Is buffer size in bytes or count of items?

    /**
     * Sets automatic flush frequency. Essentially, this is the time after which the
     * streamer will make an attempt to submit all data added so far to remote nodes.
     * Note that there is no guarantee that data will be delivered after this concrete
     * attempt (e.g., it can fail when topology is changing), but it won't be lost anyway.
     *
     * If set to {@code 0}, automatic flush is disabled.
     *
     * Automatic flush is disabled by default (default value is {@code 0}).
     *
     * @param autoFlushFreq Flush frequency or {@code 0} to disable automatic flush.
     * @see #flush()
     */
    public void autoFlushFrequency(long autoFlushFreq);

Is the freq in times/sec or every millis etc?





-- 
Vladislav Pyatkov

  

Re: ignite in-memory sql query performance issue

2016-06-06 Thread Vladimir Ozerov
Hi Kevin,

Could you please provide the source code of SelectedClass and estimate
number of entries in the cache? As Vladislav mentioned, most probably this
is a matter of setting indexes on relevant fields. If you provide the
source code, we will be able to give you exact example on how to do that.

Vladimir.

On Mon, Jun 6, 2016 at 5:56 AM, Zhengqingzheng 
wrote:

> Hi there,
>
> When using sql query to get a list of objects, I find that the performance
> is really slow. I am wondering, is this normal?
>
> I tried to call a sql query as follows:
>
> String qryStr = "select * from SelectedClass where  field1= ? and
> field2=?";
>
> SqlQuery qry = new
> SqlQuery(SelectedClass.class, qryStr);
>
> qry.setArgs( "97901336", "a88");
>
>
>
> If I call getAll() method like this:
>
> List<Cache.Entry<BinaryObject, BinaryObject>> result =
> cache.withKeepBinary().query(qry).getAll();
>
> It took 160ms to get all the objects (only two objects inside the list)
>
>
>
> it takes 1ms to get a querycursor object, like this:
>
>  QueryCursor qc = cache.withKeepBinary().query(qry);
>
> But still need 160ms to put the objects into a list and return;
>
>
>
> Best regards,
>
> Kevin
>
>
>
>
>
>
>


Re: Ignite : IgniteDataStreamer question about units/valid ranges for perNodeBufferSize, autoFlushFrequency

2016-06-06 Thread Vladislav Pyatkov
Hello,

1) perNodeBufferSize - the number of entries in the buffer.
2) autoFlushFrequency - in milliseconds.
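To make those units concrete - an entry-count threshold plus an explicit flush - here is a minimal sketch. This is hypothetical illustration, not Ignite's implementation: IgniteDataStreamer batches per node and layers a time-based trigger (autoFlushFrequency, in milliseconds) on top of the count-based one shown here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Count-based buffering with manual flush; the threshold is a
 *  number of entries, not bytes. */
class CountBuffer<T> {
    private final int perNodeBufferSize;   // entries, not bytes
    private final Consumer<List<T>> sink;  // receives each flushed batch
    private final List<T> buf = new ArrayList<>();

    CountBuffer(int perNodeBufferSize, Consumer<List<T>> sink) {
        this.perNodeBufferSize = perNodeBufferSize;
        this.sink = sink;
    }

    void add(T entry) {
        buf.add(entry);
        if (buf.size() >= perNodeBufferSize)
            flush();                       // auto-flush once the entry count is reached
    }

    void flush() {                         // what a periodic timer would call every N ms
        if (!buf.isEmpty()) {
            sink.accept(new ArrayList<>(buf));
            buf.clear();
        }
    }
}
```

With perNodeBufferSize = 2, adding three entries produces one automatic batch of two, and a final flush() delivers the remaining one.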

On Sun, Jun 5, 2016 at 8:14 PM, M Singh  wrote:

> Hi:
>
> I was looking at the javadoc for some of the methods in this interface and
> am not sure of units as well as the ranges for these allowed values.  The
> impl class just checks for positive argument.
>
> If anyone has any pointers, please let me know.  Thanks
>
>
> /**
>  * Gets size of per node key-value pairs buffer.
>  *
>  * @return Per node buffer size.
>  */
> public int perNodeBufferSize();
>
> Is buffer size in bytes or count of items ?
>
>   /**
> * Sets automatic flush frequency. Essentially, this is the time after
> which the
> * streamer will make an attempt to submit all data added so far to
> remote nodes.
>  * Note that there is no guarantee that data will be delivered after
> this concrete
>  * attempt (e.g., it can fail when topology is changing), but it won't
> be lost anyway.
>  * 
>  * If set to {@code 0}, automatic flush is disabled.
>  * 
>  * Automatic flush is disabled by default (default value is {@code 0}).
>  *
>  * @param autoFlushFreq Flush frequency or {@code 0} to disable
> automatic flush.
>  * @see #flush()
>  */
> public void autoFlushFrequency(long autoFlushFreq);
>
> Is the freq in times/sec or every millis etc ?
>
>
>


-- 
Vladislav Pyatkov


Re: Persistent storage with Ignite C++

2016-06-06 Thread Graham Bull
Hi Denis/Igor, many thanks for the info, I'll give this a try.

Graham


On 3 June 2016 at 16:34, Igor Sapego  wrote:

> Hi Guys,
>
> As far as I understand, yes, this is going to work as long as all
> necessary user Java classes are available for the C++ node.
>
> Best Regards,
> Igor
>
> On Fri, Jun 3, 2016 at 3:53 PM, Denis Magda  wrote:
>
>> Hi Graham,
>>
>> You can specify a Java-based implementation of a persistent storage in
>> Spring XML configuration of Ignite and start a C++ node with this
>> configuration. After that the data that is stored on C++ node should be
>> persisted as well.
>>
>> *Igor Sapego*, please correct me if I’m wrong.
>>
>> —
>> Denis
>>
>> On Jun 2, 2016, at 1:38 PM, Graham Bull  wrote:
>>
>> We'd like to use Ignite with persistent storage. However, I'm not sure if
>> our scenario is feasible.
>>
>> We'll be using Ignite C++. From the documentation it seems as though this
>> provides a limited subset of the full Java version. There's no compute
>> functionality, but that's coming soon. But more importantly (for us)
>> there's no persistent storage functionality.
>>
>> I understand that Ignite C++ can cluster with Ignite Java. If that's
>> indeed the case, and the Java instances are able to persist the data they
>> contain, then what happens to the data on the C++ instances? Will it be
>> persisted, or will it be lost when the C++ instances are shut down?
>>
>> The thinking was that initially we'd have a cluster consisting of one C++
>> instance and one or more Java instances. And later on, once compute
>> functionality is available, we'd move everything to C++.
>>
>> Thanks in advance,
>>
>> Graham
>>
>>
>>
>


Re: One failing node stalling the whole cluster

2016-06-06 Thread Kristian Rosenvold
We're also seeing this total hang of our replicated cache cluster when a
single node goes totally lethargic due to too heavy memory load. The
culprit node typically does not respond to "jstack" due to either excessive
memory load or missing safepoints. Sometimes we need to do kill -9 to get
the node down.

I have been planning to do a jstack on the *remaining* nodes to try to
figure out why they appear to not be timing out the non-responsive node. I
will upgrade to 1.6 and see if I can pinpoint the problem there.

Kristian




2016-06-05 21:18 GMT+02:00 DLopez :

> Hi Dennis,
> I agree that it shouldn't happen but I have been able to reproduce it in
> other machines consistently and the only "connection" that they have is
> that
> they share the Ignite replicated caches.
>
> One machine is basically reading from several caches and filling up some
> data to be returned, I can have 25 clients requesting some data and
> everything is fine. The other one is a different application, that
> basically
> fills up the replicated caches from the DB but receives no direct requests.
> Someone forgot to control a batch job in this second application and it can
> be run many times, consuming up all the memory in this second application.
> The strange thing is that when the second applications start GCing like
> crazy, the first one starts going slower and slower up to a point when it
> stops answering requests. If I kill -9 the second application, the first
> one
> goes back to normal behaviour immediately and can respond 25 simultaneous
> requests again with normal response times. I can restart the second
> application and repeat the same thing and the behaviour is the same.
>
> So I can tell you it's no application code or garbage collection issue in
> the other app. The batch job in the second app, that we run manually for
> this test, is not replicated and does nothing related to ignite, it does
> not
> even use the replicated caches.
>
> The only thing I can think of that would show this behaviour would be the
> sync. process in the Ignite caches slowing down/stalling the reading of
> values. As the second app. starts experiencing GC issues and slows down the
> Ignite sync. process, then it affects the other apps reading the caches. So
> I was wondering if the sync. mechanism might have some kind of lock on the
> caches that would prevent reading from them.
>
> I'll see if I can replicate it in a small scale experiment, apart from
> testing with ignite 1.6.
>
> Thanks for your input
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/One-failing-node-stalling-the-whole-cluster-tp5372p5432.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: ignite in-memory sql query performance issue

2016-06-06 Thread Vladislav Pyatkov
Hello,

Do you use indexes in a SQL query?
If it does not, that try to create a group index over field1 and fild2.
You can find a description here
https://apacheignite.readme.io/docs/sql-queries
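As a generic illustration of why a group index over both fields helps (a sketch of the principle, not Ignite's H2-backed index structure): without a usable index every row must be visited and both predicates checked, while a composite key over (field1, field2) answers the two-field lookup directly.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class GroupIndexSketch {
    // Without a usable index: every row is visited and both predicates checked.
    static int fullScan(List<String[]> rows, String f1, String f2) {
        int hits = 0;
        for (String[] r : rows)
            if (r[0].equals(f1) && r[1].equals(f2))
                hits++;
        return hits;
    }

    // With a composite (group) index: one sorted structure keyed on
    // (field1, field2) locates matches without touching other rows.
    static Map<String, Integer> buildGroupIndex(List<String[]> rows) {
        Map<String, Integer> idx = new TreeMap<>();
        for (String[] r : rows)
            idx.merge(r[0] + "\u0000" + r[1], 1, Integer::sum);
        return idx;
    }

    static int indexedLookup(Map<String, Integer> idx, String f1, String f2) {
        return idx.getOrDefault(f1 + "\u0000" + f2, 0);
    }
}
```

Two separate single-column indexes do not give the same effect: the engine can narrow by one column, but still has to filter the other predicate row by row within that subset.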

On Mon, Jun 6, 2016 at 5:56 AM, Zhengqingzheng 
wrote:

> Hi there,
>
> When using sql query to get a list of objects, I find that the performance
> is really slow. I am wondering, is this normal?
>
> I tried to call a sql query as follows:
>
> String qryStr = "select * from SelectedClass where  field1= ? and
> field2=?";
>
> SqlQuery qry = new
> SqlQuery(SelectedClass.class, qryStr);
>
> qry.setArgs( "97901336", "a88");
>
>
>
> If I call getAll() method like this:
>
> List> result =
> cache.withKeepBinary().query(qry).getAll();
>
> It took 160ms to get all the objects (only two objects inside the list)
>
>
>
> it takes 1ms to get a querycursor object, like this:
>
>  QueryCursor qc = cache.withKeepBinary().query(qry);
>
> But still need 160ms to put the objects into a list and return;
>
>
>
> Best regards,
>
> Kevin
>
>
>
>
>
>
>



-- 
Vladislav Pyatkov


Re: Runtime error at IgniteSpiThread

2016-06-06 Thread pragmaticbigdata
I am facing a similar exception while starting apache ignite. I am using
apache ignite 1.6. The exception is a bit different this time

[09:27:59] Topology snapshot [ver=269, servers=1, clients=0, CPUs=2,
heap=2.0GB]
[09:27:59,957][SEVERE][exchange-worker-#46%null%][GridCachePartitionExchangeManager]
Runtime error caught during grid runnable execution: GridWorker
[name=partition-exchanger, gridName=null, finished=false, isCancelled=false,
hashCode=1194503871, interrupted=false, runner=exchange-worker-#46%null%]
java.lang.NullPointerException
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.updateLocal(GridDhtPartitionTopologyImpl.java:1347)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.onEvicted(GridDhtPartitionTopologyImpl.java:1444)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.onPartitionEvicted(GridDhtPreloader.java:639)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.tryEvictAsync(GridDhtLocalPartition.java:510)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.rent(GridDhtLocalPartition.java:478)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.afterExchange(GridDhtPartitionTopologyImpl.java:601)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1353)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)

The complete logs are shared here.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Runtime-error-at-IgniteSpiThread-tp2630p5437.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to connect/monitor ignite server through jmx client

2016-06-06 Thread Denis Magda
Hi, 

Set the environment variable IGNITE_JMX_PORT to a port that works for you.
JMX will be started on the specified port and you’ll be able to connect to it.

—
Denis

> On Jun 4, 2016, at 8:19 PM, pragmaticbigdata  wrote:
> 
> I am using apache ignite 1.6 in my tests. I am running the test by
> programmatically starting ignite through Ignition.start(conf.xml). Other
> nodes that join the cluster are started using the ./ignite.sh
>  command.
> I tried starting the server node by passing the jmx configuration parameters
> in the command line but didn't succeed.
> 
> How do I start ignite with jmx properties both when running programmatically
> and through command so that I can monitor the cluster through
> jconsole/jmc/jvisualvm?
> 
> Thanks!
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/How-to-connect-monitor-ignite-server-through-jmx-client-tp5420.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.