Re: SqlFieldsQuery NPE on my schema

2017-08-24 Thread iostream
Changing the query to include setPartitions() solved the problem with the SELECT
query. However, SQL statements with DELETE and UPDATE are still not working.

Change made -

SqlFieldsQuery folquery = new SqlFieldsQuery("UPDATE A set b = 111 where c = ?").setArgs(someArg);
folquery.setDistributedJoins(false);
folquery.setSchema("PUBLIC");
Affinity aff = Ignition.ignite().affinity(CacheNameConstants.FULFILL_ORDER_CACHE_NAME);
// Either a single partition for one key...
int part = aff.partition(fulfillOrderId);
// ...or all primary partitions of the local node.
int[] parts = aff.primaryPartitions(locNode);
folquery.setPartitions(parts);
focache.query(folquery);





RE: Cassandra Persistence Store without POJO on Server

2017-08-24 Thread Roger Fischer (CW)
Thanks, Val. I'll keep an eye on the ticket.

Roger


-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com] 
Sent: Thursday, August 24, 2017 3:43 PM
To: user@ignite.apache.org
Subject: Re: Cassandra Persistence Store without POJO on Server

Hi Roger,

Currently this is not possible. If POJO strategy is used, classes need to be 
deployed. There is a ticket to improve this:
https://issues.apache.org/jira/browse/IGNITE-5270

If it's an option for you, you can switch to the BLOB strategy and store values in 
binary form in Cassandra. In this case you don't need to deploy classes.

-Val





Re: Cluster segmentation

2017-08-24 Thread Biren Shah
Here is a rough structure of a cache. IgniteBaseCache is a wrapper on top of 
IgniteCache. It initializes the cache and a streamer for the cache.  

public class NormalizedDataCache extends IgniteBaseCache {

    public NormalizedDataCache() {
        super("cache_name");
    }

    @Override
    protected CacheConfiguration getCacheConfiguration() {
        CacheConfiguration normalizedPointsCfg = new CacheConfiguration();
        normalizedPointsCfg.setOnheapCacheEnabled(true);
        return normalizedPointsCfg;
    }

    @Override
    protected void setStreamerProperties() {
        fStreamer.autoFlushFrequency(1000);
        fStreamer.perNodeParallelOperations(8);
        fStreamer.perNodeBufferSize(102400);
    }

    public void addData(RawPoint point) {
        // Identifier is the partition key; it is decorated with @AffinityKeyMapped in the class.
        point.setIdentifier(fNormalizerUtil.getIdentifier(point));
        addToStream(point, point);
    }

    @Override
    protected StreamReceiver getDataStreamerReceiver() {
        // Normalize the raw data via the DataStreamer's transform functionality.
        return StreamTransformer.from((e, arg) -> {
            new NormalizerAdapter().process((RawPoint) arg[0]);
            // Transformers are supposed to update the data and then write it to the cache,
            // but we are using this cache only to distribute data, so we do not write to it.
            return null;
        });
    }
}

NormalizerAdapter is another wrapper for an internal class. It is the first stage 
of the processing. This internal class uses other distributed caches and 
creates a different object. That object gets added to yet another cache “B” via a 
streamer; that is the second stage of the processing. Cache “B” is similar to 
this one and has a similar receiver function, which updates the object and 
writes it to the application’s internal structure. These two caches are used to 
distribute the data based on affinity key; we are not storing the data in them.

After your suggestion yesterday, I updated the addData method in this snippet. 
Previously I was creating a key from some properties of RawPoint; now I have 
added the affinity key to RawPoint itself, reducing the number of objects I was 
creating.
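
For reference, a minimal sketch of what the decorated class might look like (the
field name and types are assumptions, not the actual code from this thread):

public class RawPoint implements Serializable {
    /** Partition/affinity key, set in addData() before the object is streamed. */
    @AffinityKeyMapped
    private String identifier;

    public void setIdentifier(String identifier) {
        this.identifier = identifier;
    }
}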

Thanks,
Biren

On 8/24/17, 2:16 PM, "vkulichenko"  wrote:

Biren,

Can you show the code of the receiver?

-Val







Re: Cassandra Persistence Store without POJO on Server

2017-08-24 Thread vkulichenko
Hi Roger,

Currently this is not possible. If POJO strategy is used, classes need to be
deployed. There is a ticket to improve this:
https://issues.apache.org/jira/browse/IGNITE-5270

If it's an option for you, you can switch to the BLOB strategy and store values
in binary form in Cassandra. In this case you don't need to deploy classes.

-Val





Cassandra Persistence Store without POJO on Server

2017-08-24 Thread Roger Fischer (CW)
Hello,

is it possible to use the Cassandra Persistence Store 
(libs/optional/ignite-cassandra-store) without the need to deploy the key and 
object POJOs on the server?

For basic data grid use, it seems that everything defaults to binary objects. 
put(), get(), SQL queries, native persistence and affinity all work without 
having to deploy the POJOs on the servers. The POJOs are referenced in the XML 
configuration, but a JAR with the POJOs is _not_ required on the server. Only 
the client uses the POJOs.

However, when using the Cassandra persistence store, the server reports a 
startup error when the POJOs are not present.
Caused by: class org.apache.ignite.IgniteException: Failed to load class 
'com.abc.poc.icpoc.model.Fabric' using reflection
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceSettings.getClassInstance(PersistenceSettings.java:504)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceSettings.<init>(PersistenceSettings.java:128)

I would prefer not to have to redeploy the POJO to the servers when there are 
data (object) changes.

Roger



Re: Cluster segmentation

2017-08-24 Thread vkulichenko
Biren,

Can you show the code of the receiver?

-Val





Re: EntryProcessor and Locks

2017-08-24 Thread vkulichenko
1. Correct.
2. Correct.
3. Data still can be serialized/deserialized because Ignite stores
everything in binary form. However, operations will be executed locally.
4. It basically means that if you execute a transaction within an
affinityCall, and this transaction only includes entries that belong to the same
partition you are collocated with, the whole process will be optimized to
1-phase-commit and will be local to this node (see the sketch below).
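
For illustration, a minimal sketch of point 4, assuming ignite is a started Ignite
instance and "myCache" is a TRANSACTIONAL cache with Integer keys and values (all
names are assumptions):

int key = 42;
Integer result = ignite.compute().affinityCall("myCache", key, () -> {
    Ignite local = Ignition.localIgnite();
    IgniteCache<Integer, Integer> cache = local.cache("myCache");

    // The transaction only touches the key we are collocated with, so it can be
    // committed as 1-phase-commit, entirely local to the primary node.
    try (Transaction tx = local.transactions().txStart()) {
        Integer val = cache.get(key);
        cache.put(key, val == null ? 1 : val + 1);
        tx.commit();
        return val;
    }
});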

-Val





SqlFieldsQuery NPE on my schema

2017-08-24 Thread iostream
My cache configuration is as follows - 

CacheConfiguration<A, B> cacheConfig = new CacheConfiguration<>();
cacheConfig.setAtomicityMode(TRANSACTIONAL);
cacheConfig.setCacheMode(PARTITIONED);
cacheConfig.setBackups(1);
cacheConfig.setCopyOnRead(TRUE);
cacheConfig.setPartitionLossPolicy(IGNORE);
cacheConfig.setQueryParallelism(2);
cacheConfig.setReadFromBackup(TRUE);
cacheConfig.setRebalanceBatchSize(524288);
cacheConfig.setRebalanceThrottle(100);
cacheConfig.setRebalanceTimeout(1);
cacheConfig.setIndexedTypes(A.class, B.class);
cacheConfig.setOnheapCacheEnabled(FALSE);
cacheConfig.setStatisticsEnabled(true);
cacheConfig.setSqlSchema("PUBLIC");
cacheConfig.setName(cache1);
Ignition.ignite().createCache(cacheConfig);

When I try to run the following SqlFieldsQuery, I get an NPE while executing
the query.

IgniteCache<A, B> folcache = Ignition.ignite().cache(cache1);
SqlFieldsQuery folquery = new SqlFieldsQuery("SELECT * from B");
folquery.setDistributedJoins(false);
QueryCursor<List<?>> folcursor = folcache.query(folquery);

Error -

Apache Tomcat/9.0.0.M17 - Error report
HTTP Status 500 - NodeCommonsException [errorCode=1001, description=Error while accessing DB]
type: Exception report
message: NodeCommonsException [errorCode=1001, description=Error while accessing DB]
description: The server encountered an internal error that prevented it from fulfilling this request.
exception: javax.servlet.ServletException: NodeCommonsException [errorCode=1001, description=Error while accessing DB]
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:392)

org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:382)

org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:345)

org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:220)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)
root cause: NodeCommonsException [errorCode=1001, description=Error while accessing DB]

com.walmart.ecommerce.fulfillment.node.commons.manager.dao.aop.DataAccesAspect.catchDataAccessException(DataAccesAspect.java:36)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)

org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621)

org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:603)

org.springframework.aop.aspectj.AspectJAfterThrowingAdvice.invoke(AspectJAfterThrowingAdvice.java:62)

org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)

org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)

org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)

org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653)

com.walmart.ecommerce.fulfillment.node.commons.manager.dao.impl.FulfillOrderDAOImpl$$EnhancerBySpringCGLIB$$74f64cb6.getOrderIgnite(generated)

com.walmart.ecommerce.fulfillment.node.commons.manager.business.impl.OrderManagerImpl.getOrderIgnite(OrderManagerImpl.java:93)

com.walmart.ecommerce.fulfillment.node.commons.manager.ws.FulfillmentOrdersWebService.getFulfillOrderIgnite(FulfillmentOrdersWebService.java:298)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)

org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)

org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151)

org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171)

org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152)

org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104)

org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:402)

org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:349)

org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:106)
org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:259)

Re: Cluster segmentation

2017-08-24 Thread Biren Shah
Hi Val,

We do create lots of objects. I am getting 1M data points every minute. For data 
affinity, I create a key object for each data point. As I mentioned earlier, I 
have two stages of processing and we create a new key in both stages, so we are 
creating close to 2M new keys every minute. I have changed that and am running 
the test now.

Also, if I understand correctly, when I do get() on a cache it creates a copy of 
the object and returns that copy. Do you think turning off that behavior will 
help?
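
For reference, copy-on-read is controlled per cache configuration; a minimal
sketch (the cache name is an assumption):

CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("cache_name");
// Do not make a defensive copy of the (on-heap) value on every read.
ccfg.setCopyOnRead(false);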

Thanks,
Biren

On 8/23/17, 5:51 PM, "vkulichenko"  wrote:

Biren,

I see the jump and I actually see GC pauses as well (the longest one is the
last line in log_2.txt). BTW, I don't think there is a quick jump; the GC pause
most likely blocks the monitor thread as well, so it just looks like a jump.
Apparently, all these 30 seconds were spent in GC, and I'm pretty sure this
is causing the issue.

It looks like you're doing something that generates too many objects. My
suggestion would be to use JFR [1] to profile object allocations and check
what's going on.

[1]

https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-flightrecorder-settings

It is allowed to use cache API from receiver. To remove entry using
streamer, you can use removeData() method.

-Val







Re: Retrieving multiple keys with filtering

2017-08-24 Thread Andrey Kornev
Well, I believe invokeAll() has "update" semantics and using it for read-only 
filtering of cache entries is probably not going to be efficient or even 
appropriate.


I'm afraid the only viable option I'm left with is to use Ignite's Compute 
feature (a rough sketch follows the steps below):

- on the sender, group the keys by affinity.

- send each group along with the filter predicate to their affinity nodes using 
IgniteCompute.

- on each node, use getAll() to fetch the local keys and apply the filter.

- on the sender node, collect the results of the compute jobs into a map.
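
A rough sketch of these steps, assuming ignite is the local Ignite instance, keys is
the key collection, filter is a serializable IgniteBiPredicate, and the cache is
named "myCache" (all names are assumptions; topology changes are not handled here):

Affinity<Object> aff = ignite.affinity("myCache");
Map<ClusterNode, Collection<Object>> keysByNode = aff.mapKeysToNodes(keys);

Map<Object, Object> result = new HashMap<>();
for (Map.Entry<ClusterNode, Collection<Object>> e : keysByNode.entrySet()) {
    Set<Object> nodeKeys = new HashSet<>(e.getValue());

    // Ship the keys and the filter to the node that owns them and filter there.
    Map<Object, Object> matched = ignite.compute(ignite.cluster().forNode(e.getKey())).call(() -> {
        IgniteCache<Object, Object> cache = Ignition.localIgnite().cache("myCache");
        Map<Object, Object> passed = new HashMap<>();
        for (Map.Entry<Object, Object> entry : cache.getAll(nodeKeys).entrySet()) {
            if (filter.apply(entry.getKey(), entry.getValue()))
                passed.put(entry.getKey(), entry.getValue());
        }
        return passed;
    });

    result.putAll(matched);
}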


It's unfortunate that Ignite dropped that original API. What used to be a 
single API call is now a non-trivial algorithm, and one has to worry about 
things like what happens if the grid topology changes while the compute jobs 
are executing, etc.

Can anyone think of any other less complex/more robust approach?

Thanks
Andrey


From: slava.koptilin 
Sent: Thursday, August 24, 2017 9:03 AM
To: user@ignite.apache.org
Subject: Re: Retrieving multiple keys with filtering

Hi Andrey,

Yes, you are right. ScanQuery scans all entries.
Perhaps, IgniteCache#invokeAll(keys, cacheEntryProcessor) with custom
processor will work for you.
https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/ignite/IgniteCache.html#invokeAll(java.util.Set,%20org.apache.ignite.cache.CacheEntryProcessor,%20java.lang.Object...)

Thanks!





Re: ignite.active(true) blocking forever

2017-08-24 Thread slava.koptilin
Hi,

I don't think there is a way to properly recover an application/server or any
other service once an out-of-memory error arises.
Just trying to send a simple notification may lead to another attempt to
allocate memory, and therefore a new OOME may be thrown.
So the best way is to treat an OOME as an unrecoverable error.

Thanks!





Re: Using One Grid for Web Session management and ClusterSingleton

2017-08-24 Thread sijusuresh
I'm not trying to get access to web sessions.
I created an Ignite instance in a Spring bean and deployed a
cluster singleton service.
Now, in the WebSessionFilter, I want to use the same Ignite instance created in
the Spring bean.
I tried to refer to it using Ignition.ignite(name), but I am getting null.

My intention is to have one Ignite node and two caches: one for
web sessions and one for the cluster singleton.
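
For reference, Ignition.ignite(name) only finds a node that was started in the same
JVM under that exact instance name; a minimal sketch, with the instance name
"webGrid" as an assumption:

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("webGrid");
Ignite ignite = Ignition.start(cfg);

// Later, from the web session filter or a Spring bean in the same JVM:
Ignite same = Ignition.ignite("webGrid");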





Re: Ignite client takes too long to connect to cluster

2017-08-24 Thread slava.koptilin
Hi Pooja,

Please properly subscribe to the mailing list so that the community can 
receive email notifications for your messages. To subscribe, send empty 
email to user-subscr...@ignite.apache.org and follow simple instructions in 
the reply. 

I think the best way to know that is to try out the latest Apache Ignite
release (2.1)!
There are a lot of fixes and improvements.

Thanks!





Failed to run reduce query locally

2017-08-24 Thread igor.tanackovic
I have a query which can be executed in the H2 console but fails on Ignite's
.query(sql).getAll():

SELECT i.* FROM "cache".CACHEDITEM  AS i inner JOIN (
SELECT ci.position, MAX(ci.lastModifiedTime) AS modifiedTime FROM
"cache".CACHEDITEM AS ci 
  WHERE ci.startTime<=NOW() 
  AND ci.endTime>NOW() 
  AND ci.stripeId = 301 
  GROUP BY ci.position ORDER BY ci.position) i2  
WHERE i.position=i2.position 
AND i.lastModifiedTime=i2.modifiedTime 
AND i.startTime<=NOW() 
AND i.endTime>NOW() 
AND i.stripeId=301  
GROUP BY i.position ORDER BY i.position


Caused by: org.apache.ignite.IgniteCheckedException: Failed to execute SQL
query.
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1226)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1278)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1253)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:813)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1493)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:94)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1534)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:94)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:113)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.springframework.data.ignite.IgniteAdapter.execute(IgniteAdapter.java:135)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.ignite.repository.query.IgniteQueryEngine.execute(IgniteQueryEngine.java:74)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.keyvalue.core.AbstractKeyValueAdapter.find(AbstractKeyValueAdapter.java:84)
~[spring-data-keyvalue-1.2.3.RELEASE.jar:?]
at
org.springframework.data.ignite.IgniteTemplate$2.doInKeyValue(IgniteTemplate.java:307)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.ignite.IgniteTemplate$2.doInKeyValue(IgniteTemplate.java:302)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
at
org.springframework.data.ignite.IgniteTemplate.execute(IgniteTemplate.java:273)
~[spring-data-ignite-1.0.9-BETA-SNAPSHOT.jar:1.0.9-BETA-SNAPSHOT]
... 181 more
Caused by: org.h2.jdbc.JdbcSQLException: General error:
"java.lang.ArrayIndexOutOfBoundsException: 1"; SQL statement:
SELECT
I__Z0___KEY _KEY,
I__Z0___VAL _VAL
FROM (SELECT
__C0_0 POSITION,
MAX(__C0_1) AS MODIFIEDTIME
FROM PUBLIC.__T0
GROUP BY __C0_0
ORDER BY 1, 1, 2) I2__Z2 
 INNER JOIN (SELECT
__C1_0 I__Z0__LASTMODIFIEDTIME,
__C1_1 I__Z0___VAL,
__C1_2 I__Z0___KEY,
__C1_3 I__Z0__POSITION
FROM PUBLIC.__T1
ORDER BY 4, 1) __Z3 
 ON TRUE
WHERE TRUE AND (TRUE AND (TRUE AND ((I__Z0__POSITION = I2__Z2.POSITION) AND
(I__Z0__LASTMODIFIEDTIME = I2__Z2.MODIFIEDTIME
GROUP BY I__Z0__POSITION
ORDER BY =I__Z0__POSITION [5-195]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
~[h2-1.4.195.jar:1.4.195]
at org.h2.message.DbException.get(DbException.java:168)
~[h2-1.4.195.jar:1.4.195]
at org.h2.message.DbException.convert(DbException.java:295)
~[h2-1.4.195.jar:1.4.195]
at org.h2.command.Command.executeQuery(Command.java:215)
~[h2-1.4.195.jar:1.4.195]
at
org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:111)
~[h2-1.4.195.jar:1.4.195]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery(IgniteH2Indexing.java:1219)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1278)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1253)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:813)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1493)
~[ignite-indexing-2.0.0.jar:2.0.0]
at

Re: Task Session (compute) API in Ignite .NET

2017-08-24 Thread slava.koptilin
Hi,

I've tried to find a feature request or improvement for this in the Apache Ignite
issue tracker, but without any luck. It seems there are no plans.

Could you try to specify the 'collisionSpi' property via Spring XML
configuration in the following way?

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
   ...
   <property name="collisionSpi">
       <!-- Collision SPI implementation to use, e.g. FifoQueueCollisionSpi. -->
       <bean class="org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi"/>
   </property>
   ...
</bean>

Thanks!





Re: Cache.destroy() & close() does not delete SwapFile Ignite 2.0

2017-08-24 Thread Ramzinator
Hi

It appears that even when the Ignite node shuts down, it does not delete the
created cache files.
Is there any prebuilt way in Ignite to delete these files?

Thanks,
Ramz





Re: Retrieving multiple keys with filtering

2017-08-24 Thread slava.koptilin
Hi Andrey,

Yes, you are right. ScanQuery scans all entries.
Perhaps, IgniteCache#invokeAll(keys, cacheEntryProcessor) with custom
processor will work for you.
https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/ignite/IgniteCache.html#invokeAll(java.util.Set,%20org.apache.ignite.cache.CacheEntryProcessor,%20java.lang.Object...)
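
For illustration, a minimal sketch of this approach, assuming a cache named
"myCache" with Long keys and String values; the filter condition is made up:

IgniteCache<Long, String> cache = ignite.cache("myCache");
Set<Long> keys = new HashSet<>(Arrays.asList(1L, 2L, 3L));

// The processor returns the value for matching entries and null otherwise.
Map<Long, EntryProcessorResult<String>> res = cache.invokeAll(keys,
    (CacheEntryProcessor<Long, String, String>) (entry, args) ->
        entry.exists() && entry.getValue().startsWith("A") ? entry.getValue() : null);

Map<Long, String> matched = new HashMap<>();
res.forEach((k, r) -> {
    String v = r.get();
    if (v != null)
        matched.put(k, v);
});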

Thanks!





Re: Retrieving multiple keys with filtering

2017-08-24 Thread Andrey Kornev
Slava,

I'd like to avoid scanning potentially millions of cache items just to retrieve 
a hundred. More importantly, I already have the cache keys that I want. Why 
would I scan the entire cache? All I need is to filter keys.

Any other suggestions?

Thanks
Andrey

_
From: slava.koptilin >
Sent: Thursday, August 24, 2017 2:34 AM
Subject: Re: Retrieving multiple keys with filtering
To: >


Hi Andrey,

It seems IgniteCache#query(ScanQuery) method is that you are looking for.
https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/ignite/IgniteCache.html

You can find an example here:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java

Thanks!







Re: fetching all the tasks already scheduled and to know the status of the task

2017-08-24 Thread Alexander Fedotov
As well, please properly subscribe to the mailing list so that the
community can receive email notifications for your messages. To subscribe,
send an empty email to user-subscr...@ignite.apache.org and follow simple
instructions in the reply.

Kind regards,
Alex.

On Thu, Aug 24, 2017 at 5:41 PM, afedotov 
wrote:

> Hi,
>
> 1. For scheduling jobs, you can take a look at the cron-based scheduler.
> It provides facilities for scheduling jobs locally, but you can run any
> distributed task from that scheduled task.
> 2. Information about all the jobs associated with a task you can obtain
> from the ComputeTaskFuture returned from the
> ignite.compute().executeAsync call, as follows:
> fut.getTaskSession().getJobSiblings()
> 3. Pausing and resuming is not supported out of the box, but you could
> implement it using, for example, a distributed task session or
> distributed data structures.
> 4. There is no schema for persisting jobs as there is for Quartz; instead,
> you need to maintain the scheduled futures in some way.
>
>
>
> Kind regards,
> Alex.
>
> On Thu, Aug 24, 2017 at 3:50 PM, chandrika [via Apache Ignite Users] <[hidden
> email] > wrote:
>
>> Hello All,
>>
>> We have a requirement of parallel processing/executing of tasks across
>> nodes  to improve the performance of the application, hence are using
>> Apache Ignite. So far we have found Apache Ignite very useful.
>>
>> Also would like to use ignite-schedule extensively for all the below
>> points in cluster:
>>
>> 1. scheduling the job
>> 2. to fetch all the tasks or the jobs associated with the task
>> 3. to pause / resume / delete / reschedule the job
>> 4. also is there a schema of DB for persisting the jobs scheduled already
>> 5. also can we schedule a job with cron for a particular time and date.
>>
>> in short need a replacement for Quartz.
>>
>> thanks and regards,
>> chandrika
>>
>>


Re: fetching all the tasks already scheduled and to know the status of the task

2017-08-24 Thread afedotov
Hi,

1. For scheduling jobs, you can take a look at the cron-based scheduler.
It provides facilities for scheduling jobs locally, but you can run any
distributed task from that scheduled task (see the sketch after this list).
2. Information about all the jobs associated with a task you can obtain
from the ComputeTaskFuture returned from the
ignite.compute().executeAsync call, as follows:
fut.getTaskSession().getJobSiblings()
3. Pausing and resuming is not supported out of the box, but you could
implement it using, for example, a distributed task session or
distributed data structures.
4. There is no schema for persisting jobs as there is for Quartz; instead,
you need to maintain the scheduled futures in some way.
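
For reference, a minimal sketch of point 1, assuming ignite is a started Ignite
instance and the ignite-schedule module is on the classpath; the cron expression
and the job body are made up:

IgniteScheduler scheduler = ignite.scheduler();
SchedulerFuture<?> schedFut = scheduler.scheduleLocal(
    // The locally scheduled job kicks off a distributed task.
    () -> ignite.compute().broadcast(() -> System.out.println("Scheduled run")),
    "*/5 * * * *"); // run every 5 minutes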



Kind regards,
Alex.

On Thu, Aug 24, 2017 at 3:50 PM, chandrika [via Apache Ignite Users] <
ml+s70518n16393...@n6.nabble.com> wrote:

> Hello All,
>
> We have a requirement of parallel processing/executing of tasks across
> nodes  to improve the performance of the application, hence are using
> Apache Ignite. So far we have found Apache Ignite very useful.
>
> Also would like to use ignite-schedule extensively for all the below
> points in cluster:
>
> 1. scheduling the job
> 2. to fetch all the tasks or the jobs associated with the task
> 3. to pause / resume / delete / reschedule the job
> 4. also is there a schema of DB for persisting the jobs scheduled already
> 5. also can we schedule a job with cron for a particular time and date.
>
> in short need a replacement for Quartz.
>
> thanks and regards,
> chandrika
>
>





Re: serious problem: wiindows >>cmd >>jboss-4>>Error displaying Chinese encoding

2017-08-24 Thread afedotov
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send an empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.

Please try calling *chcp 65001* before starting Ignite instance, probably
like *chcp 65001 && ignite.cmd*
That will switch the console to UTF-8.
If that doesn't help, try redirecting the output to a file and see if it's
properly displayed with UTF-8 encoding.

Kind regards,
Alex.

On Thu, Aug 24, 2017 at 7:08 AM, dongxuanyi [via Apache Ignite Users] <
ml+s70518n16390...@n6.nabble.com> wrote:

> 11:06:31,805 INFO  [STDOUT] 3:-print---value:缁勭粐鏈烘瀯绠$悊
> 11:06:31,805 INFO  [STDOUT] 4:-print---value:缁勭粐鏈烘瀯绠$悊
>
> Clients on Linux are OK; Windows cmd is not.
>
> Configuring the Windows JBoss start file with -Dfile.encoding=UTF-8 did not help.
>
> The problem needs an urgent solution. Thanks.
>
>
>
>
>
>
>





Re: Using One Grid for Web Session management and ClusterSingleton

2017-08-24 Thread afedotov
Hi,

Why do you want to obtain a reference to the web session cache?
Web session clustering is designed to be transparent to the user.
If you need to store something else in a cache, just create an
additional cache suited to your needs.
Sharing the web session cache with other entities is not a good approach.



Kind regards,
Alex.

On Wed, Aug 23, 2017 at 7:49 PM, sijusuresh [via Apache Ignite Users] <
ml+s70518n16382...@n6.nabble.com> wrote:

> I'm using ignite for web session Management by implementing
> IgniteWebSessionFilter. Now the grid instance created in
> IgniteWebSessionFilter needs to be referred in a Spring bean. Can i get the
> reference of this instance in a Spring bean and use it for initializing a
> cluster singleton service.
>
> Is this the right approach or do we have a way to use same grid and
> different cache for Web session management and a cluster singleton service.
>





Re: Retrieving multiple keys with filtering

2017-08-24 Thread slava.koptilin
Hi Andrey,

It seems IgniteCache#query(ScanQuery) method is that you are looking for.
https://ignite.apache.org/releases/2.1.0/javadoc/org/apache/ignite/IgniteCache.html

You can find an example here:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java
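
For illustration, a minimal ScanQuery sketch along the lines of that example,
assuming a cache of Long keys to Person values (names and the predicate are made up):

IgniteCache<Long, Person> cache = ignite.cache("personCache");

try (QueryCursor<Cache.Entry<Long, Person>> cursor =
         cache.query(new ScanQuery<Long, Person>((key, p) -> p.getSalary() > 1000))) {
    for (Cache.Entry<Long, Person> entry : cursor)
        System.out.println(entry.getKey() + " -> " + entry.getValue());
}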

Thanks!


