Re: Determine Node Health?

2018-10-09 Thread Jason.G
Hi vgrigorev,

I used your suggestion to do a health check for each node, but I got a memory
leak and the node exited with an OOM error: java heap space.

Below is my example code: 

// I create one bean to collect the info I want (IP, hostname, create time)
// and then return a JSON string.
IgniteHealthCheckEntity healthCheck = new IgniteHealthCheckEntity();
ClusterNode node = ignite.cluster().localNode();
// node.addresses() and node.hostNames() return Collection<String>
List<String> addresses = new ArrayList<>(node.addresses());
String ip = addresses.get(0);

List<String> hostnames = new ArrayList<>(node.hostNames());
String hostname = hostnames.get(0);

healthCheck.setServerIp(ip);
healthCheck.setStatus(0);
healthCheck.setServerHostname(hostname);
healthCheck.setMonitorTime(monitorTime);
healthCheck.setClientIp(clientIp);
// a new, uniquely named cache is created on every check
String cacheName = "test_monitor_" + ipStr + "_" + new Date().getTime();

IgniteCache<String, String> putCache = ignite.createCache(cacheName);
putCache.put("test", "test");
String value = putCache.get("test");
if (!"test".equals(value)) {
    message = "Ignite (" + ip + ") " + "get/put value failed";
    healthCheck.setMessage(message);
    return JSONObject.fromObject(healthCheck).toString();
} else {
    message = "OKOKOK";
    healthCheck.setMessage(message);
    return JSONObject.fromObject(healthCheck).toString();
}
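[Editor's note for anyone hitting the same OOM: the code above creates a brand-new, uniquely named cache on every check and never removes it, so caches (and the memory backing them) accumulate in the cluster. A hedged sketch of one way to avoid that, destroying the probe cache when the check is done:

// same probe as above, but cleaned up afterwards
IgniteCache<String, String> putCache = ignite.createCache(cacheName);
try {
    putCache.put("test", "test");
    // ... evaluate the result as above ...
} finally {
    ignite.destroyCache(cacheName); // removes the cache cluster-wide
}

Reusing one long-lived cache (e.g. ignite.getOrCreateCache("test_monitor")) instead of a timestamped name per check would avoid the churn entirely.]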





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite as Hibernate L2 Cache

2018-10-09 Thread Scott Feldstein
It’s an AbstractMethodError, caused by the library not being compiled against
the Hibernate 5.2 spec.

On Tue, Oct 9, 2018 at 1:23 AM yarden  wrote:

> Thanks a lot for your answer.
> Can you share what was the error you encountered on Hibernate 5.2?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: UPDATE query with JOIN

2018-10-09 Thread Justin Ji
Can anyone tell me how to do an update operation with a *join table*?

I have tried the SQL below:
1. update A set A.a1=1 join table(devId varchar=?) B on A.devId=B.devId *SQL
ERROR*
2. update A, table(devId varchar=?) B set A.a2=1 where A.devId=B.devId *SQL
ERROR*
3. update A set A.a1=1 where A.devId in (select table.devId from table(devId
varchar = ?)) *almost 2 seconds with 2048 records*
4. update A set A.a1=1 where A.devId in ('1', '2', ..., '2048') *about 600ms
with 2048 records*

It seems that the H2 database does not support the first two statements,
according to my test.

And the third statement performs worse than the fourth; I also do not know
why.
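[Editor's note: for reference, a hedged sketch of how variant 3 is typically executed from Java (the cache variable and column names are assumed from the SQL above), binding the ID array as a single argument to the table() function:

// the bound String[] is expanded by H2's table() into a scannable in-memory table
String[] devIds = {"1", "2", "3"};
SqlFieldsQuery qry = new SqlFieldsQuery(
        "update A set a1 = 1 where devId in (select devId from table(devId varchar = ?))")
    .setArgs(new Object[] {devIds});
cache.query(qry).getAll();]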



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Update all data in cache spend a long time

2018-10-09 Thread Justin Ji
Thank you, I will try queryParallelism.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite on Spring Boot 2.0

2018-10-09 Thread ignite_user2016
Please find a sample project running on SB 2.0. When I try to instantiate it,
it gives the following exception -

2018-10-09 16:54:28.207  WARN 14068 --- [   main]
io.undertow.servlet  : UT015020: Path /* is secured for
some HTTP methods, however it is not secured for [HEAD, DELETE, POST, GET,
CONNECT, OPTIONS, PUT]
[2018-10-09 16:54:30,636][ERROR][main][SpringApplication] Application run
failed
class org.apache.ignite.IgniteIllegalStateException: Ignite instance with
provided name doesn't exist. Did you call Ignition.start(..) to start an
Ignite instance? [name=WebGrid]
at org.apache.ignite.internal.IgnitionEx.grid(IgnitionEx.java:1383)
at org.apache.ignite.Ignition.ignite(Ignition.java:535)
at
org.apache.ignite.cache.spring.SpringCacheManager.onApplicationEvent(SpringCacheManager.java:334)
at
org.apache.ignite.cache.spring.SpringCacheManager.onApplicationEvent(SpringCacheManager.java:146)
at
org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at
org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at
org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
at
org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:400)
at
org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:354)
at
org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:888)
at
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:161)
at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:553)
at
org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
at
org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)
at
org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395)
at
org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
at demo.SB20CacheApplication.main(SB20CacheApplication.java:28)
sb20-cache-app.7z (attachment)
cache_log.txt (attachment)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Role of H2 datbase in Apache Ignite

2018-10-09 Thread Stanislav Lukyanov
In short, Ignite replaces H2’s storage level with its own.
For example, Ignite implements H2’s Index interface with its own off-heap data 
structures underneath.
When Ignite executes an SQL query, it asks H2 to process it, and H2 then calls 
back into Ignite’s implementations 
of H2’s interfaces (such as Index) to actually retrieve the data.
I guess the on-heap processing is mostly H2, although there is a lot of work 
done by Ignite to make the distributed 
map-reduce work, like creating temporary tables for intermediate results.
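[Editor's note: a quick way to see that split in practice, as a sketch (the table name is hypothetical): running EXPLAIN on a distributed query returns the H2 plan for each map query plus the plan for the final reduce query.

// each returned row is one plan: the map-query plan(s) followed by the reduce-query plan
SqlFieldsQuery explain = new SqlFieldsQuery(
        "EXPLAIN SELECT customer_id, COUNT(*) FROM person GROUP BY customer_id");
for (List<?> row : cache.query(explain).getAll())
    System.out.println(row.get(0));]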

Stan 

From: eugene miretsky
Sent: October 9, 2018 21:52
To: user@ignite.apache.org
Subject: Re: Role of H2 datbase in Apache Ignite

Hello, 

I have been struggling with this question myself for a while now too.
I think the documents are very ambiguous on how exactly H2 is being used.

The document that you linked says 
"Apache Ignite leverages from H2's SQL query parser and optimizer as well as 
the execution planner. Lastly, H2 executes a query locally on a particular node 
and passes a local result to a distributed Ignite SQL engine for further 
processing."

And 
"However, the data, as well as the indexes, are always stored in the Ignite 
that executes queries in a distributed and fault-tolerant manner which is not 
supported by H2."

To me, this leaves a lot of ambiguity on how H2 is leveraged on a single Ignite 
node.  (I get that the Reduce stage, as well as distributed transactions, are 
handled by Ignite, but how about the 'map' stage on a single node). 

How is a query executed on a single node? 
Example query: Select count(customer_id) from user where (age > 20) group by 
customer_id

What steps are taken?
1. execution plan: H2 creates an execution plan
2. data retrieval:  Since data is stored off-heap, it has to be brought into 
heap.  Does H2 have anything to do with this step, or is it only Ignite? When 
are indexes used for that? 
3. Query execution: Once the data is on heap, what executes the Query (the 
group_by, aggregations, filters that were not handled by indexes, etc.)? H2 or 
Ignite? 



On Fri, Sep 21, 2018 at 9:27 AM Mikhail  wrote:
Hi, 

Could you please formulate your question? Right now your message
looks like a search query for Google.
I think the following article has the answer to your question:
https://apacheignite-sql.readme.io/docs/how-ignite-sql-works

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts.

2018-10-09 Thread the_palakkaran
How do I tell Ignite to include a node in the baseline topology? Isn't it the
hosts that I set in the TcpIpFinder? Also, is it possible to dynamically add a
new node to the cluster?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite on Spring Boot 2.0

2018-10-09 Thread ignite_user2016
I will share a sample application soon.. stay tuned ...



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Partition Loss Policy options

2018-10-09 Thread Roman Novichenok
Stan,
thanks for looking into it.  I agree with you that this is the observed
behaviour, but it is not what I would expect.

My expectation would be to get an exception when I attempt to query
unavailable partitions.  Ideally this behavior would be dependent on the
PartitionLossPolicy.  When the READ_ONLY_SAFE or READ_WRITE_SAFE policy is
selected, and the query condition does not explicitly specify which
partitions it is interested in, the query should fail. A query could
implicitly specify partitions by including an indexed value in the where
clause.  A simpler implementation could just raise exceptions on queries when
the policy is ..._SAFE and some partitions are unavailable.

Thanks again,
Roman

On Tue, Oct 9, 2018 at 2:54 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> I’ve tried your test and it works as expected, with some partitions lost
> and the final size being ~850 (~150 less than at the start).
>
> Am I missing something?
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *Roman Novichenok 
> *Sent: *October 2, 2018 22:21
> *To: *user@ignite.apache.org
> *Subject: *Re: Partition Loss Policy options
>
>
>
> Anton,
>
> thanks for the quick response.  Not sure if I'm setting the wrong expectations.
> I just tried a SQL query and it exhibited the same behavior.  Created a pull
> request with a test: https://github.com/novicr/ignite/pull/3.
>
>
>
> The test goes through the following steps:
>
> 1. creates a 4 node cluster with persistence enabled.
>
> 2. creates 2 caches - with setBackups(1)
>
> 3. populates caches with 1000 elements
>
> 4. runs a sql query and prints result size: 1000
>
> 5. stops 2 nodes
>
> 6. runs sql query from step 4 and prints result size: (something less than
> 1000).
>
>
>
> Thanks,
>
> Roman
>
>
>
>
>
> On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok <
> roman.noviche...@gmail.com> wrote:
>
> I was looking at scan queries. cache.query(new ScanQuery()) returns
> partial results.
>
>
>
> On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
>
> Hello Roman,
>
> Correct me if I'm mistaken, you are talking about SQL queries. That was
> fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
> ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
> delivered in 2.7 release.
>
> Regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


RE: Ignite on Spring Boot 2.0

2018-10-09 Thread Stanislav Lukyanov
Ignite 2.0 is FAR from being recent :)
Try Ignite 2.6. If it doesn’t work, share configuration, logs and code that 
starts Ignite.

Thanks,
Stan

From: ignite_user2016
Sent: October 9, 2018 22:00
To: user@ignite.apache.org
Subject: Re: Ignite on Spring Boot 2.0

Small update - 

I also upgraded Ignite to a recent version, but no luck.

Current version is Ignite 2.0





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Ignite as Hibernate L2 Cache

2018-10-09 Thread Stanislav Lukyanov
Hi,

Can’t say much about Java EE usage exactly but overall the idea is
1) Start an Ignite node named “foo”, in the same JVM as the application, before 
Hibernate is initialized
2) Specify `org.apache.ignite.hibernate.ignite_instance_name=foo` in the 
Hibernate session properties
3) Do NOT specify `org.apache.ignite.hibernate.grid_config` - if you do, the 
framework will try to start Ignite automatically, and you’ve already done it 
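[Editor's note: a minimal sketch of steps 1-3 in Java (the instance name "foo" is the example from above):

// step 1: start the node, under the expected name, before Hibernate is initialized
Ignite ignite = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("foo"));
// step 2: set org.apache.ignite.hibernate.ignite_instance_name=foo in the Hibernate properties
// step 3: leave org.apache.ignite.hibernate.grid_config unset]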

If it still complains about an already started instance, please share the logs.

Thanks,
Stan

From: yarden
Sent: October 7, 2018 16:12
To: user@ignite.apache.org
Subject: Ignite as Hibernate L2 Cache

Hi all,
I am trying to configure Ignite as second-level cache in Hibernate.
I followed the instructions on this link:
https://apacheignite-mix.readme.io/docs/hibernate-l2-cache
  
But it seems that the example is for Hibernate with Spring.
I am using Hibernate 5.2.5 with Java Enterprise Edition.

So I am configuring the second level cache on the persistence.xml, but I
have two persistence units.
For both I defined the same factory class and instance name.
Then I get an error on server startup that the Ignite instance already exists.
Is there any way to create the Ignite instance as a singleton?

Anyway, any example of Ignite as a Hibernate L2 cache for JEE would be a great
help.
Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts.

2018-10-09 Thread Mikael
If you use native persistence it will use that data; you will have to 
make sure the cache store is used yourself if the original data 
changes. Normally you would use the cache store without native 
persistence, but that depends on how you want to use it. If you restart 
the node, everything will still be there as it is saved on disk, so it 
will never read anything from the cache store after the first time unless 
you tell it to do that.


After you have set the baseline topology it will use that set of nodes 
for persistence. If you add another node after that (one that has 
persistence enabled), you must tell Ignite to include it by setting the 
baseline topology again, as sketched below.
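[Editor's note: a minimal sketch of doing that from code (the control.sh script mentioned in the warning below works too):

// after the new persistence-enabled node has joined, reset the baseline
// to the current topology version so the node is used for persistent data
Ignite ignite = Ignition.ignite();
ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());]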


Mikael

On 2018-10-09 at 21:00, the_palakkaran wrote:

Did not fully understand what it says, but does it mean that even if values in my
DB are changed, they won't automatically get updated in the Ignite cache? Or
is the concept something else? I just have a read-through enabled cache.

I also had the below error before it, just noticed. Is it related to the
other?

Local node is not included in Baseline Topology and will not be used for
persistent data storage. Use control.(sh|bat) script or IgniteCluster
interface to include the node to Baseline Topology.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





RE: Spark Ignite Data Load failing On Large Cache

2018-10-09 Thread Stanislav Lukyanov
Hi,

Please share configurations and full logs from all nodes.

Stan

From: ApacheUser
Sent: October 8, 2018 17:49
To: user@ignite.apache.org
Subject: Spark Ignite Data Load failing On Large Cache

Hi, I am testing a large Ignite cache of 900GB on a 4-node VM (96GB RAM, 8 CPU and 
500GB SAN storage) Spark/Ignite cluster. It has happened two times: after 
reaching 350GB plus, one or two nodes stop processing the data load and the data 
load stops. Please advise; the cluster, server and client logs are 
below.


 


Server Logs:

[11:59:34] Topology snapshot [ver=121, servers=4, clients=9, CPUs=32,
offheap=1000.0GB, heap=78.0GB]
[11:59:34]   ^-- Node [id=F6605E96-47C9-479B-A840-03316500C9A3,
clusterState=ACTIVE]
[11:59:34]   ^-- Baseline [id=0, size=4, online=4, offline=0]
[11:59:34] Data Regions Configured:
[11:59:34]   ^-- default_mem_region [initSize=256.0 MiB, maxSize=20.0 GiB,
persistenceEnabled=true]
[11:59:34]   ^-- q_major [initSize=10.0 GiB, maxSize=30.0 GiB,
persistenceEnabled=true]
[11:59:34]   ^-- q_minor [initSize=10.0 GiB, maxSize=30.0 GiB,
persistenceEnabled=true]
[14:33:15,872][SEVERE][grid-nio-worker-client-listener-3-#33][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-3, igniteInstanceName=null,
finished=false, hashCode=254322881, interrupted=false,
runner=grid-nio-worker-client-listener-3-#33]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/64.102.213.190:10800, rmtAddr=/10.82.249.225:51449,
createTime=1538740798912, closeTime=0, bytesSent=397, bytesRcvd=302,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1538742789216,
lastSndTime=1538742789216, lastRcvTime=1538742789216, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true]]]
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1085)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
[21:43:26,312][SEVERE][grid-nio-worker-client-listener-0-#30][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=0, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-0, igniteInstanceName=null,
finished=false, hashCode=2211598, interrupted=false,
runner=grid-nio-worker-client-listener-0-#30]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/64.102.213.190:10800, rmtAddr=/10.82.32.114:59525,
createTime=1538746249024, closeTime=0, bytesSent=2035, bytesRcvd=1532,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1538767916701,
lastSndTime=1538767916701, lastRcvTime=1538767916701, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true]]]
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1085)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)

Re: Ignite on Spring Boot 2.0

2018-10-09 Thread ignite_user2016
Small update - 

I also upgraded Ignite to a recent version, but no luck.

Current version is Ignite 2.0





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts.

2018-10-09 Thread the_palakkaran
Did not fully understand what it says, but does it mean that even if values in my
DB are changed, they won't automatically get updated in the Ignite cache? Or
is the concept something else? I just have a read-through enabled cache.

I also had the below error before it, just noticed. Is it related to the
other?

Local node is not included in Baseline Topology and will not be used for
persistent data storage. Use control.(sh|bat) script or IgniteCluster
interface to include the node to Baseline Topology.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Partition Loss Policy options

2018-10-09 Thread Stanislav Lukyanov
Hi,

I’ve tried your test and it works as expected, with some partitions lost and 
the final size being ~850 (~150 less than at the start).
Am I missing something?

Thanks,
Stan

From: Roman Novichenok
Sent: October 2, 2018 22:21
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy options

Anton,
thanks for the quick response.  Not sure if I'm setting the wrong expectations.  I just 
tried a SQL query and it exhibited the same behavior.  Created a pull request 
with a test: https://github.com/novicr/ignite/pull/3.  

The test goes through the following steps:
1. creates a 4 node cluster with persistence enabled.  
2. creates 2 caches - with setBackups(1)
3. populates caches with 1000 elements
4. runs a sql query and prints result size: 1000
5. stops 2 nodes 
6. runs sql query from step 4 and prints result size: (something less than 
1000).

Thanks,
Roman


On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok  
wrote:
I was looking at scan queries. cache.query(new ScanQuery()) returns partial 
results. 

On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
Hello Roman,

Correct me if I'm mistaken, you are talking about SQL queries. That was
fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
delivered in 2.7 release.

Regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Ignite TcpDiscoveryS3IpFinder and AWS VPC

2018-10-09 Thread smovva
I have a cluster running on AWS VPC with TcpDiscoveryS3IpFinder SPI. I
noticed that there are a couple of entries in the S3 bucket
'127.0.0.1#47500', '0:0:0:0:0:0:0:1%lo#47500' along with entries for the
private IPs of the nodes. If anyone understands what these are, can you
please share the info.

Also, I have another VPC that I want to run some clients from. I was
wondering if there is any documentation around setting up VPC peering and
address translation for this to work.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Role of H2 datbase in Apache Ignite

2018-10-09 Thread eugene miretsky
Hello,

I have been struggling with this question myself for a while now too.
I think the documents are very ambiguous on how exactly H2 is being used.

The document that you linked says
"Apache Ignite leverages from H2's SQL query parser and optimizer as well
as the execution planner. Lastly, *H2 executes a query locally* on a
particular node and passes a local result to a distributed Ignite SQL
engine for further processing."

And
"However, *the data, as well as the indexes, are always stored in the
Ignite that executes queries* in a distributed and fault-tolerant manner
which is not supported by H2."

To me, this leaves a lot of ambiguity on how H2 is leveraged on a single
Ignite node.  (I get that the Reduce stage, as well as distributed
transactions, are handled by Ignite, but how about the 'map' stage on a
single node).

How is a query executed on a single node?
Example query: Select count(customer_id) from user where (age > 20) group
by customer_id

What steps are taken?

   1. execution plan: H2 creates an execution plan
   2. data retrieval:  Since data is stored off-heap, it has to be brought
   into heap.  Does H2 have anything to do with this step, or is it only
   Ignite? When are indexes used for that?
   3. Query execution: Once the data is on heap, what executes the Query
   (the group_by, aggregations, filters that were not handled by indexes,
   etc.)? H2 or Ignite?




On Fri, Sep 21, 2018 at 9:27 AM Mikhail  wrote:

> Hi,
>
> Could you please formulate your question? Right now your message
> looks like a search query for Google.
> I think the following article has the answer to your question:
> https://apacheignite-sql.readme.io/docs/how-ignite-sql-works
>
> Thanks,
> Mike.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts.

2018-10-09 Thread Stanislav Lukyanov
Well, exactly what it says – Ignite doesn’t guarantee consistency between 
CacheStore and Native Persistence.
Because of this you may end up with different data in the 3rd party DB and 
Ignite’s persistence,
so using such configuration is not advised.
See this page 
https://apacheignite.readme.io/docs/3rd-party-store#section-using-3rd-party-persistence-together-with-ignite-persistence

However, if you’re sure you want to use Ignite like that, then you can ignore 
the warning.

Stan

From: the_palakkaran
Sent: October 9, 2018 21:46
To: user@ignite.apache.org
Subject: Configuration does not guarantee strict consistency between CacheStore 
and Ignite data storage upon restarts.

Hi,

Why do I get the below error while starting up the Ignite cluster?

Both Ignite native persistence and CacheStore are configured for cache
'cacheStoresExampleCache'. This configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts.
Consult documentation for more details. 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts.

2018-10-09 Thread the_palakkaran
Hi,

Why do I get the below error while starting up the Ignite cluster?

Both Ignite native persistence and CacheStore are configured for cache
'cacheStoresExampleCache'. This configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts.
Consult documentation for more details. 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite sporadically gets java.lang.VerifyError

2018-10-09 Thread Stanislav Lukyanov
Hi,

Looks like a JVM bug.
The message means that the bytecode generated for a lambda in the Ignite code is 
structurally incorrect, but the bytecode is generated in parts by javac and 
the JVM, so the issue must be there.

I suggest you upgrade to the latest JDK 8 (currently 8u181) and see if it helps.

Stan


From: nitin.phadnis
Sent: October 9, 2018 21:05
To: user@ignite.apache.org
Subject: Ignite sporadically gets java.lang.VerifyError

When running Ignite 2.6 in a docker container, I sporadically get VerifyError

13:56:59,154 SEVERE [] (exchange-worker-#52%app-grid%) Critical system error
detected. Will be handled accordingly to configured handler [hnd=class
o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.BootstrapMethodError: call
site initialization exception]]: java.lang.BootstrapMethodError: call site
initialization exception
at java.lang.invoke.CallSite.makeSite(CallSite.java:341)
[rt.jar:1.8.0_91]
at
java.lang.invoke.MethodHandleNatives.linkCallSiteImpl(MethodHandleNatives.java:307)
[rt.jar:1.8.0_91]
at
java.lang.invoke.MethodHandleNatives.linkCallSite(MethodHandleNatives.java:297)
[rt.jar:1.8.0_91]
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander.addAssignments(GridDhtPartitionDemander.java:342)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.addAssignments(GridDhtPreloader.java:379)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2563)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_91]
Caused by: java.lang.VerifyError: (class:
org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander$$Lambda$48,
method: run signature: ()V) Illegal use of nonvirtual function call
at sun.misc.Unsafe.defineAnonymousClass(Native Method)
[rt.jar:1.8.0_91]
at
java.lang.invoke.InnerClassLambdaMetafactory.spinInnerClass(InnerClassLambdaMetafactory.java:326)
[rt.jar:1.8.0_91]
at
java.lang.invoke.InnerClassLambdaMetafactory.buildCallSite(InnerClassLambdaMetafactory.java:194)
[rt.jar:1.8.0_91]
at
java.lang.invoke.LambdaMetafactory.metafactory(LambdaMetafactory.java:304)
[rt.jar:1.8.0_91]
at java.lang.invoke.CallSite.makeSite(CallSite.java:302)
[rt.jar:1.8.0_91]
... 8 more

13:56:59,157 SEVERE [] (exchange-worker-#52%app-grid%) JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.BootstrapMethodError: call
site initialization exception]]



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Ignite sporadically gets java.lang.VerifyError

2018-10-09 Thread nitin.phadnis
When running Ignite 2.6 in a docker container, I sporadically get VerifyError

13:56:59,154 SEVERE [] (exchange-worker-#52%app-grid%) Critical system error
detected. Will be handled accordingly to configured handler [hnd=class
o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.BootstrapMethodError: call
site initialization exception]]: java.lang.BootstrapMethodError: call site
initialization exception
at java.lang.invoke.CallSite.makeSite(CallSite.java:341)
[rt.jar:1.8.0_91]
at
java.lang.invoke.MethodHandleNatives.linkCallSiteImpl(MethodHandleNatives.java:307)
[rt.jar:1.8.0_91]
at
java.lang.invoke.MethodHandleNatives.linkCallSite(MethodHandleNatives.java:297)
[rt.jar:1.8.0_91]
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander.addAssignments(GridDhtPartitionDemander.java:342)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.addAssignments(GridDhtPreloader.java:379)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2563)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_91]
Caused by: java.lang.VerifyError: (class:
org/apache/ignite/internal/processors/cache/distributed/dht/preloader/GridDhtPartitionDemander$$Lambda$48,
method: run signature: ()V) Illegal use of nonvirtual function call
at sun.misc.Unsafe.defineAnonymousClass(Native Method)
[rt.jar:1.8.0_91]
at
java.lang.invoke.InnerClassLambdaMetafactory.spinInnerClass(InnerClassLambdaMetafactory.java:326)
[rt.jar:1.8.0_91]
at
java.lang.invoke.InnerClassLambdaMetafactory.buildCallSite(InnerClassLambdaMetafactory.java:194)
[rt.jar:1.8.0_91]
at
java.lang.invoke.LambdaMetafactory.metafactory(LambdaMetafactory.java:304)
[rt.jar:1.8.0_91]
at java.lang.invoke.CallSite.makeSite(CallSite.java:302)
[rt.jar:1.8.0_91]
... 8 more

13:56:59,157 SEVERE [] (exchange-worker-#52%app-grid%) JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=java.lang.BootstrapMethodError: call
site initialization exception]]



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite on Spring Boot 2.0

2018-10-09 Thread ignite_user2016
Hello..

I am using Ignite with Spring Boot 1.5.8 and all works well, but when I
try to upgrade my Spring Boot app to 2.0.1, somehow
SpringCacheManager does not work.

Here is my piece of code - 

/**
 * Provides SpringCacheManager to enable 2nd level caching on Ignite
 *
 * @return SpringCacheManager
 */
@Bean
public SpringCacheManager springCacheManager() {
    SpringCacheManager springCacheManager = new SpringCacheManager();
    springCacheManager.setGridName("WebGrid");
    return springCacheManager;
}


During debugging, I found that the following piece of code does not work:

this.ignite = Ignition.ignite(this.igniteInstanceName);

So somehow it can't find the existing instance on Spring Boot 2.0.

Please note that the Ignite instance starts correctly, but when
SpringCacheManager tries to look it up, it can't find the grid instance.

Did anyone experience such an issue?
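[Editor's note: a hedged workaround sketch (the grid name "WebGrid" is from the code above): declaring the Ignite node itself as a Spring bean guarantees it is started, under the expected name, before SpringCacheManager looks it up during context refresh:

@Bean
public Ignite ignite() {
    // assumption: the instance name must match what SpringCacheManager is configured with
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("WebGrid");
    return Ignition.start(cfg);
}]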




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite Query Slow

2018-10-09 Thread Stanislav Lukyanov
The call
setIndexedTypes(Long.class, Person.class)
will search for all `@QuerySqlField` fields in Person and create indexes for 
all of them with `index=true`. 

For example, if your class looks like this
class Person {
    @QuerySqlField(index = true)
    private String name;

    @QuerySqlField(index = true)
    private int age;
}
then the call
setIndexedTypes(Long.class, Person.class)
will create indexes for both `name` and `age`.
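Since the method takes key/value class pairs, several types can also be
registered in one call; a sketch with a hypothetical second pair:

// pairs: (Long -> Person), (String -> Company); Company is a made-up example class
ccfg.setIndexedTypes(Long.class, Person.class, String.class, Company.class);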

Thanks,
Stan

From: Skollur
Sent: October 2, 2018 16:51
To: user@ignite.apache.org
Subject: Re: Ignite Query Slow

The documentation has the example ccfg.setIndexedTypes(Long.class, Person.class).
I believe this is a single field. How do I register multiple indexes for a
CacheConfiguration?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Ignite + Spark: json4s versions are incompatible

2018-10-09 Thread Stanislav Lukyanov
Hi,

AFAICS Ignite doesn’t even use json4s itself. I assume it’s only in the 
dependencies and binary distribution for Spark to work.
So, if Spark actually needs 3.2.X you can try using that.
You can remove/replace Ignite’s json4s jar with the required one, or use 
your-favorite-build-system’s magic
to override what’s being picked during the build.

Please share with the community if it works or not. Perhaps we need to adjust 
the Ignite’s build to use a different json4s version.

Thanks,
Stan

From: eugene miretsky
Sent: September 27, 2018 5:26
To: user@ignite.apache.org
Subject: Ignite + Spark: json4s versions are incompatible

Hello, 

Spark provides json4s 3.2.X, while Ignite uses the newest version. This seems 
to cause an error when using some Spark SQL commands that use json4s methods 
that no longer exist. 

Adding Ignite to our existing Spark code bases seems to break things. 

How do people work around this issue? 

Stack trace: 

[info] Caused by: java.lang.NoSuchMethodError: 
org.json4s.jackson.JsonMethods$.parse(Lorg/json4s/JsonInput;Z)Lorg/json4s/JsonAST$JValue;
[info]     at org.apache.spark.sql.types.DataType$.fromJson(DataType.scala:108)
[info]     at 
org.apache.spark.sql.types.StructType$$anonfun$6.apply(StructType.scala:414)
[info]     at 
org.apache.spark.sql.types.StructType$$anonfun$6.apply(StructType.scala:414)
[info]     at scala.util.Try$.apply(Try.scala:192)
[info]     at 
org.apache.spark.sql.types.StructType$.fromString(StructType.scala:414)
[info]     at 
org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.init(ParquetWriteSupport.scala:80)
[info]     at 
org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:341)
[info]     at 
org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:302)
[info]     at 
org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
[info]     at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:159)
[info]     at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.newOutputWriter(FileFormatWriter.scala:303)
[info]     at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:312)
[info]     at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
[info]     at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:254)
[info]     at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1371)
[info]     at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:259)
[info]     ... 8 more



Re: Query 3x slower with index

2018-10-09 Thread eugene miretsky
Hi Ilya,

I have tried it, and got the same performance as when forcing the category
index in my initial benchmark - the query is 3x slower and uses only one
thread.

From my experiments so far it seems like Ignite can either (a) use the
affinity key and run queries in parallel, or (b) use the index but run the
query on only one thread.

Has anybody been able to run OLAP like queries in while using an index?

Cheers,
Eugene

On Mon, Sep 24, 2018 at 10:55 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I guess that using AFFINITY_KEY as index have something to do with the
> fact that GROUP BY really wants to work per-partition.
>
> I have the following query for you:
>
> 1: jdbc:ignite:thin://localhost> explain Select count(*) FROM( Select
> customer_id from (Select customer_id, product_views_app, product_clict_app
> from GA_DATA ga join table(category_id int = ( 117930, 175930,
> 175940,175945,101450)) cats on cats.category_id = ga.category_id) data
> group by customer_id having SUM(product_views_app) > 2 OR
> SUM(product_clict_app) > 1);
> PLAN  SELECT
> DATA__Z2.CUSTOMER_ID AS __C0_0,
> SUM(DATA__Z2.PRODUCT_VIEWS_APP) AS __C0_1,
> SUM(DATA__Z2.PRODUCT_CLICT_APP) AS __C0_2
> FROM (
> SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
> ) DATA__Z2
> /* SELECT
> GA__Z0.CUSTOMER_ID,
> GA__Z0.PRODUCT_VIEWS_APP,
> GA__Z0.PRODUCT_CLICT_APP
> FROM TABLE(CATEGORY_ID INTEGER=(117930, 175930, 175940, 175945,
> 101450)) CATS__Z1
> /++ function ++/
> INNER JOIN PUBLIC.GA_DATA GA__Z0
> /++ PUBLIC.GA_CATEGORY_ID: CATEGORY_ID = CATS__Z1.CATEGORY_ID ++/
> ON 1=1
> WHERE CATS__Z1.CATEGORY_ID = GA__Z0.CATEGORY_ID
>  */
> GROUP BY DATA__Z2.CUSTOMER_ID
>
> PLAN  SELECT
> COUNT(*)
> FROM (
> SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
> ) _18__Z3
> /* SELECT
> __C0_0 AS CUSTOMER_ID
> FROM PUBLIC.__T0
> /++ PUBLIC."merge_scan" ++/
> GROUP BY __C0_0
> HAVING (SUM(__C0_1) > 2)
> OR (SUM(__C0_2) > 1)
>  */
>
> However, I'm not sure it is "optimal" or not since I have no idea if it
> will perform better or worse on real data. That's why I need a subset of
> data which will make query execution speed readily visible. Unfortunately,
> I can't deduce that from query plan alone.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Sep 24, 2018 at 16:14, eugene miretsky :
>
>> An easy way to reproduce would be to
>>
>> 1. Create table
>>
>> CREATE TABLE GA_DATA (
>> customer_id bigint,
>> dt timestamp,
>> category_id int,
>> product_views_app int,
>> product_clict_app int,
>> product_clict_web int,
>> PRIMARY KEY (customer_id, dt, category_id)
>> ) WITH "template=ga_template, backups=0, affinityKey=customer_id";
>>
>> 2. Create indexes
>>
>>- CREATE INDEX ga_customer_id ON GA_Data (customer_id)
>>- CREATE INDEX ga_pKey ON GA_Data (customer_id, dt, category_id)
>>- CREATE INDEX ga_category_and_customer_id ON GA_Data (category_id,
>>customer_id)
>>- CREATE INDEX ga_category_id ON GA_Data (category_id)
>>
>> 3. Run Explain on the following queries while trying forcing using
>> different indexes
>>
>>- Select count(*) FROM(
>>
>> Select customer_id from GA_DATA  use index (ga_category_id)
>> where category_id in (117930, 175930, 175940,175945,101450)
>> group by customer_id having SUM(product_views_app) > 2 OR
>> SUM(product_clicks_app) > 1 )
>>
>>
>>- Select count(*) FROM(
>>
>> Select customer_id from GA_DATA ga use index (ga_pKey)
>> join table(category_id int = ( 117930, 175930, 175940,175945,101450))
>> cats on cats.category_id = ga.category_id
>> group by customer_id having SUM(product_views_app) > 2 OR
>> SUM(product_clicks_app) > 1
>> )
>>
>> The execution plans will be similar to what I have posted earlier. In
>> particular, only one of (a) the affinity key index, (b) the category_id index
>> will be used.
>>
>> On Fri, Sep 21, 2018 at 8:49 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Can you share a reproducer project which loads (or generates) data for
>>> caches and then queries them? I could try and debug it if I had the
>>> reproducer.
>>>
>>> Regards.
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Thu, Sep 20, 2018 at 21:05, eugene miretsky >> >:
>>>
 Thanks Ilya,

 Tried it, no luck. It performs the same as when using the category_id index
 alone (slow).
   Any combination I try either uses the AFFINITY_KEY or the category index.
 When it uses the category index it runs slower.

 Also, when the AFFINITY_KEY key is used, the job runs on 

RE: Is ID generator split brain compliant?

2018-10-09 Thread Stanislav Lukyanov
Hi,

If the issue is that the node is starting with a new persistent storage each 
time, there could be some issue with the file system.

The algorithm to choose a persistent folder is
1) If IgniteConfiguration.consistentId is specified, use the folder of that name
2) Otherwise, check if there are any existing folders not locked by other 
nodes; if one is found, use that
3) Otherwise, generate a new consistentId and create a new folder for it
I assume that when Ignite checks your folders on step 2 they appear to be 
locked even though they are not.
Perhaps the file system doesn’t unlock them fast enough, or something similar 
happens.
Look for messages like “Unable to acquire lock to file” at the start of the 
nodes to see if that’s the case.

Try to specify consistentId explicitly – perhaps it’ll solve the issue.
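A sketch of doing that (the ID value is just an example; any stable,
node-unique value works):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setConsistentId("node-1"); // pins this node to one persistent storage folder
Ignition.start(cfg);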

Thanks,
Stan

From: abatra
Sent: October 1, 2018 11:47
To: user@ignite.apache.org
Subject: Re: Is ID generator split brain compliant?

I believe it won't work either. I will try it soon and share the result. 

Could you please share some pointers on what I should look at for debugging
persistence? I have org.apache.ignite=DEBUG in the logs and IGNITE_QUIET set to
false. There are a lot of logs, most of which I can not make sense of w.r.t.
persistence.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite Persistence Storage : Full data is stored in all activated nodes just local Data?

2018-10-09 Thread Mikael
Only the data for that node is stored there, backup data on backup 
nodes, primary data on primary nodes and so on.


Mikael


On 2018-10-09 at 17:27, ApacheUser wrote:

Hi Team,
How is the data stored when persistence is enabled?

Does Ignite store all data on all persistence-enabled nodes, or just the
data which is on that node?

Ex: I have a 4-node Ignite cluster and persistence is enabled on all 4 nodes
(activated after the 4 nodes come up). When the data is loaded, is the full
data stored/persisted on all 4 nodes, or does each node persist only the data
that is on it?


Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





Ignite Persistence Storage : Full data is stored in all activated nodes just local Data?

2018-10-09 Thread ApacheUser
Hi Team,
How is the data stored when persistence is enabled?

Does Ignite store all data on all persistence-enabled nodes, or just the
data which is on that node?

Ex: I have a 4-node Ignite cluster and persistence is enabled on all 4 nodes
(activated after the 4 nodes come up). When the data is loaded, is the full
data stored/persisted on all 4 nodes, or does each node persist only the data
that is on it?


Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Inconsistent data.

2018-10-09 Thread Stanislav Lukyanov
Hi,

No, the behavior you’re describing is not expected.
Moreover, there is a bunch of tests in Ignite that make sure the reads are 
consistent during the rebalance,
so this shouldn’t be possible.

Can you create a small reproducer project and share it?

Thanks,
Stan

From: Shrikant Haridas Sonone
Sent: September 19, 2018 12:28
To: user@ignite.apache.org
Cc: Mahesh Sanjay Jadhav; Chinmoy Mohanty; Abhay Dutt Paroha
Subject: Inconsistent data.

Hi,
I am using Ignite 2.6.0 on top of google kubernetes engine v1.10.6-gke.2.
I am using native persistence (my example XML configuration was attached but
is not preserved here).

We have around 2GB of data.
We have recently migrated our data from 2.4.0 to 2.6.0

Issue
1. When running 3 nodes of Ignite (each in a pod) as the 3 baseline nodes: if 
any of the nodes gets restarted, it takes approximately 5-6 seconds to 
rebalance the data. If there are any requests to be served, some of the 
requests are sent to the newly created node, which is yet to complete the 
rebalance operation. Hence the response received is inconsistent (sometimes 
empty) with the previous state before the node was added.

2. The same behavior as in point 1 occurs even if a new/fresh node is added to the 
baseline topology.

We tried to use the cache rebalance mode as SYNC. (as per doc 
https://apacheignite.readme.io/docs/rebalancing) The issue still persists.

Is this the expected behavior?

Is there any way/setting which will restrict the requests directed to the 
newly added/restarted node till the rebalance operation is completed 
successfully?

Is it a side effect of the migration activity from Apache Ignite 2.4.0 to 2.6.0?

Thanks and Regards,
Shrikant Sonone
Schlumberger.



RE: Effect of WriteSynchronizationMode on write operations inside a transaction in Apache Ignite

2018-10-09 Thread Stanislav Lukyanov
Hi,

Generally, your initial thoughts on the expected latency are correct – 
PRIMARY_SYNC allows you not to wait for the writes
to complete on the backups. However, some of the operations still have to be 
completed on all nodes (e.g. acquiring key locks),
so increasing the number of backups does affect the execution time.

If what you need is a local copy to speed up reads then you’re probably looking 
for a near cache:
https://apacheignite.readme.io/docs/near-caches
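[Editor's note: a minimal sketch of a near cache created from a client node (cache name and size are examples):

// keeps recently read entries in the client's local heap; reads hitting the
// near cache skip the network, writes still go to primary/backups as usual
NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
nearCfg.setNearEvictionPolicyFactory(new LruEvictionPolicyFactory<>(10_000));
IgniteCache<Integer, String> cache = ignite.createNearCache("myCache", nearCfg);

The near cache holds local copies that Ignite keeps coherent via invalidation, so it trades a little write-path work for the read latency you're after.]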

Stan

From: Kia Rahmani
Sent: August 13, 2018 22:22
To: user@ignite.apache.org
Subject: Re: Effect of WriteSynchronizationMode on write operations inside 
a transaction in Apache Ignite

The main reason I want to add backups is to allow very fast reads (assuming
backup reads are allowed). 
Do you know of any way of keeping (stale) copies locally (next to the
clients) in order to save network latency in read operations, without
affecting write latency?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Update all data in cache spend a long time

2018-10-09 Thread Ilya Kasnacheev
Hello!

1. I don't think you should use LIMIT because it is very hard to make sure
that all entries are updated exactly once. It is better to partition data
by values of some field.
2. It is hard to say outright, did you try queryParallelism?

Regards,
-- 
Ilya Kasnacheev


Tue, Oct 9, 2018 at 16:27, Justin Ji :

> Ilya -
>
> Thanks for your reply and for pointing out the problem that may exist in my code. I will
> try to split the operation into smaller ones.
>
> And I still have two problems:
>
> 1. Can I split the operation with LIMIT?
> 2. Where is time spent in the operation?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Delay queue or similar?

2018-10-09 Thread Ilya Kasnacheev
Hello!

I think you could model it with ATOMIC cache:

while (true) {
    Long time = cache.get(host);
    if (time < System.currentTimeMillis() && cache.replace(host, time, time + hostDelay)) {
        // do request to host
        break;
    }
    else {
        // sleep or do other requests in the meantime
    }
}
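One detail the loop above assumes (names as in the sketch): the per-host
entry must exist before cache.get(host) runs, e.g.

// initialize each host's next-allowed-fetch time once;
// 0 makes the first request immediately eligible
IgniteCache<String, Long> cache = ignite.getOrCreateCache("crawlDelays");
cache.putIfAbsent(host, 0L);

The cache.replace(k, old, new) call is an atomic compare-and-swap, so only
one competing worker wins each per-host interval; that is what enforces the delay.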

Regards,
-- 
Ilya Kasnacheev


Tue, Oct 9, 2018 at 16:36, matt :

> Hi,
>
> I'm working on prototyping a web crawler using Ignite as the crawl-db. I'd
> like to ensure the crawler obeys the appropriate Crawl-Delay time as set in
> a site's robots.txt file. The way I have this set up now is by submitting
> "candidates" to an Ignite cache. A local listener is set up to receive
> successfully persisted items, which then submits the items to a queue for a
> fetcher to pull from.
>
> Goal: Support a delay time + maximum fetch concurrency, per-host, per-item.
>
> Put another way: "for each fetch item, ensure that requests made to the
> associated host are delayed as required, and no more than n-requests are
> made during each delayed run".
>
> This could be modeled as a Map, or maybe even by using a
> ScheduledExecutorService where each task represents a host and is repeated
> according to the delay time.
>
> I'd like to prevent items from being put into the java work queue if they
> are not yet ready to be fetched, and I'm slightly worried about the
> potential number of hosts (in reference to the java Map
> data-structure).
>
> So my question is: is there something that Ignite can provide for making
> this all work?
>
> - Matt
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Delay queue or similar?

2018-10-09 Thread matt
Hi,

I'm working on prototyping a web crawler using Ignite as the crawl-db. I'd
like to ensure the crawler obeys the appropriate Crawl-Delay time as set in
a site's robots.txt file. The way I have this set up now is by submitting
"candidates" to an Ignite cache. A local listener is set up to receive
successfully persisted items, which then submits the items to a queue for a
fetcher to pull from.

Goal: Support a delay time + maximum fetch concurrency, per-host, per-item.

Put another way: "for each fetch item, ensure that requests made to the
associated host are delayed as required, and no more than n-requests are
made during each delayed run".

This could be modeled as a Map, or maybe even by using a
ScheduledExecutorService where each task represents a host and is repeated
according to the delay time.

I'd like to prevent items from being put into the java work queue if they
are not yet ready to be fetched, and I'm slightly worried about the
potential number of hosts (in reference to the java Map
data-structure).

So my question is: is there something that Ignite can provide for making
this all work?

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: The problem after the ignite upgrade

2018-10-09 Thread Stanislav Lukyanov
Hi,

Clients should be able to connect to the cluster normally.
Please share logs if you’re still seeing this.

Stan 

From: hulitao198758
Sent: July 27, 2018 14:32
To: user@ignite.apache.org
Subject: Re: The problem after the ignite upgrade

I have persistent storage enabled. On version 2.3, as soon as I booted the
servers and activated the cluster, clients could connect directly. After
upgrading to 2.6, the first client connection after activating the server
cluster fails; I then need to deactivate the cluster, let the clients try to
connect again (which errors once more), and then reactivate, after which they
can connect successfully. Why?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Update all data in cache spend a long time

2018-10-09 Thread Justin Ji
Ilya - 

Thanks for your reply and for pointing out the problem that may exist in my code. I will
try to split the operation into smaller ones.

And I still have two problems:

1. Can I split the operation with LIMIT?
2. Where is time spent in the operation?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: BufferUnderflowException on GridRedisProtocolParser

2018-10-09 Thread Stanislav Lukyanov
It definitely does.
IGNITE-7153 seems to be applicable for objects greater than 8 kb. Is that your 
case?
If so then I guess that has to be the same issue.

Stan

From: Michael Fong
Sent: October 9, 2018 15:54
To: user@ignite.apache.org
Subject: BufferUnderflowException on GridRedisProtocolParser

Hi, all

We are evaluating Ignite's compatibility with the Redis protocol, and we hit an 
issue like the following:
Does the stack trace look a bit like IGNITE-7153?

java.nio.BufferUnderflowException
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
        at 
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
        at 
org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
        at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
        at 
org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
        at java.lang.Thread.run(Thread.java:745)
[2018-10-09 
12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol] 
Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
        at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
        at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
        at java.lang.Thread.run(Thread.java:745)




BufferUnderflowException on GridRedisProtocolParser

2018-10-09 Thread Michael Fong
Hi, all

We are evaluating Ignite's compatibility with the Redis protocol, and we hit an
issue like the following:
Does the stack trace look a bit like IGNITE-7153?

java.nio.BufferUnderflowException
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readBulkStr(GridRedisProtocolParser.java:111)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.redis.GridRedisProtocolParser.readArray(GridRedisProtocolParser.java:86)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:165)
at
org.apache.ignite.internal.processors.rest.protocols.tcp.GridTcpRestParser.decode(GridTcpRestParser.java:72)
at
org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(GridNioCodecFilter.java:114)
at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at
org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onMessageReceived(GridNioServer.java:3490)
at
org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(GridNioFilterChain.java:175)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1113)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2339)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[2018-10-09
12:45:49,946][ERROR][grid-nio-worker-tcp-rest-1-#37][GridTcpRestProtocol]
Closing NIO session because of unhandled exception.
class org.apache.ignite.internal.util.nio.GridNioException: null
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2365)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2110)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1764)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)


Re: Update all data in cache spend a long time

2018-10-09 Thread Ilya Kasnacheev
Hello!

Have you tried setting queryParallelism on this cache to some high value
(such as your number of CPU cores)?

Note that this query will likely cause lifting all data in cache to heap at
once, which is probably not what you expect. My recommendation is also to
try splitting this operation into smaller ones, see if they finish faster.
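[Editor's note: a hedged sketch of the first suggestion (8 is an example value; queryParallelism has to be set when the cache is created, it can't be changed on an existing cache):

CacheConfiguration<String, Object> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setQueryParallelism(8); // run each node-local part of a SQL query on 8 threads]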

Regards,
-- 
Ilya Kasnacheev


Tue, Oct 9, 2018 at 15:47, Justin Ji :

> Hi -
>
> I have a cache with more than 1 million records; when I update all the
> records in it, it takes a long time (almost 6 minutes).
>
> Here is the SQL:
> "update " + IgniteTableKey.T_DEVICE_ONLINE_STATUS.getCode()+ " set
> isOnline=0, mqttTime=" + System.currentTimeMillis() / 1000 + " where 1=1";
>
> And the Cache configuration is :
>
> CacheConfiguration cacheCfg = new CacheConfiguration<>();
> cacheCfg.setName(cacheName);
> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
> cacheCfg.setBackups(1);
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
>
> cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DeviceStatusCacheStore.class));
> cacheCfg.setWriteThrough(true);
> cacheCfg.setWriteBehindEnabled(true);
> cacheCfg.setReadThrough(true);
> cacheCfg.setWriteBehindFlushThreadCount(4);
> cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
> cacheCfg.setWriteBehindFlushSize(409600);
> cacheCfg.setWriteBehindBatchSize(1024);
> cacheCfg.setStoreKeepBinary(true);
> cfg.setCacheConfiguration(cacheCfg);
>
> So I want to know: is there a way to speed up the execution?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Update all data in cache spend a long time

2018-10-09 Thread Justin Ji
Hi - 

I have a cache with more than 1 million records; when I update all the
records in it, it takes a long time (almost 6 minutes).

Here is the SQL:
"update " + IgniteTableKey.T_DEVICE_ONLINE_STATUS.getCode()+ " set
isOnline=0, mqttTime=" + System.currentTimeMillis() / 1000 + " where 1=1";

And the Cache configuration is :

CacheConfiguration cacheCfg = new CacheConfiguration<>();
cacheCfg.setName(cacheName);
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setBackups(1);
cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
   
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DeviceStatusCacheStore.class));
cacheCfg.setWriteThrough(true);
cacheCfg.setWriteBehindEnabled(true);
cacheCfg.setReadThrough(true);
cacheCfg.setWriteBehindFlushThreadCount(4);
cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
cacheCfg.setWriteBehindFlushSize(409600);
cacheCfg.setWriteBehindBatchSize(1024);
cacheCfg.setStoreKeepBinary(true);
cfg.setCacheConfiguration(cacheCfg);

So I want to know: is there a way to speed up the execution?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite as Hibernate L2 Cache

2018-10-09 Thread yarden
Thanks a lot for your answer.
Can you share what was the error you encountered on Hibernate 5.2?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/