Re: compute ignite data with spark

2018-02-27 Thread shawn.du

Hi Denis,

Thanks, that's cool! Looking forward to the 2.4.0 release.

Thanks
Shawn

On 2/28/2018 12:11, Denis Magda wrote:

> Hi Shawn,
>
> In addition to RDDs, you'll be able to use the Data Frames API with
> Ignite as storage for Spark soon. It will be released in the coming
> weeks in Ignite 2.4.
>
> As for your question on how Ignite compares to Spark: the first is not
> just a computational engine. It's a distributed database (or a cache,
> depending on your use case) with a variety of APIs, including the
> compute grid. Though you can use Ignite as storage for Spark, Ignite's
> native APIs should be more performant.
>
> --
> Denis





Re: compute ignite data with spark

2018-02-27 Thread Denis Magda
Hi Shawn,

In addition to RDDs, you'll be able to use the Data Frames API with
Ignite as storage for Spark soon. It will be released in the coming
weeks in Ignite 2.4.

As for your question on how Ignite compares to Spark: the first is not
just a computational engine. It's a distributed database (or a cache,
depending on your use case) with a variety of APIs, including the
compute grid. Though you can use Ignite as storage for Spark, Ignite's
native APIs should be more performant.

--
Denis
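
As a taste of what the Data Frames integration looks like, below is a
minimal Java sketch of reading an Ignite SQL table as a Spark DataFrame.
It assumes the ignite-spark module from the 2.4 line; the "ignite" format
name, the "config"/"table" option keys, the config file path and the
table name are illustrative assumptions, not confirmed API:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class IgniteDataFrameExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("ignite-dataframe-example")
                .master("local")
                .getOrCreate();

            // Read an Ignite SQL table as a Spark DataFrame.
            Dataset<Row> persons = spark.read()
                .format("ignite")                      // Ignite data source name (assumed)
                .option("config", "ignite-config.xml") // Ignite Spring config (assumed path)
                .option("table", "person")             // Ignite SQL table (assumed name)
                .load();

            persons.select("name").show();

            spark.close();
        }
    }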



On Mon, Feb 26, 2018 at 2:25 AM, Stanislav Lukyanov 
wrote:

> Hi Shawn,
>
>
>
> You can use Ignite standalone and you can also use it together with Spark.
>
> Please take a look at these SO question and an article:
>
> https://stackoverflow.com/questions/36036910/apache-spark-vs-apache-ignite
>
> https://insidebigdata.com/2016/06/20/apache-ignite-and-apache-spark-complementary-in-memory-computing-solutions/
>
>
>
> Stan
>
>
>
> *From: *shawn.du 
> *Sent: *24 February 2018 9:56
> *To: *user 
> *Subject: *compute ignite data with spark
>
>
>
> Hi,
>
>
>
> Spark is a compute engine.  Ignite also provides a compute feature, and
> Ignite can integrate with Spark.
>
> We are using Ignite's compute map-reduce feature now.  It is very fast.
>
> I am just curious how Spark compares with Ignite on computing.
>
> Is it possible to compute Ignite cache data using the Spark API?
>
>
>
> Thanks
>
> Shawn
>
>
>
>
>
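
On Shawn's last question: yes, cache data can be processed with the Spark
API via the shared RDD integration in the ignite-spark module. A minimal
Java sketch follows; the cache name, config path and the filter predicate
are assumptions for illustration:

    import org.apache.ignite.spark.JavaIgniteContext;
    import org.apache.ignite.spark.JavaIgniteRDD;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkOverIgnite {
        public static void main(String[] args) {
            JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("spark-over-ignite").setMaster("local"));

            // Wraps the Spark context and connects to the Ignite cluster
            // using the given Spring XML config (path is an assumption).
            JavaIgniteContext<Integer, Integer> ic =
                new JavaIgniteContext<>(sc, "ignite-config.xml");

            // Expose an existing Ignite cache as a Spark RDD and run a
            // regular Spark transformation over its entries.
            JavaIgniteRDD<Integer, Integer> rdd = ic.fromCache("myCache");
            long bigValues = rdd.filter(e -> e._2() > 100).count();

            System.out.println("Entries with value > 100: " + bigValues);
            sc.stop();
        }
    }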


Re: 2.4.0 Release

2018-02-27 Thread Roman Guseinov
Yes, it makes sense to update the top of the release planning page. I will
try to find out who can do this.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.4.0 Release

2018-02-27 Thread David Wimsey
Yea, I’m currently building 2.4.0 myself for testing purposes but was hoping 
for a release soon.

Might I suggest updating the top of the release planning page to indicate 
January 31st 2018 is an unlikely release date at this stage.

> On Feb 27, 2018, at 10:37 PM, Roman Guseinov  wrote:
> 
> Hi David,
> 
> It looks like there are several tickets which need to be done [1]. Almost
> all of them are related to documentation. I guess Ignite 2.4 will be
> released soon (most likely in March).
> 
> If you want to try the new version right now, you can build 2.4 from the
> source code [2]. Build instructions are located in DEVNOTES.txt [3].
> 
> [1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.4 
> [2] https://github.com/apache/ignite/tree/ignite-2.4
> [3] https://github.com/apache/ignite/blob/ignite-2.4/DEVNOTES.txt
> 
> Best Regards,
> Roman
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: 2.4.0 Release

2018-02-27 Thread Roman Guseinov
Hi David,

It looks like there are several tickets which need to be done [1]. Almost
all of them are related to documentation. I guess Ignite 2.4 will be
released soon (most likely in March).

If you want to try the new version right now, you can build 2.4 from the
source code [2]. Build instructions are located in DEVNOTES.txt [3].

[1] https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.4 
[2] https://github.com/apache/ignite/tree/ignite-2.4
[3] https://github.com/apache/ignite/blob/ignite-2.4/DEVNOTES.txt

Best Regards,
Roman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


2.4.0 Release

2018-02-27 Thread David Wimsey
I’m looking for the 2.4.0 release and not seeing anything on the downloads 
page, but I do see that the pom.xml on master has been bumped to 2.5.0-SNAPSHOT.

Is any work happening on the 2.4.0 release?

Re: SortedEvictionPolicy doesn't work as expected

2018-02-27 Thread mamaco
After reading Valentin's example and a user discussion, I found that what
you mentioned is correct: in a later test, after rolling Ignite back from
2.3.0 to 1.7.0, everything works normally.

But *'how to read on-heap entries directly in Ignite 2.0+'* is still a
major pain point for me, because what I need is the 'top 100 entries'. Is
there any approach to do this?

Marco
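
One way to get the 'top 100 entries' without relying on eviction order is
to let Ignite SQL do the sorting. A minimal sketch, reusing the
Integer-keyed cache from the example earlier in this thread (the cache
name "foo" is assumed to already exist):

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class Top100Query {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, Integer> cache = ignite.cache("foo");

                // Sort by value in descending order on the server side
                // and return only the first 100 rows.
                SqlFieldsQuery qry = new SqlFieldsQuery(
                    "select _key, _val from Integer order by _val desc limit 100");

                try (QueryCursor<List<?>> cursor = cache.query(qry)) {
                    for (List<?> row : cursor)
                        System.out.println(row.get(0) + "-->" + row.get(1));
                }
            }
        }
    }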





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: SSL Exception

2018-02-27 Thread Bryan Rosander
Hi Ilya,

It looks like that error corresponds to restarts of the particular pods
we're running.  We're currently running in Kubernetes as a stateful set.

I think it has to do with the node coming back up with the same address and
hostname but a different identifier.  I see this in the logs:
Caused by:
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$HandshakeException:
Remote node ID is not as expected
[expected=704fa7c2-bb6a-44bb-89c6-06722a3abac8,
rcvd=922d993c-6b08-4bee-92f2-130e108e3657]

After manually setting consistentId in the configuration, it seems that I
can bounce the pods at will without hitting this issue. I'll follow up if
we see it again.

Thanks,
Bryan
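
For anyone hitting the same thing, the consistentId setting Bryan
mentions is a one-liner on the node configuration. A sketch; deriving the
ID from the pod's stable hostname is an assumption that fits a stateful
set:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ConsistentIdNode {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // Pin the node identity so restarts of the same pod rejoin
            // the cluster with the same ID (the hostname is stable in a
            // stateful set).
            cfg.setConsistentId(System.getenv("HOSTNAME"));

            Ignite ignite = Ignition.start(cfg);
        }
    }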

On Tue, Feb 27, 2018 at 11:14 AM, Ilya Kasnacheev  wrote:

> Hello Bryan!
>
> 2nd attempt to send this mail.
>
>
> Can you search in the log prior to the first problematic "Accepted
> incoming communication connection"? I assume there was a communication
> connection already back when the node was started, and you should look why
> it was closed in the first place. That might provide clues.
>
> Also, logs from remote node (one that makes those connections) at the same
> time might provide clues.
>
> Don't hesitate to provide full node logs.
>
> Regards,
>
>
> --
> Ilya Kasnacheev
>
> 2018-02-27 18:32 GMT+03:00 Ilya Kasnacheev :
>
>> Hello Bryan!
>>
>> Can you search in the log prior to the first problematic "Accepted
>> incoming communication connection"? I assume there was a communication
>> connection already back when the node was started, and you should look why
>> it was closed in the first place. That might provide clues.
>>
>> Also, logs from remote node (one that makes those connections) at the
>> same time might provide clues.
>>
>> Don't hesitate to provide full node logs.
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>> 2018-02-27 18:03 GMT+03:00 Bryan Rosander :
>>
>>> Also, this is ignite 2.3.0, please let me know if there's any more
>>> information I can provide.
>>>
>>> On Tue, Feb 27, 2018 at 9:59 AM, Bryan Rosander wrote:
>>>
 We're using ignite in a 3 node grid with SSL just hit an issue where
 after a period of time (hours after starting), 2 of the 3 nodes seem to
 have lost connectivity and we see the following stack trace over and over.

 The cluster starts up fine so I doubt it's an issue with the
 certificates or keystores.  Also bouncing the ignite instances seems to
 have "fixed" it.  Any ideas as to what could have happened?

 Thanks,
 Bryan

 2018-02-27 14:52:36,071 INFO  [grid-nio-worker-tcp-comm-2-#27]
 o.a.i.s.c.tcp.TcpCommunicationSpi - Accepted incoming communication
 connection [locAddr=/100.96.3.72:47100, rmtAddr=/100.96.6.183:45484]
 2018-02-27 14:52:37,072 ERROR [grid-nio-worker-tcp-comm-2-#27]
 o.a.i.s.c.tcp.TcpCommunicationSpi - Failed to process selector key
 [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
 [super=AbstractNioClientWorker [idx=2, bytesRcvd=17479234, bytesSent=0,
 bytesRcvd0=2536, bytesSent0=0, select=true, super=GridWorker
 [name=grid-nio-worker-tcp-comm-2, igniteInstanceName=null,
 finished=false, hashCode=1854311052, interrupted=false,
 runner=grid-nio-worker-tcp-comm-2-#27]]],
 writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
 readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
 inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
 100.96.3.72:47100, rmtAddr=/100.96.6.183:45484,
 createTime=1519743156030, closeTime=0, bytesSent=2448, bytesRcvd=2536,
 bytesSent0=2448, bytesRcvd0=2536, sndSchedTime=1519743156071,
 lastSndTime=1519743156071, lastRcvTime=1519743156071, readsPaused=false,
 filterChain=FilterChain[filters=[GridNioCodecFilter
 [parser=o.a.i.i.util.nio.GridDirectParser@497350a6, directMode=true],
 GridConnectionBytesVerifyFilter, SSL filter], accepted=true]]]
 javax.net.ssl.SSLException: Failed to encrypt data (SSL engine error)
 [status=CLOSED, handshakeStatus=NEED_UNWRAP, ses=GridSelectorNioSessionImpl
 [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
 bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
 select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
 igniteInstanceName=null, finished=false, hashCode=1854311052,
 interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
 writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
 readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
 inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
 100.96.3.72:47100, rmtAddr=/100.96.6.183:45484,
 createTime=1519743156030, closeTime=0, bytesSent=2448, bytesRcvd=2536,
 bytesSent0=2448, bytesRcvd0=2536, sndSchedTime=1519743156071,
 

Re: Native persistence Cache data not loading when restart the server node

2018-02-27 Thread Dmitry Pavlov
Could you try to delete the node-00-...\lock file, but only this file,
and only if no Ignite process is currently holding the lock?

Which OS do you use?

Tue, 27 Feb 2018 at 21:03, siva :

> Thanks for the reply.
> Yes, we have already tried that: killing the Ignite process and deleting
> the other persistence folder (the newly created one). Again and again it
> creates a new persistence folder, with the same result.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to configure a cluster as a persistent, replicated SQL database

2018-02-27 Thread joseheitor
Thanks, Denis.

I read the docs and also followed this short video demo/tutorial:

https://www.youtube.com/watch?v=FKS8A86h-VY

It worked as described, so I then extended the exercise by adding an
additional (3rd) node, using the persistent-store configuration on all
nodes and activating the cluster before transacting...as per the docs.

Again it all worked as per the original demo, but the persistence now also
worked.

My concern and confusion is the following:

Assuming we start 3 nodes...

- if we bring down the first node (Primary?), availability of the data is
lost ... even though there are two other active nodes in the cluster.
Doesn't the system automatically elect a new Primary if there are enough
active nodes to maintain a replicated backup? How then do we achieve
'fault-tolerance' and 'high-availability'?

- if we bring down all but the first node (Primary?), data access continues
to be available for review and manipulation. But surely this should now
fail, because there is no active 'Secondary' node to back up or replicate
any changes to? Doesn't this expose a risk to data 'consistency', as there
is now no backup of the data changes?

Or is there a way to configure things so that the system will behave as
expected (above)?

Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Native persistence Cache data not loading when restart the server node

2018-02-27 Thread siva
Thanks for the reply.
Yes, we have already tried that: killing the Ignite process and deleting
the other persistence folder (the newly created one). Again and again it
creates a new persistence folder, with the same result.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Native persistence Cache data not loading when restart the server node

2018-02-27 Thread Dmitry Pavlov
Hi,

Ignite assumes there is another node up and running which holds a lock on
the file /home/apache-ignite-fabric-2.3.0-bin/work/db/node00-17cba8d3-43b3-4e43-8546-d52ec3b20f02/lock.

This protects a new Ignite node from using the same DB folder concurrently
with another running node.

Is the previous node's process still alive?

Once the previous process is shut down, the next restart will use the
'node00' folder with the data.

Sincerely,
Dmitriy Pavlov
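
A related note: giving each node a fixed consistent ID avoids new
node00-<UUID> folders being created in the first place, since every
restart then resolves to the same persistence folder under work/db. A
minimal sketch; the ID string itself is an assumption:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class FixedPdsFolderNode {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // With a fixed consistent ID every restart maps to the same
            // persistence folder instead of a new node00-<UUID> one.
            cfg.setConsistentId("server-node-1");

            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
            cfg.setDataStorageConfiguration(storageCfg);

            Ignition.start(cfg);
        }
    }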

Tue, 27 Feb 2018 at 20:47, siva :

> Hi,
> We are using Ignite native persistence with one server node and one
> client node. Due to a network problem the system shut down, and when we
> try to start Ignite, it creates a new persistent storage folder under
> work/db, so the data under the old persistent storage folder is not
> loaded into the cache. When we checked the log file, we found:
> "2018-02-27 17:43:11 INFO PdsFoldersResolver:475 - *Unable to acquire
> lock to file
> [/home/apache-ignite-fabric-2.3.0-bin/work/db/node00-17cba8d3-43b3-4e43-8546-d52ec3b20f02],
> reason: null".*
> Can you guys help us resolve this issue and get the data into the cache?
> All our application data is on that node only. I have attached the
> screenshots.
> 
> 
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SortedEvictionPolicy doesn't work as expected

2018-02-27 Thread mamaco
Hi Denis,
Thank you for the explanation. So the cause is that Ignite uses both
on-heap and off-heap memory to store the entries, and unfortunately I read
everything from off-heap. So question #1 is: *how could I read the on-heap
entries directly?*

*My use case* is to get the top 100 entries by sorting a particular column
in descending order; see the details below:
Assume I have a continuous stream coming in from Kafka and being saved to
a partitioned cache located on 4 nodes. I could use a map-reduce-like
function to implement an 'order by *** limit 100' query; however, the
cache itself is really huge, so there is no performance guarantee for
real-time user queries. A simple, on-going solution is expected.

Question #2: *I wonder if SortedEvictionPolicy could be the best practice.
Will this work?*

Thanks again.
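
On question #1, the peek APIs can read what currently sits on-heap on the
local node. A minimal sketch, reusing the Integer-keyed cache from this
thread:

    import javax.cache.Cache;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.CachePeekMode;

    public class OnHeapPeek {
        // Prints only the entries currently kept in the on-heap layer of
        // the local node (i.e. the ones retained by the eviction policy).
        static void printOnHeap(IgniteCache<Integer, Integer> cache) {
            for (Cache.Entry<Integer, Integer> e : cache.localEntries(CachePeekMode.ONHEAP))
                System.out.println(e.getKey() + "-->" + e.getValue());
        }
    }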



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Setting userVersion on client node causes ignite.active(true) to fail

2018-02-27 Thread Dave Harvey
The server node was already active, and when I commented out
ignite.active(true) the client came up.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SSL Exception

2018-02-27 Thread Ilya Kasnacheev
Hello Bryan!

2nd attempt to send this mail.


Can you search in the log prior to the first problematic "Accepted incoming
communication connection"? I assume there was a communication connection
already back when the node was started, and you should look why it was
closed in the first place. That might provide clues.

Also, logs from remote node (one that makes those connections) at the same
time might provide clues.

Don't hesitate to provide full node logs.

Regards,


-- 
Ilya Kasnacheev

2018-02-27 18:32 GMT+03:00 Ilya Kasnacheev :

> Hello Bryan!
>
> Can you search in the log prior to the first problematic "Accepted
> incoming communication connection"? I assume there was a communication
> connection already back when the node was started, and you should look why
> it was closed in the first place. That might provide clues.
>
> Also, logs from remote node (one that makes those connections) at the same
> time might provide clues.
>
> Don't hesitate to provide full node logs.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-02-27 18:03 GMT+03:00 Bryan Rosander :
>
>> Also, this is ignite 2.3.0, please let me know if there's any more
>> information I can provide.
>>
>> On Tue, Feb 27, 2018 at 9:59 AM, Bryan Rosander 
>> wrote:
>>
>>> We're using ignite in a 3 node grid with SSL just hit an issue where
>>> after a period of time (hours after starting), 2 of the 3 nodes seem to
>>> have lost connectivity and we see the following stack trace over and over.
>>>
>>> The cluster starts up fine so I doubt it's an issue with the
>>> certificates or keystores.  Also bouncing the ignite instances seems to
>>> have "fixed" it.  Any ideas as to what could have happened?
>>>
>>> Thanks,
>>> Bryan
>>>
>>> 2018-02-27 14:52:36,071 INFO  [grid-nio-worker-tcp-comm-2-#27]
>>> o.a.i.s.c.tcp.TcpCommunicationSpi - Accepted incoming communication
>>> connection [locAddr=/100.96.3.72:47100, rmtAddr=/100.96.6.183:45484]
>>> 2018-02-27 14:52:37,072 ERROR [grid-nio-worker-tcp-comm-2-#27]
>>> o.a.i.s.c.tcp.TcpCommunicationSpi - Failed to process selector key
>>> [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
>>> [super=AbstractNioClientWorker [idx=2, bytesRcvd=17479234, bytesSent=0,
>>> bytesRcvd0=2536, bytesSent0=0, select=true, super=GridWorker
>>> [name=grid-nio-worker-tcp-comm-2, igniteInstanceName=null,
>>> finished=false, hashCode=1854311052, interrupted=false,
>>> runner=grid-nio-worker-tcp-comm-2-#27]]], 
>>> writeBuf=java.nio.DirectByteBuffer[pos=0
>>> lim=10 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768
>>> cap=32768], inRecovery=null, outRecovery=null, super=GridNioSessionImpl
>>> [locAddr=/100.96.3.72:47100, rmtAddr=/100.96.6.183:45484,
>>> createTime=1519743156030, closeTime=0, bytesSent=2448, bytesRcvd=2536,
>>> bytesSent0=2448, bytesRcvd0=2536, sndSchedTime=1519743156071,
>>> lastSndTime=1519743156071, lastRcvTime=1519743156071, readsPaused=false,
>>> filterChain=FilterChain[filters=[GridNioCodecFilter
>>> [parser=o.a.i.i.util.nio.GridDirectParser@497350a6, directMode=true],
>>> GridConnectionBytesVerifyFilter, SSL filter], accepted=true]]]
>>> javax.net.ssl.SSLException: Failed to encrypt data (SSL engine error)
>>> [status=CLOSED, handshakeStatus=NEED_UNWRAP, ses=GridSelectorNioSessionImpl
>>> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
>>> bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
>>> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
>>> igniteInstanceName=null, finished=false, hashCode=1854311052,
>>> interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
>>> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
>>> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
>>> inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
>>> 100.96.3.72:47100, rmtAddr=/100.96.6.183:45484,
>>> createTime=1519743156030, closeTime=0, bytesSent=2448, bytesRcvd=2536,
>>> bytesSent0=2448, bytesRcvd0=2536, sndSchedTime=1519743156071,
>>> lastSndTime=1519743156071, lastRcvTime=1519743156071, readsPaused=false,
>>> filterChain=FilterChain[filters=[GridNioCodecFilter
>>> [parser=org.apache.ignite.internal.util.nio.GridDirectParser@497350a6,
>>> directMode=true], GridConnectionBytesVerifyFilter, SSL filter],
>>> accepted=true]]]
>>> at org.apache.ignite.internal.util.nio.ssl.GridNioSslHandler.en
>>> crypt(GridNioSslHandler.java:379)
>>> at org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter.enc
>>> rypt(GridNioSslFilter.java:270)
>>> at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioC
>>> lientWorker.processWriteSsl(GridNioServer.java:1418)
>>> at org.apache.ignite.internal.util.nio.GridNioServer$DirectNioC
>>> lientWorker.processWrite(GridNioServer.java:1287)
>>> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNi
>>> 

Re: Ignite Local Persistence slowing after reads starting hit the disk

2018-02-27 Thread Denis Mekhanikov
Hi!

Most probably, you get the performance drop when the checkpointing process
starts.
So, if you have a lot of writes, they will be limited by the speed of your
disk.

You can try tuning durable memory to achieve better performance.
Here is the documentation on this topic:
https://apacheignite.readme.io/docs/durable-memory-tuning
Write throttling should help in your situation.

Denis
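
To make the write-throttling suggestion concrete, here is a minimal Java
sketch of the relevant DataStorageConfiguration settings (the 2 GB
checkpoint page buffer size is an illustrative assumption, not a
recommendation):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ThrottledPersistenceNode {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

            // Slow writers down gradually instead of freezing them when
            // the checkpoint buffer fills up.
            storageCfg.setWriteThrottlingEnabled(true);

            // A larger checkpoint page buffer smooths out checkpoint
            // spikes (2 GB here is an illustrative value).
            storageCfg.getDefaultDataRegionConfiguration()
                .setCheckpointPageBufferSize(2L * 1024 * 1024 * 1024);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDataStorageConfiguration(storageCfg);
            Ignition.start(cfg);
        }
    }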


Tue, 27 Feb 2018 at 3:19, venuchitta :

> Hi,
>
> I have a use case where, in a Windows environment, we want to cache huge
> blobs that may range from 20KB to 2MB on boxes where other services are
> also running (so the cache should consume less CPU). They need to have
> persistence capability across system crashes or process restarts. I am
> using an Intel Xeon with 12 logical cores, 128 GB RAM and a 1TB SSD.
>
>   I am using C# Ignite Apis with the following configuration:
>
> var dataRegionConfiguration = new DataRegionConfiguration
> {
>     Name = "Local10GB",
>     // Long literals avoid compile-time int overflow for sizes > 2 GB.
>     InitialSize = 10L * 1024 * 1024 * 1024,
>     MaxSize = 20L * 1024 * 1024 * 1024,
>     PersistenceEnabled = true,
>     MetricsEnabled = true
> };
> var dataStorageConfiguration = new DataStorageConfiguration
> {
>     MetricsEnabled = true,
>     DefaultDataRegionConfiguration = dataRegionConfiguration,
>     // WalMode = WalMode.None,
> };
>
> var igniteConfiguration = new IgniteConfiguration
> {
>     CacheConfiguration = new[]
>     {
>         new CacheConfiguration
>         {
>             Name = "LocalCache",
>             CacheMode = CacheMode.Local,
>             Backups = 0,
>             EnableStatistics = true,
>             DataRegionName = "Local10GB"
>         }
>     },
>     DataStorageConfiguration = dataStorageConfiguration,
> };
>
> I have a wrapper around Ignite which fires read/write requests with a
> specific data size. Right now I am running tests with 90% reads and 10%
> writes, a 20KB data size, and a TPS of 4000 requests per second.
> Initially it handles that, but soon after reads start hitting the disk
> (which happens once in-memory space is full), checkpointing becomes
> latent, and thus writes and then reads do too. The overall transactions
> per second drops towards the low 100's. I even turned the WAL off, but
> saw no improvement.
>
> I want to know if Ignite is ideal for the use case I mentioned and if
> there are any settings I can tune to improve the situation. Any help or
> suggestions are appreciated.
>
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SortedEvictionPolicy doesn't work as expected

2018-02-27 Thread Denis Mekhanikov
Hi!

The eviction policy specifies which entries should be kept in on-heap
memory in deserialized format.
So, the access time will be the lowest for such entries when they are read
from the local node.
But such entries are stored twice: once in off-heap and once in on-heap.
Evicted entries are still stored in off-heap memory, so you are able to
access such values even after they are evicted from on-heap space.

Entries in off-heap memory can be evicted per page, based on last access
time. This can be configured using the following method:
*DataRegionConfiguration.setPageEvictionMode(...)*

You can find all this information and more in the documentation:
https://apacheignite.readme.io/docs/evictions

P.S.
Instances of Integer shouldn't be checked for equality using the ==
operator.
They should either be unboxed first or compared using equals().
*CustomizedComparator.compare(...)* can be rewritten as follows:
return Integer.compare(b.getValue(), a.getValue());

Denis
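
For completeness, a corrected comparator along the lines of the P.S.
above could look like this (same class and type parameters as the
original example):

    import java.io.Serializable;
    import java.util.Comparator;
    import org.apache.ignite.cache.eviction.EvictableEntry;

    public class CustomizedComparator implements
        Serializable, Comparator<EvictableEntry<Integer, Integer>> {
        private static final long serialVersionUID = 4755938152431669886L;

        @Override public int compare(EvictableEntry<Integer, Integer> a,
            EvictableEntry<Integer, Integer> b) {
            // Integer.compare works on unboxed values, avoiding the
            // broken reference comparison that == performs on boxed
            // Integers.
            return Integer.compare(b.getValue(), a.getValue());
        }
    }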

Tue, 27 Feb 2018 at 6:39, mamaco :

> Here I have five entries, as below:
> (14,4);
> (21,1);
> (32,2);
> (113,3);
> (15,5);
> and I want to use SortedEvictionPolicy to keep the 3 entries below (sort
> by values in descending order and keep the top 3 items):
> 15-->5
> 14-->4
> 113-->3
>
> The actual output is:
> 21-->1
> 32-->2
> 113-->3
> 14-->4
> 15-->5
>
> Issue 1: the order is wrong; the CustomizedComparator class doesn't seem
> to work.
> Issue 2: only 3 entries are expected, but it returns 5; MaxSize doesn't
> work.
> Did I miss anything?
>
> The source code:
>
> package IgniteTesting.Expiry;
> import javax.cache.Cache.Entry;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.eviction.sorted.SortedEvictionPolicy;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.SqlQuery;
> import org.apache.ignite.configuration.CacheConfiguration;
>
> public class Application
> {
>     public static void main( String[] args )
>     {
>         try (Ignite ignite = Ignition.start("..\\test.xml")) {
>             CacheConfiguration<Integer, Integer> cfg = new CacheConfiguration<>();
>             cfg.setName("foo");
>             cfg.setOnheapCacheEnabled(true);
>             cfg.setCacheMode(CacheMode.LOCAL);
>             SortedEvictionPolicy<Integer, Integer> SEP =
>                 new SortedEvictionPolicy<>(3, new CustomizedComparator());
>             cfg.setEvictionPolicy(SEP);
>             cfg.setIndexedTypes(Integer.class, Integer.class);
>
>             try (IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(cfg)) {
>                 cache.put(14, 4);
>                 cache.put(21, 1);
>                 cache.put(32, 2);
>                 cache.put(113, 3);
>                 cache.put(15, 5);
>
>                 SqlQuery<Integer, Integer> sql =
>                     new SqlQuery<>(Integer.class, "select * from Integer");
>                 try (QueryCursor<Entry<Integer, Integer>> cursor = cache.query(sql)) {
>                     for (Entry<Integer, Integer> e : cursor)
>                         System.out.println(e.getKey() + "-->" + e.getValue().toString());
>                 }
>             }
>         }
>     }
> }
>
>
> package IgniteTesting.Expiry;
>
> import java.io.Serializable;
> import java.util.Comparator;
> import org.apache.ignite.cache.eviction.EvictableEntry;
>
> public class CustomizedComparator implements
>     Serializable, Comparator<EvictableEntry<Integer, Integer>> {
>     private static final long serialVersionUID = 4755938152431669886L;
>
>     public int compare(EvictableEntry<Integer, Integer> a,
>         EvictableEntry<Integer, Integer> b) {
>         return a.getValue() > b.getValue() ? -1 : a.getValue() ==
>             b.getValue() ? 0 : 1;
>     }
> }
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SSL Exception

2018-02-27 Thread Bryan Rosander
Also, this is ignite 2.3.0, please let me know if there's any more
information I can provide.

On Tue, Feb 27, 2018 at 9:59 AM, Bryan Rosander 
wrote:

> We're using ignite in a 3 node grid with SSL just hit an issue where after
> a period of time (hours after starting), 2 of the 3 nodes seem to have lost
> connectivity and we see the following stack trace over and over.
>
> The cluster starts up fine so I doubt it's an issue with the certificates
> or keystores.  Also bouncing the ignite instances seems to have "fixed"
> it.  Any ideas as to what could have happened?
>
> Thanks,
> Bryan
>
> 2018-02-27 14:52:36,071 INFO  [grid-nio-worker-tcp-comm-2-#27]
> o.a.i.s.c.tcp.TcpCommunicationSpi - Accepted incoming communication
> connection [locAddr=/100.96.3.72:47100, rmtAddr=/100.96.6.183:45484]
> 2018-02-27 14:52:37,072 ERROR [grid-nio-worker-tcp-comm-2-#27]
> o.a.i.s.c.tcp.TcpCommunicationSpi - Failed to process selector key 
> [ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
> bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
> igniteInstanceName=null, finished=false, hashCode=1854311052,
> interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
> 100.96.3.72:47100, rmtAddr=/100.96.6.183:45484, createTime=1519743156030,
> closeTime=0, bytesSent=2448, bytesRcvd=2536, bytesSent0=2448,
> bytesRcvd0=2536, sndSchedTime=1519743156071, lastSndTime=1519743156071,
> lastRcvTime=1519743156071, readsPaused=false, 
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@497350a6, directMode=true],
> GridConnectionBytesVerifyFilter, SSL filter], accepted=true]]]
> javax.net.ssl.SSLException: Failed to encrypt data (SSL engine error)
> [status=CLOSED, handshakeStatus=NEED_UNWRAP, ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
> bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
> igniteInstanceName=null, finished=false, hashCode=1854311052,
> interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
> 100.96.3.72:47100, rmtAddr=/100.96.6.183:45484, createTime=1519743156030,
> closeTime=0, bytesSent=2448, bytesRcvd=2536, bytesSent0=2448,
> bytesRcvd0=2536, sndSchedTime=1519743156071, lastSndTime=1519743156071,
> lastRcvTime=1519743156071, readsPaused=false, 
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=org.apache.ignite.internal.util.nio.GridDirectParser@497350a6,
> directMode=true], GridConnectionBytesVerifyFilter, SSL filter],
> accepted=true]]]
> at org.apache.ignite.internal.util.nio.ssl.
> GridNioSslHandler.encrypt(GridNioSslHandler.java:379)
> at org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter.
> encrypt(GridNioSslFilter.java:270)
> at org.apache.ignite.internal.util.nio.GridNioServer$
> DirectNioClientWorker.processWriteSsl(GridNioServer.java:1418)
> at org.apache.ignite.internal.util.nio.GridNioServer$
> DirectNioClientWorker.processWrite(GridNioServer.java:1287)
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.processSelectedKeysOptimized(
> GridNioServer.java:2275)
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.bodyInternal(GridNioServer.java:2048)
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.body(GridNioServer.java:1717)
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
> 2018-02-27 14:52:37,072 WARN  [grid-nio-worker-tcp-comm-2-#27]
> o.a.i.s.c.tcp.TcpCommunicationSpi - Closing NIO session because of
> unhandled exception [cls=class o.a.i.i.util.nio.GridNioException,
> msg=Failed to encrypt data (SSL engine error) [status=CLOSED,
> handshakeStatus=NEED_UNWRAP, ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
> bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
> select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
> igniteInstanceName=null, finished=false, hashCode=1854311052,
> interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=null, outRecovery=null, super=GridNioSessionImpl 

SSL Exception

2018-02-27 Thread Bryan Rosander
We're using Ignite in a 3-node grid with SSL and just hit an issue where,
after a period of time (hours after starting), 2 of the 3 nodes seem to
have lost connectivity and we see the following stack trace over and over.

The cluster starts up fine, so I doubt it's an issue with the certificates
or keystores.  Also, bouncing the Ignite instances seems to have "fixed"
it.  Any ideas as to what could have happened?

Thanks,
Bryan

2018-02-27 14:52:36,071 INFO  [grid-nio-worker-tcp-comm-2-#27]
o.a.i.s.c.tcp.TcpCommunicationSpi - Accepted incoming communication
connection [locAddr=/100.96.3.72:47100, rmtAddr=/100.96.6.183:45484]
2018-02-27 14:52:37,072 ERROR [grid-nio-worker-tcp-comm-2-#27]
o.a.i.s.c.tcp.TcpCommunicationSpi - Failed to process selector key
[ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
[super=AbstractNioClientWorker [idx=2, bytesRcvd=17479234, bytesSent=0,
bytesRcvd0=2536, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-tcp-comm-2, igniteInstanceName=null, finished=false,
hashCode=1854311052, interrupted=false,
runner=grid-nio-worker-tcp-comm-2-#27]]],
writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
100.96.3.72:47100, rmtAddr=/100.96.6.183:45484, createTime=1519743156030,
closeTime=0, bytesSent=2448, bytesRcvd=2536, bytesSent0=2448,
bytesRcvd0=2536, sndSchedTime=1519743156071, lastSndTime=1519743156071,
lastRcvTime=1519743156071, readsPaused=false,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=o.a.i.i.util.nio.GridDirectParser@497350a6, directMode=true],
GridConnectionBytesVerifyFilter, SSL filter], accepted=true]]]
javax.net.ssl.SSLException: Failed to encrypt data (SSL engine error)
[status=CLOSED, handshakeStatus=NEED_UNWRAP, ses=GridSelectorNioSessionImpl
[worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
igniteInstanceName=null, finished=false, hashCode=1854311052,
interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
100.96.3.72:47100, rmtAddr=/100.96.6.183:45484, createTime=1519743156030,
closeTime=0, bytesSent=2448, bytesRcvd=2536, bytesSent0=2448,
bytesRcvd0=2536, sndSchedTime=1519743156071, lastSndTime=1519743156071,
lastRcvTime=1519743156071, readsPaused=false,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=org.apache.ignite.internal.util.nio.GridDirectParser@497350a6,
directMode=true], GridConnectionBytesVerifyFilter, SSL filter],
accepted=true]]]
at
org.apache.ignite.internal.util.nio.ssl.GridNioSslHandler.encrypt(GridNioSslHandler.java:379)
at
org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter.encrypt(GridNioSslFilter.java:270)
at
org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWriteSsl(GridNioServer.java:1418)
at
org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite(GridNioServer.java:1287)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2275)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2048)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1717)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)
2018-02-27 14:52:37,072 WARN  [grid-nio-worker-tcp-comm-2-#27]
o.a.i.s.c.tcp.TcpCommunicationSpi - Closing NIO session because of
unhandled exception [cls=class o.a.i.i.util.nio.GridNioException,
msg=Failed to encrypt data (SSL engine error) [status=CLOSED,
handshakeStatus=NEED_UNWRAP, ses=GridSelectorNioSessionImpl
[worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
bytesRcvd=17479234, bytesSent=0, bytesRcvd0=2536, bytesSent0=0,
select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-2,
igniteInstanceName=null, finished=false, hashCode=1854311052,
interrupted=false, runner=grid-nio-worker-tcp-comm-2-#27]]],
writeBuf=java.nio.DirectByteBuffer[pos=0 lim=10 cap=32768],
readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/
100.96.3.72:47100, rmtAddr=/100.96.6.183:45484, createTime=1519743156030,
closeTime=0, bytesSent=2448, bytesRcvd=2536, bytesSent0=2448,
bytesRcvd0=2536, sndSchedTime=1519743156071, lastSndTime=1519743156071,
lastRcvTime=1519743156071, readsPaused=false,
filterChain=FilterChain[filters=[GridNioCodecFilter

Setting userVersion on client node causes ignite.active(true) to fail

2018-02-27 Thread Dave Harvey
If I change userVersion in ignite.xml on the client to 5 (the docker
image runs in SHARED deployment mode), in order to ensure that our
peer-class-loaded classes are reloaded, I cannot start the client.
   final Ignite ignite = Ignition.start(igniteConfig);
   ignite.active(true); <<< Throws "Task was not deployed or was
redeployed since task execution"

Log from server in docker image.

[23:31:34,827][WARNING]pub-#8859[GridDeploymentManager] Failed to deploy
class in SHARED or CONTINUOUS mode for given user version (class is locally
deployed for a different user version)
[cls=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
localVer=0, otherVer=5]
[23:31:34,828][SEVERE]pub-#8859[GridJobProcessor] Task was not deployed or
was redeployed since task execution
[taskName=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
taskClsName=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
codeVer=5, clsLdrId=778c374d161-85764126-dc96-4073-97ef-02be90c50723,
seqNum=1519687813239, depMode=SHARED, dep=null]
class org.apache.ignite.IgniteDeploymentException: Task was not deployed or
was redeployed since task execution
[taskName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
taskClsName=org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest,
codeVer=5, clsLdrId=778c374d161-85764126-dc96-4073-97ef-02be90c50723,
seqNum=1519687813239, depMode=SHARED, dep=null]
at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1160)
at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1913)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a cluster as a persistent, replicated SQL database

2018-02-27 Thread Denis Mekhanikov
Hi Jose!

Apache Ignite is indeed a scalable, in-memory, persistent, SQL-enabled
data store.
Here is documentation that can help you start using it:
*Data grid*: https://apacheignite.readme.io/docs/data-grid
*Persistent storage*:
https://apacheignite.readme.io/docs/distributed-persistent-store
*SQL support:* https://apacheignite-sql.readme.io/docs

What do you mean by a replicated SQL datastore?

Denis
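
As a starting point, a minimal node configuration combining native
persistence with a REPLICATED cache might look like the sketch below; the
cache name and key/value types are assumptions, and the linked docs cover
the full set of options:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ReplicatedPersistentNode {
        public static void main(String[] args) {
            IgniteConfiguration cfg = new IgniteConfiguration();

            // Persist the default data region to disk.
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();
            storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
            cfg.setDataStorageConfiguration(storageCfg);

            // Keep a full copy of the data on every node.
            CacheConfiguration<Long, String> cacheCfg =
                new CacheConfiguration<>("replicatedSql");
            cacheCfg.setCacheMode(CacheMode.REPLICATED);
            cacheCfg.setIndexedTypes(Long.class, String.class);
            cfg.setCacheConfiguration(cacheCfg);

            Ignite ignite = Ignition.start(cfg);

            // With persistence enabled the cluster starts inactive and
            // must be activated once all nodes are up.
            ignite.active(true);
        }
    }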

Tue, 27 Feb 2018 at 8:34, joseheitor :

> How do I configure a cluster as a persistent, replicated SQL datastore?
> And can it perform as an active-active, high-availability solution that can
> be scaled horizontally?
> --
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Spark 'close' API call hangs within ignite service grid

2018-02-27 Thread Denis Mekhanikov
Hi!

Please attach the log as a file; right now it's impossible to read.
Also, please specify what exactly doesn't work on the master version. I
couldn't figure it out from the log.

Denis

Tue, 27 Feb 2018 at 9:55, akshaym :

> Hi,
>
> I am running a Spark job within the Ignite service grid. After the job is
> done, I am calling close() on the Spark session. This works on the stable
> version of Ignite, i.e. 2.3.0. However, it doesn't work with the
> 2.5.0-SNAPSHOT version.
> Here are the detailed logs:
>
> /
> Closing spark context
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][G] Shutdown hook is removed.
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][G] Unregistered MBean:
> org.apache:clsLdr=764c12b6,group=Kernal,name=Ignition
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][IgniteKernal] Notifying
> lifecycle beans.
> [27-02-2018 12:12:18][DEBUG][svc-#88][AbstractLifeCycle] stopping
> org.spark_project.jetty.server.Server@19480852
> [27-02-2018 12:12:18][DEBUG][svc-#88][Server] doStop
> org.spark_project.jetty.server.Server@19480852
> [27-02-2018 12:12:18][DEBUG][SparkUI-161][QueuedThreadPool] ran
> SparkUI-161-acceptor-0@1e34fb5-ServerConnector
> @a75cb0a{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
> [27-02-2018 12:12:18][DEBUG][svc-#88][Server] Graceful shutdown
> org.spark_project.jetty.server.Server@19480852 by
> [27-02-2018 12:12:18][DEBUG][svc-#88][AbstractLifeCycle] stopping
> Spark@a75cb0a{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
> [27-02-2018 12:12:18][DEBUG][svc-#88][AbstractLifeCycle] stopping
>
> org.spark_project.jetty.server.ServerConnector$ServerConnectorManager@7313f6d3
> [27-02-2018 12:12:18][DEBUG][svc-#88][AbstractLifeCycle] stopping
> org.spark_project.jetty.io.ManagedSelector@375067e7 id=1 keys=0 selected=0
> [27-02-2018 12:12:18][DEBUG][svc-#88][ManagedSelector] Stopping
> org.spark_project.jetty.io.ManagedSelector@375067e7 id=1 keys=0 selected=0
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][GridIoManager] Removed
> message listener [topic=TOPIC_DATASTREAM,
>
> lsnr=org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1@378382bf
> ]
> [27-02-2018
> 12:12:18][DEBUG][grid-data-loader-flusher-#40][DataStreamProcessor] Caught
> interrupted exception: java.lang.InterruptedException
> [27-02-2018
> 12:12:18][DEBUG][grid-data-loader-flusher-#40][DataStreamProcessor] Grid
> runnable finished due to interruption without cancellation:
> grid-data-loader-flusher
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][DataStreamProcessor] Stopped
> data streamer processor.
> [27-02-2018 12:12:18][DEBUG][session-timeout-worker-#34][GridRestProcessor]
> Caught interrupted exception: java.lang.InterruptedException: sleep
> interrupted
> [27-02-2018 12:12:18][DEBUG][session-timeout-worker-#34][GridRestProcessor]
> Grid runnable finished due to interruption without cancellation:
> session-timeout-worker
> [27-02-2018 12:12:18][DEBUG][nio-acceptor-#35][GridTcpRestProtocol]
> Balancing data [min0=0, minIdx=0, max0=-1, maxIdx=-1]
> [27-02-2018 12:12:18][DEBUG][nio-acceptor-#35][GridTcpRestProtocol] Closing
> all listening sockets.
> [27-02-2018 12:12:18][DEBUG][nio-acceptor-#35][GridTcpRestProtocol] Closing
> NIO selector.
> [27-02-2018 12:12:18][DEBUG][nio-acceptor-#35][GridTcpRestProtocol] Grid
> runnable finished due to interruption without cancellation: nio-acceptor
> [27-02-2018 12:12:18][DEBUG][svc-#88][ManagedSelector] Queued change
> org.spark_project.jetty.io.ManagedSelector$CloseEndPoints@28c2db1e on
> org.spark_project.jetty.io.ManagedSelector@375067e7 id=1 keys=0 selected=0
> [27-02-2018 12:12:18][DEBUG][SparkUI-160][ManagedSelector] Selector loop
> woken up from select, 0/0 selected
> [27-02-2018 12:12:18][DEBUG][SparkUI-160][ManagedSelector] Running change
> org.spark_project.jetty.io.ManagedSelector$CloseEndPoints@28c2db1e
> [27-02-2018 12:12:18][DEBUG][SparkUI-160][ManagedSelector] Closing 0
> endPoints on org.spark_project.jetty.io.ManagedSelector@375067e7 id=1
> keys=0
> selected=0
> [27-02-2018 12:12:18][DEBUG][SparkUI-160][ManagedSelector] Closed 0
> endPoints on org.spark_project.jetty.io.ManagedSelector@375067e7 id=1
> keys=0
> selected=0
> [27-02-2018 12:12:18][DEBUG][SparkUI-160][ManagedSelector] Selector loop
> waiting on select
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][GridTcpRestProtocol]
> Cancelling grid runnable: ByteBufferNioClientWorker
> [readBuf=java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192],
> super=AbstractNioClientWorker [idx=0, bytesRcvd=0, bytesSent=0,
> bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
> [name=grid-nio-worker-tcp-rest-0, igniteInstanceName=null, finished=false,
> hashCode=2091822083, interrupted=false,
> runner=grid-nio-worker-tcp-rest-0-#36]]]
> [27-02-2018
> 12:12:18][DEBUG][grid-nio-worker-tcp-rest-0-#36][GridTcpRestProtocol]
> Closing all connected client sockets.
> [27-02-2018 12:12:18][DEBUG][SparkListenerBus][GridTcpRestProtocol]
> Cancelling grid runnable: ByteBufferNioClientWorker
> 

Re: Ignite Indexes are stored where??

2018-02-27 Thread Evgenii Zhuravlev
Hi,

Since version 2.0, all of the data and indexes are stored in off-heap
memory: https://apacheignite.readme.io/v2.3/docs/durable-memory.

Evgenii

2018-02-27 13:50 GMT+03:00 KR Kumar :

> Hi guys - this could be a dumb question, but I think it's version
> dependent, so let me ask it anyway: does Ignite store indexes on-heap or
> off-heap by default? When I am profiling the application, I see a huge
> bunch of int arrays instantiated by Ignite and wonder what they are.
>
> I am currently on version 2.3.
>
> Thanx and Regards,
> KR Kumar
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: The wrong sockAddrs are registered, and the cluster is broken when it tries to connect it occasionally.

2018-02-27 Thread ezhuravlev
If it really starts to use addresses other than the pre-configured ones,
you should share your configurations and full logs from all nodes, and the
community will check it.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Indexes are stored where??

2018-02-27 Thread KR Kumar
Hi guys - this could be a dumb question, but I think it's version
dependent, so let me ask it anyway: does Ignite store indexes on-heap or
off-heap by default? When I am profiling the application, I see a huge
bunch of int arrays instantiated by Ignite and wonder what they are.

I am currently on version 2.3.

Thanx and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Recommended storage configuration

2018-02-27 Thread Olexandr K
Yeah, just wanted to confirm I'm not missing obvious things

Thanks!

On Tue, Feb 27, 2018 at 12:40 AM, vkulichenko  wrote:

> Oleksandr,
>
> Generally, this heavily depends on your use case, workload, amount of data,
> etc... I don't see any issues with your configuration in particular, but
> you
> need to run your tests to figure out if it provides results that you
> expect.
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: getting error "Queue has been removed from cache: GridCacheQueueAdapter" with ExecutorService

2018-02-27 Thread nheron
Hi,
It was an error on our side: we forgot to create the queue.
Sorry about that.
Cheers
Nicolas



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/