NPE from a node with native persistence enabled

2018-01-17 Thread aa...@tophold.com
Hi All, 

On a simple node with native persistence enabled, an NPE may be thrown after the
node is killed and rebooted.

From the code, the real root cause should be that the Result returned by
BPlusTree#doPut / #putDown is not the expected one,

but even so, CacheObjectBinaryProcessorImpl#metadata should check
holder != null earlier.
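
For illustration, a minimal sketch of the earlier null check Aaron suggests; the
holder map name, method signature and return type are assumptions read off the
stack trace, not the actual 2.3.0 source:

// Hypothetical guard inside CacheObjectBinaryProcessorImpl (sketch only):
public BinaryMetadata metadata(int typeId) {
    BinaryMetadataHolder holder = metadataLocCache.get(typeId);

    if (holder == null)
        return null; // check the holder earlier instead of hitting the NPE below

    return holder.metadata();
}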


Regards
Aaron
org.apache.ignite.IgniteCheckedException: null
	at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7252) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:207) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:159) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2289) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.NullPointerException
	at org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:538) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$2.metadata(CacheObjectBinaryProcessorImpl.java:194) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1266) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:284) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:183) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryObjectImpl.reader(BinaryObjectImpl.java:830) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryObjectImpl.reader(BinaryObjectImpl.java:845) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.binary.BinaryObjectImpl.field(BinaryObjectImpl.java:308) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:245) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:139) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.H2RowDescriptor.columnValue(H2RowDescriptor.java:303) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap.getValue(GridH2KeyValueRowOnheap.java:160) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap.toString(GridH2KeyValueRowOnheap.java:209) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at java.lang.String.valueOf(String.java:2994) ~[?:1.8.0_131]
	at java.lang.StringBuilder.append(StringBuilder.java:131) ~[?:1.8.0_131]
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2039) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.put(BPlusTree.java:1977) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.put(H2TreeIndex.java:197) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:472) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:423) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:559) ~[ignite-indexing-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1747) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:425) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1344) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.update(IgniteCacheOffheapManagerImpl.java:1316) ~[ignite-core-2.3.0.jar!/:2.3.0]
	at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.update(GridCacheOffheapManager.java:1235) ~[ignite-core-2.3.0.jar!/:2.3.0]

Re: Primary key bug

2018-01-17 Thread Alexey Kukushkin
Hi, there might be a misunderstanding here about how the Ignite SQL grid is
implemented. Please note that Ignite does not use H2 as storage. Ignite
uses H2 only as a parser and planner. H2 hands a parsed and planned
statement over to Ignite, and Ignite then maps the statement to
get/put/create-cache/etc. operations, depending on what the statement was.

What you see in the H2 console is actually the data stored in Ignite, not in
H2.

So I am not sure you really need "Ignite tables created directly in H2".
But if you do need that, the closest feature would be Ignite 3rd-party
persistence, where you attach an external persistent store (an H2 database
in this case) to Ignite and use Ignite to cache the data.
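
For illustration, a minimal sketch of attaching an H2 database as such a
3rd-party store; the data source bean, cache name and key/value types are
placeholders, not anything from the original thread:

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

// Factory producing a JDBC-backed store; "h2DataSource" must be a
// registered bean pointing at the H2 database.
CacheJdbcPojoStoreFactory<Long, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSourceBean("h2DataSource");

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setCacheStoreFactory(storeFactory);
ccfg.setReadThrough(true);  // read misses from H2
ccfg.setWriteThrough(true); // propagate updates to H2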


Node fails to recover

2018-01-17 Thread Ralph Goers
We are running an application on 2 servers, each running Ignite in a
cluster. As the logs below show, at some point the nodes had trouble
communicating with each other. What I would really like to know is why one of
the nodes seemed to recover while the other node did not. Is there something I
should be looking for, or some setting that might be misconfigured?

Thanks,
Ralph



192.168.202.110
 
2018-01-04 22:16:52 WARN  [ ] TcpDiscoverySpi:133 - Timed out waiting for 
message delivery receipt (most probably, the reason is in long GC pauses on 
remote node; consider tuning GC and increasing 'ackTimeout' configuration 
property). Will retry to send message with increased timeout 
[currentTimeout=1, rmtAddr=/192.168.202.111:47500, rmtPort=47500]
2018-01-04 22:16:52 WARN  [ ] TcpDiscoverySpi:133 - Failed to send message to 
next node [msg=TcpDiscoveryMetricsUpdateMessage 
[super=TcpDiscoveryAbstractMessage [sndNodeId=null, 
id=50668565061-9d6b5111-c433-4e43-997b-3c803c84ee45, 
verifierNodeId=9d6b5111-c433-4e43-997b-3c803c84ee45, topVer=0, pendingIdx=0, 
failedNodes=null, isClient=false]], next=TcpDiscoveryNode 
[id=225ed6e9-3116-465f-bc3a-94818278fd31, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 
192.168.202.111], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, 
/192.168.202.111:47500], discPort=47500, order=12, intOrder=7, 
lastExchangeTime=1513268425829, loc=false, ver=2.3.0#20171027-sha1:8add7fd5, 
isClient=false], errMsg=Failed to send message to next node 
[msg=TcpDiscoveryMetricsUpdateMessage [super=TcpDiscoveryAbstractMessage 
[sndNodeId=null, id=50668565061-9d6b5111-c433-4e43-997b-3c803c84ee45, 
verifierNodeId=9d6b5111-c433-4e43-997b-3c803c84ee45, topVer=0, pendingIdx=0, 
failedNodes=null, isClient=false]], next=ClusterNode 
[id=225ed6e9-3116-465f-bc3a-94818278fd31, order=12, addr=[0:0:0:0:0:0:0:1%lo, 
127.0.0.1, 192.168.202.111], daemon=false]]]
2018-01-04 22:16:52 WARN  [ ] TcpDiscoverySpi:133 - Local node has detected 
failed nodes and started cluster-wide procedure. To speed up failure detection 
please see 'Failure Detection' section under javadoc for 'TcpDiscoverySpi'
2018-01-04 22:16:52 WARN  [ ] GridDiscoveryManager:133 - Node FAILED: 
TcpDiscoveryNode [id=225ed6e9-3116-465f-bc3a-94818278fd31, 
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 192.168.202.111], 
sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, 
/192.168.202.111:47500], discPort=47500, order=12, intOrder=7, 
lastExchangeTime=1513268425829, loc=false, ver=2.3.0#20171027-sha1:8add7fd5, 
isClient=false]
2018-01-04 22:16:52 INFO  [ ] GridDiscoveryManager:128 - Topology snapshot 
[ver=13, servers=1, clients=0, CPUs=4, heap=2.0GB]
2018-01-04 22:16:52 INFO  [ ] time:128 - Started exchange init 
[topVer=AffinityTopologyVersion [topVer=13, minorTopVer=0], crd=true, 
evt=NODE_FAILED, evtNode=225ed6e9-3116-465f-bc3a-94818278fd31, customEvt=null, 
allowMerge=true]
2018-01-04 22:16:52 INFO  [ ] GridDhtPartitionsExchangeFuture:128 - Finished 
waiting for partition release future [topVer=AffinityTopologyVersion 
[topVer=13, minorTopVer=0], waitTime=0ms, futInfo=NA]
2018-01-04 22:16:52 INFO  [ ] GridDhtPartitionsExchangeFuture:128 - Coordinator 
received all messages, try merge [ver=AffinityTopologyVersion [topVer=13, 
minorTopVer=0]]
2018-01-04 22:16:52 INFO  [ ] GridDhtPartitionsExchangeFuture:128 - 
finishExchangeOnCoordinator [topVer=AffinityTopologyVersion [topVer=13, 
minorTopVer=0], resVer=AffinityTopologyVersion [topVer=13, minorTopVer=0]]
2018-01-04 22:16:53 INFO  [ ] GridDhtPartitionsExchangeFuture:128 - Finish 
exchange future [startVer=AffinityTopologyVersion [topVer=13, minorTopVer=0], 
resVer=AffinityTopologyVersion [topVer=13, minorTopVer=0], err=null]
2018-01-04 22:16:53 INFO  [ ] time:128 - Finished exchange init 
[topVer=AffinityTopologyVersion [topVer=13, minorTopVer=0], crd=true]
2018-01-04 22:16:53 INFO  [ ] GridCachePartitionExchangeManager:128 - Skipping 
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=13, 
minorTopVer=0], evt=NODE_FAILED, node=225ed6e9-3116-465f-bc3a-94818278fd31]
2018-01-04 22:17:00 INFO  [ ] IgniteKernal:128 -
 
192.168.202.111
 
2018-01-04 22:16:12 INFO  [ ] IgniteKernal:128 - FreeList [name=null, 
buckets=256, dataPages=5, reusePages=0]
2018-01-04 22:16:12 INFO  [ ] IgniteKernal:128 - FreeList [name=null, 
buckets=256, dataPages=5, reusePages=0]
2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - Finished serving remote 
node connection [rmtAddr=/192.168.202.110:55327, rmtPort=55327
2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - TCP discovery accepted 
incoming connection [rmtAddr=/192.168.202.110, rmtPort=51136]
2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - TCP discovery spawning a 
new thread for connection [rmtAddr=/192.168.202.110, rmtPort=51136]
2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - Started serving remote node 
connection [rmtAddr=/192.168.202.110:51136, rmtPort=51136]
2018-01-04 22:16:52 WARN  [ ] TcpDiscoverySpi:133 - Node is out of topology 

Re: Discovery node port range

2018-01-17 Thread ilya.kasnacheev
Hello Ranjit!

Running 20 nodes on one server is wasteful. I expect you would need a port
range of 20 to run 20 nodes.
This of course changes if each node runs under its own IP address (as
happens with containers).

Also, client nodes don't need to bind to a fixed discovery port. Any chance you
could make some of them clients?
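
For reference, a minimal sketch of widening the discovery port range for that
many nodes on one host (47500 is Ignite's default discovery port; cfg is the
node's IgniteConfiguration, defined elsewhere):

import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setLocalPort(47500);
spi.setLocalPortRange(20); // covers ports 47500..47519, one per node
cfg.setDiscoverySpi(spi);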

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node fails to recover

2018-01-17 Thread ilya.kasnacheev
Hello!

2018-01-04 22:16:12 INFO  [ ] IgniteKernal:128 - FreeList [name=null,
buckets=256, dataPages=5, reusePages=0]
2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - Finished serving remote
node connection [rmtAddr=/192.168.202.110:55327, rmtPort=55327

As you may notice, there's a 30-second pause in the node logs. It's possible
that a long GC happened during that time and the node failed to acknowledge
discovery in time.

Can you please collect a GC log of such an occurrence and post the relevant
fragments?
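
For reference, a typical way to enable GC logging on a JDK 8 node (standard
HotSpot flags, nothing Ignite-specific; the log path is a placeholder):

-Xloggc:/var/log/ignite-gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime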

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Discovery node port range

2018-01-17 Thread vkulichenko
Ranjit,

Embedded mode in Spark RDD will be deprecated in 2.4 which is about to be
released. My recommendation would be to use standalone mode instead.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node fails to recover

2018-01-17 Thread Ralph Goers
We didn’t have GC logging enabled. I can certainly enable it for the future,
but that still leaves me wondering why Ignite didn’t “re-discover” the node
once the garbage collection finished.

Ralph

> On Jan 17, 2018, at 10:35 AM, ilya.kasnacheev  
> wrote:
> 
> Hello!
> 
> 2018-01-04 22:16:12 INFO  [ ] IgniteKernal:128 - FreeList [name=null,
> buckets=256, dataPages=5, reusePages=0]
> 2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - Finished serving remote
> node connection [rmtAddr=/192.168.202.110:55327, rmtPort=55327
> 
> As you may notice, there's a 30-second pause in the node logs. It's possible
> that a long GC happened during that time and the node failed to acknowledge
> discovery in time.
> 
> Can you please collect a GC log of such an occurrence and post the relevant
> fragments?
> 
> Regards,
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> 




Re: Node fails to recover

2018-01-17 Thread Ralph Goers
It seems we did have Splunk collecting garbage collection stats via JMX. I see
no Full GCs happening at that time. There are minor GCs, but they happen fairly
frequently and don’t use much CPU time.

Ralph

> On Jan 17, 2018, at 10:52 AM, Ralph Goers  wrote:
> 
> We didn’t have GC logging enabled. I can certainly enable it for the
> future, but that still leaves me wondering why Ignite didn’t “re-discover” the
> node once the garbage collection finished.
> 
> Ralph
> 
>> On Jan 17, 2018, at 10:35 AM, ilya.kasnacheev  
>> wrote:
>> 
>> Hello!
>> 
>> 2018-01-04 22:16:12 INFO  [ ] IgniteKernal:128 - FreeList [name=null,
>> buckets=256, dataPages=5, reusePages=0]
>> 2018-01-04 22:16:52 INFO  [ ] TcpDiscoverySpi:128 - Finished serving remote
>> node connection [rmtAddr=/192.168.202.110:55327, rmtPort=55327
>> 
>> As you may notice, there's a 30-second pause in the node logs. It's possible
>> that a long GC happened during that time and the node failed to acknowledge
>> discovery in time.
>> 
>> Can you please collect a GC log of such an occurrence and post the relevant
>> fragments?
>> 
>> Regards,
>> 
>> 
>> 
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>> 
> 
> 
> 




Elixir/Erlang Apache Ignite Connector?

2018-01-17 Thread Kenta Iwasaki

Hello,
Just wanted to ask if there has been any work so far on an Elixir/Erlang Ignite
connector, as a strongly consistent in-memory database is something that would
pair really nicely with the fault tolerance and high availability that
Erlang's OTP provides.
It seems plausible to interface with Ignite using its ODBC
drivers, though I would like to know how that compares in terms of
performance to using a native client library in Java/C++/etc.
If nobody's working on one, I'm up for writing a library to interface with
Ignite's ODBC driver and open-sourcing it.
Best regards,
Kenta Iwasaki

Primary key bug

2018-01-17 Thread breathem
Hi,
Suppose we have the following DDL:
CREATE TABLE TEST
(
  org_id INT NOT NULL,
  drug_id INT NOT NULL,
  price DOUBLE NULL,
  qtty INT NULL,
  inter_id INT NULL,
  trade_id INT NULL,
  med_id INT NULL,
  medp_id INT NULL,
  avgprc DOUBLE NULL,
  avgprc_region DOUBLE NULL,
  region_id INTEGER NULL,
  prctype INTEGER NOT NULL,
  prccomment VARCHAR(250) NULL,
  PRIMARY KEY (org_id, drug_id, prctype)
) WITH
"CACHE_NAME=TEST,TEMPLATE=REPLICATED,ATOMICITY=ATOMIC,KEY_TYPE=KTest,VALUE_TYPE=TTest";

Execute this script via the thin JDBC driver (Ignite 2.3.0).
The TEST table is created, but the primary key doesn't exist (this can be seen
in the DB via the H2 web console).
If we remove the PRIMARY KEY (org_id, drug_id, prctype) section from CREATE
TABLE, Ignite throws an exception:
No PRIMARY KEY defined for CREATE TABLE
Of course we can CREATE INDEX PK_TEST ON TEST (org_id, drug_id, prctype)
instead of a PK, but it seems that Ignite 2.3.0 has a bug in PK creation.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread ALEKSEY KUZNETSOV
Hi!

I just wanted to show an example where write-through mode is enabled and
data is persisted by the primary node - not the transaction coordinator.

Assume we have a transactional cache in the cluster, the near cache is disabled,
node A is primary for key1, node B is neither primary nor backup for key1,
and there are some backup nodes. And every node has its own local DB.
If we start a transaction on node B, the data would be persisted by the primary
node and perhaps by backup nodes; the transaction coordinator would not open DB
connections.

Wed, Jan 17, 2018 at 3:59, vkulichenko :

> Aleksey,
>
> Are you talking about use case when a transaction spawns a distributed
> cache
> AND a local cache? If so, this sounds like a very weird use case. Do we
> even
> allow this?
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 

*Best Regards,*

*Kuznetsov Aleksey*


Re: usage of Apache ignite as Key value DB

2018-01-17 Thread Mikael
You have to run an Ignite instance to use it (you can embed it in your 
application); you can't just use the key-value store on its own. A 
LOCAL cache would be the closest thing to a Berkeley DB store.


Docs at : https://apacheignite.readme.io/docs/data-grid
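
For illustration, a minimal sketch of such embedded key-value usage; the cache
name and types are placeholders:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Start an embedded node and use a LOCAL cache as a key-value store.
Ignite ignite = Ignition.start();

CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("kvStore");
cfg.setCacheMode(CacheMode.LOCAL);

IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);
cache.put(1L, "some value"); // JCache-style put
String v = cache.get(1L);    // and get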



On 2018-01-17 at 11:05, Rajesh Kishore wrote:

Hi,

I am a newbie to Apache Ignite. We are trying to explore Ignite as a key-value
DB to replace our existing Berkeley DB in an application.


Currently, Berkeley DB is embedded in the application and DB container
operations are performed using the Berkeley DB APIs; we would need similar
functionality from Ignite.


The idea is to replace the Berkeley DB APIs with Ignite APIs, using Ignite as a
key-value DB.
I could not find any docs on using the Ignite libraries
in the application.


Any pointers please

Thanks & Regards,
Rajesh Kishore




Re: ignite c++ client CacheEntryEventFilter has no effect!

2018-01-17 Thread Igor Sapego
Well, you can't run C++ jobs on a Java node.
You need to start some C++ nodes, just as the exception
message states.

Best Regards,
Igor

On Wed, Jan 17, 2018 at 5:16 AM, Edward Wu  wrote:

> Yes, my server node is a plain Java node.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Elixir/Erlang Apache Ignite Connector?

2018-01-17 Thread vkulichenko
2.4 (soon to be released) will have a binary protocol that provides a
limited API for various cache operations. Basically, you will be able to
create a socket, connect to an Ignite server and then run these operations. Here
is the documentation (still a work in progress, but it gives a good idea of
what to expect):
https://apacheignite.readme.io/docs/binary-client-protocol#client-operations

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: NPE from a node with native persistence enabled

2018-01-17 Thread Mikhail
Hi Aaron,

Can you always reproduce the issue?
Do you perhaps have a reproducer for it?

I created a ticket:
https://issues.apache.org/jira/browse/IGNITE-7458

but I don't think it will be fixed without a reproducer, so it would be
great if you could provide one; I'll attach it to the ticket.

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-17 Thread Alexey Popov
Hi Larry,

I checked the code.
The issue is specific to your test data.
You have relatively large initial entries (4.7 KB) with an index of the same
length (it is just a String). Please note that such an index can't fit into a
single page (4K).

The rest of the entries, loaded during .get() (from the store), are relatively
short (just the word "foo").
It seems that Ignite can't make a correct eviction-threshold estimation
(including the index) in your case.

If you change .setEvictionThreshold(.9) to .setEvictionThreshold(.8) with
the same test data, then everything works as expected.
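
For illustration, a minimal sketch of where that change would sit; the
enclosing data-region setup comes from Larry's reproducer, which isn't shown
here, so treat the surrounding lines as assumptions:

import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;

DataRegionConfiguration region = new DataRegionConfiguration();
region.setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU); // assumed mode
region.setEvictionThreshold(0.8); // lowered from 0.9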

Anyway, I will open a ticket for your reproducer.

Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Primary key bug

2018-01-17 Thread breathem
Hello, Alexey.
Thank you for the clarification.
Is there any way to use tables created directly in H2 from Ignite?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot insert data into table using JDBC

2018-01-17 Thread Vladimir Ozerov
Hi Michael,

The issue is almost fixed. The fix will be available as part of the Apache
Ignite 2.4 release.

On Tue, Dec 19, 2017 at 1:48 PM, Michael Jay <841519...@qq.com> wrote:

> Hi, has it been solved, or is it a bug? I just met the same problem.
> When I set "streaming=false", it worked; data could be inserted via
> JdbcClientDriver.
> However, when streaming=true, I got a message saying "schema not found".
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Integration with Hibernate 5.2.X for L2 Cache?

2018-01-17 Thread SirSpart
Is it possible to integrate Apache Ignite with Hibernate 5.2.X so that it can
be used as an L2 cache? Or is the support only up to 5.1.X?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Discovery node port range

2018-01-17 Thread Ranjit Sahu
Thanks Val. We are not using Ignite's RDD; instead, we are building the
Ignite cluster within Spark with custom code and using the key-value store.
We are using static IP discovery while doing so.

What we do is read the data, in Avro, from Hadoop, start the Ignite node
first in the driver, and use the driver IP along with a few executor IPs
for node discovery. Once the cluster topology is built, we load the data into
the cache and use the SQL interface for lookups.

Do you think we will be impacted by the 2.4 deprecation decision?

Thanks,
Ranjit
On Wed, 17 Jan 2018 at 11:29 PM, vkulichenko 
wrote:

> Ranjit,
>
> Embedded mode in Spark RDD will be deprecated in 2.4 which is about to be
> released. My recommendation would be to use standalone mode instead.
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Data lost when primary nodes down.

2018-01-17 Thread rizal123
Hi,

I have run an experiment with a 2-node cluster (IPs 10.5.42.95 and 10.5.42.96),
with this Ignite configuration:

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("BrandCluster");

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("10.5.42.95:47500..47509",
    "10.5.42.96:47500..47509"));

TcpDiscoverySpi discovery = new TcpDiscoverySpi();
discovery.setLocalPort(47500);
discovery.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discovery);

And this cache configuration:

CacheConfiguration ccfg = new CacheConfiguration();
ccfg.setName("MMessageLogCache");
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setOnheapCacheEnabled(true);
ccfg.setSqlSchema("PUBLIC");
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
ccfg.setCacheStoreFactory(cacheStoreFactory); // Cache store to Oracle
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindFlushSize(3);
ccfg.setWriteBehindFlushFrequency(15000); // 15 seconds
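
For context: a PARTITIONED cache keeps zero backup copies by default, so a
killed primary takes its partitions with it. A one-line sketch of the usual
remedy (an assumption about the cause here, not a verified fix):

ccfg.setBackups(1); // keep one backup copy of every partition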

My case:
1. Client using DBeaver (for testing this case).
2. DBeaver connects to IP 10.5.42.95 (the primary node).
3. Run the statement "Insert Into M_Message ... XYZ ...".
4. Check both nodes with DBeaver, running the statement 'Select * from
M_Message'.
5. The data 'XYZ' is present on both nodes.
6. Shut down the primary node (kill -9).
7. The data 'XYZ' is also lost on the second node.
8. I have enabled write-behind, with a flush frequency of 15 seconds or 3
inserted rows.
9. So the data is never stored to Oracle, because the data on the second node
is also lost.

Please let me know if there is a misconfiguration...



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Purpose of cache in cache.query(Create Table ...) statement

2018-01-17 Thread Shravya Nethula
Hi Andrey,

This information will definitely be useful. Thank you.

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite loadCache performance issue

2018-01-17 Thread qwertywx
Denis Hi,

Thx for the quick response. I have used at the older versions(1.3) so there
was this "memoryMode" in there so I did not read the updated one. Thanks for
the enlightenment.

In order to manage the LRU functionality: Is there any way to disable
off-heap(so the only memory would be on-heap and LRU works automaticly)? 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: delete data error

2018-01-17 Thread Lucky
This does not happen every time. When I run it several times, it will happen.
And once it has happened, it then happens every time.

The table is simple; I insert some data, and when I finish the job, I
delete the data.


Thanks.




At 2018-01-17 20:20:34, "ilya.kasnacheev"  wrote:
>Hello Lucky!
>
>Does this happen every time when you try, or it is a one-time occurrence?
>
>Can you please share logs from your nodes and cache/table configurations?
>Ideally a small reproducer project.
>
>Regards,
>
>
>





Ignite loadCache performance issue

2018-01-17 Thread qwertywx
I have a 3rd-party MySQL table that will be partially cached by an Ignite
cluster. So on Ignite, I have this configuration:

[The Spring XML configuration was stripped by the mail archive; the only
fragments that survive are the key and value types, java.lang.Long and
com.example.demo.UserSegment. A more complete copy is quoted in Denis
Mekhanikov's reply below.]
So I am also inspecting the load over ignitevisor, but it seems
very slow. I have also run ignite.sh with the -v option, but there seems to be
no problem.

So my question is: am I missing something, or where should I start looking
for the problem?

It has written only 640 items in 1 minute. Increasing the thread count did
not affect performance at all. The database has 3.2 million records and I want
to add 100K of them to the cache, based on LRU. But at this pace, it
seems it will continue forever.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread ALEKSEY KUZNETSOV
Denis, no, it's a valid case.

You can set a certain StoreFactory in the cache configuration so that it
creates *different* (local) database instances on different nodes. Then my
case applies.



Wed, Jan 17, 2018 at 14:45, Denis Mekhanikov :

> Aleksey,
>
> You described an invalid case. Cache stores on all nodes for a single
> cache should point to the same database. Otherwise data won't be the same
> in different databases.
> Different nodes are not supposed to have personal local DBs.
>
> If you want nodes to store data on disk in distributed manner, consider
> using Ignite native persistence:
> https://apacheignite.readme.io/docs/distributed-persistent-store
>
> Denis
>
> Wed, Jan 17, 2018 at 14:17, Andrey Nestrogaev :
>
>> Local store = Apache Ignite Native Persistence?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

-- 

*Best Regards,*

*Kuznetsov Aleksey*


Re: delete data error

2018-01-17 Thread ilya.kasnacheev
Hello Lucky!

Does this happen every time when you try, or it is a one-time occurrence?

Can you please share logs from your nodes and cache/table configurations?
Ideally a small reproducer project.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: usage of Apache ignite as Key value DB

2018-01-17 Thread Rajesh Kishore
Hello Mikael,

Thanks a ton for your response. I now have a decent understanding that for any
operation I need to define a cache, and that cache items can be persisted.
- Does it mean all CRUD operations would be performed via cache operations?
- Consider the case of Berkeley DB, where entities are stored locally in the
file system and the entry containers are defined via the Berkeley DB APIs. How
are entity containers created in Ignite - is it driven by cacheName? Where are
the entities stored? Put simply: where would the records for "Person"
& "Department" be stored, and how can that be configured?

Thanks,
Rajesh

On Wed, Jan 17, 2018 at 5:37 PM, Mikael  wrote:

> There are lots of examples not using SQL, have a look at:
>
> https://apacheignite.readme.io/docs/jcache
>
> Ignite implements the JCache API, just use get/put and so on.
>
>
> On 2018-01-17 at 12:44, Rajesh Kishore wrote:
>
> This is much informative. Further I want to use key value apis instead of
> sql apis which is only given in the example.
> The idea is that it should ease my migration process from Berkley dB based
> code where I am relying on key value apis to play with record in different
> dB containers, what is the equivalent here of ignite i.e how do we
> represent different entity say employee in local file system and how to
> insert and retrieve record
>
> Thanks
> Rajesh
>
> On 17 Jan 2018 3:59 p.m., "Mikael"  wrote:
>
>> You have to run an Ignite instance to use it (you can embed it in your
>> application), you can't just use the key value store on it's own, a LOCAL
>> cache would be the closest to a Berkeley DB store.
>>
>> Docs at : https://apacheignite.readme.io/docs/data-grid
>>
>>
>>
>> On 2018-01-17 at 11:05, Rajesh Kishore wrote:
>>
>>> Hi,
>>>
>>> I am newbie to Apache Ignite. We are trying to explore Ignite as key
>>> value DB to be replaced with our existing Berkely DB in application.
>>>
>>> Currently, Bekley DB is embedded in the application and db container
>>> operations are performed using Berkely DB apis , similar functionalities we
>>> would need for Ignite.
>>>
>>> The idea is to replace berkley db apis to Ignite apis to use Ignite as
>>> key value DB.
>>> I could not find any docs for the usage of ignite libraries to be used
>>> in the application.
>>>
>>> Any pointers please
>>>
>>> Thanks & Regards,
>>> Rajesh Kishore
>>>
>>
>>
>


Re: usage of Apache ignite as Key value DB

2018-01-17 Thread Mikael
There are a number of ways to do persistence for the cache. You can
enable native persistence for a memory region and assign a cache to that
region; all cache entries will then be stored on disk and the ones you use
most will be cached in RAM. Another alternative is to add a 3rd-party
class to handle persistence: the cache will call your implementation to
do the persistence (JDBC and Cassandra are built in), and you can do
read-only or read-and-write persistence.


Your key/value objects can be created with SQL DDL queries or you can
use POJOs.


Native persistence information:

https://apacheignite.readme.io/docs/distributed-persistent-store

Do your own storage:

https://apacheignite.readme.io/docs/data-loading

https://apacheignite.readme.io/docs/3rd-party-stor

You can do all CRUD operations using the cache API and/or SQL. If you
use native persistence, SQL queries work on the entire cache even if
items are not in RAM; if you use 3rd-party persistence, SQL queries will only
work on items in RAM.
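
For illustration, a minimal sketch of enabling native persistence (API as of
Ignite 2.3; region sizing and other details are omitted):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// Persist the default data region to disk.
DataStorageConfiguration storage = new DataStorageConfiguration();
storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storage);

Ignite ignite = Ignition.start(cfg);
ignite.active(true); // a cluster with persistence starts inactive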


Hope that helps a little.

Mikael

On 2018-01-17 at 13:30, Rajesh Kishore wrote:

Hello Mikael,

Thanks a ton for your response. I now have a decent understanding that for 
any operation I need to define a cache, and that cache items can be persisted.
- Does it mean all CRUD operations would be performed via cache 
operations?
- Consider the case of Berkeley DB, where entities are stored locally in 
the file system and the entry containers are defined via the Berkeley DB 
apis. How are entity containers created in Ignite - is it driven 
by cacheName? Where are the entities stored? Put simply: where would the 
records for "Person" & "Department" be stored, and how can that 
be configured?


Thanks,
Rajesh

On Wed, Jan 17, 2018 at 5:37 PM, Mikael wrote:


There are lots of examples not using SQL, have a look at:

https://apacheignite.readme.io/docs/jcache


Ignite implements the JCache API, just use get/put and so on.


On 2018-01-17 at 12:44, Rajesh Kishore wrote:

This is much informative. Further I want to use key value apis
instead of sql apis which is only given in the example.
The idea is that it should ease my migration process from Berkley
dB based code where I am relying on key value apis to play with
record in different dB containers, what is the equivalent here of
ignite i.e how do we represent different entity say employee in
local file system and how to insert and retrieve record

Thanks
Rajesh

On 17 Jan 2018 3:59 p.m., "Mikael" wrote:

You have to run an Ignite instance to use it (you can embed
it in your application), you can't just use the key value
store on it's own, a LOCAL cache would be the closest to a
Berkeley DB store.

Docs at : https://apacheignite.readme.io/docs/data-grid




On 2018-01-17 at 11:05, Rajesh Kishore wrote:

Hi,

I am newbie to Apache Ignite. We are trying to explore
Ignite as key value DB to be replaced with our existing
Berkely DB in application.

Currently, Bekley DB is embedded in the application and
db container operations are performed using Berkely DB
apis , similar functionalities we would need for Ignite.

The idea is to replace berkley db apis to Ignite apis to
use Ignite as key value DB.
I could not find any docs for the usage of ignite
libraries to be used in the application.

Any pointers please

Thanks & Regards,
Rajesh Kishore









Re: Re: Nodes can not join the cluster after reboot

2018-01-17 Thread aa...@tophold.com
Hi Evgenii, 

What's more interesting: if we reboot them within a very short time, like one
hour, our monitor log shows

the expected NODE_LEFT and NODE_JOIN events, and everything moves smoothly.

But after several hours, the problem below is sure to happen if you try to
reboot any node in the cluster.


Regards
Aaron


Aaron.Kuai

From: aa...@tophold.com
Date: 2018-01-17 20:05
To: user
Subject: Re: Re: Nodes can not join the cluster after reboot
Hi Evgenii,

Thanks! We collected some logs: one from the server that was rebooted, two
from existing servers, and one from a client-only node. After the reboot:

1. the rebooted node never comes up fully; it waits forever.
2. the other server nodes, soon after being notified that the rebooted node
went down, hang as well.
3. the pure client node, which only calls a remote service on the rebooted
node, also hangs.

At around 2018-01-17 10:54 we rebooted the node. In the log we find:

[WARN ] 2018-01-17 10:54:43.277 [sys-#471] [ig] ExchangeDiscoveryEvents - All 
server nodes for the following caches have left the cluster: 
'PortfolioCommandService_SVC_CO_DUM_CACHE', 
'PortfolioSnapshotGenericDomainEventEntry', 'PortfolioGenericDomainEventEntry' 

Soon after, an ERROR log entry (seemingly the only ERROR-level one):

[ERROR] 2018-01-17 10:54:43.280 [srvc-deploy-#143] [ig] GridServiceProcessor - 
Error when executing service: null java.lang.IllegalStateException: Getting 
affinity for topology version earlier than affinity is calculated

Then many WARNs of

"Failed to wait for partition release future."

Then this loops forever; the diagnostics show nothing suspicious, and all
nodes eventually output very similar messages.

[WARN ] 2018-01-17 10:55:19.608 [exchange-worker-#97] [ig] diagnostic - Pending 
explicit locks:
[WARN ] 2018-01-17 10:55:19.609 [exchange-worker-#97] [ig] diagnostic - Pending 
cache futures:
[WARN ] 2018-01-17 10:55:19.609 [exchange-worker-#97] [ig] diagnostic - Pending 
atomic cache futures:
[WARN ] 2018-01-17 10:55:19.609 [exchange-worker-#97] [ig] diagnostic - Pending 
data streamer futures:
[WARN ] 2018-01-17 10:55:19.609 [exchange-worker-#97] [ig] diagnostic - Pending 
transaction deadlock detection futures:

Some notes on our environment:

1. we enabled the peer-class-loading flag, but in fact we use a fat jar, so
every class is shared.
2. some nodes deploy services, which we use for RPC.
3. most caches are in fact LOCAL; we only share them when we must.
4. we use JDBC to persist important caches.
5. TcpDiscoveryJdbcIpFinder as the IP finder (sketched below).

Everything else is configured according to the standard.
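
For reference, a minimal sketch of that IP-finder wiring; the DataSource is a
placeholder, defined elsewhere:

import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder;

TcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();
ipFinder.setDataSource(dataSource); // pooled javax.sql.DataSource

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(spi); // cfg is the IgniteConfiguration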

Thanks for your time!

Regards
Aaron


Aaron.Kuai
 
From: Evgenii Zhuravlev
Date: 2018-01-16 20:32
To: user
Subject: Re: Nodes can not join the cluster after reboot
Hi,

Most possible that on the of the nodes you have hanged transaction/future/lock 
or even a deadlock, that's why new nodes can't join cluster - they can't 
perform exchange due to pending operation. Please share full logs from all 
nodes with thread dumps, it will help to find a root cause.

Evgenii

2018-01-16 5:35 GMT+03:00 aa...@tophold.com :
Hi All, 

We have an Ignite cluster running about 20+ nodes; to guard against any JVM
memory issues we schedule a reboot of those nodes in the middle of the night.

But in order to keep the service available, we reboot them one by one - nodes
A, B, C, D with a 5-minute delay between them. When we do so, the rebooted
nodes can never join the cluster again.

Eventually the entire cluster stops working, forever waiting for nodes to join
the topology; we need to kill everything and restart from scratch, which sounds
incredible.

I am not sure whether anyone else has met this issue before, or what mistake
we may have made; attached is the ignite log.


Thanks for your time!

Regards
Aaron


Aaron.Kuai



Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread Andrey Nestrogaev
Hi,

Aleksey, what is "local db"? 
I did not find anything on this in the documentation.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread Denis Mekhanikov
Aleksey,

You described an invalid case. Cache stores on all nodes for a single cache
should point to the same database. Otherwise data won't be the same in
different databases.
Different nodes are not supposed to have personal local DBs.

If you want nodes to store data on disk in distributed manner, consider
using Ignite native persistence:
https://apacheignite.readme.io/docs/distributed-persistent-store

Denis

Wed, Jan 17, 2018 at 14:17, Andrey Nestrogaev :

> Local store = Apache Ignite Native Persistence?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: usage of Apache ignite as Key value DB

2018-01-17 Thread Mikael

There are lots of examples not using SQL, have a look at:

https://apacheignite.readme.io/docs/jcache

Ignite implements the JCache API, just use get/put and so on.


Den 2018-01-17 kl. 12:44, skrev Rajesh Kishore:
This is much informative. Further I want to use key value apis instead 
of sql apis which is only given in the example.
The idea is that it should ease my migration process from Berkley dB 
based code where I am relying on key value apis to play with record in 
different dB containers, what is the equivalent here of ignite i.e how 
do we represent different entity say employee in local file system and 
how to insert and retrieve record


Thanks
Rajesh

On 17 Jan 2018 3:59 p.m., "Mikael" wrote:


You have to run an Ignite instance to use it (you can embed it in
your application), you can't just use the key value store on it's
own, a LOCAL cache would be the closest to a Berkeley DB store.

Docs at : https://apacheignite.readme.io/docs/data-grid




Den 2018-01-17 kl. 11:05, skrev Rajesh Kishore:

Hi,

I am newbie to Apache Ignite. We are trying to explore Ignite
as key value DB to be replaced with our existing Berkely DB in
application.

Currently, Bekley DB is embedded in the application and db
container operations are performed using Berkely DB apis ,
similar functionalities we would need for Ignite.

The idea is to replace berkley db apis to Ignite apis to use
Ignite as key value DB.
I could not find any docs for the usage of ignite libraries to
be used in the application.

Any pointers please

Thanks & Regards,
Rajesh Kishore






Re: Primary key bug

2018-01-17 Thread Alexey Kukushkin
Ignite's PRIMARY KEY does not map to an H2 PRIMARY KEY constraint, so not
seeing a primary key in H2 is OK.

Ignite's PRIMARY KEY designates a binary object that Ignite SQL Grid will
create for the underlying data grid cache key. Thus, in your example you
will end up with two binary objects stored in Ignite - a key with fields
org_id, drug_id and prctype and a value including all the remaining fields.

Now, if you want to map it to the Java classes KTest and TTest, you have to
create the corresponding Java classes and give them the same field names as
those in the TEST table. After that you can use both the Java API and the SQL
API to manipulate the data.
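
For illustration, a minimal sketch of such a key class; whether the Java field
names must match the column names exactly, or can be mapped via aliases, is an
assumption to verify against the Ignite SQL docs:

// Hypothetical key class for KEY_TYPE=KTest.
public class KTest {
    private int org_id;
    private int drug_id;
    private int prctype;

    // equals() and hashCode() over all three fields are required for a
    // correct cache key; getters/setters omitted for brevity.
}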


Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread Denis Mekhanikov
Alexey,

Of course you can, but you will get undefined behaviour in this case :)

It is assumed that all nodes are using the same database; otherwise
consistency is not guaranteed.

Maybe it is worth clarifying this point in the documentation explicitly.

Denis

Wed, Jan 17, 2018 at 15:10, ALEKSEY KUZNETSOV :

> Denis, no, its valid case.
>
> you can set certain StoreFactory in cache configuration so that it create
> *different* instances of database(local) on different nodes. Then, my case
> will be applied.
>
>
>
> Wed, Jan 17, 2018 at 14:45, Denis Mekhanikov :
>
>> Aleksey,
>>
>> You described an invalid case. Cache stores on all nodes for a single
>> cache should point to the same database. Otherwise data won't be the same
>> in different databases.
>> Different nodes are not supposed to have personal local DBs.
>>
>> If you want nodes to store data on disk in distributed manner, consider
>> using Ignite native persistence:
>> https://apacheignite.readme.io/docs/distributed-persistent-store
>>
>> Denis
>>
>> Wed, Jan 17, 2018 at 14:17, Andrey Nestrogaev :
>>
>>> Local store = Apache Ignite Native Persistence?
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
> --
>
> *Best Regards,*
>
> *Kuznetsov Aleksey*
>


usage of Apache ignite as Key value DB

2018-01-17 Thread Rajesh Kishore
Hi,

I am a newbie to Apache Ignite. We are trying to explore Ignite as a key-value
DB to replace our existing Berkeley DB in an application.

Currently, Berkeley DB is embedded in the application and DB container
operations are performed using the Berkeley DB APIs; we would need similar
functionality from Ignite.

The idea is to replace the Berkeley DB APIs with Ignite APIs, using Ignite as a
key-value DB.
I could not find any docs on using the Ignite libraries in
the application.

Any pointers please

Thanks & Regards,
Rajesh Kishore


Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread ALEKSEY KUZNETSOV
I meant local store = local db.

Wed, Jan 17, 2018 at 13:52, Andrey Nestrogaev :

> Hi,
>
> Aleksey, what is "local db"?
> I did not find anything on this in the documentation.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 

*Best Regards,*

*Kuznetsov Aleksey*


Re: 3rd Party Persistence and two phase commit

2018-01-17 Thread Andrey Nestrogaev
Local store = Apache Ignite Native Persistence?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: usage of Apache ignite as Key value DB

2018-01-17 Thread Rajesh Kishore
This is very informative. Furthermore, I want to use the key-value APIs instead
of the SQL APIs, which are all the example shows.
The idea is that it should ease my migration from Berkeley-DB-based
code, where I rely on key-value APIs to work with records in different
DB containers. What is the equivalent in Ignite, i.e. how do we
represent a different entity, say Employee, in the local file system, and how
do we insert and retrieve records?

Thanks
Rajesh

On 17 Jan 2018 3:59 p.m., "Mikael"  wrote:

> You have to run an Ignite instance to use it (you can embed it in your
> application), you can't just use the key value store on it's own, a LOCAL
> cache would be the closest to a Berkeley DB store.
>
> Docs at : https://apacheignite.readme.io/docs/data-grid
>
>
>
> Den 2018-01-17 kl. 11:05, skrev Rajesh Kishore:
>
>> Hi,
>>
>> I am newbie to Apache Ignite. We are trying to explore Ignite as key
>> value DB to be replaced with our existing Berkely DB in application.
>>
>> Currently, Bekley DB is embedded in the application and db container
>> operations are performed using Berkely DB apis , similar functionalities we
>> would need for Ignite.
>>
>> The idea is to replace berkley db apis to Ignite apis to use Ignite as
>> key value DB.
>> I could not find any docs for the usage of ignite libraries to be used in
>> the application.
>>
>> Any pointers please
>>
>> Thanks & Regards,
>> Rajesh Kishore
>>
>
>


Re: Ignite loadCache performance issue

2018-01-17 Thread Denis Mekhanikov
Hi Hasan!

Try removing the *onheapCacheEnabled* and *evictionPolicy* properties from the
configuration. I think they may have some performance impact.
Also make sure that your database is connected to the Ignite nodes via a
high-throughput network and is available from every data node.

> Database has 3.2 million records and I want to add 100K of them in cache
based on LRU.
Is this the reason why you enabled *evictionPolicy*? It actually serves a
different purpose: it limits the amount of data stored in on-heap memory.
Here is the documentation on the Java heap eviction policy:
https://apacheignite.readme.io/docs/evictions#section-java-heap-cache

So, if you want to load only hot data from the external database, you have
to figure out yourself which data is hot, and call IgniteCache.getAll() on the
corresponding keys. Read-through should be enabled to make *getAll()* load
data from the DB.
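
For illustration, a minimal sketch of that warm-up; pickHotKeys() is a
hypothetical application-specific helper, not an Ignite API:

import java.util.Map;
import java.util.Set;

Set<Long> hotKeys = pickHotKeys(); // e.g. the 100K most recently used IDs
Map<Long, UserSegment> warm = cache.getAll(hotKeys); // read-through pulls misses from MySQL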

Denis

Wed, Jan 17, 2018 at 14:11, qwertywx :

> I have a 3rd-party MySQL table that will be partially cached by an Ignite
> cluster. So on Ignite, I have this configuration:
>
> [Quoted Spring XML, mangled by the archive. The recoverable pieces: an
> IgniteConfiguration using TcpDiscoveryMulticastIpFinder; a CacheConfiguration
> with an LruEvictionPolicy, key/value types java.lang.Long and
> com.example.demo.UserSegment, and a CacheJdbcPojoStoreFactory using
> MySQLDialect with a JdbcType for schema "crm", table "CRM_SEGMENT", cache
> "crmdbcache", and JdbcTypeField mappings for the ID, USER_ID, TYPE_ID and
> TAG columns.]

Discovery node port range

2018-01-17 Thread Ranjit Sahu
Hi,

We are using the Ignite key-value store in Spark embedded mode. As the
resources are managed by YARN, sometimes when we request 30 executor nodes, 20
of them may land on one server, which means we are starting 20 Ignite server
nodes on one machine. Currently we have the discovery port range set to 10;
should this be changed to 20 or more?

The SPI communication port range is set to 100.

Any other thoughts?

Thanks,
Ranjit