Re: Ignite Spring Data 2.0 not working with Spring Boot 2

2018-08-02 Thread Lokesh Sharma
Hi Denis

Thanks for the reply.

I tried setting up the logger as mentioned in the docs as you can see here (
https://github.com/lokeshh/ignite_with_spring_boot_2/blob/master/src/main/java/pl/piomin/services/ignite/IgniteRestApplication.java#L62)
but that does nothing. Ignite logs are printed to the
console, but the Spring application logs stop being printed a few
seconds after the application starts running. Messages printed with
"System.out.println" are not shown either. (See
https://github.com/lokeshh/ignite_with_spring_boot_2/blob/master/src/main/java/pl/piomin/services/ignite/Service/Test.java#L37)
If I downgrade the Spring Boot version to 1.5 and the Ignite Spring Data
version to 2.6, then everything works fine.
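For reference, routing Ignite's logging through SLF4J (so Ignite and the Spring Boot application share one logging backend) is usually wired as in the sketch below. This is a hedged sketch based on the logging docs linked in the reply, not the poster's actual setup; it assumes the ignite-slf4j module is on the classpath.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.slf4j.Slf4jLogger;

public class IgniteSlf4jExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Route Ignite's own log output through SLF4J, so that the logging
        // backend configured by Spring Boot (e.g. Logback) controls both
        // Ignite and application logs. Requires the ignite-slf4j module.
        cfg.setGridLogger(new Slf4jLogger());

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.log().info("Ignite started with SLF4J logging");
        }
    }
}
```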

On Thu, Aug 2, 2018 at 8:42 PM Denis Mekhanikov 
wrote:

> Lokesh,
>
> You should configure a logger according to the following documentation:
> https://apacheignite.readme.io/docs/logging
> Choose a suitable logger and refer to a corresponding section of the docs.
>
> Denis
>
> чт, 2 авг. 2018 г. в 9:59, Lokesh Sharma :
>
>> (I'm sorry if I created this thread twice. Kindly delete the earlier one.)
>>
>> I built the "ignite-spring-data_2.0" from source to run it with Spring
>> Boot 2.0.2-RELEASE. The app is running fine but nothing is being logged in
>> the console. Please help.
>>
>> Here's my code: https://github.com/lokeshh/ignite_with_spring_boot_2
>>
>


SYSTEM_WORKER_TERMINATION (Item Not found)

2018-08-02 Thread kvenkatramtreddy
Hi,

We are still receiving the runtime failure (Item not found) in 2.6 as
well. We receive this error every day at the midnight checkpoint.

We have around 15 caches; expiry is configured as 3 days or 8 days, and 2
caches are eternal.
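For context, a per-cache expiry of the kind described (e.g. 3 days after creation) is typically configured as in this sketch; the cache name and the choice of CreatedExpiryPolicy are assumptions, since the attached configuration is not inlined here.

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryConfigSketch {
    public static CacheConfiguration<Long, String> threeDayCache() {
        // "myCache" is a placeholder name, not one of the poster's caches.
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

        // Entries expire 3 days after creation; eternal caches simply
        // omit the expiry-policy factory.
        ccfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.DAYS, 3)));

        return ccfg;
    }
}
```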



Please find the checkpoint log and the error at the links below. I have
attached the configuration as well.

http://apache-ignite-users.70518.x6.nabble.com/file/t1700/example-ignite.xml

apacheIgniteError.txt

  

Thanks & Regards,
Venkat



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Need help for setting offheap memory

2018-08-02 Thread Amol Zambare
Hi Denis.

I am using the node command in Visor and referring to the "Non-heap memory
maximum" metric:

| Non-heap memory initialized | 2mb
| Non-heap memory used| 64mb
| Non-heap memory committed   | 66mb
|* Non-heap memory maximum | 1gb  *

Thanks,
Amol

On Tue, Jul 31, 2018 at 12:21 PM, Denis Mekhanikov 
wrote:

> Amol,
>
> The configuration looks correct, at least the piece that you provided. Do
> you start the server nodes with this config?
> Which visor metric do you use to verify the off-heap size?
>
> Denis
>
>
> On Fri, Jul 27, 2018, 21:53 Amol Zambare  wrote:
>
>> Hi,
>>
>> We are using Ignite to share in-memory data across Spark jobs.
>>
>> I am using the configuration below to set the Ignite off-heap memory. I
>> would like to set it to 100 GB.
>>
>> However, when I print the node statistics using Visor, it shows the
>> off-heap max memory as 1 GB.
>>
>> Please suggest.
>>
>> Apache Ignite version 2.3
>>
>> (The XML configuration snippet was stripped by the mailing-list archive.)
>>
>>
>> Thanks,
>> Amol
>>
>
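The quoted XML did not survive the mail archive, so here is a hedged sketch (not the poster's actual config) of how an off-heap region of roughly 100 GB is typically declared programmatically in Ignite 2.3+. Note also that the JVM's "Non-heap memory maximum" metric generally covers metaspace and code cache, which is separate from Ignite's data-region size.

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffheapConfigSketch {
    public static IgniteConfiguration hundredGbRegion() {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("default");
        region.setMaxSize(100L * 1024 * 1024 * 1024); // 100 GB of off-heap page memory

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
}
```

This configuration must be passed to every server node at startup, which is why the reply asks whether the servers are actually started with it.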


Re: How to do rolling updates with embedded Ignite and changing models

2018-08-02 Thread vkulichenko
Roger,

To be able to change the schema at runtime, you need to make sure there are
no model classes deployed on the server nodes, and therefore no
deserialization happens. Since you run in embedded mode and have only
server nodes, you actually can't use POJOs in your data models at all. You
should use the BinaryObject API [1] or SQL [2] instead. This way you would
be able to change schemas dynamically without restarts (even a rolling
restart is not needed).

[1] https://apacheignite.readme.io/docs/binary-marshaller
[2] https://apacheignite-sql.readme.io/docs

-Val
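The BinaryObject approach mentioned above can be sketched as follows; the type name "Person", its fields, and the cache name are hypothetical, chosen only to illustrate building and reading values without any model class on the classpath.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class BinaryObjectSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Build a value dynamically; no Person class needs to exist.
            BinaryObject person = ignite.binary().builder("Person")
                .setField("name", "Alice")
                .setField("age", 30)
                .build();

            // withKeepBinary() keeps values in binary form, so the
            // server never tries to deserialize them into POJOs.
            IgniteCache<Long, BinaryObject> cache =
                ignite.<Long, BinaryObject>getOrCreateCache("people").withKeepBinary();

            cache.put(1L, person);

            // Fields are read by name; new fields can be added later
            // without restarting the cluster.
            String name = cache.get(1L).field("name");
        }
    }
}
```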



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Errors with TCPCommunicationSpi when using zookeeper discovery

2018-08-02 Thread Larry Mark
I wanted to close the loop on this: I am running in Kubernetes, and the
root cause was a network policy blocking communications.

It seems that in Ignite 2.6, with paired connections, I need an open
communication path on port 47100 between all the servers, between the
servers and the clients, and between the clients. Does this sound correct?
Are clients connecting to other clients? Is this documented somewhere?

Thanks
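For anyone writing a network policy around this, pinning the communication SPI to an exact port makes the firewall rule deterministic. This is a hedged sketch of the relevant settings, not a statement about which paths Ignite requires:

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommSpiPortsSketch {
    public static IgniteConfiguration commPorts() {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // 47100 is the default communication port; setting the port range
        // to 0 makes every node listen on exactly one known port, which is
        // easier to allow in a Kubernetes NetworkPolicy.
        commSpi.setLocalPort(47100);
        commSpi.setLocalPortRange(0);
        commSpi.setUsePairedConnections(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);
        return cfg;
    }
}
```

Note that clients also open a communication listener, so the policy may need to allow client-to-client traffic as well, which matches what the poster observed.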


On Thu, Jul 26, 2018 at 6:39 AM, Ilya Kasnacheev 
wrote:

> Hello!
>
> As far as my understanding goes, ZookeeperClusterNode is not a Zookeeper
> daemon but a cluster node (i.e. Apache Ignite) which is managed by
> Zookeeper discovery. So it's natural that a connection will be initiated to
> such node.
>
> Modern TCP firewalls usually null route connection attempts to closed
> ports, hence you can not see communication problems for a long time and yet
> the connection won't be established.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-07-25 17:30 GMT+03:00 Larry Mark :
>
>> The logs do not indicate any connectivity problem, unless I am missing
>> it, in which case please point it out to me.
>>
>> The messages seem to be getting through fine, but the server thinks there
>> is a connection which does not exist, so it rejects it. This seems to
>> happen because the communication SPI has opened a connection to ZooKeeper.
>> Why would the communication SPI (not discovery) be initiating a
>> connection to ZooKeeper?
>> It seems like the connection to ZooKeeper is making an entry in the array
>> of GridCommunicationClient, which causes the communication SPI to think a
>> client is connected when it is not, and to reject the connections.
>>
>> If this is off base, please tell me what I am missing.
>>
>>
>>
>>
>>
>> On Wed, Jul 25, 2018 at 6:25 AM, Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> This might happen when there's connectivity problems, i.e. when node A
>>> can connect to node B but not vice versa.
>>>
>>> You can also try increasing socketWriteTimeout on communicationSpi if
>>> communication connections die mid-flight.
>>>
>>>
>>> Regards,
>>>
>>> --
>>> Ilya Kasnacheev
>>>
>>> 2018-07-25 0:16 GMT+03:00 lmark58 :
>>>
 When using ZooKeeper discovery I am getting intermittent errors with
 clients connecting.

 Scenario - I have one ignite server running, try to connect with 4
 clients.

 For one of the connecting clients the socket between the client and
 server
 is never established and the log repeats the message
 o.a.i.s.c.tcp.TcpCommunicationSpi - Received incoming connection from
 remote
 node while connecting to this node, rejecting

 The problem is intermittent, but whenever it happens, I see a log entry
 that
 the server is initiating a connection to ZooKeeper.
 o.a.i.s.c.tcp.TcpCommunicationSpi - Creating NIO client to node:
 ZookeeperClusterNode [id=8b66ad54-0357-43a5-8d6e-0d11eda90b10,
 addrs=[10.2.91.45, 127.0.0.1], order=2, loc=false, client=true]

 Attached is a section of the log that shows the clients connecting.  I
 cannot post a full log to a public forum, but if the attached is not
 enough,
 once a contributor picks this up and replies I can send a longer log to
 them
 directly.

 Thanks
 mini.log
 



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

>>>
>>>
>>
>


ALTER TABLE ... NOLOGGING

2018-08-02 Thread Dave Harvey
We did the following while loading a lot of data into 2.5

1) Started data loading on 8 node cluster
2) ALTER TABLE name NOLOGGING  on tables A,B,C,D but not X
3) continued loading
4) deactivated cluster
5) changed the config xml, to increase maxSize of the data region (from 2G
to 160G) and increase checkpointPageBufferSize
6) restarted IGNITE processes
7) continued loading for 2 days
8) deactivated cluster
9) changed the config xml, but /only/ thread pool sizes
10) restarted IGNITE processes

And we found after step 10 that caches A, B, C and D were *empty*, and only
X had data. We did not look at the cache sizes after step 6.

Is this expected behavior?   It is not what we would have expected from
reading the documents.
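For readers unfamiliar with the command, the WAL toggle in step 2 is issued as plain SQL, e.g. through a SqlFieldsQuery. The sketch below is illustrative (the cache name is a guessed placeholder); the key point, hedged per the docs, is that updates made while NOLOGGING is active are not written to the WAL and so are not guaranteed to survive a restart until LOGGING is restored and a checkpoint completes.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class WalToggleSketch {
    // Disable WAL logging for table A during a bulk load, then re-enable it.
    public static void bulkLoad(Ignite ignite, Runnable loader) {
        // "SQL_PUBLIC_A" is the hypothetical cache backing table A.
        ignite.cache("SQL_PUBLIC_A")
            .query(new SqlFieldsQuery("ALTER TABLE A NOLOGGING")).getAll();
        try {
            loader.run();
        } finally {
            // Re-enable logging before any deactivate/restart so the
            // loaded data becomes durable.
            ignite.cache("SQL_PUBLIC_A")
                .query(new SqlFieldsQuery("ALTER TABLE A LOGGING")).getAll();
        }
    }
}
```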



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Toad Connection to Ignite

2018-08-02 Thread ApacheUser
Thanks Alex,

We have a large pool of developers who use TOAD, so we thought of making
TOAD connect to Ignite to give them a similar experience. We are using
DBeaver right now.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite.NET how to cancel tasks

2018-08-02 Thread slava.koptilin
Hello Maksym,

It seems that you need to call the Cancel() method,
something like the following:

var cts = new CancellationTokenSource();
var task = Compute.ExecuteJavaTaskAsync(ComputeApiTest.BroadcastTask, null,
cts.Token);
cts.Cancel();

Please take a look at this example:
https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/Apache.Ignite.Core.Tests/Compute/CancellationTest.cs

Thanks,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Delete/remove cache does not free the memory in PCF server

2018-08-02 Thread Dmitriy Govorukhin
In the current implementation, Ignite does not support deallocating memory.
Maybe you can try using Ignite with persistence and reducing the in-memory
region size?
Please explain your use case in more detail and provide your Ignite and
cache configurations.
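The suggestion above (persistence plus a smaller in-memory region) can be sketched like this; the region name and 200 MB cap are illustrative values matching the ~200 MB discussed below, not a recommendation:

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceRegionSketch {
    public static IgniteConfiguration persistentRegion() {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("persistent");           // placeholder name
        region.setMaxSize(200L * 1024 * 1024);  // cap in-memory use at ~200 MB
        region.setPersistenceEnabled(true);     // colder pages live on disk

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
}
```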

On Mon, Jul 30, 2018 at 8:29 PM okiesong  wrote:

> Hi, first of all, thank you very much for your quick response!
>
> My case is that on the PCF server, once I run Ignite against 1.0 million
> records, the memory usage displayed by the PCF server increases by 200MB.
> After I execute the destroy-cache function (just like I posted above), the
> memory displayed in PCF never decreases, and we need some way to reduce the
> memory used on PCF by 200 MB to avoid running out of memory. Thanks again.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Service not found for a deployed service

2018-08-02 Thread Denis Mekhanikov
Calvin,

You have this problem due to the following issue: IGNITE-1478.

A workaround here would be to retry the method execution with some delay.

This problem should be fixed under IEP-17, which is in progress right now.

Denis
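The suggested workaround (retry with a delay) can be sketched as a small generic helper; the attempt count and delay are arbitrary, and in the real code the Callable would wrap the service-proxy method call.

```java
import java.util.concurrent.Callable;

public class RetryHelper {
    // Invoke the call, retrying up to maxAttempts times with a fixed delay.
    // Rethrows the last failure if every attempt fails.
    public static <T> T withRetry(Callable<T> call, int maxAttempts, long delayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts)
                    Thread.sleep(delayMs); // wait for service deployment to settle
            }
        }
        throw last;
    }
}
```

Usage would look like `RetryHelper.withRetry(() -> serviceProxy.execute(), 5, 200)` around the first call made shortly after `deployClusterSingleton`.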

чт, 2 авг. 2018 г. в 14:33, Calvin KL Wong, CLSA :

> Hi,
>
>
>
> I deployed a service from a client node to our grid using the following
> code:
>
>
>
> IgniteCluster cluster = ignite.cluster();
>
> ClusterGroup group = cluster.forAttribute(…);
>
> Ignite.services(workerGroup).deployClusterSingleton(“blaze/hsbc”)
>
>
>
> It is fine most of the time. However, we just encountered a case where we
> got an exception when some logic tried to use this service:
>
>
>
> 2018-08-02 16:27:57.713 processors.task.GridTaskWorker [sys-#29%mlog%]
> ERROR - Failed to obtain remote job result policy for result from
> ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl
> [job=C2 [c=ServiceProxyCallable [mtdName=execute, svcName=blaze/hsbc,
> ignite=null]], sib=GridJobSiblingImpl
> [sesId=f66f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9,
> jobId=076f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9,
> nodeId=236a47e9-7fdb-464e-be44-b24d0942d75c, isJobDone=false],
> jobCtx=GridJobContextImpl
> [jobId=076f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9, timeoutObj=null,
> attrs={}], node=TcpDiscoveryNode [id=236a47e9-7fdb-464e-be44-b24d0942d75c,
> addrs=[10.23.8.165], sockAddrs=[zhkdlp1712.int.clsa.com/10.23.8.165:0],
> discPort=0, order=37, intOrder=27, lastExchangeTime=1533148447088,
> loc=false, ver=2.3.0#20180518-sha1:02cf6abf, isClient=true], ex=class
> o.a.i.IgniteException: Service not found: blaze/hsbc, hasRes=true,
> isCancelled=false, isOccupied=true]
>
> org.apache.ignite.IgniteException: Remote job threw user exception
> (override or implement ComputeTask.result(..) method if you would like to
> have automatic failover for this exception).
>
> at
> org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1047)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1040)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6663)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1040)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:858)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1066)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1301)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
> [liquid-logic.jar:2.0.10]
>
>at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
> [liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
> [liquid-logic.jar:2.0.10]
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [?:1.8.0_121]
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [?:1.8.0_121]
>
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
>
> Caused by: org.apache.ignite.IgniteException: Service not found: blaze/hsbc
>
> at
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1858)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
> ~[liquid-logic.jar:2.0.10]
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> ~[liquid-logic.jar:2.0.10]
>
> at
> 

Re: Remote Session with Ignite & Cassandra persistence

2018-08-02 Thread Вячеслав Коптилин
Hello,

First of all, I would recommend switching to TcpDiscoveryVmIpFinder [1]
instead of TcpDiscoveryMulticastIpFinder,
and disabling the IPv6 stack by specifying the JVM property
-Djava.net.preferIPv4Stack=true.

[1]
https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder

Thanks,
S.
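The recommended switch to a static IP finder looks roughly like this sketch; the address and port range are placeholders for the actual remote server:

```java
import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticIpFinderSketch {
    public static IgniteConfiguration staticDiscovery() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();

        // List the known server addresses explicitly instead of relying on
        // multicast; "10.0.0.10" and the port range are placeholders.
        ipFinder.setAddresses(Arrays.asList("10.0.0.10:47500..47509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(spi);
        return cfg;
    }
}
```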

чт, 2 авг. 2018 г. в 16:31, okiesong :

> Hi, how can we use Ignite to start multiple Ignite sessions on a remote
> server? I tried using TcpDiscoverySpi and TcpCommunicationSpi to resolve
> this problem, but it was not working. I basically used a setting similar to
> the one below from the Ignite website, and I have already added the jar
> under the ignite/lib folder for Cassandra persistence.
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
>
> ipFinder.setMulticastGroup("228.10.10.157");
>
> spi.setIpFinder(ipFinder);
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> // Override default discovery SPI.
> cfg.setDiscoverySpi(spi);
>
> // Start Ignite node.
> Ignition.start(cfg);
>
>
> This is the error I am getting at the moment. I basically ran ignite.sh on
> the remote server first, then ran my application from localhost pointing to
> this remote server, but the error below was displayed on the remote server.
>
> More details can be found from
>
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-with-remote-server-Cassandra-Persistence-tt22886.html
>
> [11:36:25] Topology snapshot [ver=9, servers=1, clients=0, CPUs=16,
> offheap=13.0GB, heap=1.0GB]
> [11:36:25]   ^-- Node [id=C1BAABB3-413C-4867-9F8A-CC21CA818E80,
> clusterState=ACTIVE]
> [11:36:25] Data Regions Configured:
> [11:36:25]   ^-- default [initSize=256.0 MiB, maxSize=12.6 GiB,
> persistenceEnabled=false]
> [11:36:25,574][SEVERE][exchange-worker-#62][GridCacheProcessor] Failed to
> register MBean for cache group: null
> javax.management.InstanceAlreadyExistsException:
> org.apache:clsLdr=764c12b6,group="Cache groups",name="ignite-cass-delta"
> at
> com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>
> at
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>
> at
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>
> at
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4573)
>
> at
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4544)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2054)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1938)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1864)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:667)
>
> at
>
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:748)
> [11:36:25,580][SEVERE][exchange-worker-#62][GridDhtPartitionsExchangeFuture]
>
> Failed to reinitialize local partitions (preloading will be stopped):
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=8,
> minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1,
> 10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500,
> /127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8,
> intOrder=5, lastExchangeTime=1532619375511, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], topVer=8,
> nodeId8=c1baabb3, msg=Node joined: TcpDiscoveryNode
> [id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1,
> 10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500,
> /127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8,
> intOrder=5, lastExchangeTime=1532619375511, loc=false,
> ver=2.5.0#20180523-sha1:86e110c7, isClient=false], type=NODE_JOINED,
> tstamp=1532619385566], nodeId=c8f39e6b, evt=NODE_JOINED]
> class 

Re: Toad Connection to Ignite

2018-08-02 Thread Alex Plehanov
As far as I know, TOAD supports only a limited set of DBMSs.
For SQL querying you can use any other tool which supports JDBC, like
DBeaver (GUI) or sqlline (CLI, included in the Ignite distribution).

2018-08-02 13:48 GMT+03:00 ApacheUser :

> Hello Ignite Team,
>
> Is it possible to connect to Ignite from TOAD tool? for SQL Querying?.
>
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Cluster with remote server + Cassandra Persistence

2018-08-02 Thread Вячеслав Коптилин
Hello,

The first problem is related to a bug in the implementation of the
`PojoField` class.
This is a known issue: https://issues.apache.org/jira/browse/IGNITE-8788
Unfortunately, that bug is not resolved yet.
In any case, the workaround that I provided above should do the trick.

The second issue is `java.lang.IllegalStateException: Affinity for topology
version is not initialized`,
and it seems that this exception is not related to the original issue with
CassandraCacheStoreFactory.
I would suggest the following:
 - disable IPv6 on all nodes via the JVM option
-Djava.net.preferIPv4Stack=true, for example
 - enable debug logging via the JVM option -DIGNITE_QUIET=false or pass
"-v" to ignite.{sh|bat}
 - try to reproduce the issue, collect all log files from all nodes, and
attach these files to a new message
   I think that is the best approach to get help from the community.

Thanks,
S.

чт, 2 авг. 2018 г. в 16:49, okiesong :

> I also found this while trying to resolve this problem which is still
> unresolved.
>
> https://issues.apache.org/jira/browse/IGNITE-5998
>
> Will this be impacted?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Spring Data 2.0 not working with Spring Boot 2

2018-08-02 Thread Denis Mekhanikov
Lokesh,

You should configure a logger according to the following documentation:
https://apacheignite.readme.io/docs/logging
Choose a suitable logger and refer to a corresponding section of the docs.

Denis

чт, 2 авг. 2018 г. в 9:59, Lokesh Sharma :

> (I'm sorry if I created this thread twice. Kindly delete the earlier one.)
>
> I built the "ignite-spring-data_2.0" from source to run it with Spring
> Boot 2.0.2-RELEASE. The app is running fine but nothing is being logged in
> the console. Please help.
>
> Here's my code: https://github.com/lokeshh/ignite_with_spring_boot_2
>


Re: Transaction return value problem

2018-08-02 Thread Denis Mekhanikov
Here you can find how to use Spring transaction management together with
Ignite:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/transactions/spring/SpringTransactionManager.html

Transaction propagation is not a feature of the database itself; rather,
it's a Spring feature.
It doesn't depend on the underlying database, so you can use it with
Ignite as well.

Denis
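Wiring Ignite into Spring's transaction management, per the javadoc linked above, typically looks like this sketch; the instance name is a placeholder and the exact setter set may vary by version:

```java
import org.apache.ignite.transactions.spring.SpringTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class IgniteTxConfig {
    // Exposes Ignite as a Spring PlatformTransactionManager, so the usual
    // @Transactional(propagation = ...) semantics apply to Ignite cache
    // operations inside annotated methods.
    @Bean
    public SpringTransactionManager transactionManager() {
        SpringTransactionManager mgr = new SpringTransactionManager();
        mgr.setIgniteInstanceName("myGrid"); // placeholder instance name
        return mgr;
    }
}
```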

чт, 2 авг. 2018 г. в 5:40, hulitao198758 :

> When Ignite transactions are enabled, how do I perform certain operations
> only after a transaction completes successfully? Is transaction propagation
> currently supported, and how can I inherit from Spring's transactions?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Graph Database with Ignite? Search like Elastic search/Apache Solr with Ignite?

2018-08-02 Thread Denis Mekhanikov
Wilhelm,

Ignite supports Lucene-based full text search.
Here you can find an example:
https://apacheignite.readme.io/docs/cache-queries#section-text-queries

Denis
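The text-query feature referenced above is used roughly as in this sketch; the Person class, its field, and the cache name are hypothetical, and the cache must be configured with Person as an indexed type for the Lucene index to exist.

```java
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.TextQuery;
import org.apache.ignite.cache.query.annotations.QueryTextField;

public class TextQuerySketch {
    static class Person {
        @QueryTextField // this field is indexed with Lucene for full-text search
        String resume;

        Person(String resume) { this.resume = resume; }
    }

    public static List<Cache.Entry<Long, Person>> search(Ignite ignite, String words) {
        IgniteCache<Long, Person> cache = ignite.cache("persons"); // placeholder cache
        // Lucene-syntax query over the @QueryTextField fields of Person.
        return cache.query(new TextQuery<Long, Person>(Person.class, words)).getAll();
    }
}
```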

ср, 1 авг. 2018 г. в 21:05, Wilhelm Thomas :

> Thanks!
>
> You would think graph structures (edges and vertices) and graph traversal
> could be added to Ignite.
>
>
>
> Actually, that was my next question: is it possible to add a search engine
> (like Elasticsearch or Apache Solr) on top of all those Ignite tables?
>
>
>
>
>
> *From: *Jörn Franke 
> *Reply-To: *"user@ignite.apache.org" 
> *Date: *Monday, July 30, 2018 at 3:52 PM
> *To: *"user@ignite.apache.org" 
> *Subject: *Re: Graph Database with Ignite?
>
>
>
> This normally does not make sense, because most graph databases keep the
> graph structure (not necessarily the vertex details, but the vertices and
> edges) in memory. As far as I know, Ignite does not provide graph data
> structures such as an adjacency matrix/list.
>
> If you have a very large graph whose structure does not fit into memory,
> then you can work with a distributed graph database, such as JanusGraph. It
> has various pluggable backends, such as HBase for the graph and Solr for
> indexing vertices. Maybe someone will write an Ignite backend. Of course
> you could try to run HBase on IGFS, but that would be a little far-fetched.
>
>
> On 30. Jul 2018, at 22:58, Wilhelm Thomas 
> wrote:
>
> Hello,
>
>
>
> I’m looking into Neo4J and Apache Gremlin graph databases.
>
> Can Ignite support a graph database? Can I use Ignite as the underlying
> database and use Gremlin for the queries?
>
>
>
> Thanks
>
>
>
> w
>
>


Re: Distribute closures error: "Operation cannot be performed in raw mode."

2018-08-02 Thread F.D.
Ok perfect!

F.D.

On Thu, Aug 2, 2018 at 3:11 PM Igor Sapego  wrote:

> You may also use the raw writer. The point is, you should
> not use the non-raw writer once you have already started using the raw one.
>
> Best Regards,
> Igor
>
>
> On Thu, Aug 2, 2018 at 3:58 PM F.D.  wrote:
>
>> My fault!!! I modified the code in this way:
>>
>>static void Write(BinaryWriter& writer, const Calculation& obj)
>>{
>>   writer.WriteBool("local_log", obj.local_log_);
>>   writer.WriteString("service_name", obj.service_name_);
>>
>>   auto sa_writer = writer.WriteStringArray("input");
>>   for (const auto& s : obj.input_)
>>  sa_writer.Write(s);
>>   sa_writer.Close();
>>}
>>
>>static void Read(BinaryReader& reader, Calculation& dst)
>>{
>>   dst.local_log_ = reader.ReadBool("local_log");
>>   dst.service_name_ = reader.ReadString("service_name");
>>
>>   auto sa_reader = reader.ReadStringArray("input");
>>   while(sa_reader.HasNext())
>>  dst.input_.push_back(sa_reader.GetNext());
>>}
>>
>> and now work perfectly!
>>
>> Sorry and thanks again!
>>F.D.
>>
>>
>>
>> On Thu, Aug 2, 2018 at 2:52 PM F.D.  wrote:
>>
>>> Here we go. Maybe the problem is the std::vector<std::string> inside the
>>> class Calculation.
>>>
>>> namespace ignite {
>>> namespace binary {
>>>
>>> template<>
>>> struct BinaryType
>>> {
>>>static int32_t GetTypeId()
>>>{
>>>   return GetBinaryStringHashCode("Calculation");
>>>}
>>>
>>>static void GetTypeName(std::string& dst)
>>>{
>>>   dst = "Calculation";
>>>}
>>>
>>>static int32_t GetFieldId(const char* name)
>>>{
>>>   return GetBinaryStringHashCode(name);
>>>}
>>>
>>>static int32_t GetHashCode(const Calculation& obj)
>>>{
>>>   return 0;
>>>}
>>>
>>>static bool IsNull(const Calculation& obj)
>>>{
>>>   return false;
>>>}
>>>
>>>static void GetNull(Calculation& dst)
>>>{
>>>   dst = Calculation();
>>>}
>>>
>>>static void Write(BinaryWriter& writer, const Calculation& obj)
>>>{
>>>   writer.RawWriter().WriteBool(obj.local_log_);
>>>   writer.RawWriter().WriteString(obj.service_name_);
>>>
>>>   auto sa_writer = writer.WriteStringArray("input");
>>>   for (const auto& s : obj.input_)
>>>  sa_writer.Write(s);
>>>   sa_writer.Close();
>>>}
>>>
>>>static void Read(BinaryReader& reader, Calculation& dst)
>>>{
>>>   dst.local_log_ = reader.RawReader().ReadBool();
>>>   dst.service_name_ = reader.RawReader().ReadString();
>>>
>>>   auto sa_reader = reader.ReadStringArray("input");
>>>   while(sa_reader.HasNext())
>>>  dst.input_.push_back(sa_reader.GetNext());
>>>}
>>> };
>>>
>>> template<>
>>> struct BinaryType<std::vector<std::string>>
>>> {
>>>typedef std::vector<std::string> value_type;
>>>
>>>static int32_t GetTypeId()
>>>{
>>>   return GetBinaryStringHashCode("VectorOfString");
>>>}
>>>
>>>static void GetTypeName(std::string& dst)
>>>{
>>>   dst = "VectorOfString";
>>>}
>>>
>>>static int32_t GetFieldId(const char* name)
>>>{
>>>   return GetBinaryStringHashCode(name);
>>>}
>>>
>>>static int32_t GetHashCode(const std::vector<std::string>& obj)
>>>{
>>>   return 0;
>>>}
>>>
>>>static bool IsNull(const std::vector<std::string>& obj)
>>>{
>>>   return !obj.size();
>>>}
>>>
>>>static void GetNull(std::vector<std::string>& dst)
>>>{
>>>   dst = value_type();
>>>}
>>>
>>>static void Write(BinaryWriter& writer, const std::vector<std::string>& obj)
>>>{
>>>   auto sa_writer = writer.WriteStringArray("items");
>>>   for (const auto& s : obj)
>>>  sa_writer.Write(s);
>>>   sa_writer.Close();
>>>}
>>>
>>>static void Read(BinaryReader& reader, std::vector<std::string>& dst)
>>>{
>>>   auto sa_reader = reader.ReadStringArray("items");
>>>   while(sa_reader.HasNext())
>>>  dst.push_back(sa_reader.GetNext());
>>>}
>>> };
>>>
>>> } } // namespace ignite binary
>>>
>>> Thanks,
>>>F.D.
>>>
>>>
>>> On Thu, Aug 2, 2018 at 10:14 AM Igor Sapego  wrote:
>>>
 Hi,

 Can you show how you define BinaryType? Because the error
 you are receiving is related to serialization/deserialization process.

 Best Regards,
 Igor


 On Thu, Aug 2, 2018 at 9:15 AM F.D.  wrote:

> Hi Igniters,
>
> finally, I've compiled my code and run my test. But after I call my
> closure I get this error: "Operation cannot be performed in raw mode.",
> and unfortunately I have no idea what it means.
>
> This is the code of call:
>
> IgniteConfiguration cfg;
> std::string home = getenv("IGNITE_HOME");
> fs::path cfg_path = fs::path(home) / "platforms" / "cpp" /
> "client_config.xml";
> cfg.springCfgPath = cfg_path.string();
>
> ignite = Ignition::Start(cfg);
>
> IgniteBinding binding = ignite.GetBinding();
> binding.RegisterComputeFunc();
>
> Compute compute = ignite.GetCompute();
>
> [...]
>

Re: Ignite Cluster with remote server + Cassandra Persistence

2018-08-02 Thread okiesong
I also found this while trying to resolve this problem, which is still
unresolved.

https://issues.apache.org/jira/browse/IGNITE-5998

Will this be impacted?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Remote Session with Ignite & Cassandra persistence

2018-08-02 Thread okiesong
Hi, how can we use Ignite to start multiple Ignite sessions on a remote
server? I tried using TcpDiscoverySpi and TcpCommunicationSpi to resolve
this problem, but it was not working. I basically used a setting similar to
the one below from the Ignite website, and I have already added the jar
under the ignite/lib folder for Cassandra persistence.

TcpDiscoverySpi spi = new TcpDiscoverySpi();
 
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
 
ipFinder.setMulticastGroup("228.10.10.157");
 
spi.setIpFinder(ipFinder);
 
IgniteConfiguration cfg = new IgniteConfiguration();
 
// Override default discovery SPI.
cfg.setDiscoverySpi(spi);
 
// Start Ignite node.
Ignition.start(cfg);


This is the error I am getting at the moment. I basically ran ignite.sh on
the remote server first, then ran my application from localhost pointing to
this remote server, but the error below was displayed on the remote server.

More details can be found from
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-with-remote-server-Cassandra-Persistence-tt22886.html

[11:36:25] Topology snapshot [ver=9, servers=1, clients=0, CPUs=16, 
offheap=13.0GB, heap=1.0GB] 
[11:36:25]   ^-- Node [id=C1BAABB3-413C-4867-9F8A-CC21CA818E80, 
clusterState=ACTIVE] 
[11:36:25] Data Regions Configured: 
[11:36:25]   ^-- default [initSize=256.0 MiB, maxSize=12.6 GiB, 
persistenceEnabled=false] 
[11:36:25,574][SEVERE][exchange-worker-#62][GridCacheProcessor] Failed to 
register MBean for cache group: null 
javax.management.InstanceAlreadyExistsException: 
org.apache:clsLdr=764c12b6,group="Cache groups",name="ignite-cass-delta" 
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437) 
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
 
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
 
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
 
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
 
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) 
at 
org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4573)
 
at 
org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4544)
 
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheGroup(GridCacheProcessor.java:2054)
 
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1938)
 
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1864)
 
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:667)
 
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
 
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
at java.lang.Thread.run(Thread.java:748) 
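For illustration only (plain JDK, not Ignite internals): the InstanceAlreadyExistsException above is the platform MBean server refusing a second MBean registered under an ObjectName that is already taken. A minimal stdlib sketch of the same failure mode, with a hypothetical demo bean and name:

```java
import java.lang.management.ManagementFactory;

import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DuplicateMBeanDemo {
    // Standard MBean naming convention: interface name = class name + "MBean".
    public interface DemoMBean {
        int getValue();
    }

    public static class Demo implements DemoMBean {
        @Override public int getValue() { return 42; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Hypothetical name mimicking Ignite's "Cache groups" naming scheme.
        ObjectName name = new ObjectName("demo:group=\"Cache groups\",name=\"ignite-cass-delta\"");

        server.registerMBean(new Demo(), name); // first registration succeeds

        try {
            server.registerMBean(new Demo(), name); // same ObjectName again
        } catch (InstanceAlreadyExistsException e) {
            // The same refusal Ignite hits when a cache group's MBean name
            // is already registered in the JVM's platform MBean server.
            System.out.println("duplicate registration rejected");
        }
    }
}
```

This typically points at two Ignite nodes (or a restarted cache group) sharing one JVM's MBean server under the same bean name.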
[11:36:25,580][SEVERE][exchange-worker-#62][GridDhtPartitionsExchangeFuture] 
Failed to reinitialize local partitions (preloading will be stopped): 
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=8, 
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode 
[id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1, 
10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500, 
/127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8, 
intOrder=5, lastExchangeTime=1532619375511, loc=false, 
ver=2.5.0#20180523-sha1:86e110c7, isClient=false], topVer=8, 
nodeId8=c1baabb3, msg=Node joined: TcpDiscoveryNode 
[id=c8f39e6b-9e13-47c7-8f6c-570b38afa962, addrs=[0:0:0:0:0:0:0:1, 
10.252.198.106, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:47500, 
/127.0.0.1:47500, /10.252.198.106:47500], discPort=47500, order=8, 
intOrder=5, lastExchangeTime=1532619375511, loc=false, 
ver=2.5.0#20180523-sha1:86e110c7, isClient=false], type=NODE_JOINED, 
tstamp=1532619385566], nodeId=c8f39e6b, evt=NODE_JOINED] 
class org.apache.ignite.IgniteCheckedException: Failed to register MBean for 
component: 
org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@7a80ba57
 
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.registerMbean(GridCacheProcessor.java:4159)
 
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1690)
 
at 

Re: Ignite Cluster with remote server + Cassandra Persistence

2018-08-02 Thread okiesong
Hi, does anyone have a solution to the problem I am having right now? Thanks
in advance. I really need input on this problem. 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distribute closures error: "Operation cannot be performed in raw mode."

2018-08-02 Thread Igor Sapego
You may also use RawWriter. The point is, you should
not use the non-raw writer once you have already started using raw mode.

Best Regards,
Igor


On Thu, Aug 2, 2018 at 3:58 PM F.D.  wrote:

> My fault!!! I modified the code in this way:
>
>static void Write(BinaryWriter& writer, const Calculation& obj)
>{
>   writer.WriteBool("local_log", obj.local_log_);
>   writer.WriteString("service_name", obj.service_name_);
>
>   auto sa_writer = writer.WriteStringArray("input");
>   for(const auto& s : obj.input_)
>  sa_writer.Write(s);
>   sa_writer.Close();
>}
>
>static void Read(BinaryReader& reader, Calculation& dst)
>{
>   dst.local_log_ = reader.ReadBool("local_log");
>   dst.service_name_ = reader.ReadString("service_name");
>
>   auto sa_reader = reader.ReadStringArray("input");
>   while(sa_reader.HasNext())
>  dst.input_.push_back(sa_reader.GetNext());
>}
>
> and now it works perfectly!
>
> Sorry and thanks again!
>F.D.
>
>
>
> On Thu, Aug 2, 2018 at 2:52 PM F.D.  wrote:
>
>> Here we go. Maybe the problem is the vector inside the class
>> Calculation.
>>
>> namespace ignite {
>> namespace binary {
>>
>> template<>
>> struct BinaryType<Calculation>
>> {
>>static int32_t GetTypeId()
>>{
>>   return GetBinaryStringHashCode("Calculation");
>>}
>>
>>static void GetTypeName(std::string& dst)
>>{
>>   dst = "Calculation";
>>}
>>
>>static int32_t GetFieldId(const char* name)
>>{
>>   return GetBinaryStringHashCode(name);
>>}
>>
>>static int32_t GetHashCode(const Calculation& obj)
>>{
>>   return 0;
>>}
>>
>>static bool IsNull(const Calculation& obj)
>>{
>>   return false;
>>}
>>
>>static void GetNull(Calculation& dst)
>>{
>>   dst = Calculation();
>>}
>>
>>static void Write(BinaryWriter& writer, const Calculation& obj)
>>{
>>   writer.RawWriter().WriteBool(obj.local_log_);
>>   writer.RawWriter().WriteString(obj.service_name_);
>>
>>   auto sa_writer = writer.WriteStringArray("input");
>>   for(const auto& s : obj.input_)
>>  sa_writer.Write(s);
>>   sa_writer.Close();
>>}
>>
>>static void Read(BinaryReader& reader, Calculation& dst)
>>{
>>   dst.local_log_ = reader.RawReader().ReadBool();
>>   dst.service_name_ = reader.RawReader().ReadString();
>>
>>   auto sa_reader = reader.ReadStringArray("input");
>>   while(sa_reader.HasNext())
>>  dst.input_.push_back(sa_reader.GetNext());
>>}
>> };
>>
>> template<>
>> struct BinaryType<std::vector<std::string>>
>> {
>>typedef std::vector<std::string> value_type;
>>
>>static int32_t GetTypeId()
>>{
>>   return GetBinaryStringHashCode("VectorOfString");
>>}
>>
>>static void GetTypeName(std::string& dst)
>>{
>>   dst = "VectorOfString";
>>}
>>
>>static int32_t GetFieldId(const char* name)
>>{
>>   return GetBinaryStringHashCode(name);
>>}
>>
>>static int32_t GetHashCode(const std::vector<std::string>& obj)
>>{
>>   return 0;
>>}
>>
>>static bool IsNull(const std::vector<std::string>& obj)
>>{
>>   return !obj.size();
>>}
>>
>>static void GetNull(std::vector<std::string>& dst)
>>{
>>   dst = value_type();
>>}
>>
>>static void Write(BinaryWriter& writer, const std::vector<std::string>& obj)
>>{
>>   auto sa_writer = writer.WriteStringArray("items");
>>   for(const auto& s : obj)
>>  sa_writer.Write(s);
>>   sa_writer.Close();
>>}
>>
>>static void Read(BinaryReader& reader, std::vector<std::string>& dst)
>>{
>>   auto sa_reader = reader.ReadStringArray("items");
>>   while(sa_reader.HasNext())
>>  dst.push_back(sa_reader.GetNext());
>>}
>> };
>>
>> } } // namespace ignite binary
>>
>> Thanks,
>>F.D.
>>
>>
>> On Thu, Aug 2, 2018 at 10:14 AM Igor Sapego  wrote:
>>
>>> Hi,
>>>
>>> Can you show how you define BinaryType? Because the error
>>> you are receiving is related to serialization/deserialization process.
>>>
>>> Best Regards,
>>> Igor
>>>
>>>
>>> On Thu, Aug 2, 2018 at 9:15 AM F.D.  wrote:
>>>
 Hi Igniters,

 finally, I've compiled my code and run my test. But after I call my
 closure I get this error: "Operation cannot be performed in raw mode.",
 and unfortunately I have no idea what it means.

 This is the calling code:

 IgniteConfiguration cfg;
 std::string home = getenv("IGNITE_HOME");
 fs::path cfg_path = fs::path(home) / "platforms" / "cpp" /
 "client_config.xml";
 cfg.springCfgPath = cfg_path.string();

 ignite = Ignition::Start(cfg);

 IgniteBinding binding = ignite.GetBinding();
 binding.RegisterComputeFunc<Calculation>();

 Compute compute = ignite.GetCompute();

 [...]

 Calculation functor(name, args, false);
 auto fut = compute.CallAsync(functor);

 [...]

 And Calculation is:

 class CalculationEngineIgniteServer: public
 ignite::compute::ComputeFunc<std::string>
 {
friend struct
 

Re: Distribute closures error: "Operation cannot be performed in raw mode."

2018-08-02 Thread F.D.
 My fault!!! I modified the code in this way:

   static void Write(BinaryWriter& writer, const Calculation& obj)
   {
  writer.WriteBool("local_log", obj.local_log_);
  writer.WriteString("service_name", obj.service_name_);

  auto sa_writer = writer.WriteStringArray("input");
   for(const auto& s : obj.input_)
 sa_writer.Write(s);
  sa_writer.Close();
   }

   static void Read(BinaryReader& reader, Calculation& dst)
   {
  dst.local_log_ = reader.ReadBool("local_log");
  dst.service_name_ = reader.ReadString("service_name");

  auto sa_reader = reader.ReadStringArray("input");
  while(sa_reader.HasNext())
 dst.input_.push_back(sa_reader.GetNext());
   }

and now it works perfectly!

Sorry and thanks again!
   F.D.



On Thu, Aug 2, 2018 at 2:52 PM F.D.  wrote:

> Here we go. Maybe the problem is the vector inside the class
> Calculation.
>
> namespace ignite {
> namespace binary {
>
> template<>
> struct BinaryType<Calculation>
> {
>static int32_t GetTypeId()
>{
>   return GetBinaryStringHashCode("Calculation");
>}
>
>static void GetTypeName(std::string& dst)
>{
>   dst = "Calculation";
>}
>
>static int32_t GetFieldId(const char* name)
>{
>   return GetBinaryStringHashCode(name);
>}
>
>static int32_t GetHashCode(const Calculation& obj)
>{
>   return 0;
>}
>
>static bool IsNull(const Calculation& obj)
>{
>   return false;
>}
>
>static void GetNull(Calculation& dst)
>{
>   dst = Calculation();
>}
>
>static void Write(BinaryWriter& writer, const Calculation& obj)
>{
>   writer.RawWriter().WriteBool(obj.local_log_);
>   writer.RawWriter().WriteString(obj.service_name_);
>
>   auto sa_writer = writer.WriteStringArray("input");
>   for(const auto& s : obj.input_)
>  sa_writer.Write(s);
>   sa_writer.Close();
>}
>
>static void Read(BinaryReader& reader, Calculation& dst)
>{
>   dst.local_log_ = reader.RawReader().ReadBool();
>   dst.service_name_ = reader.RawReader().ReadString();
>
>   auto sa_reader = reader.ReadStringArray("input");
>   while(sa_reader.HasNext())
>  dst.input_.push_back(sa_reader.GetNext());
>}
> };
>
> template<>
> struct BinaryType<std::vector<std::string>>
> {
>typedef std::vector<std::string> value_type;
>
>static int32_t GetTypeId()
>{
>   return GetBinaryStringHashCode("VectorOfString");
>}
>
>static void GetTypeName(std::string& dst)
>{
>   dst = "VectorOfString";
>}
>
>static int32_t GetFieldId(const char* name)
>{
>   return GetBinaryStringHashCode(name);
>}
>
>static int32_t GetHashCode(const std::vector<std::string>& obj)
>{
>   return 0;
>}
>
>static bool IsNull(const std::vector<std::string>& obj)
>{
>   return !obj.size();
>}
>
>static void GetNull(std::vector<std::string>& dst)
>{
>   dst = value_type();
>}
>
>static void Write(BinaryWriter& writer, const std::vector<std::string>& obj)
>{
>   auto sa_writer = writer.WriteStringArray("items");
>   for(const auto& s : obj)
>  sa_writer.Write(s);
>   sa_writer.Close();
>}
>
>static void Read(BinaryReader& reader, std::vector<std::string>& dst)
>{
>   auto sa_reader = reader.ReadStringArray("items");
>   while(sa_reader.HasNext())
>  dst.push_back(sa_reader.GetNext());
>}
> };
>
> } } // namespace ignite binary
>
> Thanks,
>F.D.
>
>
> On Thu, Aug 2, 2018 at 10:14 AM Igor Sapego  wrote:
>
>> Hi,
>>
>> Can you show how you define BinaryType? Because the error
>> you are receiving is related to serialization/deserialization process.
>>
>> Best Regards,
>> Igor
>>
>>
>> On Thu, Aug 2, 2018 at 9:15 AM F.D.  wrote:
>>
>>> Hi Igniters,
>>>
>>> finally, I've compiled my code and run my test. But after I call my
>>> closure I get this error: "Operation cannot be performed in raw mode.",
>>> and unfortunately I have no idea what it means.
>>>
>>> This is the calling code:
>>>
>>> IgniteConfiguration cfg;
>>> std::string home = getenv("IGNITE_HOME");
>>> fs::path cfg_path = fs::path(home) / "platforms" / "cpp" /
>>> "client_config.xml";
>>> cfg.springCfgPath = cfg_path.string();
>>>
>>> ignite = Ignition::Start(cfg);
>>>
>>> IgniteBinding binding = ignite.GetBinding();
>>> binding.RegisterComputeFunc<Calculation>();
>>>
>>> Compute compute = ignite.GetCompute();
>>>
>>> [...]
>>>
>>> Calculation functor(name, args, false);
>>> auto fut = compute.CallAsync(functor);
>>>
>>> [...]
>>>
>>> And Calculation is:
>>>
>>> class CalculationEngineIgniteServer: public
>>> ignite::compute::ComputeFunc<std::string>
>>> {
>>>friend struct
>>> ignite::binary::BinaryType<CalculationEngineIgniteServer>;
>>> public:
>>>CalculationEngineIgniteServer(
>>>) = default;
>>>CalculationEngineIgniteServer(
>>>   const std::string& name,
>>>   const std::vector<std::string>& input,
>>>   bool localLog
>>>);
>>>
>>>virtual std::string Call();
>>>
>>> private:
>>>std::string name_;
>>>bool local_log_;
>>>
>>>std::vector<std::string> input_;
>>> };
>>>
>>> Then I defined BinaryType for 

Re: Distribute closures error: "Operation cannot be performed in raw mode."

2018-08-02 Thread F.D.
Here we go. Maybe the problem is the vector inside the class
Calculation.

namespace ignite {
namespace binary {

template<>
struct BinaryType<Calculation>
{
   static int32_t GetTypeId()
   {
  return GetBinaryStringHashCode("Calculation");
   }

   static void GetTypeName(std::string& dst)
   {
  dst = "Calculation";
   }

   static int32_t GetFieldId(const char* name)
   {
  return GetBinaryStringHashCode(name);
   }

   static int32_t GetHashCode(const Calculation& obj)
   {
  return 0;
   }

   static bool IsNull(const Calculation& obj)
   {
  return false;
   }

   static void GetNull(Calculation& dst)
   {
  dst = Calculation();
   }

   static void Write(BinaryWriter& writer, const Calculation& obj)
   {
  writer.RawWriter().WriteBool(obj.local_log_);
  writer.RawWriter().WriteString(obj.service_name_);

  auto sa_writer = writer.WriteStringArray("input");
  for(const auto& s : obj.input_)
 sa_writer.Write(s);
  sa_writer.Close();
   }

   static void Read(BinaryReader& reader, Calculation& dst)
   {
  dst.local_log_ = reader.RawReader().ReadBool();
  dst.service_name_ = reader.RawReader().ReadString();

  auto sa_reader = reader.ReadStringArray("input");
  while(sa_reader.HasNext())
 dst.input_.push_back(sa_reader.GetNext());
   }
};

template<>
struct BinaryType<std::vector<std::string>>
{
   typedef std::vector<std::string> value_type;

   static int32_t GetTypeId()
   {
  return GetBinaryStringHashCode("VectorOfString");
   }

   static void GetTypeName(std::string& dst)
   {
  dst = "VectorOfString";
   }

   static int32_t GetFieldId(const char* name)
   {
  return GetBinaryStringHashCode(name);
   }

   static int32_t GetHashCode(const std::vector<std::string>& obj)
   {
  return 0;
   }

   static bool IsNull(const std::vector<std::string>& obj)
   {
  return !obj.size();
   }

   static void GetNull(std::vector<std::string>& dst)
   {
  dst = value_type();
   }

   static void Write(BinaryWriter& writer, const std::vector<std::string>& obj)
   {
  auto sa_writer = writer.WriteStringArray("items");
  for(const auto& s : obj)
 sa_writer.Write(s);
  sa_writer.Close();
   }

   static void Read(BinaryReader& reader, std::vector<std::string>& dst)
   {
  auto sa_reader = reader.ReadStringArray("items");
  while(sa_reader.HasNext())
 dst.push_back(sa_reader.GetNext());
   }
};

} } // namespace ignite binary

Thanks,
   F.D.


On Thu, Aug 2, 2018 at 10:14 AM Igor Sapego  wrote:

> Hi,
>
> Can you show how you define BinaryType? Because the error
> you are receiving is related to serialization/deserialization process.
>
> Best Regards,
> Igor
>
>
> On Thu, Aug 2, 2018 at 9:15 AM F.D.  wrote:
>
>> Hi Igniters,
>>
>> finally, I've compiled my code and run my test. But after I call my
>> closure I get this error: "Operation cannot be performed in raw mode.",
>> and unfortunately I have no idea what it means.
>>
>> This is the calling code:
>>
>> IgniteConfiguration cfg;
>> std::string home = getenv("IGNITE_HOME");
>> fs::path cfg_path = fs::path(home) / "platforms" / "cpp" /
>> "client_config.xml";
>> cfg.springCfgPath = cfg_path.string();
>>
>> ignite = Ignition::Start(cfg);
>>
>> IgniteBinding binding = ignite.GetBinding();
>> binding.RegisterComputeFunc<Calculation>();
>>
>> Compute compute = ignite.GetCompute();
>>
>> [...]
>>
>> Calculation functor(name, args, false);
>> auto fut = compute.CallAsync(functor);
>>
>> [...]
>>
>> And Calculation is:
>>
>> class CalculationEngineIgniteServer: public
>> ignite::compute::ComputeFunc<std::string>
>> {
>>friend struct
>> ignite::binary::BinaryType<CalculationEngineIgniteServer>;
>> public:
>>CalculationEngineIgniteServer(
>>) = default;
>>CalculationEngineIgniteServer(
>>   const std::string& name,
>>   const std::vector<std::string>& input,
>>   bool localLog
>>);
>>
>>virtual std::string Call();
>>
>> private:
>>std::string name_;
>>bool local_log_;
>>
>>std::vector<std::string> input_;
>> };
>>
>> Then I defined BinaryType for Calculation and for
>> std::vector<std::string>. I don't understand what I'm missing.
>>
>> Thanks,
>> F.D.
>>
>>


Service not found for a deployed service

2018-08-02 Thread Calvin KL Wong, CLSA
Hi,

I deployed a service from a client node to our grid using the following code:

IgniteCluster cluster = ignite.cluster();
ClusterGroup group = cluster.forAttribute(...);
Ignite.services(workerGroup).deployClusterSingleton("blaze/hsbc")

It is fine most of the time.  However we just encountered a case where we got 
an exception when some logic tried to use this service:

2018-08-02 16:27:57.713 processors.task.GridTaskWorker [sys-#29%mlog%] ERROR - 
Failed to obtain remote job result policy for result from 
ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl 
[job=C2 [c=ServiceProxyCallable [mtdName=execute, svcName=blaze/hsbc, 
ignite=null]], sib=GridJobSiblingImpl 
[sesId=f66f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9, 
jobId=076f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9, 
nodeId=236a47e9-7fdb-464e-be44-b24d0942d75c, isJobDone=false], 
jobCtx=GridJobContextImpl 
[jobId=076f54be461-65c907a3-8fcf-4ddd-acb1-6553be3d1dc9, timeoutObj=null, 
attrs={}], node=TcpDiscoveryNode [id=236a47e9-7fdb-464e-be44-b24d0942d75c, 
addrs=[10.23.8.165], sockAddrs=[zhkdlp1712.int.clsa.com/10.23.8.165:0], 
discPort=0, order=37, intOrder=27, lastExchangeTime=1533148447088, loc=false, 
ver=2.3.0#20180518-sha1:02cf6abf, isClient=true], ex=class 
o.a.i.IgniteException: Service not found: blaze/hsbc, hasRes=true, 
isCancelled=false, isOccupied=true]
org.apache.ignite.IgniteException: Remote job threw user exception (override or 
implement ComputeTask.result(..) method if you would like to have automatic 
failover for this exception).
at 
org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1047)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1040)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6663)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1040)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:858)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1066)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1301)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
 [liquid-logic.jar:2.0.10]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
 [liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
 [liquid-logic.jar:2.0.10]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_121]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: org.apache.ignite.IgniteException: Service not found: blaze/hsbc
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1858)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1181)
 ~[liquid-logic.jar:2.0.10]
at 
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1913)
 ~[liquid-logic.jar:2.0.10]
... 7 more
Caused by: 
org.apache.ignite.internal.processors.service.GridServiceNotFoundException: 
Service not found: blaze/hsbc
at 
org.apache.ignite.internal.processors.service.GridServiceProxy$ServiceProxyCallable.call(GridServiceProxy.java:408)
 ~[liquid-logic.jar:2.0.10]
at 

Toad Connection to Ignite

2018-08-02 Thread ApacheUser
Hello Ignite Team,

Is it possible to connect to Ignite from the TOAD tool, for SQL querying?


Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
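For reference, Ignite exposes SQL through a standard JDBC thin driver, so any JDBC-capable tool (Toad included, if it lets you register a custom JDBC driver) can try connecting. A sketch of the settings; the host is an assumption, 10800 is the default thin-client port:

```shell
# Driver JAR: ignite-core-<version>.jar from the Ignite distribution
# Driver class: org.apache.ignite.IgniteJdbcThinDriver
# Connection URL (host is a placeholder for your node's address):
JDBC_URL="jdbc:ignite:thin://127.0.0.1:10800"
echo "$JDBC_URL"
```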


Re: "Unable to await partitions release latch within timeout: ServerLatch" exception causing cluster freeze

2018-08-02 Thread Pavel Kovalenko
Hello Ray,

I'm glad that your problem was resolved. I just want to add that at the
beginning phase of PME we wait for all current client operations to finish;
new operations are frozen until the PME ends. After a node finishes all
ongoing client operations, it counts down the latch that you see in the
"Unable to await" message in the logs. When all nodes have finished all their
operations, the exchange latch completes and the PME continues. This latch
was added to reach data consistency on all nodes during the main PME phase
(partition information exchange, affinity calculation, etc.). If you have
network throttling between client and server, it becomes hard to notify a
client that its data streamer operation has finished, and the latch
completion process is slowed down.

2018-08-02 12:11 GMT+03:00 Ray :

> The root cause of this issue is network throttling between the client and
> servers.
>
> When I move the clients to run in the same cluster as the servers, there's
> no such problem any more.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: "Unable to await partitions release latch within timeout: ServerLatch" exception causing cluster freeze

2018-08-02 Thread Ray
The root cause of this issue is network throttling between the client and
servers.

When I move the clients to run in the same cluster as the servers, there's
no such problem any more.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distribute closures error: "Operation cannot be performed in raw mode."

2018-08-02 Thread Igor Sapego
Hi,

Can you show how you define BinaryType? Because the error
you are receiving is related to serialization/deserialization process.

Best Regards,
Igor


On Thu, Aug 2, 2018 at 9:15 AM F.D.  wrote:

> Hi Igniters,
>
> finally, I've compiled my code and run my test. But after I call my
> closure I get this error: "Operation cannot be performed in raw mode.",
> and unfortunately I have no idea what it means.
>
> This is the calling code:
>
> IgniteConfiguration cfg;
> std::string home = getenv("IGNITE_HOME");
> fs::path cfg_path = fs::path(home) / "platforms" / "cpp" /
> "client_config.xml";
> cfg.springCfgPath = cfg_path.string();
>
> ignite = Ignition::Start(cfg);
>
> IgniteBinding binding = ignite.GetBinding();
> binding.RegisterComputeFunc<Calculation>();
>
> Compute compute = ignite.GetCompute();
>
> [...]
>
> Calculation functor(name, args, false);
> auto fut = compute.CallAsync(functor);
>
> [...]
>
> And Calculation is:
>
> class CalculationEngineIgniteServer: public
> ignite::compute::ComputeFunc<std::string>
> {
>friend struct ignite::binary::BinaryType<CalculationEngineIgniteServer>;
> public:
>CalculationEngineIgniteServer(
>) = default;
>CalculationEngineIgniteServer(
>   const std::string& name,
>   const std::vector<std::string>& input,
>   bool localLog
>);
>
>virtual std::string Call();
>
> private:
>std::string name_;
>bool local_log_;
>
>std::vector<std::string> input_;
> };
>
> Then I defined BinaryType for Calculation and for
> std::vector<std::string>. I don't understand what I'm missing.
>
> Thanks,
> F.D.
>
>


Additional field problems occurred in ignite2.6

2018-08-02 Thread hulitao198758
In Ignite 2.6 I configured POJO field mapping. In Ignite 2.3, whether the
field name was uppercase or lowercase, only one field was mapped into Ignite
memory. In Ignite 2.6, however, if the case of a field name in the Java
entity class differs from the field name in the Ignite config file, Ignite
maps two fields into memory, so in the end there is an extra field with the
same name as the Java entity class variable. Why?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
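A minimal sketch of where the case sensitivity bites, assuming a QueryEntity-style configuration (the class and field names here are hypothetical): in 2.6 the field name declared in the config must match the Java field's case exactly, otherwise the binary metadata ends up with two distinct fields.

```xml
<!-- Hypothetical entity com.example.User with Java field "userName". -->
<bean class="org.apache.ignite.cache.QueryEntity">
    <property name="keyType" value="java.lang.Long"/>
    <property name="valueType" value="com.example.User"/>
    <property name="fields">
        <map>
            <!-- Must be "userName" (matching the Java field's case), not
                 "username", or two separate fields get registered. -->
            <entry key="userName" value="java.lang.String"/>
        </map>
    </property>
</bean>
```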


RE: in a four node ignite cluster,use atomicLong and process is stuck incountdown latch

2018-08-02 Thread Stanislav Lukyanov
Hi,

I guess this is the problem: https://issues.apache.org/jira/browse/IGNITE-8987

Stan

From: zhouxy1123
Sent: August 2, 2018, 9:57
To: user@ignite.apache.org
Subject: in a four node ignite cluster, use atomicLong and process is stuck
in countdown latch

Hi,
in a four-node Ignite cluster, using atomicLong, a process is stuck in a
countdown latch.

The stack looks like this:



   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:189)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at
org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7490)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.awaitInitialization(DataStructuresProcessor.java:)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.getAtomic(DataStructuresProcessor.java:497)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.atomicLong(DataStructuresProcessor.java:454)
at
org.apache.ignite.internal.IgniteKernal.atomicLong(IgniteKernal.java:3518)
at
org.apache.ignite.internal.IgniteKernal.atomicLong(IgniteKernal.java:3507)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
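For illustration only (plain JDK, not Ignite's internal data-structures latch): the stack above shows a thread parked in CountDownLatch.await() on a latch that is never counted down. A minimal sketch of the difference between the unbounded wait in the trace and a bounded one:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        // A latch that is never counted down, like the initialization latch
        // the stack trace shows DataStructuresProcessor parked on.
        CountDownLatch initLatch = new CountDownLatch(1);

        // initLatch.await() would block here forever; the bounded form
        // returns false after the timeout instead of hanging.
        boolean initialized = initLatch.await(100, TimeUnit.MILLISECONDS);

        System.out.println("initialized=" + initialized);
    }
}
```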



How to do rolling updates with embedded Ignite and changing models

2018-08-02 Thread Roger Janssen
Hi,

I understand and have experienced that Ignite crashes (BinaryObjectExceptions)
when I redeploy my application (with an embedded Ignite server node) in a
cluster and the newly redeployed application has a different model (like enums
with added values).

This results in the inability to keep my application up 24/7 because I can't
do rolling updates. It suggests I need to kill all my application instances
(and thus the embedded Ignite instances) first and then deploy and restart
them. But this is not acceptable.

How do I need to configure Ignite to be able to do rolling updates with
changing models? If this is not possible, what is the use of Ignite as a
distributed cache?

Configuration details:
- server nodes only
- non persistent
- cached replicated (in fact I am only interested in distributed cache
invalidations)
- server nodes run embedded in/from a java/spring/tomcat application

Kind regards,

Roger Janssen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite.NET how to cancel tasks

2018-08-02 Thread Maksym Ieremenko
Hi,

How do I cancel running tasks on the grid with Ignite.NET?


Ignite Spring Data 2.0 not working with Spring Boot 2

2018-08-02 Thread Lokesh Sharma
(I'm sorry if I created this thread twice. Kindly delete the earlier one.)

I built the "ignite-spring-data_2.0" from source to run it with Spring Boot
2.0.2-RELEASE. The app is running fine but nothing is being logged in the
console. Please help.

Here's my code: https://github.com/lokeshh/ignite_with_spring_boot_2


in a four node ignite cluster,use atomicLong and process is stuck in countdown latch

2018-08-02 Thread zhouxy1123
Hi,
in a four-node Ignite cluster, using atomicLong, a process is stuck in a
countdown latch.

The stack looks like this:



   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:189)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at
java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at
org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7490)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.awaitInitialization(DataStructuresProcessor.java:)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.getAtomic(DataStructuresProcessor.java:497)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.atomicLong(DataStructuresProcessor.java:454)
at
org.apache.ignite.internal.IgniteKernal.atomicLong(IgniteKernal.java:3518)
at
org.apache.ignite.internal.IgniteKernal.atomicLong(IgniteKernal.java:3507)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: two data region with two nodes

2018-08-02 Thread Denis Mekhanikov
I don't think that the data region configuration causes these problems.
What makes you think so?

Try removing the event listeners and see if these steps lead to the same
problem.

What kind of lock do you use? Is it *IgniteCache.lock()* or
*Ignite.reentrantLock()*?
Anyway, I would get rid of these locks. You can make the oldest node in
the cluster be in charge of the message processing.
It can be done as follows:

ignite.events().localListen((e) -> {
IgniteCluster cluster = ignite.cluster();

ClusterNode crd = cluster.forOldest().node();
if (cluster.localNode().id().equals(crd.id()))
System.out.println("Processing NODE_JOINED");

return true;
}, EVT_NODE_JOINED);

You can also use *forOldest* along with some predicate to consider nodes of
some particular kind only.
More on cluster groups: https://apacheignite.readme.io/docs/cluster-groups

Denis

Wed, Aug 1, 2018 at 19:51, wangsan :

> Thank you for your answer.
> Maybe I used the region config the wrong way.
> There are two apps (more apps with different roles) with different Ignite
> configs:
> First app:
> - sets the default region with persistence enabled
> - sets cache a, with a node filter for the first app, in the default region
> Second app:
> - sets the default region with persistence disabled
> - just accesses cache a with queries
>
> Start first app instance 1, second app instance 2, and first app instance 3,
> then close 1,
> then restart 1, and the deadlock happens.
>
> FYI, I use IgniteLock in the first app when processing Ignite discovery
> events such as the join event. When a new node joins, apps 1 and 3 will
> receive the join message via a local event listener, but I just want one
> node to process the message, so I use it like this:
>
> if (globalLock.tryLock()) {
> LOGGER.info("--  hold global lock ");
> try {
> switch (event.type()) {
> case EventType.EVT_NODE_JOINED:
> joinListener.onMessage(clusterNode);
> break;
> case EventType.EVT_NODE_FAILED:
> case EventType.EVT_NODE_LEFT:
> leftListener.onMessage(clusterNode);
> break;
> default:
> LOGGER.info("ignore discovery event: {}",
> event);
> break;
> }
> } finally {
> LOGGER.debug("--  process event done ");
> // don't unlock until node left
> // globalLock.unlock();
> }
> }
>
> The node which holds the globalLock will never unlock unless it leaves. Is it
> the right way to use the lock?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Distribute closures error: "Operation cannot be performed in raw mode."

2018-08-02 Thread F.D.
Hi Igniters,

finally, I've compiled my code and run my test. But after I call my closure
I get this error: "Operation cannot be performed in raw mode.", and
unfortunately I have no idea what it means.

This is the calling code:

IgniteConfiguration cfg;
std::string home = getenv("IGNITE_HOME");
fs::path cfg_path = fs::path(home) / "platforms" / "cpp" /
"client_config.xml";
cfg.springCfgPath = cfg_path.string();

ignite = Ignition::Start(cfg);

IgniteBinding binding = ignite.GetBinding();
binding.RegisterComputeFunc<Calculation>();

Compute compute = ignite.GetCompute();

[...]

Calculation functor(name, args, false);
auto fut = compute.CallAsync(functor);

[...]

And Calculation is:

class CalculationEngineIgniteServer: public
ignite::compute::ComputeFunc<std::string>
{
   friend struct ignite::binary::BinaryType<CalculationEngineIgniteServer>;
public:
   CalculationEngineIgniteServer(
   ) = default;
   CalculationEngineIgniteServer(
   const std::string& name,
   const std::vector<std::string>& input,
  bool localLog
   );

   virtual std::string Call();

private:
   std::string name_;
   bool local_log_;

   std::vector<std::string> input_;
};

Then I defined BinaryType for Calculation and for std::vector<std::string>.
I don't understand what I'm missing.

Thanks,
F.D.