Re: How to add a new node to an existing cluster with Ignite native persistence enabled?

2018-12-19 Thread Lukas Polacek
You can change the baseline topology (
https://apacheignite.readme.io/docs/baseline-topology) either via
control.sh or GridGain web console.

For example, you will see something like this after running "control.sh
--baseline":
Cluster state: active
Current topology version: 5

Baseline nodes:
   ConsistentID=5de27d47-cef3-4a5d-ac1e-6dfe45156e3f, STATE=ONLINE
   ConsistentID=7e610021-052c-48ac-8a93-1d1ec2e10fac, STATE=ONLINE


Number of baseline nodes: 2

Other nodes:
   ConsistentID=e019f021-5988-40c0-84fe-d0bb5226c720
Number of other nodes: 1

You can then add node e019f021-5988-40c0-84fe-d0bb5226c720 to your topology
via "control.sh --baseline add".

On Thu, Dec 20, 2018 at 5:47 AM soonjoin  wrote:

> Hi Team, I am testing Ignite version 2.7.0. I use Ignite native
> persistence and the cache mode is PARTITIONED.
>
> I found that no data was stored on the node that joined the cluster
> after the cluster was already active, even though I deactivated the cluster
> and activated it again. I had to delete the persistent data on all the
> existing nodes and restart the cluster before the new node worked properly.
>
> Is there any way to add a new node to an existing cluster with Ignite
> native persistence enabled, without deleting the old persistent data?
>
> Thanks a lot.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: deep learning over apache ignite

2018-12-19 Thread Mehdi Seydali
I have another question. As you know, dl4j executes on top of Spark. When we
want to integrate dl4j with Ignite, is that meaningful? To accelerate
execution we could do the following:
1. Use Ignite as cache storage to prepare data for processing in dl4j.
2. A job on Spark could spawn a hierarchy of jobs to accelerate execution.
Do you have any comment?

On Wednesday, December 19, 2018, dmitrievanthony 
wrote:

> Yes, in TensorFlow on Apache Ignite we support distributed learning as you
> described it (please see the details in this documentation).
>
> Speaking about performance, TensorFlow supports distributed learning itself
> (please see the details here). But to start distributed learning in pure
> TensorFlow you need to set up the cluster manually, manually distribute
> training data between cluster nodes
> and handle node failures.
>
> In TensorFlow on Apache Ignite we do it for you automatically. Apache
> Ignite
> plays cluster manager role, it starts and maintains TensorFlow cluster with
> optimal configuration and handles node failures. At the same time, the
> training is fully performed by TensorFlow anyway. So, the training
> performance is absolutely equal to the case when you use pure TensorFlow
> with proper manually configured and started TensorFlow cluster because we
> don't participate in the training process when the cluster is running
> properly.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


How to add a new node to an existing cluster with Ignite native persistence enabled?

2018-12-19 Thread soonjoin
Hi Team, I am testing Ignite version 2.7.0. I use Ignite native
persistence and the cache mode is PARTITIONED.

I found that no data was stored on the node that joined the cluster
after the cluster was already active, even though I deactivated the cluster
and activated it again. I had to delete the persistent data on all the
existing nodes and restart the cluster before the new node worked properly.

Is there any way to add a new node to an existing cluster with Ignite
native persistence enabled, without deleting the old persistent data?

Thanks a lot.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite startup is very slow

2018-12-19 Thread kvenkatramtreddy
Hi Evgenii,

Thank you very much for your help. It is reduced to 5 minutes now. I believe
we should have some documentation on choosing the partition count based on
the data size.

Currently Ignite shows my native persistence size at around 800 MB, so can I
still decrease the partition count to improve the startup of the Ignite
cluster?
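For reference, this is roughly how I would lower the partition count per cache (512 is only an illustrative value; I understand the RendezvousAffinityFunction default is 1024, and as far as I know the partition count of an existing persistent cache cannot be changed in place, so the cache would need to be recreated):

    CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
    // Fewer partitions means fewer partition files to open and check on startup.
    ccfg.setAffinity(new RendezvousAffinityFunction(false, 512));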

The following is the metrics output:

^-- Node [id=5624a0e4, uptime=2 days, 00:40:03.756]
^-- H/N/C [hosts=3, nodes=3, CPUs=24]
^-- CPU [cur=6.03%, avg=5.48%, GC=0.03%]
^-- PageMemory [pages=187791]
^-- Heap [used=533MB, free=82.63%, comm=3072MB]
^-- Off-heap [used=742MB, free=43.95%, comm=1324MB]
^--   sysMemPlc region [used=0MB, free=99.99%, comm=100MB]
^--   default region [used=741MB, free=27.57%, comm=1024MB]
^--   metastoreMemPlc region [used=0MB, free=99.56%, comm=100MB]
^--   TxLog region [used=0MB, free=100%, comm=100MB]
^-- Ignite persistence [used=777MB]
^--   sysMemPlc region [used=0MB]
^--   default region [used=777MB]
^--   metastoreMemPlc region [used=unknown]
^--   TxLog region [used=0MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]


2) We are receiving an exception "Failed to map keys for cache (all
partition nodes left the grid)" when 2 nodes have left and one is running,
so could you please help me with this error as well?
I have created a new topic for it:

http://apache-ignite-users.70518.x6.nabble.com/Failed-to-map-keys-for-cache-all-partition-nodes-left-the-grid-tt25964.html

Thanks & Regards,
Venkat




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Failed to map keys for cache (all partition nodes left the grid)

2018-12-19 Thread kvenkatramtreddy
Hi Team,

I have a 3-node server-mode cluster with cache mode PARTITIONED and 1
backup. We are receiving the above error as soon as 2 nodes leave the cluster.

All 3 nodes are in the baseline topology.

Collection<BaselineNode> baselineNodes = ignite.cluster().currentBaselineTopology();

if (!CollectionUtils.isEmpty(baselineNodes) && baselineNodes.size() < 3) {
    // Get all server nodes that are already up and running.
    Collection<ClusterNode> nodes = ignite.cluster().forServers().nodes();

    // Set the baseline topology that is represented by these nodes.
    ignite.cluster().setBaselineTopology(nodes);
}
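For context, my understanding is that with 1 backup each partition only has 2 copies, so once 2 of the 3 nodes are gone some partitions have no owner left, which seems to match the error below. If we need to keep serving data from a single remaining node, I assume each partition would need a copy on every node, roughly like this (key/value types are placeholders):

    CacheConfiguration<Object, Object> usersCfg = new CacheConfiguration<>("users");
    usersCfg.setCacheMode(CacheMode.PARTITIONED);
    // Primary + 2 backups = one copy of every partition on each of the 3 nodes.
    usersCfg.setBackups(2);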


Exception:

org.apache.ignite.cache.CacheServerNotFoundException: Failed to map keys for
cache (all partition nodes left the grid) [topVer=AffinityTopologyVersion
[topVer=13, minorTopVer=0], cache=users]
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1321)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:931)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:640)
at
com.ibm.mobilefirst.cachemanager.LufthansaCacheDelegateImpl.getSubscribedFlights(LufthansaCacheDelegateImpl.java:477)
at
com.ibm.mobilefirst.das.notification.impl.NotificationSubscriberServiceImpl.sendMessageUpdate(NotificationSubscriberServiceImpl.java:120)
at
com.ibm.mobilefirst.das.notification.impl.NotificationSubscriberServiceImpl.processAndSend(NotificationSubscriberServiceImpl.java:65)
at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at
com.google.common.eventbus.Subscriber.invokeSubscriberMethod(Subscriber.java:95)
at
com.google.common.eventbus.Subscriber$SynchronizedSubscriber.invokeSubscriberMethod(Subscriber.java:154)
at com.google.common.eventbus.Subscriber$1.run(Subscriber.java:80)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:811)
Caused by: class
org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException:
Failed to map keys for cache (all partition nodes left the grid)
[topVer=AffinityTopologyVersion [topVer=13, minorTopVer=0], cache=users]

Please help us. We need to be able to keep running on a single node as well when required.

Thanks & Regards,
Venkat



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Question about cpu usage and disk read speed on activation stage.

2018-12-19 Thread yangjiajun
Hello.

Please see the logs and the configuration. Thanks.


Attachments:
example-default.xml
ignite-b3234010.rar



aealexsandrov wrote
> Hi,
> 
> Could you share your Ignite node configuration and logs from startup?
> 
> BR,
> Andrei
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite in container environments?

2018-12-19 Thread javadevmtl
Hi, from what I have tried and read, it seems that you need the host network
to run Ignite in Docker, and overlay networks cause issues...

I'm using DC/OS and I can technically maybe try the mesos deployment, but
not sure how new or old that framework is.

#1 the docs indicate to use:
libs\optional\ignite-mesos\ignite-mesos--jar-with-dependencies.jar
which doesn't exist in the downloaded zip of 2.7.0
#2 How do I know if it will work with DC/OS 1.11?

The next attempt that I got working is to use Docker with the DC/OS overlay
network. This works by using TCP discovery and setting 9.x.x.x addresses
from DC/OS. The thing is, with this you don't know which of the 9.x.x.x
addresses the container will be assigned, and I don't think we want to put
in the full list of 9.x.x.x addresses just on the chance that the nodes will
eventually ping the right ones.

The other option is to try to use a DC/OS load-balanced VIP, but the address
returned by the VIP is not the one the container sees, which is in the
9.x.x.x range...




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Optimizing Collocated Join

2018-12-19 Thread kellan
I haven't increased parallelism yet, but that's not a solution to my problem.

I was able to speed up the query by running a ComputeTask that distributes
work to the nodes in my cluster based on the affinity key parentS2CellId, and
then runs this local query for each matching parentS2CellId, s2CellId:

SELECT EventTheta.theta
FROM EventTheta
WHERE parentS2CellId = ?
  AND s2CellId BETWEEN ? AND ?
  AND eventDate BETWEEN ? AND ?
  AND eventHour BETWEEN ? AND ?;

With seven days of data in the database I get results back in about 750ms,
which is on target, but when I increase my data set size to thirty days and
run the same query (for both 7 days of data and 30 days of data), I'm up to
2-3s.

7 Days: 1913 ms => 8312 rows
30 Days: 1965 ms => 39038 rows

The query execution time seems to be growing at roughly O(n) not O(log(n))
time in relation to the size of the data set. I need to find a way to
preserve my affinity key (parentS2CellId), while growing out the size of my
data set. Is the problem with the order of the index, with the range queries
on the index or something else?
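For reference, the compute task boils down to something like the sketch below, assuming the job runs the query through the local SQL API on the node that owns the affinity key (the cache name, argument values and variable names are placeholders for the actual model):

    long parentS2CellId = 123L;                    // placeholder affinity key
    long s2Lo = 100L, s2Hi = 200L;                 // placeholder s2CellId range
    String dateLo = "2018-12-01", dateHi = "2018-12-07";
    int hourLo = 0, hourHi = 23;

    ignite.compute().affinityRun("EventThetaCache", parentS2CellId, () -> {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT theta FROM EventTheta " +
            "WHERE parentS2CellId = ? AND s2CellId BETWEEN ? AND ? " +
            "AND eventDate BETWEEN ? AND ? AND eventHour BETWEEN ? AND ?");
        qry.setArgs(parentS2CellId, s2Lo, s2Hi, dateLo, dateHi, hourLo, hourHi);
        qry.setLocal(true); // only scan the data owned by this node

        Ignition.localIgnite().cache("EventThetaCache").query(qry).getAll();
    });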





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Dealing with changing class definitions in Ignite

2018-12-19 Thread Ilya Kasnacheev
Hello!

I have found that you can avoid the compute job type incompatibility problem
(please see the history below) by setting

.setMarshaller(new OptimizedMarshaller())

in your Ignite configuration on all nodes.
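For clarity, a minimal sketch of that configuration (assuming OptimizedMarshaller is importable in your Ignite version):

    IgniteConfiguration cfg = new IgniteConfiguration()
        .setMarshaller(new OptimizedMarshaller());

    Ignition.start(cfg);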

However, it is not clear at all why this is needed. Can anybody help? Why
are compute jobs processed by the capricious BinaryMarshaller?

Regards,
-- 
Ilya Kasnacheev


Wed, Dec 19, 2018 at 02:29, Gert Dubois :

> Hey Ilya,
>
> I opened a ticket on the ignite Jira about it.
> https://issues.apache.org/jira/browse/IGNITE-10717
>
> I attached a zip file containing a maven project with sample code that
> reproduces our issue. Reproducing the issue is rather easy though
>
> 1. Have a client + server, with peer class loading enabled
> 2. Create a simple class that implements IgniteRunnable with some class
> field
> 3. Execute an instance of this class on the ignite cluster
> 4. Change the type of the field
> 5. Recompile and execute again. Now it breaks because the class can't be
> serialized using the binary marshaller.
>
> Code to execute these steps is included in the maven project.
>
> On Tue, Dec 18, 2018 at 12:04 PM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Can you show an example of "Ignite Runnables conflicting on the Binary
>> Marshaller"? As a small code snippet perhaps?
>>
>> Maybe I could recommend something but I lack understanding of your use
>> case.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Dec 17, 2018 at 18:12, Gert Dubois :
>>
>>> Thanks for the reply.
>>>
>>> Regarding everything related to the cache, we understand the current
>>> architecture; in our code base we'll probably treat everything cache-related
>>> the same as schema migrations in a DB (migration scripts, etc.).
>>> Our real issue is with Ignite Runnables conflicting on the Binary
>>> Marshaller. Every class that gets executed on Ignite as a job gets a stored
>>> definition, this means we can't refactor classes that get used internally
>>> by our Ignite Runnables, because doing so would prevent the updated code
>>> from running. Even worse, if we update our libraries and they changed class
>>> definitions we might run into the same issue, without changing a letter in
>>> our own code. From the documentation it looked like Deployment Mode could
>>> provide a solution for this issue but the Binary Marshaller seems to run
>>> completely separate from the Deployment Mode.
>>>
>>> On Mon, Dec 17, 2018 at 3:18 PM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 As far as my understanding goes:
 You can't peer class load your Key/Value types.
 You also can't redeploy your Key/Value types.
 They even survive node restart via WORKDIR/marshaller directory, and
 come back to haunt you.

 There are plans to maybe ease some of those limitations in 3.0, but
 nothing concrete yet. It is not a bug but rather a pillar of the current
 Ignite architecture. You will have to route around it, such as introducing
 new fields instead of changing types, and maybe avoid having those types on
 server nodes at all, relying on BinaryObject instead.

 Regards,
 --
 Ilya Kasnacheev


 Mon, Dec 17, 2018 at 15:38, Gert Dubois :

> The issue is still present in 2.7.
> I added a ticket on Jira with sample code that reproduces the issue.
>
> https://issues.apache.org/jira/browse/IGNITE-10717
>
> For now I think we can work around the issue by overriding the default
> BinaryNameMapper, but this feels rather hacky to me.
>
> On Mon, Dec 17, 2018 at 11:54 AM Denis Mekhanikov <
> dmekhani...@gmail.com> wrote:
>
>> Gert,
>>
>> Could you check if the problem with a deployment mode reproduces on
>> Ignite 2.7?
>> If it does, please file a ticket with an explanation and a reproducer
>> to https://issues.apache.org/jira/
>>
>> Thanks!
>> Denis
>>
>>
>> Mon, Dec 17, 2018 at 12:12, Gert Dubois :
>>
>>> I investigated the issue further and narrowed the issue down to the
>>> Binary Marshaller not working as expected given the configured 
>>> Deployment
>>> Mode. When forcing my clients to use unique class names in the metadata 
>>> of
>>> the Binary Marshaller (I forced this by overriding the global
>>> BinaryNameMapper and appending a per-client unique suffix to every class
>>> name) the Deployment Mode behaves as expected. I assume the Binary
>>> Marshaller keeps a cluster wide state of the Metadata of classes and it
>>> merges it whenever we serialize a class on a node (regardless of the
>>> configured Deployment Mode).
>>> Why is the behaviour of the Binary Marshaller not consistent with
>>> the way the Deployment Mode works? Is there a cleaner way to solve this,
>>> besides overriding the BinaryNameMapper?
>>>
>>> On Fri, Dec 14, 2018 at 1:07 PM Gert Dubois <
>>> 

RE: I encountered a problem when restarting ignite

2018-12-19 Thread Stanislav Lukyanov
Hi,

Sending a SIGQUIT signal forces the JVM to print a thread dump to its stdout:
kill -3 <pid>

Stan

From: Justin Ji
Sent: December 13, 2018, 5:20
To: user@ignite.apache.org
Subject: Re: I encountered a problem when restarting ignite

Akurbanov - 

Thanks for your reply!
I have tried to dump the thread stacks, but I don't know how to do that from
a Docker container, since it only contains a simplified JRE and does not have
the jstack tool. I also googled a lot of information and found no suitable
method.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Effective way to pre-load data around 10 TB

2018-12-19 Thread Stanislav Lukyanov
The problem might be that the HDD is not performing fast enough and is also
suffering from random reads
(IgniteCache::preloadPartition at least tries to read sequentially).

Also, do you have enough RAM to store all data? If not, you shouldn’t preload 
all the data, just the amount that fits into RAM.

Anyway, I think that your best chance is to implement the same thing 
https://issues.apache.org/jira/browse/IGNITE-8873 does.
E.g. you can try to backport the commit on top of 2.6.
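If you do backport it, usage would look roughly like the sketch below (this assumes the backported IgniteCache#preloadPartition method is available; "myCache" stands for the real cache name):

    IgniteCache<Object, Object> cache = ignite.cache("myCache");
    int parts = ignite.affinity("myCache").partitions();

    // Sequentially warm every partition's pages from disk into the data region.
    for (int p = 0; p < parts; p++)
        cache.preloadPartition(p);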

Stan

From: Naveen
Sent: December 5, 2018, 7:59
To: user@ignite.apache.org
Subject: RE: Effective way to pre-load data around 10 TB

Thanks Stan, this may take a little longer to implement, and we are in a
hurry to build this data-preloading functionality.

Can someone advise me on how to improve this pre-load process?

This is how we are preloading. 

1. Send an async request for all the partitions with the code below; this
loop is repeated for all the caches we have:

for (int i = 0; i < affinity.partitions(); i++) {
    List<String> cacheList = Arrays.asList(cacheName);

    affinityRunAsync = compute.affinityRunAsync(cacheList, i,
        new DataPreloadTask(cacheList, i));
}

2. Inside DataPreloadTask, which runs on the Ignite node, I just execute a
scan query for the given partition and iterate through the cursor, not
doing anything else:


IgniteCache<Object, Object> igniteCache = localIgnite.cache(cacheName);

try (QueryCursor<Cache.Entry<Object, Object>> cursor = igniteCache.query(
        new ScanQuery<>().setPartition(partitionNo))) {

    for (Cache.Entry<Object, Object> entry : cursor) {
        // Iterating is enough to pull the entry's pages from disk.
    }
}

However, this seems to be quite slow, taking more than 3 hours to read one
cache which has 400 M records. We have 30 such caches to load, so I am not
finding this very efficient.

Can we improve this? We do have very powerful machines with 128 CPUs, 2 TB
RAM and HDDs, and our CPU utilization is not very high while we are
preloading the data.
Will changing the thread pool size have any impact on this read?

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Optimizing Collocated Join

2018-12-19 Thread Ilya Kasnacheev
Hello!

The main document for query optimization is
https://apacheignite-sql.readme.io/docs/performance-and-debugging

For example, have you tried increasing queryParallelism on your caches?

Regards,
-- 
Ilya Kasnacheev


Tue, Dec 18, 2018 at 20:53, kellan :

> Hi, I'm not sure how I could make this more efficient. I'm already joining
> on
> every column in the key. If I try to change the ordering so that EVENTDATE
> and EVENTHOUR precede the affinity key (PARENTS2CELLID), then the query
> optimizer selects only the affinity key for the join.
>
> It seems like unless I want to partition by time, which doesn't fit my use
> case, my query times for any number of rows is going to grow as I add more
> time-series data.
>
> This doesn't seem like a very big data set to me, and with plenty of
> compute at my disposal, it seems like Ignite is performing well below the
> level of a simpler database solution like Postgres.
>
> Can you point me to a resource that covers indexing and query optimizing in
> Ignite? I need to find a way to return a result set like this in under a
> second without having to worry too much about the size of the data set.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: Question about add new nodes to ignite cluster.

2018-12-19 Thread Stanislav Lukyanov
Well, in short - it does, don’t worry about it :)

Unfortunately I’m not aware of a proper design document explaining the process 
in detail.
But simply put, Ignite will wait for the new node to obtain all of the data it 
needs to store.
While that’s happening, the node doesn’t serve any requests.
When all data is transferred, Ignite will route the new requests to the new 
node, and start 
removing the transferred data from the old nodes.

Stan

From: Justin Ji
Sent: December 3, 2018, 5:26
To: user@ignite.apache.org
Subject: Re: Question about add new nodes to ignite cluster.

Another question:

How do the client APIs get or put data to the rebalancing cluster (async mode)
when adding a new node to the cluster: from the old nodes or the new node?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Failed to read data from remote connection

2018-12-19 Thread Stanislav Lukyanov
“OOME: Direct buffer memory” means that MaxDirectMemorySize is too small.
Set a larger MaxDirectMemorySize value (for example, via the
-XX:MaxDirectMemorySize JVM argument).

Stan

From: wangsan
Sent: December 18, 2018, 5:08
To: user@ignite.apache.org
Subject: Re: Failed to read data from remote connection

Now the cluster has 100+ nodes. When the 'Start check connection process'
happens,
some nodes throw an OOM with 'Direct buffer memory' (Java NIO).
When connections are checked, many NIO sockets are created, and then the OOM
happens?

How can I fix the OOM other than by setting a larger Xmx?

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Did anyone write custom Affinity function?

2018-12-19 Thread Stanislav Lukyanov
You could write a custom affinity function, and some people do, but as far as I 
can see you don’t need it.
You just chose a poor affinity key.

You need to have MANY affinity keys, many more than there are partitions, and
have MANY partitions, many more than nodes.
That will make sure that the default affinity function distributes data properly.
But more importantly that will make sure that your system will scale well.
If you have a number of groups equal to the number of nodes, then you can't
just increase the number of nodes to scale; you need
to change your data model as well. To scale properly you need to have your
data model work with a different number of nodes.
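For illustration, a minimal sketch of such a key class (all names below are placeholders); the important part is that the @AffinityKeyMapped field has far more distinct values than there are partitions:

    import org.apache.ignite.cache.affinity.AffinityKeyMapped;

    public class RecordKey {
        private long recordId;

        /** Many distinct group ids spread records across many partitions and nodes. */
        @AffinityKeyMapped
        private long groupId;

        public RecordKey(long recordId, long groupId) {
            this.recordId = recordId;
            this.groupId = groupId;
        }
        // A real key class would also need equals()/hashCode().
    }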

FYI, Ignite used to have a different affinity function that always distributed
partitions evenly.
It had some issues and was eventually replaced and removed, although people do
try to bring it back from time to time.
See 
http://apache-ignite-developers.2346864.n4.nabble.com/Resurrect-FairAffinityFunction-td19987.html

Thanks,
Stan

From: ashishb008
Sent: December 19, 2018, 9:09
To: user@ignite.apache.org
Subject: Re: Did anyone write custom Affinity function?

Yeah, we were planning to increase the number of group IDs.
Did anybody write a custom affinity function? If one has already been written,
it would be helpful to us.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



How Ignite transfer cached data between nodes in the cluster

2018-12-19 Thread vyhc...@hotmail.com
I am trying to understand the logic and find the source code for how Ignite
retrieves cached data when the data is cached on a different node in the
cluster.

As an example, say the cluster has 3 nodes: A, B and C. A data retrieval
request is sent from node A, and the data is cached on node C. I found that
Ignite uses GridIoManager and TcpCommunicationSpi with NIO to transfer
messages, but what I couldn't find and am trying to understand is how the
cached data gets transferred from node C to A. Can anyone point me to the
source code for storing/retrieving cached data between nodes in the Ignite
cluster? Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Register listeners before joining the cluster

2018-12-19 Thread aealexsandrov
Hi,

Yes, I understand. You can also contribute this new event to Ignite:

1) Create a Jira issue - https://issues.apache.org/jira
2) Prepare the patch in a separate branch named ignite-X (where X is
the number of the JIRA ticket)
3) Create a pull request in GitHub against the master branch -
https://github.com/apache/ignite
4) Use this tool - https://mtcga.gridgain.com/ -> inspect contribution ->
trigger build
5) When the test run is done, check the report, and if there are no issues,
post a comment in Jira.
6) Create a new thread on the development list
(http://apache-ignite-developers.2346864.n4.nabble.com/) and ask for a review
and merge of your change.

Or just file a feature request and wait until it is processed. But that will
take some time.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Scoped vs Singleton ignite client node in .net web api

2018-12-19 Thread Pavel Tupitsyn
Replied on StackOverflow
https://stackoverflow.com/questions/53842387/scoped-vs-singleton-ignite-client-node-in-net-web-api/

In short, use a singleton, because the "thick" client is heavy and thread-safe.

On Wed, Dec 19, 2018 at 5:28 PM aealexsandrov 
wrote:

> Hi,
>
> Generally, you should have a working Ignite cluster somewhere that will
> store and process your requests.
>
> I am not sure that you really require the client node in your case. It will
> use additional memory and CPU.
>
> You can work with an Ignite cluster in several ways without a fat client
> node:
>
> 1) Thin clients (ODBC/JDBC) - you may keep the connection open, but it could
> be closed, so you can try to use a connection pool.
>
> https://apacheignite.readme.io/docs/thin-clients
>
> 2) REST - just open a connection and do some operations. It could also time
> out, so you could open a new connection for every update.
>
> BR,
> Andrei
>
> https://apacheignite.readme.io/docs/rest-api
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Register listeners before joining the cluster

2018-12-19 Thread Lukas Polacek
Hi,
the suggestion to use BEFORE_NODE_START doesn't work, I'm getting
org.apache.ignite.IgniteCheckedException: Grid is in invalid state to
perform this operation. It either not started yet or has already being or
have stopped [igniteInstanceName=Ignite IDKit Server, state=STARTING]

I got it working by creating a new lifecycle event BEFORE_CLUSTER_JOIN and
calling "notifyLifecycleBeans(BEFORE_CLUSTER_JOIN)" at
https://github.com/apache/ignite/blob/7d4e1fd118845f6e14638520ac10881deadd570c/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L1075-L1075
(slightly further in the code than I suggested previously).

I'll try your suggestion to use PluginProvider next.

On Wed, Dec 19, 2018 at 2:38 PM aealexsandrov 
wrote:

> Hi,
>
> It depends on what you are going to achieve:
>
> If you are just going to collect some information, then you can use:
>
> https://apacheignite.readme.io/docs/ignite-life-cycle
>
> BEFORE_NODE_START
>
> Invoked before Ignite node startup routine is initiated.
>
> If you are going to make a decision about whether a node should be joined
> or not, you can use the following method (in a separate plugin):
>
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/plugin/PluginProvider.html#validateNewNode-org.apache.ignite.cluster.ClusterNode-
>
> BR,
> Andrei
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Scoped vs Singleton ignite client node in .net web api

2018-12-19 Thread aealexsandrov
Hi,

Generally, you should have a working Ignite cluster somewhere that will
store and process your requests.

I am not sure that you really require the client node in your case. It will
use additional memory and CPU.

You can work with an Ignite cluster in several ways without a fat client
node:

1) Thin clients (ODBC/JDBC) - you may keep the connection open, but it could
be closed, so you can try to use a connection pool.

https://apacheignite.readme.io/docs/thin-clients

2) REST - just open a connection and do some operations. It could also time
out, so you could open a new connection for every update.

BR,
Andrei

https://apacheignite.readme.io/docs/rest-api
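For example, a minimal sketch of the Java thin client (there is also a .NET thin client; the address and cache name below are placeholders):

    ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

    try (IgniteClient client = Ignition.startClient(clientCfg)) {
        ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
        cache.put(1, "value");
    }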



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteCheckedException when write and read made by different clients

2018-12-19 Thread aealexsandrov
Hi,

I don't think so. However, you can try to use Integer instead of Enum values
and process them accordingly (it will require changes in the configuration).

What is the reason you can't move the class to the correct package?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Question about cpu usage and disk read speed on activation stage.

2018-12-19 Thread aealexsandrov
Hi,

Could you share your Ignite node configuration and logs from startup?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Register listeners before joining the cluster

2018-12-19 Thread aealexsandrov
Hi,

It depends on what you are going to achieve:

If you are just going to collect some information, then you can use:

https://apacheignite.readme.io/docs/ignite-life-cycle 

BEFORE_NODE_START

Invoked before Ignite node startup routine is initiated.
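For example, a minimal sketch of a lifecycle bean handling this event:

    IgniteConfiguration cfg = new IgniteConfiguration();

    cfg.setLifecycleBeans(new LifecycleBean() {
        @Override public void onLifecycleEvent(LifecycleEventType evt) {
            if (evt == LifecycleEventType.BEFORE_NODE_START) {
                // The node has not started (or joined the cluster) yet.
            }
        }
    });

    Ignition.start(cfg);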

If you are going to make a decision about whether a node should be joined
or not, you can use the following method (in a separate plugin):

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/plugin/PluginProvider.html#validateNewNode-org.apache.ignite.cluster.ClusterNode-

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Register listeners before joining the cluster

2018-12-19 Thread Lukas Polacek
Hi,
is there a way to register local listeners before a node joins the cluster?

The documentation says to register local listeners through
"ignite.events().localListen(...)"; however, that can only be done once, e.g.,
after "ignite = Ignition.start(cfg)" has been called. At that point the node
might have already joined the cluster, so we might have missed events in
the meantime.

I'm thinking of adding a new lifecycle event type "BEFORE_CLUSTER_JOIN"
which I would be able to process via a Lifecycle Bean. The event would be
triggered using "notifyLifecycleBeans(BEFORE_CLUSTER_JOIN)" somewhere here
in the code:
https://github.com/apache/ignite/blob/7d4e1fd118845f6e14638520ac10881deadd570c/modules/core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L1051-L1051

Is there a better way without modifying Ignite code? Can I postpone cluster
joining? Or can I register the listeners at an earlier stage?


Re: Can i use SQL query and Cache Operations in same transaction (JTA)

2018-12-19 Thread Павлухин Иван
Hi Hyungbai,

Please be aware that MVCC is included in the Ignite release as a kind of
experimental feature, as stated in the release notes [1].
I do not know the plans for providing releases with MVCC fixes, but
perhaps you can try the nightly builds [2] once the related ticket [3] is
resolved. I believe it can be resolved in the near future.

[1] https://ignite.apache.org/releases/2.7.0/release_notes.html
[2] https://ignite.apache.org/download.cgi#nightly-builds
[3] https://issues.apache.org/jira/browse/IGNITE-10685
Mon, Dec 17, 2018 at 05:06, Hyungbai :
>
> Thank you for the reply.
>
> I think it is absolutely necessary when using JTA.
> I hope it will be patched soon.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Copying My Service classes to the lib folder of already started nodes (not via visor)

2018-12-19 Thread Zaheer
Thank you very much for the replies.

Regards
Zaheer



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Copying My Service classes to the lib folder of already started nodes (not via visor)

2018-12-19 Thread Zaheer
Thank you very much for the replies Denis and Ilya

Regards
Zaheer



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to benchmark IgniteDataStreamer?

2018-12-19 Thread Ilya Kasnacheev
Hello!

You will need to benchmark batches, of course. IgniteDataStreamer has
flush() which is synchronous.
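For example, a rough benchmarking sketch that times the load including the final flush (the cache name and entry count are arbitrary):

    long start = System.nanoTime();

    try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
        for (int i = 0; i < 1_000_000; i++)
            streamer.addData(i, "value-" + i);

        streamer.flush(); // blocks until all buffered batches are actually loaded
    }

    long tookMs = (System.nanoTime() - start) / 1_000_000;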

Regards,
-- 
Ilya Kasnacheev


Wed, Dec 19, 2018 at 09:33, ashishb008 :

> Hello,
>
> IgniteDataStreamer is async, so how will you find the exact time?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite native persistence and replicated caches - should it even be possible?

2018-12-19 Thread kimec.ethome.sk

Ilya, thank you very much for your response!

Kamil Mišúth

On 2018-12-17 15:13, Ilya Kasnacheev wrote:

Hello!

I'm not completely sure but persistent REPLICATED cache is the same as
PARTITIONED with MAXINT backups.

It means that every node will have a copy of data, but it has to be in
BLT to be used.

Regards,

--

Ilya Kasnacheev

Mon, Dec 17, 2018 at 14:46, kimec.ethome.sk [1]
:


Hi all,

Could somebody confirm my conclusion below?

It seems it is possible to declare a REPLICATED cache configuration for
caches that are mapped to a data region backed by the native persistence
layer.
Ignite does not complain about this configuration and boots happily.

Yet, after cluster restart, during runtime, the cache behaves as if it
was PARTITIONED - since the configuration says REPLICATED, Ignite will
not attempt to reload the data from the node actually owning them (a
node that has the data stored on disk from before the cluster restart).

The net effect is that a node that is not a member of the baseline topology
will report that the cache contains no data for a given key, even though
persisted data actually does exist in the cluster (but on a separate node).

The documentation is not very clear on whether REPLICATED caches are
supported by native persistence or not, but reading between the lines [1],
I guess the only supported use case for native persistence is a
PARTITIONED cache.

If that is so, I would expect the node declaring such a cache configuration
to fail fast during startup. Or maybe the documentation should state this
more clearly. It is not very intuitive, to say the least.

Anyway, could somebody kindly confirm my suspicion? Thank you!

Kamil Mišúth

[1]
https://apacheignite.readme.io/v2.7/docs/distributed-persistent-store#section-overview


Links:
--
[1] http://kimec.ethome.sk


Re: deep learning over apache ignite

2018-12-19 Thread dmitrievanthony
Yes, in TensorFlow on Apache Ignite we support distributed learning as you
described it (please see the details in this documentation).

Speaking about performance, TensorFlow supports distributed learning itself
(please see the details here). But to start distributed learning in pure
TensorFlow you need to set up the cluster manually, manually distribute
training data between cluster nodes
and handle node failures.

In TensorFlow on Apache Ignite we do it for you automatically. Apache Ignite
plays cluster manager role, it starts and maintains TensorFlow cluster with
optimal configuration and handles node failures. At the same time, the
training is fully performed by TensorFlow anyway. So, the training
performance is absolutely equal to the case when you use pure TensorFlow
with proper manually configured and started TensorFlow cluster because we
don't participate in the training process when the cluster is running
properly.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/