Re: Getting Invalid state exception when Persistence is enabled.

2018-02-20 Thread Prasad Bhalerao
Hi Slava,
Thank you for the solution.

Can you please help me with the following question?

I am loading the cache from an Oracle table using the loadCache method. If
persistence is enabled and the data has already been persisted, I want to
make sure that the cache is loaded from the persisted data instead of being
reloaded from the Oracle table via loadCache. Can someone please advise how
this can be achieved?


Regards,
Prasad

On Feb 20, 2018 11:49 PM, "slava.koptilin" wrote:

Hi Prasad,

The root cause of the IllegalStateException you observed is that the Ignite
instance is created within the IgniteSpringBean#afterSingletonsInstantiated()
method, which is triggered by the Spring Framework.
So, you should not call the ignite.active(true) method here:
@Bean
public IgniteSpringBean igniteInstance() {
    IgniteSpringBean ignite = new IgniteSpringBean();

    // Please do not call active() here:
    // the Ignite instance is not initialized yet.
    //ignite.active(true);

    ignite.setConfiguration(getIgniteConfiguration());

    return ignite;
}

One possible workaround is to use an Ignite LifecycleBean [1]:

// Lifecycle bean that activates the cluster.
public class MyLifecycleBean implements LifecycleBean {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START) {
            ignite.active(true);
        }
    }
}

// Provide the lifecycle bean to the configuration.
private IgniteConfiguration getIgniteConfiguration() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setLifecycleBeans(new MyLifecycleBean());
    ...
    return cfg;
}

[1]
https://apacheignite.readme.io/docs/ignite-life-cycle#section-lifecyclebean

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite with Spring Cache on K8S, eviction problem

2018-02-20 Thread vkulichenko
Can you try to reproduce the issue in a smaller project that you would be
able to share? Honestly, the issue is absolutely not clear to me.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


tuning near cache performance

2018-02-20 Thread scottmf
Hi,
In one of our services we need a shared cache across multiple JVMs.
Currently these objects are exposed to the application logic via a
local HashMap and synchronized via a custom mechanism. We'd like to replace
this mechanism with Ignite, where the service cluster nodes would become
Ignite clients. The current idea is to use an Ignite cluster cache with a
near cache, and then throw away the custom cross-node synchronization
logic. The cache is very read-heavy with a small volume of writes over time.

When I switched the mechanism over to Ignite I found that the near cache
needs some tuning. The first thing I noticed was the serialization /
deserialization bottleneck. I found a post from Val that said to turn off
copyOnRead. That alleviated the major bottleneck I saw, but the
performance is still slower than using a local HashMap. Although I
understand that it may never reach the performance of a local HashMap, I was
wondering what other tuning tips I should know about in order to speed up
our reads from the near cache.
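
For reference, the relevant part of my setup looks roughly like this (a
minimal sketch; the cache name and key/value types are illustrative, and
with copyOnRead disabled callers must treat returned values as read-only):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheSetup {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("readHeavyCache");
        // Skip the defensive deserialized copy on every read.
        cfg.setCopyOnRead(false);

        // Keep hot entries in a near cache on this node.
        NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg, nearCfg);

        cache.put(1, "value");
        System.out.println(cache.get(1));
    }
}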

Currently I see this (more or less) as the bottleneck: 

org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetFuture.init
  --> org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetFuture.map
    --> GridIteratorAdapter.next
    --> org.apache.ignite.internal.processors.cache.distributed.near.GridNearGetFuture.map
      --> GridCacheAdapter.peekEx
      --> GridCacheEvictionManager.touch
      --> GridNearGetFuture.addResult

Are there other knobs I can turn in order to tune this to better serve
my workload?

thanks, 
Scott



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node failed to join cluster with error related to : Memory configuration mismatch (fix configuration or set -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property) [rmtNodeId=47e76683-

2018-02-20 Thread vkulichenko
The default page size in Ignite 2.1 was 2048 bytes (2K); in 2.3 it was
increased to 4096 (4K). Since your storage was created on 2.1 with a page
size of 2K, you need to restore it with the same size. To achieve this,
explicitly set the DataStorageConfiguration#pageSize property to 2048 when
starting Ignite 2.3.
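
For example, something like this (a minimal sketch; merge it into your
existing configuration code):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

private IgniteConfiguration configFor21Storage() {
    IgniteConfiguration cfg = new IgniteConfiguration();

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    // Must match the page size the persistence files were created with
    // (2K for storage created on Ignite 2.1).
    storageCfg.setPageSize(2048);
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

    cfg.setDataStorageConfiguration(storageCfg);
    return cfg;
}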

More details about memory configuration here:
https://apacheignite.readme.io/docs/memory-configuration

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-20 Thread Dave Harvey
I've started reproducing this issue with more statistics. I have not
reached the worst performance point yet, but some things are starting to
become clearer:

The DataStreamer hashes the affinity key to a partition, then maps the
partition to a node, and fills a single buffer at a time for that node. A
DataStreamer thread on the node therefore gets a buffer's worth of requests
grouped by the time of the addData() call, with no per-thread grouping by
affinity key (as I had originally assumed).

The test I was running used a large amount of data where the average
number of keys per unique affinity key is 3, with some outliers up to
50K. One of the caches being updated in the optimistic transaction in the
StreamReceiver contains an object whose key is the affinity key and whose
contents are the set of keys that have that affinity key. We expect some
temporal locality for objects with the same affinity key.

We had a number of worker threads on a client node but only one data
streamer, for which we increased the buffer count. Once we understood how
the data streamer actually worked, we gave each worker its own DataStreamer
(see the sketch below). This way, each worker could issue a flush without
affecting the other workers. That, in turn, allowed us to use smaller
batches per worker, decreasing the odds of temporal locality.
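
Roughly, the per-worker arrangement looks like this (a sketch with an
illustrative cache name and key/value types, not our actual code):

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamWorker implements Runnable {
    private final Ignite ignite;
    private final Map<Long, byte[]> batch; // illustrative key/value types

    StreamWorker(Ignite ignite, Map<Long, byte[]> batch) {
        this.ignite = ignite;
        this.batch = batch;
    }

    @Override public void run() {
        // One streamer per worker: flush() below blocks only this worker.
        try (IgniteDataStreamer<Long, byte[]> streamer = ignite.dataStreamer("myCache")) {
            streamer.perNodeBufferSize(512); // smaller per-worker batches
            batch.forEach(streamer::addData);
            streamer.flush();
        }
    }
}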

So it seems we would get updates for the same affinity key on different
data streamer threads, and they could conflict when updating the common
record. The more keys per affinity key, the more likely a conflict and the
more data that would need to be saved. A flush operation could stall
multiple workers, and the flush might depend on requests that are
conflicting.

We chose OPTIMISTIC transactions because of their lack-of-deadlock
characteristics, rather than because we thought there would be high
contention. I do think this behavior suggests something sub-optimal
about the OPTIMISTIC lock implementation, because I see a dramatic decrease
in throughput but not a dramatic increase in transaction restarts.
"In OPTIMISTIC transactions, entry locks are acquired on primary nodes
during the prepare step" does not say anything about the order in which
locks are acquired. Sorting the locks so there is a consistent order would
avoid deadlocks.
If there are no deadlocks, then there could be n-1 restarts of the
transaction for each commit, where n is the number of data streamer threads.
This is the old "thundering herd" problem, which can easily be made order n
by only allowing one of the waiting threads to proceed at a time.
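
One client-side mitigation (a sketch with illustrative cache and key
types; it does not fix the lock implementation itself) is to restart a
conflicting transaction with randomized backoff, so the waiting threads
don't all stampede the contended record at once:

import java.util.concurrent.ThreadLocalRandom;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;
import org.apache.ignite.transactions.TransactionOptimisticException;

public class BackoffRetry {
    // Increment a contended counter, retrying on optimistic conflicts.
    static void updateWithRetry(Ignite ignite, IgniteCache<Long, Long> cache, long key)
        throws InterruptedException {
        while (true) {
            try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
                Long v = cache.get(key);
                cache.put(key, v == null ? 1L : v + 1L);
                tx.commit();
                return;
            }
            catch (TransactionOptimisticException e) {
                // Randomized sleep spreads out the restarts.
                Thread.sleep(ThreadLocalRandom.current().nextLong(1, 50));
            }
        }
    }
}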
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite 2.4 status

2018-02-20 Thread slava.koptilin
Hi Paolo,

I think that Apache Ignite 2.4 will be released soon.
You can track the status and find details on the dev list:
http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-2-4-release-tc26031.html

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite 2.4 status

2018-02-20 Thread Paolo Di Tommaso
Hi folks,

I was wondering what the status of Ignite 2.4 is. Is there any planned
release date?

The need to support Java 9 is becoming a priority.


Cheers,
Paolo


Re: Large durable caches

2018-02-20 Thread lawrencefinn
Should I decrease these? One other thing to note: I'm monitoring GC, and
the GC times do not correlate with these issues (GC times are pretty low
anyway). I honestly think that persisting to disk somehow causes things to
freeze up. Could it be an AWS-related issue? I'm using EBS io1 volumes with
20,000 IOPS, one volume for persistence and one for the WAL.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issues trying to force redeployment in shared mode

2018-02-20 Thread Dave Harvey
I've done some additional testing. By shutting down another (the last)
client node that was running independent code, I was able to purge the bad
version of my code from the servers while leaving the userVersion at "0".
Apparently in this case the client nodes are "master" nodes. (The
deployment modes documentation uses the terms "master" and "workers" without
defining them, so you are left to guess which nodes are masters and how they
become one. Because they sent a closure? Because some class was actually
loaded from them?)

Running from Eclipse with a local vanilla 2.3 Docker image as a single
server, changing the userVersion in ignite.xml from "0" to "1" on the client
causes the error: "Caused by: class
org.apache.ignite.IgniteDeploymentException: Task was not deployed or was
redeployed since task execution", even if the server was just restarted.
It starts working again if the userVersion changes back to "0".

That is, changing the userVersion on the client causes the client to be
unable to talk to the server. Since the server caches the userVersion on
first access, there doesn't seem to be a path to get the client's code to
redeploy except by shutting down all clients or by shutting down all
servers.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting Invalid state exception when Persistence is enabled.

2018-02-20 Thread slava.koptilin
Hi Prasad,

The root cause of the IllegalStateException you observed is that the Ignite
instance is created within the IgniteSpringBean#afterSingletonsInstantiated()
method, which is triggered by the Spring Framework.
So, you should not call the ignite.active(true) method here:
@Bean
public IgniteSpringBean igniteInstance() {
    IgniteSpringBean ignite = new IgniteSpringBean();

    // Please do not call active() here:
    // the Ignite instance is not initialized yet.
    //ignite.active(true);

    ignite.setConfiguration(getIgniteConfiguration());

    return ignite;
}

One possible workaround is to use an Ignite LifecycleBean [1]:

// Lifecycle bean that activates the cluster.
public class MyLifecycleBean implements LifecycleBean {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START) {
            ignite.active(true);
        }
    }
}

// Provide the lifecycle bean to the configuration.
private IgniteConfiguration getIgniteConfiguration() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setLifecycleBeans(new MyLifecycleBean());
    ...
    return cfg;
}

[1]
https://apacheignite.readme.io/docs/ignite-life-cycle#section-lifecyclebean

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to monitor page eviction ?

2018-02-20 Thread Ivan Rakov

Sergey,

Pages can't be evicted unless you configure a data page eviction mode (see
DataPageEvictionMode). If page eviction is disabled and you still miss
your data, the problem is caused by something else.


Regarding a log message about page eviction starting: there's a ticket
about it, but unfortunately it's stuck in the Patch Available state:
https://issues.apache.org/jira/browse/IGNITE-5151

I'll find a committer who'll merge it to master.
If you still want to know whether eviction happened, I can suggest a
workaround. Try calling
org.apache.ignite.internal.processors.cache.persistence.DataRegionMetricsImpl#getTotalAllocatedSize.
If it shows more than 90% of your data region's maximum size
(DataRegionConfiguration#maxSize), page eviction should have started.
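
Alternatively, a rough sketch using the public metrics API (this assumes
DataRegionConfiguration#setMetricsEnabled(true) on the region, and it only
approximates the allocated size from the page count):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;

public class EvictionCheck {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("ignite.xml"); // your config file

        DataStorageConfiguration dsCfg = ignite.configuration().getDataStorageConfiguration();
        int pageSize = (dsCfg == null || dsCfg.getPageSize() <= 0)
            ? DataStorageConfiguration.DFLT_PAGE_SIZE : dsCfg.getPageSize();

        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            long allocatedBytes = m.getTotalAllocatedPages() * (long) pageSize;
            // If this approaches ~90% of DataRegionConfiguration#maxSize,
            // page eviction has likely started.
            System.out.printf("Region %s: ~%d MB allocated%n",
                m.getName(), allocatedBytes / (1024 * 1024));
        }
    }
}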


Best Regards,
Ivan Rakov

On 20.02.2018 18:34, Sergey Bezrukov wrote:

Hi,

we use Ignite as a data grid and want to store data in the cache forever,
as we, for sure, have enough RAM to fit it in. These data are just
dictionaries, fewer than 1M records each.

Our deployment for now is 3 server nodes with several caches, each
configured in "replicated" mode. There is one "master-service" for each
particular cache, which is solely responsible for cache update/load, and
tens of read-only "consumers". We use the standard off-heap configuration
without persistence.

According to the docs, there is no expiry policy configured by default,
so I assumed we're limited by RAM only. But today I found some data
missing, and the only explanation I have for now is that those pages
were evicted for some reason. Is there any way to find out when and why
page eviction takes place for a certain cache? Can we log it on the
server nodes for future analysis?

Thanks.
--
Sergey Bezrukov




Re: Ignite Streamer usage in UriDeploymentSpi

2018-02-20 Thread daivanov
Hi, Slava. Thanks for your answer.
Yes, it looks similar, but I think there are some differences.

Because I use UriDeploymentSpi, I assume that I will have the proper
classloader on each node for my executed ComputeTask.
In the future I will remove peer class loading, because I load classes on
each node via UriDeploymentSpi and the user classes are located on each
node.

So my main problem is how to get the proper classloader on the receiver
side for a deployed GAR unit when I have it on the streamer.

And one more question: in which case should the IgniteDataStreamer
deployClass method be used, and why does it take only one argument?

Dmitry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to monitor page eviction ?

2018-02-20 Thread Sergey Bezrukov
Hi,

we use Ignite as a data grid and want to store data in the cache forever,
as we, for sure, have enough RAM to fit it in. These data are just
dictionaries, fewer than 1M records each.

Our deployment for now is 3 server nodes with several caches, each
configured in "replicated" mode. There is one "master-service" for each
particular cache, which is solely responsible for cache update/load, and
tens of read-only "consumers". We use the standard off-heap configuration
without persistence.

According to the docs, there is no expiry policy configured by default,
so I assumed we're limited by RAM only. But today I found some data
missing, and the only explanation I have for now is that those pages
were evicted for some reason. Is there any way to find out when and why
page eviction takes place for a certain cache? Can we log it on the
server nodes for future analysis?

Thanks.
--
Sergey Bezrukov


Re: Ignite Streamer usage in UriDeploymentSpi

2018-02-20 Thread slava.koptilin
Hi Dmitry,

It looks like the issue you described is similar to the following:
https://issues.apache.org/jira/browse/IGNITE-3935
The fix will be available in Apache Ignite 2.4.
In the meantime, could you please check your case with the latest code from
the master branch?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Save the date: ApacheCon North America, September 24-27 in Montréal

2018-02-20 Thread Rich Bowen

Dear Apache Enthusiast,

(You’re receiving this message because you’re subscribed to a user@ or 
dev@ list of one or more Apache Software Foundation projects.)


We’re pleased to announce the upcoming ApacheCon [1] in Montréal, 
September 24-27. This event is all about you — the Apache project community.


We’ll have four tracks of technical content this time, as well as lots 
of opportunities to connect with your project community, hack on the 
code, and learn about other related (and unrelated!) projects across the 
foundation.


The Call For Papers (CFP) [2] and registration are now open. Register 
early to take advantage of the early bird prices and secure your place 
at the event hotel.


Important dates
March 30: CFP closes
April 20: CFP notifications sent
August 24: Hotel room block closes (please do not wait until the last minute)


Follow @ApacheCon on Twitter to be the first to hear announcements about 
keynotes, the schedule, evening events, and everything you can expect to 
see at the event.


See you in Montréal!

Sincerely, Rich Bowen, V.P. Events,
on behalf of the entire ApacheCon team

[1] http://www.apachecon.com/acna18
[2] https://cfp.apachecon.com/conference.html?apachecon-north-america-2018


Getting Invalid state exception when Persistence is enabled.

2018-02-20 Thread Prasad Bhalerao
Hi,

I am starting an Ignite node in server mode in IntelliJ. I am starting only
one instance of it. I am using IgniteSpringBean to set the configuration and
start the node as shown below. But when I enable persistence, I get the
following exception:

Caused by: java.lang.IllegalStateException: Ignite is in invalid state to
perform this operation. It either not started yet or has already being or
have stopped [ignite=null, cfg=null]

As per the docs, IgniteSpringBean is responsible for starting Ignite. So how
do I set the node to the active state in this case?

Also, I am loading the cache from an Oracle table using the loadCache
method. If persistence is enabled and the data has already been persisted, I
want to make sure that the cache is loaded from the persisted data instead
of being reloaded from the Oracle table via loadCache. Can someone please
advise how this can be achieved?

Code to configure Ignite and the cache:

@Bean
public IgniteSpringBean igniteInstance() {
    IgniteSpringBean ignite = new IgniteSpringBean();
    ignite.active(true); // <-- the call in question
    ignite.setConfiguration(getIgniteConfiguration());

    return ignite;
}

private IgniteConfiguration getIgniteConfiguration() {
    String HOST = "127.0.0.1:47500..47509";
    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    ipFinder.setAddresses(Collections.singletonList(HOST));

    TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    discoSpi.setIpFinder(ipFinder);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setDiscoverySpi(discoSpi);
    cfg.setIgniteInstanceName("springDataNode");
    cfg.setPeerClassLoadingEnabled(false);
    cfg.setRebalanceThreadPoolSize(4);

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
    cfg.setDataStorageConfiguration(storageCfg);

    CacheConfiguration ipv4RangeCacheCfg = new CacheConfiguration<>("IPV4RangeCache");
    ipv4RangeCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    ipv4RangeCacheCfg.setWriteThrough(false);
    ipv4RangeCacheCfg.setReadThrough(false);
    ipv4RangeCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
    ipv4RangeCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    ipv4RangeCacheCfg.setBackups(1);

    Factory storeFactory = FactoryBuilder.factoryOf(IPV4RangeCacheDataLoader.class);
    ipv4RangeCacheCfg.setCacheStoreFactory(storeFactory);

    cfg.setCacheConfiguration(ipv4RangeCacheCfg);
    return cfg;
}


Thanks,
Prasad


[Bug-7688] DDL does not work properly on SQL queries

2018-02-20 Thread Muratcan Tuksal
Hi,
We are facing an issue which causes data loss. Details are in the JIRA issue
indicated below. How can we proceed in order to solve this persistence
problem?

Thanks,

https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-7688


Muratcan TUKSAL



Ignite Streamer usage in UriDeploymentSpi

2018-02-20 Thread Dmitriy Ivanov
Hi. I use UriDeploymentSpi to deploy user code and ComputeTasks.

Simple ComputeTasks work well, but in a ComputeTask that uses a data
streamer I get a ClassNotFoundException.

There is user transformation code in the application which should be
applied to the data on each node, so I put the transform execution into a
StreamReceiver. Each transform is a user Java class which was deployed
along with the ComputeTask in one GAR file.

In the StreamReceiver constructor or readResolve method I try to get an
instance of the user transform class, but I get a ClassNotFoundException.
I try to get the instance this way:

try {
    return (Transform) Class.forName(transform, true,
        Thread.currentThread().getContextClassLoader()).newInstance();
}
catch (ClassNotFoundException e) {
    return (Transform) Class.forName(transform).newInstance();
}


This is my stack trace:

SEVERE: Failed to unmarshal message
[nodeId=493d-03cd-4ccb-85e5-3c4b17793d7f, req=DataStreamerRequest
[reqId=77, cacheName=ODS_CDR.MSC_CDR, ignoreDepOwnership=true,
skipStore=false, keepBinary=true, depMode=SHARED,
sampleClsName=wf.LoadMscCdr.ETLTransform, userVer=0, ldrParticipants=null,
clsLdrId=019b403b161-493d-03cd-4ccb-85e5-3c4b17793d7f,
forceLocDep=false, topVer=AffinityTopologyVersion [topVer=5,
minorTopVer=2], partId=-2147483648]]
class org.apache.ignite.IgniteCheckedException: Failed to deserialize
object [typeName=com.gridfore.dmp.etl.receivers.impl.DataBinaryReceiver]
    at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9859)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:289)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:59)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:89)
    at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
    at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
    at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
    at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
    at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
deserialize object
[typeName=com.gridfore.dmp.etl.receivers.impl.DataBinaryReceiver]
    at org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:874)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
    at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:310)
    at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:99)
    at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
    at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9853)
    ... 9 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
execute readResolve() method on
com.gridfore.dmp.etl.receivers.impl.DataBinaryReceiver@3a40ad0e
    at org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:866)
    ... 15 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:855)
    ... 15 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException:
wf.LoadMscCdr.ETLTransform
    at com.gridfore.dmp.workflow.job.TaskAdapter.getTransform(TaskAdapter.java:18)
    at com.gridfore.dmp.workflow.job.source.SourceTaskAdapter.getTransform(SourceTaskAdapter.java:127)
    at com.gridfore.dmp.etl.transform.CompileAllTransforms.compile(CompileAllTransforms.java:28)
    at com.gridfore.dmp.etl.receivers.BinaryReceiver.initTransforms(BinaryReceiver.java:47)
    at com.gridfore.dmp.etl.receivers.BinaryReceiver.readResolve(BinaryReceiver.java:60)
    ... 19 more
Caused by: java.lang.ClassNotFoundException: wf.LoadMscCdr.ETLTransform
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at com.gridfore.dmp.workflow.job.TaskAdapter.getTransform(TaskAdapter.java:15)
    ... 23 more


I try to use Ig

Re: Unable to identify root cause for node failure

2018-02-20 Thread Mikhail
Hi,

First, Ignite will take all the space that is configured in a data region.
If there's more data than the data region can store, it will write the data
to disk. However, this applies to off-heap memory only; Ignite doesn't use
the heap for data storage (at least by default).
How to configure the data region size is described here:
https://apacheignite.readme.io/docs/memory-configuration

So you need to sum the JVM heap size, the off-heap size, and memory for the
OS and tools, and this sum must be < the total memory your box has.
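
For example, a minimal sketch of capping the default data region (the 4 GB
figure is illustrative only):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

private IgniteConfiguration sizedConfiguration() {
    IgniteConfiguration cfg = new IgniteConfiguration();

    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    // Off-heap cap for the default data region; keep heap + off-heap +
    // OS overhead below the physical RAM of the box.
    storageCfg.getDefaultDataRegionConfiguration().setMaxSize(4L * 1024 * 1024 * 1024);

    cfg.setDataStorageConfiguration(storageCfg);
    return cfg;
}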

Thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/