What happens when a client gets disconnected

2019-07-31 Thread Matt Nohelty
Sorry for the long delay in responding to this issue.  I will work on
replicating this issue in a more controlled test environment and try to
grab thread dumps from there.

In a previous post you mentioned that the blocking in this thread dump
should only happen when a data node is affected, which is usually a server
node, and you also said that near cache consistency is maintained
continuously.  If we have near caching enabled, does that mean clients
become data nodes?  If so, does that explain why we are seeing blocking
when a client crashes or hangs?

Assuming this is related to near caching, is there any configuration to
adjust this behavior to give us availability over perfect consistency?
Having a failure on one client ripple across the entire system and
effectively take down all other clients of that cluster is a major problem.
We obviously want to avoid problems like an OOM error or a big GC pause in
the client application but if these things happen we need to be able to
absorb these gracefully and limit the blast radius to just that client
node.
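
For reference, near caching on a thick client is typically enabled roughly
like this (a minimal sketch; the cache name and key/value types are
placeholders, not details from this thread):

// Sketch: start a client node and attach a near cache to an existing cache.
Ignite client = Ignition.start(new IgniteConfiguration().setClientMode(true));
IgniteCache<Integer, String> cache =
    client.getOrCreateNearCache("myCache", new NearCacheConfiguration<>());
// Reads served from the client-side near cache still have to be kept
// consistent with the servers, which is the coupling asked about above.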


Re: What happens when a client gets disconnected

2019-04-23 Thread Matt Nohelty
What period of time are you asking about?  We deploy fairly regularly so
our application servers (i.e. the Ignite clients) get restarted at least
weekly which will trigger a disconnect and reconnect event for each.  We
have not noticed any issues during our regular release process but in this
case we are shutting down the Ignite clients gracefully with Ignite#close.
However, it's also possible that something bad happens on an application
server, causing it to crash.  This is the scenario where we've seen
blocking across the cluster.  We'd obviously like our application servers
to be as independent of one another as possible and it's problematic if an
issue on one server is allowed to ripple across all of them.

I should have mentioned it in my initial post but we are currently using
version 2.4.  I received the following response on my Stack Overflow post:
"When topology changes, partition map exchange is triggered internally. It
blocks all operations on the cluster. Also in old versions ongoing
rebalancing was cancelled. But in the latest versions client
connection/disconnection doesn't affect some processes like this. So, it's
worth trying the most fresh release"

This comment also mentions PME, so it sounds like you both are referencing
the same behavior.  However, this comment also states that client
connect/disconnect events do not trigger PME in more recent versions
of Ignite.  Can anyone confirm that this is true, and if so, in which
version was this change made?

Thank you very much for the help.

On Tue, Apr 23, 2019 at 10:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> What's the period of time?
>
> When client disconnects, topology will change, which will trigger waiting
> for PME, which will delay all further operations until PME is finished.
>
> Avoid having short-lived clients.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, Apr 23, 2019 at 03:40, Matt Nohelty :
>
>> I already posted this question to stack overflow here
>> https://stackoverflow.com/questions/55801760/what-happens-in-apache-ignite-when-a-client-gets-disconnected
>> but this mailing list is probably more appropriate.
>>
>> We use Apache Ignite for caching and are seeing some unexpected behavior
>> across all of the clients of cluster when one of the clients fails. The
>> Ignite cluster itself has three servers and there are approximately 12
>> servers connecting to that cluster as clients. The cluster has persistence
>> disabled and many of the caches have near caching enabled.
>>
>> What we are seeing is that when one of the clients fail (out of memory,
>> high CPU, network connectivity, etc.), threads on all the other clients
>> block for a period of time. During these times, the Ignite servers
>> themselves seem fine but I see things like the following in the logs:
>>
>> Topology snapshot [ver=123, servers=3, clients=11, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
>> Topology snapshot [ver=124, servers=3, clients=10, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
>>
>> The topology itself is clearly changing when a client
>> connects/disconnects but is there anything happening internally inside the
>> cluster that could cause blocking on other clients? I would expect
>> re-balancing of data when a server disconnects but not a client.
>>
>> From a thread dump, I see many threads stuck in the following state:
>>
>> java.lang.Thread.State: TIMED_WAITING (parking)
>> at sun.misc.Unsafe.park(Native Method)
>> - parking to wait for <0x00078a86ff18> (a java.util.concurrent.CountDownLatch$Sync)
>> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
>> at org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7452)
>> at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.awaitAllReplies(GridReduceQueryExecutor.java:1056)
>> at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:733)
>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1339)
>> at org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1403)
>> at org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
>> at java.lang.Iterable.forEach(Iterable.java:74)...
>>
>> Any ideas, suggestions, or further avenues to investigate would be much
>> appreciated.
>>
>


What happens when a client gets disconnected

2019-04-22 Thread Matt Nohelty
I already posted this question to stack overflow here
https://stackoverflow.com/questions/55801760/what-happens-in-apache-ignite-when-a-client-gets-disconnected
but this mailing list is probably more appropriate.

We use Apache Ignite for caching and are seeing some unexpected behavior
across all of the clients of cluster when one of the clients fails. The
Ignite cluster itself has three servers and there are approximately 12
servers connecting to that cluster as clients. The cluster has persistence
disabled and many of the caches have near caching enabled.

What we are seeing is that when one of the clients fail (out of memory,
high CPU, network connectivity, etc.), threads on all the other clients
block for a period of time. During these times, the Ignite servers
themselves seem fine but I see things like the following in the logs:

Topology snapshot [ver=123, servers=3, clients=11, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
Topology snapshot [ver=124, servers=3, clients=10, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]

The topology itself is clearly changing when a client connects/disconnects
but is there anything happening internally inside the cluster that could
cause blocking on other clients? I would expect re-balancing of data when a
server disconnects but not a client.

From a thread dump, I see many threads stuck in the following state:

java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00078a86ff18> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7452)
at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.awaitAllReplies(GridReduceQueryExecutor.java:1056)
at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:733)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1339)
at org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1403)
at org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
at java.lang.Iterable.forEach(Iterable.java:74)...

Any ideas, suggestions, or further avenues to investigate would be much
appreciated.


Re: HELLO WORLD GA EXAMPLE

2018-12-07 Thread matt egler
I guess, but why isn't it showing up in Zeppelin?

On Dec 7, 2018 2:52 AM, "Ilya Kasnacheev"  wrote:

Hello!

I don't see any errors here, looks like this example has finished
successfully.

Regards,

-- 
Ilya Kasnacheev


Fri, Dec 7, 2018 at 07:07, AlphaMufasaOmega :

> Dear @Zaleslaw could you please provide working links to those examples?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Delay queue or similar?

2018-10-16 Thread matt
Ok will try that. Cheers!
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Delay queue or similar?

2018-10-10 Thread matt
Thanks for the feedback, Ilya!

In your example, where would the initial "host" in "long time =
cache.get(host);" come from? In the case I need to solve for, I would not
know what host would be most suitable to make a request to, so would need to
continuously loop over all available keys until the crawl is done. This may
introduce a performance hit, if (for example) the only host that is ready
for a request is the last one in a very large list of keys. Does that make
sense? Apologies if I'm misunderstanding!

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Delay queue or similar?

2018-10-09 Thread matt
Hi,

I'm working on prototyping a web crawler using Ignite as the crawl-db. I'd
like to ensure the crawler obeys the appropriate Crawl-Delay time as set in
a site's robots.txt file - the way I have this set up now is by submitting
"candidates" to an Ignite cache. A local listener is set up to receive
successfully persisted items, which then submits the items to a queue for a
fetcher to pull from.

Goal: Support a delay time + maximum fetch concurrency, per-host, per-item.

Put another way: "for each fetch item, ensure that requests made to the
associated host are delayed as required, and no more than n-requests are
made during each delayed run".

This could be modeled as a Map, or maybe even by using a
ScheduledExecutorService where each task represents a host and is repeated
according to the delay time.

I'd like to prevent items from being put into the Java work queue if they
are not yet ready to be fetched, and I'm slightly worried about the
potential number of hosts (in reference to the Java Map data structure).

So my question is: is there something that Ignite can provide for making
this all work?
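
A minimal sketch of the ScheduledExecutorService idea above, assuming the
per-host Crawl-Delay is already known; fetchNextBatch() is a hypothetical
helper, not an Ignite API:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PerHostScheduler {
    private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(4);

    /** Repeat a fetch task for one host, honoring its Crawl-Delay. */
    public void scheduleHost(String host, long crawlDelayMillis, int maxConcurrent) {
        scheduler.scheduleWithFixedDelay(
            () -> fetchNextBatch(host, maxConcurrent),
            0, crawlDelayMillis, TimeUnit.MILLISECONDS);
    }

    private void fetchNextBatch(String host, int maxConcurrent) {
        // Placeholder: pull up to maxConcurrent ready candidates for this
        // host from the crawl-db and hand them to the fetcher.
    }
}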

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite File System (igfs) spillover to disk

2018-06-29 Thread matt
Is it possible to have the IGFS component write to disk once heap/off-heap
consumption hits a certain threshold? I have a custom cache store for one
component of the app, but a different component requires temporary storage
of raw data; we're using igfs for this, but what happens if the file size is
much larger than the available RAM? Can igfs be configured to use the
(relatively new) native "DataStorage" feature to spillover from RAM to disk,
but then also have a specific cache use a custom store?
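
For reference, a persistence-enabled data region in the 2.3+ API looks
roughly like the sketch below; the region name and size are arbitrary, and
whether IGFS data caches can be placed in such a region is exactly the open
question here:

// Sketch: a data region backed by native persistence. Data is written
// through to disk and RAM holds the hot subset, so datasets larger than
// RAM are possible for caches assigned to this region.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("spillover-region");
regionCfg.setMaxSize(4L * 1024 * 1024 * 1024); // 4 GB of RAM
regionCfg.setPersistenceEnabled(true);
storageCfg.setDataRegionConfigurations(regionCfg);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);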

Thanks
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Advice for updating large cache

2018-06-28 Thread matt
Hi,

We have a service, which is essentially a file system crawler, and we're
using Ignite to store the overall state of the job. The state is represented
by simple objects with fields like ID, Name, Path, and State. The State
field is either "Candidate" or "Document". A Candidate is metadata only, and
is inserted before the content of the file is actually fetched. Once a
Candidate is stored in Ignite, we then send it back to the crawler for it to
read the file and send back the content. Once we receive the content, we
update the State field for the item to Document.

We'd like to be able to support stopping the crawl before it finishes, and
then on the next start, pickup where we left off. This essentially means
crawling all Candidates, but skipping the Documents. This is all straight
forward.

The case that gets tricky is when the second crawl finishes: we'd like to
then have the option of re-evaluating everything on the next crawl. We could
do this by sending everything to the crawler. The problem is that if _this_
crawl is then stopped before finishing, the state of the items becomes
ambiguous: items that were not crawled have their previous state stuck at
Document, and the items that were crawled also have their new state set from
Document to Document. This means that re-starting the job causes everything
to be re-crawled.

Obviously this approach is flawed. So we tried the simplest thing we could
think of as a solution: at the end of a job that has finished (and not
manually stopped), update the state of every item in the cache back to
Candidate. And this does the trick. Unfortunately, it is slow - we have a
custom cache store, which may or may not be the bottleneck. While it is
simple, this is indeed a brute-force solution.

So I'm wondering if there's something in Ignite that could help? Or if
anyone has dealt with this kind of problem before and can offer ideas for a
better way?
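
For what it's worth, the brute-force reset can at least be batched rather
than done entry by entry; a sketch, where CrawlItem and its state accessor
are placeholders for the objects described above:

// Sketch: scan the cache and flip Document -> Candidate in batches of 1024
// using an EntryProcessor, so each batch is one round trip per node.
IgniteCache<String, CrawlItem> cache = ignite.cache("crawl-db");

CacheEntryProcessor<String, CrawlItem, Void> reset = (entry, args) -> {
    CrawlItem item = entry.getValue();
    item.setState("Candidate");
    entry.setValue(item); // write-through/behind still applies per entry
    return null;
};

Set<String> batch = new HashSet<>();
for (Cache.Entry<String, CrawlItem> e : cache.query(new ScanQuery<String, CrawlItem>())) {
    batch.add(e.getKey());
    if (batch.size() == 1024) {
        cache.invokeAll(batch, reset);
        batch = new HashSet<>();
    }
}
if (!batch.isEmpty())
    cache.invokeAll(batch, reset);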

Thanks!
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Leftover Ignite threads after closing cache

2018-06-28 Thread matt
Good to know, thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Leftover Ignite threads after closing cache

2018-06-27 Thread matt
Hi,

I'm using an Ignite ContinuousQuery for processing local cache events:

continuousQuery.setLocalListener(new CacheEntryListener<>(myHandler));

I have a "job" that modifies the cache (insert/updates/deletes) and the code
in the callback essentially takes items and enqueues them for further
processing by another thread. After the "job" finishes, I close the cache,
but even after several minutes I see several of these in my thread dump:

"callback-#70%conn-rpc-ignite%@11381" prio=5 tid=0xcc nid=NA waiting
  java.lang.Thread.State: WAITING
  at sun.misc.Unsafe.park(Unsafe.java:-1)
  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
  at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
  at
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  at
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
  at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
  at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

My Ignite name is "conn-rpc-ignite", which you can see in the thread name. I
also found a unit test that references of the "callback-" thread name prefix
here:

https://github.com/apache/ignite/blob/f2f82f09b35368f25e136c9fce5e7f2198a91171/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryFactoryAsyncFilterRandomOperationTest.java

and here:

https://github.com/apache/ignite/blob/9a4a145be514e650258715a7e682d427d5812d16/modules/core/src/test/java/org/apache/ignite/internal/processors/cache/query/continuous/CacheContinuousQueryAsyncFilterListenerTest.java

So my question is - should I be concerned about these threads hanging around
in WAITING status? Is there a way to clean them up, or are these just lazily
created and re-used later on?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Igfs - questions and optimal configuration settings

2018-06-27 Thread matt
Thanks Denis! That helps a lot. I'll dig into those settings and see if I get
my head around it all. I did notice that the default/max memory settings are
based off of system settings/resources, so I'll try the defaults too and see
what happens.

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Igfs - questions and optimal configuration settings

2018-06-26 Thread matt
Hi,

I've recently started using the Ignite FileSystem (igfs) in our API, to
fully buffer an incoming stream of byte[] values (chunked InputStream). I'm
doing this because that stream then needs to be sent along to another remote
service, and I'd like the ability to retry without telling the sender to
send again. The thinking is that if this all gets "buffered" into Ignite,
then pulling the "file" out again and sending/retrying should be possible
and present no burden on the original sender. After the file has been
successfully sent, it is then deleted from Ignite -- this all seems to work,
however, is there a better way?

If this approach is a good one, I have questions on how to configure it. I
had to look around quite a bit to get a working configuration (version 2.3)
and even now, I'm not clear as to what is needed in order to get a good
configuration set up, based on environment/memory/hardware etc. Is it OK to
just use the default settings?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to stop explicitly write behind

2018-02-16 Thread matt
Thanks Alexey, that is exactly what we are doing now actually. We're hoping
to find a way to avoid this though, to ensure our nodes aren't doing any
unnecessary work, as well as avoiding unexpected results if the backend
system comes back up.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to explicitly stop write behind

2018-02-16 Thread matt
OK so digging a little further, I found GridCacheWriteBehindStore#stop() -
but still no idea how to get access to this.

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to stop explicitly write behind

2018-02-16 Thread matt
Hi,

I've got a CacheStoreAdapter that implements write behind, and there's a
case where the backend storage API can go away. And when this happens, I'd
like to have Ignite _not_ continue to call the write behind methods, and
instead, just stop. Otherwise, Ignite will completely hammer the
non-existent backend and logging goes crazy. So is there a way to tell
Ignite to stop calling the CacheStoreAdapter's write behind methods?
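
A common workaround, sketched below under the assumption that the
application maintains its own backend-health flag (this is not an Ignite
feature), is to short-circuit inside the store itself:

// Sketch: skip write-behind flushes while the backend is down. The flag
// must be driven by application health checks, and skipped entries are
// lost unless re-queued by application code. MyValue is a placeholder.
public class GuardedStore extends CacheStoreAdapter<String, MyValue> {
    private volatile boolean backendUp = true;

    public void markBackendDown() { backendUp = false; }
    public void markBackendUp()   { backendUp = true;  }

    @Override public void write(Cache.Entry<? extends String, ? extends MyValue> e) {
        if (!backendUp)
            return; // drop instead of hammering the dead backend
        // ... real write ...
    }

    @Override public void delete(Object key) {
        if (backendUp) { /* ... real delete ... */ }
    }

    @Override public MyValue load(String key) { /* ... real load ... */ return null; }
}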

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to handle Cache adapter failures?

2018-01-16 Thread matt
Hi,

Our application has a custom cache adapter, and we'd like to deal with
failures when attempting to execute one of the adapter methods (load,
deleteAll etc.) - if our application has one centralized node to coordinate
a job, how can we detect these failures happening on other nodes? And I
guess this would be for calls to invoke, but also for the asynchronous
write-through calls.

Thanks,
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: loadAll and removeAll from cache with custom store

2017-12-22 Thread matt
Hi,

The size in my store is about 9k. I do have loadAll() working now, so once
that completes, the cache has 9k items as well.

For deleteAll(), I had it work once where it called my adapter multiple
times with all of the expected keys. But then restarting my app and trying
again (after loading new data) it didn't work at all. Do I need to loadAll()
into my Ignite cache before I can delete everything from the backend store?

Here's my configuration:

String name = "foo";

CacheConfiguration cacheConfig = new CacheConfiguration();
cacheConfig.setName(name);
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(2);
cacheConfig.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);

cacheConfig.setWriteBehindEnabled(true);
cacheConfig.setWriteBehindBatchSize(512);
cacheConfig.setWriteBehindFlushSize(10240);
cacheConfig.setWriteBehindFlushFrequency(5_000);

cacheConfig.setCacheStoreFactory(new MyCacheStoreAdapter.MyCacheStoreFactory(name));
cacheConfig.setReadThrough(true);
cacheConfig.setWriteThrough(true);



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: loadAll and removeAll from cache with custom store

2017-12-21 Thread matt
For the removeAll() call, I do see that after I do a loadCache((k,v) -> true)
I can delete items from the store, but my store's deleteAll() only ever gets
1k items once, deletes them from the store, and then is never called again
to delete the rest of the items in the store.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


loadAll and removeAll from cache with custom store

2017-12-21 Thread matt
Hi,

For loading all data into Ignite before running a job, is loadAll
sufficient? How can I prevent running loadAll on subsequent job runs? Is
there a way to control whether or not the cache is loaded from the
read-through store if the data has already been loaded? I'm concerned about
job startup time in this case.

For removeAll, I'm seeing that my cache store adapter's deleteAll only ever
gets called once, with 1000 keys, which is the same size as my
setWriteBehindBatchSize - but my custom store (Apache Solr) has 4 times
that. I must be missing something here, but not sure what. What's the
best way for me to delete all items in the cache _and_ the backend store in
the most efficient way?

Thanks,
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Custom CacheStoreAdapter implementation - fields are null

2017-10-17 Thread matt
Yeah, that's definitely happening. 2 instances are being created. Is there a
workaround for this that you know of?

Thanks,
Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Custom CacheStoreAdapter implementation - fields are null

2017-10-16 Thread matt
Hi, thanks for the reply. I should've mentioned that I do use a factory, and
the http-client is passed into the factory, which is then set on a field.
When the create() method is called, I return a new CacheStoreAdapter along
with the client that's in the factory. At the time the create() method is
called, the http client is present and is properly passed to the
CacheStoreAdapter instance. But when my loadAll() method is called, it is
null.

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Custom CacheStoreAdapter implementation - fields are null

2017-10-16 Thread matt
Hi,

I've implemented a CacheStoreAdapter and am seeing that when Ignite starts
to use this class (loadAll, etc.) the fields that I set in my constructor
with values, are null when the methods are called. I realized there's
something I'm doing wrong in terms of how my CacheStoreAdapter is
serialized, but not sure what to do. The values passed into my
CacheStoreAdapter constructor, are arbitrary, but one includes an http
client and another is a basic Java class used for cache key/field mapping.
How can I make sure that my adapter has access to the objects it requires
when Ignite is calling on it?
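
The usual pattern is to keep only serializable configuration in the
factory and build the non-serializable objects inside create(), which runs
on each node that uses the store. A sketch, with MyCacheStore and
MyHttpClient as placeholders:

// Sketch: the factory carries serializable state only; the HTTP client is
// constructed node-locally in create(), never serialized.
public class MyStoreFactory implements Factory<MyCacheStore> {
    private final String serviceUrl; // serializable configuration only

    public MyStoreFactory(String serviceUrl) { this.serviceUrl = serviceUrl; }

    @Override public MyCacheStore create() {
        MyHttpClient client = new MyHttpClient(serviceUrl); // built on the node
        return new MyCacheStore(client);
    }
}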

Thanks,
Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


CacheStoreAdapter#loadAll and ContinuousQuery

2017-09-19 Thread matt
I've got an implementation of CacheStoreAdapter that appears to be working
(it's persisting items etc..). I also have a ContinuousQuery setup and an
initialQuery that runs after the impls loadAll(). Before I started using my
own impl of CacheStoreAdapter - the ContinuousQuery worked as expected, but
now with my impl, it's not. The initialQuery cursor actually doesn't ever
yield anything. Is there something I'm missing with making these two
components work together properly? Anything special I need to do with the
impl or config?

Thanks,
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQLQuery with simple Join returns no results

2017-09-01 Thread matt
OK thanks for that. So does that then mean that the key type (K) for my
Cache needs to be AffinityKey?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


SQLQuery with simple Join returns no results

2017-09-01 Thread matt
I have 2 caches defined, both with String keys, and classes that make use of
the Ignite annotations for indexes and affinity. I've got 3 different nodes
running, and the code I'm using to populate the cache w/test data works, and
I can see each node is updated with its share of the data. My index types
are set on the caches as well.

If I do a ScanQuery, I can see that all of the fields and IDs are correct,
Ignite returns them all. But when doing a SqlQuery, I get nothing back.
Ignite is not complaining about the query, it's just returning an empty
cursor.

If I remove the Join, results are returned.

So I'm wondering if this is related to the way I've set up my affinity
mapping. It's basically setup like the code below... and the query looks
like this:

"from B, A WHERE B.id = A.bID"

Any ideas on what I'm doing wrong here?

class A implements Serializable {
  @QuerySqlField(index = true)
  String id;
  
  @QuerySqlField(index = true)
  String bId;  

  @AffinityKeyMapped
  @QuerySqlField(index = true)
  String group;
}

class B implements Serializable {
  @QuerySqlField(index = true)
  String id;

  @AffinityKeyMapped
  @QuerySqlField(index = true)
  String group;
}
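
One way to test whether collocation is the problem: with @AffinityKeyMapped
on "group", the join on B.id = A.bId is only collocated when related rows
share a group, and the default collocated join silently drops rows that
live on different nodes. Enabling distributed joins, sketched below with a
placeholder cache name, should return the missing results if that is the
cause:

// Sketch: if rows appear once distributed joins are enabled, the data is
// not collocated on the join key.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select A.id, B.id from B, A where B.id = A.bId");
qry.setDistributedJoins(true); // correctness over speed, for diagnosis

try (QueryCursor<List<?>> cursor = ignite.cache("myCache").query(qry)) {
    cursor.forEach(System.out::println);
}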



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


affinityRun then invoke

2017-08-30 Thread matt
I'm using @AffinityKeyMapped to ensure that all items of a particular field
(the "parent" in my case) are processed on the same node. When I want to
process an entry, I'm essentially doing:

ignite.compute().affinityRun("my-cache", key, this::processEntry);

where processEntry does:

cache.invoke(key, (entry, args) -> {
  if (entry.exists()) { modify(entry); } else { create(entry); }
  return true;
});

Is this generally a valid way to deal with atomically updating entries
within a partitioned cache?

Thanks,
- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: How to configure off heap and "native" distributed persistence

2017-08-29 Thread matt
Thanks, got it working. This helped a lot too:
https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/persistentstore



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-off-heap-and-native-distributed-persistence-tp16488p16508.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to configure off heap and "native" distributed persistence

2017-08-29 Thread matt
Maybe I'm not understanding this correctly but doesn't Ignite support a
"native" persistence solution, where data is written to disk? If so, where
can I learn more about configuring this behavior? I'm expecting that once
enabled, a restart of the node would have the cache populated from what was
persisted previously. But I'm struggling to figure out how to configure
this. The examples in the documentation don't seem to show enough to
actually get something working.

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-off-heap-and-native-distributed-persistence-tp16488p16504.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: 1 processing node, n partition nodes

2017-08-29 Thread matt
Ah that makes sense! I'll try that out.

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/1-processing-node-n-partition-nodes-tp16485p16498.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to configure off heap and "native" distributed persistence

2017-08-29 Thread matt
Ok thank you, I was looking at CacheConfiguration and not
IgniteConfiguration.
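
For the 2.1 API discussed in this thread, a minimal sketch (note the
explicit activation step, which is easy to miss):

// Sketch for Ignite 2.1: persistence is configured on IgniteConfiguration,
// not CacheConfiguration, and a persistent cluster starts inactive.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

Ignite ignite = Ignition.start(cfg);
ignite.active(true); // activate the cluster before using caches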



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-off-heap-and-native-distributed-persistence-tp16488p16497.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to configure off heap and "native" distributed persistence

2017-08-29 Thread matt
OK thanks Evgenii. For the persistence, I'm trying the Java example here:
https://apacheignite.readme.io/v2.1/docs/distributed-persistent-store but
the setPersistentStoreConfiguration method does not exist on
CacheConfiguration. I'm on Ignite 2.1.0 - what am I doing wrong?

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-off-heap-and-native-distributed-persistence-tp16488p16492.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How to configure off heap and "native" distributed persistence

2017-08-29 Thread matt
I found doc references via Google search to
CacheConfiguration.setMemoryMode() but 2.1.0 does not seem to have this
method. How do I configure off heap mode?

And for the distributed persistence support:
https://apacheignite.readme.io/v2.1/docs/distributed-persistent-store - Do I
need an additional dependency for that? I don't see it in ignite-core 2.1.0.

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-off-heap-and-native-distributed-persistence-tp16488.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


1 processing node, n partition nodes

2017-08-29 Thread matt
Hi,

Can someone recommend what CacheConfiguration settings would be ideal for a
cluster that has 1 primary node processing data, while other nodes are there
to host partitions for the purpose of reducing the primary node memory
requirements? How can I control the size of the local cache per node,
regardless of whether it's the node doing the processing or not?

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/1-processing-node-n-partition-nodes-tp16485.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Reusing CacheStores connections?

2017-08-28 Thread Matt
Hi,

Following the documentation I came up with the following code, but now I
wonder if this is really the way to go with CacheStores.

https://gist.github.com/fdc613759d4d7a845631e0b71aafa559

Using a profiler I found out openConnection() is executed more than 1000
times; on this method alone my application spends 10% of the time.

Shouldn't Ignite be reusing the connections somehow? Any way to improve
this?

An example with better performance would be really helpful.
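
One built-in option is a store session listener, which scopes a connection
to a session (one operation or transaction) instead of opening one per
store call. A sketch, assuming a pooled JDBC DataSource (for example
HikariCP) is available as myPooledDataSource:

// Sketch: share pooled connections across store callbacks; the store then
// obtains the session connection via CacheStoreSession.attachment().
ccfg.setCacheStoreSessionListenerFactories(new Factory<CacheStoreSessionListener>() {
    @Override public CacheStoreSessionListener create() {
        CacheJdbcStoreSessionListener lsnr = new CacheJdbcStoreSessionListener();
        lsnr.setDataSource(myPooledDataSource);
        return lsnr;
    }
});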

Cheers,
Matt


Re: Ignite starting by itself when using javax.cache.Caching

2017-08-14 Thread matt
Hi, thanks for your help. I understand the problem now... exactly what you
said it was.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-starting-by-itself-when-using-javax-cache-Caching-tp15997p16174.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite starting by itself when using javax.cache.Caching

2017-08-04 Thread matt
I have the Ignite dependencies set in my project. I have some test code
that uses a non-Ignite JCache provider; none of the test code actually
touches Ignite or attempts to call code that does. As soon as I run
javax.cache.Caching.getCachingProvider(), Ignite starts up and prints the
normal Ignite welcome ASCII art. Why is it starting on its own like this?
How can I prevent this from happening?
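
If the Ignite dependency has to stay on the classpath, the provider can be
pinned explicitly rather than letting Caching pick whatever it discovers
first; a sketch, with the provider class name as a placeholder for whichever
non-Ignite JCache implementation the tests actually use:

// Sketch: request a specific provider instead of classpath discovery.
CachingProvider provider =
    Caching.getCachingProvider("com.example.SomeOtherCachingProvider");
CacheManager mgr = provider.getCacheManager();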

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-starting-by-itself-when-using-javax-cache-Caching-tp15997.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Compute Grid Questions

2017-07-18 Thread Matt
Got it. Thank you.

Matt

On Tue, Jul 18, 2017 at 8:21 PM, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Matt,
>
> undeployTask method is about task deployment [1], it's unrelated to the
> discussion.
>
> [1] https://apacheignite.readme.io/docs/deployment-spi
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Compute-Grid-Questions-tp14980p15091.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Compute Grid Questions

2017-07-18 Thread Matt
Hi Slava,

I didn't know flags were cleared by *Thread.sleep()*, that makes sense now.

Approach 1 in the code below works fine, but the second approach still
won't interrupt a task with a given name. I'm surely missing something else.

Is *ignite.compute().undeployTask(...)* supposed to work like this?

https://gist.github.com/anonymous/3891dfedf26bfcf3314215004e67daa8

Thanks,
Matt

On Tue, Jul 18, 2017 at 8:50 AM, slava.koptilin <slava.kopti...@gmail.com>
wrote:

> Hi Matt,
>
> It seems that your code is not quite correct.
> In accordance with the spec Thread.sleep(long millis) clears the
> interrupted
> status in case of InterruptedException is raised.
> Could you try the following implementation of IgniteRunnable?
>
> public static class MyTask implements IgniteRunnable {
> @Override public void run() {
> boolean isInterrupted = false;
>
> while (!isInterrupted) {
> isInterrupted = Thread.currentThread().isInterrupted();
>
> try {
> Thread.sleep(3000);
> }
> catch (InterruptedException e) {
> isInterrupted = true;
> }
>
> System.out.println("isInterrupted: " + isInterrupted);
> }
> }
> }
>
> Thanks,
> Slava.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Compute-Grid-Questions-tp14980p15055.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Compute Grid Questions

2017-07-17 Thread Matt
Hi Val,

I tried doing *future.cancel()* and *compute().undeployTask("foo")*, check
[1]. Are they equivalent in this case?

*Thread.interrupted()* is always false, so I'm guessing that's not the
correct flag I should be checking.

[1] https://gist.github.com/anonymous/0a8759e70eddab470f09dcb92644f3c7

Thanks,
Matt


On Mon, Jul 17, 2017 at 2:26 PM, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Hi Matt,
>
> 1. Each task or closure execution creates a session that has an ID. You can
> cast returned IgniteFuture to ComputeTaskFuture (unfortunately there is no
> other way now) and then use getTaskSession() method to get the session
> description. However, this information is available only on the node that
> executed the job, there is currently no way to cancel it from other client.
>
> 2. When job is cancelled, thread that is running it is interrupted. Job
> should check the interrupted flag and stop the execution if needed.
>
> 3. See #1. Having session ID, you can get a future for a task and then
> cancel it. But again, it's all local - this state is not shared across
> nodes.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Compute-Grid-Questions-tp14980p15015.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Compute Grid Questions

2017-07-16 Thread Matt
Hi all,

I want to run some code on a node based on an affinity key, for that I'm
using ignite.compute().affinityRunAsync(...) but I have a few questions
about this.

*1.* Is it possible to give every closure a name (kind of an id), so that
when I start a new client, I can somehow get the list of running closures
and stop/start them accordingly?

*2.* The closure I'm running never ends. I thought calling future.cancel()
would cancel it, but it's not the case, it keeps logging things on the
console. What's the proper way to stop the execution of a closure?

*3.* I've seen there's a ignite.compute().withName(...) and
ignite.compute().activeTaskFutures() method, but I've no idea how or when
to use them, especially since the futures don't have a name but a UUID.
Also, what exactly is a "task" in Ignite and how does it differ from a
closure (ie, IgniteRunnable)?

Cheers,
Matt


Queue read/write Store

2017-07-11 Thread matt
Is it possible to have the Ignite Queue data be persisted via some sort of
Store? I've looked at the docs and can't seem to find any info on how queues
(and Sets for that matter) relate to the CacheStore and read/write
through/behind - how does this work? Essentially, I'd like for the data in
the queue to be persisted to some persistence store, and then read back in
later if needed (restart etc.).
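
For reference, queues are created through CollectionConfiguration rather
than CacheConfiguration, and CollectionConfiguration notably exposes no
CacheStore or read/write through/behind settings; a sketch with arbitrary
name, capacity, and backups:

// Sketch: an unbounded queue with one backup. CollectionConfiguration has
// no store factory, unlike CacheConfiguration.
CollectionConfiguration colCfg = new CollectionConfiguration();
colCfg.setBackups(1);

IgniteQueue<String> queue = ignite.queue("crawl-queue", 0 /* unbounded */, colCfg);
queue.put("http://example.com/");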

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Queue-read-write-Store-tp14682.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignitevisor starting the node twice

2017-05-30 Thread Matt
Based on experience I would say that that "invisible node" is not
completely transparent to the rest of the nodes. I'm not quite sure when or
how, but I've seen it changes the behavior of the other nodes sometimes.

I haven't paid much attention to this (I will next time) but could it be
that it keeps some state even when all other nodes are down?

I'm guessing that it keeps alive classes that were loaded using peer class
loading (with default config), and when we relaunch some real nodes the
state of the grid is not completely pristine.

Not sure if that in particular is the case, but I've seen some weird things
when visor is running.

Matt

On Mon, May 29, 2017 at 11:18 PM, Alexey Kuznetsov <akuznet...@apache.org>
wrote:

> You need to open configuration with proper discovery.
> Usually that means - open same config you used to start your "real" nodes.
>
> On Tue, May 30, 2017 at 8:45 AM, I PVP <i...@hotmail.com> wrote:
>
>> Alexey,
>>
>> Thanks for answering.
>>
>> How do I make ignitevisorcmd.sh see my “real” node(s)?
>>
>> best,
>>
>> IPVP
>>
>> On May 29, 2017 at 10:39:41 PM, Alexey Kuznetsov (akuznet...@apache.org)
>> wrote:
>>
>> Hi, IPVP.
>>
>> Yes, ignitevisorcmd.sh starts an internal node in "daemon" mode.
>> This node is "invisible" to other nodes, does not have cache data, and
>> does not participate in compute task executions.
>>
>> See: IgniteConfiguration#setDaemon javadocs for more info.
>>
>> On Sat, May 27, 2017 at 9:19 PM, I PVP <i...@hotmail.com> wrote:
>>
>>> Does ignitevisorcmd.sh start the node or is it a management interface?
>>>
>>> Why does ignitevisorcmd.sh behave as if Ignite was not started and not
>>> see the cache that was created?
>>>
>>> ignite.sh starts fine. But, when I start ignitevisorcmd.sh and type the
>>> open command, it asks me for the configuration file, and even when given
>>> the same configuration file used to start Ignite, ignitevisor says
>>> "Ignite node started OK", shows 00:00:00 uptime, and the cache command
>>> says "(wrn): No caches found."
>>>
>>> Ignite is being started with "ignite.sh config/ignite-config.xml”
>>>
>>> ignite-config.xml has the following content:
>>>
>>> -
>>> <?xml version="1.0" encoding="UTF-8"?>
>>>
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>xsi:schemaLocation="
>>>http://www.springframework.org/schema/beans
>>>http://www.springframework.org/schema/beans/spring-beans.xsd">
>>>
>>> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>> ...
>>> </bean>
>>>
>>> </beans>
>>> -
>>>
>>> Thanks
>>>  IPVP
>>>
>>>
>>
>>
>> --
>> Alexey Kuznetsov
>>
>>
>
>
> --
> Alexey Kuznetsov
>


Re: Correct Way to Store Data

2017-05-26 Thread Matt
Err, I meant "[...] a different memory policy for different classes", not
for "different products".

On Fri, May 26, 2017 at 6:00 PM, Matt <dromitl...@gmail.com> wrote:

> I don't think that's correct.
>
> As far as I know, on Ignite it's fine to put more than one type on the
> same cache, because a cache is like a schema (in the relational db world)
> and not a table. So for each type on a cache, a different table on H2 is
> created. There's no need for additional logic to fetch different types from
> the same cache, because internally they live in a different and independent
> table each.
>
> If you save an object of class Foo and another one of class Bar inside
> cache MyCache, they would reside in "MyCache"."Foo" and "MyCache"."Bar"
> respectively.
>
> That's why a model like #2 may make more sense than #1. However, I agree
> with you that #2 would make it impossible to specify a different memory
> policy for different products, but that is not required in this case anyway.
>
> Matt
>
> On Fri, May 26, 2017 at 4:34 PM, Dmitry Pavlov <dpavlov@gmail.com>
> wrote:
>
>> Hi Matt,
>>
>>
>>
>> Ignite cache more or less corresponds to table from relational world.
>>
>>
>>
>> As for caches number: Both ways are possible. In relational world, by the
>> way, you also can place different business objects into one table, but you
>> will have to introduce additional type field.
>>
>>
>>
>> Similar for the cache, you can place different values into the same
>> cache, but it is on your own to provide additional logic to separate what
>> type of object was selected.
>>
>>
>>
>> Known benefit of having 1 cache to 1 business object type: you can do
>> fine grained tuning of cache quotes (memory policies), and other cache
>> parameters separately for each business object type.
>>
>>
>>
>> Hope this helps.
>>
>>
>>
>> Sincerely,
>>
>> Dmitriy Pavlov
>>
>>
>> Fri, May 26, 2017 at 22:03, Matt <dromitl...@gmail.com>:
>>
>>> Interesting, so #3 is not the way to go.
>>>
>>> What about #2? That would be the "relational database way of doing it",
>>> which is what Ignite uses behind the scene (H2). What's the disadvantage
>>> compared to #1?
>>>
>>> Thanks for sharing your insight.
>>>
>>> On Fri, May 26, 2017 at 11:28 AM, Ilya Lantukh <ilant...@gridgain.com>
>>> wrote:
>>>
>>>> Hi Matt,
>>>>
>>>> From what I've seen, the most commonly used approach is the one you
>>>> took: have caches associated with object classes. This approach is
>>>> efficient and completely corresponds to "the Ignite way".
>>>>
>>>> Having a separate cache for each product is definitely not a good idea,
>>>> especially if you have thousands of products and that number is going to
>>>> increase rapidly. Every cache requires additional memory to store its
>>>> internal data structures. In addition, you will have to perform dynamic
>>>> cache start when a new product is added, which is a relatively expensive
>>>> operation and causes grid to pause all other operations for some time.
>>>>
>>>> Hope this helps.
>>>>
>>>>
>>>> On Fri, May 26, 2017 at 10:51 AM, Matt <dromitl...@gmail.com> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> Right now I have a couple of caches associated with the kind of
>>>>> objects I store. For instance I have one cache for products, one for 
>>>>> sales,
>>>>> one for stats, etc. I use the id of the product as the affinity key in all
>>>>> cases.
>>>>>
>>>>> Some questions I have regarding this approach...
>>>>>
>>>>> *1.* I get the impression I'm not doing it "the Ignite way", since
>>>>> I'm only storing one kind of object (ie, objects of only one class) in 
>>>>> each
>>>>> cache. The approach I'm using is equivalent to having a PostgreSQL schema
>>>>> for products, another one for sales and a third for stats. Is that right?
>>>>>
>>>>> *2.* I believe it would make more sense to have only one cache (for
>>>>> instance, "analytics") and save all objects there (products, sales and
>>>>> stats). That would be equivalent to having one single scheme and inside it
>>>>> one table for each class I store. Right?
>>>>>
>>>>> *3.* Is there any problem in terms of performance or is it a bad
>>>>> practice to have one cache with all products and one cache per product 
>>>>> with
>>>>> all related objects to that particular product? I think some queries would
>>>>> run much faster that way since all objects in a certain cache are related
>>>>> to the same product, there is no need to filter by sales or stats with a
>>>>> certain product id.
>>>>>
>>>>> *4.* What's the best approach or which one is more commonly used?
>>>>>
>>>>> As a side note, in all 3 cases I'll use as the affinity key the id of
>>>>> the product, except for the "products" cache in #3, which would be stored
>>>>> in a single node. Also, right now I'm storing about 10k products but that
>>>>> number increases as clients arrive, so I expect the cardinality to
>>>>> increase rapidly.
>>>>>
>>>>> Cheers,
>>>>> Matt
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Ilya
>>>>
>>>
>>>
>


Re: Correct Way to Store Data

2017-05-26 Thread Matt
I don't think that's correct.

As far as I know, on Ignite it's fine to put more than one type on the same
cache, because a cache is like a schema (in the relational db world) and
not a table. So for each type on a cache, a different table on H2 is
created. There's no need for additional logic to fetch different types from
the same cache, because internally they live in a different and independent
table each.

If you save an object of class Foo and another one of class Bar inside
cache MyCache, they would reside in "MyCache"."Foo" and "MyCache"."Bar"
respectively.

That's why a model like #2 may make more sense than #1. However, I agree
with you that #2 would make it impossible to specify a different memory
policy for different products, but that is not required in this case anyway.
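
To make the #2 model concrete: one cache holding two indexed types ends up
as two SQL tables in that cache's schema. A sketch, reusing the Foo/Bar
placeholders from above:

// Sketch: one cache, two value types -> two SQL tables ("MyCache"."FOO"
// and "MyCache"."BAR" in H2 terms).
CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("MyCache");
ccfg.setIndexedTypes(
    String.class, Foo.class,  // key type, value type
    String.class, Bar.class);
IgniteCache<Object, Object> cache = ignite.getOrCreateCache(ccfg);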

Matt

On Fri, May 26, 2017 at 4:34 PM, Dmitry Pavlov <dpavlov@gmail.com>
wrote:

> Hi Matt,
>
>
>
> Ignite cache more or less corresponds to table from relational world.
>
>
>
> As for caches number: Both ways are possible. In relational world, by the
> way, you also can place different business objects into one table, but you
> will have to introduce additional type field.
>
>
>
> Similar for the cache, you can place different values into the same cache,
> but it is on your own to provide additional logic to separate what type of
> object was selected.
>
>
>
> Known benefit of having 1 cache to 1 business object type: you can do fine
> grained tuning of cache quotes (memory policies), and other cache
> parameters separately for each business object type.
>
>
>
> Hope this helps.
>
>
>
> Sincerely,
>
> Dmitriy Pavlov
>
>
> Fri, May 26, 2017 at 22:03, Matt <dromitl...@gmail.com>:
>
>> Interesting, so #3 is not the way to go.
>>
>> What about #2? That would be the "relational database way of doing it",
>> which is what Ignite uses behind the scene (H2). What's the disadvantage
>> compared to #1?
>>
>> Thanks for sharing your insight.
>>
>> On Fri, May 26, 2017 at 11:28 AM, Ilya Lantukh <ilant...@gridgain.com>
>> wrote:
>>
>>> Hi Matt,
>>>
>>> From what I've seen, the most commonly used approach is the one you
>>> took: have caches associated with object classes. This approach is
>>> efficient and completely corresponds to "the Ignite way".
>>>
>>> Having a separate cache for each product is definitely not a good idea,
>>> especially if you have thousands of products and that number is going to
> increase rapidly. Every cache requires additional memory to store its
>>> internal data structures. In addition, you will have to perform dynamic
>>> cache start when a new product is added, which is a relatively expensive
>>> operation and causes grid to pause all other operations for some time.
>>>
>>> Hope this helps.
>>>
>>>
>>> On Fri, May 26, 2017 at 10:51 AM, Matt <dromitl...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> Right now I have a couple of caches associated with the kind of objects
>>>> I store. For instance I have one cache for products, one for sales, one for
>>>> stats, etc. I use the id of the product as the affinity key in all cases.
>>>>
>>>> Some questions I have regarding this approach...
>>>>
>>>> *1.* I get the impression I'm not doing it "the Ignite way", since I'm
>>>> only storing one kind of object (ie, objects of only one class) in each
>>>> cache. The approach I'm using is equivalent to having a PostgreSQL schema
>>>> for products, another one for sales and a third for stats. Is that right?
>>>>
>>>> *2.* I believe it would make more sense to have only one cache (for
>>>> instance, "analytics") and save all objects there (products, sales and
>>>> stats). That would be equivalent to having one single scheme and inside it
>>>> one table for each class I store. Right?
>>>>
>>>> *3.* Is there any problem in terms of performance or is it a bad
>>>> practice to have one cache with all products and one cache per product with
>>>> all related objects to that particular product? I think some queries would
>>>> run much faster that way since all objects in a certain cache are related
>>>> to the same product, there is no need to filter by sales or stats with a
>>>> certain product id.
>>>>
>>>> *4.* What's the best approach or which one is more commonly used?
>>>>
>>>> As a side note, in all 3 cases I'll use as the affinity key the id of
>>>> the product, except for the "products" cache in #3, which would be stored
>>>> in a single node. Also, right now I'm storing about 10k products but that
>>>> number increases as clients arrive, so I expect the cardinality to
>>>> increase rapidly.
>>>>
>>>> Cheers,
>>>> Matt
>>>>
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Ilya
>>>
>>
>>


Re: Correct Way to Store Data

2017-05-26 Thread Matt
Interesting, so #3 is not the way to go.

What about #2? That would be the "relational database way of doing it",
which is what Ignite uses behind the scene (H2). What's the disadvantage
compared to #1?

Thanks for sharing your insight.

On Fri, May 26, 2017 at 11:28 AM, Ilya Lantukh <ilant...@gridgain.com>
wrote:

> Hi Matt,
>
> From what I've seen, the most commonly used approach is the one you took:
> have caches associated with object classes. This approach is efficient and
> completely corresponds to "the Ignite way".
>
> Having a separate cache for each product is definitely not a good idea,
> especially if you have thousands of products and that number is going to
> increase rapidly. Every cache requires additional memory to store its
> internal data structures. In addition, you will have to perform dynamic
> cache start when a new product is added, which is a relatively expensive
> operation and causes grid to pause all other operations for some time.
>
> Hope this helps.
>
>
> On Fri, May 26, 2017 at 10:51 AM, Matt <dromitl...@gmail.com> wrote:
>
>> Hello,
>>
>> Right now I have a couple of caches associated with the kind of objects I
>> store. For instance I have one cache for products, one for sales, one for
>> stats, etc. I use the id of the product as the affinity key in all cases.
>>
>> Some questions I have regarding this approach...
>>
>> *1.* I get the impression I'm not doing it "the Ignite way", since I'm
>> only storing one kind of object (ie, objects of only one class) in each
>> cache. The approach I'm using is equivalent to having a PostgreSQL schema
>> for products, another one for sales and a third for stats. Is that right?
>>
>> *2.* I believe it would make more sense to have only one cache (for
>> instance, "analytics") and save all objects there (products, sales and
>> stats). That would be equivalent to having one single scheme and inside it
>> one table for each class I store. Right?
>>
>> *3.* Is there any problem in terms of performance or is it a bad
>> practice to have one cache with all products and one cache per product with
>> all related objects to that particular product? I think some queries would
>> run much faster that way since all objects in a certain cache are related
>> to the same product, there is no need to filter by sales or stats with a
>> certain product id.
>>
>> *4.* What's the best approach or which one is more commonly used?
>>
>> As a side note, in all 3 cases I'll use as the affinity key the id of the
>> product, except for the "products" cache in #3, which would be stored in a
>> single node. Also, right now I'm storing about 10k products but that number
>> increases as clients arrive, so I expect the cardinality to increase
>> rapidly.
>>
>> Cheers,
>> Matt
>>
>
>
>
> --
> Best regards,
> Ilya
>


Correct Way to Store Data

2017-05-26 Thread Matt
Hello,

Right now I have a couple of caches associated with the kind of objects I
store. For instance I have one cache for products, one for sales, one for
stats, etc. I use the id of the product as the affinity key in all cases.

Some questions I have regarding this approach...

*1.* I get the impression I'm not doing it "the Ignite way", since I'm only
storing one kind of object (ie, objects of only one class) in each cache.
The approach I'm using is equivalent to having a PostgreSQL schema for
products, another one for sales and a third for stats. Is that right?

*2.* I believe it would make more sense to have only one cache (for
instance, "analytics") and save all objects there (products, sales and
stats). That would be equivalent to having one single scheme and inside it
one table for each class I store. Right?

*3.* Is there any problem in terms of performance or is it a bad practice
to have one cache with all products and one cache per product with all
related objects to that particular product? I think some queries would run
much faster that way since all objects in a certain cache are related to
the same product, there is no need to filter by sales or stats with a
certain product id.

*4.* What's the best approach or which one is more commonly used?

As a side note, in all 3 cases I'll use as the affinity key the id of the
product, except for the "products" cache in #3, which would be stored in a
single node. Also, right now I'm storing about 10k products but that number
increases as clients arrive, so I expect the cardinality to increase
rapidly.

Cheers,
Matt


Re: CacheStore's Performance Drops Dramatically - Why?

2017-05-04 Thread Matt
Thank you for opening the ticket.

On Thu, May 4, 2017 at 4:43 PM, Denis Magda <dma...@apache.org> wrote:

> Looks like the naming of ‘getWriteBehindFlushSize’ method is totally
> wrong. It confuses so many people. However, if we refer to the
> documentation of this method or look into the source code we will find out
> that it sets the maximum size of the write-behind queue/buffer on a single
> node. Once this size is reached data will be flushed to a storage in the
> sync mode.
>
> So, you need to set the flush size (maximum queue/buffer size) to a bigger
> value if you can’t keep up with updates and always switch to the sync mode.
>
> In any case, I’ve created a ticket to address both issues discussed here:
> https://issues.apache.org/jira/browse/IGNITE-5173
>
> Thanks for your patience.
>
> —
> Denis
>
> On May 3, 2017, at 10:10 AM, Jessie Lin <jessie.jianwei@gmail.com>
> wrote:
>
> I thought the reason flushSize can be set several times higher than the
> batch size is that in a cluster, data nodes would flush in parallel. For
> example, in a cluster with 10 nodes where flushSize is 10240, thread count
> = 2, and batch size = 512, each node would flush out in 2 threads, and
> each thread flushes out in batches of 512.
>
> Could someone confirm or clarify this understanding? Thank you!
>
> On Wed, May 3, 2017 at 12:16 AM, Matt <dromitl...@gmail.com> wrote:
>
>> In fact, I don't see why you would need both batchSize and flushSize. If
>> I got it right, only the min of them would be used by Ignite to know when
>> to flush, why do we have both in the first place?
>>
>> In case they're both necessary for a reason I'm not seeing, I still
>> wonder if the default values should be batchSize > flushSize as I think or
>> not.
>>
>> On Wed, May 3, 2017 at 3:26 AM, Matt <dromitl...@gmail.com> wrote:
>>
>>> I'm writing to confirm I managed to fix my problem by fine tuning the
>>> config params for the write behind cache until the performance was fine. I
>>> still see single element inserts from time to time, but just a few of them
>>> every now and then not like before. You should definitely avoid synchronous
>>> single elements insertions, I hope that changes in future versions.
>>>
>>> Regarding writeBehindBatchSize and writeBehindFlushSize, I don't see
>>> the point of setting both values when batchSize < flushSize (default values
>>> are 512 and 10240 respectively). If I'm not wrong, the cache is flushed
>>> whenever its size is equal to min(batchSize, flushSize). Since
>>> batchSize is less than flushSize, flushSize is never really used and the
>>> size of the flush is controlled by the size of the cache itself only.
>>>
>>> That is how it works by default, on the other hand if we swap their
>>> values (ie, batchSize=10240 and flushSize=512) the behavior would be
>>> the same (Ignite would call writeAll() with 512 elements each time), but
>>> the number of elements flushed would be controlled by the correct variable
>>> (ie, flushSize).
>>>
>>> Were the default values supposed to be the other way around or am I
>>> missing something?
>>>
>>> On Tue, May 2, 2017 at 9:13 PM, Denis Magda <dma...@apache.org> wrote:
>>>
>>>> Matt,
>>>>
>>>> Cross-posting to the dev list.
>>>>
>>>> Yes, Ignite switches to the synchronous mode once the buffer is
>>>> exhausted. However, I do agree that it would be the right solution to
>>>> flush multiple entries rather than one in the synchronous mode.
>>>> *Igniters*, I was sure we had a ticket for that optimization but am
>>>> unable to find it. Does anybody know the ticket name/number?
>>>>
>>>> To avoid the performance degradation, you have to tweak the following
>>>> parameters so that the write-behind store can keep up with your updates:
>>>> - setWriteBehindFlushThreadCount
>>>> - setWriteBehindFlushFrequency
>>>> - setWriteBehindBatchSize
>>>> - setWriteBehindFlushSize
>>>>
>>>> This has usually been enough for Apache Ignite users.
>>>>
>>>> > QUESTION 2
>>>> >
>>>> > I've read on the docs that using ATOMIC mode (default mode) is better
>>>> for performance, but I'm not getting why. If I'm not wrong using
>>>> TRANSACTIONAL mode would cause the CacheStore to reuse connections (not
>>>> call openConnection(autocommit=true) on each writeAll()).
>>>> >
>>

Re: CacheStore's Performance Drops Dramatically - Why?

2017-05-03 Thread Matt
In fact, I don't see why you would need both batchSize and flushSize. If I
got it right, only the min of the two is used by Ignite to decide when to
flush, so why do we have both in the first place?

In case they're both necessary for a reason I'm not seeing, I still wonder
whether the default values should be batchSize > flushSize, as I think, or not.

On Wed, May 3, 2017 at 3:26 AM, Matt <dromitl...@gmail.com> wrote:

> I'm writing to confirm I managed to fix my problem by fine tuning the
> config params for the write behind cache until the performance was fine. I
> still see single element inserts from time to time, but just a few of them
> every now and then not like before. You should definitely avoid synchronous
> single elements insertions, I hope that changes in future versions.
>
> Regarding writeBehindBatchSize and writeBehindFlushSize, I don't see the
> point of setting both values when batchSize < flushSize (default values are
> 512 and 10240 respectively). If I'm not wrong, the cache is flushed
> whenever the its size is equal to min(batchSize, flushSize). Since
> batchSize is less than flushSize, flushSize is never really used and the
> size of the flush is controlled by the size of the cache itself only.
>
> That is how it works by default, on the other hand if we swap their
> values (ie, batchSize=10240 and flushSize=512) the behavior would be the
> same (Ignite would call writeAll() with 512 elements each time), but the
> number of elements flushed would be controlled by the correct variable (ie,
> flushSize).
>
> Were the default values supposed to be the other way around or am I
> missing something?
>
> On Tue, May 2, 2017 at 9:13 PM, Denis Magda <dma...@apache.org> wrote:
>
>> Matt,
>>
>> Cross-posting to the dev list.
>>
>> Yes, Ignite switches to the synchronous mode once the buffer is
>> exhausted. However, I do agree that it would be the right solution to
>> flush multiple entries rather than one in the synchronous mode.
>> *Igniters*, I was sure we had a ticket for that optimization but am
>> unable to find it. Does anybody know the ticket name/number?
>>
>> To avoid the performance degradation, you have to tweak the following
>> parameters so that the write-behind store can keep up with your updates:
>> - setWriteBehindFlushThreadCount
>> - setWriteBehindFlushFrequency
>> - setWriteBehindBatchSize
>> - setWriteBehindFlushSize
>>
>> This has usually been enough for Apache Ignite users.
>>
>> > QUESTION 2
>> >
>> > I've read on the docs that using ATOMIC mode (default mode) is better
>> for performance, but I'm not getting why. If I'm not wrong using
>> TRANSACTIONAL mode would cause the CacheStore to reuse connections (not
>> call openConnection(autocommit=true) on each writeAll()).
>> >
>> > Shouldn't it be better to use transactional mode?
>>
>> Transactional mode enables the two-phase commit protocol:
>> https://apacheignite.readme.io/docs/transactions#two-phase-commit-2pc
>>
>> This is why atomic operations are swifter in general.
>>
>> —
>> Denis
>>
>> > On May 2, 2017, at 10:40 AM, Matt <dromitl...@gmail.com> wrote:
>> >
>> > No, only with inserts, I haven't tried removing at this rate yet but it
>> may have the same problem.
>> >
>> > I'm debugging Ignite internal code and I may be onto something. The
>> thing is Ignite has a cacheMaxSize (aka, WriteBehindFlushSize) and
>> cacheCriticalSize (which by default is cacheMaxSize*1.5). When the cache
>> reaches that size Ignite starts writing elements SYNCHRONOUSLY, as you can
>> see in [1].
>> >
>> > I think this makes things worse since only one single value is flushed
>> at a time, it becomes much slower forcing Ignite to do more synchronous
>> writes.
>> >
>> > Anyway, I'm still not sure why the cache reaches that level when the
>> database is clearly able to keep up with the insertions. I'll check if it
>> has to do with the number of open connections or what.
>> >
>> > Any insight on this is very welcome!
>> >
>> > [1] https://github.com/apache/ignite/blob/master/modules/core/
>> src/main/java/org/apache/ignite/internal/processors/cache/
>> store/GridCacheWriteBehindStore.java#L620
>> >
>> > On Tue, May 2, 2017 at 2:17 PM, Jessie Lin <
>> jessie.jianwei@gmail.com> wrote:
>> > I noticed that behavior when any cache.remove operation is involved. I
>> keep putting stuff in cache seems to be working properly.
>> >
>> > Do you use remove operation?
>> >
>> > 

Re: CacheStore's Performance Drops Dramatically - Why?

2017-05-03 Thread Matt
I'm writing to confirm I managed to fix my problem by fine-tuning the
config params for the write-behind cache until the performance was fine. I
still see single-element inserts from time to time, but just a few of them
every now and then, not like before. Synchronous single-element insertions
should definitely be avoided; I hope that changes in future versions.

Regarding writeBehindBatchSize and writeBehindFlushSize, I don't see the
point of setting both values when batchSize < flushSize (the default values
are 512 and 10240, respectively). If I'm not wrong, the cache is flushed
whenever its size reaches min(batchSize, flushSize). Since
batchSize is less than flushSize, flushSize is never really used and the
size of the flush is controlled by the size of the cache itself only.

That is how it works by default. On the other hand, if we swapped their
values (i.e., batchSize=10240 and flushSize=512) the behavior would be the
same (Ignite would call writeAll() with 512 elements each time), but the
number of elements flushed would be controlled by the correct variable
(i.e., flushSize).

Were the default values supposed to be the other way around, or am I missing
something?
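
For reference, these are the four knobs in question, shown with the default
values discussed above (the cache name and types are placeholders):

CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("products");

ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindBatchSize(512);       // max entries passed to one writeAll()
ccfg.setWriteBehindFlushSize(10240);     // max size of the per-node buffer
ccfg.setWriteBehindFlushFrequency(5000); // flush interval, in milliseconds
ccfg.setWriteBehindFlushThreadCount(1);  // flusher threads per node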

On Tue, May 2, 2017 at 9:13 PM, Denis Magda <dma...@apache.org> wrote:

> Matt,
>
> Cross-posting to the dev list.
>
> Yes, Ignite switches to the synchronous mode once the buffer is exhausted.
> However, I do agree that it would be the right solution to flush multiple
> entries rather than one in the synchronous mode. *Igniters*, I was sure we
> had a ticket for that optimization but am unable to find it. Does anybody
> know the ticket name/number?
>
> To avoid the performance degradation, you have to tweak the following
> parameters so that the write-behind store can keep up with your updates:
> - setWriteBehindFlushThreadCount
> - setWriteBehindFlushFrequency
> - setWriteBehindBatchSize
> - setWriteBehindFlushSize
>
> This has usually been enough for Apache Ignite users.
>
> > QUESTION 2
> >
> > I've read on the docs that using ATOMIC mode (default mode) is better
> for performance, but I'm not getting why. If I'm not wrong using
> TRANSACTIONAL mode would cause the CacheStore to reuse connections (not
> call openConnection(autocommit=true) on each writeAll()).
> >
> > Shouldn't it be better to use transactional mode?
>
> Transactional mode enables the two-phase commit protocol:
> https://apacheignite.readme.io/docs/transactions#two-phase-commit-2pc
>
> This is why atomic operations are swifter in general.
>
> —
> Denis
>
> > On May 2, 2017, at 10:40 AM, Matt <dromitl...@gmail.com> wrote:
> >
> > No, only with inserts, I haven't tried removing at this rate yet but it
> may have the same problem.
> >
> > I'm debugging Ignite internal code and I may be onto something. The
> thing is Ignite has a cacheMaxSize (aka, WriteBehindFlushSize) and
> cacheCriticalSize (which by default is cacheMaxSize*1.5). When the cache
> reaches that size Ignite starts writing elements SYNCHRONOUSLY, as you can
> see in [1].
> >
> > I think this makes things worse since only one single value is flushed
> at a time, it becomes much slower forcing Ignite to do more synchronous
> writes.
> >
> > Anyway, I'm still not sure why the cache reaches that level when the
> database is clearly able to keep up with the insertions. I'll check if it
> has to do with the number of open connections or what.
> >
> > Any insight on this is very welcome!
> >
> > [1] https://github.com/apache/ignite/blob/master/modules/
> core/src/main/java/org/apache/ignite/internal/processors/cache/store/
> GridCacheWriteBehindStore.java#L620
> >
> > On Tue, May 2, 2017 at 2:17 PM, Jessie Lin <jessie.jianwei@gmail.com>
> wrote:
> > I noticed that behavior when any cache.remove operation is involved. I
> keep putting stuff in cache seems to be working properly.
> >
> > Do you use remove operation?
> >
> > On Tue, May 2, 2017 at 9:57 AM, Matt <dromitl...@gmail.com> wrote:
> > I'm stuck with that. No matter what config I use (flush size, write
> threads, etc) this is the behavior I always get. It's as if Ignite internal
> buffer is full and it's trying to write and get rid of the oldest (one)
> element only.
> >
> > Any idea people? What is your CacheStore configuration to avoid this?
> >
> > On Tue, May 2, 2017 at 11:50 AM, Jessie Lin <
> jessie.jianwei@gmail.com> wrote:
> > Hello Matt, thank you for posting. I've noticed similar behavior.
> >
> > Would be curious to see the response from the engineering team.
> >
> > Best,
> > Jessie
> >
> > On Tue, May 2, 2017 at 1:03 AM, Matt <

Re: CacheStore's Performance Drops Dramatically - Why?

2017-05-02 Thread Matt
No, only with inserts, I haven't tried removing at this rate yet but it may
have the same problem.

I'm debugging Ignite internal code and I may be onto something. The thing
is Ignite has a cacheMaxSize (aka, WriteBehindFlushSize) and
cacheCriticalSize (which by default is cacheMaxSize*1.5). When the cache
reaches that size Ignite starts writing elements SYNCHRONOUSLY, as you can
see in [1].

I think this makes things worse: since only a single value is flushed at a
time, it becomes much slower, forcing Ignite to do even more synchronous writes.

Anyway, I'm still not sure why the cache reaches that level when the
database is clearly able to keep up with the insertions. I'll check if it
has to do with the number of open connections or what.

Any insight on this is very welcome!

[1]
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStore.java#L620
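
To spell out the thresholds as I understand them from [1] (ccfg being the
cache's CacheConfiguration):

int flushSize    = ccfg.getWriteBehindFlushSize(); // 10240 by default
int criticalSize = (int)(flushSize * 1.5);         // hard-coded multiplier

// buffer size <  flushSize    -> entries are buffered and flushed in
//                                background batches via writeAll()
// buffer size >= criticalSize -> the calling thread writes entries
//                                synchronously, one at a time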

On Tue, May 2, 2017 at 2:17 PM, Jessie Lin <jessie.jianwei@gmail.com>
wrote:

> I noticed that behavior when any cache.remove operation is involved. I
> keep putting stuff in cache seems to be working properly.
>
> Do you use remove operation?
>
> On Tue, May 2, 2017 at 9:57 AM, Matt <dromitl...@gmail.com> wrote:
>
>> I'm stuck with that. No matter what config I use (flush size, write
>> threads, etc) this is the behavior I always get. It's as if Ignite internal
>> buffer is full and it's trying to write and get rid of the oldest (one)
>> element only.
>>
>> Any idea people? What is your CacheStore configuration to avoid this?
>>
>> On Tue, May 2, 2017 at 11:50 AM, Jessie Lin <jessie.jianwei@gmail.com
>> > wrote:
>>
>>> Hello Matt, thank you for posting. I've noticed similar behavior.
>>>
>>> Would be curious to see the response from the engineering team.
>>>
>>> Best,
>>> Jessie
>>>
>>> On Tue, May 2, 2017 at 1:03 AM, Matt <dromitl...@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I have two questions for you!
>>>>
>>>> *QUESTION 1*
>>>>
>>>> I'm following the example in [1] (a mix between "jdbc transactional"
>>>> and "jdbc bulk operations") and I've enabled write behind, however after
>>>> the first 10k-20k insertions the performance drops *dramatically*.
>>>>
>>>> Based on prints I've added to the CacheStore, I've noticed what Ignite
>>>> is doing is this:
>>>>
>>>> - writeAll called with 512 elements (Ignites buffers elements, that's
>>>> good)
>>>> - openConnection with autocommit=true is called each time inside
>>>> writeAll (since session is not stored in atomic mode)
>>>> - writeAll is called with 512 elements a few dozen times, each time it
>>>> opens a new JDBC connection as mentioned above
>>>> - ...
>>>> - writeAll called with ONE element (for some reason Ignite stops
>>>> buffering elements)
>>>> - writeAll is called with ONE element from here on, each time it opens
>>>> a new JDBC connection as mentioned above
>>>> - ...
>>>>
>>>> Things to note:
>>>>
>>>> - All config values are the defaults ones except for write through and
>>>> write behind which are both enabled.
>>>> - I'm running this as a server node (only one node on the cluster, the
>>>> application itself).
>>>> - I see the problem even with a big heap (ie, Ignite is not nearly out
>>>> of memory).
>>>> - I'm using PostgreSQL for this test (it's fine ingesting around 40k
>>>> rows per second on this computer, so that shouldn't be a problem)
>>>>
>>>> What is causing Ignite to stop buffering elements after calling
>>>> writeAll() a few dozen times?
>>>>
>>>> *QUESTION 2*
>>>>
>>>> I've read on the docs that using ATOMIC mode (default mode) is better
>>>> for performance, but I'm not getting why. If I'm not wrong using
>>>> TRANSACTIONAL mode would cause the CacheStore to reuse connections (not
>>>> call openConnection(autocommit=true) on each writeAll()).
>>>>
>>>> Shouldn't it be better to use transactional mode?
>>>>
>>>> Regards,
>>>> Matt
>>>>
>>>> [1] https://apacheignite.readme.io/docs/persistent-store#sec
>>>> tion-cachestore-example
>>>>
>>>
>>>
>>
>


Re: CacheStore's Performance Drops Dramatically - Why?

2017-05-02 Thread Matt
I'm stuck with that. No matter what config I use (flush size, write
threads, etc.) this is the behavior I always get. It's as if Ignite's
internal buffer is full and it's trying to write and get rid of only the
oldest element, one at a time.

Any ideas, people? What is your CacheStore configuration to avoid this?

On Tue, May 2, 2017 at 11:50 AM, Jessie Lin <jessie.jianwei@gmail.com>
wrote:

> Hello Matt, thank you for posting. I've noticed similar behavior.
>
> Would be curious to see the response from the engineering team.
>
> Best,
> Jessie
>
> On Tue, May 2, 2017 at 1:03 AM, Matt <dromitl...@gmail.com> wrote:
>
>> Hi all,
>>
>> I have two questions for you!
>>
>> *QUESTION 1*
>>
>> I'm following the example in [1] (a mix between "jdbc transactional" and
>> "jdbc bulk operations") and I've enabled write behind, however after the
>> first 10k-20k insertions the performance drops *dramatically*.
>>
>> Based on prints I've added to the CacheStore, I've noticed what Ignite is
>> doing is this:
>>
>> - writeAll called with 512 elements (Ignites buffers elements, that's
>> good)
>> - openConnection with autocommit=true is called each time inside writeAll
>> (since session is not stored in atomic mode)
>> - writeAll is called with 512 elements a few dozen times, each time it
>> opens a new JDBC connection as mentioned above
>> - ...
>> - writeAll called with ONE element (for some reason Ignite stops
>> buffering elements)
>> - writeAll is called with ONE element from here on, each time it opens a
>> new JDBC connection as mentioned above
>> - ...
>>
>> Things to note:
>>
>> - All config values are the defaults ones except for write through and
>> write behind which are both enabled.
>> - I'm running this as a server node (only one node on the cluster, the
>> application itself).
>> - I see the problem even with a big heap (ie, Ignite is not nearly out of
>> memory).
>> - I'm using PostgreSQL for this test (it's fine ingesting around 40k rows
>> per second on this computer, so that shouldn't be a problem)
>>
>> What is causing Ignite to stop buffering elements after calling
>> writeAll() a few dozen times?
>>
>> *QUESTION 2*
>>
>> I've read on the docs that using ATOMIC mode (default mode) is better for
>> performance, but I'm not getting why. If I'm not wrong using TRANSACTIONAL
>> mode would cause the CacheStore to reuse connections (not call
>> openConnection(autocommit=true) on each writeAll()).
>>
>> Shouldn't it be better to use transactional mode?
>>
>> Regards,
>> Matt
>>
>> [1] https://apacheignite.readme.io/docs/persistent-store#
>> section-cachestore-example
>>
>
>


CacheStore's Performance Drops Dramatically - Why?

2017-05-02 Thread Matt
Hi all,

I have two questions for you!

*QUESTION 1*

I'm following the example in [1] (a mix between "jdbc transactional" and
"jdbc bulk operations") and I've enabled write-behind; however, after the
first 10k-20k insertions the performance drops *dramatically*.

Based on print statements I've added to the CacheStore, this is what Ignite
is doing:

- writeAll called with 512 elements (Ignite buffers elements, that's good)
- openConnection with autocommit=true is called each time inside writeAll
(since session is not stored in atomic mode)
- writeAll is called with 512 elements a few dozen times, each time it
opens a new JDBC connection as mentioned above
- ...
- writeAll called with ONE element (for some reason Ignite stops buffering
elements)
- writeAll is called with ONE element from here on, each time it opens a
new JDBC connection as mentioned above
- ...

Things to note:

- All config values are the default ones except for write-through and
write-behind, which are both enabled.
- I'm running this as a server node (only one node on the cluster, the
application itself).
- I see the problem even with a big heap (ie, Ignite is not nearly out of
memory).
- I'm using PostgreSQL for this test (it's fine ingesting around 40k rows
per second on this computer, so that shouldn't be a problem)

What is causing Ignite to stop buffering elements after calling writeAll()
a few dozen times?

*QUESTION 2*

I've read on the docs that using ATOMIC mode (default mode) is better for
performance, but I'm not getting why. If I'm not wrong using TRANSACTIONAL
mode would cause the CacheStore to reuse connections (not call
openConnection(autocommit=true) on each writeAll()).

Shouldn't it be better to use transactional mode?

Regards,
Matt

[1]
https://apacheignite.readme.io/docs/persistent-store#section-cachestore-example
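
As a follow-up to QUESTION 2: if I read the docs in [1] right, a store
session listener should let writeAll() reuse a pooled connection instead of
opening a new one each time; something like this (untested sketch, where
dataSource is a placeholder for a pooled javax.sql.DataSource):

CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

ccfg.setCacheStoreSessionListenerFactories(() -> {
    CacheJdbcStoreSessionListener lsnr = new CacheJdbcStoreSessionListener();
    lsnr.setDataSource(dataSource);
    return lsnr;
});

// Inside the CacheStore, the listener-managed connection is then taken
// from the injected CacheStoreSession instead of openConnection():
//   Connection conn = ses.attachment();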


Ignite Stream Processing?

2017-04-24 Thread Matt
Hi all,

I've been reading Ignite docs but I'm not getting something.

It looks to me that the stream processing API is rather simple, and even
hacky. For instance, Ignite supports stream windows only as a consequence
of having eviction policies on caches, and it lacks many other features
that we generally see in other stream processing frameworks such as Flink,
for instance sliding windows (same length, different start time [1]),
event-time windows (times defined by the data itself, not processing time
[2]), and various transformations (filter, fold, reduce, map, etc. [3]).

I may be able to do some of this using things like the eviction policy to
define the length of a window, but isn't it a better idea to use Ignite as
an in-memory data store, and a fully fledged stream processing framework
on top of it to define the transformations to apply to the data?

My initial idea (which I haven't tried yet) is to use collocation to run a
closure where the data resides (affinity call/run), and to use that closure
to execute a Flink pipeline (job) locally on that node; then, using a
custom-made data source, I should be able to plug the data from the local
Ignite cache into the Flink pipeline and back into a cache using an Ignite
sink.

That would imply running the Flink job locally and completely disabling its
own job distribution support (that way I save time and bandwidth, since the
job is executed on only one node: the node that owns the data). Is that what
you would usually do to process Ignite data as a stream?
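
Concretely, the collocation part I have in mind is roughly this (untested
sketch; the "events" cache and its types are made up):

int affinityKey = 42; // id of the entity whose data we want to process

ignite.compute().affinityRun("events", affinityKey, new IgniteRunnable() {
    @IgniteInstanceResource
    private Ignite local; // injected on the node that owns affinityKey

    @Override public void run() {
        IgniteCache<Integer, String> events = local.cache("events");
        // Entries collocated under affinityKey are local to this node, so
        // the locally executed pipeline can consume them without network hops.
        String value = events.localPeek(affinityKey);
        // ... feed the local entries into the Flink pipeline here ...
    }
});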

Any additional insight regarding stream processing on Ignite is very
welcome!

Best regards,
Matt

[1] https://ci.apache.org/projects/flink/flink-docs-release-
1.2/dev/windows.html#sliding-windows
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.2/
dev/event_time.html
[3] https://ci.apache.org/projects/flink/flink-docs-release-
1.2/dev/datastream_api.html#datastream-transformations


Re: getOrCreateCache hang

2017-03-07 Thread Matt Warner
Nikolai, I feel disappointed with how this email chain seems to be going. I
spent time and effort assembling sample code that exhibits the problem
consistently, only to hear that it doesn't seem like you're even using that
code. That leaves me wondering what you were actually testing when you
told me you couldn't reproduce this. It does not sound like you were using
the code I supplied.

I also sent thread dumps and logs early on in this process, and the current
thread dumps show the same as before: getOrCreateCache.

This problem does seem tied to the CacheJdbcStoreSessionListener data
source, but I've so far been unable to pin it down beyond that. No doubt
it's something simple I'm doing wrong, but whatever it is still eludes me.

Matt

On Tue, Feb 28, 2017 at 2:14 AM, Nikolai Tikhonov <ntikho...@apache.org>
wrote:

> I am not able to reproduce your test exactly (I do not have a Postgres
> instance), but I do not get hangs on cache start. I suppose the issue is
> related to your setup; to find the cause of the problem I need full logs
> and thread dumps (via kill -3 PID or jstack) from all nodes. Could you
> please provide this data?
>
> On Mon, Feb 27, 2017 at 10:36 PM, Matt Warner <m...@warnertechnology.com>
> wrote:
>
>> Something's not right. I'm seeing this as 100% reproducible on two
>> different OSes.
>>
>> Are you running "mvn package" on both testIgnite1 and testIgnite2 and
>> then starting the shaded JARs simultaneously from two different shell
>> windows using "java-jar ignite?...-standalone.jar"?
>>
>> Matt
>>
>> On Feb 27, 2017, at 8:58 AM, Nikolai Tikhonov <ntikho...@apache.org>
>> wrote:
>>
>> I was not able to reproduce it. Could you share full logs and thread dumps
>> from all nodes?
>>
>> On Mon, Feb 27, 2017 at 7:45 PM, Matt Warner <m...@warnertechnology.com>
>> wrote:
>>
>>> Using the test code I sent previously, I changed the ports to be a range
>>> but both clients still deadlock on getOrCreateCache.
>>>
>>> Are you able to reproduce this using the test code I sent?
>>>
>>> Matt
>>>
>>>
>>> On Mon, Feb 27, 2017 at 3:02 AM, Nikolai Tikhonov <ntikho...@apache.org>
>>> wrote:
>>>
>>>> Hi Matt!
>>>>
>>>> Try to change ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500")); to
>>>> ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47505"));
>>>>
>>>> On Fri, Feb 24, 2017 at 4:00 PM, Matt Warner <m...@warnertechnology.com
>>>> > wrote:
>>>>
>>>>> Hi Nikolai.
>>>>>
>>>>> I discovered the reason the two applications weren't seeing each other
>>>>> was resolved by adding an explicit port number (Arrays.asList("
>>>>> 127.0.0.1:47500")). However, the two still deadlock when running
>>>>> concurrently.
>>>>>
>>>>> The latest test shows one application blocked in getOrCreateCache, the
>>>>> other blocked in Ignition.start. As soon as I kill the process stuck in
>>>>> Ignition.start, the other process continues successfully. I've attached
>>>>> the latest test code.
>>>>>
>>>>> Any ideas?
>>>>>
>>>>> Matt
>>>>>
>>>>> On Wed, Feb 22, 2017 at 10:30 AM, Matt Warner [via Apache Ignite
>>>>> Users] <[hidden email]> wrote:
>>>>>
>>>>>> One other observation: with the third application acting as just the
>>>>>> server, and just one of the clients running, there is no issue. Only when
>>>>>> there are multiple clients do I get the deadlock on getOrCreateCache.
>>>>>>
>>>>>> Matt
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> *testIgnite.tar.gz* (14K) Download Attachment
>>>>> <http://apache-ignite-users.70518.x6.nabble.com/attachment/10866/0/testIgnite.tar.gz>
>>>>>
>>>>> --
>>>>> View this message in context: Re: getOrCreateCache hang
>>>>> <http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10866.html>
>>>>>
>>>>> Sent from the Apache Ignite Users mailing list archive
>>>>> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>>>>>
>>>>
>>>>
>>>
>>
>


Re: getOrCreateCache hang

2017-02-27 Thread Matt Warner
Something's not right. I'm seeing this as 100% reproducible on two different 
OSes. 

Are you running "mvn package" on both testIgnite1 and testIgnite2 and then 
starting the shaded JARs simultaneously from two different shell windows using 
"java-jar ignite?...-standalone.jar"?

Matt

> On Feb 27, 2017, at 8:58 AM, Nikolai Tikhonov <ntikho...@apache.org> wrote:
> 
> I was not able to reproduce it. Could you share full logs and thread dumps
> from all nodes?
> 
>> On Mon, Feb 27, 2017 at 7:45 PM, Matt Warner <m...@warnertechnology.com> 
>> wrote:
>> Using the test code I sent previously, I changed the ports to be a range but 
>> both clients still deadlock on getOrCreateCache.
>> 
>> Are you able to reproduce this using the test code I sent?
>> 
>> Matt
>> 
>> 
>>> On Mon, Feb 27, 2017 at 3:02 AM, Nikolai Tikhonov <ntikho...@apache.org> 
>>> wrote:
>>> Hi Matt!
>>> 
>>> Try to change ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500")); to 
>>> ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47505"));
>>> 
>>>> On Fri, Feb 24, 2017 at 4:00 PM, Matt Warner <m...@warnertechnology.com> 
>>>> wrote:
>>>> Hi Nikolai.
>>>> 
>>>> I discovered the reason the two applications weren't seeing each other was 
>>>> resolved by adding an explicit port number 
>>>> (Arrays.asList("127.0.0.1:47500")). However, the two still deadlock when 
>>>> running concurrently.
>>>> 
>>>> The latest test shows one application blocked in getOrCreateCache, the 
>>>> other blocked in Ignition.start. As soon as I kill the process stuck in 
>>>> Ignition.start, the other process continues successfully. I've attached 
>>>> the latest test code.
>>>> 
>>>> Any ideas?
>>>> 
>>>> Matt
>>>> 
>>>>> On Wed, Feb 22, 2017 at 10:30 AM, Matt Warner [via Apache Ignite Users] 
>>>>> <[hidden email]> wrote:
>>>>> One other observation: with the third application acting as just the 
>>>>> server, and just one of the clients running, there is no issue. Only when 
>>>>> there are multiple clients do I get the deadlock on getOrCreateCache. 
>>>>> 
>>>>> Matt 
>>>>> 
>>>> 
>>>> 
>>>>  testIgnite.tar.gz (14K) Download Attachment
>>>> 
>>>> View this message in context: Re: getOrCreateCache hang
>>>> 
>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>> 
>> 
> 


Re: getOrCreateCache hang

2017-02-27 Thread Matt Warner
Using the test code I sent previously, I changed the ports to be a range
but both clients still deadlock on getOrCreateCache.
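
Concretely, the discovery setup in both clients now reads:

TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47505")); // port range
spi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(spi);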

Are you able to reproduce this using the test code I sent?

Matt

On Mon, Feb 27, 2017 at 3:02 AM, Nikolai Tikhonov <ntikho...@apache.org>
wrote:

> Hi Matt!
>
> Try to change ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500")); to
> ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47505"));
>
> On Fri, Feb 24, 2017 at 4:00 PM, Matt Warner <m...@warnertechnology.com>
> wrote:
>
>> Hi Nikolai.
>>
>> I discovered the reason the two applications weren't seeing each other
>> was resolved by adding an explicit port number (Arrays.asList("
>> 127.0.0.1:47500")). However, the two still deadlock when running
>> concurrently.
>>
>> The latest test shows one application blocked in getOrCreateCache, the
>> other blocked in Ignition.start. As soon as I kill the process stuck in
>> Ignition.start, the other process continues successfully. I've attached the
>> latest test code.
>>
>> Any ideas?
>>
>> Matt
>>
>> On Wed, Feb 22, 2017 at 10:30 AM, Matt Warner [via Apache Ignite Users]
>> <[hidden email]> wrote:
>>
>>> One other observation: with the third application acting as just the
>>> server, and just one of the clients running, there is no issue. Only when
>>> there are multiple clients do I get the deadlock on getOrCreateCache.
>>>
>>> Matt
>>>
>>>
>>
>>
>> *testIgnite.tar.gz* (14K) Download Attachment
>> <http://apache-ignite-users.70518.x6.nabble.com/attachment/10866/0/testIgnite.tar.gz>
>>
>> --
>> View this message in context: Re: getOrCreateCache hang
>> <http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10866.html>
>>
>> Sent from the Apache Ignite Users mailing list archive
>> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>>
>
>


Re: getOrCreateCache hang

2017-02-24 Thread Matt Warner
Hi Nikolai.

I discovered that the reason the two applications weren't seeing each other
was the missing port number; adding an explicit port
(Arrays.asList("127.0.0.1:47500")) resolved that. However, the two still
deadlock when running concurrently.

The latest test shows one application blocked in getOrCreateCache, the
other blocked in Ignition.start. As soon as I kill the process stuck in
Ignition.start, the other process continues successfully. I've attached the
latest test code.

Any ideas?

Matt

On Wed, Feb 22, 2017 at 10:30 AM, Matt Warner [via Apache Ignite Users] <
ml-node+s70518n10811...@n6.nabble.com> wrote:

> One other observation: with the third application acting as just the
> server, and just one of the clients running, there is no issue. Only when
> there are multiple clients do I get the deadlock on getOrCreateCache.
>
> Matt
>
>


testIgnite.tar.gz (14K) 
<http://apache-ignite-users.70518.x6.nabble.com/attachment/10866/0/testIgnite.tar.gz>




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10866.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: getOrCreateCache hang

2017-02-22 Thread Matt Warner
One other observation: with the third application acting as just the server,
and just one of the clients running, there is no issue. Only when there are
multiple clients do I get the deadlock on getOrCreateCache.

Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10811.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-22 Thread Matt Warner
I tried three different scenarios:

In the first, I updated both test projects to act as servers (setClientMode
is commented out), but the two nodes do not seem to see each other and it
results in conflicting primary keys at the database level, leading to
errors.

Second, I setClientMode on just one of the two test applications, and the
stack trace shows the client deadlocks in getOrCreateCache, as before. The
server completes without issue.

For the final test, I created a third application that just starts Ignite
with the code you sent, and I made sure it included the jar files from the
other two clients so it had all the classes.
clients (setClientMode(true)). I started this third server class and then
started the two clients. The clients both deadlock as before in
getOrCreateCache.

I'm attaching the maven test projects.

I'm very surprised that you're not seeing this with the test code I'm
sending when I'm seeing 100% reproducibility. I think I must be missing
something very basic...?

Matt

On Wed, Feb 22, 2017 at 6:21 AM, Matt Warner <m...@warnertechnology.com>
wrote:

> I will try this in about 2 hours and let you know.
>
> On Feb 22, 2017, at 5:43 AM, Nikolai Tikhonov <ntikho...@apache.org>
> wrote:
>
> Could you try to start the server from Java code and look at the results?
>
> Thanks,
> Nikolay
>
> On Wed, Feb 22, 2017 at 4:36 PM, Matt Warner <m...@warnertechnology.com>
> wrote:
>
>> I am using the default configuration for the server and starting it with
>> ignite.sh.
>>
>> Yes, the Store classes are being deployed into the Ignite libs directory
>> as well as part of the both testIgnite1 and testIgnite2.
>>
>> I feel like I'm missing something?
>>
>> Matt
>>
>> On Wed, Feb 22, 2017 at 4:50 AM, Nikolai Tikhonov-2 [via Apache Ignite
>> Users] <[hidden email]> wrote:
>>
>>> Hi Matt!
>>>
>>> I've run your test and didn't face the issue. I used the following
>>> code to start the server node (placed in your project):
>>>
>>> public class Server {
>>>public static void main(String[] args) throws Exception {
>>>   TcpDiscoverySpi spi = new TcpDiscoverySpi();
>>>   TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>>>   ipFinder.setAddresses(Arrays.asList("127.0.0.1"));
>>>   spi.setIpFinder(ipFinder);
>>>   IgniteConfiguration cfg = new IgniteConfiguration();
>>>   cfg.setDiscoverySpi(spi);
>>>
>>>   Ignition.start(cfg);
>>>}
>>> }
>>>
>>> Which configuration did you use to start the server node? In your case you
>>> should deploy the Store classes on all nodes in the cluster.
>>>
>>> Thanks,
>>> Nikolay
>>>
>>> On Wed, Feb 22, 2017 at 1:27 AM, Matt Warner <[hidden email]> wrote:
>>>
>>>> There were some coding errors in the previous attachment. The previous
>>>> code
>>>> still illustrates the deadlock, but the attached correctly stores data
>>>> in
>>>> tables.
>>>>
>>>> I've also included the test input file, for completeness. testCode.gz
>>>> <http://apache-ignite-users.70518.x6.nabble.com/file/n10770/testCode.gz
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context: http://apache-ignite-users.705
>>>> 18.x6.nabble.com/getOrCreateCache-hang-tp10737p10770.html
>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>
>>>
>>>
>>>
>>> --
>>>
>>
>>
>> --
>> View this message in context: Re: getOrCreateCache hang
>> <http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10798.html>
>>
>> Sent from the Apache Ignite Users mailing list archive
>> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>>
>
>


testCode.tar.gz
Description: GNU Zip compressed data


Re: getOrCreateCache hang

2017-02-22 Thread Matt Warner
I will try this in about 2 hours and let you know. 

> On Feb 22, 2017, at 5:43 AM, Nikolai Tikhonov <ntikho...@apache.org> wrote:
> 
> Could you try to start the server from Java code and look at the results?
> 
> Thanks,
> Nikolay
> 
>> On Wed, Feb 22, 2017 at 4:36 PM, Matt Warner <m...@warnertechnology.com> 
>> wrote:
>> I am using the default configuration for the server and starting it with 
>> ignite.sh.
>> 
>> Yes, the Store classes are being deployed into the Ignite libs directory as 
>> well as part of the both testIgnite1 and testIgnite2.
>> 
>> I feel like I'm missing something?
>> 
>> Matt
>> 
>>> On Wed, Feb 22, 2017 at 4:50 AM, Nikolai Tikhonov-2 [via Apache Ignite 
>>> Users] <[hidden email]> wrote:
>>> Hi Matt!
>>> 
>>> I've run your test and didn't face the issue. I used the following
>>> code to start the server node (placed in your project):
>>> 
>>> public class Server {
>>>public static void main(String[] args) throws Exception {
>>>   TcpDiscoverySpi spi = new TcpDiscoverySpi();
>>>   TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>>>   ipFinder.setAddresses(Arrays.asList("127.0.0.1"));
>>>   spi.setIpFinder(ipFinder);
>>>   IgniteConfiguration cfg = new IgniteConfiguration();
>>>   cfg.setDiscoverySpi(spi);
>>> 
>>>   Ignition.start(cfg);
>>>}
>>> }
>>> Which configuration did you use to start the server node? In your case you
>>> should deploy the Store classes on all nodes in the cluster.
>>> 
>>> Thanks,
>>> Nikolay
>>> 
>>>> On Wed, Feb 22, 2017 at 1:27 AM, Matt Warner <[hidden email]> wrote:
>>>> There were some coding errors in the previous attachment. The previous code
>>>> still illustrates the deadlock, but the attached correctly stores data in
>>>> tables.
>>>> 
>>>> I've also included the test input file, for completeness. testCode.gz
>>>> <http://apache-ignite-users.70518.x6.nabble.com/file/n10770/testCode.gz>
>>>> 
>>>> 
>>>> 
>>>> --
>>>> View this message in context: 
>>>> http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10770.html
>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>> 
>>> 
>>> 
>> 
>> 
>> View this message in context: Re: getOrCreateCache hang
>> 
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 


Re: getOrCreateCache hang

2017-02-22 Thread Matt Warner
Yes, the jar files contain the classes. You should be able to see this in the 
jar files created by the maven project I sent. 

> On Feb 22, 2017, at 5:45 AM, Nikolai Tikhonov <ntikho...@apache.org> wrote:
> 
> Also, are you sure that the jar files from the testIgnite1 and testIgnite2
> projects (which are deployed into the libs dir) contain the needed classes?
> Maybe you have a build issue?
> 
>> On Wed, Feb 22, 2017 at 4:43 PM, Nikolai Tikhonov <ntikho...@apache.org> 
>> wrote:
>> Could you try to start the server from Java code and look at the results?
>> 
>> Thanks,
>> Nikolay
>> 
>>> On Wed, Feb 22, 2017 at 4:36 PM, Matt Warner <m...@warnertechnology.com> 
>>> wrote:
>>> I am using the default configuration for the server and starting it with 
>>> ignite.sh.
>>> 
>>> Yes, the Store classes are being deployed into the Ignite libs directory as 
>>> well as part of the both testIgnite1 and testIgnite2.
>>> 
>>> I feel like I'm missing something?
>>> 
>>> Matt
>>> 
>>>> On Wed, Feb 22, 2017 at 4:50 AM, Nikolai Tikhonov-2 [via Apache Ignite 
>>>> Users] <[hidden email]> wrote:
>>>> Hi Matt!
>>>> 
>>>> I've run your test and didn't face the issue. I used the following
>>>> code to start the server node (placed in your project):
>>>> 
>>>> public class Server {
>>>>public static void main(String[] args) throws Exception {
>>>>   TcpDiscoverySpi spi = new TcpDiscoverySpi();
>>>>   TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>>>>   ipFinder.setAddresses(Arrays.asList("127.0.0.1"));
>>>>   spi.setIpFinder(ipFinder);
>>>>   IgniteConfiguration cfg = new IgniteConfiguration();
>>>>   cfg.setDiscoverySpi(spi);
>>>> 
>>>>   Ignition.start(cfg);
>>>>}
>>>> }
>>>> Which configuration did you use to start the server node? In your case you
>>>> should deploy the Store classes on all nodes in the cluster.
>>>> 
>>>> Thanks,
>>>> Nikolay
>>>> 
>>>>> On Wed, Feb 22, 2017 at 1:27 AM, Matt Warner <[hidden email]> wrote:
>>>>> There were some coding errors in the previous attachment. The previous 
>>>>> code
>>>>> still illustrates the deadlock, but the attached correctly stores data in
>>>>> tables.
>>>>> 
>>>>> I've also included the test input file, for completeness. testCode.gz
>>>>> <http://apache-ignite-users.70518.x6.nabble.com/file/n10770/testCode.gz>
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> View this message in context: 
>>>>> http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10770.html
>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> View this message in context: Re: getOrCreateCache hang
>>> 
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>> 
> 


Re: getOrCreateCache hang

2017-02-22 Thread Matt Warner
I am using the default configuration for the server and starting it with
ignite.sh.

Yes, the Store classes are being deployed into the Ignite libs directory as
well as part of the both testIgnite1 and testIgnite2.

I feel like I'm missing something?

Matt

On Wed, Feb 22, 2017 at 4:50 AM, Nikolai Tikhonov-2 [via Apache Ignite
Users] <ml-node+s70518n10795...@n6.nabble.com> wrote:

> Hi Matt!
>
> I've run your test and didn't face the issue. I used the following
> code to start the server node (placed in your project):
>
> public class Server {
>public static void main(String[] args) throws Exception {
>   TcpDiscoverySpi spi = new TcpDiscoverySpi();
>   TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>   ipFinder.setAddresses(Arrays.asList("127.0.0.1"));
>   spi.setIpFinder(ipFinder);
>   IgniteConfiguration cfg = new IgniteConfiguration();
>   cfg.setDiscoverySpi(spi);
>
>   Ignition.start(cfg);
>}
> }
>
> Which configuration did you use to start the server node? In your case you
> should deploy the Store classes on all nodes in the cluster.
>
> Thanks,
> Nikolay
>
> On Wed, Feb 22, 2017 at 1:27 AM, Matt Warner <[hidden email]> wrote:
>
>> There were some coding errors in the previous attachment. The previous
>> code
>> still illustrates the deadlock, but the attached correctly stores data in
>> tables.
>>
>> I've also included the test input file, for completeness. testCode.gz
>> <http://apache-ignite-users.70518.x6.nabble.com/file/n10770/testCode.gz>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/getOrCreateCache-hang-tp10737p10770.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
>




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10798.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: getOrCreateCache hang

2017-02-21 Thread Matt Warner
There were some coding errors in the previous attachment. The previous code
still illustrates the deadlock, but the attached correctly stores data in
tables.

I've also included the test input file, for completeness. testCode.gz
  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10770.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-21 Thread Matt Warner
There is only one Ignite server in my testing and two clients.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10769.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-21 Thread Matt Warner
Attached is a file containing the outputs (stack trace, Ignite log) and two
Maven test files.  test+output.gz
<http://apache-ignite-users.70518.x6.nabble.com/file/n10768/test%2Boutput.gz>  

I'm hoping you can tell me I'm just doing something silly to provoke this...

Thanks!

Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10768.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


getOrCreateCache hang

2017-02-20 Thread Matt Warner
I'm experiencing Ignite client hangs when calling getOrCreateCache while both
clients are starting simultaneously. The stack trace shows the clients are
hung in the getOrCreateCache method, which is why I'm focusing here.

This seems like a deadlock when both clients are trying to simultaneously
call getOrCreateCache.

The setup is a vanilla Ignite installation running (./bin/ignite.sh) and two
clients (IgniteConfiguration setClientMode(true)). Both go through the same
setup, albeit in separate jar files (and separate PIDs):

TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1"));
spi.setIpFinder(ipFinder);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(spi);
cfg.setClientMode(true);
try (Ignite ignite = Ignition.start(cfg)) {
    CacheConfiguration<> cacheCfg = new CacheConfiguration<>(CACHE_NAME);
    cacheCfg.setAtomicityMode(ATOMIC);
    cacheCfg.setReadThrough(true);
    cacheCfg.setWriteThrough(true);
    cacheCfg.setWriteBehindEnabled(false);
    cache = ignite.getOrCreateCache(cacheCfg);  // <-- Hangs here
    // ...
}

"main" #1 prio=5 os_prio=31 tid=0x7fdd01009800 nid=0xc07 waiting on
condition [0x70218000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0007964b0e58> (a
org.apache.ignite.internal.processors.cache.GridCacheProcessor$DynamicCacheStartFuture)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:160)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:118)
at
org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2586)


My apologies in advance if this is a well-known problem. I've been searching
and am stumped.

Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Streamer error

2016-09-29 Thread matt
Having similar problems still. I've tried at least 3 different serialization
methods for the addData message (latest is a POJO (Serializable) w/3 String
fields). Here's the latest error message: http://pastebin.com/b2awykDy



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Streamer-error-tp7725p8013.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Streamer error

2016-09-28 Thread matt
OK I'll give that a shot. Thanks Val!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Streamer-error-tp7725p7994.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Streamer error

2016-09-16 Thread matt
I've been at this for a while but am having no luck. Some of this might be
the way we have our dependencies set up, or something else in our app that's
preventing Ignite streaming from working fully. But generally, what I'm
seeing is that I can send messages into Ignite, and the receiver can take
them out. But it seems that as soon as I do something with the results (I'm
sending them to another service, re-serializing), I get a huge stack trace
from Ignite (I'll include the full message below):

Failed to marshal object with optimized marshaller:
org.apache.ignite.stream.StreamVisitor$1@76a7186e

The cache I'm using has keepBinary set to true, and the streamer does too.
The cache key/value is <String, BinaryObject>, and so when I put items into
the streamer, I do it like:

this.stmr.addData(doc.getId(), igniteComponent.get().binary().toBinary(
  myObject
));

where "myObject" is a pretty simple POJO.

When I take things out of the streamer, it's done like:

stmr.receiver(StreamVisitor.from((cache, e) -> {
  // Activating the code below causes Ignite to throw all sorts of
  // classdef-not-found and marshaling errors...

  try {
    String id = e.getKey();
    BinaryObject value = e.getValue();
    logger.info(" id {} | value {}", id, value);
    StreamItem item = value.deserialize();
    ...

Am I doing this all wrong? Are there various serialization/marshaling
options for me to try out? Why is it complaining about JMS anyway?
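
Boiled down, the whole flow I'm attempting is something like this (sketch;
StreamItem is my 3-field POJO and "docs" is a placeholder cache name):

IgniteCache<String, BinaryObject> cache =
    ignite.getOrCreateCache("docs").withKeepBinary();

try (IgniteDataStreamer<String, BinaryObject> stmr = ignite.dataStreamer("docs")) {
    stmr.keepBinary(true);
    stmr.receiver(StreamVisitor.from((c, e) -> {
        // This closure is marshalled and sent to the data nodes, so it must
        // not capture anything non-serializable from the enclosing class.
        // Could that be where the JMS reference sneaks in?
        StreamItem item = e.getValue().deserialize();
        // ... hand item off to the other service here ...
    }));

    stmr.addData("doc-1", ignite.binary().toBinary(new StreamItem(/* ... */)));
}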

Thanks in advance, and apologies for the huge stack trace.

- M

2016-09-16T08:29:53,046 - ERROR
[grid-data-loader-flusher-#55%null%:Slf4jLogger@112] - Runtime error caught
during grid runnable execution: GridWorker [name=grid-data-loader-flusher,
gridName=null, finished=false, isCancelled=false, hashCode=1773754936,
interrupted=false, runner=grid-data-loader-flusher-#55%null%]
org.apache.ignite.binary.BinaryObjectException: Failed to marshal object
with optimized marshaller: org.apache.ignite.stream.StreamVisitor$1@76a7186e
at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:167)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:132)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:239)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.binary.BinaryMarshaller.marshal(BinaryMarshaller.java:92)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.submit(DataStreamerImpl.java:1362)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.flush(DataStreamerImpl.java:1248)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.tryFlush(DataStreamerImpl.java:970)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$2.body(DataStreamProcessor.java:108)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
[ignite-core-1.7.0.jar:1.7.0]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_31]
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize
object: org.apache.ignite.stream.StreamVisitor$1@76a7186e
at
org.apache.ignite.marshaller.optimized.OptimizedMarshaller.marshal(OptimizedMarshaller.java:197)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:160)
~[ignite-core-1.7.0.jar:1.7.0]
... 9 more
Caused by: java.io.IOException: java.io.IOException: java.io.IOException:
java.lang.NoClassDefFoundError: javax/jms/JMSException
at
org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeSerializable(OptimizedObjectOutputStream.java:347)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:800)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:247)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeFields(OptimizedObjectOutputStream.java:539)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeSerializable(OptimizedObjectOutputStream.java:351)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:800)
~[ignite-core-1.7.0.jar:1.7.0]
at
org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:247)
~[ignite-core-1.7.0.jar:1.7.0]
at

Re: Data Streamer error

2016-09-14 Thread matt
Thanks Alexey, good point. I'm checking out a few things now related to that
thought. Will post back any findings here.

M



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Streamer-error-tp7725p7739.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Data Streamer error

2016-09-13 Thread matt
Hi,

I have Ignite (version 1.7.0) set up to do streaming, with an
autoFlushFrequency of 15 (for now). In the logs, I'm getting the following
during processing; any ideas on where this is coming from?
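
For context, the streamer is set up roughly like this (sketch; the cache
name is a placeholder):

IgniteDataStreamer<String, BinaryObject> stmr = ignite.dataStreamer("docs");
stmr.autoFlushFrequency(15); // flush buffered entries every 15 ms
stmr.receiver(StreamVisitor.from((cache, e) -> {
    // process each streamed entry against the cache
}));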

2016-09-13T17:31:52,580 - ERROR [grid-data-loader-flusher-#55%null%:Slf4jLogger@112] - Runtime error caught during grid runnable execution: GridWorker [name=grid-data-loader-flusher, gridName=null, finished=false, isCancelled=false, hashCode=1315116657, interrupted=false, runner=grid-data-loader-flusher-#55%null%]
org.apache.ignite.binary.BinaryObjectException: Failed to marshal object with optimized marshaller: org.apache.ignite.stream.StreamVisitor$1@4e8b328a
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:167) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:132) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:239) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.binary.BinaryMarshaller.marshal(BinaryMarshaller.java:92) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.submit(DataStreamerImpl.java:1362) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.flush(DataStreamerImpl.java:1248) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.tryFlush(DataStreamerImpl.java:970) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$2.body(DataStreamProcessor.java:108) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) [ignite-core-1.7.0.jar:1.7.0]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_31]
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: org.apache.ignite.stream.StreamVisitor$1@4e8b328a
    at org.apache.ignite.marshaller.optimized.OptimizedMarshaller.marshal(OptimizedMarshaller.java:197) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:160) ~[ignite-core-1.7.0.jar:1.7.0]
    ... 9 more
Caused by: java.io.IOException: java.util.ConcurrentModificationException
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeSerializable(OptimizedObjectOutputStream.java:347) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:800) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:247) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeFields(OptimizedObjectOutputStream.java:539) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeSerializable(OptimizedObjectOutputStream.java:351) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:800) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:247) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeFields(OptimizedObjectOutputStream.java:539) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeSerializable(OptimizedObjectOutputStream.java:351) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:800) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:247) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeFields(OptimizedObjectOutputStream.java:539) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeSerializable(OptimizedObjectOutputStream.java:351) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedClassDescriptor.write(OptimizedClassDescriptor.java:800) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeObject0(OptimizedObjectOutputStream.java:247) ~[ignite-core-1.7.0.jar:1.7.0]
    at org.apache.ignite.marshaller.optimized.OptimizedObjectOutputStream.writeFields(OptimizedObjectOutputStream.java:539) ~[ignite-core-1.7.0.jar:1.7.0]
    at
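
Both traces above fail while serializing org.apache.ignite.stream.StreamVisitor$1,
i.e. the anonymous closure that StreamVisitor.from(...) wraps around a
user-supplied visitor. Whatever that closure captures is serialized on every
flush, which would explain both causes: a captured collection mutated
mid-write (the ConcurrentModificationException) and a captured object dragging
JMS classes into the graph (the NoClassDefFoundError: javax/jms/JMSException).
Below is a minimal sketch of a capture-free visitor; the cache name, key/value
types, and the Visit class are hypothetical, not the original poster's code.

import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteBiInClosure;
import org.apache.ignite.stream.StreamVisitor;

public class StreamerSketch {
    /** Static receiver closure: it references no enclosing instance, so the
     * optimized marshaller serializes nothing but this (stateless) object. */
    private static class Visit implements IgniteBiInClosure<IgniteCache<String, Long>, Map.Entry<String, Long>> {
        @Override public void apply(IgniteCache<String, Long> cache, Map.Entry<String, Long> e) {
            cache.put(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumes a cache named "myCache" is already configured on the cluster.
            try (IgniteDataStreamer<String, Long> streamer = ignite.dataStreamer("myCache")) {
                streamer.autoFlushFrequency(15); // the value from the post; the unit is milliseconds
                streamer.receiver(StreamVisitor.from(new Visit()));
                streamer.addData("key", 42L);
            }
        }
    }
}

If the visitor genuinely needs outer state, every captured field has to be
Serializable, present on the server nodes' classpath, and not mutated
concurrently while the flusher thread runs.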

Re: Performance Issue - Threads blocking

2016-04-22 Thread Matt Hoffman
(Inline)

On Fri, Apr 22, 2016, 4:26 PM vkulichenko <valentin.kuliche...@gmail.com>
wrote:
>
> Hi Matt,
>
> I'm confused. The locking does happen at the per-entry level; otherwise it's
> impossible to guarantee data consistency. Two concurrent updates or reads
> for the same key will wait for each other on this lock. But this should not
> cause performance degradation, unless you have very few keys and very high
> contention on them.
>

Based on his claim of a lot of threads waiting on the same locks, I assumed
that's what was happening -- high contention for a few cache keys. I don't
know his use case, but I can imagine cases with a fairly small number of
very "hot" entries.
It wouldn't necessarily require very few keys, right? Just high contention
on a few of them.

> The only thing I see here is that the value is deserialized on read. This is
> done because JCache requires store-by-value semantics, so we create a copy
> each time you get the value (by deserializing its binary representation).
> You can override this behavior by setting the
> CacheConfiguration.setCopyOnRead(false) property, which should give you a
> performance improvement. Just note that it's not safe to modify an instance
> you got from the cache this way.
>

Do you think that would be a candidate for the "Performance tips" page in
the docs? I know I've referred to that page a few times recently myself.

> -Val
>
>
>
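
For reference, a minimal sketch of the setCopyOnRead(false) change Val
describes. The cache name and value type here are made up; the one relevant
line is the setter.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class CopyOnReadExample {
    public static void main(String[] args) {
        // Hypothetical cache name/types; only setCopyOnRead(false) matters here.
        CacheConfiguration<String, int[]> ccfg = new CacheConfiguration<>("hotCache");
        ccfg.setCopyOnRead(false); // hand back the stored instance instead of deserializing a copy on every get

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, int[]> cache = ignite.getOrCreateCache(ccfg);
            cache.put("k", new int[] {1, 2, 3});

            int[] v = cache.get("k");
            // With copyOnRead(false) this may be the cached instance itself,
            // so treat it as read-only; mutating it would corrupt the cache.
        }
    }
}

The trade-off is exactly the one Val flags: with copy-on-read disabled, a
get() may return the cached instance itself, so callers must treat values as
immutable.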


Re: Performance Issue - Threads blocking

2016-04-22 Thread Matt Hoffman
I'm assuming you're seeing a lot of threads that are BLOCKED waiting on
that locked GridLocalCacheEntry (<70d32489> in the example you pasted
above). Looking at the code, it does block on individual cache entries (so
two reads of the same key within the same JVM will block). In your
particular example, the thread in question is publishing an
EVT_CACHE_OBJECT_EXPIRED event. If you don't need that, turning it off
(along with EVT_CACHE_OBJECT_READ) will reduce the time the cache entry
spends blocking other reads (and speed things up generally) - see the
sketch below.
It's locking to make sure the entry is deserialized from swap once and
expired once (if necessary; it looks like it was in this particular case).
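
A sketch of what turning those events off can look like. Ignite cache events
are opt-in via IgniteConfiguration.setIncludeEventTypes, so "off" just means
not listing them; EVT_CACHE_OBJECT_PUT below is only a hypothetical stand-in
for event types you actually need.

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.EventType;

public class EventTypesSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Only the listed event types are recorded. Leaving out
        // EVT_CACHE_OBJECT_READ and EVT_CACHE_OBJECT_EXPIRED avoids the
        // event-publishing work done while the entry lock is held.
        cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT);

        // Omit setIncludeEventTypes entirely if you need no cache events.
        Ignition.start(cfg);
    }
}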

matt

On Fri, Apr 22, 2016 at 8:07 AM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Hi,
>
> Could you please explain why you think the thread is blocked? I
> see it is in a RUNNABLE state.
>
> Vladimir.
>
> On Fri, Apr 22, 2016 at 2:41 AM, ccanning <ccann...@stubhub.com> wrote:
>
>> We seem to be having some serious performance issues after adding an Apache
>> Ignite local cache to our APIs. Looking at a thread dump, we seem to have a
>> bunch of threads blocked by this lock:
>>
>> "ajp-0.0.0.0-8009-70" - Thread t@641
>>    java.lang.Thread.State: RUNNABLE
>>     at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:166)
>>     at org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1486)
>>     at org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:1830)
>>     at org.apache.ignite.internal.binary.BinaryUtils.doReadMap(BinaryUtils.java:1813)
>>     at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1597)
>>     at org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1646)
>>     at org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:643)
>>     at org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:714)
>>     at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
>>     at org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:537)
>>     at org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:117)
>>     at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:280)
>>     at org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:145)
>>     at org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:276)
>>     at org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:159)
>>     at org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:92)
>>     at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:862)
>>     - locked <70d32489> (a org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
>>     at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet(GridCacheMapEntry.java:669)
>>     at org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.getAllInternal(GridLocalAtomicCache.java:587)
>>     at org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.get(GridLocalAtomicCache.java:483)
>>     at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1378)
>>     at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:864)
>>     at org.apache.ignite.cache.spring.SpringCache.get(SpringCache.java:52)
>>
>>     - locked <70d32489> (a org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
>>
>> Should this be causing blocking in a high-throughput API? Do you have any
>> pointers on how we could solve this issue?
>>
>> Thanks.
>>
>>
>>
>>
>
>


Re: Ignite "bugs" ?

2016-01-13 Thread Matt Hoffman
No, I understand that you can't control what other projects do, and it's
not always possible to fix bugs at their source.

I imagine the best thing for Ignite to do in this case is to iterate
through the properties and just ignore anything that isn't a String. It
wouldn't be meaningful when deserialized anyway.
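
In the meantime, the strip-and-restore workaround Yann describes below can be
as small as this sketch (the class and method names are made up):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class SafeIgniteStart {
    /** Hypothetical helper: strip non-String system properties, start Ignite,
     * then put the offending entries back for TomEE/Hibernate to find. */
    public static Ignite startWithCleanProps() {
        Properties sysProps = System.getProperties();
        Map<Object, Object> stash = new HashMap<>();

        // Properties is a Hashtable, so guard the iteration against
        // concurrent modification while collecting the bad entries.
        synchronized (sysProps) {
            for (Map.Entry<Object, Object> e : sysProps.entrySet()) {
                if (!(e.getKey() instanceof String) || !(e.getValue() instanceof String))
                    stash.put(e.getKey(), e.getValue());
            }
        }

        for (Object key : stash.keySet())
            sysProps.remove(key);

        try {
            return Ignition.start();
        }
        finally {
            sysProps.putAll(stash); // restore whatever was removed
        }
    }
}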

On Wed, Jan 13, 2016 at 10:26 AM, Yann BLAZART <
yann.blaz...@externe.bnpparibas.com> wrote:

> Well, TomEE is using System.properties to store its IVmContext… I can't
> change all the open source projects on this point, even if I agree with you.
>
> I have only one Ignite instance started in this case. This is really
> strange.
>
>
>
> From: Matt Hoffman [mailto:m...@mhoffman.org]
> Sent: Wednesday, January 13, 2016 15:31
> To: user@ignite.apache.org
> Subject: Re: Ignite "bugs" ?
>
>
>
> Using System.getProperties() to store non-Strings is really poor behavior.
> It violates the contract of System.setProperty and System.getProperty, as
> well as the contract of the Properties object itself, stated right at the
> top of the javadoc: "Each key and its corresponding value in the property
> list is a string."
>
> I've hit something like that in Hibernate before, years ago; it's a
> long-standing bug there, and I'm surprised they haven't fixed it yet. Not
> to say Ignite shouldn't have a workaround for badly-behaved libraries that
> do things like that, but it's definitely a Hibernate bug.
>
>
>
> Someone else will have to speak to the locking behavior you're seeing. Are
> you starting Ignite more than once in parallel in that case?
>
>
>
> On Wed, Jan 13, 2016 at 8:23 AM, Yann BLAZART <
> yann.blaz...@externe.bnpparibas.com> wrote:
>
> Hello everybody.
>
> I'm currently evaluating Ignite vs Hazelcast in a PoC.
>
> I'm facing some issues. I'm writing some integration/unit tests using the
> ApplicationComposer of TomEE 7.0.0. I have no problem with Hazelcast there.
>
> - The first one: when I start Ignite in the ApplicationComposer (with a CDI
> @Produces), Ignite complains about some System.properties entries that are
> not Strings. It tries to "serialize" System.properties by using
> System.getProperties().store(new PrintWriter(sw));
>
> In fact, TomEE (like other frameworks such as Hibernate) uses
> System.properties to store non-String objects. So I made something to
> remove these properties before Ignite starts and restore them afterwards.
> Perhaps it would be nice to change the way System.properties is
> "serialized".
>
> - The second problem is strange, very strange. If I call Ignite.start in
> @Before, or use @Inject in the test class (which uses the @Produces method
> that calls start()), everything is OK. But if start is called in the @Test
> method (so after ApplicationComposer has done some things), Ignite is
> "locked". Precisely, in IgniteKernel.java:917:
>
> // Start discovery manager last to make sure that grid is fully initialized.
> startManager(discoMgr);
>
> The call to this method never exits.
>
> Any ideas to help me understand? Has anybody tried to use Ignite with
> Java EE or CDI?
>
> Regards
>
>


Changing node attributes at runtime

2015-09-30 Thread Matt Hoffman
This was asked about a month ago, but the discussion ended up going a
different direction. I have a use case involving targeting computation to
specific nodes, where the most natural answer _seems_ to be changing node
attributes at runtime. I'm aware that node attributes can't currently be
changed at runtime; I'm curious (a) whether there is a technical limitation
that prevents this from being supported, and (b) whether there's perhaps a
better way for me to solve my problem.

I have a cluster of nodes, each of which can have a list of tags indicating
which jobs should run on it. So I would like to be able to target jobs to
only those nodes that have a particular tag.
However, users can edit which tags apply to which nodes at runtime through
a UI, and I can't restart nodes when tags are edited. I'm flexible about
how I store the tags -- I could keep them in a cache or another central
store, for example. So the only alternative I can see to runtime-editable
attributes is to keep a map of tags to cluster node IDs in a central
location and explicitly build a ClusterGroup from it when launching compute
jobs (sketched below). Does that sound reasonable? Is there a better way to
handle this kind of thing?


Thanks,


Matt
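
A minimal sketch of that central-store approach, assuming a hypothetical
replicated cache named "nodeTags" that maps each tag to the IDs of the nodes
currently carrying it:

import java.util.Set;
import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.lang.IgniteRunnable;

public class TaggedCompute {
    /** Runs a job on every node currently mapped to the given tag. */
    public static void runOnTag(Ignite ignite, String tag, IgniteRunnable job) {
        // Hypothetical replicated cache of tag -> node IDs, edited by the UI at runtime.
        IgniteCache<String, Set<UUID>> tags = ignite.cache("nodeTags");

        Set<UUID> nodeIds = tags.get(tag);
        if (nodeIds == null || nodeIds.isEmpty())
            return; // no node carries this tag right now

        // Build the group explicitly from the stored IDs, since node
        // attributes can't change after a node joins.
        ClusterGroup grp = ignite.cluster().forNodeIds(nodeIds);

        ignite.compute(grp).broadcast(job);
    }
}

Because the ClusterGroup is rebuilt from the cache on every launch, tag edits
made through the UI take effect on the next job without restarting any nodes.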