FYI I've created a JIRA to track this: https://issues.jboss.org/browse/ISPN-2950
Whilst quite a performance issue, I don't think that this is a
critical/consistency issue for async stores: by using an async store you might
lose data (expect inconsistencies) during a node crash anyway, so what
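For illustration, a store is made async ("write-behind") roughly like this
under the Infinispan 5.x XML schema; the FileCacheStore and path here are
hypothetical, not from this thread:

    <loaders>
       <loader class="org.infinispan.loaders.file.FileCacheStore">
          <properties>
             <property name="location" value="/var/lib/infinispan/store"/>
          </properties>
          <!-- write-behind: puts are queued and flushed by background
               threads, so writes still queued when a node crashes are lost -->
          <async enabled="true"/>
       </loader>
    </loaders>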
Hi all,
Thanks for the help with this issue. I thought I'd just clarify that
the situation is pretty much resolved (or worked around) for me now by
use of the clusterLoader. I'll watch the JIRA issue and be sure to try
again without a clusterLoader when that's taken care of at some point.
Best,
Hi all,
So, in my previous update it seems I had numOwners=2, but was only
using two servers. Therefore, what I was seeing made complete sense,
actually. After changing numOwners to 1, distribution appears to work
as expected with that clusterLoader added to the config as suggested.
Thanks for
Hi James,
By specifying the LuceneCacheLoader as a loader for the default cache, it will
be added to both the lucene-index (where it is needed) and the other two caches
(lucene-metadata and lucene-locks) - where I don't think it is needed. I think
it should only be configured for the lucene-index
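In config terms the suggestion is roughly this (a sketch, assuming the 5.x
XML schema; the location path is hypothetical): declare the loader only on
lucene-index rather than letting every cache inherit it from the default:

    <namedCache name="lucene-index">
       <loaders>
          <loader class="org.infinispan.lucene.cachestore.LuceneCacheLoader">
             <properties>
                <property name="location" value="/path/to/indexes"/>
             </properties>
          </loader>
       </loaders>
    </namedCache>
    <!-- lucene-metadata and lucene-locks: no loader declared -->

(As James notes below, the metadata cache turned out to need the loader
after all.)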
Hi,
Thanks for the tips, but I think there will be a couple of issues:
* By mistake, I actually tried creating the lucene-metadata cache
without the loader to start with, and the Directory is unusable
without it, as it isn't able to list the index files when Lucene's
IndexReader asks for them.
On 16 Mar 2013, at 01:19, Sanne Grinovero wrote:
Hi Adrian,
let's forget about Lucene details and focus on DIST.
With numOwners=1 and having two nodes the entries should be stored
roughly 50% on each node; I see nothing wrong with that,
considering you don't need data failover in a read-only
Mircea,
what I was most looking forward to was your comment on the interceptor
order generated for DIST+cachestores
- we don't think the ClusteredCacheLoader should be needed at all
- each DIST node is loading from the CacheLoader (any) rather than
loading from its peer nodes for non-owned entries
James, to work around ISPN-2938
you could use preload=true on the lucene-index cacheloader, and
preload=false on the lucene-metadata cacheloader.
Not particularly critical, but would save you a bunch of memory.
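A sketch of that workaround, assuming the 5.x schema where preload is an
attribute of the loaders element (loader declarations elided):

    <namedCache name="lucene-index">
       <loaders preload="true">
          <!-- LuceneCacheLoader as before -->
       </loaders>
    </namedCache>
    <namedCache name="lucene-metadata">
       <loaders preload="false">
          <!-- LuceneCacheLoader as before -->
       </loaders>
    </namedCache>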
Sanne
On 19 March 2013 14:12, Sanne Grinovero sa...@infinispan.org wrote:
Hi Sanne
On Tue, Mar 19, 2013 at 4:12 PM, Sanne Grinovero sa...@infinispan.org wrote:
Mircea,
what I was most looking forward to was your comment on the interceptor
order generated for DIST+cachestores
- we don't think the ClusteredCacheLoader should be needed at all
Agree,
On 19 Mar 2013, at 16:15, Dan Berindei wrote:
Hi Sanne
On Tue, Mar 19, 2013 at 4:12 PM, Sanne Grinovero sa...@infinispan.org wrote:
Mircea,
what I was most looking forward to was your comment on the interceptor
order generated for DIST+cachestores
- we don't think the
Implementation-wise, just changing the interceptor order is probably not
enough. If the key doesn't exist in the cache, the CacheLoaderInterceptor
will still try to load it from the cache store after the remote lookup, so
we'll need a marker in the invocation context to avoid the extra load
On 19 Mar 2013, at 17:38, Dan Berindei wrote:
Implementation-wise, just changing the interceptor order is probably not
enough. If the key doesn't exist in the cache, the CacheLoaderInterceptor
will still try to load it from the cache store after the remote lookup, so
we'll need a
Update:
I tried again - I think I misconfigured that ClusterCacheLoader on my
last attempt. With this configuration [1] it actually appears to be
loading keys over the network from the peer node. I'm seeing a lot of
network IO between the nodes when requesting from either one of them
(30-50
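The paste referenced as [1] isn't reproduced in this digest; purely as an
illustration, a lucene-index cache chaining a clusterLoader in front of the
LuceneCacheLoader might look like this under the 5.x schema (the timeout and
path are hypothetical):

    <namedCache name="lucene-index">
       <clustering mode="distribution">
          <hash numOwners="1"/>
       </clustering>
       <loaders>
          <!-- consulted first: fetch keys missing locally from peer nodes -->
          <clusterLoader remoteCallTimeout="500"/>
          <!-- consulted second: fall back to the index files on disk -->
          <loader class="org.infinispan.lucene.cachestore.LuceneCacheLoader">
             <properties>
                <property name="location" value="/path/to/indexes"/>
             </properties>
          </loader>
       </loaders>
    </namedCache>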
I'm glad you're finding a workaround for the disk IO but there should
be no need to use a ClusterCacheLoader,
the intention of that is to allow chaining multiple grids;
this is a critical problem IMHO.
Seems there are multiple other issues at hand, let me comment per bullet:
On 18 March
there should
be no need to use a ClusterCacheLoader,
I agree. This looked consistent w/ what I saw a couple weeks ago in a
different thread. Use of ClusterCacheLoader didn't make sense to me
either...
On Mar 18, 2013, at 5:55, Sanne Grinovero sa...@infinispan.org wrote:
I'm glad you're
Hey all,
OT
Seeing as this is my first post, I wanted to just quickly thank you
all for Infinispan. So far I'm really enjoying working with it - great
product!
/OT
I'm using the InfinispanDirectory for a Lucene project at the moment.
We use Lucene directly to build a search product, which has
Was the cache loader shared? Which cache loader were you using?
On Fri, Mar 15, 2013 at 8:03 AM, James Aley james.a...@swiftkey.net wrote:
Hey all,
OT
Seeing as this is my first post, I wanted to just quickly thank you
all for Infinispan. So far I'm really enjoying working with it - great
Hi Ray,
Yeah - I've tried with shared=true/false and preload=true/false. I'm
using org.infinispan.lucene.cachestore.LuceneCacheLoader.
Sorry, I also should have mentioned previously that I'm building from
master, as I need access to the Lucene v4 support.
James.
On 15 March 2013 15:31, Ray
Hi James,
I'm not an expert on InfinispanDirectory but I've noticed in [1] that
the lucene-index cache is distributed with numOwners = 1. That means
each cache entry is owned by just one cluster node and there's nowhere
else to go in the cluster if the key is not available in local memory,
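The setting in question, as a sketch under the 5.x schema:

    <namedCache name="lucene-index">
       <clustering mode="distribution">
          <!-- a single owner per entry: no backup copy on any other node -->
          <hash numOwners="1"/>
       </clustering>
    </namedCache>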
Apologies - forgot to copy list.
On 15 March 2013 15:48, James Aley james.a...@swiftkey.net wrote:
Hey Adrian,
Thanks for the response. I was chatting to Sanne on IRC yesterday, and
he suggested this to me. Actually the logging I attached was from a
cluster of 4 servers with numOwners=2.
Can you try adding a ClusterCacheLoader to see if that helps?
Thanks,
On Fri, Mar 15, 2013 at 8:49 AM, James Aley james.a...@swiftkey.net wrote:
Apologies - forgot to copy list.
On 15 March 2013 15:48, James Aley james.a...@swiftkey.net wrote:
Hey Adrian,
Thanks for the response. I was
Not sure if I've done exactly what you had in mind... here is my updated XML:
https://www.refheap.com/paste/12601
I added the loader to the lucene-index namedCache, which is the one
I'm using for distribution.
This didn't appear to change anything, as far as I can see. Still
seeing a lot of disk IO
Hi Adrian,
let's forget about Lucene details and focus on DIST.
With numOwners=1 and having two nodes the entries should be stored
roughly 50% on each node; I see nothing wrong with that,
considering you don't need data failover in a read-only use case,
having all the index available in the shared