Yeah, something like that, but configuring before starting the cache manager.
Bearing in mind my limited knowledge, shouldn't a Lucene directory
implementation always be configured with LuceneKey2StringMapper?
IOW, couldn't the lifecycle callback implementation be clever enough to
On Jul 4, 2011, at 11:25 AM, Sanne Grinovero wrote:
I agree they don't make sense, but only in the sense of exposed API
during a transaction: some time ago I admit I was expecting them to
just work: the API is there, nice public methods in the public
interface with javadocs explaining that
The LuceneKey2StringMapper is not mandatory; it is an optional
optimization which applies only when a string-based CacheStore is
enabled.
I'm confused about what you mean by a lifecycle callback. I'm
assuming that the cache manager is started with the application
server, while the
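For reference, wiring the mapper in is a one-property change on the string-based store. A sketch against the 5.x-era XML schema; the property name is from memory and should be treated as an assumption to verify against the store's javadoc:

```xml
<loaders>
   <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore">
      <properties>
         <!-- assumed property name; check the store's documentation -->
         <property name="key2StringMapperClass"
                   value="org.infinispan.lucene.LuceneKey2StringMapper"/>
      </properties>
   </loader>
</loaders>
```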
On Tue, Jul 5, 2011 at 12:49 PM, Sanne Grinovero sa...@infinispan.org wrote:
2011/7/5 Dan Berindei dan.berin...@gmail.com:
After all, Sanne has two use cases for atomic operations: sequences and
reference counts. Sequences can and should happen outside
transactions, but as we discussed on the
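A sequence built on the atomic replace contract can indeed live entirely outside transactions. A hedged sketch against a plain ConcurrentMap, which exposes the same compare-and-swap replace(key, old, new) contract as Cache#replace; the class and names are mine, not Infinispan's:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: a sequence generator built on the
// replace(key, expected, newValue) CAS primitive, run outside any
// transaction so the CAS is not masked by repeatable read.
public class SequenceSketch {
    private final ConcurrentMap<String, Long> map;
    private final String key;

    public SequenceSketch(ConcurrentMap<String, Long> map, String key) {
        this.map = map;
        this.key = key;
        map.putIfAbsent(key, 0L); // seed the counter if absent
    }

    public long next() {
        for (;;) {
            Long current = map.get(key);
            if (map.replace(key, current, current + 1)) {
                return current + 1;
            }
            // lost the race: someone else advanced the sequence; retry
        }
    }

    public static void main(String[] args) {
        SequenceSketch seq = new SequenceSketch(new ConcurrentHashMap<>(), "seq");
        System.out.println(seq.next()); // 1
        System.out.println(seq.next()); // 2
    }
}
```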
On Tue, Jul 5, 2011 at 1:45 PM, Sanne Grinovero sa...@infinispan.org wrote:
That's an interesting case, but I wasn't thinking about that. So it
might become useful later on.
The refcount scenario I'd like to improve first is about garbage
collection of old unused index segments; we're
On Tue, Jul 5, 2011 at 1:39 PM, Sanne Grinovero sa...@infinispan.org wrote:
2011/7/5 Dan Berindei dan.berin...@gmail.com:
Here is a contrived example:
1. Start tx Tx1
2. cache.get(k) -> v0
3. cache.replace(k, v0, v1)
4. cache.get(k) -> ??
With repeatable read and suspend/resume around
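The four steps above can be modelled in plain Java. A toy sketch (not Infinispan's implementation) of why step 4 would still see v0 under repeatable read when the replace runs outside the transaction, e.g. via suspend/resume:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model of the anomaly: under repeatable read the transaction caches
// the first value it reads, so a replace() that bypasses the transaction
// is invisible to later get() calls inside that same transaction.
public class RepeatableReadSketch {
    static ConcurrentMap<String, String> store = new ConcurrentHashMap<>();
    static Map<String, String> txContext = new HashMap<>(); // per-tx read cache

    static String txGet(String k) {
        // repeatable read: the first read wins for the rest of the tx
        return txContext.computeIfAbsent(k, store::get);
    }

    public static void main(String[] args) {
        store.put("k", "v0");
        String first = txGet("k");               // 2. cache.get(k) -> v0
        store.replace("k", "v0", "v1");          // 3. replace, outside the tx
        String again = txGet("k");               // 4. cache.get(k) -> still v0
        System.out.println(first + " " + again); // prints "v0 v0"
    }
}
```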
Email summary:
Number of lines: 188
Number of useful lines (strict): 1 (0.53%)
Number of useful lines (contextual): 12 (6.3%)
Position of the useful line in the data: 57 (had to scroll 30% of the
data to find it, and the additional 70% as I was not sure another one wasn't
lost somewhere)
Hi guys
I wrote a plugin for the YCSB framework, which compares key/value stores. Care
to have a look at my impl and see what you think? I haven't actually run the
benchmark on anything more than my laptop yet, but it would be a good idea to
do so on our cluster lab and see how we fare
Good stuff, shame about the RPC count. ;)
On 5 Jul 2011, at 14:25, Mircea Markus wrote:
Hi,
This is the document describing the incremental optimistic locking Dan and
myself discussed last week:
http://community.jboss.org/wiki/IncrementalOptimisticLocking
Unless I'm missing something,
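The deadlock gain discussed in this thread generally comes down to acquiring per-key locks in one canonical order, so no two transactions can ever wait on each other in a cycle. A generic sketch of that technique only, as an illustration; it is not the actual design on the wiki page, and all names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Generic deadlock-free locking sketch: every transaction sorts its key
// set and acquires locks in that order, so circular waits are impossible.
public class OrderedLocking {
    private static final Map<String, ReentrantLock> locks = new HashMap<>();

    static synchronized ReentrantLock lockFor(String key) {
        return locks.computeIfAbsent(key, k -> new ReentrantLock());
    }

    // Acquire all locks for a transaction's keys in canonical order.
    static List<ReentrantLock> prepare(Collection<String> keys) {
        List<String> ordered = new ArrayList<>(keys);
        Collections.sort(ordered); // same order in every transaction
        List<ReentrantLock> held = new ArrayList<>();
        for (String k : ordered) {
            ReentrantLock l = lockFor(k);
            l.lock();
            held.add(l);
        }
        return held;
    }
}
```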
On 5 Jul 2011, at 15:39, Manik Surtani wrote:
Good stuff, shame about the RPC count. ;)
Yeah. Still very valid when there are deadlocks; I guess figures will tell us
more precisely what the gain is.
On 5 Jul 2011, at 15:47, Mircea Markus wrote:
On 5 Jul 2011, at 15:39, Manik Surtani wrote:
Good stuff, shame about the RPC count. ;)
Yeah. Still very valid when there are deadlocks; I guess figures will tell us
more precisely what the gain is.
Yup.
--
Manik Surtani
ma...@jboss.org
There's something about this file and the IntelliJ+Scala combo that drives it
into hyper mode after a handful of edits:
http://devnet.jetbrains.net/message/5308367#5308367
I was having this issue with 10.5, and upgrading to 10.5.1 and the latest
Scala plugin didn't solve it.
--
Galder Zamarreño
Sr.
Hi,
I have a completely in-line limit on the cache entries, built on a clock cache
that approximates an LRU cache and is extremely fast (O(1)). It is not a
strict LRU, but it chooses a not-recently-used item for removal. I'll
provide some more details soon.
I'm not sure how far
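A clock ("second chance") cache of the kind Alex describes can be sketched briefly. This is a generic illustration of the algorithm, not his implementation; each entry carries a reference bit set on access, and eviction sweeps a clock hand, clearing bits until it finds an entry whose bit is already clear:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a clock cache: approximate LRU, amortised O(1) per operation.
public class ClockCache<K, V> {
    private final int capacity;
    private final K[] keys;
    private final Object[] values;
    private final boolean[] referenced; // the "second chance" bits
    private final Map<K, Integer> index = new HashMap<>();
    private int hand = 0, size = 0;

    @SuppressWarnings("unchecked")
    public ClockCache(int capacity) {
        this.capacity = capacity;
        this.keys = (K[]) new Object[capacity];
        this.values = new Object[capacity];
        this.referenced = new boolean[capacity];
    }

    @SuppressWarnings("unchecked")
    public V get(K key) {
        Integer slot = index.get(key);
        if (slot == null) return null;
        referenced[slot] = true; // mark recently used
        return (V) values[slot];
    }

    public void put(K key, V value) {
        Integer slot = index.get(key);
        if (slot == null) {
            slot = (size < capacity) ? size++ : evict();
            index.put(key, slot);
            keys[slot] = key;
        }
        values[slot] = value;
        referenced[slot] = true;
    }

    // Sweep the hand, giving referenced entries a second chance, and
    // evict the first entry found with a clear reference bit.
    private int evict() {
        while (referenced[hand]) {
            referenced[hand] = false;
            hand = (hand + 1) % capacity;
        }
        int victim = hand;
        index.remove(keys[victim]);
        hand = (hand + 1) % capacity;
        return victim;
    }
}
```

Not a strict LRU: the victim is merely some entry not referenced since the hand last passed it, which is what makes the bookkeeping O(1).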
All,
Today I have spent some time tidying up the wiki and forum.
Wiki
--
* All documentation articles have been moved to the Infinispan Archive. This
space is not generally writable by the community. Contact me if an article has
been moved there by mistake
* The wiki home page is now the
On Tue, Jul 5, 2011 at 7:23 PM, Vladimir Blagojevic vblag...@redhat.com wrote:
Hey guys,
In the past few days I've looked around for how to squeeze every bit of
performance out of BCHM and particularly our LRU impl. What I did not
like about the current LRU is that a search for an element in the queue
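The queue-search cost mentioned above disappears if each entry links directly into the recency list; java.util.LinkedHashMap in access-order mode does exactly that, giving O(1) hit promotion and eviction. A minimal bounded-LRU sketch of that idea (not the BCHM implementation itself):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU: access order is maintained by the map's internal linked
// list, so no queue search is ever needed on a hit or an eviction.
public class BoundedLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedLru(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: get() moves entry to tail
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used head
    }
}
```

Note this is a plain single-threaded map; BCHM's whole point is doing something comparable per-segment under concurrency.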