)
This will even give us the ability to use MurmurHash3 in 5.0 when we have it.
WDYT?
--
Bela Ban
Lead JGroups / Clustering Team
JBoss
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
()
(MembershipArithmetic.getMembersJoined()) - which return all affected
members, not just the first one - but these seem unused for some reason.
I'll investigate. Thanks for pointing this out.
Cheers
Manik
On 31 Jan 2011, at 14:25, Bela Ban wrote:
I see that these method calls
are distributed over the entire virtual cluster, although
this doesn't currently happen. I'll bug you next week about this...
2000 2000)
Cluster size: 5 - ( 2000 2000 2000 2000 2000)
Cluster size: 7 - ( 2000 2000 2000 2000 2000 2000 2000)
Cluster size: 9 - ( 2000 2000 2000 2000 2000 2000 2000 2000 2000)
[1] this can now be obtained by running ./bin/dist.sh (just added).
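The even spread above (2000 keys per node at every cluster size) is what such a distribution check measures. A minimal, self-contained sketch of the idea; the hash here is a trivial stand-in, not the consistent hash or MurmurHash3 discussed in this thread:

```java
import java.util.HashMap;
import java.util.Map;

public class DistCheck {
    // Hash `keys` keys onto `nodes` buckets and count keys per bucket,
    // roughly what a script like dist.sh would report per cluster size.
    static Map<Integer, Integer> distribute(int keys, int nodes) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int i = 0; i < keys; i++) {
            int hash = i * 31 + 7;                  // stand-in hash function
            counts.merge(Math.floorMod(hash, nodes), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(distribute(10_000, 5)); // 2000 keys per node
    }
}
```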
If there's a chance, I'd love to see Mircea's changes wrt consistent
hash broadcasting in a CR2...
On 2/10/11 2:44 PM, Sanne Grinovero wrote:
Hello all,
are there imminent plans for a release of 4.2.1, and are we going to see a
CR2 ?
Regards,
Sanne
the cluster ? Staggered, or all at once ?
https://issues.jboss.org/browse/JGRP-1307
-largeclusters.xml Monday, let's
try with that setup then.
=true use_flush_if_present=false/
UFC max_credits=2FRAG000 min_threshold=0.20/
MFC max_credits=200 min_threshold=0.20/
FRAG2 frag_size=200 /
<pbcast.STREAMING_STATE_TRANSFER/>
<!-- <pbcast.STATE_TRANSFER/> -->
<pbcast.FLUSH timeout="0"/>
</config>
<!-- <pbcast.STATE_TRANSFER/> -->
<pbcast.FLUSH timeout="0"/>
</config>
-Original Message-
From: infinispan-dev-boun...@lists.jboss.org
[mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Bela Ban
Sent: Saturday, March 19, 2011 1:15 PM
To: infinispan-dev@lists.jboss.org
vexes my cache
usage as well. Is there a wiki or JIRA I could look at to understand the
fundamental differences?
Thanks,
Erik
-Original Message-
From: infinispan-dev-boun...@lists.jboss.org
[mailto:infinispan-dev-boun...@lists.jboss.org] On Behalf Of Bela Ban
Sent: Wednesday
config should be less painful now, with the introduction of
view bundling: we need to run flush fewer times than before.
[1] http://community.jboss.org/wiki/TestingJBoss
helpful !
...but I've implemented the prototype of this solution in Appia, as I
knew it much better than JGroups. What do you think about integrating a
similar mechanism in JGroups' ergonomics?
+1. Definitely something I've been wanting to do...
that timestamp.
This would be particularly efficient in situations where you have to quickly
restart a machine for whatever reason and so the deltas are very small, or
when the caches are big and state transfer would cost a lot from a bandwidth
perspective.
buffers in JGroups, but got
better perf when I simply copied the buffer.
Plus the reservoir sampling's complexity is another source of bugs...
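For reference, the simple-copy alternative is essentially a one-liner; a sketch, assuming the usual buf/offset/length triple that gets passed around with a message payload:

```java
import java.util.Arrays;

public class BufferCopy {
    // Defensive copy of the payload region of a larger buffer: the receiver
    // gets its own array, so the transport can reuse or overwrite the
    // original buffer safely afterwards.
    static byte[] copyPayload(byte[] buf, int offset, int length) {
        return Arrays.copyOfRange(buf, offset, offset + length);
    }

    public static void main(String[] args) {
        byte[] wire = {0, 1, 2, 3, 4, 5};
        byte[] payload = copyPayload(wire, 2, 3);
        System.out.println(Arrays.toString(payload)); // [2, 3, 4]
    }
}
```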
that - whatever you guys do - measure the impact on
performance and memory usage. As I said before, my money's on simple
copying... :-)
expanded...
For example, would you guys be able to guess the buffer sizes of
Infinispan used in JBoss AS ? We're placing not just session data, but
all sorts of crap into the cache, so I for one wouldn't be able to even
give you a best estimate...
track of
individual caches...
Why are we actually using JGroups' state transfer with replication, but
use our own state transfer with distribution ?
Opinions ?
the state
transfer you already use for state transfer in distribution mode.
until June 11th.
Cheers,
)?
Thoughts?
allows for purging of the message.
This happens for sent messages only.
Just copy the damn buffer and give it to me $@$#^%$#^%^$
Simple. Performant. Reliable.
:-)
On 6/14/11 5:42 PM, Sanne Grinovero wrote:
2011/6/14 Galder Zamarreño <gal...@redhat.com>:
On Jun 14, 2011, at 1:24 PM, Manik Surtani wrote:
On 14 Jun 2011, at 12:15, Bela Ban wrote:
+1
/PartialStateTransfer.txt
not fit nicely with higher
level abstractions like substates? We use partial state transfer in
Infinispan and we need to address this.
What is the time frame here? 5.0 final release?
Vladimir
On 11-06-15 11:39 AM, Bela Ban wrote:
I looked into adding partial state transfer back into JGroups
if this condition changes (e.g. more nodes are added), we'd
have to switch back...
partial state.
)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:891)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:246)
at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:613)
at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)
--
Bela Ban
Lead JGroups (http://www.jgroups.org)
JBoss / Red Hat
into the virtual file
system.
Are there ways to plug that in? I saw something like StreamingMarshaller
that could decouple this from the actual GridFilesystem.
Yes, you could always do this at the Infinispan level (or even at the
level of GridFS, but that would mean duplicating functionality).
down, then start B ? B knows
it wants to join, but doesn't see A. A and B will only see each other
when the switch comes back up.
Of course, we always strive to have regular joins, not merges, but this
cannot always be achieved.
, e.g. how do conflicting removes
and changes get handled, tombstones ?
This is something we should discuss separately from the new rebalancing
code. Note that this also applies to replication, not only distribution.
idea is that if numOwners >= clusterSize,
when a node joins you practically have state transfer.
I see you still haven't given up on your old idea... :-)
have that window where cluster 1 fails and writes haven't
propagated to cluster 2. Same problem with Erik's Cassandra-based solution.
Is this level of inconsistency acceptable?
On 26 Jul 2011, at 12:21, Bela Ban wrote:
On 7/25/11 6:06 PM, Erik Salter wrote:
Hi all,
Bela was kind
instead a last good CH for each partition, and each node
will determine whether to push a key or not based on the CH of its
partition and the CH of the united cluster.
Cheers
Dan
be sent in this case: the current
coordinator should add the LEAVE(s) into its queue, as if it indeed did
receive 2 LEAVE requests. This may or may not lead to rebalance,
depending on the rebalance policy (e.g. JOINS are queued, but LEAVES and
MERGES trigger a rebalance immediately).
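The policy described above (JOINs are queued, LEAVEs and MERGEs trigger a rebalance immediately) can be sketched as follows; all names here are illustrative, not actual Infinispan API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RebalancePolicy {
    enum Event { JOIN, LEAVE, MERGE }

    private final Queue<Event> pending = new ArrayDeque<>();
    private int rebalances;

    // JOINs are only queued; LEAVEs and MERGEs flush the queue and
    // trigger a rebalance right away.
    void onEvent(Event e) {
        pending.add(e);
        if (e == Event.LEAVE || e == Event.MERGE) {
            rebalance();
        }
    }

    private void rebalance() {
        pending.clear();       // all queued events handled in one rebalance
        rebalances++;
    }

    int rebalanceCount() { return rebalances; }
}
```

A queued JOIN is thus handled for free by the next LEAVE- or MERGE-triggered rebalance instead of causing one of its own.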
FYI
Original Message
Subject: [jgroups-dev] Incompatible change in 3.1.0
Date: Mon, 28 Nov 2011 17:36:49 +0100
From: Bela Ban <bela...@yahoo.com>
To: jg-users <javagroups-us...@lists.sourceforge.net>, jg-dev
<javagroups-developm...@lists.sourceforge.net>
I'm working on an issue
good to break such API in 3.1, should have been done in 3.0 :)
No shit ? That's why I sent this email !$%#$ :-)
On Nov 28, 2011, at 5:38 PM, Bela Ban wrote:
FYI
Original Message
Subject: [jgroups-dev] Incompatible change in 3.1.0
Date: Mon, 28 Nov 2011 17:36:49 +0100
OK, I changed this. Infinispan master and jboss-as master don't object;
obviously the only method used was Message.setFlag() and
RequestOptions.setFlags()...
On 11/29/11 12:04 PM, Manik Surtani wrote:
On 29 Nov 2011, at 10:43, Tristan Tarrant wrote:
On 11/29/2011 11:28 AM, Bela Ban wrote
, ...);
Another good one; done !
. This was an oversight because I have the
same API in RequestOptions...
On 11/29/11 3:50 PM, Galder Zamarreño wrote:
On Nov 29, 2011, at 2:23 PM, Bela Ban wrote:
On 11/29/11 2:10 PM, Galder Zamarreño wrote:
Hi,
We've been having a discussion this morning with regards to the Hot Rod
changes introduced in 5.1 with regards to hashing.
When Hot Rod server
byte) string then ?
similar in JGroups, take a look at
ENCRYPT.byteArrayToHexString().
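A typical byte-array-to-hex conversion of the kind ENCRYPT.byteArrayToHexString() performs looks like this (a sketch of the technique, not the actual JGroups implementation):

```java
public class Hex {
    private static final char[] HEX = "0123456789ABCDEF".toCharArray();

    // Converts each byte into two hex characters: high nibble, then low nibble.
    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(HEX[(b >> 4) & 0x0F]).append(HEX[b & 0x0F]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHex(new byte[]{(byte) 0xCA, (byte) 0xFE})); // CAFE
    }
}
```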
the logging at runtime (e.g. via probe.sh).
My current philosophy is that I do use the if(log.isTraceEnabled()
log.trace() pattern, but I use it wisely, and I try not to use this in
tight loops, or frequently traversed code paths.
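The guarded-trace pattern being discussed looks like this; java.util.logging is used here as a stand-in for the JGroups logging interface:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedTrace {
    static final Logger log = Logger.getLogger(GuardedTrace.class.getName());

    static void handleMessage(Object msg) {
        // Guard the trace call so the message string is only built when
        // tracing is actually enabled; per the advice above, avoid even
        // the guarded form in tight loops or hot code paths.
        if (log.isLoggable(Level.FINEST)) {
            log.finest("received message " + msg);
        }
    }
}
```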
1/10 of a %. Just saying :)
implement
solution #1, then the chances of this happening are vastly reduced.
I wanted to ask the bright folks on this list though, if you see a
solution that only involves Infinispan (rebalancing) ?
Cheers,
[1] https://issues.jboss.org/browse/JGRP-1401
to the project's code style
(currently, some vars are named like default_chunk_size); also there's
probably no reason against me making fields in GridOutputStream private?
Go ahead; this is my K&R style (I refuse to adopt the var/field naming
conventions of Java !) :-)
, I recall you sending the update for K as 3 unicasts: 1 to A, 1 to
B and 1 to Z. The one to Z is relayed by A to X, and sent from X to Z.
WDYT?
Cheers
Manik
On 14 Dec 2011, at 11:51, Bela Ban wrote:
Tobias made me aware of a bug that can happen when we use RELAY and one
of the relay
) {
System.out.println("Exception occurred after " +
(System.currentTimeMillis() - startTime) + " ms");
e.printStackTrace();
}
}
}
!
If you commit it on an experimental branch, I'll give it a preview run ..
The branch is JGRP-1396-2, the class is Table. There is a stress test
called TableStressTest (you can compare it to
NakReceiverWindowStressTest and RingBufferStressTest).
a test with the experimental 3.1 first
though, so we can see if this makes a difference at all...
On 19 Jan 2012, at 16:29, Bela Ban wrote:
On 1/19/12 11:45 AM, Sanne Grinovero wrote:
On 19 January 2012 09:59, Bela Ban <b...@redhat.com> wrote:
It would be interesting to see the numbers
it on the same thread
used for sending it. This way we would avoid re-ordering and potentially
reduce thread's context switching.
Wdyt?
[1] http://bit.ly/yNk6In
On 1/19/12 6:36 PM, Galder Zamarreño wrote:
On Jan 19, 2012, at 3:43 PM, Bela Ban wrote:
This may not give you any performance increase:
#1 In my experience, serialization is way faster than de-serialization.
Unless you're doing something fancy in your serializer
No. I think Mircea
the same thing as anycast+GroupRequest.
No, parallel unicasts will be faster, as an anycast to A,B,C sends the
unicasts sequentially
On 1/25/12 12:58 PM, Mircea Markus wrote:
On 25 Jan 2012, at 09:42, Bela Ban wrote:
On 1/25/12 9:51 AM, Dan Berindei wrote:
Slightly related, I wonder if Manik's comment is still true:
if at all possible, try not to use JGroups' ANYCAST for now.
Multiple (parallel) UNICASTs
ma...@jboss.org
twitter.com/maniksurtani
Lead, Infinispan
http://www.infinispan.org
threads?
Both regular and OOB message are added to the *same* receive window on
the receiver side, so there are definitely contention points between them...
with vnodes enabled.
Manik, we're using VNodes in our performance tests. The proposal is if
we can provide a good default value, as the feature is currently
disabled by default.
yes.
On 1/27/12 12:13 PM, Manik Surtani wrote:
On 25 Jan 2012, at 09:42, Bela Ban wrote:
No, parallel unicasts will be faster, as an anycast to A,B,C sends the
unicasts sequentially
Is this still the case in JG 3.x?
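The sequential-vs-parallel distinction above can be sketched outside JGroups; `send` here is a hypothetical stand-in for one unicast plus its reply, and the point is only that parallel dispatch makes latency the slowest single send rather than the sum of all sends:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelUnicast {
    // Hypothetical stand-in for sending one unicast and awaiting the reply.
    static String send(String dest, String msg) {
        return dest + ":" + msg;
    }

    // Anycast done as one unicast after another: latency is the sum.
    static List<String> sendSequential(List<String> dests, String msg) {
        List<String> out = new ArrayList<>();
        for (String d : dests) out.add(send(d, msg));
        return out;
    }

    // Parallel unicasts: each send runs as its own task, so latency is
    // roughly that of the slowest single send.
    static List<String> sendParallel(List<String> dests, String msg) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(dests.size());
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String d : dests) futures.add(pool.submit(() -> send(d, msg)));
            List<String> out = new ArrayList<>();
            for (Future<String> f : futures) out.add(f.get());
            return out;
        } finally {
            pool.shutdown();
        }
    }
}
```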
, and see how long it takes before it's
relatively quiet.
used for UPerf.
Cheers,
On 1/27/12 3:12 PM, Mircea Markus wrote:
On 27 Jan 2012, at 07:03, Bela Ban wrote:
Regarding ISPN-1786, I'd like to work with Sanne/Mircea on trying out
the new UNICAST2. In my local tests, I got a 15% speedup, but this is
JGroups only, so I'm not sure how big the impact
On 1/27/12 4:26 PM, Mircea Markus wrote:
On 27 Jan 2012, at 15:08, Bela Ban wrote:
Build the JGroups JAR with ./build.sh jar, *not* via maven !
I attached the JAR for you.
Thanks!
JGroups *is* and *will remain* JAR less ! :-)
Sorry for losing the faith :)
Might make sense to have
the possibility of adding this to Infinispan and
providing memory-size-based eviction, WDYT?
The performance impact would need to be measured too.
EhCache has apparently done something similar but from what I heard, it's
full of hacks to work on different platforms...
On 1/31/12 11:32 AM, Tristan Tarrant wrote:
On 01/31/2012 10:34 AM, Bela Ban wrote:
The approach I've recommended before is to trigger an eviction policy
based on free/available memory. This can easily be fetched from the JVM
via JMX...
And maybe you're just close to a large GC and you're
this out and I heard some impls
don't provide this...
On 1/31/12 3:32 PM, Tristan Tarrant wrote:
On 01/31/2012 03:32 PM, Bela Ban wrote:
IIRC you can look at the size of the young and old generation via JMX,
and there you can see how much memory has accumulated.
Does that work when using G1
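Reading heap occupancy via JMX is indeed straightforward; a sketch of the kind of check an eviction trigger could use (the 0.8 threshold and the trigger itself are made up for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapWatch {
    // Current heap utilization read via JMX. If -Xmx is undefined
    // (max == -1), fall back to the committed size.
    static double heapUtilization() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long max = heap.getMax() > 0 ? heap.getMax() : heap.getCommitted();
        return (double) heap.getUsed() / max;
    }

    public static void main(String[] args) {
        double used = heapUtilization();
        System.out.printf("heap utilization: %.1f%%%n", used * 100);
        if (used > 0.8) {                 // hypothetical trigger threshold
            System.out.println("would trigger eviction");
        }
    }
}
```

As the thread notes, a high reading may just mean a full GC is imminent, so a real trigger would need to account for collector behavior.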
.
You could also enable tracing for PING:
probe.sh op=PING.setLevel[trace]
,
Sanne
[1] http://www.jgroups.org/manual-3.x/html/protlist.html#MERGE3
On 2/4/12 5:53 PM, Manik Surtani wrote:
On 2 Feb 2012, at 07:53, Bela Ban wrote:
I can also reproduce it by now, in JGroups: I simply create 12 members
in a loop...
Don't need the bombastic Transactional test
Yup; Transactional was made to benchmark and profile 2-phase transactions
of the buffer itself...
that (for a different problem) ?
Or pointers to the source code or tests would be appreciated as well.
There are 2 tests in JGroups: MuxRpcDispatcherTest and
MuxMessageDispatcherTest, both written by PaulF.
. state transfer, transactions
hotrod, jgroups etc. Can you please take a look and comment?
[1] https://community.jboss.org/wiki/CrossDatacenterReplication-Design
up option #2 and option #1 in my previous email)
for B, 1 for A and 1 for C on B.
,
Re: https://community.jboss.org/message/720598#720598
This shows once again that we need to address
https://issues.jboss.org/browse/ISPN-1441 asap cos it makes us look weak out of
the box when people try to run performance tests like this.
We should bring this forward to 5.2
/protocols/SEQUENCER.html
in the stack for certain configurations.
AFAIR, COUNTER would not affect the stack unless invoked explicitly
Same for SEQUENCER
Wait ! SEQUENCER *will* add total order to all multicast messages, so
this does change semantics !
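For illustration, adding total order means inserting SEQUENCER into the protocol stack, e.g. (a fragment only; its position within the actual stack matters, and attribute names should be checked against the JGroups manual):

```xml
<!-- provides total order for all multicast messages -->
<SEQUENCER/>
```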
with all stakeholders (e.g.
Paul for the AS), and come up with agreed-upon sets of configs, to be
used perhaps over a shared transport. I reconsider my previous statement
and now think it's best to avoid hidden insertions of protocols.
configuration
- the performance penalty of having the SEQUENCER for caches that don't need
it, e.g. not tx caches, is minimal by the look of the code
How does it sound?
Intuitively, not nice ! Can't we have different configurations for these
use cases ?
looking at the config, doesn't know that we're secretly bypassing SEQUENCER.
we only want to send keys to nodes that
actually need to store them...
Thoughts ?
--
Bela Ban, JGroups lead (http://www.jgroups.org)
of Infinispan (and JGroups).
On 3/14/12 1:14 AM, Manik Surtani wrote:
On 13 Mar 2012, at 03:28, Bela Ban wrote:
On 3/13/12 6:35 AM, Manik Surtani wrote:
On 12 Mar 2012, at 08:03, Dan Berindei wrote:
Well, probably not, because we only want to send keys to nodes that
actually need to store them...
Sending
iterating the data container we need to wait for all the
pending commands to finish.
Can't we queue the state transfer requests *and* the regular requests ?
When ST is done, we apply the ST requests first, then the queued regular
requests.
(maps to A:B), K2 (maps to B:C), K5 (maps to E:A)
Do we send 3 anycasts A:B, B:C and E:A or do we send 1 anycast A:B:C:E ?
I think it's the latter, but please confirm...
On 3/12/12 7:13 PM, Pedro Ruivo wrote:
On 3/10/12 5:07 PM, Bela Ban wrote:
If so, then I can assume that a transactional modification touching a
number of keys will almost always touch *all* nodes ? Example:
- We have 10 nodes
- numOwners = 2
- If we have a good consistent hash, I can
ok
On 3/15/12 3:45 PM, Pedro Ruivo wrote:
Hi,
Comment inline.
Cheers,
Pedro
On 3/15/12 8:34 AM, Bela Ban wrote:
On 3/12/12 7:13 PM, Pedro Ruivo wrote:
On 3/10/12 5:07 PM, Bela Ban wrote:
If so, then I can assume that a transactional modification touching a
number of keys will almost
they ? Pedro ? If not, then there needs to be an implementation of
ST for TO.
Cheers,
,
then TX1 acquiring locks on B and TX2 acquiring locks on A.
(or both) will be aborted by a timeout. Is this
behavior correct? Am I missing something?
This is correct, and shows why we have collisions (deadlocks and then
backoffs) with our current 2PC scheme...
the V8 with a leaver, and we replace the state
transfer in progress with another state transfer V5-V8.
OK
we expect
large messages to be in the retransmission table. If you guys go with
UNICAST2, let me show you how to do this (contact me offline)...
Cheers,