Minor update: Dmitry and I have established an initial bridge connecting
OpenHFT's off-heap SharedHashMap to an ISPN7 DataContainer API view.
https://github.com/Cotton-Ben/infinispan (details found in [offheap]
module) TODO: tests.
Many thanks for this counsel, Tristan. Dmitry and I are taking our first
baby steps into learning the DataContainer internals.
We will exercise the consideration you mention - and exercise it
explicitly - to ensure our adaptation accommodates it.
In general, our approach to adapting
Tristan, does the ISPN7 API (or config) have a FluentBuilder mechanism via
which Cache instance A can be bound to DataContainer A and Cache instance B
can be bound to DataContainer B? Thx, Ben & Dmitry
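(We have not yet confirmed the exact fluent API in ISPN7 - the per-cache ConfigurationBuilder appears to expose a dataContainer() builder, which is worth verifying - but the binding idea itself can be sketched with simplified stand-in types. Everything below is illustrative, not the real ISPN interfaces.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-ins only -- NOT the real ISPN7 interfaces.
interface DataContainer {
    Object peek(Object key);
    void put(Object key, Object value);
}

class MapBackedContainer implements DataContainer {
    private final Map<Object, Object> store = new ConcurrentHashMap<>();
    @Override public Object peek(Object key) { return store.get(key); }
    @Override public void put(Object key, Object value) { store.put(key, value); }
}

// Fluent-style binding: each named cache gets its own DataContainer instance.
class CacheBindings {
    private final Map<String, DataContainer> byCacheName = new ConcurrentHashMap<>();

    CacheBindings bind(String cacheName, DataContainer container) {
        byCacheName.put(cacheName, container);
        return this; // fluent, so cache A and cache B can be bound in one chain
    }

    DataContainer containerFor(String cacheName) {
        return byCacheName.get(cacheName);
    }
}
```

The point of the sketch is only that the binding is per cache name, so entries written through cache A's container never appear in cache B's.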
Thanks Sanne.
Interesting how many penguins were involved:
1. openJDK (duke)
2. Linux (mascot)
3. Infinispan 5.3 (T.N.P)
;)
like the correct first steps?
https://github.com/Cotton-Ben/infinispan/blob/master/off-heap/src/main/java/org/infinispan/offheap/OffHeapDefaultDataContainer.java
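As a sanity check on those first steps, the delegation pattern in OffHeapDefaultDataContainer can be sketched with a ConcurrentHashMap standing in for the off-heap SharedHashMap. All class names below are hypothetical stand-ins, not the real OpenHFT or ISPN types:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for net.openhft.collections.SharedHashMap; the real one is off-heap.
class SharedMapStandIn {
    private final Map<String, byte[]> m = new ConcurrentHashMap<>();
    byte[] get(String k) { return m.get(k); }
    byte[] put(String k, byte[] v) { return m.put(k, v); }
    byte[] remove(String k) { return m.remove(k); }
    int size() { return m.size(); }
}

// Sketch of a DataContainer-like facade that delegates every operation
// to the shared map instead of holding entries on the JVM heap.
class OffHeapContainerSketch {
    private final SharedMapStandIn backing = new SharedMapStandIn();
    byte[] peek(String key) { return backing.get(key); }
    void put(String key, byte[] value) { backing.put(key, value); }
    boolean remove(String key) { return backing.remove(key) != null; }
    int size() { return backing.size(); }
}
```

The container owns no entry storage of its own; it is a thin adapter, which is why the DataContainer boundary looks like the right seam to cut at.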
ambition to directly implement JCACHE (in any form) from our
Fork. (See newly documented goals at
https://github.com/Cotton-Ben/infinispan .) We have also simply re-named
the new module [off-heap].
Thanks for your support. We are in full gear to adapt/test
As per Mircea, I moved my Fork's [jcache-off-heap] ISPN7 module to the new
name [off-heap]. We will now use the
org.infinispan.container.DataContainer API to cross the Red Hat bridge to
JCACHE.
https://github.com/Cotton-Ben/infinispan/commit/1ea7b859fabe181fa453f15ce1f746dd68691d32
https://github.com/Cotton-Ben/infinispan/commit/d9f408db051499b5d80dbc966fc72ab5c0e55075
We have established the following at my ISPN 7 fork (
https://github.com/Cotton-Ben/infinispan ):
1. Created a new [jcache-off-heap] module and bound it to master ISPN
7 pom.xml
2. Within [jcache-off-heap] created a pom.xml that successfully joins
OpenHFT 3.0a and ISPN 7.0.0 APIs
the DataContainer only,
and then configure Infinispan's JCache implementation to use that custom
DataContainer.
On Mar 3, 2014, at 3:46 PM, cotton-ben wrote:
Quick Update:
It is my understanding that Peter Lawrey will make available an OpenHFT HC
Alpha Release in Maven Central next weekend. At that time, Dmitry Gordeev
and I will take the OpenHFT ...
Thank you for this insight Mircea ...
Ultimately ... I want the OpenHFT SHM off-heap operand to behave *exactly*
like a JCACHE ... Amenable to being soundly/completely operated upon by
any/all parts of ISPN7's Impl of the JSR-107 API.
Musing openly: Won't that (eventually) necessitate me
6, 2014, at 6:24 PM, cotton-ben wrote:
FYI. https://github.com/OpenHFT/HugeCollections/issues/13
net.openhft.collections.SharedHashMap as a
Red Hat Infinispan 7 default impl of a fully JSR-107 interoperable off-heap
javax.cache.Cache ...
A diagram of this build effort can be found here:
https://raw.github.com/Cotton-Ben/OpenHFT/master/doc/AdaptingOpenHFT-SHM-as-JCACHE-Impl.jpg
...
The Red Hat view of this effort
/I may be mistaken, but I think the OpenHFT solution for using SHM as an
IPC transport has big advantages over using the NIO bridges to Off-Heap
capabilities. /
NIO.2 will be used for UDP and TCP only, I'm not talking about the shmem
transport.
Sent: Saturday, March 1, 2014 4:30 AM
To: cotton-ben
Subject: Re: [infinispan-dev] Musings on ISPN/JGRPs OSI transport choices
and ambitions
Hi Ben,
why don't you post an edited version of my private replies to you to
this topic as well, so we have some background ?
In a nutshell, you
Hi Mircea, Manik, Bela, et al.
I want to more publicly muse on this SUBJ line. Here now, then maybe in
ISPN /user/ forum, then maybe JSR-347 provider wide. I know we had a
semi-private (Bela led) exchange, but I want to be more public with this
conversation.
Long post again, sorry.
This is
Hi Tristan,
We are still waiting for an OpenHFT HugeCollections update before we start
keystroking its adaptation as an Off-Heap Impl of javax.cache.Cache (via the
ISPN DataContainer API bridge). We envision our OpenHFT-ISPN adaptation
effort to look something like the attached slide.
FYI, we've got all the "can we build this from w/in JPM.com?" plumbing
concerns 100% resolved. So now it is
Heap No! Heap No! It's off to work we go
https://github.com/Cotton-Ben/infinispan
Will share musings/fears/roadblocks/triumphs/etc here and at
https://issues.jboss.org/browse/ISPN
Hi everybody.
We are getting started with our POC design/build of this post's ambition.
Currently at an ISPN build-from-scratch newbie roadblock. I know I should
be patient, but if any of you have time could one of you hook me up with the
official "How 2 Fork/Clone/Extend/Build your own ISPN"
/ As Tristan hinted, can you share (worst case privately) the reasons that led
to that horrific experience? /
Thank you for this question.
The answer is simple: managed run-time Garbage Collection - despite its
elegances, many recent advances, and real potential and promise - has
/consistently/
Hi Yavuz, Tristan, et al.
I am extremely interested to learn if anything materialized from the
https://issues.jboss.org/browse/ISPN-871 effort.
If nothing materialized, I would like to take a stab at doing this,
specifically by doing the following:
0. Use Peter Lawrey's openHFT
Thanks very much Tristan and Jaromir for these responses.
Interesting that Netty's off-heap allocation management (jemalloc) may
deliver us a 'Save a 1xCOPY back to the heap!' advantage that Unsafe
malloc/free does not.
Peter Lawrey has commented that he will research Netty's potential to be
FYI. Some results from a test that Peter just wrote comparing the Netty
allocator vs. OpenHFT's direct invocation of Unsafe malloc/free. Indeed,
Netty's pooled-heap approach does result in a 100% speed improvement
(wrt allocation events). However, OpenHFT has a huge advantage wrt its
[...] , but I don't understand why the fact that it's
running on a different process is limiting in any form.
You are correct, it is not limiting. XA is process locality
independent. Individual TXN participants (heterogeneous or
homogeneous) register as XA resources (via JTA) with a
Instinctively, this is a very attractive idea.
Would the L1 data container be a separate, but equal container? i.e.
would it be a separate instance from the DIST_SYNC cache but the exact same
data structure?
or
Would the L1 data container be a separate, and different, container? i.e.
would it
/ William-Burns-3 wrote:
[...] My initial thought is that it would always use the same interface,
but we could add a specific implementation later if we found additional
optimizations./
/ Dan Berindei wrote:
[...] Users still won't have direct access to the L1 cache./
Given that you guys
/
Benefits:
1. L1 cache can be separately tuned - L1 maxEntries for example
-1!
I don't think that's a benefit actually, from the point of view of a user:
[...]
At the opposite side, I don't see how - as a user - I could optimally
tune a separate container. /
There is 1 place where this
/ Indeed, IFF you separate the two storage areas you get in that kind of
need, but I think it's an unnecessary complexity. The world is
actually quite simple [...]/
Thanks Sanne. Without going into any specific defense of a "complexity is
sometimes needed" position, I agree with what you are saying
/At the opposite side, I don't see how - as a user - I could optimally
tune a separate container./
I agree that it is more difficult to configure; this was one of my points, as
both a drawback and a benefit. It sounds like in general you don't
believe the benefits outweigh the drawbacks
I have to disagree ;-) It certainly is a fact that he's very well
intentioned to make enhancements, but I don't think this strategy is proven
to be superior; I'm actually convinced of the opposite.
We simply cannot assume that the real data and the L1 stored entries
will have the same level of
Seriously Bad Elf (or have ya'll already used that?)
http://beeradvocate.com/beer/profile/7944/26887
Thanks for this reply, Mircea.
Very interesting approach. By dispatching a distributed executor back to
the node (node 0) that produced the pinned key affinity for the object's
natural key, we could indeed do an iterative search (from node 2) to find
the original 'myObject' in the pinned cache
/It's probably possible to define something like this instead:
K getKeyForAddress(Address address, K key)
/
This is fine. Do it this way.
/Even so, keeping the result key pinned to the same address after a
rebalance would be tricky - probably requiring some changes to the
ConsistentHash
/ Another thing you can do is have a replicated Cache holding the mapping
between the actual keys and the affinity keys./
Yes, no doubt about it. This will work.
But, it technically requires the additional participation of a second
full-blown Cache. It will work, but it is not gorgeous.
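The replicated mapping-cache idea can be sketched like this, with a plain concurrent map standing in for the replicated Cache (class and method names here are hypothetical, for illustration only):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a replicated cache mapping natural keys to the affinity
// keys that KeyAffinityService produced for them.
class AffinityIndexSketch {
    private final Map<String, String> naturalToAffinity = new ConcurrentHashMap<>();

    // Node 0 records the mapping when it first pins the entry.
    void record(String naturalKey, String affinityKey) {
        naturalToAffinity.put(naturalKey, affinityKey);
    }

    // Any other node looks the affinity key up instead of re-deriving it.
    String affinityKeyFor(String naturalKey) {
        return naturalToAffinity.get(naturalKey);
    }
}
```

This works precisely because the index is replicated everywhere, which is also why it costs a second full-blown Cache.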
/ Can you live with these limitations? /
Yes.
/
Just to reiterate, you *never* expect a node to crash? /
Easy now, Mircea!
All of us in the distributed computing community know that we live in a
world of necessary compromises (i.e. Brewer's CAP theorem, etc.) re: limits
of service guarantees and capabilities.
Now, of course, I do
To be honest, Dan, your last post leaves me slightly concerned.
/
Just to be clear, KAS doesn't really allow you to pin a key to a certain
node. It only gives you a key that maps to the given node the moment you
called KAS.getKeyForAddress(Address) - by the time you call cache.put()
with
So, in the event that no topology change takes place, may I otherwise
consider this key2node association /reliable/?
Mircea Markus-2 wrote
Sent from my iPhone
On 14 Jun 2013, at 20:08, Dan Berindei <dan.berindei@ ...> wrote:
Just to be clear, KAS doesn't really allow you to pin a
I'm just trying to help Ben.
Absolutely you are. And you are succeeding. Greatly appreciated, thank
you.
Want to point out that without this signature added to the API
*K getKeyForAddress(Address address, K otherKey);* /* compute
address-specific version of otherKey */
we are finding the KeyAffinityService unusable. It is impossible for a
node, that is not the same as the node that produced
We need a way for a foreign node to be able to come up with the same
affinityKey produced earlier by a local node.
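The proposed overload would make the derivation deterministic; the idea can be sketched with a composite key whose equals/hashCode depend only on (address, otherKey), so any node re-derives the identical key. All names below are hypothetical, not the real KeyAffinityService API:

```java
import java.util.Objects;

// Hypothetical composite affinity key: purely a function of (address, otherKey),
// so a foreign node can reconstruct it without asking the producing node.
final class AddressBoundKey {
    final String address;
    final String otherKey;

    AddressBoundKey(String address, String otherKey) {
        this.address = address;
        this.otherKey = otherKey;
    }

    // Sketch of the proposed K getKeyForAddress(Address, K otherKey) overload.
    static AddressBoundKey getKeyForAddress(String address, String otherKey) {
        return new AddressBoundKey(address, otherKey);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof AddressBoundKey)) return false;
        AddressBoundKey k = (AddressBoundKey) o;
        return address.equals(k.address) && otherKey.equals(k.otherKey);
    }

    @Override public int hashCode() { return Objects.hash(address, otherKey); }
}
```

Because the derivation is a pure function of its inputs, node 2 computing getKeyForAddress("node0", "myObject") obtains exactly the key node 0 produced earlier.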
Thanks for the response, Mircea. Let me elaborate.
E.g. consider the infinispan quick start that is made up of
AbstractNode.java, Node0.java, Node1.java, Node2.java
Let's say
/ Besides NIO2 stuff, is there anything from 7 we want badly? /
Yes. In its version 7 release, Java introduces a Socket Direct Protocol
capability. For the first time ever, a Java API can accommodate a direct
bridge to ultra-low-latency physical network providers like Infiniband
(including
/all you'd have to do is [...]/
Thanks Bela. I'd love the "you" in the response above to be me, but the
"you" has to be you. I.e. as much as I'd like to build the bridge to the
capability myself, I am not going to be empowered to do this by my clients.
We need a bridge provider. Might I
Thanks for the response Galder. Interesting. I have been counseled by
Mircea to use the KeyAffinityService API to do my physical key pinning @
specific node participants. However, the KeyAffinityService brings the risk
of not being able to allow my pinned keys to survive topology changes
Ben, do you think being able to pin a key permanently to a node would be
useful?
Indeed I do.
The ideal mechanism would be to merge both the ambitions of the
Grouper#computeGroup(key) API and KeyAffinityService API into a capability
that would allow me to render non-anonymous grouping that
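The Grouper side of that merged ambition can be sketched with a simplified stand-in interface (the real org.infinispan.distribution.group.Grouper's computeGroup(key, group) has roughly this shape; the AccountGrouper below and its key format are invented for illustration):

```java
// Simplified stand-in for org.infinispan.distribution.group.Grouper.
interface GrouperSketch<T> {
    String computeGroup(T key, String group);
}

// Non-anonymous grouping: every account key routes to an explicit, named
// group, so co-location is predictable rather than hash-anonymous.
class AccountGrouper implements GrouperSketch<String> {
    @Override
    public String computeGroup(String key, String group) {
        // e.g. "savings:12345" and "checking:12345" both land in group "12345"
        int sep = key.indexOf(':');
        return sep >= 0 ? key.substring(sep + 1) : key;
    }
}
```

What this gives you that KeyAffinityService does not is a stable, human-readable group name; what it lacks is control over *which* node owns that group, which is the half the KAS ambition would supply.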
Hi,
First, nice work on org.infinispan.affinity.KeyAffinityService ... May I
make 2 recommendations.
1. Add to the interface an overload:
*K getKeyForAddress(Address address, K otherKey);* /* compute
address-specific version of otherKey */
2. Provide documentation that
Done. See https://issues.jboss.org/browse/ISPN-3112
Added to the FAQ:
https://docs.jboss.org/author/display/ISPN/Grouping+API+vs+Key+Affinity+Service
Excellent. Thanks.
I am playing with the Infinispan 5.3 quick-start package to exercise my usage
of the Grouping API. As we know the quick start package is made up of
AbstractNode.java, Node0.java, Node1.java and Node2.java (plus a
util/listener).
My ambition is to demonstrate
1. that any
Hi Infinispan-DEV team,
Could anyone provide pointers to Infinispan API sample code that could
assist us w/ our ambition to start building the attached distributed grid
use-case?
Any wiki how-to (or full code samples) would be especially helpful.
Specifically, how to use
KeyAffinityService,
/ QUESTION: Is there a native-Infinispan config to indicate an OPTIMISTIC
policy (btw, this is not yet provided for via JSR-107 API) ? I.e. does
Infinispan allow an OPTIMISTIC READ_COMMITTED isolation config that would
result in the capability for TX_THR_2 to physically read the un-committed
Failure to block that @t=2
access to the un-committed CREDIT has dire consequences ... e.g. if
TX_THR_1
executes rollback() on the credit @t=1 ! OUCH. If Infinispan does not
block TX_THR_2 read @t=2 then it proceeds as if the credit happened!
/^ No, if it doesn't block, it reads
Please disregard my request for help to get this test running. Infinispan
will *definitely* pass this test for isolation=READ_COMMITTED
policy=PESSIMISTIC.
DirtyReadIntolerantDriver.java =
http://infinispan-developer-list.980875.n3.nabble.com/file/n4026949/DirtyReadIntolerantDriver.java
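The blocking behavior that test relies on can be sketched in plain Java: a per-entry lock is held for the whole credit transaction, so a concurrent reader waits for commit or rollback and only ever observes committed state. This is a toy model of PESSIMISTIC + READ_COMMITTED semantics, not Infinispan's actual locking code:

```java
import java.util.concurrent.locks.ReentrantLock;

// Toy model: TX_THR_1 holds the row lock from beginCredit() until
// commit()/rollback(), so TX_THR_2's read @t=2 blocks and can never
// see the uncommitted credit.
class PessimisticAccount {
    private final ReentrantLock rowLock = new ReentrantLock();
    private int committedBalance = 100;
    private int pendingBalance;

    void beginCredit(int amount) {          // TX_THR_1 @ t=1
        rowLock.lock();
        pendingBalance = committedBalance + amount;
    }

    void rollback() {                       // pending change simply discarded
        rowLock.unlock();
    }

    void commit() {
        committedBalance = pendingBalance;  // publish only at commit time
        rowLock.unlock();
    }

    int read() {                            // TX_THR_2 @ t=2: blocks if locked
        rowLock.lock();
        try { return committedBalance; } finally { rowLock.unlock(); }
    }
}
```

After a rollback the reader sees the original balance, never the credited one, which is exactly the dirty-read intolerance the test asserts.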
/ implementation). Will you extend this test to also audition the usage of
javax.transaction.UserTransaction (Arjuna, et al. implementations?) and
its
JCACHE factory bridge via javax.cache.CacheManager.getUserTransaction() ?
^ H, which test are you referring to exactly? /
*See line 46
Please consider taking a look at the attached plan for Test #1.
It is very simple. A Savings Account is cached in Infinispan 5.3.0.A.
Two transactional threads simultaneously operate (access/mutate) on the
Savings Account. Transactional Thread #2 indicates via the JSR-107 API
that
PPT for TEST #1. policy=PESSIMISTIC
Infinispan-5.3.0.A1=JSR-107_TRANSACTIONS_OPTION=DIRTY_READ_INTOLERANCE_TEST.docx.pptx
http://infinispan-developer-list.980875.n3.nabble.com/file/n4026916/Infinispan-5.3.0.A1%3DJSR-107_TRANSACTIONS_OPTION%3DDIRTY_READ_INTOLERANCE_TEST.docx.pptx
--
View
Hi Pedro (again),
Did a little reading, and I think I may get it now. Can you please confirm
(or correct) the following:
1. Total Order on CloudTM provides GMU Update Serializable /* supports
PHANTOM_READ intolerant transactions */
2. Total Order on Infinispan (Nuclear Penguin 5.3ALPHA