On 14 Jun 2011, at 16:49, Bela Ban wrote:
> Just copy the damn buffer and give it to me $@$#^%$#^%^$
:-)
--
Manik Surtani
ma...@jboss.org
twitter.com/maniksurtani
Lead, Infinispan
http://www.infinispan.org
Just copy the damn buffer and give it to me $@$#^%$#^%^$
Simple. Performant. Reliable.
:-)
+1.
There is also something else I wanted to bring to your attention. When
you pass a reference to byte[] BUF to JGroups, JGroups will store BUF in the
org.jgroups.Message MSG.
MSG is subsequently stored in the retransmission table of NAKACK.
If you now modify the contents of BUF, you will also modify the message
held for retransmission.
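The hazard Bela describes - a reused marshalling buffer mutating a message that JGroups still holds for retransmission - can be avoided by copying only the used [offset, offset+length) window before handing the bytes over. A minimal sketch of that defensive copy (the helper class is illustrative, not actual Infinispan or JGroups code):

```java
import java.util.Arrays;

public class SafeSend {
    // Copy only the used [offset, offset+length) window of a reused
    // marshalling buffer, so later writes into the buffer cannot
    // corrupt the message JGroups keeps in its retransmission table.
    static byte[] copyPayload(byte[] buf, int offset, int length) {
        return Arrays.copyOfRange(buf, offset, offset + length);
    }

    public static void main(String[] args) {
        byte[] reused = {1, 2, 3, 4, 5};
        byte[] payload = copyPayload(reused, 1, 3);
        reused[2] = 99;                               // marshaller reuses the buffer
        System.out.println(Arrays.toString(payload)); // copy is unaffected
    }
}
```

The copy lands in eden and dies young, which is the basis of Bela's "just copy it" argument later in the thread.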
I like the idea, but as Manik hinted I wonder how many people are gonna go and
configure this unless Infinispan is blatant enough in telling users that
their configuration is not optimal.
We also need to consider the importance of the problem, which is that STABLE
keeps the whole buffer referenced.
2011/6/10 Manik Surtani :
>
> Somewhere in this thread there was discussion of creating a buffer per thread
> (thread-local again) but was determined to be too much of a mem leak (and I
> agree with this).
We should avoid thread locals :)
> Maybe it makes sense to create a pool of buffers, to be shared?
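Manik's pooling idea could be sketched roughly as below (an entirely hypothetical class, not an actual Infinispan API): a bounded, lock-free queue of byte[] buffers, so concurrent marshallers share a fixed amount of memory without resorting to thread-locals.

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BufferPool {
    private final ArrayBlockingQueue<byte[]> pool;
    private final int bufferSize;

    public BufferPool(int maxBuffers, int bufferSize) {
        this.pool = new ArrayBlockingQueue<>(maxBuffers);
        this.bufferSize = bufferSize;
    }

    // Reuse a pooled buffer if one is available, otherwise allocate.
    public byte[] acquire() {
        byte[] buf = pool.poll();
        return buf != null ? buf : new byte[bufferSize];
    }

    // Return the buffer; if the pool is already full the buffer is
    // simply dropped and left to the GC, keeping memory usage bounded.
    public void release(byte[] buf) {
        if (buf.length == bufferSize) {
            pool.offer(buf);
        }
    }

    public static void main(String[] args) {
        BufferPool p = new BufferPool(2, 1024);
        byte[] a = p.acquire();
        p.release(a);
        System.out.println(p.acquire() == a); // the pooled instance is reused
    }
}
```

Unlike a thread-local, nothing here pins a buffer to a thread's lifetime, which addresses the leak concern raised above.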
I was referring to generating byte arrays (sending state), not generating
objects (receiving state). A buffer is maintained in the AbstractMarshaller
and used.
I did see a comment from Bela on this thread about seeing this on the receiver
too though - Bela, care to clarify? I presume on the receiving side?
I don't know if I'm actually contributing something here or just creating noise.
Are these buffers reused over time? If not, from a GC point of view
it would then be better not to reduce the size of the buffer just to
save a few bytes: it would mean throwing a perfectly valid bit
of memory to the GC.
Actually on this thread I keep getting confused about which issue we
want to solve. Initially I thought it was about allocating the buffer
to externalize known object types, as I saw the growing buffer logic
in MarshalledValue, so the discussion seemed interesting to me, but I
was corrected.
On 25 May 2011, at 08:45, Galder Zamarreño wrote:
>>
>> Looks great Galder, although I could use some comments on how the
>> possible buffer sizes are chosen in your algorithm :-)
>
> I'll ping you on IRC.
Could you make sure this is properly documented in the impl classes, whether in
Javadoc or elsewhere?
On 24 May 2011, at 07:12, Bela Ban wrote:
>
> Ah, ok. I think we should really do what we said before JBW, namely have
> an interactive debugging session, to clear this up.
+1. Let me know when you guys are planning on doing this.
Hi guys,
This is an excellent and fun discussion - very entertaining read for me. :-)
So a quick summary based on everyone's ideas:
I think we can't have a one-size-fits-all solution here. I think simple array
copies work well as long as the serialized forms are generally small.
Guys,
Some interesting discussions here, keep them coming! Let me summarise what I
submitted yesterday as a pull req for https://issues.jboss.org/browse/ISPN-1102
- I don't think users can really provide such accurate predictions of the
object sizes because, first, Java does not give you an easy way to measure them.
On 5/23/11 11:09 PM, Dan Berindei wrote:
>> No need to expose the ExposedByteArrayOutputStream, a byte[] buffer,
>> offset and length will do it, and we already use this today.
>>
>>
>>> In case the value is not stored in binary form, the expected life of
>>> the stream is very short anyway.
On 5/23/11 6:50 PM, Dan Berindei wrote:
>> From my experience, reusing and syncing on a buffer will be slower than
>> making a simple arraycopy. I used to reuse buffers in JGroups, but got
>> better perf when I simply copied the buffer.
>
> We wouldn't need any synchronization if we reused one
To keep stuff simple, I'd add an alternative feature instead:
have the custom externalizers optionally recommend an allocation buffer size.
In my experience people use a set of well-known types for the key, and
maybe for the value as well, for which they actually know the output
byte size.
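Sanne's suggestion could look something like the sketch below. The interface and method names are made up for illustration; this is not the actual org.infinispan.marshall externalizer contract, just the shape of an optional size hint:

```java
// Hypothetical extension of an externalizer contract: types whose
// serialized size is known up front tell the marshaller how big an
// initial buffer to allocate, avoiding resize-and-copy cycles.
interface SizeHintingExternalizer<T> {
    void writeObject(java.io.ObjectOutput out, T obj) throws java.io.IOException;

    // Recommended initial buffer size in bytes, or -1 if unknown.
    default int predictedSize(T obj) {
        return -1;
    }
}

class LongKeyExternalizer implements SizeHintingExternalizer<Long> {
    public void writeObject(java.io.ObjectOutput out, Long key) throws java.io.IOException {
        out.writeLong(key);
    }

    public int predictedSize(Long key) {
        return 8; // a long always marshals to exactly 8 bytes
    }
}

public class SizeHintDemo {
    public static void main(String[] args) {
        System.out.println(new LongKeyExternalizer().predictedSize(42L));
    }
}
```

A marshaller could then fall back to adaptive prediction only for externalizers that return -1, keeping the common well-known-type path exact.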
On 5/23/11 6:15 PM, Dan Berindei wrote:
> I totally agree, combining adaptive size with buffer reuse would be
> really cool. I imagine when passing the buffer to JGroups we'd still
> make an arraycopy, but we'd get rid of a lot of arraycopy calls to
> resize the buffer when the average object size is stable.
Hi Galder,
Sorry I'm replying so late.
On Thu, May 19, 2011 at 2:02 PM, Galder Zamarreño wrote:
> Hi all,
>
> Re: https://issues.jboss.org/browse/ISPN-1102
Assuming escape analysis does its job, Bela's idea makes sense. But
I'm not sure it's always enabled in Java 6 or in non-Oracle VMs.
What about using adaptive prediction, and copying the buffer into an
array of the right size when the prediction is way off?
Have we actually measured performance when we simply do an array copy of
[offset-length] and pass the copy to JGroups? This generates a little
more garbage, but most of it is collected in eden, which is very fast.
Reservoir sampling might be overkill here, and it complicates the code, too.
Hi all,
Re: https://issues.jboss.org/browse/ISPN-1102
First of all, thanks to Dan for his suggestion on reservoir
sampling + percentiles, very good suggestion :). So, I'm looking into this and
Trustin's
http://docs.jboss.org/netty/3.2/api/org/jboss/netty/channel/AdaptiveReceiveBufferSizePredictor.
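The Netty predictor Galder references works roughly by stepping the next buffer size up or down based on how many bytes the previous operation actually used. A simplified, self-contained sketch of that idea (the growth factor and shrink threshold here are illustrative, not Netty's actual constants or step table):

```java
public class AdaptiveSizePredictor {
    private int current;

    public AdaptiveSizePredictor(int initial) {
        this.current = initial;
    }

    // Feed back the bytes actually written after each marshalling
    // operation: grow eagerly when the buffer overflowed, shrink
    // cautiously (only when usage fell well below capacity).
    public int record(int actualBytes) {
        if (actualBytes >= current) {
            current = current * 2;               // grow fast on overflow
        } else if (actualBytes < current / 4) {
            current = Math.max(16, current / 2); // shrink slowly, floor at 16
        }
        return current;
    }

    public int nextSize() {
        return current;
    }

    public static void main(String[] args) {
        AdaptiveSizePredictor p = new AdaptiveSizePredictor(64);
        p.record(64);                     // filled the buffer: grow
        System.out.println(p.nextSize());
        p.record(10);                     // far below capacity: shrink
        System.out.println(p.nextSize());
    }
}
```

The asymmetric grow-fast/shrink-slow policy is what keeps the predictor stable for mixed object sizes, which is the same concern the reservoir-sampling approach in ISPN-1102 addresses with percentiles.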