Mark Hindess wrote:
> On 17 October 2007 at 16:32, Tim Ellison <[EMAIL PROTECTED]> wrote:
>> I was running this on the IBM VME, and here's what I got (below).
>>
>> Interestingly, the Java decoder was faster on the long string than the
>> native code.  The other results are sufficiently similar to suggest that
>> we should just keep it all in Java.
> 
> You mean remove the heuristic and remove the Intel-contributed native
> code?  I guess that seems reasonable given these results; it would
> enable us to reduce the size of the code base (and the JRE footprint,
> as discussed elsewhere) and concentrate our efforts on the Java
> implementation.

Well, I'm not quite there yet.  I was running on the IBM VME and have only
done a modicum of testing on my uniprocessor laptop, so I would want a
more compelling case before discarding any existing code.
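
For reference, the heuristic in question chooses between the pure-Java
converter and the Intel-contributed native one.  Very roughly, it is
shaped like the sketch below -- the method names and the threshold are
placeholders for illustration, not the actual Harmony code:

  // Hypothetical sketch only: names and threshold are placeholders.
  CoderResult decodeLoop(ByteBuffer in, CharBuffer out) {
      if (in.remaining() > NATIVE_THRESHOLD) {
          return decodeNative(in, out);   // Intel-contributed JNI path
      }
      return decodeJava(in, out);         // pure-Java path
  }

Removing it would mean taking the decodeJava() branch unconditionally.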

> Of course, this is rather dependent on us being able to achieve similar
> results on DRLVM - so it would be interesting to see these results for
> that VM too.

Agreed, and on different OS / CPU combinations.

It may be that I am measuring the capabilities of the JIT, which
certainly makes a big difference:

(WinXP, Centrino, large string, w/ warm-up cycles, IBM VME)

JIT off, non-direct buffer:

  Decoding time: 2193 millis
  Encoding time: 2634 millis

JIT off, direct buffer:

  Decoding time: 771 millis
  Encoding time: 2624 millis    <-- looks strange

JIT on, direct buffer:

  Decoding time: 751 millis
  Encoding time: 461 millis

JIT on, non-direct buffer:

  Decoding time: 420 millis
  Encoding time: 481 millis
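
For anyone wanting to try this elsewhere, the measurement loop is
essentially as sketched below.  This is a minimal sketch: the charset,
string size, and iteration counts are placeholders rather than the exact
values used, and only decoding is shown (encoding is the mirror image
using CharsetEncoder.encode()):

  import java.nio.ByteBuffer;
  import java.nio.CharBuffer;
  import java.nio.charset.Charset;
  import java.nio.charset.CharsetDecoder;

  public class DecodeBench {
      public static void main(String[] args) throws Exception {
          final int WARMUP = 10;    // placeholder warm-up count
          final int RUNS = 100;     // placeholder measured iterations

          // Placeholder "large string": one million ASCII chars.
          char[] chars = new char[1000000];
          java.util.Arrays.fill(chars, 'x');
          byte[] bytes = new String(chars).getBytes("UTF-8");

          // Pass "direct" on the command line for a direct buffer.
          boolean direct = args.length > 0 && "direct".equals(args[0]);
          ByteBuffer in = direct ? ByteBuffer.allocateDirect(bytes.length)
                                 : ByteBuffer.allocate(bytes.length);
          in.put(bytes);

          CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder();
          CharBuffer out = CharBuffer.allocate(chars.length);

          // Warm-up cycles give the JIT a chance to compile the hot path.
          for (int i = 0; i < WARMUP; i++) {
              in.rewind();
              out.clear();
              decoder.reset().decode(in, out, true);
          }

          long start = System.currentTimeMillis();
          for (int i = 0; i < RUNS; i++) {
              in.rewind();
              out.clear();
              decoder.reset().decode(in, out, true);
          }
          System.out.println("Decoding time: "
                  + (System.currentTimeMillis() - start) + " millis");
      }
  }

Running the same class with the JIT disabled and with the "direct"
argument toggled covers the four combinations above.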


Regards,
Tim
