I got it, thanks for the detailed explanation. I had already corrected the 
code and it works fine now. I guess the performance overhead may come from 
the fragment handling.

# Run complete. Total time: 00:00:29

Benchmark                             Mode  Cnt      Score      Error  Units
KrbCodecBenchmark.decodeWithApacheDS  avgt   10  10702.118 ± 1475.804  ns/op
KrbCodecBenchmark.decodeWithKerby     avgt   10   5833.663 ±  168.466  ns/op

By the way, next I will work out benchmarks for the backends and the KDC 
server, using the JMH framework as above.
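
For reference, a minimal JMH skeleton for this kind of codec measurement could 
look like the sketch below. The class name mirrors the results above, but the 
setup and decode helpers are placeholders, not the actual Kerby or ApacheDS APIs:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class KrbCodecBenchmark {

    private byte[] token;

    @Setup
    public void setup() throws Exception {
        // Build or load the encoded Kerberos message once, outside the measured path.
        token = loadSampleToken();
    }

    @Benchmark
    public Object decodeWithKerby() throws Exception {
        // Placeholder: replace with the actual Kerby decode call.
        return decodeUsingKerby(token);
    }

    @Benchmark
    public Object decodeWithApacheDS() throws Exception {
        // Placeholder: replace with the actual ApacheDS decode call.
        return decodeUsingApacheDS(token);
    }

    // Stubs so the sketch compiles; the real benchmark would call the
    // respective codec APIs instead.
    private byte[] loadSampleToken() { return new byte[0]; }
    private Object decodeUsingKerby(byte[] bytes) { return bytes; }
    private Object decodeUsingApacheDS(byte[] bytes) { return bytes; }
}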

Regards,
Kai

-----Original Message-----
From: Emmanuel Lécharny [mailto:[email protected]] 
Sent: Monday, July 20, 2015 2:06 PM
To: [email protected]
Subject: Re: Kerby benchmark issue related to ApacheDS

On 20/07/15 07:11, Zheng, Kai wrote:
> Ah, right. I didn't notice that the token to be decoded should be passed TWO 
> times (though that's not very natural to me). 

We don't pass the token twice. What you do is really a corner case: you decode 
a full buffer in one pass.

The decoder takes two arguments:
- a buffer containing bytes
- a container that holds the PDU currently being decoded

As we may have fragmented PDUs, we need to be able to keep going with the 
decoding every time we receive some bytes, and this is why we pass the first 
parameter over and over, until we have decoded the full PDU. As we may have to 
decode an inner PDU (a Ticket inside an ApReqRequest, for instance), we must 
accumulate the bytes in a buffer inside the container.
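
A rough sketch of that decode loop, using hypothetical PduDecoder/PduContainer 
types in place of the real ones:

import java.nio.ByteBuffer;

// Hypothetical stand-ins for the real decoder and container types.
interface PduContainer {
    boolean isPduComplete();
    Object getPdu();
}

interface PduDecoder {
    void decode(ByteBuffer fragment, PduContainer container);
}

class FragmentedDecodeExample {
    // Feed every received fragment into the same container; the container
    // keeps the decoding state between calls, so decoding can resume
    // wherever the previous fragment stopped.
    static Object decodeAll(PduDecoder decoder, PduContainer container,
                            Iterable<ByteBuffer> fragments) {
        for (ByteBuffer fragment : fragments) {
            decoder.decode(fragment, container);
            if (container.isPduComplete()) {
                return container.getPdu();   // full PDU decoded
            }
        }
        return null;                         // still waiting for more bytes
    }
}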

In your case, the PDU is complete, so we simply store it fully in the 
container. When the Ticket has been fully read (it's a single TLV), then we 
will be able to call the Ticket decoder methods using the same buffer, but 
starting at position 25, to decode the Ticket.
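
A small illustration of that last step, with a placeholder decodeTicket method 
standing in for the real Ticket decoder:

import java.nio.ByteBuffer;

class InnerTicketExample {
    // Once the outer PDU is fully stored, the inner Ticket can be read from
    // the same buffer by positioning at its start offset (25 in the example
    // above) and handing that view to the Ticket decoder.
    static Object readInnerTicket(ByteBuffer pduBuffer, int ticketOffset) {
        ByteBuffer ticketView = pduBuffer.duplicate(); // keep the original position intact
        ticketView.position(ticketOffset);
        return decodeTicket(ticketView.slice());
    }

    // Placeholder: the real decoder would parse the Ticket TLV from these bytes.
    static Object decodeTicket(ByteBuffer ticketBytes) {
        return ticketBytes;
    }
}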


