I added some basic JMH tests to my repo along with a couple of alternative
appender implementations. I removed the unnecessary file region locking
from the async file channel one, but it still comes out quite a bit slower
than the RandomAccessFile- and Files.newOutputStream()-based appenders,
though that could be due to the use of Phaser (which I added only to close
the appender cleanly and synchronously).
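
For reference, the Phaser usage looks roughly like this (a simplified
sketch with made-up names, not the actual repo code): each write registers
a party, the completion handler deregisters it, and close() waits for the
phase to advance before closing the channel.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.channels.CompletionHandler;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.Phaser;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch: append via AsynchronousFileChannel, use a Phaser to block
    // close() until all in-flight writes have completed.
    class AsyncChannelAppender implements AutoCloseable {
        private final AsynchronousFileChannel channel;
        private final Phaser inFlight = new Phaser(1); // party 0 is the closer
        private final AtomicLong position = new AtomicLong();

        AsyncChannelAppender(String fileName) throws IOException {
            channel = AsynchronousFileChannel.open(Paths.get(fileName),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        }

        void append(ByteBuffer data) {
            long pos = position.getAndAdd(data.remaining()); // reserve a region
            inFlight.register(); // one party per outstanding write
            channel.write(data, pos, null, new CompletionHandler<Integer, Void>() {
                @Override public void completed(Integer written, Void attachment) {
                    // NB: a real appender would also handle partial writes here.
                    inFlight.arriveAndDeregister();
                }
                @Override public void failed(Throwable exc, Void attachment) {
                    inFlight.arriveAndDeregister();
                }
            });
        }

        @Override public void close() throws IOException {
            // Arrive as the closer and wait for every registered write to arrive.
            inFlight.arriveAndAwaitAdvance();
            channel.close();
        }
    }

If the Phaser is indeed the culprit, the per-write register/arrive pair
above is where that cost would show up in the benchmark.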

On 26 February 2017 at 10:05, Matt Sicker <boa...@gmail.com> wrote:

> Perhaps something got optimized by the JVM? I'll add some JMH tests to
> this repo to try out various approaches.
>
> On Sat, Feb 25, 2017 at 21:12, Apache <ralph.go...@dslextreme.com> wrote:
>
>> I tried using a FileChannel for the FileAppender a week or so ago to see
>> whether passing the ByteBuffer straight to the FileChannel would improve
>> performance, since that write doesn’t have to be synchronized. I didn’t
>> see any improvement, though, and ended up reverting it. But I might have
>> done something wrong.
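
(I'd guess that experiment looked something like the sketch below; the
names are made up and I haven't seen the reverted change. FileChannel is
thread-safe, so the write itself needs no synchronized block.)

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // The channel serializes concurrent writes internally; with APPEND,
    // each write lands at the current end of the file.
    class ChannelFileWriter implements AutoCloseable {
        private final FileChannel channel;

        ChannelFileWriter(String fileName) throws IOException {
            channel = FileChannel.open(Paths.get(fileName),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                    StandardOpenOption.APPEND);
        }

        void write(ByteBuffer data) throws IOException {
            while (data.hasRemaining()) {
                channel.write(data); // loop in case of a partial write
            }
        }

        @Override public void close() throws IOException {
            channel.close();
        }
    }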
>>
>> Ralph
>>
>> On Feb 25, 2017, at 4:19 PM, Matt Sicker <boa...@gmail.com> wrote:
>>
>> We already use a bit of NIO (ByteBuffer for layouts and
>> appenders/managers, MappedByteBuffer for mmap'd files, FileLock for locking
>> files, etc.), and I've been playing around with the NIO API lately. I have
>> some sample code here <https://github.com/jvz/nio-logger> showing a
>> trivial use case of AsynchronousFileChannel. Java 7 also added
>> AsynchronousSocketChannel, which could theoretically give us a faster
>> socket appender without adding Netty. In that regard, I'm curious how
>> useful it would be to have appenders similar to the OutputStream ones but
>> built on WritableByteChannel, GatheringByteChannel (possible
>> parallelization of file writing?), and the async channels (there's an
>> AsynchronousByteChannel interface, but I think they screwed this one up,
>> as only one of the three async channel classes implements it).
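
(As a concrete example of the gathering case: a layout could hand the
channel several buffers, say a header and a body, and have them written in
one call. Sketch only, not a proposed API; FileChannel already implements
GatheringByteChannel.)

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.GatheringByteChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    class GatheringWriteExample {
        public static void main(String[] args) throws IOException {
            try (FileChannel fc = FileChannel.open(Paths.get("app.log"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                    StandardOpenOption.APPEND)) {
                GatheringByteChannel channel = fc;
                ByteBuffer header = ByteBuffer.wrap(
                        "2017-02-26 10:05 INFO ".getBytes(StandardCharsets.UTF_8));
                ByteBuffer body = ByteBuffer.wrap(
                        "hello, channels\n".getBytes(StandardCharsets.UTF_8));
                // One gathering write flushes both buffers, in order; a real
                // writer would repeat the call if the write is partial.
                channel.write(new ByteBuffer[] { header, body });
            }
        }
    }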
>>
>> Another related issue I've seen: in a message-oriented appender (e.g.,
>> the Kafka one), streaming directly to a ByteBuffer isn't the right way to
>> encode log messages for the appender. Instead, I was thinking a pool of
>> reusable ByteBuffers could be used here, where a ByteBuffer is borrowed on
>> write and returned on completion (via a CompletionHandler callback). The
>> Kafka client uses a similar strategy for producing messages, dynamically
>> allocating a pool of ByteBuffers based on available memory.
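
(To make the borrow/return flow concrete, here's a sketch with a
hypothetical pool class, not an existing Log4j type; it assumes an encoded
event fits in a single buffer and ignores partial writes for brevity.)

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Fixed-size pool: a buffer is borrowed when an event is encoded and
    // returned from the CompletionHandler once the write completes or fails.
    class ByteBufferPool {
        private final BlockingQueue<ByteBuffer> free;

        ByteBufferPool(int buffers, int bufferSize) {
            free = new ArrayBlockingQueue<>(buffers);
            for (int i = 0; i < buffers; i++) {
                free.add(ByteBuffer.allocate(bufferSize));
            }
        }

        ByteBuffer borrow() throws InterruptedException {
            return free.take(); // blocks if every buffer is in flight
        }

        void release(ByteBuffer buffer) {
            buffer.clear();
            free.add(buffer);
        }
    }

    class PooledWriter {
        void send(AsynchronousSocketChannel channel, ByteBufferPool pool,
                  byte[] encoded) throws InterruptedException {
            ByteBuffer buffer = pool.borrow();
            buffer.put(encoded).flip();
            channel.write(buffer, null, new CompletionHandler<Integer, Void>() {
                @Override public void completed(Integer written, Void attachment) {
                    pool.release(buffer); // back to the pool on completion
                }
                @Override public void failed(Throwable exc, Void attachment) {
                    pool.release(buffer);
                }
            });
        }
    }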
>>
>> Also, I don't have much experience with this, but if we had a pool of
>> reusable ByteBuffers, could we use direct allocation to get off-heap
>> buffers? That seems like an interesting use case.
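
(For the off-heap question: with a pool like the sketch above, only the
allocation call would change, and pooling is exactly what makes the higher
allocation cost of direct buffers worthwhile.)

    // In the pool constructor, swap the allocation for an off-heap buffer:
    free.add(ByteBuffer.allocateDirect(bufferSize));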
>>
>> --
>> Matt Sicker <boa...@gmail.com>
>>
>>
> --
> Matt Sicker <boa...@gmail.com>
>



-- 
Matt Sicker <boa...@gmail.com>
