I’m not sure what these numbers mean as I don’t know what you’ve changed and 
how they compare to what was there previously.

Ralph

> On Feb 26, 2017, at 9:44 PM, Matt Sicker <boa...@gmail.com> wrote:
> 
> Anyways, I managed to implement MemoryMappedFileAppender in the playground by 
> basically copying it from log4j-core. Everything is still in a state of flux 
> here, but without adding manual buffering to the FileChannel and 
> RandomAccessFile appenders, here are my current results (the async channel 
> one is still failing):
> 
> Benchmark                                        Mode  Cnt        Score        Error  Units
> FileChannelAppenderBenchmark.logToFile          thrpt   10   306320.023 ±   4911.376  ops/s
> FileOutputStreamAppenderBenchmark.logToFile     thrpt   10  1311134.313 ±  32814.148  ops/s
> MemoryMappedFileAppenderBenchmark.logToFile     thrpt   10  1569499.970 ±  47550.210  ops/s
> NioFileOutputStreamAppenderBenchmark.logToFile  thrpt   10  1414480.114 ±  39527.202  ops/s
> RandomAccessFileAppenderBenchmark.logToFile     thrpt   10   422874.583 ±   5457.952  ops/s
> 
> On 26 February 2017 at 21:39, Matt Sicker <boa...@gmail.com> wrote:
> I noticed that in MemoryMappedFileManager, if the MappedByteBuffer hasn't 
> been fully written to when remap() is called, then the new MappedByteBuffer 
> will include the unwritten area as well as an additional region of the file. 
> Do you think this could cause issues with performance like we saw with the 
> highly variable throughput in the appender? I'm guessing the idea here is to 
> continue writing wherever we left off while also providing a larger 
> ByteBuffer for producers to use if the normally sized one is too small. But 
> in that case, the producer can simply keep draining the buffer as it goes.
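> 
> To illustrate, the alternative I have in mind is a remap that simply continues 
> from wherever writing left off, with a normally sized region. A simplified sketch 
> (not the actual MemoryMappedFileManager code):
> 
> import java.io.IOException;
> import java.nio.MappedByteBuffer;
> import java.nio.channels.FileChannel;
> import java.nio.channels.FileChannel.MapMode;
> 
> final class RemapSketch {
>     private final FileChannel channel;
>     private final int regionLength;
>     private long mappedOffset; // file offset where the current mapping starts
>     private MappedByteBuffer buffer;
> 
>     RemapSketch(FileChannel channel, int regionLength) throws IOException {
>         this.channel = channel;
>         this.regionLength = regionLength;
>         this.buffer = channel.map(MapMode.READ_WRITE, 0, regionLength);
>     }
> 
>     /** Map a fresh, normally sized region starting where the last write ended. */
>     void remap() throws IOException {
>         buffer.force();                    // flush what has been written so far
>         mappedOffset += buffer.position(); // skip past the bytes already written
>         buffer = channel.map(MapMode.READ_WRITE, mappedOffset, regionLength);
>     }
> }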
> 
> On 26 February 2017 at 19:27, Remko Popma <remko.po...@gmail.com> wrote:
> Afonso, are you a robot?
> 
> On Mon, Feb 27, 2017 at 8:44 AM, Afonso Murakami <murakam...@icloud.com> wrote:
> That is great, thank you very much for your help.
> 
>> On Feb 26, 2017, at 3:36 PM, Matt Sicker <boa...@gmail.com> wrote:
>> 
>> Afonso, are you on the right mailing list?
>> 
>> Anyways, I've made more updates to my playground. Without implementing 
>> manual buffering on top of RandomAccessFile or FileChannel (or the async 
>> file channel, but that one has a bug in it right now occasionally causing OOM 
>> errors, which is probably due to a misunderstanding of how many ByteBuffers I 
>> can queue up asynchronously before I'm supposed to stop and wait), the Java 101 
>> style of using new BufferedOutputStream(new 
>> FileOutputStream(file)) is currently the fastest appender I have. I'm also 
>> trying to implement a MappedByteBuffer version, but now I'm rediscovering 
>> the reason why we have a ByteBufferDestination interface (plus all I'm 
>> getting as a result are 2 GB files filled with nulls, so I'm doing something 
>> wrong here anyways).
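>> 
>> By "Java 101 style" I mean essentially this shape, simplified (the playground 
>> version has more to it):
>> 
>> import java.io.BufferedOutputStream;
>> import java.io.File;
>> import java.io.FileOutputStream;
>> import java.io.IOException;
>> import java.io.OutputStream;
>> 
>> final class BufferedStreamAppender implements AutoCloseable {
>>     private final OutputStream out;
>> 
>>     BufferedStreamAppender(File file) throws IOException {
>>         this.out = new BufferedOutputStream(new FileOutputStream(file));
>>     }
>> 
>>     // synchronized so multiple logging threads can share one appender
>>     synchronized void append(byte[] encodedEvent) throws IOException {
>>         out.write(encodedEvent);
>>     }
>> 
>>     @Override
>>     public synchronized void close() throws IOException {
>>         out.close();
>>     }
>> }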
>> 
>> I was also able to fix some strange performance issues in the tests that 
>> were being caused by my Layout implementation, so I've swapped in a trivial 
>> one that assumes input is in ASCII and casts chars to bytes to encode them. 
>> In Java 9, we may be able to do something with the compact strings feature 
>> they added for ISO-8859-1 strings.
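>> 
>> The trivial layout is basically this (illustrative only; it is only correct for 
>> pure ASCII input):
>> 
>> import java.nio.ByteBuffer;
>> 
>> final class AsciiCastLayout {
>>     void encode(CharSequence message, ByteBuffer destination) {
>>         for (int i = 0; i < message.length(); i++) {
>>             destination.put((byte) message.charAt(i)); // only valid for code points < 128
>>         }
>>         destination.put((byte) '\n');
>>     }
>> }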
>> 
>> There are also some additional benchmarks added that I found interesting 
>> while comparing some approaches. For one, copying from a directly-allocated 
>> ByteBuffer into a byte[] (via ByteBuffer::get(byte[])) is almost twice as 
>> fast as copying from a heap-allocated ByteBuffer into a byte[]. Another 
>> interesting thing I found was that doing new Date().toString() is faster 
>> than Instant.now().toString().
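>> 
>> Concretely, the two copy paths being compared are these (stripped of the JMH 
>> scaffolding):
>> 
>> import java.nio.ByteBuffer;
>> 
>> final class BufferCopyComparison {
>>     public static void main(String[] args) {
>>         byte[] source = new byte[8192];
>>         byte[] target = new byte[8192];
>> 
>>         ByteBuffer direct = ByteBuffer.allocateDirect(source.length);
>>         direct.put(source);
>>         direct.flip();
>>         direct.get(target); // direct buffer -> byte[]: the faster case in my runs
>> 
>>         ByteBuffer heap = ByteBuffer.allocate(source.length);
>>         heap.put(source);
>>         heap.flip();
>>         heap.get(target);   // heap buffer -> byte[]
>>     }
>> }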
>> 
>> On 26 February 2017 at 16:10, Afonso Murakami <murakam...@icloud.com> wrote:
>> Thank you for your help and time.
>> 
>>> On Feb 26, 2017, at 2:05 PM, Matt Sicker <boa...@gmail.com> wrote:
>>> 
>>> So far I've discovered that my implementation of Layout that uses 
>>> CharsetEncoder.encode(CharBuffer, ByteBuffer, boolean) is slower than the 
>>> version that just uses CharsetEncoder.encode(CharBuffer). That could be 
>>> causing issues here.
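>>> 
>>> For clarity, these are the two encode paths I'm comparing (simplified; a real 
>>> layout would also check the CoderResult for overflow):
>>> 
>>> import java.nio.ByteBuffer;
>>> import java.nio.CharBuffer;
>>> import java.nio.charset.CharacterCodingException;
>>> import java.nio.charset.CharsetEncoder;
>>> import java.nio.charset.StandardCharsets;
>>> 
>>> final class EncoderVariants {
>>>     // Variant 1: encode into a caller-supplied ByteBuffer (the slower one so far).
>>>     static void encodeInto(CharsetEncoder encoder, CharBuffer text, ByteBuffer destination) {
>>>         encoder.reset();
>>>         encoder.encode(text, destination, true); // endOfInput = true
>>>         encoder.flush(destination);
>>>     }
>>> 
>>>     // Variant 2: let the encoder allocate a fresh ByteBuffer itself.
>>>     static ByteBuffer encodeAllocating(CharsetEncoder encoder, CharBuffer text)
>>>             throws CharacterCodingException {
>>>         return encoder.encode(text);
>>>     }
>>> 
>>>     public static void main(String[] args) throws CharacterCodingException {
>>>         CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder();
>>>         encodeInto(encoder, CharBuffer.wrap("hello"), ByteBuffer.allocate(64));
>>>         ByteBuffer allocated = encodeAllocating(encoder, CharBuffer.wrap("hello"));
>>>     }
>>> }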
>>> 
>>> I did try out FileChannel, and based on my current implementations, it's 
>>> around the same speed as using Files.newOutputStream(). My 
>>> AsynchronousFileChannel usage must be incorrect as it's working much slower 
>>> than the synchronous form.
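>>> 
>>> For reference, the basic shape of an AsynchronousFileChannel write is something 
>>> like this minimal sketch (not my actual appender code, which clearly still has 
>>> a bug somewhere):
>>> 
>>> import java.io.IOException;
>>> import java.nio.ByteBuffer;
>>> import java.nio.channels.AsynchronousFileChannel;
>>> import java.nio.channels.CompletionHandler;
>>> import java.nio.charset.StandardCharsets;
>>> import java.nio.file.Paths;
>>> import java.nio.file.StandardOpenOption;
>>> import java.util.concurrent.atomic.AtomicLong;
>>> 
>>> final class AsyncChannelSketch {
>>>     public static void main(String[] args) throws IOException, InterruptedException {
>>>         AsynchronousFileChannel channel = AsynchronousFileChannel.open(
>>>                 Paths.get("async-test.log"),
>>>                 StandardOpenOption.CREATE, StandardOpenOption.WRITE);
>>> 
>>>         // the caller must track the file position; the channel has no implicit cursor
>>>         AtomicLong position = new AtomicLong();
>>>         ByteBuffer event = ByteBuffer.wrap("hello async\n".getBytes(StandardCharsets.US_ASCII));
>>> 
>>>         long writeAt = position.getAndAdd(event.remaining());
>>>         channel.write(event, writeAt, null, new CompletionHandler<Integer, Void>() {
>>>             @Override
>>>             public void completed(Integer bytesWritten, Void attachment) {
>>>                 // the ByteBuffer could be returned to a pool here
>>>             }
>>> 
>>>             @Override
>>>             public void failed(Throwable cause, Void attachment) {
>>>                 cause.printStackTrace();
>>>             }
>>>         });
>>> 
>>>         Thread.sleep(100); // crude wait so the JVM does not exit before the write completes
>>>         channel.close();
>>>     }
>>> }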
>>> 
>>> On 26 February 2017 at 13:03, Afonso Murakami <murakam...@icloud.com> wrote:
>>> I don’t know if that makes any difference, but I clicked on the link 
>>> https://githhub.com/vz/nio-looger to see what it is about, and when I finished 
>>> I signed out of my account. That may have screwed something up, or maybe not; 
>>> I’m not sure if I did that or not, sorry.
>>> 
>>>> On Feb 26, 2017, at 10:49 AM, Matt Sicker <boa...@gmail.com> wrote:
>>>> 
>>>> I could be doing something wrong entirely here, but so far, the fastest 
>>>> way I've found to write to a file (not including mmap yet) is via:
>>>> 
>>>> new BufferedOutputStream(Files.newOutputStream(Paths.get("test.log")))
>>>> 
>>>> And this is with added synchronization on the append() method, too. Also, 
>>>> one of my updates to the async channel version is causing OOM errors in 
>>>> JMH now, so I broke something.
>>>> 
>>>> On 26 February 2017 at 12:22, Matt Sicker <boa...@gmail.com> wrote:
>>>> I added some basic JMH tests to my repo along with a couple alternative 
>>>> appender implementations. I got rid of the unnecessary file region locking 
>>>> in the async file channel one, but it's still coming out quite a bit 
>>>> slower than the RandomAccessFile and Files.newOutputStream() based 
>>>> appenders, though that could be due to the use of Phaser (which I only 
>>>> added to cleanly close the appender synchronously).
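>>>> 
>>>> Roughly, the Phaser is used like this: one party for the closer plus one per 
>>>> in-flight write, so close() can block until everything queued has completed 
>>>> (a simplified sketch of the idea, not the exact playground code):
>>>> 
>>>> import java.io.IOException;
>>>> import java.nio.ByteBuffer;
>>>> import java.nio.channels.AsynchronousFileChannel;
>>>> import java.nio.channels.CompletionHandler;
>>>> import java.util.concurrent.Phaser;
>>>> import java.util.concurrent.atomic.AtomicLong;
>>>> 
>>>> final class PhaserCloseSketch {
>>>>     private final AsynchronousFileChannel channel;
>>>>     private final Phaser inFlight = new Phaser(1); // party 0 is the closer
>>>>     private final AtomicLong position = new AtomicLong();
>>>> 
>>>>     PhaserCloseSketch(AsynchronousFileChannel channel) {
>>>>         this.channel = channel;
>>>>     }
>>>> 
>>>>     void append(ByteBuffer encodedEvent) {
>>>>         inFlight.register(); // one party per outstanding write
>>>>         long writeAt = position.getAndAdd(encodedEvent.remaining());
>>>>         channel.write(encodedEvent, writeAt, null, new CompletionHandler<Integer, Void>() {
>>>>             @Override
>>>>             public void completed(Integer bytesWritten, Void attachment) {
>>>>                 inFlight.arriveAndDeregister();
>>>>             }
>>>> 
>>>>             @Override
>>>>             public void failed(Throwable cause, Void attachment) {
>>>>                 inFlight.arriveAndDeregister();
>>>>             }
>>>>         });
>>>>     }
>>>> 
>>>>     void close() throws IOException {
>>>>         inFlight.arriveAndAwaitAdvance(); // blocks until all outstanding writes arrive
>>>>         channel.close();
>>>>     }
>>>> }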
>>>> 
>>>> On 26 February 2017 at 10:05, Matt Sicker <boa...@gmail.com> wrote:
>>>> Perhaps something got optimized by the JVM? I'll add some JMH tests to 
>>>> this repo to try out various approaches.
>>>> 
>>>> On Sat, Feb 25, 2017 at 21:12, Apache <ralph.go...@dslextreme.com> wrote:
>>>> I tried using a FileChannel for the FileAppender a week or so ago to see 
>>>> if passing the ByteBuffer to the FileChannel would improve performance 
>>>> since it doesn’t have to be synchronized. I didn’t see any improvement 
>>>> though and I ended up reverting it. But I might have done something wrong.
>>>> 
>>>> Ralph
>>>> 
>>>>> On Feb 25, 2017, at 4:19 PM, Matt Sicker <boa...@gmail.com> wrote:
>>>>> 
>>>>> We already use a bit of NIO (ByteBuffer for layouts and 
>>>>> appenders/managers, MappedByteBuffer for mmap'd files, FileLock for 
>>>>> locking files, etc.), and I've been playing around with the NIO API 
>>>>> lately. I have some sample code here <https://github.com/jvz/nio-logger> 
>>>>> to show a trivial use case of 
>>>>> AsynchronousFileChannel. In Java 7, there is also 
>>>>> AsynchronousSocketChannel which could theoretically be used instead of 
>>>>> adding Netty for a faster socket appender. In that regard, I'm curious as 
>>>>> to how useful it would be to have appenders similar to the OutputStream 
>>>>> ones, but instead using WritableByteChannel, GatheringByteChannel 
>>>>> (possible parallelization of file writing?), and the async channels 
>>>>> (there's an AsynchronousByteChannel interface, but I think they screwed this 
>>>>> one up as only one of the three async channel classes implements it).
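>>>>> 
>>>>> As an example of what the channel-based idea could look like at its simplest, 
>>>>> a gathering write lets the layout hand over several buffers (say a header, the 
>>>>> event body, and a separator) in a single call:
>>>>> 
>>>>> import java.io.IOException;
>>>>> import java.nio.ByteBuffer;
>>>>> import java.nio.channels.FileChannel;
>>>>> import java.nio.channels.GatheringByteChannel;
>>>>> import java.nio.charset.StandardCharsets;
>>>>> import java.nio.file.Paths;
>>>>> import java.nio.file.StandardOpenOption;
>>>>> 
>>>>> final class GatheringWriteSketch {
>>>>>     public static void main(String[] args) throws IOException {
>>>>>         try (FileChannel channel = FileChannel.open(Paths.get("gather-test.log"),
>>>>>                 StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
>>>>>             GatheringByteChannel gathering = channel; // FileChannel implements it
>>>>>             ByteBuffer[] parts = {
>>>>>                     ByteBuffer.wrap("INFO  ".getBytes(StandardCharsets.US_ASCII)),
>>>>>                     ByteBuffer.wrap("hello gathering write".getBytes(StandardCharsets.US_ASCII)),
>>>>>                     ByteBuffer.wrap("\n".getBytes(StandardCharsets.US_ASCII))
>>>>>             };
>>>>>             gathering.write(parts); // one call, potentially one underlying writev
>>>>>         }
>>>>>     }
>>>>> }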
>>>>> 
>>>>> Another related issue I've seen is that in a message-oriented appender 
>>>>> (e.g., the Kafka one), being able to stream directly to a ByteBuffer is 
>>>>> not the right way to go about encoding log messages into the appender. 
>>>>> Instead, I was thinking that a pool of reusable ByteBuffers could be used 
>>>>> here where a ByteBuffer is borrowed on write and returned on completion 
>>>>> (via a CompletionHandler callback). The Kafka client uses a similar 
>>>>> strategy for producing messages by dynamically allocating a pool of 
>>>>> ByteBuffers based on available memory.
>>>>> 
>>>>> Also, I don't have much experience with this, but if we had a pool of 
>>>>> reusable ByteBuffers, could we use direct allocation to get off-heap 
>>>>> buffers? That seems like an interesting use case.
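>>>>> 
>>>>> The pool I have in mind is something like this sketch, using direct buffers so 
>>>>> they live off-heap (the real thing would need a sizing policy along the lines 
>>>>> of the Kafka client's):
>>>>> 
>>>>> import java.nio.ByteBuffer;
>>>>> import java.util.concurrent.ArrayBlockingQueue;
>>>>> import java.util.concurrent.BlockingQueue;
>>>>> 
>>>>> final class ByteBufferPool {
>>>>>     private final BlockingQueue<ByteBuffer> pool;
>>>>> 
>>>>>     ByteBufferPool(int buffers, int bufferSize) {
>>>>>         pool = new ArrayBlockingQueue<>(buffers);
>>>>>         for (int i = 0; i < buffers; i++) {
>>>>>             pool.add(ByteBuffer.allocateDirect(bufferSize)); // off-heap, reused for the appender's lifetime
>>>>>         }
>>>>>     }
>>>>> 
>>>>>     /** Borrowed on write; blocking here naturally throttles producers. */
>>>>>     ByteBuffer borrow() throws InterruptedException {
>>>>>         return pool.take();
>>>>>     }
>>>>> 
>>>>>     /** Returned from the CompletionHandler once the write has finished. */
>>>>>     void giveBack(ByteBuffer buffer) {
>>>>>         buffer.clear();
>>>>>         pool.offer(buffer);
>>>>>     }
>>>>> }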
>>>>> 
>>>>> -- 
>>>>> Matt Sicker <boa...@gmail.com>
>>>> 
>>>> -- 
>>>> Matt Sicker <boa...@gmail.com>
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Matt Sicker <boa...@gmail.com>
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Matt Sicker <boa...@gmail.com>
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Matt Sicker <boa...@gmail.com>
>> 
>> 
>> 
>> 
>> -- 
>> Matt Sicker <boa...@gmail.com>
> 
> 
> 
> 
> 
> -- 
> Matt Sicker <boa...@gmail.com>
> 
> 
> 
> -- 
> Matt Sicker <boa...@gmail.com>
