[
https://issues.apache.org/jira/browse/LOG4J2-431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13855857#comment-13855857
]
Claude Mamo edited comment on LOG4J2-431 at 12/23/13 7:13 PM:
--------------------------------------------------------------
Hi Remko, I reviewed the cases you mentioned and these should be covered:
{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;
    try {
        do {
            // re-map if no room is left in buffer
            if (length > mappedFile.remaining() && chunk != 0) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(
                        FileChannel.MapMode.READ_WRITE, randomAccessFile.length(), mapSize);
            }
            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("RandomAccessFileManager (" + getName() + ") " + ex);
    }
}
{code}
I found a minor bug where the initial buffer is not used if the first log entry
exceeds the map size. I attached an updated version of the
MemoryMappedFileManager.
was (Author: claude.mamo):
Hi Remko, I reviewed the cases you mentioned and these should be covered:
{code:title=MemoryMappedFileManager.java|borderStyle=solid}
protected synchronized void write(final byte[] bytes, int offset, int length) {
    int chunk = 0;
    try {
        do {
            // re-map if no room is left in buffer
            if (length > mappedFile.remaining() && chunk != 0) {
                fileSize = randomAccessFile.length();
                mappedFile = randomAccessFile.getChannel().map(
                        FileChannel.MapMode.READ_WRITE, randomAccessFile.length(), mapSize);
            }
            chunk = Math.min(length, mappedFile.remaining());
            mappedFile.put(bytes, offset, chunk);
            offset += chunk;
            length -= chunk;
        } while (length > 0);
    } catch (final Exception ex) {
        LOGGER.error("RandomAccessFileManager (" + getName() + ") " + ex);
    }
}
{code}
I found a minor bug where unnecessary re-mapping is performed on the first log
entry if the write is larger than the map size. I attached an updated version
of the MemoryMappedFileManager.
> Create MemoryMappedFileAppender
> -------------------------------
>
> Key: LOG4J2-431
> URL: https://issues.apache.org/jira/browse/LOG4J2-431
> Project: Log4j 2
> Issue Type: New Feature
> Components: Appenders
> Reporter: Remko Popma
> Priority: Minor
> Attachments: MemoryMappedFileAppender.java,
> MemoryMappedFileAppenderTest.java, MemoryMappedFileAppenderTest.xml,
> MemoryMappedFileManager.java, MemoryMappedFileManagerTest.java
>
>
> A memory-mapped file appender may have better performance than the ByteBuffer
> + RandomAccessFile combination used by the RandomAccessFileAppender.
> *Drawbacks*
> * The file needs to be pre-allocated, and only up to the file size can be
> mapped into memory. When the end of the mapped region is reached the appender
> needs to extend the file and re-map.
> * Re-mapping is expensive (I think single-digit millisecond range, need to
> check). For low-latency apps this kind of spike may be unacceptable, so
> careful tuning is required.
> * Memory usage: If re-mapping happens too often you lose the performance
> benefits, so the memory-mapped buffer needs to be fairly large, which uses up
> memory.
> * At roll-over and shutdown the file should be truncated to immediately after
> the last written data (otherwise the user is left with a log file that ends
> in a run of zero bytes); see the sketch after this list.
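> To make the extend/re-map and truncate steps concrete, here is a minimal
> standalone sketch using plain JDK APIs. It is not the attached
> MemoryMappedFileManager; the class and field names (MappedRegionSketch,
> regionStart, regionLength) are made up for illustration:
> {code:java}
> import java.io.RandomAccessFile;
> import java.nio.MappedByteBuffer;
> import java.nio.channels.FileChannel;
>
> // Illustrative only: pre-allocates a mapped region, extends the file and
> // re-maps when the region is full, and truncates the file on close.
> public class MappedRegionSketch {
>     private final RandomAccessFile file;
>     private final long regionLength; // size of each mapped region
>     private long regionStart;        // file offset where the current region begins
>     private MappedByteBuffer region;
>
>     MappedRegionSketch(final String fileName, final long regionLength) throws Exception {
>         this.file = new RandomAccessFile(fileName, "rw");
>         this.regionLength = regionLength;
>         this.regionStart = file.length();
>         // map() grows the file to regionStart + regionLength: this is the pre-allocation
>         this.region = file.getChannel().map(FileChannel.MapMode.READ_WRITE, regionStart, regionLength);
>     }
>
>     synchronized void write(final byte[] bytes, int offset, int length) throws Exception {
>         do {
>             if (region.remaining() == 0) {
>                 // end of the mapped region: extend the file and re-map (the expensive step)
>                 regionStart += regionLength;
>                 region = file.getChannel().map(FileChannel.MapMode.READ_WRITE, regionStart, regionLength);
>             }
>             final int chunk = Math.min(length, region.remaining());
>             region.put(bytes, offset, chunk);
>             offset += chunk;
>             length -= chunk;
>         } while (length > 0);
>     }
>
>     synchronized void close() throws Exception {
>         region.force();
>         // truncate to just after the last written byte so the file does not end in zero padding
>         // (note: on Windows the mapping may need to be released before the file can be shrunk)
>         file.setLength(regionStart + region.position());
>         file.close();
>     }
> }
> {code}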
> *Advantages*
> Measuring on a Solaris box, the difference between flushing to disk (with
> {{RandomAccessFile.write(byte[])}}) and putting data in a MappedByteBuffer
> is about 20x: around 600ns for a ByteBuffer put and around 12-15 microseconds
> for a RandomAccessFile.write.
> (Of course different hardware and OS may give different results...)
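> For reference, a crude way to reproduce such a measurement with plain JDK APIs
> (no JIT warm-up, single-threaded, file names made up; absolute numbers will
> vary with hardware, OS and file system):
> {code:java}
> import java.io.RandomAccessFile;
> import java.nio.MappedByteBuffer;
> import java.nio.channels.FileChannel;
>
> public class MmapVsWriteBench {
>     public static void main(final String[] args) throws Exception {
>         final int iterations = 100000;
>         final byte[] line = "2013-12-23 19:13:00,000 INFO sample log line\n".getBytes("US-ASCII");
>
>         try (RandomAccessFile raf = new RandomAccessFile("bench-raf.log", "rw")) {
>             final long start = System.nanoTime();
>             for (int i = 0; i < iterations; i++) {
>                 raf.write(line); // a write() system call per log line
>             }
>             System.out.printf("RandomAccessFile.write: %d ns/op%n", (System.nanoTime() - start) / iterations);
>         }
>
>         try (RandomAccessFile raf = new RandomAccessFile("bench-mmap.log", "rw")) {
>             final MappedByteBuffer buf = raf.getChannel().map(
>                     FileChannel.MapMode.READ_WRITE, 0, (long) iterations * line.length);
>             final long start = System.nanoTime();
>             for (int i = 0; i < iterations; i++) {
>                 buf.put(line); // a memory copy into the mapped pages, no system call
>             }
>             System.out.printf("MappedByteBuffer.put: %d ns/op%n", (System.nanoTime() - start) / iterations);
>         }
>     }
> }
> {code}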
> *Use cases*
> The difference may be most visible if {{immediateFlush}} is set to {{true}},
> which is only recommended if async loggers/appenders are not used. If
> {{immediateFlush=false}}, the large buffer used by RandomAccessFileAppender
> means you won't need to touch disk very often.
> So a MemoryMappedFileAppender is most useful in _synchronous_ logging
> scenarios, where you get the speed of writing to memory but the data is
> available on disk almost immediately. (A memory-mapped write goes directly
> into the OS page cache.)
> In case of an application crash, the OS ensures that all data in the buffer
> will be written to disk. In case of an OS crash the data that was most
> recently added to the buffer may not be written to disk.
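> Where that loss window matters, {{MappedByteBuffer.force()}} blocks until the
> mapping's dirty pages have reached the storage device, so an appender could
> invoke it at strategic points (before re-mapping, at roll-over, at shutdown)
> rather than per event. A tiny illustration with a made-up file name:
> {code:java}
> import java.io.RandomAccessFile;
> import java.nio.MappedByteBuffer;
> import java.nio.channels.FileChannel;
>
> public class ForceSketch {
>     public static void main(final String[] args) throws Exception {
>         try (RandomAccessFile raf = new RandomAccessFile("app.log", "rw")) {
>             final MappedByteBuffer buf = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
>             buf.put("last entry before shutdown\n".getBytes("US-ASCII"));
>             buf.force(); // blocks until the dirty pages are written to the device
>         }
>     }
> }
> {code}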
> Because by nature this appender would occupy a fair amount of memory, it is
> most suitable for applications running on server-class hardware with lots of
> memory available.