[ https://issues.apache.org/jira/browse/IO-279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13701007#comment-13701007 ]

Otis Gospodnetic commented on IO-279:
-------------------------------------

bq. My case occurs on Linux (Debian), where I wrote a tool to tail GlassFish log 
files and output them to Kafka. Every now and then it spits out the entire log 
file again, which makes the Tailer useless for me.

What about tracking the current position/line in the file, at least 
approximately?
Then, after detecting an apparently new/rotated file, one could check the 
file's size (or some such) and compare it with the saved offset to answer a 
question like: "Does this apparently new file that I'm about to start tailing 
from the beginning actually already contain the offset I was at before? If so, 
maybe this is the same file and somebody just touched it; in that case, let me 
just jump back to that offset."

Doable?
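
Something along these lines, perhaps (just a sketch; the method and variable 
names below are made up for illustration and are not part of Tailer):

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class RotationCheckSketch {

    /**
     * Rough sketch of the idea: when the file looks new/rotated, check whether
     * it already contains the offset we had reached. If it does, it may be the
     * same file whose timestamp was merely touched, so resume from that offset
     * instead of re-reading everything from the beginning.
     */
    static long repositionAfterApparentRotation(File file, RandomAccessFile reader,
                                                long savedPosition) throws IOException {
        long newLength = file.length();
        if (newLength >= savedPosition) {
            // The "new" file already holds at least as many bytes as we had read:
            // assume it is the same file and jump back to the old offset.
            reader.seek(savedPosition);
            return savedPosition;
        }
        // Genuinely shorter than the saved offset: treat it as truncated/rotated
        // and start tailing from the beginning again.
        reader.seek(0);
        return 0;
    }
}
{code}

Of course this is only a heuristic: a truly new file that happens to be larger 
than the saved offset would still be mistaken for the old one.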

                
> Tailer erroneously considers file as new
> ----------------------------------------
>
>                 Key: IO-279
>                 URL: https://issues.apache.org/jira/browse/IO-279
>             Project: Commons IO
>          Issue Type: Bug
>    Affects Versions: 2.0.1, 2.4
>            Reporter: Sergio Bossa
>         Attachments: fix-tailer.patch, IO-279.patch, modify-test-fixed.patch, 
> modify-test.patch
>
>
> Tailer sometimes erroneously considers the tailed file as new, forcing a 
> repositioning at the start of the file: I'm still unable to reproduce this in 
> a test case, because it only happens to me with huge log files during Apache 
> Tomcat startup.
> This is the piece of code causing the problem:
> {code}
> // See if the file needs to be read again
> if (length > position) {
>     // The file has more content than it did last time
>     last = System.currentTimeMillis();
>     position = readLines(reader);
> } else if (FileUtils.isFileNewer(file, last)) {
>     /* This can happen if the file is truncated or overwritten
>      * with the exact same length of information. In cases like
>      * this, the file position needs to be reset
>      */
>     position = 0;
>     reader.seek(position); // cannot be null here
>     // Now we can read new lines
>     last = System.currentTimeMillis();
>     position = readLines(reader);
> }
> {code}
> What probably happens is that the new file content is about to be written to 
> disk: the date is already updated, but the content is not yet flushed, so the 
> actual length is unchanged, and there you go.
> In other words, I think there should be a better way to verify the condition 
> above than relying only on dates: keeping and comparing the hash code of the 
> latest line may be a solution, but it may hurt performance ... other ideas?
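
A rough sketch of the last-line hashing idea mentioned above (purely 
illustrative; it simply hashes the line re-read at the remembered offset with 
String.hashCode(), which the issue does not prescribe):

{code}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class LastLineHashSketch {

    /**
     * Illustrative only: remember the offset and hash of the last line read;
     * on an apparent rotation, re-read the line at that offset and compare
     * hashes. A match suggests it is still the same file.
     */
    static boolean looksLikeSameFile(File file, long lastLineOffset,
                                     int lastLineHash) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            if (raf.length() <= lastLineOffset) {
                return false; // shorter than the remembered offset: not the same content
            }
            raf.seek(lastLineOffset);
            String line = raf.readLine(); // note: readLine() does no charset decoding
            return line != null && line.hashCode() == lastLineHash;
        }
    }
}
{code}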

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
