It sounds like you wish to read files while they're still being written. Is that correct? If so, that's not always reliable, since entries in the file may be only partially written when you attempt to read them. One can handle such errors by backing up and re-attempting the read after the file has grown more, but this can become complex.
Rather, one might initially create files in a staging directory, then periodically close them and move them to the directory where they are to be read. This works well since file renames are atomic in most filesystems. If this generates too many small files, a consolidation job can be run periodically to replace sets of small files with larger files containing their combined content. (A rough sketch of this pattern is below, after your quoted message.)

To answer your specific question: you can store arbitrary values in a file's metadata, but those values must be set before any data is written to the file, since the metadata lives in the file header. So it wouldn't work to store the current end of file in the metadata, if that's what you were asking. If you want the next run to resume where the last one stopped, that position has to be kept somewhere outside the file; a sketch of that is below as well.

Doug

On Fri, Sep 27, 2013 at 4:54 PM, Alan Miller <[email protected]> wrote:
> Hi,
>
> Here's my scenario.
>
> One Hadoop job collects incoming Flume data and keeps appending
> records to Avro files. Every 30 minutes the file just grows. Another
> Hadoop job runs every hour and reads the above files. When this job
> finishes I want to keep track of where in the file (offset) it left off
> so that the next iteration can immediately seek to that position.
>
> Can I use the DataFileWriter's setMeta(String key, long value)
> method to update a meta field with the position and use the DataFileReader's
> getMeta(String key, long value) & seek(long position) methods
> to implement this?
>
> Is that reasonable? Currently I'm only using the Java API.
> Are these methods implemented in the Ruby API too?
>
> Thanks,
> Alan
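
Here is what I mean by the staging/rename approach, just as a sketch. The directory names (/flume/staging, /flume/ready) and the use of the generic API are assumptions on my part, not anything from your setup:

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StageAndPublish {

  // Write a complete Avro file in a staging directory, then atomically
  // rename it into the directory the hourly job reads from.
  public static void publish(Schema schema, Iterable<GenericRecord> records,
                             String fileName) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path staging = new Path("/flume/staging/" + fileName);  // hypothetical paths
    Path ready = new Path("/flume/ready/" + fileName);

    DataFileWriter<GenericRecord> writer = new DataFileWriter<GenericRecord>(
        new GenericDatumWriter<GenericRecord>(schema));
    writer.create(schema, fs.create(staging));
    for (GenericRecord r : records) {
      writer.append(r);
    }
    writer.close();  // the file is complete before any reader can see it

    // The rename is atomic, so the reading job never sees a partial file.
    if (!fs.rename(staging, ready)) {
      throw new java.io.IOException("rename failed: " + staging);
    }
  }
}

The consolidation job can work the same way: write the merged file to the staging directory, rename it into place, then delete the small files it replaced.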

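For resuming, here is a minimal sketch that keeps the reader's position in a side file; the file names are made up, and it assumes the Avro file is not being appended to while you read it, per the above. If I recall correctly, the position returned by DataFileReader's previousSync() can be passed back to seek() in a later run:

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class ResumeRead {
  public static void main(String[] args) throws Exception {
    File avroFile = new File("events.avro");            // hypothetical name
    Path offsetFile = Paths.get("events.avro.offset");  // hypothetical side file

    DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
        avroFile, new GenericDatumReader<GenericRecord>());

    // Resume from the block boundary recorded by the previous run, if any.
    if (Files.exists(offsetFile)) {
      long saved = Long.parseLong(
          new String(Files.readAllBytes(offsetFile), StandardCharsets.UTF_8).trim());
      reader.seek(saved);  // expects a value from previousSync() or DataFileWriter.sync()
    }

    GenericRecord record = null;
    while (reader.hasNext()) {
      record = reader.next(record);
      // ... process the record ...
    }

    // Remember the last block boundary reached, for the next run.
    Files.write(offsetFile,
        Long.toString(reader.previousSync()).getBytes(StandardCharsets.UTF_8));
    reader.close();
  }
}

Note that the saved position is a block boundary, not a per-record offset, so if a run were to stop mid-block the next run would re-read that block's earlier records.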