For both 1 and 2, if your metadata is a Writable, you can simply reuse
its write() and readFields() methods to serialize it into the meta
block's DataOutputStream and deserialize it back from the
DataInputStream.

For instance, assume dos is the output stream, dis is the input
stream, and obj1 (K) and obj2 (V) are my Writables; then I do:

To write K and V:
obj1.write(dos);
obj2.write(dos);

To read K and V back in the proper order (order matters when
deserializing), reconstruct your Writable objects and read them in:
obj1.readFields(dis);
obj2.readFields(dis);
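
For your TFile case specifically, here's a rough, untested sketch of the
full round trip: write two Writables into a named meta block, then read
them back. The meta block name "my.meta", the file path and the
Text/IntWritable pair are just placeholders, and I'm going from memory
of the org.apache.hadoop.io.file.tfile TFile.Writer/Reader constructors,
so double-check the arguments against the javadocs. As I understand it,
the meta block should be written after the regular key-value appends.

import java.io.DataInputStream;
import java.io.DataOutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.file.tfile.TFile;

public class MetaBlockRoundTrip {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/meta-demo.tfile");

    // Placeholder Writables standing in for your K and V types.
    Text obj1 = new Text("some-key");
    IntWritable obj2 = new IntWritable(42);

    // Write side.
    FSDataOutputStream fsdos = fs.create(path);
    TFile.Writer writer = new TFile.Writer(fsdos, 64 * 1024,
        TFile.COMPRESSION_NONE, TFile.COMPARATOR_MEMCMP, conf);
    // Regular <K,V> appends are what get indexed at the tail of the TFile.
    writer.append("row1".getBytes("UTF-8"), "data1".getBytes("UTF-8"));
    // Named meta block, written here after the regular appends.
    DataOutputStream dos = writer.prepareMetaBlock("my.meta");
    obj1.write(dos);   // serialize K
    obj2.write(dos);   // serialize V
    dos.close();
    writer.close();
    fsdos.close();

    // Read side: reconstruct the Writables and read them in the same order.
    FSDataInputStream fsdis = fs.open(path);
    TFile.Reader reader =
        new TFile.Reader(fsdis, fs.getFileStatus(path).getLen(), conf);
    DataInputStream dis = reader.getMetaBlock("my.meta");
    Text k = new Text();
    IntWritable v = new IntWritable();
    k.readFields(dis); // deserialize K first
    v.readFields(dis); // then V
    dis.close();
    reader.close();
    fsdis.close();

    System.out.println(k + " -> " + v);
  }
}

As far as I know, only the regular append()ed entries are key-indexed;
what you write into a meta block is just stored under that block's name,
which I believe answers the indexing concern in (1).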

Does this not work for you?

On Tue, Jul 3, 2012 at 6:37 AM, Dare <dayakar.bit...@gmail.com> wrote:
> Hi Hadoop Team,
>
> I have been working with TFiles in Hadoop and have a few questions
> regarding Named Meta Blocks.
>
> 1) Using TFile.Writer one can append a <K,V> pair. But if I prepare a Meta
> Block, it returns a DataOutputStream that only accepts byte[], while my <K,V>
> pairs are serialized objects.
>     Is writing them this way equivalent, or is there something I am missing?
> My understanding is that if I write them as <K,V> pairs, the key indexes are
> built at the tail of the TFile.
>     But when I write them as plain byte[], I am not sure the indexes get
> formed.
>
> 2) While reading, is there a way to read a <K,V> entry using the
> DataInputStream obtained from getMetaBlock()?
>
> Thanks
> DaRe



-- 
Harsh J