Hello all

Is there a best practice for using my own classes as keys and values?

My first attempt at doing this was successful - I built a
BigIntegerWritable class using IntWritable as a template.  It was easy
because BigInteger has methods for converting to and from byte arrays,
which I could then write to the DataOutput or read from the
DataInput.
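
For reference, the core of it looked roughly like this (a sketch from
memory rather than the exact class, so the names are just what I
happened to use):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.math.BigInteger;

import org.apache.hadoop.io.WritableComparable;

public class BigIntegerWritable implements WritableComparable {

    private BigInteger value = BigInteger.ZERO;

    public BigIntegerWritable() {}

    public BigIntegerWritable(BigInteger value) { this.value = value; }

    public BigInteger get() { return value; }
    public void set(BigInteger value) { this.value = value; }

    public void write(DataOutput out) throws IOException {
        byte[] bytes = value.toByteArray();   // BigInteger -> byte[]
        out.writeInt(bytes.length);           // length prefix
        out.write(bytes);
    }

    public void readFields(DataInput in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        value = new BigInteger(bytes);        // byte[] -> BigInteger
    }

    public int compareTo(Object o) {
        return value.compareTo(((BigIntegerWritable) o).value);
    }
}

The length prefix is there so readFields() knows how many bytes to
pull back off the stream.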

For more complex classes, it seems like I should be able to use Java
object serialization to read from and write to the DataInput/DataOutput
objects, and have my own classes implement the Writable interface that
way.  Something like this:

import java.io.*;

import org.apache.hadoop.io.*;

public class Sample implements Writable {

    Address address;
    SampleValue value;         // sampled value at this point

    public Sample(Address a, SampleValue v) {
        address = a;
        value = v;
    }

    public SampleValue getValue()  { return value;}
    public Address getAddress() { return address; }

    public String toString() {
        return (address.toString() + " " + value.toString());
    }

[...]

    public void readFields(DataInput in) throws IOException {
        // cast the DataInput to DataInputBuffer (an InputStream) so an
        // ObjectInputStream can wrap it
        ObjectInputStream oin = new ObjectInputStream((DataInputBuffer)in);

        try {
            address = (Address)oin.readObject();
            value = (SampleValue)oin.readObject();
        } catch (ClassNotFoundException e) {
            throw new IOException(e.toString());
        }

    }

    public void write(DataOutput out) throws IOException {
        // cast the DataOutput to DataOutputBuffer (an OutputStream) so an
        // ObjectOutputStream can wrap it
        ObjectOutputStream oout = new ObjectOutputStream((DataOutputBuffer)out);

        oout.writeObject(address);
        oout.writeObject(value);
    }
}

This code compiles, but at runtime it throws an exception complaining
that WritableComparator can not access a member of class Sample with
modifiers "".  Can someone tell me what this exception means?
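
My guess (and I may be off base) is that WritableComparator creates
key instances through reflection, which would need the class and a
no-argument constructor to both be public - the modifiers "" part of
the message looks like something has default (package) access instead.
If that's right, maybe the fix is as simple as adding one to Sample:

    // public no-arg constructor so the framework can instantiate
    // Sample reflectively and then fill it in via readFields()
    public Sample() {
    }

but I'd appreciate a sanity check from someone who knows the internals.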

Do I need to implement a WritableComparator for each of my classes
that implements Writable?

Thanks again for the help.

-steve
