[ https://issues.apache.org/jira/browse/HADOOP-5589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jingkei Ly updated HADOOP-5589:
-------------------------------

    Attachment: HADOOP-5589-2.patch

I've noticed a bug in my original patch where writeBitSet() would incorrectly detect when to write out a new long to the stream - this new patch should fix it. (A sketch of the encoding in question follows the quoted issue below.)

> TupleWritable: Lift implicit limit on the number of values that can be stored
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-5589
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5589
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.21.0
>            Reporter: Jingkei Ly
>         Attachments: HADOOP-5589-1.patch, HADOOP-5589-2.patch
>
>
> TupleWritable uses an instance field of the primitive type long, which I presume is so that it can quickly determine whether a position in its array of Writables has been written to (by using bit-shifting operations on the long field). The problem with this is that it imposes a maximum of 64 values that can be stored in a TupleWritable.
> An example of a use case where I think this would be a problem is joining the outputs of two MR jobs, each with over 64 reduce tasks, using CompositeInputFormat - this will probably produce unexpected results under the current scheme.
> At the very least, the 64-value limit should be documented in TupleWritable.
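For illustration only: the patches are not inlined in this message, so the class and method names and the exact wire format below are assumptions, not the patch's actual code. A length-prefixed, long-packed encoding of a java.util.BitSet, of the kind writeBitSet() suggests, might look like the following sketch. The per-word flush condition is exactly where an off-by-one of the sort described above can creep in:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.BitSet;

    public final class BitSetIO {

      // TupleWritable's original scheme kept a single "long written" field and
      // marked position i with: written |= 1L << i; -- hence the 64-value cap.
      // Replacing the long with a BitSet lifts the cap, but the bits must then
      // be serialized explicitly, e.g. as a length-prefixed run of longs.

      static void writeBitSet(DataOutput out, BitSet bits, int nbits)
          throws IOException {
        out.writeInt(nbits);
        long word = 0L;
        for (int i = 0; i < nbits; i++) {
          if (bits.get(i)) {
            word |= 1L << (i % 64);
          }
          // Flush each full 64-bit word, and flush the trailing partial word
          // on the last bit. Getting this condition wrong makes the writer
          // emit a long too early or too late - the kind of detection bug
          // the updated patch describes.
          if (i % 64 == 63 || i == nbits - 1) {
            out.writeLong(word);
            word = 0L;
          }
        }
      }

      static BitSet readBitSet(DataInput in) throws IOException {
        int nbits = in.readInt();
        BitSet bits = new BitSet(nbits);
        long word = 0L;
        for (int i = 0; i < nbits; i++) {
          if (i % 64 == 0) {
            word = in.readLong();  // start of the next packed word
          }
          if ((word & (1L << (i % 64))) != 0) {
            bits.set(i);
          }
        }
        return bits;
      }
    }

Round-tripping any nbits through writeBitSet()/readBitSet() reproduces the original BitSet using ceil(nbits/64) longs on the wire, so a tuple is no longer bounded at 64 values.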