We have an application where a row can contain anywhere from 1 to 3,600,000 cells (there is only one column family). In practice, most rows have fewer than 100 cells.
Now we want to run some MapReduce jobs that touch every cell within a range (e.g., counting how many cells we have). With scanner caching set to something like 250, the job chugs along for a long time until it hits a row with a lot of data, and then it dies, presumably because buffering 250 of those very wide rows at once is just too much data. Dropping the cache size to 1 row would presumably work, but the job would take forever to run.

We have worked around this by writing jobs that use coprocessors, which let us pull back sets of cells instead of sets of rows, but that means we can't use any of the built-in jobs that ship with HBase (e.g., CopyTable). Is there any way around this? Have other people had to deal with such high variability in their row sizes?
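For reference, here is roughly the shape of the scan-based counting job described above (a minimal sketch, not our exact code). The table name "mytable" and column family "cf" are placeholders, and it assumes the standard HBase TableMapper / TableMapReduceUtil setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class CellCountJob {

        static class CellCountMapper extends TableMapper<NullWritable, NullWritable> {
            @Override
            protected void map(ImmutableBytesWritable rowKey, Result row, Context context) {
                // Each Result is an entire row, so a 3.6M-cell row arrives as one
                // huge object -- multiplied by the scanner caching factor.
                context.getCounter("stats", "cells").increment(row.size());
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "cell count");
            job.setJarByClass(CellCountJob.class);

            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("cf")); // placeholder column family
            scan.setCaching(250);                // rows fetched per RPC -- the setting in question
            scan.setCacheBlocks(false);          // don't pollute the block cache on a full scan

            TableMapReduceUtil.initTableMapperJob(
                    "mytable",                   // placeholder table name
                    scan,
                    CellCountMapper.class,
                    NullWritable.class,
                    NullWritable.class,
                    job);
            job.setNumReduceTasks(0);
            job.setOutputFormatClass(NullOutputFormat.class); // counters only, no output files

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }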
