Hi all,

I'd like to know the correct way to run a MapReduce job over a table managed
by Phoenix in order to put the data into another table (also managed by
Phoenix). Is it sufficient to read the data stored in column family 0 (e.g.
0:ID, 0:VALUE) in the mapper and issue upsert statements from the reducer to
write the rows into the output table? I have sketched what I mean below.
Should I also filter out the special empty marker value that Phoenix stores
in column 0:_0?
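
To make the question concrete, here is a minimal sketch of what I have in
mind: a plain HBase TableMapper reading the raw cells under family 0, and a
reducer writing through the Phoenix JDBC driver so that Phoenix takes care of
encoding the output rows itself. INPUT_TABLE, OUTPUT_TABLE, the ID/VALUE
qualifiers and the zk-host quorum are placeholders for my real schema, and
I'm assuming the columns are VARCHAR so the raw bytes decode as UTF-8
strings:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class PhoenixCopyJob {

    private static final byte[] FAMILY = Bytes.toBytes("0");
    private static final byte[] ID = Bytes.toBytes("ID");
    private static final byte[] VALUE = Bytes.toBytes("VALUE");

    // Mapper: reads the raw cells Phoenix stored under family "0",
    // skipping rows where the wanted qualifiers are absent (e.g. rows
    // carrying only the empty _0 marker cell).
    public static class ReadMapper extends TableMapper<Text, Text> {
        @Override
        protected void map(ImmutableBytesWritable row, Result result,
                Context ctx) throws IOException, InterruptedException {
            byte[] id = result.getValue(FAMILY, ID);
            byte[] value = result.getValue(FAMILY, VALUE);
            if (id == null || value == null) {
                return;
            }
            ctx.write(new Text(Bytes.toString(id)),
                    new Text(Bytes.toString(value)));
        }
    }

    // Reducer: upserts into the output table through the Phoenix JDBC
    // driver, so Phoenix handles the row encoding.
    public static class UpsertReducer extends Reducer<Text, Text, Text, Text> {
        private Connection conn;

        @Override
        protected void setup(Context ctx) throws IOException {
            try {
                conn = DriverManager.getConnection("jdbc:phoenix:zk-host");
                conn.setAutoCommit(false);
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            String sql = "UPSERT INTO OUTPUT_TABLE (ID, VALUE) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (Text v : values) {
                    ps.setString(1, key.toString());
                    ps.setString(2, v.toString());
                    ps.executeUpdate();
                }
                conn.commit();
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }

        @Override
        protected void cleanup(Context ctx) throws IOException {
            try {
                if (conn != null) conn.close();
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "phoenix-table-copy");
        job.setJarByClass(PhoenixCopyJob.class);

        // Scan only family "0", where Phoenix keeps the default columns.
        Scan scan = new Scan();
        scan.addFamily(FAMILY);
        TableMapReduceUtil.initTableMapperJob(
                "INPUT_TABLE", scan, ReadMapper.class,
                Text.class, Text.class, job);

        job.setReducerClass(UpsertReducer.class);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}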

Best,
FP
