Reading from HBase using Python

2014-11-12 Thread Alan Prando
Hi all, I'm trying to read an HBase table using this example from GitHub (https://github.com/apache/spark/blob/master/examples/src/main/python/hbase_inputformat.py); however, I have two qualifiers in a column family. Ex.: ROW COLUMN+CELL row1 column=f1:1, timestamp=1401883411986,
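
[Editor's note: for reference, a minimal PySpark sketch of the read that example performs. The table name "test" and the ZooKeeper quorum "localhost" are placeholder assumptions; the converter classes are the ones shipped with the Spark examples.]

    from pyspark import SparkContext

    # Minimal sketch of the read done by hbase_inputformat.py.
    # "localhost" and "test" are placeholders for the real quorum and table.
    sc = SparkContext(appName="HBaseInputFormat")
    conf = {"hbase.zookeeper.quorum": "localhost",
            "hbase.mapreduce.inputtable": "test"}
    hbase_rdd = sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter="org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter",
        valueConverter="org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter",
        conf=conf)
    # Each element is a (row key, converted value) pair.
    print(hbase_rdd.collect())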

Re: Reading from HBase using Python

2014-11-12 Thread Ted Yu
Can you give us a bit more detail: which HBase release you're using, and whether you can reproduce this using the HBase shell. I did the following in the HBase shell against 0.98.4:

    hbase(main):001:0> create 'test', 'f1'
    0 row(s) in 2.9140 seconds
    => Hbase::Table - test
    hbase(main):002:0> put 'test', 'row1', 'f1:1',

Re: Reading from HBase using Python

2014-11-12 Thread Ted Yu
To my knowledge, Spark 1.1 comes with HBase 0.94. To utilize HBase 0.98, you will need https://issues.apache.org/jira/browse/SPARK-1297 — you can apply the patch and build Spark yourself. Cheers. On Wed, Nov 12, 2014 at 12:57 PM, Alan Prando a...@scanboo.com.br wrote: Hi Ted! Thanks for

Re: Reading from HBase using Python

2014-11-12 Thread Ted Yu
Looking at HBaseResultToStringConverter:

    override def convert(obj: Any): String = {
      val result = obj.asInstanceOf[Result]
      Bytes.toStringBinary(result.value())
    }

Here is the code for Result.value():

    public byte [] value() {
      if (isEmpty()) {
        return null;
      }
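
[Editor's note: the point this code makes for the original question is that Result.value() returns only the value of the first cell in the Result, so with two qualifiers in one family only one value per row reaches PySpark. A toy, pure-Python sketch of that effect; the dict-based stand-in and function name below are illustrative only, not the HBase API.]

    # Toy model of HBaseResultToStringConverter's behaviour (illustrative only):
    # like Result.value(), it keeps the first cell's value and drops the rest.
    def convert_first_cell_only(cells):
        # cells: qualifier -> value mapping standing in for an HBase Result
        first = sorted(cells)[0]
        return cells[first]

    row1 = {"f1:1": "value1", "f1:2": "value2"}
    print(convert_first_cell_only(row1))  # -> 'value1'; 'value2' never reaches Python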