Okay thanks. I will take a look at it.
On 20-Sep-2016 1:03 AM, "Ted Yu" wrote:
> Please take a look at:
> hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/
> BulkLoadSuite.scala
>
> where usage of LoadIncrementalHFiles is demonstrated.
>
> This is in master branch of hbase.
>
> On Mon, Sep 19, 2016 at 12:10 PM, Punit Naik wrote:
Please take a look at:
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/BulkLoadSuite.scala
where usage of LoadIncrementalHFiles is demonstrated.
This is in master branch of hbase.
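
For reference, the suite shows a two-step pattern: write sorted HFiles from an RDD with HBaseContext.bulkLoad, then move them into the table with LoadIncrementalHFiles. A minimal sketch might look like the following (the table name "myTable", column family "cf", and staging directory are placeholders, and `sc` is assumed to be an existing SparkContext; check BulkLoadSuite.scala for the exact usage):

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
import org.apache.hadoop.hbase.spark.{HBaseContext, KeyFamilyQualifier}
import org.apache.hadoop.hbase.util.Bytes

val config = HBaseConfiguration.create()
val hbaseContext = new HBaseContext(sc, config)

val tableName = TableName.valueOf("myTable")   // placeholder table
val family = Bytes.toBytes("cf")               // placeholder column family
val stagingDir = "/tmp/hfile-staging"          // placeholder HDFS staging path

// Records as (rowKey, qualifier, value) byte arrays.
val rdd = sc.parallelize(Seq(
  (Bytes.toBytes("row1"), Bytes.toBytes("q"), Bytes.toBytes("v1")),
  (Bytes.toBytes("row2"), Bytes.toBytes("q"), Bytes.toBytes("v2"))))

// Step 1: write the RDD out as sorted HFiles under stagingDir.
hbaseContext.bulkLoad[(Array[Byte], Array[Byte], Array[Byte])](
  rdd,
  tableName,
  t => Iterator((new KeyFamilyQualifier(t._1, family, t._2), t._3)),
  stagingDir)

// Step 2: atomically move the HFiles into the live table regions.
val conn = ConnectionFactory.createConnection(config)
try {
  val load = new LoadIncrementalHFiles(config)
  load.doBulkLoad(new Path(stagingDir), conn.getAdmin,
    conn.getTable(tableName), conn.getRegionLocator(tableName))
} finally {
  conn.close()
}
```

This skips the write path (WAL and memstore) entirely, which is why it scales much better than issuing Puts for very large datasets. Note it needs a running HBase cluster and the hbase-spark module on the classpath.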
On Mon, Sep 19, 2016 at 12:10 PM, Punit Naik wrote:
> Hi Guys
>
> I am currently using HBase's Put API to load data into HBase from Spark.
Hi Guys
I am currently using HBase's Put API to load data into HBase from Spark.
But it causes a lot of problems when the data size is huge or the number
of records is very large. Can anyone suggest other options in Spark?
--
Thank You
Regards
Punit Naik