Please take a look at:
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/BulkLoadSuite.scala

which demonstrates the usage of LoadIncrementalHFiles.

This is in the master branch of HBase.
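
For reference, here is a rough sketch of the pattern that test exercises: use HBaseContext.bulkLoad to write HFiles to a staging directory, then LoadIncrementalHFiles to hand them to the region servers. The table name, column family, and staging directory below are made up for illustration, and exact method signatures may differ between hbase-spark versions, so treat this as an outline rather than copy-paste code:

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
import org.apache.hadoop.hbase.spark.{HBaseContext, KeyFamilyQualifier}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

object BulkLoadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-bulk-load"))
    val config = HBaseConfiguration.create()
    val hbaseContext = new HBaseContext(sc, config)

    // Hypothetical names for illustration
    val stagingDir = "/tmp/hbase-staging"   // HDFS dir where HFiles are written
    val tableName = TableName.valueOf("t1")
    val family = Bytes.toBytes("f")
    val qualifier = Bytes.toBytes("q")

    // (rowKey, value) pairs to load
    val rdd = sc.parallelize(Seq(
      (Bytes.toBytes("row1"), Bytes.toBytes("v1")),
      (Bytes.toBytes("row2"), Bytes.toBytes("v2"))))

    // Write HFiles directly instead of going through the Put write path
    hbaseContext.bulkLoad[(Array[Byte], Array[Byte])](rdd, tableName,
      t => Seq((new KeyFamilyQualifier(t._1, family, qualifier), t._2)).iterator,
      stagingDir)

    // Move the generated HFiles into the regions
    val conn = ConnectionFactory.createConnection(config)
    try {
      val load = new LoadIncrementalHFiles(config)
      load.doBulkLoad(new Path(stagingDir), conn.getAdmin,
        conn.getTable(tableName), conn.getRegionLocator(tableName))
    } finally {
      conn.close()
    }
  }
}
```

Because this bypasses the normal write path (WAL, memstore), it avoids the pressure that millions of individual Puts place on the region servers.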

On Mon, Sep 19, 2016 at 12:10 PM, Punit Naik <naik.puni...@gmail.com> wrote:

> Hi Guys
>
> I am currently using HBase's Put API to load data into HBase from Spark,
> but it gives me a lot of problems when the data size is huge or there are
> just too many records. Can anyone suggest other options in Spark?
>
> --
> Thank You
>
> Regards
>
> Punit Naik
>
