*First you create the HBase configuration:*

      import org.apache.hadoop.hbase.HBaseConfiguration
      import org.apache.hadoop.hbase.client.{HBaseAdmin, Mutation, Put, Result}
      import org.apache.hadoop.hbase.io.ImmutableBytesWritable
      import org.apache.hadoop.hbase.mapreduce.{TableInputFormat, TableOutputFormat}
      import org.apache.hadoop.hbase.util.Bytes
      import org.apache.hadoop.mapreduce.OutputFormat

      val hbaseTableName = "paid_daylevel"
      val hbaseColumnName = "paid_impression"

      val hconf = HBaseConfiguration.create()
      hconf.set("hbase.zookeeper.quorum", "sigmoid-dev-master")
      hconf.set("hbase.zookeeper.property.clientPort", "2182")
      hconf.set("hbase.defaults.for.version.skip", "true")
      // Input table for TableInputFormat, output table for TableOutputFormat
      // (the same table here).
      hconf.set(TableInputFormat.INPUT_TABLE, hbaseTableName)
      hconf.set(TableOutputFormat.OUTPUT_TABLE, hbaseTableName)
      hconf.setClass("mapreduce.job.outputformat.class",
        classOf[TableOutputFormat[String]], classOf[OutputFormat[String, Mutation]])
      val admin = new HBaseAdmin(hconf)
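
*The `admin` handle also lets you make sure the table and column family exist before writing. A minimal sketch, assuming the "CF" family used in the snippets below:*

      import org.apache.hadoop.hbase.{HColumnDescriptor, HTableDescriptor, TableName}

      // Create the table with the expected column family if it's missing.
      if (!admin.tableExists(hbaseTableName)) {
        val descriptor = new HTableDescriptor(TableName.valueOf(hbaseTableName))
        descriptor.addFamily(new HColumnDescriptor("CF"))
        admin.createTable(descriptor)
      }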

*Then you read the values:*

      val values = sparkContext.newAPIHadoopRDD(hconf, classOf[TableInputFormat],
          classOf[ImmutableBytesWritable], classOf[Result])
        .map { case (key, row) =>
          val rowkey = Bytes.toString(key.get())
          // Assumes the column family "CF"; getValue returns null (and
          // valu.toInt throws) if the cell is missing.
          val valu = Bytes.toString(row.getValue(Bytes.toBytes("CF"),
            Bytes.toBytes(hbaseColumnName)))
          (rowkey, valu.toInt)
        }
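
*To sanity-check the read, you can print a few rows (a quick sketch using the RDD above):*

      // Print the first few (rowkey, value) pairs that were read.
      values.take(5).foreach { case (rowkey, v) => println(s"$rowkey -> $v") }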



*Then you modify the values however you want using the usual RDD
transformations.*
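
*For example, a minimal sketch (assuming you simply want to increment each count):*

      // Hypothetical transformation: bump every count by one.
      val updated = values.mapValues(_ + 1)

*Then you save the values back to HBase (substitute `updated` for `values` below if you applied such a transformation):*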

      values.map { valu =>
        val record = new Put(Bytes.toBytes(valu._1))
        // Old HBase client API: Put.add(family, qualifier, value).
        record.add(Bytes.toBytes("CF"), Bytes.toBytes(hbaseColumnName),
          Bytes.toBytes(valu._2.toString))
        // TableOutputFormat ignores the key, so an empty
        // ImmutableBytesWritable is fine here.
        (new ImmutableBytesWritable, record)
      }.saveAsNewAPIHadoopDataset(hconf)
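
*If you prefer to configure the output format through a Hadoop `Job` object instead of setting the raw `mapreduce.job.outputformat.class` key, an equivalent sketch is:*

      import org.apache.hadoop.mapreduce.Job

      // The Job clones hconf; pass job.getConfiguration to
      // saveAsNewAPIHadoopDataset instead of hconf.
      val job = Job.getInstance(hconf)
      job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])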



You can also look at the spark-hbase-connector package:
http://spark-packages.org/package/nerdammer/spark-hbase-connector




Thanks
Best Regards

On Tue, Dec 15, 2015 at 6:08 PM, censj <ce...@lotuseed.com> wrote:

> hi, all:
>     How could I use a Spark function to get a value from HBase, update
> it, and then put the new value back into HBase?
>
