Hi Chetan,
If you just need HBase data in Hive, you can use a Hive EXTERNAL TABLE with
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'.
See whether this solves your problem:
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration
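A minimal sketch of such a mapping, assuming a hypothetical HBase table "my_hbase_table" with a column family "cf" (the table, family, and column names here are placeholders for illustration):

```sql
-- Map an existing HBase table into Hive via the HBase storage handler.
-- ":key" binds to the HBase row key; "cf:val" binds to column 'val' in family 'cf'.
CREATE EXTERNAL TABLE hbase_hive_view (rowkey STRING, val STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:val")
TBLPROPERTIES ("hbase.table.name" = "my_hbase_table");
```

Because the table is EXTERNAL, dropping it in Hive leaves the underlying HBase table intact.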
Regards
Amrit
Hi Rajendra,
It says your directory is not empty: s3n://buccketName/cip/daily_date
Try using a save mode, e.g.:

df.write.mode(SaveMode.Overwrite)
  .partitionBy("date")
  .format("com.databricks.spark.csv")
  .option("delimiter", "#")
  .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
  .save("s3n://buccketName/cip/daily_date")
You can try out Debezium: https://github.com/debezium. It reads data
from the bin-logs, provides structure, and streams the changes into Kafka.
Kafka can then be your new source for streaming.
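For a sense of what that setup looks like, here is a sketch of a Debezium MySQL connector registration posted to Kafka Connect; every hostname, credential, and name below is a placeholder, not something from this thread:

```json
{
  "name": "example-mysql-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-host",
    "database.port": "3306",
    "database.user": "debezium-user",
    "database.password": "changeme",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.example"
  }
}
```

Once registered, each captured table shows up as its own Kafka topic, which Spark (or any other consumer) can then read as a stream.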
On Tue, Jan 3, 2017 at 4:36 PM, Yuanzhe Yang wrote:
> Hi Hongdi,
>
> Thanks a lot for your suggestion. The data is tru