How to stream data out of hbase

2017-10-24 Thread yeshwanth kumar
Hi, I am searching for a way to stream data out of HBase. One way is with filters, but then I need to query HBase continuously. Another way is to read directly from the WAL (I am searching for sample code; I found the WALReader and WAL.Entry APIs. Can I use them directly without any side effects?)
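Absent direct WAL access, the "query continuously" approach amounts to a polling loop that remembers the last-seen timestamp and asks only for newer cells (with the HBase client, a Scan restricted by setTimeRange). A minimal stdlib sketch of that loop, with a hypothetical in-memory source standing in for the HBase client:

```java
import java.util.*;

public class PollStream {
    // Hypothetical stand-in for a time-range Scan against HBase.
    interface Source {
        // Returns entries with timestamp in [from, to); each entry is {timestamp, value}.
        List<long[]> fetch(long from, long to);
    }

    // Drain everything newer than lastSeen; return the new watermark.
    static long poll(Source src, long lastSeen, long now, List<long[]> out) {
        List<long[]> batch = src.fetch(lastSeen + 1, now);
        out.addAll(batch);
        for (long[] e : batch) lastSeen = Math.max(lastSeen, e[0]);
        return lastSeen;
    }

    public static void main(String[] args) {
        List<long[]> data = Arrays.asList(new long[]{5, 50}, new long[]{12, 120}, new long[]{20, 200});
        Source src = (from, to) -> {
            List<long[]> r = new ArrayList<>();
            for (long[] e : data) if (e[0] >= from && e[0] < to) r.add(e);
            return r;
        };
        List<long[]> seen = new ArrayList<>();
        long mark = poll(src, 0, 15, seen);   // picks up ts 5 and 12
        mark = poll(src, mark, 25, seen);     // picks up only ts 20
        System.out.println(seen.size() + " " + mark); // 3 20
    }
}
```

Tracking the watermark client-side means repeated polls never re-deliver a cell, which is the property the filter-based approach in the question would otherwise have to enforce.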

Re: HBase Region Size of 2.5 TB

2016-08-28 Thread yeshwanth kumar
…haven't changed the value for "hbase.increasing.policy.initial.size", the last two lines should have been executed. initialSize would be 2 GB in that case, according to the config you listed. FYI. On Fri, Aug 26, 2016 at 3:23 PM, yeshwanth kumar…

HBase Region Size of 2.5 TB

2016-08-26 Thread yeshwanth kumar
Hi, we are using CDH 5.7 / HBase 1.2. We are doing performance testing of HBase under a regular load, with 4 Region Servers. Input data is compressed binary files of around 2 TB, which we process and write as key-value pairs to HBase. The output data size in HBase is almost 4 times that, around…
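For context on how a region grows before splitting: under the default IncreasingToUpperBoundRegionSplitPolicy in HBase 1.x, the split threshold for a table with R regions on a region server is, as I understand it, min(hbase.hregion.max.filesize, initialSize * R^3), where initialSize defaults to 2 * hbase.hregion.memstore.flush.size unless hbase.increasing.policy.initial.size is set. A quick sketch of that arithmetic (the default values below are assumptions):

```java
public class SplitSize {
    // Approximate split threshold under IncreasingToUpperBoundRegionSplitPolicy.
    static long splitSize(long initialSize, long maxFileSize, int regionsOnServer) {
        if (regionsOnServer == 0) return maxFileSize;
        long cubed = initialSize * regionsOnServer * regionsOnServer * (long) regionsOnServer;
        return Math.min(maxFileSize, cubed);
    }

    public static void main(String[] args) {
        long flush = 128L << 20;   // assumed default memstore flush size: 128 MB
        long initial = 2 * flush;  // 256 MB when the initial-size property is unset
        long maxFile = 10L << 30;  // assumed default hbase.hregion.max.filesize: 10 GB
        // 1st region splits at 256 MB, 2nd at 2 GB, 3rd at ~6.75 GB, then capped at 10 GB.
        System.out.println(splitSize(initial, maxFile, 1)); // 268435456
        System.out.println(splitSize(initial, maxFile, 2)); // 2147483648
        System.out.println(splitSize(initial, maxFile, 4)); // 10737418240 (capped)
    }
}
```

A 2.5 TB region therefore points at something overriding this policy (an explicit max file size, a custom split policy, or splits disabled), not at normal growth.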

Re: Spark HBase Bulk load using HFileFormat

2016-07-14 Thread yeshwanth kumar
On Thu, Jul 14, 2016 at 1:33 AM, yeshwanth kumar <yeshwant...@gmail.com> wrote: following is the code snippet for saveAsHFile: def saveAsHFile(putRDD: RDD[(ImmutableBytesWritable, KeyValue)], outputPath: String) = { val conf = ConfigFactory.getConf; val…

Re: Spark HBase Bulk load using HFileFormat

2016-07-14 Thread yeshwanth kumar
…Can you show the code inside saveAsHFile? Maybe the partitions of the RDD need to be sorted (for the 1st issue). Cheers. On Wed, Jul 13, 2016 at 4:29 PM, yeshwanth kumar <yeshwant...@gmail.com> wrote: Hi, I am doing bulk load…

Spark HBase Bulk load using HFileFormat

2016-07-13 Thread yeshwanth kumar
Hi, I am doing a bulk load into HBase in HFileFormat using saveAsNewAPIHadoopFile. I am on HBase 1.2.0-cdh5.7.0 and Spark 1.6. When I try to write, I am getting an exception: java.io.IOException: Added a key not lexically larger than previous. Following is the code snippet: case class…
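The "Added a key not lexically larger than previous" error comes from HFile's requirement that cells arrive in strictly increasing key order, compared as unsigned bytes (the order Bytes.compareTo implements); an RDD saved with saveAsNewAPIHadoopFile must therefore be totally sorted by key before writing. A stdlib sketch of the unsigned comparison, which differs from naive signed byte comparison:

```java
import java.util.*;

public class KeyOrder {
    // Unsigned lexicographic comparison, the order HFile expects.
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int x = a[i] & 0xff, y = b[i] & 0xff; // mask to treat bytes as 0..255
            if (x != y) return x - y;
        }
        return a.length - b.length; // shorter key sorts first on a shared prefix
    }

    public static void main(String[] args) {
        byte[] lo = {0x01};
        byte[] hi = {(byte) 0x80};                         // -128 signed, 128 unsigned
        System.out.println(compareUnsigned(lo, hi) < 0);   // true: 1 < 128 unsigned
        System.out.println(Byte.compare(lo[0], hi[0]) < 0); // false: 1 > -128 signed
        // Sorting keys with this comparator before writing avoids the IOException.
        List<byte[]> keys = new ArrayList<>(Arrays.asList(hi, lo));
        keys.sort(KeyOrder::compareUnsigned);
        System.out.println(keys.get(0) == lo); // true
    }
}
```

In the Spark job this corresponds to sorting the (ImmutableBytesWritable, KeyValue) RDD on the full key, not just partitioning it, before handing it to the output format.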

Re: choosing between hbase-spark / spark-hbase

2016-04-10 Thread yeshwanth kumar
…the HBase project. There are not any significant differences, apart from the fact that Spark on HBase is not updated. Depending on the version you are using, it would be more beneficial to use Hbase-Spark. Kay. On 5 Apr 2016 9:12 pm…

choosing between hbase-spark / spark-hbase

2016-04-05 Thread yeshwanth kumar
I have a Cloudera cluster, and I am exploring Spark with HBase. After going through this blog, http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/ , I found two options for using Spark with HBase: Cloudera's Spark on HBase or Apache…

Re: java.lang.VerifyError: class com.google.protobuf.HBaseZeroCopyByteString overrides final method equals.(Ljava/lang/Object;)Z

2016-03-21 Thread yeshwanth kumar
…On Mon, Mar 21, 2016 at 8:37 AM, yeshwanth kumar <yeshwant...@gmail.com> wrote: what if I use protobuf version 2.6, is it supported? Please let me know. -Yeshwanth. Can you Imagine what I would do if I could do all I…

Re: java.lang.VerifyError: class com.google.protobuf.HBaseZeroCopyByteString overrides final method equals.(Ljava/lang/Object;)Z

2016-03-21 Thread yeshwanth kumar
What if I use protobuf version 2.6? Is it supported? Please let me know. -Yeshwanth. Can you Imagine what I would do if I could do all I can - Art of War. On Fri, Mar 18, 2016 at 10:31 PM, yeshwanth kumar <yeshwant...@gmail.com> wrote: Thank you Ted, thank you Sean, for…

java.lang.VerifyError: class com.google.protobuf.HBaseZeroCopyByteString overrides final method equals.(Ljava/lang/Object;)Z

2016-03-19 Thread yeshwanth kumar
I am using HBase 1.0.0-cdh5.5.1, and I am hitting this exception when trying to write to HBase. Following is the stack trace: Exception in thread "main" java.lang.VerifyError: class com.google.protobuf.HBaseZeroCopyByteString overrides final method equals.(Ljava/lang/Object;)Z at…
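As the reply below notes, HBase is built against protobuf-java 2.5.0, and HBaseZeroCopyByteString is deliberately placed in the com.google.protobuf package; the VerifyError usually means a second, different protobuf version is on the classpath. One way to pin the version in a Maven build (a sketch; your dependency tree may differ):

```xml
<!-- Force a single protobuf version matching the one HBase was built with. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.protobuf</groupId>
      <artifactId>protobuf-java</artifactId>
      <version>2.5.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Running `mvn dependency:tree -Dincludes=com.google.protobuf` shows which dependency drags in the conflicting protobuf jar.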

Re: java.lang.VerifyError: class com.google.protobuf.HBaseZeroCopyByteString overrides final method equals.(Ljava/lang/Object;)Z

2016-03-18 Thread yeshwanth kumar
…2016 19:38, "Ted Yu" <yuzhih...@gmail.com> wrote: HBase is built with this version of protobuf: 2.5.0. On Fri, Mar 18, 2016 at 5:13 PM, yeshwanth kumar <yeshwant...@gmail.com>…

Re: is it a gud way to store a map object in hbase column

2014-09-18 Thread yeshwanth kumar
…The above makes write(s) easy. But when you query, do you always need all the key-value pairs in this map object? Cheers. On Wed, Sep 17, 2014 at 1:38 PM, yeshwanth kumar yeshwant...@gmail.com wrote: hi, I have a huge map object, which comes from the Solr query results. The map contains around…

is it a gud way to store a map object in hbase column

2014-09-17 Thread yeshwanth kumar
Hi, I have a huge map object, which comes from Solr query results; the map contains around 400-500 key-value pairs. Is it a good way to store the entire map as a value in one column? Is there anything in particular, like column value size, I need to take care of, or should I store it in different columns…
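With 400-500 entries, the common alternative to one serialized blob is to flatten the map so each entry becomes its own column qualifier in a single family (one Put.addColumn call per entry), which lets later reads fetch one key instead of deserializing the whole map. A stdlib sketch of the flattening step (the HBase Put call itself is omitted; method and names here are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.*;

public class MapToColumns {
    // Flatten a map into (qualifier, value) byte pairs, one per future Put.addColumn call.
    static List<byte[][]> toColumns(Map<String, String> m) {
        List<byte[][]> cols = new ArrayList<>();
        for (Map.Entry<String, String> e : m.entrySet()) {
            cols.add(new byte[][] {
                e.getKey().getBytes(StandardCharsets.UTF_8),   // qualifier
                e.getValue().getBytes(StandardCharsets.UTF_8)  // cell value
            });
        }
        return cols;
    }

    public static void main(String[] args) {
        Map<String, String> solrResult = new LinkedHashMap<>();
        solrResult.put("title", "doc-1");
        solrResult.put("score", "0.93");
        List<byte[][]> cols = toColumns(solrResult);
        System.out.println(cols.size()); // 2
        System.out.println(new String(cols.get(0)[0], StandardCharsets.UTF_8)); // title
    }
}
```

The trade-off the reply hints at: one blob is cheaper to write, per-entry qualifiers are cheaper to query selectively.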

Re: java.util.concurrent.ExecutionException

2014-09-03 Thread yeshwanth kumar
…what's causing the issue. -yeshwanth. On Wed, Sep 3, 2014 at 2:45 AM, Ted Yu yuzhih...@gmail.com wrote: Have you checked the region server (on the same node as the mapper) log to see if there was anything special around 07:56? Cheers. On Tue, Sep 2, 2014 at 10:36 AM, yeshwanth kumar…

java.util.concurrent.ExecutionException

2014-09-02 Thread yeshwanth kumar
Hi, I am running HBase 0.94.20 on Hadoop 2.2.0. I am working on a MapReduce job which reads input from a table and writes the processed output back to that table and to another table; I am using the MultiTableOutputFormat class for that. While running the MapReduce job I encounter this exception, as a…

Re: java.util.concurrent.ExecutionException

2014-09-02 Thread yeshwanth kumar
…to localhost/127.0.0.1:60020 failed. Can you check whether the configuration from hbase-site.xml is correctly passed to your mapper? Cheers. On Tue, Sep 2, 2014 at 10:25 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi, I am running HBase 0.94.20 on Hadoop 2.2.0, and I am working on a MapReduce job…

writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
Hi, I am running HBase 0.94.20 on Hadoop 2.2.0 and using MultiTableOutputFormat to write processed output to two different tables in HBase. Here's the code snippet: private ImmutableBytesWritable tab_cr = new ImmutableBytesWritable(Bytes.toBytes("i1")); private ImmutableBytesWritable…
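MultiTableOutputFormat works by keying each mapper emit with the destination table's name (an ImmutableBytesWritable wrapping the table bytes) instead of configuring a single output table. Stripped of the Hadoop API, the routing idea is just grouping writes per table; a stdlib sketch of that dispatch:

```java
import java.util.*;

public class MultiTableRouting {
    // Group writes by destination table, mirroring how MultiTableOutputFormat
    // dispatches each (tableName, Put) pair emitted by the mapper.
    static Map<String, List<String>> route(List<String[]> emits) {
        Map<String, List<String>> byTable = new LinkedHashMap<>();
        for (String[] e : emits) { // e[0] = table name, e[1] = serialized put
            byTable.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
        }
        return byTable;
    }

    public static void main(String[] args) {
        List<String[]> emits = Arrays.asList(
            new String[]{"i1", "put-a"},   // one record can fan out to both tables
            new String[]{"i2", "put-a"},
            new String[]{"i1", "put-b"});
        Map<String, List<String>> out = route(emits);
        System.out.println(out.get("i1").size() + " " + out.get("i2").size()); // 2 1
    }
}
```

As the thread's replies conclude, the job still needs TableMapReduceUtil.initTableMapperJob for the input table, while initTableReducerJob becomes unnecessary once the output format is MultiTableOutputFormat.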

Re: writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
…You're initializing with table 'i1'. Please remove the above call and try again. Cheers. On Tue, Aug 26, 2014 at 9:18 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi, I am running HBase 0.94.20 on Hadoop 2.2.0 and using MultiTableOutputFormat to write processed output to two…

Re: writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
…yeshwanth kumar yeshwant...@gmail.com wrote: hi Ted, how can we initialize the mapper if I comment out those lines? On Tue, Aug 26, 2014 at 10:08 PM, Ted Yu yuzhih...@gmail.com wrote: TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan, EntitySearcherMapper.class…

Re: writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
…you do need to initialize the table by using the Util class. Regards, Shahab. On Tue, Aug 26, 2014 at 2:29 PM, yeshwanth kumar yeshwant...@gmail.com wrote: hi Ted, I need to process the data in table i1, and then I need to write the results to tables i1 and i2, so input…

Re: writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
…); boolean b = job.waitForCompletion(true); if (!b) { throw new IOException("error with job!"); } Regards, Shahab. On Tue, Aug 26, 2014 at 3:11 PM, yeshwanth kumar yeshwant...@gmail.com wrote: hi Shahab, I tried it that way, specifying the output format as MultiTableOutputFormat; it is throwing…

Re: writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
…? TableMapReduceUtil.initTableMapperJob. Can you show your whole job setup/driver code? Regards, Shahab. On Tue, Aug 26, 2014 at 3:18 PM, yeshwanth kumar yeshwant...@gmail.com wrote: that MapReduce job reads data from an HBase table; it doesn't take any explicit input data/file. -yeshwanth. On Wed, Aug 27…

Re: writing to multiple hbase tables in a mapreduce job

2014-08-26 Thread yeshwanth kumar
…one you are specifying the data input, which is a must; otherwise how would the job know where to read or get input? The second call (initTableReducerJob) is not necessary, as your output format has changed. Regards, Shahab. On Tue, Aug 26, 2014 at 3:31 PM, yeshwanth kumar yeshwant...@gmail.com…

custom hbase shell commands

2014-05-26 Thread yeshwanth kumar
Hi, I am using HBase 0.94.10, distribution: Apache. I am working on JRuby scripts to create custom HBase shell commands. I want to know whether I can create a custom HBase command similar to the already available scan and put commands. I have a sample JRuby script, client.rb, that outputs the Row ID and Value…

Re: Running hbase 0.94 version on hadoop 2.2

2014-05-01 Thread yeshwanth kumar
…PM, yeshwanth kumar yeshwant...@gmail.com wrote: thanks for the info, Ted. On Wed, Apr 30, 2014 at 9:22 PM, Ted Yu yuzhih...@gmail.com wrote: After rebuilding 0.94, you can deploy the artifacts onto the Hadoop 2.2 cluster. See HBASE-11076. Cheers. On Wed, Apr 30, 2014 at 8:20 AM, yeshwanth…

Re: Running hbase 0.94 version on hadoop 2.2

2014-05-01 Thread yeshwanth kumar
…: -Dhadoop.profile=2.0. On Thu, May 1, 2014 at 9:02 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi Ted, here are the changes http://pastebin.com/CJp2Z9iX I made to the HBase pom while building. It is giving "hadoop-snappy native jar cannot get from repository"; I am trying to build…

Re: Running hbase 0.94 version on hadoop 2.2

2014-05-01 Thread yeshwanth kumar
…yeshwanth kumar yeshwant...@gmail.com: hi Ted, I am trying to build HBase 0.94.18. I followed the procedure: I edited the pom.xml, changing the protobuf version to 2.5.0 and the Hadoop version to 2.2.0, but I cannot build HBase. Here's the complete log: http://pastebin.com/7bQ5TBZe

Running hbase 0.94 version on hadoop 2.2

2014-04-30 Thread yeshwanth kumar
Hi, are HBase 0.94.x versions compatible with Hadoop 2.2? I checked the Apache HBase website, and there it is marked NT (not tested). Thanks, yeshwanth.
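As the follow-up replies indicate, 0.94 can be made to run on Hadoop 2.x by rebuilding it against the hadoop 2.0 profile (see HBASE-11076) and then deploying the rebuilt artifacts. The build invocation, as I understand it from the thread (exact flags may vary by 0.94 minor version):

```
mvn clean install -DskipTests -Dhadoop.profile=2.0
```

The -Dhadoop.profile=2.0 switch is the one quoted in the replies; -DskipTests simply shortens the build.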

Re: Running hbase 0.94 version on hadoop 2.2

2014-04-30 Thread yeshwanth kumar
Thanks for the info, Ted. On Wed, Apr 30, 2014 at 9:22 PM, Ted Yu yuzhih...@gmail.com wrote: After rebuilding 0.94, you can deploy the artifacts onto the Hadoop 2.2 cluster. See HBASE-11076. Cheers. On Wed, Apr 30, 2014 at 8:20 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi…

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-27 Thread yeshwanth kumar
…) at java.lang.Thread.run(Thread.java:744). How can I fix this dependency issue? On Fri, Apr 25, 2014 at 9:06 PM, yeshwanth kumar yeshwant...@gmail.com wrote: hi Jean, I haven't written any piece of code to work around the znode; one of my REST endpoints in the webapp reads data from HBase…

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-27 Thread yeshwanth kumar
…@gmail.com wrote: Did the exception below happen when you were performing some query on the region server? Can you tell us a bit more about whether your query uses FilterList? Thanks. On Sun, Apr 27, 2014 at 9:28 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi Jean, I am using…

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-27 Thread yeshwanth kumar
Here's the code snippet: http://pastebin.com/AGh7mTNT thanks, yeshwanth. On Sun, Apr 27, 2014 at 10:20 PM, Ted Yu yuzhih...@gmail.com wrote: Can you show us the code snippet where you add the filter to the Scan object? Thanks. On Apr 27, 2014, at 9:43 AM, yeshwanth kumar yeshwant...@gmail.com wrote…

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-27 Thread yeshwanth kumar
…yeshwanth kumar yeshwant...@gmail.com wrote: here's the code snippet: http://pastebin.com/AGh7mTNT thanks, yeshwanth. On Sun, Apr 27, 2014 at 10:20 PM, Ted Yu yuzhih...@gmail.com wrote: Can you show us the code snippet where you add the filter to the Scan object? Thanks. On Apr 27, 2014, at 9:43 AM…

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-27 Thread yeshwanth kumar
Thanks, Ted. On Sun, Apr 27, 2014 at 11:06 PM, Ted Yu yuzhih...@gmail.com wrote: I am adding the CDH users mailing list, where you would get a good response to the issue below. Cheers. On Sun, Apr 27, 2014 at 10:30 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi Ted, I replaced…

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-25 Thread yeshwanth kumar
…client and not look at the ZNode. Not getting why it's looking there. Do you know? JM. 2014-04-25 2:01 GMT-04:00 yeshwanth kumar yeshwant...@gmail.com: hi Matteo, my problem isn't solved yet. The webapp isn't reading data from HBase; all I see in the logs is znode /hbase/table/mytable…

Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread yeshwanth kumar
Hi, I am running a webapp written on the JAX-RS framework which performs CRUD operations on HBase. The app was working fine till last week; now when I perform a read operation from HBase I don't see any data. I don't see any errors or exceptions, but I found these lines in the log: "Unable to get data of…"

Re: Unable to get data of znode /hbase/table/mytable.

2014-04-22 Thread yeshwanth kumar
Hi Matteo, how do I specify the HBase znode, to use /hbase/table94 instead of /hbase/table? Thanks. On Tue, Apr 22, 2014 at 9:40 PM, Matteo Bertozzi theo.berto...@gmail.com wrote: On Tue, Apr 22, 2014 at 9:00 AM, yeshwanth kumar yeshwant...@gmail.com wrote: @matteo the present znode is at /hbase…

connecting to hbase remotely

2014-03-06 Thread yeshwanth kumar
Hi, I am using HBase 0.94.6-cdh4.5.0. I connected to HBase by setting the config explicitly in my code through the config.set zookeeper quorum property. I am able to read the HBase table data properly; it is connecting to the host specified in the config. Log: Creating new Groups object, Group mapping…
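Setting the quorum in code works, but the usual alternative the reply points at is putting hbase-site.xml on the client classpath so every tool picks it up. A minimal client-side fragment (hostnames and port here are placeholders):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

With this on the classpath, HBaseConfiguration.create() picks up the quorum without any config.set calls in code.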

Re: connecting to hbase remotely

2014-03-06 Thread yeshwanth kumar
…Is hbase-site.xml in the classpath? Thanks. On Thu, Mar 6, 2014 at 3:05 AM, yeshwanth kumar yeshwant...@gmail.com wrote: Hi, I am using HBase 0.94.6-cdh4.5.0. I connected to HBase by setting the config explicitly in my code through the config.set zookeeper quorum property. I am able to read…

DNS Exception while running TableMapper

2014-02-08 Thread yeshwanth kumar
Hi, I am using HBase 0.94.10 and Hadoop 1.2.1, trying to run a couple of MapReduce jobs on HBase via TableMapper, and I am getting this exception. Do I really need to configure DNS locally? Can someone help me with this issue? Exception in thread "main" java.lang.NullPointerException at…
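The reply's advice ("configure DNS on the node") for a small or single-node setup usually amounts to making the machine's hostname resolve consistently in both directions, since TableMapper's splits do hostname lookups. A common fix is a static /etc/hosts entry (hostname and address below are placeholders):

```
127.0.0.1      localhost
192.168.1.10   mynode.example.com   mynode
```

Checking that `hostname` and a lookup of that name agree on the same address is usually enough to clear this class of error.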

Re: DNS Exception while running TableMapper

2014-02-08 Thread yeshwanth kumar
…deployment? Please configure DNS on the node. Cheers. On Feb 8, 2014, at 6:46 AM, yeshwanth kumar yeshwant...@gmail.com wrote: hi, I am using HBase 0.94.10 and Hadoop 1.2.1, trying to run a couple of MapReduce jobs on HBase via TableMapper, and I am getting this exception. Do I really need…

Inconsistency in hbase tables

2014-01-22 Thread yeshwanth kumar
I am running HBase version 0.94.6. By mistake I deleted a directory under /hbase in HDFS; I recovered that directory from the .Trash of HDFS. When I ran an hbase hbck on the respective table, it shows an inconsistency. There's something I messed up with the META info of the Regions. Any idea of how to…
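For META inconsistencies after restoring region files by hand, hbck can both report and repair assignment and META problems; a typical sequence (run against a quiet cluster, and treat the -fix flags with care, as they modify META):

```
hbase hbck -details                   # report which regions/tables are inconsistent
hbase hbck -fixMeta -fixAssignments   # rebuild META entries and reassign regions
```

Running the read-only report first shows exactly which regions the restore left orphaned before any repair is attempted.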

Co-Processors in HBase 0.95.2 version

2013-09-22 Thread yeshwanth kumar
Hi, I am facing some difficulty writing co-processors in HBase 0.95.2, and I am looking for some tutorials and examples. Can anyone provide me some examples of how co-processors are related to protobufs? Thanks.