Hi,
I am searching for a way to stream data from HBase.
One way to do this is with filters, but then I would need to query HBase continuously.
Another way is to read directly from the WAL. (I am searching for sample code,
and I found the WALReader and WAL.Entry APIs. Can I use them directly without
any side effects?)
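For what it's worth, here is a minimal sketch of how those APIs fit together, assuming the HBase 1.x org.apache.hadoop.hbase.wal package; the WAL file path is a placeholder, and note that region servers roll and archive these files, so a long-running reader can have the file moved out from under it:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;

public class WalTail {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    // placeholder: pass a WAL file under /hbase/WALs/<regionserver>/
    Path walFile = new Path(args[0]);
    try (WAL.Reader reader = WALFactory.createReader(fs, walFile, conf)) {
      WAL.Entry entry;
      while ((entry = reader.next()) != null) {
        // each entry carries a key (table, region, sequence id) and an edit (cells)
        for (Cell cell : entry.getEdit().getCells()) {
          System.out.println(entry.getKey().getTablename() + " "
              + Bytes.toStringBinary(CellUtil.cloneRow(cell)));
        }
      }
    }
  }
}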
> If you haven't changed the value for
> "hbase.increasing.policy.initial.size", the last two lines should have
> been executed.
>
> initialSize would be 2GB in that case, according to the config you listed.
>
> FYI
>
> On Fri, Aug 26, 2016 at 3:23 PM, yeshwanth kumar <yeshwant...@gmail.com> wrote:
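For context, this is roughly how IncreasingToUpperBoundRegionSplitPolicy derives that number; a paraphrased sketch of the HBase 1.x logic, not the verbatim source:

import org.apache.hadoop.conf.Configuration;

public class SplitSizeSketch {
  // a region splits once a store grows past min(maxFileSize, initialSize * count^3),
  // where count is the number of this table's regions on the region server
  static long sizeToCheck(Configuration conf, long desiredMaxFileSize, int regionCount) {
    long initialSize = conf.getLong("hbase.increasing.policy.initial.size", -1);
    if (initialSize <= 0) {
      // default: 2 x hbase.hregion.memstore.flush.size
      // (hence 2GB if the flush size is 1GB)
      initialSize = 2 * conf.getLong("hbase.hregion.memstore.flush.size",
          128L * 1024 * 1024);
    }
    return (regionCount == 0 || regionCount > 100)
        ? desiredMaxFileSize
        : Math.min(desiredMaxFileSize,
            initialSize * regionCount * regionCount * regionCount);
  }
}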
Hi, we are using CDH 5.7 (HBase 1.2).
We are doing performance testing of HBase under a regular load, on a cluster
which has 4 region servers.
The input data is compressed binary files, around 2TB, which we process and
write as key-value pairs to HBase.
The output data size in HBase is almost 4 times that, around
On Thu, Jul 14, 2016 at 1:33 AM, yeshwanth kumar <yeshwant...@gmail.com>
wrote:
>
> following is the code snippet for saveAsHFile:
>
> def saveAsHFile(putRDD: RDD[(ImmutableBytesWritable, KeyValue)],
>                 outputPath: String) = {
>   val conf = ConfigFactory.getConf
>   val
...@gmail.com> wrote:
> Can you show the code inside saveAsHFile?
>
> Maybe the partitions of the RDD need to be sorted (for the 1st issue).
>
> Cheers
>
> On Wed, Jul 13, 2016 at 4:29 PM, yeshwanth kumar <yeshwant...@gmail.com>
> wrote:
>
> > Hi, I am doing a bulk load
Hi, I am doing a bulk load into HBase in HFile format, by
using saveAsNewAPIHadoopFile.
I am on HBase 1.2.0-cdh5.7.0 and Spark 1.6.
When I try to write, I am getting an exception:
java.io.IOException: Added a key not lexically larger than previous.
Following is the code snippet:
case class
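That exception usually means the (row, family, qualifier) order of the KeyValues handed to the HFile writer is not strictly increasing. A hedged sketch of sorting first, using the Java Spark API (the input RDD is a placeholder for your own code, and the writables typically need Kryo registration to survive the shuffle):

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.spark.api.java.JavaPairRDD;

public class SortBeforeBulkLoad {
  // HFiles require cells in strict lexicographic order, so sort before
  // calling saveAsNewAPIHadoopFile; ImmutableBytesWritable compares its
  // bytes lexicographically, which matches the expected row-key order.
  static JavaPairRDD<ImmutableBytesWritable, KeyValue> sorted(
      JavaPairRDD<ImmutableBytesWritable, KeyValue> cells) {
    return cells.sortByKey(true);
  }
}

Note that if one row carries several qualifiers, the KeyValues within that row must also arrive in family/qualifier order, so keying by row alone may not be enough.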
the HBase project.
> > There are not any significant differences, apart from the fact that Spark
> > on HBase is not updated.
> > Depending on the version you are using, it would be more beneficial to use
> > hbase-spark.
> >
> > Kay
> > On 5 Apr 2016 9:12 pm
I have a Cloudera cluster,
and I am exploring Spark with HBase.
After going through this blog,
http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
I found two options for using Spark with HBase:
Cloudera's Spark on HBase or
Apache
On Mon, Mar 21, 2016 at 8:37 AM, yeshwanth kumar <yeshwant...@gmail.com>
> wrote:
>
> > What if I use protobuf version 2.6;
> > is it supported?
> >
> > Please let me know.
> >
> > -Yeshwanth
> > Can you Imagine what I would do if I could do all I
What if I use protobuf version 2.6;
is it supported?
Please let me know.
-Yeshwanth
Can you Imagine what I would do if I could do all I can - Art of War
On Fri, Mar 18, 2016 at 10:31 PM, yeshwanth kumar <yeshwant...@gmail.com>
wrote:
> Thank you, Ted.
> Thank you, Sean, for
I am using HBase 1.0.0-cdh5.5.1.
I am hitting this exception when trying to write to HBase;
following is the stack trace:
Exception in thread "main" java.lang.VerifyError: class
com.google.protobuf.HBaseZeroCopyByteString overrides final method
equals.(Ljava/lang/Object;)Z
at
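That VerifyError usually means the com.google.protobuf classes being loaded are not from the protobuf 2.5.0 jar HBase was built against. A quick hedged diagnostic to print which jar actually supplies ByteString at runtime:

public class WhichProtobuf {
  public static void main(String[] args) {
    // prints the jar that supplies com.google.protobuf.ByteString on this classpath;
    // if it is not protobuf-java 2.5.0 (or hbase-protocol), that explains the error
    System.out.println(com.google.protobuf.ByteString.class
        .getProtectionDomain().getCodeSource().getLocation());
  }
}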
, 2016 19:38, "Ted Yu" <yuzhih...@gmail.com> wrote:
> >
> > > HBase is built with this version of protobuf:
> > >
> > > 2.5.0
> > >
> > > On Fri, Mar 18, 2016 at 5:13 PM, yeshwanth kumar <
> yeshwant...@gmail.com>
The above makes write(s) easy. But when you query, do you always need all
the key-value pairs in this map object?
Cheers
On Wed, Sep 17, 2014 at 1:38 PM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi, I have a huge map object, which comes from the Solr query results.
The map contains around
Hi, I have a huge map object, which comes from the Solr query results.
The map contains around 400-500 key-value pairs.
Is it a good way to store the entire map as a value in one column?
Are there any particular things, like column value size, that I need to take
care of, or should I store it in different columns?
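One common layout, sketched below with the 0.94-era client API: one qualifier per map entry rather than one serialized blob, so reads can fetch a subset of fields. The row key and column family name are placeholders:

import java.util.Map;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MapToColumns {
  static Put toPut(String rowKey, Map<String, String> solrFields) {
    Put put = new Put(Bytes.toBytes(rowKey));
    byte[] family = Bytes.toBytes("f"); // placeholder column family
    for (Map.Entry<String, String> e : solrFields.entrySet()) {
      // 400-500 qualifiers in one row is comfortable for HBase, and lets a
      // Get/Scan retrieve selected fields instead of deserializing the whole map
      put.add(family, Bytes.toBytes(e.getKey()), Bytes.toBytes(e.getValue()));
    }
    return put;
  }
}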
...what's causing the issue.
-yeshwanth
On Wed, Sep 3, 2014 at 2:45 AM, Ted Yu yuzhih...@gmail.com wrote:
Have you checked the region server (on the same node as the mapper) log to
see if there was anything special around 07:56?
Cheers
On Tue, Sep 2, 2014 at 10:36 AM, yeshwanth kumar
Hi, I am running HBase 0.94.20 on Hadoop 2.2.0.
I am working on a MapReduce job which reads input from a table and
writes the processed output back to that table and to another table;
I am using the MultiTableOutputFormat class for that.
While running the MapReduce job, I encounter this exception, as a
to localhost/127.0.0.1:60020 failed
Can you check whether the configuration from hbase-site.xml is correctly
passed to your mapper?
Cheers
On Tue, Sep 2, 2014 at 10:25 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi, I am running HBase 0.94.20 on Hadoop 2.2.0.
I am working on a MapReduce job
Hi, I am running HBase 0.94.20 on Hadoop 2.2.0.
I am using MultiTableOutputFormat
for writing processed output to two different tables in HBase.
Here's the code snippet:
private ImmutableBytesWritable tab_cr = new ImmutableBytesWritable(
    Bytes.toBytes("i1"));
private ImmutableBytesWritable
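For reference, a hedged sketch of how such writables are typically used with MultiTableOutputFormat (0.94-era API): the map output key names the destination table. The table names i1/i2 are from this thread; the second writable name and the family/qualifier bytes are placeholders:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class TwoTableMapper extends TableMapper<ImmutableBytesWritable, Put> {
  private final ImmutableBytesWritable tab_cr =
      new ImmutableBytesWritable(Bytes.toBytes("i1"));
  private final ImmutableBytesWritable tab_sr =   // hypothetical second table key
      new ImmutableBytesWritable(Bytes.toBytes("i2"));

  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    Put put = new Put(row.get());
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v")); // placeholders
    // with MultiTableOutputFormat, the output key selects the target table
    context.write(tab_cr, put);  // goes to table i1
    context.write(tab_sr, put);  // goes to table i2
  }
}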
You're initializing with table 'i1'
Please remove the above call and try again.
Cheers
On Tue, Aug 26, 2014 at 9:18 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi, I am running HBase 0.94.20 on Hadoop 2.2.0.
I am using MultiTableOutputFormat
for writing processed output to two
, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi Ted,
how can we initialize the mapper if I comment out those lines?
On Tue, Aug 26, 2014 at 10:08 PM, Ted Yu yuzhih...@gmail.com wrote:
TableMapReduceUtil.initTableMapperJob(otherArgs[0], scan,
EntitySearcherMapper.class
You do need to
initialize the table by using the Util class.
Regards,
Shahab
On Tue, Aug 26, 2014 at 2:29 PM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi Ted,
I need to process the data in table i1, and then I need to write the
results to tables i1 and i2,
so the input
);
boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}
Regards,
Shahab
On Tue, Aug 26, 2014 at 3:11 PM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi Shahab,
I tried it that way, by specifying the output format as
MultiTableOutputFormat,
and it is throwing
?
TableMapReduceUtil.initTableMapperJob
Can you show your whole job setup/driver code?
Regards,
Shahab
On Tue, Aug 26, 2014 at 3:18 PM, yeshwanth kumar yeshwant...@gmail.com
wrote:
That MapReduce job reads data from an HBase table;
it doesn't take any explicit input data/file.
-yeshwanth
On Wed, Aug 27
...one where
you are specifying the data input, which is a must. Otherwise how would the
job know where to read or get its input? The second call (initTableReducerJob)
is not necessary, as your output format has changed.
Regards,
Shahab
On Tue, Aug 26, 2014 at 3:31 PM, yeshwanth kumar yeshwant...@gmail.com
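Putting that advice together, a hedged driver sketch (0.94/Hadoop 2-era API; the table name i1 and EntitySearcherMapper are from this thread, everything else is illustrative): keep initTableMapperJob for the scan input, set MultiTableOutputFormat for output, and drop initTableReducerJob:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class Driver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "entity-searcher");
    job.setJarByClass(EntitySearcherMapper.class);
    Scan scan = new Scan();
    scan.setCacheBlocks(false); // commonly recommended for MR scans
    // this call supplies the table input, which is a must
    TableMapReduceUtil.initTableMapperJob("i1", scan,
        EntitySearcherMapper.class, ImmutableBytesWritable.class, Put.class, job);
    // output goes through MultiTableOutputFormat instead of initTableReducerJob
    job.setOutputFormatClass(MultiTableOutputFormat.class);
    job.setNumReduceTasks(0); // map-only; the mapper emits (tableName, Put) pairs
    if (!job.waitForCompletion(true)) {
      throw new IOException("error with job!");
    }
  }
}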
Hi,
I am using HBase 0.94.10 (distribution: Apache).
I am working on JRuby scripts to create a custom HBase shell command.
I want to know whether I can create a custom HBase command similar to the
already available scan and put commands. I have a sample JRuby script,
client.rb, that outputs the row ID and value
PM, yeshwanth kumar yeshwant...@gmail.com wrote:
Thanks for the info, Ted.
On Wed, Apr 30, 2014 at 9:22 PM, Ted Yu yuzhih...@gmail.com wrote:
After rebuilding 0.94, you can deploy the artifacts onto a Hadoop 2.2
cluster.
See HBASE-11076
Cheers
On Wed, Apr 30, 2014 at 8:20 AM, yeshwanth
):
-Dhadoop.profile=2.0
On Thu, May 1, 2014 at 9:02 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi Ted,
here are the changes http://pastebin.com/CJp2Z9iX I made to the hbase pom.
While building, it is giving "hadoop-snappy native jar cannot get from
repository";
I am trying to build
:00 yeshwanth kumar yeshwant...@gmail.com:
Hi Ted,
I am trying to build HBase 0.94.18.
I followed the procedure: I edited the pom.xml, changing the protobuf
version to 2.5.0 and the hadoop version to 2.2.0,
but I cannot build HBase.
Here's the complete log: http://pastebin.com/7bQ5TBZe
Hi,
are the HBase 0.94.x versions compatible with Hadoop 2.2?
I checked the Apache HBase website; there it is marked NT (not tested).
Thanks,
yeshwanth.
)
at java.lang.Thread.run(Thread.java:744)
How can I fix this dependency issue?
On Fri, Apr 25, 2014 at 9:06 PM, yeshwanth kumar yeshwant...@gmail.com wrote:
Hi Jean,
I haven't written any piece of code to work around the znode;
one of my REST endpoints in the webapp reads data from HBase
...@gmail.com wrote:
Did the exception below happen when you were performing some query on the
region server? Can you tell us a bit more about whether your query uses
FilterList?
Thanks
On Sun, Apr 27, 2014 at 9:28 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi Jean,
I am using
here's the code snippet: http://pastebin.com/AGh7mTNT
thanks,
yeshwanth
On Sun, Apr 27, 2014 at 10:20 PM, Ted Yu yuzhih...@gmail.com wrote:
Can you show us the code snippet where you add the filter to the Scan object?
Thanks
On Apr 27, 2014, at 9:43 AM, yeshwanth kumar yeshwant...@gmail.com
wrote
Thanks, Ted.
On Sun, Apr 27, 2014 at 11:06 PM, Ted Yu yuzhih...@gmail.com wrote:
I am adding the CDH users mailing list, where you would get a good response
to the issue below.
Cheers
On Sun, Apr 27, 2014 at 10:30 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi Ted,
I replaced
client and
...not look at the ZNode. Not getting why it's looking there. Do you know?
JM
2014-04-25 2:01 GMT-04:00 yeshwanth kumar yeshwant...@gmail.com:
Hi Matteo,
my problem isn't solved yet;
the webapp isn't reading data from HBase.
All I see in the logs is the znode /hbase/table/mytable
Hi,
I am running a webapp written on the JAX-RS framework which performs CRUD
operations on HBase.
The app was working fine till last week;
now when I perform a read operation from HBase I don't see any data. I
don't see any errors or exceptions, but I found these lines in the log:
Unable to get data of
Hi Matteo,
how do I specify the HBase znode to use /hbase/table94 instead of
/hbase/table?
thanks
On Tue, Apr 22, 2014 at 9:40 PM, Matteo Bertozzi theo.berto...@gmail.com wrote:
On Tue, Apr 22, 2014 at 9:00 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
@matteo
present znode is at /hbase
Hi, I am using HBase 0.94.6-cdh4.5.0.
I connected to HBase by setting the config explicitly in my code, through the
config.set zookeeper quorum property.
I am able to read the HBase table data properly; it is connecting to the
host specified in the config.
log:
Creating new Groups object
Group mapping
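For reference, a minimal sketch of the explicit-config connection described above (0.94-era client API; the ZooKeeper hosts, table name, and row key are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ExplicitQuorum {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // the client only needs ZooKeeper to locate the cluster
    conf.set("hbase.zookeeper.quorum", "zkhost1,zkhost2,zkhost3"); // placeholders
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    HTable table = new HTable(conf, "mytable"); // placeholder table name
    Result r = table.get(new Get(Bytes.toBytes("somerow"))); // placeholder row
    System.out.println(r);
    table.close();
  }
}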
Is hbase-site.xml in the classpath?
Thanks
On Thu, Mar 6, 2014 at 3:05 AM, yeshwanth kumar yeshwant...@gmail.com
wrote:
Hi, I am using HBase 0.94.6-cdh4.5.0.
I connected to HBase by setting the config explicitly in my code, through the
config.set zookeeper quorum property.
I am able to read
Hi,
I am using HBase 0.94.10 and Hadoop 1.2.1,
trying to run a couple of MapReduce jobs on HBase with TableMapper,
and I am getting this exception.
Do I really need to configure DNS locally?
Can someone help me with this issue?
Exception in thread "main" java.lang.NullPointerException
at
...deployment?
Please configure DNS on the node.
Cheers
On Feb 8, 2014, at 6:46 AM, yeshwanth kumar yeshwant...@gmail.com wrote:
Hi,
I am using HBase 0.94.10 and Hadoop 1.2.1,
trying to run a couple of MapReduce jobs on HBase with TableMapper,
and I am getting this exception.
Do I really need
I am running HBase version 0.94.6.
By mistake I deleted a directory under /hbase in HDFS;
I recovered that directory again from the .Trash of HDFS.
When I ran an hbase hbck on the respective table, it is showing an inconsistency.
There's something I messed up with the META info of the regions.
Any idea of how to
Hi,
I am facing some difficulty writing coprocessors in HBase version 0.95.2,
and I am looking for some tutorials and examples.
Can anyone provide me some examples?
How are coprocessors related to protocol buffers?
Thanks
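In the meantime, a minimal hedged sketch of an observer coprocessor against the 0.95/0.96-era API (the prePut signature varies between releases; in older 0.94 builds the last argument is a boolean writeToWAL instead of Durability). On the protobuf question: observers like this one don't need protocol buffers at all; it is endpoint coprocessors, whose custom RPC service is defined in a .proto file, that are tied to protobuf in 0.95+:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

public class LoggingObserver extends BaseRegionObserver {
  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Put put, WALEdit edit, Durability durability) throws IOException {
    // runs on the region server before every Put; no protobuf involved
    System.out.println("prePut on row " + Bytes.toStringBinary(put.getRow()));
  }
}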