In the terminal, I mean through the shell, use double quotes for specifying
the row key instead of single quotes; double-quoted strings in the shell
interpret hex escapes such as \x09, while single-quoted strings are taken
literally. And for the HBase API, what I can think is it's not able to match
the row key. Try using double quotes there also...
Regards
∞
Shashwat Shriparv
On Mon, May 14, 2012 at 10:27 AM, Mahesh Balija
Hi,
I tried what you told me, but nothing worked:(((
First, when I run this command:
dalia@namenode:~$ host -v -t A `hostname`
Output:
Trying namenode
Host namenode not found: 3(NXDOMAIN)
Received 101 bytes from 10.0.2.1#53 in 13 ms
My core-site.xml:
<configuration>
  <property>
Dear All,
Our hadoop cluster has 1 NN, 3 DN. Every node has a 1 TB hard disk and 4 GB
of memory.
Hadoop version is CDH3.
We are trying to load some records (one record has approx. 250 bytes) from
many JMS queues into an HBase table.
Is it possible to load records (for example 1.000.000 records in a second)
from many JMS queues into an HBase table?
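Leaving the raw-throughput question to the replies below, a minimal sketch of
the direct JMS-to-HBase path with client-side write buffering; the table name,
column family, and JMS wiring are all hypothetical:

import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class JmsToHBase implements MessageListener {
  private final HTable table;

  public JmsToHBase() throws Exception {
    Configuration conf = HBaseConfiguration.create();
    table = new HTable(conf, "records");       // hypothetical table
    table.setAutoFlush(false);                 // buffer puts client-side
    table.setWriteBufferSize(2 * 1024 * 1024); // flush every ~2 MB
  }

  @Override
  public void onMessage(Message msg) {
    try {
      TextMessage tm = (TextMessage) msg;
      // ~250-byte records: message id as row key, body as one cell
      Put put = new Put(Bytes.toBytes(tm.getJMSMessageID()));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("body"),
              Bytes.toBytes(tm.getText()));
      table.put(put);                          // sent in batches, not per call
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}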
Need a little clarification...
You said that you need to do multi-index queries.
Did you mean multiple people running queries at the same time, or did you
mean multi-key indexes where the key is composed of multiple parts?
Or did you mean that you really wanted to use multiple
Hi Michel,
I indexed each column within a column family of a table, so we can query a
row by a specific column value.
By multi-index I mean using multiple indexes at the same time in a single
query. That looks like a SQL select
with two *where* clauses on two indexed columns.
The row key of
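The message is cut off above; for reference, a query of that shape (two
predicates ANDed together) can at least be expressed server-side without
indexes via a FilterList. A minimal sketch with hypothetical column names;
note this scans the row range rather than intersecting two index lookups, so
it matches the semantics, not the performance:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class TwoPredicateScan {
  // WHERE cf:a = 'x' AND cf:b = 'y', evaluated server-side per row
  public static Scan build() {
    FilterList both = new FilterList(FilterList.Operator.MUST_PASS_ALL);
    both.addFilter(new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("a"),
        CompareOp.EQUAL, Bytes.toBytes("x")));
    both.addFilter(new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("b"),
        CompareOp.EQUAL, Bytes.toBytes("y")));
    Scan scan = new Scan();
    scan.setFilter(both);
    return scan;
  }
}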
Hi,
Sounds like a project for Flume: http://incubator.apache.org/flume/
I don't know if the HBase Sink/JMS Source is committed yet but
this sounds like an ideal use case for them.
Disclaimer, I am a committer on Flume.
Brock
On Mon, May 14, 2012 at 7:17 AM, Faruk Berksöz
Hi,
There could be multiple issues, but it's strange to have in hbase-site.xml:
<value>hdfs://namenode:9000/hbase</value>
while the core-site.xml says:
<value>hdfs://namenode:54310/</value>
The two entries should match.
I would recommend to:
- use netstat to check the ports (netstat -l)
- do the
Ahmed,
Generally speaking, the intent of HBase IS to be a first class data store. It's
a young data store (not even 1.0) so you have to take that into account; but
there's been a lot of engineering put into making it fully safe, and known data
safety issues are considered release blockers.
Any data store may lose data, as a generic statement, so maybe you had
something more specific in mind?
On May 13, 2012, at 9:21 PM, Srikanth P. Shreenivas
srikanth_shreeni...@mindtree.com wrote:
There is a possibility that you may lose data, and hence, I would not use it
for first class
In core-site.xml, do you have this?
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode:8020/hbase</value>
  </property>
If you want hbase to connect to 8020 you must have hdfs listening on
8020 as well.
On Mon, May 14, 2012 at 5:17 PM, Dalia Sobhy dalia.mohso...@hotmail.com wrote:
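A quick way to check which values HBase actually resolves at runtime, assuming
the config files are on the classpath (a sketch, not from the thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ShowConf {
  public static void main(String[] args) {
    // Loads hbase-site.xml (plus the hadoop configs) from the classpath
    Configuration conf = HBaseConfiguration.create();
    System.out.println("fs.default.name = " + conf.get("fs.default.name"));
    System.out.println("hbase.rootdir   = " + conf.get("hbase.rootdir"));
    // The host:port authority of the two URIs must agree
  }
}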
Re: your question #1, you won't be able to pass information from mappers to
reducers by using static variables. Since map tasks run in different JVM
instances than reduce tasks, the value of the static variable will never be
sent from the mapper JVM to the reducer JVM.
It might work in standalone
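If the goal is just to hand a small constant to both sides, the job
Configuration is the usual channel; values computed by mappers still have to
travel through the shuffle as emitted key/values. A minimal sketch, with a
hypothetical key name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class ConfigPassing {
  public static class MyReducer
      extends Reducer<Text, LongWritable, Text, LongWritable> {
    private String threshold;

    @Override
    protected void setup(Context context) {
      // Read in every task JVM, mapper or reducer alike
      threshold = context.getConfiguration().get("myjob.threshold");
    }
    // reduce(...) omitted
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("myjob.threshold", "42");  // set once in the driver
    Job job = new Job(conf, "config-passing");
    // ... input/output/mapper/reducer wiring omitted
  }
}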
Hi,
Even now I am facing the same problem. This is not allowing me to
delete all records from HBase.
And I cannot use the double quotes in the MapReduce job. Also I am
able to get the record but unable to delete it.
Can anyone please help me out in this?
Thanks,
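In Java code there is no shell quoting involved at all, since row keys are
plain byte arrays. A minimal sketch, assuming the key from this thread (a
literal '0', a tab byte, then the id) and using the table name shown earlier:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteBinaryKey {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "table_name");

    // "0\x091038497092": a literal '0', a tab byte (0x09), then the id
    byte[] row = Bytes.add(Bytes.toBytes("0"),
                           new byte[] { 0x09 },
                           Bytes.toBytes("1038497092"));

    Result r = table.get(new Get(row));
    if (!r.isEmpty()) {
      table.delete(new Delete(row)); // removes all cells of the row
    }
    table.close();
  }
}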
Please don't cross-post, your question is about HBase not MapReduce
itself so I put mapreduce-user@ in BCC.
0.20.3 is, relative to the age of the project, as old as my
grandmother, so you should consider upgrading to 0.90 or 0.92, which
are both pretty stable.
I'm curious about the shell's
On Mon, May 14, 2012 at 12:02 PM, anil gupta anilgupt...@gmail.com wrote:
I loaded around 70 thousand 1-2KB records in HBase. For scans with my
custom filter I am able to get 97 rows in 500 milliseconds, and for doing
sum, max, min (built-in aggregations of HBase) on the same custom filter it's
On Mon, May 14, 2012 at 8:20 AM, Dalia Sobhy dalia.mohso...@hotmail.com wrote:
Here are the error msgs I receive..
12/05/14 09:16:17 FATAL master.HMaster: Unhandled exception. Starting
shutdown.
java.net.ConnectException: Call to namenode/10.0.2.3:8020 failed on
connection exception:
On Mon, May 14, 2012 at 5:17 AM, Faruk Berksöz fberk...@gmail.com wrote:
Is it possible to load records (for example 1.000.000 records in a second)
from many JMS queues into an HBase table?
What Brock says. Regarding 1M records a second, you'll probably need a
fatter cluster than a three-noder @
I know that regions can split (either manually, or automatically), but
is there any process whereby regions that have previously split will
combine (perhaps when a region has shrunk)?
If so, what are the conditions that cause it, and does it happen
automatically or only via a manual process?
On Sun, May 13, 2012 at 4:12 PM, Shrijeet Paliwal
shrij...@rocketfuel.com wrote:
Can you write an MR job that rewrites the data once, Shrijeet? It would
take hfiles for input and it would write out hfiles, only it'd write
hfiles no bigger than a region max in size. You'd use the bulk importer
to
Hi Stack,
I'll look into Gary Helmling's post and try to do profiling of the
coprocessor and share the results.
Thanks,
Anil Gupta
On Mon, May 14, 2012 at 12:08 PM, Stack st...@duboce.net wrote:
On Mon, May 14, 2012 at 12:02 PM, anil gupta anilgupt...@gmail.com
wrote:
I loaded around 70 thousand
Yes, agreed that data can be lost in any DB. However, isn't it more frequently
seen in NoSQL DBs? In the case of HBase, is it not possible for the underlying
HDFS to lose data if nodes went down abruptly a few times?
Andrew Purtell andrew.purt...@gmail.com wrote:
Any data store may lose data, as a
Configure your hosts file properly.. proper DNS resolution is really
important and a bit tricky.. also check your fs.default.name and
hbase.rootdir properties.. they both should coincide.. and add the following
two jars into your HBASE_HOME/lib directory:
1. hadoop-core from your HADOOP_HOME
2.
HDFS is designed to not lose data if a few nodes fail. It holds multiple
replicas of each block. Having said that - it also depends on the definition of
a few. Many companies are using HDFS as their central data store and it's
proven at scale in production. It does not lose data arbitrarily,
You don't really help anyone evaluate their options if you just throw
out nonspecific statements like "There is a possibility that you may
lose data"... of course there is, with anything. Do you have first- or
secondhand knowledge of some specific incident where someone lost data
using HBase? Not a
Ahmed,
I'll second what Ian and Andrew have highlighted. HBase is very capable of
being used as a primary store as long as you run it following the best
practices. It's a useful exercise to clearly define the failure scenarios you
want to safeguard against and what kind of SLAs you have in
Hi,
Here are the steps I am doing to use deleteall:
1) get 'table_name', 0\x09
2) deleteall 'table_name', 0\x091038497092
   0 row(s) in 0.0280 seconds
3) get 'table_name', 0\x091038497092
   COLUMN                    CELL
    cf1:text                 timestamp=2009163330,
Anil:
I think the performance was related to your custom filter.
Please tell us more about the filter next time.
Thanks
On Mon, May 14, 2012 at 12:31 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Stack,
I'll look into Gary Helmling's post and try to do profiling of the
coprocessor and
share the
Hi Stack,
The namenode, jobtracker and secondary namenode are working and there is no
problem with them.
The problem is when I run this command:
$ host -v -t A `hostname`
Trying namenode
Host namenode not found: 3(NXDOMAIN)
I don't know why, so I want to ask a question: do I have to let hostname =
On Mon, May 14, 2012 at 1:02 PM, Dalia Sobhy dalia.mohso...@hotmail.com wrote:
Hi Stack,
The namenode, jobtracker and secondary namenode are working and there is no
problem with them.
The problem is when I run this command:
$ host -v -t A `hostname`
Trying namenode
Host namenode not found: 3(NXDOMAIN)
Hi Ted,
My bad, I missed a big difference between the Scan object I am using in
my filter and the Scan object used in coprocessors. So the scan objects are
not the same.
Basically, I am doing filtering on the basis of a prefix of the RowKey.
So, in my filter I do this to build the scanner:
Code 1:
Filter filter
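The snippet above is cut off; a minimal sketch of the prefix-bounded scan
idea, where the stopRow is the prefix with its last byte incremented (the
prefix value is hypothetical):

import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScan {
  // Covers exactly the rows whose key starts with 'prefix'
  // (naive stopRow: assumes the last prefix byte is not 0xFF)
  public static Scan forPrefix(byte[] prefix) {
    byte[] stop = Arrays.copyOf(prefix, prefix.length);
    stop[stop.length - 1]++;  // first key after the prefix range
    Scan scan = new Scan();
    scan.setStartRow(prefix);
    scan.setStopRow(stop);
    return scan;
  }

  public static void main(String[] args) {
    Scan s = forPrefix(Bytes.toBytes("user123|"));
    // hand s to HTable#getScanner or to an aggregation call
  }
}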
Anil:
As code #3 shows, having a stopRow helps narrow the range of rows
participating in aggregation.
Do you have a suggestion on how this process can be made more user-friendly?
Thanks
On Mon, May 14, 2012 at 1:47 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Ted,
My bad, I missed a big
Following up on a discussion that started in asynchbase group...
It looks like our RS failure was related to a ZooKeeper timeout, seems we may
have overloaded that RS. The cause of the failure is not as important to me
right now as our ability to recover from the failure. To answer some of
Let me know if Get is giving you any result with the row you specified.
Try to get the value using that row key; is that giving you any value?
On Tue, May 15, 2012 at 1:25 AM, Mahesh Balija
balijamahesh@gmail.comwrote:
Hi,
Here are the steps I am doing to use deleteall,
1)
Stack,
Ahh, of course! Thank you. One question: what partition file do I give to
the total order partitioner?
I am trying to parse your last comment:
"You could figure how many you need by looking at the output of your MR job"
Chicken and egg? Or am I not following you correctly?
-Shrijeet
On Mon, May 14,
On Mon, May 14, 2012 at 2:11 PM, Shrijeet Paliwal
shrij...@rocketfuel.com wrote:
Ahh, of course! Thank you. One question: what partition file do I give to
the total order partitioner?
I am trying to parse your last comment:
"You could figure how many you need by looking at the output of your MR job"
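One way the chicken-and-egg resolves in practice: HFileOutputFormat's
configureIncrementalLoad derives the TotalOrderPartitioner's partitions file
from the target table's current region start keys, so no hand-built partition
file is needed. A sketch under that assumption; the table name and the
cell-copying mapper are illustrative:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RewriteToHFiles {
  // Copies every cell of a row into a Put, unchanged
  static class RewriteMapper
      extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value,
        Context context) throws IOException, InterruptedException {
      Put put = new Put(row.get());
      for (KeyValue kv : value.raw()) {
        put.add(kv);
      }
      context.write(row, put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "rewrite-to-hfiles");
    job.setJarByClass(RewriteToHFiles.class);

    // Read the live table rather than raw hfiles
    TableMapReduceUtil.initTableMapperJob("mytable", new Scan(),
        RewriteMapper.class, ImmutableBytesWritable.class, Put.class, job);

    // Builds the partitions file from the table's region start keys and
    // wires up TotalOrderPartitioner + HFileOutputFormat
    HFileOutputFormat.configureIncrementalLoad(job,
        new HTable(conf, "mytable"));

    FileOutputFormat.setOutputPath(job, new Path(args[0]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}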
Yes I am able to retrieve the result with the get operation.
On Tue, May 15, 2012 at 2:32 AM, shashwat shriparv
dwivedishash...@gmail.com wrote:
Let me know if Get is giving you any result with the row you specified.
Try to get the value using that row key; is that giving you any value?
On
Hi Ted,
If we change the if statement condition in the validateParameters method in
AggregationClient.java to:
if (scan == null
    || (Bytes.equals(scan.getStartRow(), scan.getStopRow())
        && !Bytes.equals(scan.getStartRow(), HConstants.EMPTY_START_ROW))
    || (Bytes.compareTo(scan.getStartRow(),
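For context, the client-side call that passes through this validation looks
roughly like the following, a sketch against the 0.92-era aggregation API
with hypothetical table and column names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class SumExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient ac = new AggregationClient(conf);

    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("row-a"));
    scan.setStopRow(Bytes.toBytes("row-z"));  // narrows the aggregated range
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("amount"));

    long sum = ac.sum(Bytes.toBytes("mytable"),
                      new LongColumnInterpreter(), scan);
    System.out.println("sum = " + sum);
  }
}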
I was aware of the following change.
Can you log a JIRA and attach the patch to it ?
Thanks for trying out and improving aggregation client.
On Mon, May 14, 2012 at 3:31 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Ted,
If we change the if statement condition in validateParameters method
Hello!
I'm writing a MapReduce job to read a SequenceFile and write it to an HBase
table.
Normally, or what the HBase tutorial tells us to do, you would create a Put in
a TableMapper and pass it to IdentityTableReducer. This in fact works for me.
But now I'm trying to separate the computations into
Oops, I made a mistake while copy-pasting.
The reducer initialization code should be like this:
TableMapReduceUtil.initTableReducerJob("rs_system", MyTableReducer.class,
    itemTableJob);
On Tue, May 15, 2012 at 10:50 AM, Ben Kim benkimkim...@gmail.com wrote:
Hello!
I'm writing a MapReduce job to read a
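For reference, a minimal sketch of the pattern being corrected above: a plain
mapper turns each SequenceFile record into a Put, and initTableReducerJob
(with the quoted table name and .class literal) wires the write side. The
mapper, key/value types, and column names are hypothetical:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class SeqToHBase {
  // Turns each SequenceFile record into a Put for the table
  static class PutMapper
      extends Mapper<Text, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(Text key, Text value, Context ctx)
        throws IOException, InterruptedException {
      Put put = new Put(Bytes.toBytes(key.toString()));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("v"),
              Bytes.toBytes(value.toString()));
      ctx.write(new ImmutableBytesWritable(put.getRow()), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "seqfile-to-hbase");
    job.setJarByClass(SeqToHBase.class);
    job.setInputFormatClass(SequenceFileInputFormat.class);
    SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
    job.setMapperClass(PutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    // Quoted table name and .class literal, as in the correction above
    TableMapReduceUtil.initTableReducerJob("rs_system",
        IdentityTableReducer.class, job);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}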