Hi, has anybody been facing similar issues?
- R
On Wed, Feb 26, 2014 at 12:55 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
We are running hbase 0.94.2 on hadoop 0.20-append in production
(yes, we have plans to upgrade hadoop). It's a 5 node cluster, plus a 6th node
running just the name node and hmaster.
The RS gets declared dead by Master+ZK, but then tries to join back and gets
informed it has been kicked out.
Reasons:
- Long Garbage Collection;
- Swapping;
- Network issues (the RS gets disconnected, then re-connected);
- etc.
What do you have before 2014-02-21 13:41:00,308 in the logs?
2014-02-27 11:13 GMT-05:00 Rohit Kelkar
to, it will.
How many GB on your server? How many for the DN, for the RS, etc.? Any TT on
them? Any other tool? If TT, how many slots? How many GB per slot?
JM
2014-02-27 11:37 GMT-05:00 Rohit Kelkar rohitkel...@gmail.com:
Hi Jean-Marc,
I have updated the RS log here (http
Oh yes and forgot to add the ZK process
ZK = 5GB
Total = 45GB
On Thu, Feb 27, 2014 at 11:01 AM, Rohit Kelkar rohitkel...@gmail.com wrote:
Hi Jean-Marc,
Each node has 48GB RAM
To isolate and debug the RS failure issue, we have switched off all other
tools. The only processes running
We are running hbase 0.94.2 on hadoop 0.20-append in production
(yes, we have plans to upgrade hadoop). It's a 5 node cluster, plus a 6th node
running just the name node and hmaster.
I am seeing frequent RS YouAreDeadExceptions. Logs here
http://pastebin.com/44aFyYZV
The RS log shows a
I am using hbase version 0.92.4 on a 5 node cluster. I am seeing that a
particular region server often crashes. A status 'simple' on hbase shell
gives the following stats
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.2,
Could it be related to the disproportionately high number of regions on that
server? Checking the region server log on server5 should give us more clues.
bq. 0.92.4
Please consider upgrading :-)
On Fri, Feb 14, 2014 at 3:52 PM, Rohit Kelkar rohitkel...@gmail.com
wrote:
I am using hbase version 0.92.4 on a 5 node
Perhaps you are experiencing the following, which went into 0.94.10:
HBASE-8432 (a table with unbalanced regions will balance indefinitely)
The Master log would tell us more.
On Fri, Feb 14, 2014 at 4:18 PM, Rohit Kelkar rohitkel...@gmail.com
wrote:
Sorry, I mis-stated the version; it's 0.94.2
- R
On Fri
On Fri, Feb 14, 2014 at 5:00 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
Thanks for your inputs,
I am sharing the master log - http://pastebin.com/Xi9P6Ykr
and the region server log of the failed region server -
http://pastebin.com/1munghDv
- R
On Fri
Regarding the slow scan: only fetch the columns/qualifiers that you need. It
may be that you are fetching a whole lot of data that you don't need. Try
scan.addColumn() and let us know.
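For example, a minimal sketch (assuming a table handle myTable and a
hypothetical column cf:q; use your own family/qualifier names):

Scan scan = new Scan();
// fetch only cf:q instead of every column in the row
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
ResultScanner scanner = myTable.getScanner(scan);
for (Result result : scanner) {
    // process only the data you actually need
}
scanner.close();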
- R
On Sunday, August 4, 2013, lars hofhansl wrote:
BigTable has one more level of abstraction: Locality Groups
In your client code, can you try explicitly setting the value of
hbase.zookeeper.quorum?
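A minimal sketch (the host name here is hypothetical):

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "zkhost.example.com");   // hypothetical ZK host
conf.set("hbase.zookeeper.property.clientPort", "2181");    // default ZK client port
HTable table = new HTable(conf, "mytable");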
- R
On Tuesday, July 16, 2013, Jean-Marc Spaggiari wrote:
Hi Pavan,
You should try to avoid localhost; prefer your host name. Is
http://ubuntu:60010/master-status?filter=all working?
I suggested that because you were able to use the shell but not the client
code.
- R
On Tuesday, July 16, 2013, Pavan Sudheendra wrote:
Yes Jean, it is working fine.
@Rohit, I thought in standalone mode it is not required.
On Tue, Jul 16, 2013 at 5:25 PM, Rohit Kelkar rohitkel
Depends on what your /etc/hosts file says
On Tuesday, July 16, 2013, Pavan Sudheendra wrote:
Ah. Should the value of the quorum be localhost or my IP address?
On Tue, Jul 16, 2013 at 5:32 PM, Rohit Kelkar rohitkel...@gmail.com
wrote:
I suggested that because you were able to use the shell but not the client
code.
On Mon, Jul 15, 2013 at 7:27 AM, Rohit Kelkar rohitkel...@gmail.com
wrote:
Thanks Amit, I am also using 0.94.2. I am also pre-splitting, and I tried
table.clearRegionCache(), but it still doesn't work.
- R
On Sun, Jul 14, 2013 at 3:45 AM, Amit Sela am...@infolinks.com wrote:
On Tue, Jul 16, 2013 at 1:15 PM, Rohit Kelkar rohitkel...@gmail.com
wrote:
Yes. I tried everything from myTable.flushCommits() to
myTable.clearRegionCache() before and after the
LoadIncrementalHFiles.doBulkLoad(). But it doesn't seem to work. This is
what I am doing right now to get things
On Tue, Jul 16, 2013 at 4:41 PM, Jimmy Xiang jxi...@cloudera.com wrote:
HBASE-8055 should have fixed it.
On Tue, Jul 16, 2013 at 2:33 PM, Rohit Kelkar rohitkel...@gmail.com
wrote:
This ( http://pastebin.com/yhx4apCG ) is the error on the region server
side when I execute the following on the shell after the HFile load -
get 'mytable', 'myrow', 'cf:q'
- R
Now it's working correctly. I had to call
myTableWriter.appendTrackedTimestampsToMetadata() after writing my KVs and
before closing the file.
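For anyone hitting the same problem, a minimal sketch of that call order
(assuming myTableWriter is a StoreFile.Writer, as in the 0.94 API, and kvs
holds the KeyValues):

for (KeyValue kv : kvs) {
    myTableWriter.append(kv);                        // write all KVs first
}
myTableWriter.appendTrackedTimestampsToMetadata();   // then record the tracked timestamps
myTableWriter.close();                               // close only after the metadata is appended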
- R
On Tue, Jul 16, 2013 at 6:20 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
Oh wait. I didn't realise that I had the HBaseAdmin major compact code
Try calling myTable.clearRegionCache() after the bulk load (or even after the
pre-splitting, if you do pre-split).
This should clear the region location cache. I needed to use this because I am
pre-splitting my tables for bulk load.
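A sketch of that sequence (assuming loadTool, myTable and an HFile output
directory as in the earlier snippets):

loadTool.doBulkLoad(new Path(hfileOutputDir), myTable);  // bulk load the HFiles
myTable.clearRegionCache();                              // then drop the stale region locations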
BTW I'm using HBase 0.94.2
Good luck!
On Fri, Jul 12, 2013 at 6:50 PM, Rohit Kelkar
I am having problems while scanning a table created using HFile.
This is what I am doing -
Once the HFile is created I use the following code to bulk load:
LoadIncrementalHFiles loadTool = new LoadIncrementalHFiles(conf);
HTable myTable = new HTable(conf, mytablename.getBytes());
loadTool.doBulkLoad(new Path(hfileOutputDir), myTable); // truncated call completed; the directory variable name is assumed
Each row in my hbase table contains the following data:
rowkey  column=pt:np, value=abcd
        column=pt:vb, value=efgh
        column=pt:employeeId, value=deptId
Using a combination of filters, is it possible to get all rows and all
qualifiers within the pt column family where
SingleColumnValueFilter
You can refer to
src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java
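A minimal sketch of such a filter (assuming you want the rows where
pt:employeeId equals a given value):

SingleColumnValueFilter filter = new SingleColumnValueFilter(
    Bytes.toBytes("pt"), Bytes.toBytes("employeeId"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("deptId"));
filter.setFilterIfMissing(true);   // skip rows that do not have the column at all
Scan scan = new Scan();
scan.setFilter(filter);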
BTW why is deptId stored in the pt:employeeId column?
Cheers
On Fri, Jul 5, 2013 at 4:43 PM, Rohit Kelkar rohitkel...@gmail.com
wrote:
Each row in my hbase table contains
I would recommend that you buffer them (let's say, 100 at a time) and put
them as a batch. Don't forget to push the remaining puts at the end of the
job. The drawback is that if the MR job crashes you will have some rows
already processed but not marked as processed...
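A sketch of that buffering pattern (assuming myTable is the HTable and puts is
whatever stream of Puts your job produces):

List<Put> buffer = new ArrayList<Put>();
for (Put put : puts) {
    buffer.add(put);
    if (buffer.size() >= 100) {    // flush every 100 puts
        myTable.put(buffer);
        buffer.clear();
    }
}
if (!buffer.isEmpty()) {
    myTable.put(buffer);           // push the remaining puts at the end
}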
JM
2013/6/22 Rohit Kelkar rohitkel...@gmail.com
But given the volume of data, I am inclined to save this extra IO operation.
- R
On Wed, Jun 19, 2013 at 11:08 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
Perfect. That worked. Thanks.
- R
On Wed, Jun 19, 2013 at 7:23 PM, Jeff Kolesky j...@opower.com wrote:
Last time I wrote directly to an HFile
Does LATEST_TIMESTAMP make the table not see the actual rows?
- R
On Thu, Jun 20, 2013 at 2:53 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
OK. So I was able to write the HFile on HDFS, but when I try loading it into
an existing HTable the code completes without failing; yet when I do a
count
Here is a problem that I am facing while creating an HFile outside of an MR
job.
My column family is sd. For a given rowKey=10011-2-703, this is the sequence
in which I am writing KeyValue pairs to the HFile:
key=sd:dt, value=dummy value 1
key=sd:dth, value=dummy value 2
When I
Here is the code - https://gist.github.com/anonymous/5816180
I guess the issue is with my use of the comparator function.
- R
at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:282)
at com.mycompany.hbase.process.myprocess.myFunction(MyClass.java:1492)
I am using hbase-0.94.2
- Rohit Kelkar
On Wed, Jun 19, 2013 at 1:15 PM, Jeff Kolesky j...@opower.com wrote:
I believe you need to use KeyValue.KEY_COMPARATOR when creating the writer,
something like (the call was truncated here; the factory method and fs are
assumptions for the 0.94-era API):
HFile.Writer writer = HFile.getWriterFactory(conf, new CacheConfig(conf))
    .withPath(fs, hfilePath)
    .withBlockSize(bytesPerBlock * 1024)
    .withCompression(Compression.Algorithm.GZ)
    .withComparator(KeyValue.KEY_COMPARATOR)
    .create();
Perhaps you need the declaration of the comparator in the create statement
for the writer.
Jeff
On Wed, Jun 19, 2013 at 5:11 PM, Rohit Kelkar
Is running an MR job and an incremental bulk load simultaneously on the same
hbase table going to affect each other? If yes, can you suggest strategies to
make the bulk load and the MR jobs mutually exclusive?
- R
of
regions?
- Rohit Kelkar
On Fri, Apr 19, 2013 at 11:22 AM, Arpit Gupta ar...@hortonworks.com wrote:
Take a look at this https://issues.apache.org/jira/browse/ZOOKEEPER-1670
When no -Xmx was set, we noticed that ZooKeeper could take up to 1/4 of the
memory available on the system with JDK 1.6.
What makes the zookeeper memory grow so much, or is it that we have missed
some important maintenance activity on zookeeper?
- Rohit Kelkar
No. Just using the bin/zkServer.sh start command. Also each node has 48
GB RAM
- Rohit Kelkar
On Thu, Apr 18, 2013 at 10:28 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Rohit,
How are you starting your ZK servers? Are you passing any Xm* parameters?
JM
2013/4/18 Rohit
in
hbase to implement something that is equivalent to foreign keys.
- Rohit Kelkar
On Wed, Mar 28, 2012 at 10:05 AM, Neetu Ojha neetuojha.c...@gmail.com wrote:
Hi Jean,
Thanks a lot for the reply. I got your point about HBase; let me give a
little clearer picture of what I want from HBase
Jacques, I agree that storing files (let's say greater than 15 MB) would
make the namenode run out of space. But what if I make my
blocksize = 15 MB?
I am having the same issue that Konrad mentions, and I have used exactly
the approach number 2 that he described.
- Rohit Kelkar
On Mon, Mar
you can fetch the data for a day by using a scan
along with row key filters.
To perform analysis on data from multiple days you can then use map-reduce jobs.
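For example, if the row keys are date-prefixed (an assumption about the
schema; logsTable is a hypothetical handle), a single day can also be scanned
with start/stop rows instead of filters:

Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("2012-02-27"));   // first key of the day
scan.setStopRow(Bytes.toBytes("2012-02-28"));    // stop row is exclusive
ResultScanner scanner = logsTable.getScanner(scan);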
- Rohit Kelkar
On Mon, Feb 27, 2012 at 1:15 PM, Something Something
mailinglist...@gmail.com wrote:
why you even need hbase to store logs
So that all the useful
Try an absolute path from /:
$ hadoop dfs -ls /hbase
On Fri, Feb 24, 2012 at 4:15 PM, Admin Absoftinc absoft...@gmail.com wrote:
I have an up and running HBase / HDFS installation.
Question: how can I ensure that the data is written to HDFS and not the
local file system?
I am using pseudo-distributed mode.
Ioan, Sorry for messing up your name. Your strategy sounds
interesting. I will try that out and post the results/problems if and
when ...
- Rohit Kelkar
On Mon, Jan 30, 2012 at 1:41 PM, Ioan Eugen Stan stan.ieu...@gmail.com wrote:
On 30.01.2012 09:53, Rohit Kelkar wrote:
Hi Stack,
My
themselves.
- Rohit Kelkar
On Fri, Jan 27, 2012 at 4:21 PM, Ioan Eugen Stan stan.ieu...@gmail.com wrote:
Hello Rohit,
I would try to write most objects in a Hadoop SequenceFile or a MapFile and
store the index/byte offset in HBase.
When reading: open the file, seek() to the position, and start reading.
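A sketch of that write path (all names here are hypothetical):

// append the object to a SequenceFile and remember where it starts
SequenceFile.Writer writer = SequenceFile.createWriter(
    fs, conf, path, Text.class, BytesWritable.class);
long offset = writer.getLength();                // byte offset of the next record
writer.append(new Text(objectId), new BytesWritable(objectBytes));
// store only the pointer in HBase, not the object itself
Put put = new Put(Bytes.toBytes(objectId));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("file"), Bytes.toBytes(path.toString()));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("offset"), Bytes.toBytes(offset));
indexTable.put(put);

And on the read side, seek straight to the stored offset:

SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
reader.seek(offset);                             // jump to the record
Text key = new Text();
BytesWritable value = new BytesWritable();
reader.next(key, value);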
in a MapFile/SequenceFile on hdfs and insert into hbase
the reference of the object stored in the file. Now if I run a
mapreduce task, my mappers would run local to the object
references and not to the actual dfs block where the object resides.
On Mon, Jan 30, 2012 at 11:12 AM
in hbase
While storing the objects I am using the
WritableUtils.toByteArray(myObject) method. Can I use
WritableUtils.toByteArray(myObject).length to determine whether the object
should go in hbase or hdfs? Is this an acceptable strategy? Is the 5
MB limit a safe enough threshold?
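The size check itself only costs the one serialization you are already doing.
A sketch (rowKey, family, qualifier and the table handle are hypothetical):

byte[] bytes = WritableUtils.toByteArray(myObject);
if (bytes.length <= 5 * 1024 * 1024) {   // the 5 MB threshold discussed above
    Put put = new Put(rowKey);
    put.add(family, qualifier, bytes);   // small object: store it in hbase directly
    table.put(put);
} else {
    // large object: append it to the SequenceFile/MapFile and store only
    // the file name and byte offset in hbase (see the earlier sketch)
}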
- Rohit Kelkar
the metadata and
measurement data for rowIds?
- Rohit Kelkar
On Mon, Jan 9, 2012 at 5:21 PM, Jonathan Hsieh j...@cloudera.com wrote:
Hi Tom,
In the case you describe -- two HTables -- there is no guarantee that they
will end up going to the same region server. If you have multiple tables
Does it have anything to do with the disk space
allocated to hadoop?
- Rohit Kelkar
On Wed, Dec 28, 2011 at 10:14 PM, Mohammad Tariq donta...@gmail.com wrote:
Hi Doug,
Thanks a lot for the reply.Ya, I had asked a similar
question.Actually I am stuck with some schema design issue.I am sorry
the nodes based on how
often (or less often) the scheduled job successfully completes?
- Rohit Kelkar
On Fri, Dec 9, 2011 at 2:31 PM, Lars George lars.geo...@gmail.com wrote:
Hi,
Do you maybe have an issue with naming? HBase takes the hostname (as shown in
the UI and the ZK dump there) and hints
on a different node of my cluster. Shouldn't
the mapper be running on the same node that hosts the table?
I am using the TableMapReduceUtil.initTableMapperJob method to
initialize the mapreduce job.
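For reference, a minimal sketch of that call (the mapper class and output
types here are hypothetical):

Scan scan = new Scan();
TableMapReduceUtil.initTableMapperJob(
    "mytable",            // input table
    scan,                 // scan instance controlling what is read
    MyMapper.class,       // mapper, extends TableMapper
    Text.class,           // mapper output key class
    IntWritable.class,    // mapper output value class
    job);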
- Rohit Kelkar
My hadoop cluster has 3 nodes and hbase runs on the same 3 nodes. But the
table that I am speaking of has only one region, and
http://master:50030/jobtracker.jsp shows only one mapper running.
- Rohit Kelkar
On Tue, Dec 6, 2011 at 8:38 PM, Stack st...@duboce.net wrote:
On Tue, Dec 6
If you mean that you want to execute the program you have written in Eclipse
by connecting to an hbase cluster, then the following simple lines of code
should help you.
Configuration hconf = HBaseConfiguration.create();
hconf.addResource("resources/config.xml");
hconf.set("hbase.zookeeper.quorum", "zkhost1,zkhost2,zkhost3"); // the original value was truncated; these hosts are hypothetical
documentids sent by that sender. But for this I would first have to find all
distinct authors and store them in another table, and then run a map-reduce
job on that second table. Am I thinking in the right direction, or is there
a better way to achieve this?
- Rohit Kelkar
access. I am not inclined towards emitting (author, document) in the map and
consuming (author, list of docs) in the reduce, because then I have to do
the processor-intensive work in the reduce part, and that limits the number
of parallel heavy processes that I can spawn.
Thanks again.
- Rohit Kelkar
On Mon