That is an interesting (disturbing) find Alan. Hopefully the fallback is
rare. Did you have a technique for making the compare fall back to the
pure Java compare?
Thank you,
St.Ack
On Mon, Apr 1, 2013 at 7:54 AM, Alan Chaney a...@mechnicality.com wrote:
Hi
I need to write some code that sorts
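The pure-Java fallback under discussion can be sketched as an unsigned, lexicographic byte-by-byte comparison. This is a minimal illustration only, not HBase's actual Bytes.compareTo implementation (though the ordering semantics should match):

```java
// Minimal sketch of a pure-Java lexicographic compare of byte arrays,
// treating each byte as unsigned -- the ordering HBase row keys need.
public class PureJavaCompare {
    public static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            // & 0xff turns the signed Java byte into its unsigned value
            int x = a[i] & 0xff;
            int y = b[i] & 0xff;
            if (x != y) {
                return x - y;
            }
        }
        // Equal shared prefix: the shorter array sorts first
        return a.length - b.length;
    }
}
```

The `& 0xff` step is the part that is easy to get wrong: without it, bytes above 0x7f would sort before 0x00.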
Don't miss the deadline. Get your abstract in before April 1st (There are
still a few out there hiding in the bushes as best as I can tell).
Thanks all,
St.Ack
What Alex said or try the filter language that is described here:
http://hbase.apache.org/book.html#thrift.filter-language You can use it
from the shell too (Do help 'scan' in the shell and see where it talks about
filters):
The filter can be specified in two ways:
1. Using a filterString - more
Raymond:
Major compaction does not first flush. Should it or should it be an option?
St.Ack
On Tue, Mar 12, 2013 at 6:46 PM, Liu, Raymond raymond@intel.com wrote:
I tried both hbase shell's major_compact cmd and java api
HBaseAdmin.majorCompact() on table name.
They don't flush the
On Sun, Mar 10, 2013 at 11:35 PM, dong.yajun dongt...@gmail.com wrote:
list, I'd like to unsubscribe from this mailing list.
For how to unsubscribe, see: http://hbase.apache.org/mail-lists.html
On Mon, Mar 11, 2013 at 1:25 PM, viv1d sb.h...@gmail.com wrote:
I have some questions for HBase DB:
How to ensure data integrity?
Are there methods to ensure integrity?
Does HBase have a primary key? Or alternatives?
Foreign key?
Referential Integrity?
How are the ACID
,
Pablo
On 03/10/2013 04:06 PM, Sreepathi wrote:
Hi Stack/Ted/Pablo,
Should we increase the hbase.rpc.timeout property to 5 minutes (300000 ms)?
Regards,
- Sreepathi
On Sun, Mar 10, 2013 at 11:59 AM, Pablo Musa pa...@psafe.com wrote:
That combo should be fine.
Great!!
If JVM
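For reference, 5 minutes is 300000 ms; in hbase-site.xml the override would take the usual property form (value shown purely for illustration):

```xml
<!-- hbase-site.xml: raise the client RPC timeout to 5 minutes (300000 ms) -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>300000</value>
</property>
```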
Good on you Anoop!
St.Ack
On Sun, Mar 10, 2013 at 9:42 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Hi All
Pls welcome Anoop, our newest committer. Anoop's work in HBase has been
great and he has helped a lot of users on the mailing list.
He has contributed features
What RAM says.
2013-03-07 17:24:57,887 INFO org.apache.zookeeper.ClientCnxn: Client
session timed out, have not heard from server in 159348ms for sessionid
0x13d3c4bcba600a7, closing socket connection and attempting reconnect
You Full GC'ing around this time?
Put up your configs in a place
secs] [Times: user=0.06 sys=0.00,
real=0.01 secs]
I really appreciate you guys helping me to find out what is wrong.
Thanks,
Pablo
On 03/08/2013 02:11 PM, Stack wrote:
What RAM says.
2013-03-07 17:24:57,887 INFO org.apache.zookeeper.ClientCnxn: Client
session timed out, have
On Tue, Mar 5, 2013 at 8:36 PM, kiran kiran.sarvabho...@gmail.com wrote:
Dear All,
I had some miserable experience with gets (batch gets) in hbase. I have two
tables with different rowkeys, columns are distributed across the two
tables.
Currently what I am doing is scan over one table and
at 12:21 PM, Stack st...@duboce.net wrote:
On Tue, Mar 5, 2013 at 8:36 PM, kiran kiran.sarvabho...@gmail.com
wrote:
Dear All,
I had some miserable experience with gets (batch gets) in hbase. I have
two
tables with different rowkeys, columns are distributed across the two
tables
There were some nice talks at the meetup last Thursday if you folks are
interested.
+ Lars Hofhansl talked 0.94 speedup and obscure but useful HBase features
+ James Taylor gave update on Project Phoenix
+ MaryAnn Xue talked about an interesting case done by Intel in China
saving images to HBase
I quote: Cloudera, the conference host, would like to offer Apache
contributors the opportunity to register for $270, which reflects a $25
discount off the Early Bird price. Just enter promo code *hbasecon-community
* when registering. This discount is good through April 23, 2013.
Folks here
On Tue, Feb 26, 2013 at 10:02 AM, Graeme Wallace
graeme.wall...@farecompare.com wrote:
James,
Are you or anyone involved with Phoenix going to be at the Strata conf in
Santa Clara this week ?
You might want to attend the hbase meetup where James will be talking
Phoenix at the Intel Campus
On Tue, Feb 26, 2013 at 5:41 AM, Paul van Hoven
paul.van.ho...@googlemail.com wrote:
This map reduce job works fine but this is just one scan job for this
map reduce task. What do I have to do to pass multiple scans? Or do
you have any other suggestions on how to achieve that goal? The
On Tue, Feb 26, 2013 at 12:00 AM, Yusup Ashrap aph...@gmail.com wrote:
Hi Kiran , thanks for reply
From what I've read in the online docs, downtime is inevitable for
upgrading from 0.90.2 to 0.94,
Yes.
Going from 0.90.x to 0.92.x, you will need to restart.
You will be able to do a rolling
Serialize it to hdfs? Have your mappers pick it up on startup or have the
map runner pick it up for the mappers?
St.Ack
On Fri, Feb 22, 2013 at 2:53 PM, Gaetan Deputier g...@ividence.com wrote:
Hi Hbase users,
(Using HBase 0.92.1 from Cloudera CDH4.1.1)
I am currently writing a
On Fri, Feb 22, 2013 at 7:39 AM, Eric Czech eczec...@gmail.com wrote:
Hi everyone,
Are blocks for in-memory column families automatically loaded in to the
block cache on restart?
No
If not, would anyone recommend running a scan with
.setCacheBlocks(true) after a restart for in-memory
Sweet.
St.Ack
On Mon, Feb 25, 2013 at 9:35 AM, James Taylor jtay...@salesforce.com wrote:
We are pleased to announce the immediate availability of Phoenix v 1.1,
with support for HBase v 0.94.4 and above. Phoenix is a SQL layer on top of
HBase. For details, see our announcement here:
In case you haven't yet heard, hbasecon2013 was announced today [1]. The
call for speakers is open so please consider giving a talk on anything that
falls within the pull of the hbase planet. We (your lovely hbasecon2013
program committee listed along the bottom of [1]), would love to have you.
On Mon, Feb 18, 2013 at 7:30 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Have there been any discussions,attempts, or thoughts about finding a way
to avoid compactions?
Any ideas on how it would work Otis?
Anyone know what m7 does?
St.Ack
What Marcos says.
Also read this:
http://hbase.apache.org/book.html#hbase.client.scanner.caching and what is
happening on your server when you get a timeout? Is it working hard?
Swapping? Processing other stuff? Is there a process running beside it
contending for resources (e.g. mapreduce?).
Good on you DD. One of us!
St.Ack
On Wed, Feb 6, 2013 at 9:19 PM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
We've brought in one new Apache HBase Committer: Devaraj Das.
On behalf of the Apache HBase PMC, I am excited to welcome Devaraj as
committer.
He has played a key role in unifying RPC
4, 2013 at 8:25 PM, Stack st...@duboce.net wrote:
How about the first or second week in April?
Our hosts are amenable to doing it then (Unless I hear objection, will go
ahead and schedule first week of April).
St.Ack
On Fri, Feb 1, 2013 at 1:19 PM, Andrew Purtell apurt...@apache.org wrote:
at 1:14 PM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
Seems a bit close to the other meetup on 02/28 in the South Bay but
maybe because it's in SF it's ok.
No personal preference since I'll be on the other side of the pond.
J-D
On Fri, Feb 1, 2013 at 11:06 AM, Stack st
Any preference for date?
You all good w/ it?
AdRoll have kindly offered to host.
If you want to talk anything hbasey, write me off list.
Thanks,
St.Ack
Congrats lads!
St.Ack
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor jtay...@salesforce.com wrote:
We are pleased to announce the immediate availability of a new open source
project, Phoenix, a SQL layer over HBase that powers the HBase use cases at
Salesforce.com. We put the SQL back in the
The below seems like a good suggestion by Vandana.
I will say that focus is on support for hadoop 1 and 2. There has not
been much call for us to support 0.23.x If you can figure what needs
fixing, we could try adding the fix to 0.94 (In trunk a patch to add a
compatibility module for
On Mon, Jan 28, 2013 at 12:14 PM, Jim Abramson j...@magnetic.com wrote:
Hi,
We are testing HBase for some read-heavy batch operations, and
encountering frequent, silent RegionServer crashes.
'Silent' is interesting. Which files did you check? .log and the .out?
Nothing in the latter?
during runtime
:-)
Thanks for digging in Viral.
Stack, I noticed that in all profiles except 0.23 there are hadoop-core or
hadoop-common includes, while in 0.23 there is only hadoop-client as a
dependency and there is no mention of hadoop-common or hadoop-auth
anywhere; do they get pulled
On Tue, Jan 22, 2013 at 11:17 PM, Ron Sher ron.s...@gmail.com wrote:
scannerGet
Did you get your code from study of what is in src/examples/thrift ?
St.Ack
Good on you lads!
St.Ack
On Mon, Jan 21, 2013 at 12:56 PM, Jonathan Hsieh j...@cloudera.com wrote:
On behalf of the Apache HBase PMC, I am excited to welcome Jimmy Xiang
and Nicholas Liochon as members of the Apache HBase PMC.
* Jimmy (jxiang) has been one of the drivers on the RPC
On Mon, Jan 21, 2013 at 12:01 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Found lingering reference file
The comment on the method that is finding the lingering reference files is
pretty good:
http://hbase.apache.org/xref/org/apache/hadoop/hbase/util/HBaseFsck.html#604
It looks
for me how this happened to my cluster...
-repair helped to fix that, so I'm now fine. I will re-run the job I
ran and see if this is happening again.
Thanks,
JM
2013/1/21, Stack st...@duboce.net:
On Mon, Jan 21, 2013 at 12:01 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote
On Mon, Jan 21, 2013 at 2:45 PM, Jimmy Xiang jxi...@cloudera.com wrote:
RECOVERED_EDITS is not a column family. It should be ignored by hbck.
Filed a jira:
https://issues.apache.org/jira/browse/HBASE-7640
Thanks Jimmy. That makes sense now
@ or A ;)
2013/1/21, Stack st...@duboce.net:
On Mon, Jan 21, 2013 at 2:45 PM, Jimmy Xiang jxi...@cloudera.com
wrote:
RECOVERED_EDITS is not a column family. It should be ignored by hbck.
Filed a jira:
https://issues.apache.org/jira/browse/HBASE-7640
On Tue, Jan 15, 2013 at 1:07 AM, Ibrahim Yakti iya...@souq.com wrote:
The root directory is set to /var/lib/hbase/
The count issue still exists:
Sqoop:
13/01/15 08:55:23 INFO mapreduce.ImportJobBase: Retrieved 1754285 records.
MySQL:
+--+
| count(1) |
+--+
|
On Mon, Jan 14, 2013 at 6:12 AM, Ibrahim Yakti iya...@souq.com wrote:
Hello,
I have a weird issue, I am using sqoop to import data from MySQL into
HBase, sqoop confirms that 2.5 million records were imported, when I do
count table_name in HBase shell it returns numbers like:
260970 row(s)
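One frequent cause of this kind of mismatch (an assumption here; the thread is truncated, so this may not be Ibrahim's actual problem) is non-unique row keys: puts that share a row key land in the same HBase row, so N imported records with duplicate keys collapse into fewer rows at `count` time. A toy sketch of the effect:

```java
import java.util.HashSet;
import java.util.Set;

// Toy illustration: records that share a row key collapse into one row,
// the same way HBase puts with an identical row key overwrite each other.
public class DuplicateKeyCollapse {
    public static int distinctRows(String[] rowKeys) {
        Set<String> rows = new HashSet<>();
        for (String key : rowKeys) {
            rows.add(key); // duplicates do not create new rows
        }
        return rows.size();
    }
}
```

If the source table's primary key is not what sqoop used as the HBase row key, millions of records can shrink to a few hundred thousand rows exactly as reported.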
On Thu, Jan 10, 2013 at 6:27 PM, Lukáš Drbal lukas.dr...@gmail.com wrote:
Today I took a look at this code and when I implemented the thrift method
GetRow, I got 10x better performance, that's perfect!
Please put up a patch Lukáš.
This will be a great step for the thrift protocol and HBase too.
Is there no progress being made in your update? Mind providing more data
on version, where it is running, what the update is like, how big the cells
are, and how many regions and regionservers?
St.Ack
On Sat, Jan 5, 2013 at 1:06 AM, hua beatls bea...@gmail.com wrote:
HI,
we have large batch
Good on you lads. Thanks for all the great contribs so far.
St.Ack
On Wed, Jan 2, 2013 at 11:37 AM, Jonathan Hsieh j...@cloudera.com wrote:
Along with bringing in the new year, we've brought in two new Apache
HBase Committers!
On behalf of the Apache HBase PMC, I am excited to welcome
://repository.apache.org)
contains the 0.94.3 folder with the sources, test-jar and pom, but without
the actual jar artifact.
Asaf
On 2 Jan 2013, at 19:57, Stack st...@duboce.net wrote:
Please try now Asaf. 0.94.3 should be available. 0.94.4 is not yet
released.
St.Ack
On Sun, Dec
On Wed, Dec 26, 2012 at 11:11 PM, 周梦想 abloz...@gmail.com wrote:
Thank you, Stack!
Which edit tool do you use to edit the xml file? just using a vim-style
text editor or some WYSIWYG rich text editor?
I use vim and the online manual figuring stuff out. I do this because I
find
On Thu, Dec 27, 2012 at 6:06 PM, 周梦想 abloz...@gmail.com wrote:
“参考指南” is right. But I think the characters would be better as
中文参考指南(单页), meaning Chinese Reference Guide (Single Page).
The html charset should be UTF-8. links to :
http://abloz.com/hbase/book.html
I put it up already:
On Tue, Dec 25, 2012 at 1:15 AM, 周梦想 abloz...@gmail.com wrote:
Hello everyone,
I keep a translation of Chinese version of the HBase official reference
guide document: http://hbase.apache.org/book.html
The translation page is http://abloz.com/hbase/book.html
Thank you for doing this.
You are figuring it out.
Usually we just run 'mvn site' and the reference guide is generated as part
of the site build (the xml transform of hbase-default.xml into html that
can be included as part of the document is run as part of the site maven
goal -- it is not run if you run just the docbkx
On Mon, Dec 24, 2012 at 9:27 AM, Varun Sharma va...@pinterest.com wrote:
Hi,
I am wondering where people usually place hbase + hadoop logs. I have 4
disks and 1 very tiny disk with barely 500 megs (that's the typical setup on
amazon ec2). The 4 disks shall be used for hbase data. Since 500M
On Wed, Dec 26, 2012 at 3:33 PM, Kun Ling lkun.lin...@qq.com wrote:
Dear Stack,
Is it possible to add multiple language support in the HBase svn
trunk? Or is it allowed for someone to submit a patch for multiple language
support, so that all the translations can be directly available
That project has not had a commit in over two years. HBase has had a few
since then so the project has probably gone stale.
St.Ack
On Thu, Dec 20, 2012 at 6:04 AM, Shengjie Min kelvin@gmail.com wrote:
Hi,
I know Coprocessor 2ndary indices is still being developed. Has anybody
tried
on the
commmand line.
durruti:hbase stack$ ./bin/hbase shell --help
Usage: shell [OPTIONS] [SCRIPTFILE [ARGUMENTS]]
--format=OPTION        Formatter for outputting results.
Valid options are: console, html.
(Default: console)
-d
On Wed, Dec 19, 2012 at 8:05 PM, Matan Amir matan.a...@voxer.com wrote:
Hi,
I'm wondering if anyone has a spec for the wire protocol of HBase (v0.94.x
compatible)? I am interested in creating a new native client; this would be
more efficient than reverse-engineering.
Much appreciated!
On Wed, Dec 19, 2012 at 8:53 PM, Matan Amir matan.a...@voxer.com wrote:
Thanks St.Ack,
Funny enough, since we use asynchbase internally, i am planning on using
that model (async, queue per region, time-based push rather than size, etc)
and I've already started reading through asynchbase
On Sun, Dec 16, 2012 at 1:16 AM, Robert Dyer psyb...@gmail.com wrote:
I recently enabled reverse DNS on my test cluster. Now when I run a MR
job, the HBase input split locations are all adding a period to the end.
For example:
/default-rack/foo-1.
/default-rack/foo-2.
Yet the machine
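A common workaround for the trailing-dot mismatch (an assumption; this fix is not from the thread) is to normalize the reverse-DNS result by stripping the trailing root dot before comparing hostnames for locality:

```java
// Reverse DNS can return a fully-qualified name with a trailing root dot
// ("foo-1." instead of "foo-1"); stripping it keeps split-location
// comparisons against the task tracker hostname working.
public class HostNames {
    public static String stripTrailingDot(String host) {
        return host.endsWith(".")
                ? host.substring(0, host.length() - 1)
                : host;
    }
}
```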
In refguide we repeat content of hbase-default.xml:
http://hbase.apache.org/book.html#hbase.tmp.dir
What Nick said, plus it's used to keep all data when running standalone hbase.
We should amend doc. to say that you cannot do comma-delimited list?
St.Ack
On Mon, Dec 17, 2012 at 3:19 PM, anil gupta
On Fri, Dec 7, 2012 at 1:01 PM, Bryan Beaudreault
bbeaudrea...@hubspot.com wrote:
We have a couple tables that had thousands of regions due to the size of
the data in them. We recently changed them to have larger regions (nearly
4GB). We are trying to bulk load these in now, but every time we
On Tue, Dec 4, 2012 at 8:57 PM, Liu, Raymond raymond@intel.com wrote:
Hi
I am just curious about the region assignment strategy upon table
re-enabling.
By default, it's randomly assigned to the region server. I checked
jira and found HBASE-6143 suggests making it smarter
Any chance of an update to
http://hbase.apache.org/book.html#snappy.compression ? If someone writes
it up, I'll stitch it in. Thanks,
St.Ack
On Mon, Dec 3, 2012 at 6:29 AM, a...@hsk.hk a...@hsk.hk wrote:
Hi,
Something more about my workaround last time:
I used the following steps to test
locally, so I'm not 100%
sure about the XML structure. I did cut/paste so it should be good...
JM
2012/12/3, Jean-Marc Spaggiari jean-m...@spaggiari.org:
Sure I will.
JM
2012/12/3, Stack st...@duboce.net:
Any chance of an update to
http://hbase.apache.org/book.html
On Fri, Nov 30, 2012 at 8:56 AM, Bryan Baugher bjb...@gmail.com wrote:
Unfortunately it does not seem like HTable or HTablePool have any logic to
tell the HConnectionManager the connection is stale and I don't believe you
can rely on all of the clients giving back the connection at the same
On Wed, Nov 28, 2012 at 6:40 AM, matan ma...@cloudaloe.org wrote:
Why does the CF have to be in the HFile, isn't the entire HFile dedicated
to
just one CF to start with (I'm speaking at the HBase architecture level,
trying to figure out why it works the way it does).
The CF is repeated in
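The reply is cut off, but the point it is heading toward can be sketched: each cell in an HFile is stored as a full KeyValue whose key embeds row, family, qualifier and timestamp, so the family bytes repeat for every cell rather than being stored once per file. A simplified, hypothetical encoding (not HBase's exact on-disk layout):

```java
import java.nio.charset.StandardCharsets;

// Simplified sketch of a KeyValue-style cell key: row + family + qualifier.
// The real format also encodes lengths, timestamp and type; the point here
// is only that the column family bytes appear in every cell's key.
public class ToyKeyValue {
    public static String cellKey(String row, String family, String qualifier) {
        return row + "/" + family + ":" + qualifier;
    }

    // Family is stored once per cell, so its cost scales with cell count --
    // one reason short column family names are recommended.
    public static int familyBytesFor(String family, int cellCount) {
        return family.getBytes(StandardCharsets.UTF_8).length * cellCount;
    }
}
```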
Please don't send same question to three different mailing lists.
See below for answers.
On Tue, Nov 27, 2012 at 6:59 PM, 张莉苹 zlpmiche...@gmail.com wrote:
Do you know what's the release time of apache hbase 0.96.0 and hbase
0.94.3?
0.94.3 should be out in a week or two.
0.96.0 start of
What happens if you put up a shell on your hbase instance and do the
same thing? Does it succeed?
St.Ack
On Sun, Nov 25, 2012 at 11:45 PM, shyam kumar lakshyam.sh...@gmail.com wrote:
HI
I am unable to create a table in HBase dynamically.
I am using the following code:
if
On Mon, Nov 26, 2012 at 7:28 AM, Nicolas Liochon nkey...@gmail.com wrote:
Yes, it's not useful to set the master address in the client. I suppose it
was different a long time ago, hence there are some traces in different
documentation.
The master references itself in ZooKeeper. So if the
On Sun, Nov 25, 2012 at 8:28 AM, matan ma...@cloudaloe.org wrote:
Nothing. Maybe just link to it from
http://hbase.apache.org/book/quickstart.html such that people for whom the
quick start doesn't work, will have a direct route to this and other
prerequisites.
I just added note on loopback
On Mon, Nov 26, 2012 at 2:16 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I have a need to move hbas-site.xml to an external location. So in order to
do that I changed my configuration as shown below. But this doesn't seem to
be working. It picks up the file but I get error, seems like it's
On Sat, Nov 24, 2012 at 10:31 AM, a...@hsk.hk a...@hsk.hk wrote:
I am also using Ubuntu 12.04, Zookeeper 3.4.4 HBase 0.94.2 and Hadoop 1.0.4.
(64-bit nodes), I finally managed to have the HBase cluster up and running,
below is the line in my /etc/hosts for your reference:
#127.0.0.1
On Sat, Nov 24, 2012 at 4:40 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Maybe this should be added to the documentation as a hint?
We have a section on loopback here:
http://hbase.apache.org/book.html#basic.prerequisites
What should we add to it?
St.Ack
On Sat, Nov 24, 2012 at 9:45 AM, matan ma...@cloudaloe.org wrote:
It's included here http://hbase.apache.org/book.html actually, maybe link
the 'getting started' page to that page, unless it's becoming unfavorably
cyclic.
I did as you suggested by listing java, loopback and ssh as prereqs
for
On Wed, Nov 21, 2012 at 12:16 AM, Yusup Ashrap aph...@gmail.com wrote:
Does your hadoop support append/sync?
The current version of hadoop supports append/sync, but we have not turned
it on explicitly in hdfs-site.xml.
Then you will lose data.
So, what is the worry? You are using CMS? It carries
on
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@131f139b
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987
, Ted Yu wrote:
Vincent:
What's the value for hbase.regionserver.handler.count ?
I assume you keep the same value as that from 0.90.3
Thanks
On Fri, Nov 16, 2012 at 8:14 AM, Vincent
Barat vincent.ba...@gmail.com wrote:
On 16/11/12 01:56, Stack wrote:
On Thu, Nov 15, 2012 at 5:21
On Wed, Oct 10, 2012 at 9:25 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
What's the best way to see if all handlers are occupied? I am probably
running into similar issue but would like to check.
In 0.94, in the UI, you can see what all handlers are doing. Click
on 'Show All RPC Handler
On Fri, Oct 12, 2012 at 3:56 AM, Ricardo Vilaça rmvil...@di.uminho.pt wrote:
Yes, I had already tried increasing it to 200 but saw no improvement
in application latency. However, the output of the active IPC
handlers, using the Web interface,
is strange. For region servers I can see in a
On Tue, Nov 20, 2012 at 7:45 PM, Yusup Ashrap aph...@gmail.com wrote:
hi all, I am encountering a high memory usage problem in my production
environment. I suspect this is caused by a memory leak or something,
and I hope someone could tell me what is going on , or what should I do to
to keep a
On Tue, Nov 20, 2012 at 10:05 PM, Yusup Ashrap aph...@gmail.com wrote:
Can you upgrade your hbase? This is an old version. What version of
hadoop are you running?
it's in production environment, and we could not take upgrading as an
option for now.
hadoop version is 0.20.2
Does your hadoop
On Sat, Nov 17, 2012 at 10:14 AM, Dalia Sobhy
dalia.mohso...@hotmail.com wrote:
Dear all,
I want to understand when to use HTable, HTablePool and HTableAdmin in the
Java API?
HTable to access a table.
HTablePool if lots of threads in your application accessing a table or tables.
HTableAdmin
Version? Context? What was happening at the time?
How would you like it if I came and dumped random exceptions in your mail box!
St.Ack
On Sat, Nov 17, 2012 at 1:49 AM, yutoo yanio yutoo.ya...@gmail.com wrote:
org.apache.hadoop.hbase.ipc.CallerDisconnectedException: Aborting call
On Fri, Nov 16, 2012 at 11:23 AM, Scott Alcaide salca...@vmware.com wrote:
Any update on this? We encountered this on a production cluster. Did the
zookeeper bug ever get filed or fixed?
If hbase is not managing ZK, and it gets restarted, the client will
ride over the zk restart fine.
If
On Sat, Nov 17, 2012 at 11:11 AM, Stack st...@duboce.net wrote:
On Fri, Nov 16, 2012 at 11:23 AM, Scott Alcaide salca...@vmware.com wrote:
Any update on this? We encountered this on a production cluster. Did the
zookeeper bug ever get filed or fixed?
If hbase is not managing ZK, and it gets
On Sat, Nov 17, 2012 at 12:38 PM, Dalia Sobhy
dalia.mohso...@hotmail.com wrote:
Thanks Stack :D
Another question: What's the difference between Scan and Get?
What do you think?
St.Ack
On Thu, Nov 15, 2012 at 5:21 AM, Guillaume Perrot gper...@ubikod.com wrote:
It happens when several tables are being compacted and/or when there are
several scanners running.
It happens for a particular region? Anything you can tell about the
server looking in your cluster monitoring? Is it
On Thu, Nov 15, 2012 at 10:12 PM, msmdhussain msmdhuss...@gmail.com wrote:
I tried using the following command in the hbase shell:
put
'Test','\x03\x00\x00\x00\x0A\x00\x00\x01:K\xF0\xC0@IN\x00\x00AS***','C:input','25'
but the row key length is 292 bytes; it's a composite key combination of
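Composite binary keys like the one above are usually easier to build programmatically than to type as `\xNN` shell escapes. A minimal sketch (the field layout here is made up for illustration, not the poster's actual 292-byte key):

```java
import java.nio.ByteBuffer;

// Sketch: building a composite binary row key from typed parts instead of
// hand-writing \xNN escapes in the shell. The field layout is hypothetical.
public class CompositeKey {
    public static byte[] build(byte prefix, int id, long timestamp) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + 8);
        buf.put(prefix);        // 1-byte type marker
        buf.putInt(id);         // 4-byte big-endian id
        buf.putLong(timestamp); // 8-byte big-endian timestamp
        return buf.array();
    }
}
```

Big-endian encoding of the numeric fields keeps lexicographic byte order consistent with numeric order, which is what scans over such keys rely on.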
On Thu, Nov 15, 2012 at 5:44 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
So. I have restarted everything and I found the issue...
I ran the code above from Eclipse where I have HBase trunk. Cluster is
running with HBase 0.94.2.
Conclusion: HBase Trunk is not compatible with
On Sat, Nov 10, 2012 at 7:03 AM, yun peng pengyunm...@gmail.com wrote:
Hi, I want to profile the # of disk accesses (both random and sequential)
issued from HBase (into HDFS). For disk reads, I have tried using
blockCacheMissCount, which seems to work. But is it the correct way for
reads (I can't
On Tue, Nov 6, 2012 at 8:01 AM, Jasper Rädisch jas...@trakken.de wrote:
Hello everyone,
Trying to truncate tables like this (from HBase shell) won't work:
list.each{|name| truncate(name) if name.end_with?('abc123')}
Why does it not work? What happens? Maybe wait on truncation before
On Thu, Nov 1, 2012 at 10:09 AM, Leonid Fedotov lfedo...@hortonworks.com wrote:
Varun,
for HA NameNode you may want to look at the Hortonworks HDP 1.1 release. It
is supported on vSphere and on a RedHat HA cluster.
HDP 1.1 is based on Hadoop 1.0.3 and is fully certified for production
environments.
Do not
On Mon, Oct 29, 2012 at 3:55 PM, Jeff Whiting je...@qualtrics.com wrote:
However what we are seeing is that our memory usage goes up slowly until the
region server starts sputtering due to gc collection issues and it will
eventually get timed out by zookeeper and be killed.
Hey Jeff. You
On Thu, Oct 25, 2012 at 1:24 AM, Oliver Meyn (GBIF) om...@gbif.org wrote:
Hi all,
I'm on cdh3u3 (hbase 0.90.4) and I need to provide a bunch of row keys based
on a column value (e.g. give me all keys where column dataset = 1234).
That's straightforward using a scan and filter. The trick
On Fri, Oct 26, 2012 at 2:44 PM, yun peng pengyunm...@gmail.com wrote:
Hi, All,
I have an HBase cluster running with the property hbase.zookeeper.quorum set
to hostname node3. When HBase starts, Zookeeper cannot start properly and
throws:
java.io.IOException: Could not find my address:
On Sat, Oct 20, 2012 at 1:36 PM, yun peng pengyunm...@gmail.com wrote:
I am trying to understand how and when hbase sets the modification
timestamp for hfiles. My original intention is to get a timestamp when
an hfile is generated (the last write to an hfile in a compaction).
On Thu, Oct 18, 2012 at 7:13 PM, Norbert Burger
norbert.bur...@gmail.com wrote:
We had the same question earlier. Unfortunately the documentation is
wrong on this account; scannerOpen resolves to either a call to
scan.addFamily or scan.addColumn, and neither directly supports regex
matching.
On Fri, Oct 19, 2012 at 3:56 PM, Jonathan Bishop jbishop@gmail.com wrote:
Hi,
Taking a look at the thrift interface to hbase and I am having a hard time
finding any way to set the filterString for scans. Anyone know how to do
this?
What have you tried?
St.Ack
On Wed, Oct 17, 2012 at 2:42 AM, hua xiang adam_...@yahoo.com wrote:
Hi,
when opening the hbase shell as the hdfs user, there is an error,
but the root user can.
below is the error:
[root@hadoop2 ~]# su - hdfs
[hdfs@hadoop2 ~]$ id
uid=494(hdfs) gid=502(hadoop) groups=502(hadoop)
[hdfs@hadoop2 ~]$
On Thu, Oct 18, 2012 at 7:35 PM, Maoke fib...@gmail.com wrote:
hi Stack and all,
i noticed that the regionserver.Hlog is obsoleted by regionserver.wal.Hlog,
from version 0.20.6 to 0.90+. what is the major difference between the two,
in principle? what we should pay attention to when using
On Tue, Oct 16, 2012 at 7:00 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi St.Ack,
Is the full upgrade process documented anywhere? I looked at the
book but only found the upgrade from 0.90 to 0.92. Can you point me to
something? If there is no documentation yet, can someone draft
On Tue, Oct 16, 2012 at 2:52 AM, Amit Sela am...@infolinks.com wrote:
I can't find too much about append support on the web. My full version is:
hadoop 0.20.3-r1057313
I did however check hbase-site.xml and hdfs-site.xml for
dfs.support.append property
and couldn't find it.
Can you
On Tue, Oct 16, 2012 at 9:06 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Hi Yun Peng
You want to know the creation time? I could see the getModificationTime()
api. Internally it is used to get a store file with minimum timestamp.
I have not tried it out. Let me know if
On Tue, Oct 16, 2012 at 12:29 PM, Wei Tan w...@us.ibm.com wrote:
Hi,
I am monitoring the readRequestsCount shown in the Requests column in
the web GUI of a server/region. I observe that, while a put corresponds to
ONE write request, a get corresponds to 2 readRequestsCount. Is that true
and
On Tue, Oct 16, 2012 at 8:18 AM, Amit Sela am...@infolinks.com wrote:
Has anyone tried extending PutSortReducer in order to add some traditional
reduce logic (i.e., aggregating counters)?
I want to process data with hadoop mapreduce job (aggregate counters per
keys - traditional hadoop mr)