On Mon, Dec 9, 2013 at 3:17 PM, Michael Segel michael_se...@hotmail.com wrote:
I believe there's a bit more to it...
Such as?
Which is why I am asking.
As to #3... What happens to a column when you put a tombstone marker on it?
We have this in the doc. If it does not answer your
On Fri, Dec 6, 2013 at 3:06 PM, Koert Kuipers ko...@tresata.com wrote:
I noticed that puts are put into a buffer (writeAsyncBuffer) that gets
flushed when it reaches a certain size.
writeAsyncBuffer can take objects of type Row, which includes besides the
Put also Deletes, Appends, and
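The buffering described in this thread can be sketched as a toy model. This is an illustration only, not the actual HTable internals (class and method names here are made up), and the real client buffer is flushed by size in bytes, not entry count:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a client-side write buffer: mutations accumulate until the
// buffer reaches a threshold, then the whole batch is "flushed" at once.
// In HBase any Row (Put, Delete, Append) can go through such a buffer.
class WriteBuffer {
    private final int flushThreshold;
    private final List<String> buffer = new ArrayList<>();
    private final List<List<String>> flushed = new ArrayList<>();

    WriteBuffer(int flushThreshold) {
        this.flushThreshold = flushThreshold;
    }

    // Accept any mutation; flush automatically when the threshold is hit.
    void add(String mutation) {
        buffer.add(mutation);
        if (buffer.size() >= flushThreshold) {
            flush();
        }
    }

    // Send the whole buffered batch and clear the buffer.
    void flush() {
        if (!buffer.isEmpty()) {
            flushed.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    int pending() { return buffer.size(); }
    int flushCount() { return flushed.size(); }
}
```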
As a bonus, a few new things were added:
- Add support for deleting one specific KV at a specific timestamp
(instead of all previous KVs) – thanks Xun Liu.
- Maven compilation is fixed – thanks Stack.
- Support for prefetching META entries for a given table and a given
key range
On Thu, Nov 14, 2013 at 9:23 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hum. I let it run overnight and got that:
13/11/13 22:24:17 INFO zookeeper.ClientCnxn: Session establishment complete
on server hbasetest1/192.168.23.51:2181, sessionid = 0x1423ef50f7d0241,
negotiated
More of the log and the version of HBase involved please. Thanks.
St.Ack
On Wed, Nov 13, 2013 at 1:07 AM, jingych jing...@neusoft.com wrote:
Thanks, esteban!
I've tried, but it did not work.
I first load the custom hbase-site.xml, and then try to check the hbase
server.
So my code is
On Mon, Nov 11, 2013 at 12:11 PM, Varun Sharma va...@pinterest.com wrote:
Hi,
Can the HBase RPC timeout be changed across different HBase RPC calls for
HBase 0.94? From the code, it looks like this is not possible. I am
wondering if there is a way to fix this?
I'm guessing you have already
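For reference, the timeout in question is the client-side hbase.rpc.timeout setting, which in 0.94 is read from the client's Configuration rather than per call; it can be raised globally in the client's hbase-site.xml (value in milliseconds; 60000 shown here as a typical default):

```xml
<property>
  <name>hbase.rpc.timeout</name>
  <value>60000</value>
</property>
```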
Usually in hbase-site.xml. When the lads say 'on the client', they usually
mean the hbase-site.xml the client reads when it starts up (the
hbase-site.xml that is in the conf directory that it is pointing to on
startup).
St.Ack
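A minimal client-side hbase-site.xml usually just points the client at the ZooKeeper ensemble; the hostnames below are placeholders:

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```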
On Tue, Nov 5, 2013 at 6:43 AM, michael.grund...@high5games.com
On Tue, Nov 5, 2013 at 6:04 AM, WangRamon ramon_w...@hotmail.com wrote:
Hi JM
Thanks for your help, yes, as you said the syntax is totally correct as it
can be executed in HBase Shell without any errors.
By checking the source code of class HRegion, which initialises the region
policy from
On hadoop2?
On Fri, Oct 25, 2013 at 5:38 AM, Salih Kardan karda...@gmail.com wrote:
Hi all
I am getting the error below while starting hbase (hbase 0.94.11)
*java.lang.RuntimeException: Failed suppression of fs shutdown hook:
Thread[Thread-8,5,main] at
Did you replace the hadoop jars that are under hbase/lib with those of the
cluster you are connecting to? 0.96.0 bundles 2.1.0-beta and it sounds like
you want to connect to hadoop-2.2.0. The two hadoops need to be the same
version (the above is a new variant on the mismatch message, each more
hbase-0.96.0 is now available for download [0].
Apache HBase is a scalable, distributed data store that runs atop Apache
Hadoop.
The hbase-0.96.0 release has been more than a year in the making and
supplants our long-running 0.94.x series of releases. We encourage users to
upgrade.
More than 2k
On Wed, Oct 16, 2013 at 6:26 PM, Kireet kir...@feedly.com wrote:
is there a downside to going to larger regions?
Generally we see pluses (See 2.5.6.2 Bigger Regions in
http://hbase.apache.org/book/important_configurations.html for the latest
scripture on the topic). Downsides would be
On Thu, Oct 10, 2013 at 10:59 AM, Tianying Chang tich...@ebaysf.com wrote:
Not really. Because we are deploying hadoop2+hbase95 in our production
cluster. Need to confirm 95 is working fine.
Every announcement related to the 0.95.x series came with this disclaimer:
Be aware that development
On Wed, Sep 25, 2013 at 11:18 AM, Jay Vyas jayunit...@gmail.com wrote:
is there a way to probe what's going on in HMaster exceptions at a lower
level? For example, a snippet of hbase client code which might tease out
what's wrong with my hmaster at a lower level?
Right now, because the
error? Or an error on
the HMaster? And why is it that, with debug mode, the logs generally are
referring to zookeeper security, whereas with the normal mode, the
exception stack trace points to HMaster initialization failure?
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found
On Thu, Sep 19, 2013 at 10:09 AM, Tianying Chang tich...@ebaysf.com wrote:
Hi,
I have a customer who uses openTSDB. Recently we found that less than
10% of the data is written; the rest is lost. By checking the RS log, there
are many row lock related issues, like below. It seems a large amount
Hurray for Rajesh!
On Wed, Sep 11, 2013 at 9:17 AM, ramkrishna vasudevan
ramkrishna.s.vasude...@gmail.com wrote:
Hi All,
Please join me in welcoming Rajeshbabu (Rajesh) as our new HBase committer.
Rajesh has been there for more than a year and has been solving some very
good bugs around
Can you thread dump the busy server and pastebin it?
Thanks,
St.Ack
On Wed, Sep 11, 2013 at 1:49 PM, OpenSource Dev dev.opensou...@gmail.com wrote:
Hi,
I'm using HBase 0.94.6 (CDH 4.3) for Opentsdb. So far I have had no
issues with writes/puts. The system handles up to 800k puts per second
On Tue, Sep 10, 2013 at 3:54 PM, Enis Söztutar e...@apache.org wrote:
Hi,
Please join me in welcoming Nick as our new addition to the list of
committers. Nick is exceptionally good with user-facing issues, and has
done major contributions in mapreduce related areas, hive support, as well
as
On Mon, Sep 9, 2013 at 12:14 AM, Amit Sela am...@infolinks.com wrote:
...
The main issue still remains: it looks like the Compression.Algorithm
configuration's class loader had reference to the bundle in revision 0
(before jar update) instead of revision 1 (after jar update). This could be
Anil:
You having GC'ing issues Anil? You hitting Full GCs pretty regularly?
(Setting up a jenkins build as Lars suggests or having us add a note to the
website somewhere saying 0.94 runs on jdk7 -- presuming it passes a couple
of jenkins builds -- would be no problem... just say what you need).
On Wed, Aug 28, 2013 at 12:46 PM, Vandana Ayyalasomayajula
avand...@yahoo-inc.com wrote:
Hi All,
I have been looking at the parseColumn method in the KeyValue class of
HBase. The javadoc of that method
does not recommend that method be used. I wanted to know if there is any
other existing
On Fri, Aug 23, 2013 at 4:47 AM, Vamshi Krishna vamshi2...@gmail.com wrote:
Hi all,
I set up 2 node hbase cluster
Two nodes is too small a cluster to use for making deductions about how
hbase behaves.
St.Ack
On Fri, Aug 23, 2013 at 10:58 AM, Kevin Wang kevin.w...@cloudera.com wrote:
Hey HBasers!
We on team Hue are announcing a brand new web UI for HBase! It's called
Hue HBase Browser (you can check it out on github:
https://github.com/cloudera/hue).
Here's a link to the introductory blog post
On Thu, Aug 22, 2013 at 8:00 PM, Xiong LIU liuxiongh...@gmail.com wrote:
We are considering upgrading our hbase cluster from version 0.94.6 to
0.96.0 once 0.96.0 is out.
I want to know whether any possible failure may happen during the upgrade
process, and if it does happen, is it possible
On Thu, Aug 15, 2013 at 7:50 PM, S. Zhou myx...@yahoo.com wrote:
Hi there,
I am writing Java program to access HBase and the version of Hbase I
use is 0.95.1-hadoop1. The problem is: I failed to download the jar file
for this version from Maven central repository even though it is
On Wed, Aug 14, 2013 at 8:18 AM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
Thanks Stack. We are going to test this on a test table in QA, but I'd
still like a fallback plan if something goes wrong when we eventually do it
in prod.
One idea I had was to snapshot the table, clone
On Tue, Aug 13, 2013 at 5:17 PM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
I'm running cdh4.2 hbase 0.94.2, and am looking to merge some regions in a
table. Looking at Merge.java, it seems to require that the entire cluster
be offline. However, I also notice an HMerge.java which
On Fri, Aug 9, 2013 at 12:43 PM, Ralf R. Kotowski r...@enlle.com wrote:
HI,
I'm new to this, I'm trying to set-up, under Fedora Core 19, Hbase 0.90.4
Standalone to use with Nutch 2.2.1 as per the Nutch 2.x tutorial and Hbase
quickstart.
0.90.4 hbase is very old, years old.
How do you
I would suggest you search the mail archives before posting (you will
usually get your answer faster if you go this route).
The below has been answered in the recent past. See
http://search-hadoop.com/m/5tk8QnhFqw
Thanks,
St.Ack
On Tue, Aug 6, 2013 at 12:39 AM, ch huang
On Tue, Aug 6, 2013 at 7:42 AM, Bruno Dumon br...@ngdata.com wrote:
Hi,
Interesting, we did a similar thing for indexing HBase content into Solr,
you can find it here:
https://github.com/NGDATA/hbase-indexer
The part which picks up on the HBase replication stream is available as
On Tue, Aug 6, 2013 at 7:48 AM, Dhaval Shah prince_mithi...@yahoo.co.in wrote:
I have a weird (and a pretty serious) issue on my HBase cluster. Whenever
one of my zookeeper servers goes down, already running services work fine
for a few hours, but when I try to restart any service (be it region
On Tue, Aug 6, 2013 at 10:41 AM, Dhaval Shah prince_mithi...@yahoo.co.in wrote:
Thanks Stack. Do you have any specific pointers as to what configs would
help mitigate this issue with a DHCP setup (I am not a networking expert,
other teams manage the network and if I have specific pointers
On Mon, Aug 5, 2013 at 3:05 PM, Alex Newman posi...@gmail.com wrote:
Based on the previous work using async libraries to index HBase into
elastic search, I've created:
https://github.com/posix4e/Elasticsearch-HBase-River
This river uses the replication feature in HBase to replicate into
Nice. I added a pointer into the refguide.
St.Ack
On Fri, Aug 2, 2013 at 7:53 AM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Hi All,
I didn't find a site or blog with all shell commands at one place, so I
have collected all of them here.
This may be helpful for
Try turning off
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#setRegionCachePrefetch(byte[],
boolean)
St.Ack
On Tue, Jul 30, 2013 at 11:27 AM, Varun Sharma va...@pinterest.com wrote:
JD, its a big problem. The region server holding .META has 2X the network
It would be difficult for the exception to be any more explicit. Axis must
be messing w/ your classpath/class loading. Do any hbase classes load? Is
this the first? Perhaps it is the classes WBAC imports?
St.Ack
On Wed, Jul 24, 2013 at 1:14 PM, naveen.ara naveen@gmail.com wrote:
It is
You see anything in zk logs at the time of your connectionloss?
192.168.56.101:2181 is where a zk ensemble member resides (and is up and
running)?
St.Ack
On Fri, Jul 19, 2013 at 11:33 AM, rjoshi rink...@yahoo.com wrote:
Hello,
I have a java native api to access HBase and it works fine as
Thanks Otis. Helps. May we see a weeks-worth if (or when) it is available?
St.Ack
On Wed, Jul 10, 2013 at 12:00 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi,
There was talk of G1 and HBase here over the past few days. I wanted
to share a fresh and telling SPM graph showing
File a bug Stan please. Paste your log snippet and surrounding what is
going on at the time. It looks broke that a bulk load would be kept out of
a lock for ten minutes or more.
Hope all is well,
St.Ack
On Mon, Jul 8, 2013 at 9:53 AM, Stanislav Barton stanislav.bar...@gmail.com
wrote:
On Fri, Jul 5, 2013 at 12:40 PM, Hanish Bansal
hanish.bansal.agar...@gmail.com wrote:
I am using 0.94.6.1 so I will be able to do a rolling upgrade to 0.94.8.
As you said, client and server calls between HBase minor releases are
supposed to be compatible, so there should not be any issue
Hey Stan:
60 seconds is a long time.
If you try upping the wait, does it still fail? (Looking at how
long to wait is calculated, it is a little hairy figuring out what to change.)
Any chance of a thread dump while it is hung up? Might tell us something?
Good on you Stan,
St.Ack
On Fri,
Try single quotes. The shell (ruby) may be trying to 'help you' by
interpreting your hex.
hbase(main):018:0> print "\x20\n"
hbase(main):019:0> print '\x20\n'
\x20\nhbase(main):020:0>
See how w/ double quotes it prints a space and a new line, whereas when I
single-quote it, it prints out the literal?
At
Can you update your CDH and hbase?
This seems pretty basic dns setup issue:
13/07/03 11:22:36 ERROR hbase.HServerAddress: Could not resolve the DNS
name of CH35
java.lang.IllegalArgumentException: hostname can't be null
Can you fix this first?
St.Ack
On Tue, Jul 2, 2013 at 8:29 PM, ch huang
On Sun, Jun 23, 2013 at 10:33 PM, Stephen Boesch java...@gmail.com wrote:
We want to connect to a non-default / remote hbase server by setting
hbase.zookeeper.quorum=our.remote.hbase.server
on the command line invocation of hbase shell (and not disturbing the
existing hbase-env.sh or
On Wed, Jun 26, 2013 at 6:57 AM, Jason Huang jason.hu...@icare.com wrote:
My question is - is this kind of heartbeat expected and useful? Our
normal use case involves fetching data to HBase table every 60 seconds or
so. Could we stop that heartbeat and re-connect to zookeeper on the fly
only
On Tue, Jun 25, 2013 at 3:10 PM, Varun Sharma va...@pinterest.com wrote:
I was looking at HDFS 347 and the nice long story with impressive
benchmarks and that it should really help with region server performance.
The question I had was whether it would still help if we were already using
the
On Fri, Jun 21, 2013 at 4:41 PM, Joel Alexandre joel.alexan...@gmail.com wrote:
...
In my jar there is a log4j.properties file, but it is being ignored.
Your log4j.properties is in the right location inside the job jar? (
On Thu, Jun 20, 2013 at 3:28 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
Out of curiosity, for my learning, why does LATEST_TIMESTAMP make the table
not see the actual rows?
Because these values will be in the future relative to the host.
When you send a RS an edit w/ LATEST_TIMESTAMP,
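The visibility point above can be illustrated with a toy model (not HBase code): a read with an upper time bound of "now" simply skips any cell stamped later than that, which is what happens to edits carrying timestamps from a host whose clock runs ahead.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration: cells stamped in the future relative to the reading
// host are invisible to a read whose upper time bound is "now".
class TimestampVisibility {
    static List<Long> visible(List<Long> cellTimestamps, long now) {
        List<Long> out = new ArrayList<>();
        for (long ts : cellTimestamps) {
            if (ts <= now) {
                out.add(ts); // only cells at or before "now" are returned
            }
        }
        return out;
    }
}
```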
On Wed, Jun 19, 2013 at 12:02 AM, Joarder KAMAL joard...@gmail.com wrote:
Thanks a lot for the information Azuryy.
Just to know, why are there apidocs for 0.97.0? There is no release of
HBase with this number, right?
Yeah, we publish the 0.94 and trunk API docs. We need to fix so it is
typos...
Mike Segel
On Jun 19, 2013, at 12:41 AM, Stack st...@duboce.net wrote:
On Mon, Jun 17, 2013 at 12:06 PM, Patrick Schless
patrick.schl...@gmail.com
wrote:
Working on setting up HBase replication across a VPN tunnel, and
following
the docs here: [1] (and here: [2]).
Two
(Thank you Michel)
On Wed, Jun 19, 2013 at 8:28 AM, Stack st...@duboce.net wrote:
Pardon me. I should have mentioned that the slave cluster will send the
master cluster a response (success, failure).
St.Ack
On Wed, Jun 19, 2013 at 2:49 AM, Michel Segel
michael_se...@hotmail.com wrote:
RawComparator just does raw bytes? You need a Comparator that understands
the KV format. See KV class. Otherwise, post your code. It seems like you
are skirting checks the hfile hosting Store in hbase does.
St.Ack
On Wed, Jun 19, 2013 at 10:05 AM, Rohit Kelkar rohitkel...@gmail.com wrote:
On Wed, Jun 19, 2013 at 10:00 AM, Tianying Chang tich...@ebaysf.com wrote:
Hi,
I am trying to do some performance testing, and want to test when the WAL
is turned off. I know there is API writeToWAL(false) to do this, but if I
want to just change the setting and use the performanceEvaluation
On Tue, Jun 18, 2013 at 4:17 PM, Varun Sharma va...@pinterest.com wrote:
Hi,
If I wanted to write a unit test against HTable/HBase, is there an
already available utility for unit testing my application logic?
I don't want to write code that either touches production or
On Mon, Jun 17, 2013 at 2:41 PM, Rohit Kelkar rohitkel...@gmail.com wrote:
Is running MR job and the Incremental bulk load simultaneously on the same
hbase table going to affect each other? If yes then can you suggest
strategies to make bulk load and MR jobs mutually exclusive?
Bulk load
On Mon, Jun 17, 2013 at 12:06 PM, Patrick Schless patrick.schl...@gmail.com
wrote:
Working on setting up HBase replication across a VPN tunnel, and following
the docs here: [1] (and here: [2]).
Two questions, regarding firewall allowances required:
1) The docs say that the zookeeper
On Fri, Jun 14, 2013 at 10:36 AM, karunakar lkarunaka...@gmail.com wrote:
Hello Experts,
Can you please provide link for HBASECON 2013 videos?
Thank you.
This page says they should be available in a few weeks:
http://blog.cloudera.com/blog/2013/06/the-hbasecon-2013-afterglow/
St.Ack
On Tue, Jun 11, 2013 at 8:09 AM, Rayappan Antoni raj
rayappan.antoni...@gmail.com wrote:
Hi All,
http://stackoverflow.com/questions/17045314/apache-hbase-installation#
I am trying to install Hbase (hbase-0.94.8) in an ubuntu 12.04 environment.
I followed the steps given in this page
The second release in a series of Development releases, hbase-0.95.1 is
available for download from your favorite Apache mirror. See:
http://www.apache.org/dyn/closer.cgi/hbase/
(It may take a few hours for the release to show up everywhere)
About 270 issues have been closed against this
On Thu, Jun 6, 2013 at 4:57 AM, Ted Yu yuzhih...@gmail.com wrote:
bq. I just dont find this hbase.zookeeper.property.tickTime anywhere in
the code base.
Neither do I. Mind filing a JIRA to correct this in troubleshooting.xml ?
It intentionally does not exist in the hbase code base. Read
On Thu, Jun 6, 2013 at 8:15 AM, Stack st...@duboce.net wrote:
bq. increase tickTime in zoo.cfg?
For shared zookeeper quorum, the above should be done.
What?
What I meant to say is, how does this answer the question "Is this even
relevant anymore? hbase.zookeeper.property.tickTime"
14th. To sign up,
see http://kijicon.eventbrite.com/
Thats all for now...
Your dance-card coordinator,
St.Ack
On Fri, May 17, 2013 at 3:10 PM, Stack st...@duboce.net wrote:
We have some meetups happening over the next few months. Sign up if you
are interested in attending (or if you would like
On Wed, Jun 5, 2013 at 4:21 PM, Ted Yu yuzhih...@gmail.com wrote:
For thrift, there is already such support.
Take a look at (0.94 codebase):
src/main/java/org/apache/hadoop/hbase/regionserver/HRegionThriftServer.java
* HRegionThriftServer - this class starts up a Thrift server in the same
On Tue, Jun 4, 2013 at 9:58 PM, Rob Verkuylen r...@verkuylen.net wrote:
Finally fixed this, my code was at fault.
Protobufs require a builder object which was a (non static) protected
object in an abstract class all parsers extend. The mapper calls a parser
factory depending on the input
On Tue, Jun 4, 2013 at 6:48 PM, Jean-Daniel Cryans jdcry...@apache.orgwrote:
Replication doesn't need to know about compression at the RPC level so
it won't refer to it and as far as I can tell you need to set
compression only on the master cluster and the slave will figure it
out.
Looking
On Sun, Jun 2, 2013 at 8:09 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org
wrote:
So, 2 things again here.
1) Should the region server send more information about the failure to
the master, so that the master can display the failure cause in the logs?
Yes. You shouldn't have to work so hard to
On Fri, May 31, 2013 at 1:13 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
We have developed some custom scripts on top of fabric (
http://docs.fabfile.org/en/1.6/).
I've asked the developer on our team to see if he can share some of it with
the community.
It's mainly used for
Yeah, no streaming API in our current client (nor does our thrift client
give you a streaming API).
St.Ack
On Sat, Jun 1, 2013 at 8:21 AM, Simon Majou si...@majou.org wrote:
No I don't want to scan a table, I want a stream of one result, for
example in the case of big records.
With thrift it is
Yes Anil. Bug us again after the conference.
Thanks,
St.Ack
On Wed, May 29, 2013 at 11:38 PM, anil gupta anilgupt...@gmail.com wrote:
Hi All,
There are 4 tracks in HBaseCon2013. Topics of all 4 tracks are interesting
but i can only attend one track presentation at a time. Will the recorded
On Wed, May 29, 2013 at 10:49 AM, Jay Vyas jayunit...@gmail.com wrote:
Hi !
I've been working on installing HBASE using a shell script over some
nodes.
Usually folks do chef, puppet, etc. installing nodes. Do you not want to
go that route?
I can't help but think that someone else may
On Wed, May 29, 2013 at 8:28 AM, Rob robby.verkuy...@gmail.com wrote:
We're moving from ingesting our data via the Thrift API to inserting our
records via a MapReduce job. For the MR job I've used the exact same job
setup from HBase DefG, page 309. We're running CDH4.0.1, Hbase 0.92.1
We
On Wed, May 29, 2013 at 1:27 PM, Varun Sharma va...@pinterest.com wrote:
Hi,
I am working on some compaction coprocessors for a column family with
versions set to 1. I am using the preCompact hook to wrap a scanner around
the compaction scanner.
I wanted to know what to expect from the
On Tue, May 28, 2013 at 7:09 AM, jingguo yao yaojing...@gmail.com wrote:
Section 2.1.3 says that Hadoop 1.0.4 works with HBase-0.94.x [1]. And
Section 2.1.3.3 says that 1.0.4 has a working durable sync. But when I
check the source code of DFSClient.DFSOutputStream's sync method, I
find the
Good on you Wouter. HappyBase makes me... well...
If you want to add happybase to
http://wiki.apache.org/hadoop/SupportingProjects, make yourself an id on
the hadoop/hbase wiki and send it to me offlist and I'll enable you as an
editor.
Thanks boss,
St.Ack
On Fri, May 24, 2013 at 1:50 PM,
How could we improve the doc Stephen? Was the problem that it was only in
the quick start section? Thanks.
St.Ack
On Wed, May 22, 2013 at 12:46 PM, Stephen Boesch java...@gmail.com wrote:
OK found the issue, it was the old ubuntu localhost anomaly of 127.0.1.1
vs 127.0.0.1 Changing
What did the logs show regards who could not find who? (I would like to
answer Jay Vyas but thought I'd ask here first to see if I could see what
tight-coupling to /etc/hosts we are guilty of).
Thanks,
St.Ack
On Wed, May 22, 2013 at 12:46 PM, Stephen Boesch java...@gmail.com wrote:
OK found the
I can work on the doc. part. If you have a moment, would suggest filing an
issue w/ snippets of the logs where hbase is lost.
Thanks Stephen,
St.Ack
On Wed, May 22, 2013 at 2:45 PM, Stephen Boesch java...@gmail.com wrote:
Hi Stack,
This section of the docs is clear. I would suggest
for yourself.
Start up a local instance.
Then start up a shell, create a table, insert a row then flush:
durruti:hbase-0.94.7 stack$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.7, r1471806, Wed Apr 24 18:48:26 PDT
On Thu, May 16, 2013 at 3:26 PM, Varun Sharma va...@pinterest.com wrote:
Referring to your comment above again
If you are doing a prefix scan w/ row1c, we should be starting the scan at
row1c, not row1 (or more correctly at the row that starts the block we
believe has a row1c row in it...)
I
We have some meetups happening over the next few months. Sign up if you
are interested in attending (or if you would like to present, write me
off-list).
First up, there is hbasecon2013 (http://hbasecon.com) on June 13th in SF.
It is shaping up to be a great community day out with a bursting
This has come up in the past:
http://search-hadoop.com/m/mDn0i2kjGA32/NumberFormatException+dnssubj=unable+to+resolve+the+DNS+name
Or check out this old thread:
http://mail.openjdk.java.net/pipermail/jdk7-dev/2010-October/001605.html
St.Ack
On Fri, May 17, 2013 at 11:17 AM, Heng Sok
On Thu, May 16, 2013 at 2:03 PM, Varun Sharma va...@pinterest.com wrote:
Or do we use some kind of demarcator b/w rows and columns and timestamps
when building the HFile keys and the indices ?
No demarcation, but in KeyValue we keep row, column family name, column
qualifier, etc.,
What you seeing Varun (or think you are seeing)?
St.Ack
On Thu, May 16, 2013 at 2:30 PM, Stack st...@duboce.net wrote:
On Thu, May 16, 2013 at 2:03 PM, Varun Sharma va...@pinterest.com wrote:
Or do we use some kind of demarcator b/w rows and columns and timestamps
when building the HFile
?
On Thu, May 16, 2013 at 2:30 PM, Stack st...@duboce.net wrote:
What you seeing Varun (or think you are seeing)?
St.Ack
On Thu, May 16, 2013 at 2:30 PM, Stack st...@duboce.net wrote:
On Thu, May 16, 2013 at 2:03 PM, Varun Sharma va...@pinterest.com
wrote:
Or do we use some kind
On Tue, May 14, 2013 at 11:33 PM, Varun Sharma va...@pinterest.com wrote:
Hi,
I was looking at PrefixFilter but going by the implementation - it looks like we
scan every row until we hit the prefix instead of seeking to the row with
the required prefix.
I was wondering if there are more
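A common way to get seek-like behavior for a prefix scan is to bound the scan with a computed stop row instead of relying on the filter alone. A sketch of the stop-row computation (this is the usual trick; later HBase versions ship it built in as Scan#setRowPrefixFilter):

```java
import java.util.Arrays;

// Computes the smallest row key strictly greater than every key sharing
// the given prefix: increment the prefix's last byte, carrying past any
// 0xFF bytes. Scanning [prefix, stopRow) then seeks straight to the
// prefix rather than filtering every preceding row.
class PrefixStop {
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1); // drop carried 0xFF tail
            }
        }
        // Prefix was all 0xFF bytes: scan to the end of the table.
        return new byte[0];
    }
}
```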
On Sat, May 11, 2013 at 1:02 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Aman,
Thank you so much for the quick response. But why would that happen? I
mean the required env variables are present in hbase-env.sh already. What
is the need to source bashrc?
Consider a scenario
Every quarter the HBase PMC has to make a report to the Apache Board. Here
is what we sent for this period (a minor, private item has been redacted).
Yours,
St.Ack
HBase is a distributed column-oriented database
built on top of Hadoop Common and Hadoop HDFS
ISSUES FOR THE BOARD'S ATTENTION
Turn it on by default in trunk/0.95 I'd say.
St.Ack
On Wed, Apr 10, 2013 at 4:02 PM, lars hofhansl la...@apache.org wrote:
Fix is committed and will be in 0.94.7.
I guess we should have a discussion at some point on whether we should
always switch this feature on (it is disabled by
On Wed, Apr 10, 2013 at 6:54 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Nitin,
You got my question correctly.
However, I'm wondering how it works when it's done in HBase.
We use the default MapReduce partitioner:
On Wed, Apr 10, 2013 at 12:01 PM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
Hi Greame,
No. The reducer will simply write on the table the same way you are doing a
regular Put. If a split is required because of the size, then the region
will be split, but at the end, there will not
On Tue, Apr 9, 2013 at 4:41 AM, Bing Jiang jiangbinglo...@gmail.com wrote:
hi,
There are some physical machines which each one contains a large ssd(2T)
and general disk(4T),
and we want to build our hdfs and hbase environment.
What kind of workload do you intend to run on these machines?
Make a patch for the reference guide that points to this Otis? Or just
tell me where to insert?
Thanks,
St.Ack
On Wed, Aug 8, 2012 at 4:14 PM, Otis Gospodnetic otis_gospodne...@yahoo.com
wrote:
Hi,
We wrote an HBase Refcard and published it via DZone. Here is our very
brief announcement:
The first release in a series of Development releases, hbase-0.95.0 is
available for download from your favorite Apache mirror. See:
http://www.apache.org/dyn/closer.cgi/hbase/
(It may take a few hours for the release to show up everywhere)
About 1500 issues have been closed against this
On Sun, Apr 7, 2013 at 11:58 AM, Ted yuzhih...@gmail.com wrote:
With regard to number of column families, 3 is the recommended maximum.
How did you come up w/ the number '3'? Is it a 'hard' 3? Or does it
depend? If the latter, on what does it depend?
Thanks,
St.Ack
Try setting hbase.balancer.period to a very high number in your
hbase-site.xml:
http://hbase.apache.org/book.html#hbase.master.dns.nameserver
St.Ack
On Sun, Apr 7, 2013 at 3:14 PM, Akshay Singh akshay_i...@yahoo.com wrote:
Hi,
I am trying to permanently switch off the balancer in HBase,
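For reference, hbase.balancer.period controls how often the master's balancer runs (in milliseconds; the default is five minutes), so a very large value effectively disables periodic balancing:

```xml
<!-- hbase-site.xml on the master -->
<property>
  <name>hbase.balancer.period</name>
  <value>300000000</value>
</property>
```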
On Sun, Apr 7, 2013 at 3:27 PM, Ted Yu yuzhih...@gmail.com wrote:
From http://hbase.apache.org/book.html#number.of.cfs :
HBase currently does not do well with anything above two or three column
families so keep the number of column families in your schema low.
We should add more to that
for initial experiments (For example, see
https://repository.apache.org/content/groups/snapshots/org/apache/hbase/hbase-client/0.95.0-hadoop2-SNAPSHOT/).
Let us know if you'd like us to package things otherwise.
Yours,
The HBase Team
On Sun, Apr 7, 2013 at 2:51 PM, Stack st...@duboce.net wrote
On Fri, Apr 5, 2013 at 11:19 AM, Christophe Taton ta...@wibidata.com wrote:
Hi,
Is there an explicit specification of the behavior of max versions (set in
a get/scan) when combined with filters?
From my experiments (with 0.92 CDH4.1.2), the max versions is applied in a
way that is neither