Congrats.
Enis
On Wed, Jan 2, 2013 at 11:47 AM, Himanshu Vashishtha
hvash...@cs.ualberta.ca wrote:
Congrats Matteo and Chunhui! :)
On Wed, Jan 2, 2013 at 11:40 AM, Jimmy Xiang jxi...@cloudera.com wrote:
Congratulations! Matteo and Chunhui!!
On Wed, Jan 2, 2013 at 11:37 AM, Jonathan
Cool. Will play a bit later on. Was waiting for it to appear.
On Wed, Jan 30, 2013 at 1:04 PM, James Taylor jtay...@salesforce.com wrote:
We are pleased to announce the immediate availability of a new open source
project, Phoenix, a SQL layer over HBase that powers the HBase use cases at
Congrats. Well deserved.
Enis
On Thu, Feb 7, 2013 at 9:56 AM, Jimmy Xiang jxi...@cloudera.com wrote:
Congratulations!
On Thu, Feb 7, 2013 at 9:54 AM, Andrew Purtell apurt...@apache.org
wrote:
Congratulations Devaraj!
On Wed, Feb 6, 2013 at 9:19 PM, Ted Yu yuzhih...@gmail.com
Hi,
Maybe you can implement a SplitAlgorithm, and use the RegionSplitter
utility. You can find some information about the usage of SplitAlgorithm
here:
http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/
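A custom SplitAlgorithm essentially just computes the split keys that RegionSplitter hands to the table creation. As an illustration only (plain Python, not the HBase API), this is the kind of computation a HexStringSplit-style algorithm performs:

```python
def hex_split_points(num_regions, key_width=8):
    """Compute num_regions - 1 evenly spaced split keys over a
    fixed-width hex keyspace, similar in spirit to HexStringSplit."""
    if num_regions < 2:
        return []
    keyspace = 16 ** key_width
    step = keyspace // num_regions
    # Split keys are the region boundaries, excluding the open-ended
    # first start key and last end key.
    return [format(i * step, "0%dx" % key_width) for i in range(1, num_regions)]
```

For example, hex_split_points(4) yields the three boundaries '40000000', '80000000', 'c0000000'.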
Enis
On Tue, Feb 19, 2013 at 11:11 PM, Farrokh Shahriari
From some of their presentations, I've gathered that they implement
B-Trees instead of LSM trees on top of their file system, which allows random
writes. They also claim that they convert random mutation requests
to the B-Tree leaves into sequential writes. They are also talking about
mini-WALs to
There is a
MultiTableInputFormat that has been recently added to HBase. You might want
to take a look at it.
https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormat.java
Enis
On Wed, Feb 27, 2013 at 8:17 AM, Paul van Hoven
Hey,
You need to use both. If you setup the cluster through Apache Ambari (
http://incubator.apache.org/ambari/), it will setup everything for you.
Enis
On Tue, Feb 26, 2013 at 11:43 PM, Leonid Fedotov
lfedo...@hortonworks.com wrote:
Ganglia is monitoring and trending.
Nagios is health
Congrats and welcome.
On Mon, Mar 11, 2013 at 2:21 AM, Nicolas Liochon nkey...@gmail.com wrote:
Congrats, Anoop!
On Mon, Mar 11, 2013 at 5:35 AM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Congratulations Anoop!
If you check out the latest source code, there are some examples under
hbase-examples/src/main/php.
Enis
On Thu, Mar 14, 2013 at 6:06 AM, dimpanagr dimpan...@yahoo.gr wrote:
Hi, I have set up HBase and am trying to use Thrift-PHP to upload an image and
then display it. Is there any example or
I would caution against going that route, for various reasons:
- Correctness: You can never be sure you are in sync with the memstore flushes and
compactions changing the files under you.
- Security: All files from HBase are owned by the HBase user. Other
users should not be able to read them.
-
Hi,
HBase cannot deduce the row key structure, and thus cannot pre-split the table
unless it knows the basic format of the row keys.
<shameless_self_plug>you can look at the blog post about splits here:
http://hortonworks.com/blog/apache-hbase-region-splitting-and-merging/</shameless_self_plug>
Enis
I think the page cache is not totally useless, but as long as you can
control the GC, you should prefer the block cache. Some of the reasons off
the top of my head:
- In case of a cache hit, for the OS cache, you have to go through the DN
layer (an RPC if ssr is disabled), do a kernel jump, and read
enjoyed the depth of explanation from both Enis and J-D. I was indeed
mistakenly referring to HFile as HLog; fortunately you were still able to
understand my question.
Thanks,
Pankaj
On Mar 21, 2013, at 1:28 PM, Enis Söztutar enis@gmail.com wrote:
I think the page cache
From: Enis Söztutar [enis@gmail.com]
Sent: Monday, March 25, 2013 11:24 AM
To: hbase-user
Cc: lars hofhansl
Subject: Re: Does HBase RegionServer benefit from OS Page Cache
Thanks Liyin for sharing your use cases.
Related to those, I was thinking of two improvements:
- AFAIK, MySQL
Hi,
From the logs, it seems you are running into the same problem I have
reported last week: https://issues.apache.org/jira/browse/HBASE-8143
There are some mitigation strategies outlined in that jira. It would be
good if you can confirm:
- How many regions in the region server
- How many open
I think having Int32, and NullableInt32 would support minimum overhead, as
well as allowing SQL semantics.
On Mon, Apr 1, 2013 at 7:26 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Furthermore, it is more important to support null values than to squeeze all
representations into minimum size
Hi,
Interesting use case. I think it depends on how many jobId's you expect to
have. If it is on the order of thousands, I would caution against going the
one-table-per-jobId approach, since for every table there is some master
overhead, as well as file structures in hdfs. If jobId's are
Hi,
I presume you have read the percolator paper. The design there uses a
single ts oracle, and BigTable itself as the transaction manager. In omid,
they also have a TS oracle, but I do not know how scalable it is. But using
ZK as the TS oracle would not work, since ZK can scale up to 40-50K
I would still caution against relying on the sorting order between values of the
same cf, qualifier and timestamp. If, for example, there is a Delete, it
will eclipse subsequent Puts given the same timestamp, even though the Put
happened after the Delete.
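To make the eclipse concrete, here is a toy model of the masking rule (plain Python, not the HBase read path): a delete marker at timestamp T hides any put with timestamp <= T, regardless of which one arrived later in wall-clock time.

```python
def visible_value(cells):
    """cells: list of (timestamp, kind, value) tuples in arrival order,
    where kind is 'put' or 'delete'. Returns the value a reader sees,
    or None if everything is masked."""
    max_delete_ts = max(
        (ts for ts, kind, _ in cells if kind == "delete"), default=-1)
    # A put survives only if its timestamp is strictly newer than the
    # newest delete marker; arrival order does not matter.
    live = [(ts, v) for ts, kind, v in cells if kind == "put"
            and ts > max_delete_ts]
    return max(live)[1] if live else None
```

A put written after a delete but carrying the same timestamp is invisible, which is exactly the surprise being warned about.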
Enis
On Mon, Aug 27, 2012 at 9:20 AM, Tom Brown
Hey Andrew, any update on this? I am dying to see the blog posts : ). I can
also offer my relatively unworthy 2 cents, if you guys need any help.
Thanks,
Enis
On Wed, Feb 1, 2012 at 7:23 PM, Andrew Purtell apurt...@apache.org wrote:
Gary, Eugene is resurrecting posts on security now. I was
Interesting use case. For your product, do you need to secure hbase as
well?
Enis
On Wed, Feb 22, 2012 at 10:03 AM, Alan Chaney a...@mechnicality.com wrote:
Hi
We are using Spring Security and HBase in our product. We are adding ACL
support through Spring and are looking at
Hi,
HDFS has two interfaces for durability: hflush and hsync:
Hflush() : Flush the data packet down the datanode pipeline. Wait for
ack’s.
Hsync() : Flush the data packet down the pipeline. Have datanodes execute
FSYNC equivalent. Wait for ack’s.
There is some work on adding a Durability API in
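The hflush/hsync distinction mirrors buffered I/O on a local filesystem; here is a rough Python analogy (os-level calls, not the HDFS API itself):

```python
import os
import tempfile

def write_flush(f, data):
    # hflush-like: data leaves the application buffer and becomes
    # visible to readers, but may still sit in OS caches rather than
    # on the physical device.
    f.write(data)
    f.flush()

def write_sync(f, data):
    # hsync-like: additionally ask the kernel to persist the data to
    # the device, analogous to the datanodes executing an FSYNC
    # equivalent before acking.
    f.write(data)
    f.flush()
    os.fsync(f.fileno())
```

The tradeoff is the same in both worlds: the sync variant is durable across machine crashes but pays the device-write latency on every call.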
Hi,
Regarding running raw scans on top of Hfiles, you can try a version of the
patch attached at https://issues.apache.org/jira/browse/HBASE-8369, which
enables exactly this. However, the patch is for trunk.
In that, we open one region from snapshot files in each record reader, and
run a scan
Bryan,
3.6x improvement seems exciting. The ballpark difference between HBase scan
and hdfs scan is in that order, so it is expected I guess.
I plan to get back to the trunk patch, add more tests, etc. next week. In the
meantime, if you have any changes to the patch, please attach them.
Enis
I've seen a similar stack trace in some test as well, and opened the issue
https://issues.apache.org/jira/browse/HBASE-8912 for tracking this.
This looks like a problem in AssignmentManager that fails to recognize a
valid state transition, but I did not have the time to look into it
further.
Hi,
Agreed with what Nick said. There is also an MSI based installation for
HBase as a part of HDP-1.3 package. You can check it out here:
http://hortonworks.com/products/hdp-windows/
Enis
On Tue, Aug 20, 2013 at 2:54 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Hi Andrew,
I don't think the
As long as there is interest in 0.94, we will care for 0.94. However, when
0.96.0 comes out, it will be marked as the next stable release, so I expect
that we would point newcomers to that branch.
Any committer can propose any branch and release candidate any time, so if
there are road blocks for
Hi,
Please join me in welcoming Nick as our new addition to the list of
committers. Nick is exceptionally good with user-facing issues, and has
made major contributions in mapreduce-related areas, hive support, as well
as 0.96 issues and the new and shiny data types API.
Nick, as tradition, feel
Congrats and welcome aboard.
On Wed, Sep 11, 2013 at 10:08 AM, Jimmy Xiang jxi...@cloudera.com wrote:
Congrats!
On Wed, Sep 11, 2013 at 9:54 AM, Stack st...@duboce.net wrote:
Hurray for Rajesh!
On Wed, Sep 11, 2013 at 9:17 AM, ramkrishna vasudevan
Right now we do not have what you suggest.
Eric has created an issue for this:
https://issues.apache.org/jira/browse/HBASE-8016
I think it makes a lot of sense, especially enabling HRegion as a library
to work on top of shared hdfs and building a simple layer to embed the
client side, etc.
The
Hi guys,
I just wanted to give a heads up on upcoming bay area user and dev meetups,
which will happen on the same day, October 24th. (Special thanks to Stack for
pushing this.)
The user meetup will start at 6:30, and the talks scheduled so far are:
+ Steven Noels will talk about using the Lily
The HBase shell is just the JRuby shell with some custom methods imported. Are
you running with PowerShell or cmd? You can boot up JRuby directly to see
whether there is any difference there.
Enis
On Wed, Oct 2, 2013 at 7:03 AM, Sznajder ForMailingList
bs4mailingl...@gmail.com wrote:
Hi
I am
You can look at the HBase in Action book, which contains a whole chapter
on an example GIS system on HBase.
Enis
On Fri, Oct 4, 2013 at 1:01 AM, Adrien Mogenet adrien.moge...@gmail.com wrote:
If you mean insert and query spatial data, look at algorithms that are
distributed databases
You can change the log4j configuration for your deployment as you see fit.
DRFA is however a better default for the general case.
You might also want to ensure that you are not running with DEBUG logging
enabled in a production cluster.
Enis
On Mon, Oct 14, 2013 at 11:01 PM, sreenivasulu y
If the split takes too long (longer than 30 secs), I would say you may have
too many store files in the region. A split has to write two tiny files per
store file. The other thing may be that the region has to be closed before the
split; thus it has to do a flush. If it cannot complete the flush in time,
it
Nice test!
There are a couple of things here:
(1) HFileReader reads only one file, whereas an HRegion reads multiple
files (into the KeyValueHeap) to do a merge scan. So, although there is
only one file, there is some overhead of doing a merge-sorted read from
multiple files in the region. For
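The merge-sorted read can be pictured as a standard k-way heap merge; a sketch of the idea (not the KeyValueHeap code itself):

```python
import heapq

def merge_scan(sorted_files):
    """Each 'file' is a list of (key, value) pairs sorted by key.
    A heap-based merge yields one globally key-ordered stream; the
    heap maintenance is the extra work a region does compared to a
    straight read of a single HFile."""
    return list(heapq.merge(*sorted_files, key=lambda kv: kv[0]))
```

Even with a single input list the heap machinery is set up, which is one source of the per-scan overhead being measured here.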
You can find the manual for HDP here:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.5.0/bk_system-admin-guide/content/ch_hadoop-ha.html
I think you have to configure your hbase root dir and client side hdfs
configuration.
Enis
On Wed, Jan 22, 2014 at 12:22 AM, jaksky
We've also recently updated
http://hbase.apache.org/book/ops.capacity.html which contains similar
numbers, and some more details on the items to
consider for sizing.
Enis
On Sat, Feb 8, 2014 at 10:12 PM, Ramu M S ramu.ma...@gmail.com wrote:
Thanks Lars.
We were in the process of building
winutils.exe is the native implementation of some of the utilities that
hdfs and hadoop clients require. For the hbase client and hadoop client to
work properly, you have to have it installed correctly.
You can build it locally: http://wiki.apache.org/hadoop/Hadoop2OnWindows.
Enis
On Fri, Feb
Indeed we need some pom mockery to be able to do that. It would be good
though.
Some history:
https://issues.apache.org/jira/browse/HBASE-5341
and related
https://issues.apache.org/jira/browse/HBASE-6929
Enis
On Tue, Mar 4, 2014 at 5:03 PM, James Taylor giacomotay...@gmail.com wrote:
That'd
From the log message, it seems that you did not run the upgrade.
Can you try with the instructions Ted sent?
Enis
On Wed, Mar 5, 2014 at 12:39 AM, Richard Chen cxd3...@gmail.com wrote:
Thanks for the suggestion. But when I looked at the JIRA, it shows
- Fix Version/s:0.98.0
Hi,
In the dev thread [1], HBase developers are considering dropping support
for Hadoop-1 in the upcoming releases 0.99 and HBase-1.0. This is a heads up, so
that you can plan ahead if you are choosing to go with the 0.96.x and 0.98.x
releases.
Hadoop-2.2 was released last October, and it is superior in
Hey,
Indeed it is a viable approach. Some HBase deployments use the
master-master replication model across DCs. The consistency semantics will
obviously depend on the application use case.
However, there is no out-of-the-box client to do across-DC requests. But
wrapping the HBase client in
Here is my late +1 for the release.
- Checked checksums
- Checked gpg signature (gpg --verify )
- Checked included documentation book.html, etc.
- Running unit tests
- Started in local mode, run LoadTestTool
- checked hadoop libs in h1 / h2
- build src with both hadoop 1 and 2
- checked
Sorry I could not get on this sooner. I do typically run a list of tests
from my checklist to verify the release, which is why I sometimes do not
vote on releases. Let me do the dutiful for this.
Agreed with the general sentiment. Please reconsider.
Enis
On Mon, May 12, 2014 at 3:59 PM, Stack
Hi Keegan,
Unfortunately, at the time of the module split in 0.96, we could not
completely decouple the mapreduce classes from the server dependencies. I think
we actually need two modules to be extracted out, one being hbase-mapreduce
(probably a separate module from the client module) and hbase-storage for
Agreed, this seems like an hdfs issue, unless hbase itself does not close
the hfiles properly. But judging from the fact that you were able to
circumvent the problem by reducing the cache size, that does seem
unlikely.
I don't think the local block reader will be notified when a file/block
The HBase Team is pleased to announce the immediate release of HBase 0.99.0.
Download it from your favorite Apache mirror [1] or maven repository.
THIS RELEASE IS NOT INTENDED FOR PRODUCTION USE, and does not contain any
backwards or forwards compatibility guarantees (even within minor versions
The HBase Team is pleased to announce the immediate release of HBase 0.99.2.
Download it from your favorite Apache mirror [1] or maven repository.
THIS RELEASE IS NOT INTENDED FOR PRODUCTION USE, and does not contain any
backwards or forwards compatibility guarantees (even within minor versions
We do have
Admin.getClusterStatus().getLoad(serverName).getInfoServerPort(), which
may be what you want.
Enis
On Wed, Jan 21, 2015 at 5:05 PM, Stack st...@duboce.net wrote:
On Wed, Jan 21, 2015 at 4:49 PM, Stack st...@duboce.net wrote:
On Wed, Jan 21, 2015 at 3:03 PM, Talat Uyarer
Online in this context is HBase cluster being online, not individual
regions. For the merge process, the regions go briefly offline similar to
how splits work. It should be on the order of seconds.
Enis
On Wed, Jan 21, 2015 at 10:26 AM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at
Hi,
You are right that the constructor new HTable(Configuration, ..) will share
the underlying connection if the same configuration object is used. Connection
is a heavyweight object that holds the zookeeper connection, rpc client,
socket connections to multiple region servers, master, and the
sharing. Everything is explicit now.
Enis
On Tue, Feb 17, 2015 at 11:50 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, Enis Söztutar
You wrote:
You are right that the constructor new HTable(Configuration, ..) will
share the underlying connection if same configuration object is used
The configuration and logs seem usual. Agreed that it seems the shutdown
hook binding failed.
Do you see anything in the .out files? What about the classpath?
Enis
On Fri, Mar 27, 2015 at 2:23 AM, Dejan Menges dejan.men...@gmail.com
wrote:
Sorry, I meant to set it to $hbase.tmp.dir/local or
The HBase Team is pleased to announce the immediate release of HBase 1.0.0.
Download it from your favorite Apache mirror [1] or maven repository.
HBase 1.0.0 is the next stable release, and the start of semantic
versioned
releases (See [2]).
The 1.0.0 release has three goals:
1) to lay a stable
Yes, this looks like HBASE-10499, but without more logs from the region
server it is hard to tell.
hdp_specific
The next version of HDP-2.1 is already scheduled to contain HBASE-10499. If
you want, we can continue at the HDP forums.
Enis
On Tue, Feb 24, 2015 at 10:38 AM, Brian Jeltema
@Madeleine,
The folder gets cleaned regularly by a chore in the master. When a WAL file is
no longer needed for recovery purposes (when HBase can guarantee it
has flushed all the data in the WAL file), it is moved to the oldWALs
folder for archival. The log stays there until all other references
Moving this to the user mailing list; the issues mailing list is for jira issues.
Did you start zookeeper, and follow the guide at
https://hbase.apache.org/book.html#quickstart ?
Enis
On Tue, Apr 21, 2015 at 11:36 AM, Bo Fu b...@uchicago.edu wrote:
Hi,
I’m a beginner of HBase. I’m recently
In case this is HBASE-11234, HDP-2.2 releases contain the fix.
Enis
On Thu, Apr 23, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:
I think Dejan was referring to HBASE-11234
Cheers
On Thu, Apr 23, 2015 at 8:28 AM, Dejan Menges dejan.men...@gmail.com
wrote:
Hi,
This is a known
This is a nice topic. Let's put it in the ref guide.
HBase on Azure FS is GA, and there has already been some work on
supporting HBase on the Hadoop native driver.
From this thread, my gathering is that HBase should run on HDFS, MaprFS,
IBM GPFS, Azure WASB (and maybe Isilon, etc).
I had a
The build is broken with Hadoop-2.2 because mini-kdc is not found:
[ERROR] Failed to execute goal on project hbase-server: Could not resolve
dependencies for project org.apache.hbase:hbase-server:jar:1.1.0: Could not
find artifact org.apache.hadoop:hadoop-minikdc:jar:2.2
We are saying that 1.1
Here are my tests so far:
checked sigs, crcs
build src tarball with hadoop 2.3, 2.4.0, 2.5.0, 2.5.1, 2.5.2 and 2.6.0.
Build with Hadoop-2.2.0 is broken as in the previous mail. I don’t think we should
sink the RC for this.
run local mode
simple tests from shell
Build with downstreamer
checked dir
, wasn't sure why
people.apache.org wasn't picking up the new sub key. I've just uploaded to
pgp.mit.edu just now.
See also http://people.apache.org/~ndimiduk/KEY
On Mon, May 4, 2015 at 6:42 PM, Enis Söztutar enis@gmail.com wrote:
Nick did you upload your keys to MIT servers?
I am
Nick did you upload your keys to MIT servers?
I am not able to verify the sig.
gpg --list-keys
pub 4096R/8644EEB6 2014-03-11 [expires: 2016-04-14]
uid Nick Dimiduk ndimi...@apache.org
uid Nick Dimiduk ndimi...@gmail.com
sub 4096R/D2DCE494 2014-03-11
- a resolution to the RegionScanner interface change, if deemed necessary
Sorry, I confused RegionScanner with ResultScanner. RegionScanner is not a
Public class; it is only for coprocessors. I think we do not need a fix here.
On a totally unrelated note, I was going over the ThrottlingException
I guess you should mention that the RC is created from branch-1.1.0, as
opposed to branch-1.1.
I was looking at
https://people.apache.org/~ndimiduk/1.0.0_1.1.0RC1_compat_report.html and
was surprised by the HTable warning about RegionLocator. Did you
cherry-pick the commit from master or
Here is my +1.
checked sigs, crcs
build src tarball with hadoop 2.3, 2.4.0, 2.5.0, 2.5.1, 2.5.2 and 2.6.0 and
2.7.0 (did not try 2.2)
run local mode
simple tests from shell
Build with downstreamer
checked dir layouts
checked jar files
checked version, tag,
Checked the documentation.
Yeah, for coprocessors, what Andrew said. You have to make minor changes.
From your repo, I was able to build:
HW10676:hbase-deps-test$ ./build.sh
:compileJava
Download
https://repository.apache.org/content/repositories/orgapachehbase-1078/org/apache/hbase/hbase/1.1.0/hbase-1.1.0.pom
Download
It is also possible that HTTP resources are not being loaded. I run into
this daily, because of an extension I am using which refuses to load unsafe
scripts from the HTTPS links I default to.
Usually, web sites also host the CSS and JavaScript files they refer to instead
of hot-linking them. Maybe
ITBLL requires at least 1M keys per mapper. You can run with a smaller number of
mappers and numKeys = 1M.
Enis
On Thu, Jun 25, 2015 at 4:23 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
2015-06-24 22:29 GMT-04:00 Enis Söztutar enis@gmail.com:
Also, I tried to run
Slight correction: ITBLL needs numKeys to be a multiple of 1M. See the
javadoc and ascii art in the code.
Enis
On Thu, Jun 25, 2015 at 1:47 PM, Enis Söztutar enis@gmail.com wrote:
ITBLL requires at least 1M keys per mapper. You can run with a smaller number of
mappers and numKeys = 1M.
Enis
Also, I tried to run IntegrationTestBigLinkedList and it fails:
015-06-24 19:06:11,644 ERROR [main]
test.IntegrationTestBigLinkedList$Verify: Expected referenced count does
not match with actual referenced count. expected referenced=100
,actual=0
What are the command line arguments
:29 PM, Enis Söztutar enis@gmail.com wrote:
Also, I tried to run IntegrationTestBigLinkedList and it fails:
015-06-24 19:06:11,644 ERROR [main]
test.IntegrationTestBigLinkedList$Verify: Expected referenced count does
not match with actual referenced count. expected referenced=100
I've just seen the thread in question, and I also feel that an action has to
be taken, because this type of behavior is unacceptable. It is also not the
first strike, if my memory serves me.
Moderation is fine if we have volunteers. Otherwise +1 for a temporary ban.
Enis
On Tue, Jun 30, 2015 at
Congrats, well deserved.
Enis
On Fri, Aug 21, 2015 at 11:12 AM, Devaraj Das d...@hortonworks.com wrote:
Congratulations Stephen!
On Aug 20, 2015, at 7:10 PM, Andrew Purtell apurt...@apache.org wrote:
On behalf of the Apache HBase PMC, I am pleased to announce that Stephen
Jiang has
Hi Artem,
The 1.0 API BufferedMutator should cover the use cases where HTableUtil
or HTable.setAutoFlush(false) was previously used.
BufferedMutator already groups the mutations per region server under the
hood (AsyncProcess) and sends the buffered mutations in the background.
There
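The grouping behavior can be sketched like this (a toy model of the buffering idea in plain Python, not the actual AsyncProcess code; all names here are made up):

```python
class BufferedWriterSketch:
    """Groups mutations per destination server and flushes a group once
    it reaches max_per_dest, mimicking client-side write buffering."""

    def __init__(self, route, send, max_per_dest=2):
        self.route = route            # mutation -> destination server
        self.send = send              # callback: (dest, batch) -> None
        self.max_per_dest = max_per_dest
        self.buffers = {}

    def mutate(self, mutation):
        dest = self.route(mutation)
        batch = self.buffers.setdefault(dest, [])
        batch.append(mutation)
        if len(batch) >= self.max_per_dest:
            # Ship a full per-destination batch in one request.
            self.send(dest, self.buffers.pop(dest))

    def close(self):
        # Flush whatever is still buffered, as BufferedMutator.close() does.
        for dest, batch in list(self.buffers.items()):
            self.send(dest, batch)
        self.buffers.clear()
```

The point of the grouping is that many small mutations to the same server collapse into few RPCs, which is what made the old setAutoFlush(false) path fast.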
You should also configure authentication set to kerberos.
Enis
On Tue, Nov 3, 2015 at 9:00 PM, kumar r wrote:
> Yes I have set all this property,
>
> I got user ticket using kinit command and when trying to run *user_permission
> or grant *command in hbase shell, getting
It is highly unlikely for a client to cause the server side to abort. It
is possible that the regionservers are aborting due to some other problem.
The regionservers will reject client connection requests if there is an RPC
version mismatch.
1.x and 0.98 client and servers have been tested to be
The HBase Team is pleased to announce the immediate release of HBase
1.0.2.
Download it from your favorite Apache mirror [1] or maven repository.
HBase 1.0.2 is the next “patch” release in the 1.0.x release line and
supersedes all previous 1.0.x releases. According to HBase’s semantic
Agreed that we should not change the declared interface for TRR in patch
releases. Ugly, but we can rethrow as RuntimeException or ignore in 1.1 and
before.
I think this is also a blocker:
https://issues.apache.org/jira/browse/HBASE-14474
Enis
On Wed, Sep 23, 2015 at 3:50 PM, Nick Dimiduk
Normally this path: /hbase/data/default/TestTable/TestTable/
d5b64ad96deb3b47467db97669c009fb
should be /hbase/data/default/TestTable/d5b64ad96deb3b47467db97669c009fb.
Meaning that the table descriptor somehow was written under the wrong
folder (there should be only one TestTable in the path).
Some of these properties were changed or deprecated, and new ones were added,
between different apache versions. The apache documentation lists the latest
configuration examples, which should work with hbase-1.1+.
Enis
On Thu, Nov 19, 2015 at 8:49 AM, Melvin Kanasseril <
melvin.kanasse...@sophos.com> wrote:
The regionservers file is used only by the start-hbase.sh and stop-hbase.sh
kind of scripts. That is why the documentation refers to it as "if you
rely on ssh to start your daemons". These scripts execute the start or
stop daemon request using SSH. That is also why you need to have a list of
Disabling the table should not be needed. From the stripe compaction
perspective, deploying this on a disabled table versus via an online alter
table is not different at all. The "hbase.online.schema.update.enable"
property was guarding against some possible race conditions that were fixed
a long time ago.
We
Welcome Mikhail.
Enis
On Thu, May 26, 2016 at 12:19 PM, Gary Helmling wrote:
> Welcome Mikhail!
>
> On Thu, May 26, 2016 at 11:47 AM Ted Yu wrote:
>
> > Congratulations, Mikhail !
> >
> > On Thu, May 26, 2016 at 11:30 AM, Andrew Purtell
You should probably read
https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1 first.
In HBase-1.1 and later code bases, you can call Scan.allowPartialResults()
to instruct the ClientScanner to give you partial results. In this case,
you can use Result.isPartial() to stitch together
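The stitching itself is simple to picture (a sketch of the idea in plain Python; in the real client you would compare Result row keys and consult Result.isPartial()):

```python
def stitch_partials(partials):
    """partials: ordered list of (row_key, [cells]) chunks as a scanner
    might hand them back; consecutive chunks belonging to the same row
    are merged into one complete row, preserving scan order."""
    rows, order = {}, []
    for row, cells in partials:
        if row not in rows:
            rows[row] = []
            order.append(row)
        rows[row].extend(cells)
    return [(row, rows[row]) for row in order]
```

With allowPartialResults, a very wide row arrives in several chunks instead of one giant Result, which keeps client memory bounded.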
The LRU block cache does not reserve space for the in_memory tier. That space
can be used by the other tiers as well, as long as there is room.
You can read the code in LRUBlockCache.evict() to learn more.
Enis
On Thu, Jun 16, 2016 at 1:21 AM, WangYQ wrote:
> in hbase
The row key byte[] you are passing to Get() has a length of 0. The HBase data
model does not allow 0-length row keys; a row key must be at least 1 byte. The
0-byte row key is reserved for internal usage (to designate empty start and
end keys).
In your storm topology, you are probably passing a row key that
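A defensive check before building the Get catches this class of bug early; a minimal sketch (the upper bound here assumes HBase's Short.MAX_VALUE row length limit):

```python
MAX_ROW_LENGTH = 0x7FFF  # assumed limit: Short.MAX_VALUE bytes

def validate_row_key(row):
    """Reject row keys HBase will not accept: empty (reserved for the
    open-ended start/stop keys of scans) or longer than MAX_ROW_LENGTH."""
    if row is None or len(row) == 0:
        raise ValueError("row key must be at least 1 byte")
    if len(row) > MAX_ROW_LENGTH:
        raise ValueError("row key exceeds %d bytes" % MAX_ROW_LENGTH)
    return row
```

Running every tuple's key through such a check in the topology makes the bad record surface at the producer, instead of as a confusing server-side exception.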
ade there.
>
> Checked logs.
>
> Poked around the package, all looks good
>
> St.Ack
>
>
> On Wed, Jan 20, 2016 at 12:29 AM, Enis Söztutar <e...@apache.org> wrote:
>
> > I am pleased to announce that the second release candidate for the
> releas
Tomorrow is the last day for this RC vote. Can we get some more votes,
please? This will be the last scheduled release in the 1.0 line.
Thanks,
Enis
On Mon, Jan 25, 2016 at 1:33 PM, Enis Söztutar <e...@apache.org> wrote:
> Here is the UT build for the RC:
>
>
> https://builds.a
The HBase Team is pleased to announce the immediate release of HBase 1.0.3.
Download it from your favorite Apache mirror [1] or maven repository.
HBase 1.0.3 is the next “patch” release in the 1.0.x release line and
supersedes all earlier 1.0.x releases. According to HBase’s semantic version
guide (See
For compaction configurations, you can also set it per table OR even per
column family.
In Java, you can use
HTableDescriptor.setConfiguration() or HColumnDescriptor.setConfiguration()
to set specific configuration values that override the ones set in
hbase-site.xml. For compaction and
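The precedence can be pictured as a simple lookup chain (a sketch of the override order only, not the HBase internals):

```python
def resolve(key, site, table_conf, family_conf):
    """Column-family settings win over table settings, which win over
    the cluster-wide hbase-site.xml value; None if the key is unset."""
    for scope in (family_conf, table_conf, site):
        if key in scope:
            return scope[key]
    return None
```

So a compaction knob set on one hot column family applies only there, while everything else keeps the site-wide default.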
Here is the UT build for the RC:
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.0.3RC1/2/testReport/
All tests passed on the second run.
Enis
On Sat, Jan 23, 2016 at 8:00 PM, Enis Söztutar <e...@apache.org> wrote:
> Gently reminder for the vote.
>
> On Wed, Jan 20,
-downstreamer
- run local mode, shell smoke tests, LTT.
- Put it up on a 4 node cluster, ran LTT.
Everything looks nominal.
On Tue, Jan 19, 2016 at 8:29 PM, Enis Söztutar <e...@apache.org> wrote:
> I am pleased to announce that the second release candidate for the release
> 1.
Gently reminder for the vote.
On Wed, Jan 20, 2016 at 2:40 PM, Enis Söztutar <e...@apache.org> wrote:
> Here is my official +1. I executed the same tests from 1.1.3RC for
> 1.0.3RC.
>
> - checked crcs, sigs.
>
> - checked tarball layouts, files, jar files, etc.
>
>
> Operation category WRITE is not supported in state standby
This looks to be coming from a NN that is in standby state (or safe mode?).
Did you check whether the underlying HDFS is healthy? Is this an HA configuration,
and does hbase-site.xml contain the correct NN configuration?
Enis
On Tue, Feb 16, 2016
Phoenix maintains a column with an empty value because, unlike in a row-oriented
RDBMS, a NULL column is not represented explicitly in HBase; it is implicit
in the cell not being in HBase at all.
Here is the explanation from phoenix.apache.org:
For CREATE TABLE, an empty key value will also be added
We do not publish per-release docs other than master and 0.94. 0.94 is
a special case, since 0.96+ contains some major changes.
The release artifacts for every release contain the book in the tarball, if
you want to browse through.
Enis
On Wed, Mar 2, 2016 at 4:00 PM, Cosmin Lehene
Makes sense to drop the branch for HBase-1.0.x.
I had proposed it here before:
http://search-hadoop.com/m/9UY0h2XrnGW1d3OBF1=+DISCUSS+Drop+branch+for+HBase+1+0+
Enis
On Thu, Apr 21, 2016 at 8:06 AM, Andrew Purtell
wrote:
> HBase announced at the last 1.0 release