Congrats!
Enis
On Thu, Jul 6, 2017 at 10:57 AM, Devaraj Das wrote:
> Congratulations, Chunhui!
>
> From: Yu Li
> Sent: Monday, July 03, 2017 10:24 PM
> To: d...@hbase.apache.org; Hbase-User
> Subject: [ANNOUNCE]
Major compaction will not trigger flush. You have to issue that manually.
BTW, you can look at the settings related to "cache on flush" so that when
the flusher writes the files, the blocks can go directly to the block
cache.
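A minimal hbase-site.xml sketch for the "cache on flush" behavior (the property name below is taken from the 1.x CacheConfig; verify it against your HBase version, and note it can also be set per column family):

```xml
<!-- Cache data blocks in the block cache as they are written out
     by flushes/compactions, so fresh data is immediately cached. -->
<property>
  <name>hbase.rs.cacheblocksonwrite</name>
  <value>true</value>
</property>
```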
Enis
On Mon, May 29, 2017 at 7:45 PM, Eric Owhadi
wrote:
> thanks Enis
>
> I apologize for earlier
>
> This looks very close to our issue
> When you say: "there is no "WAL" recovery is happening", how could I make
> sure of that? Thanks
>
> Jeff
>
>
>
>
Jeff, please be respectful to the people who are trying to help you. This is
not acceptable behavior and will result in consequences next time.
On the specific issue that you are seeing, it is highly likely that you are
seeing this: https://issues.apache.org/jira/browse/HBASE-14223. Having
those
Congrats and welcome.
Enis
On Mon, Mar 27, 2017 at 9:23 AM, Stephen Jiang
wrote:
> Great! Congratulations and welcome to the team!
>
> Thanks
> Stephen
>
> On Mon, Mar 27, 2017 at 8:53 AM, Andrew Purtell
> wrote:
>
> > Congratulations and
There are different thread pools in the client, and some of them depend on
how you construct the connection and table instances.
The first thread pool is the one owned by the connection. If you are using
ConnectionFactory.createConnection() (which you should) then this is the
<anoop.hb...@gmail.com> wrote:
> Thanks Enis.. I did not know about setting the replica id
> specifically.. So what will happen if that replica is down at
> read time? Will the read go to another replica?
>
> -Anoop-
>
> On Sat, Feb 18, 2017 at 3:34 AM, Eni
nse, and then sending to
> another, this bodes well with our scenarios.
>
> Please confirm
>
> thanks
>
>
> From: Enis Söztutar <enis@gmail.com>
> Sent: Friday, February 17, 2017 11:38:42 AM
> To: hbase-user
> Subject: Re: On
You can use read-replicas to distribute the read-load if you are fine with
stale reads. The read replicas normally have a "backup rpc" path, which
implements a logic like this:
- Send the RPC to the primary replica
- if no response for 100ms (or configured timeout), send RPCs to the other
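On the client side, timeline-consistent reads are requested per operation (e.g. `Get.setConsistency(Consistency.TIMELINE)` in the 1.x API). The backup-RPC pattern itself can be sketched in plain Java with a hypothetical helper — this is an illustration of the racing logic, not the actual HBase client code:

```java
import java.util.concurrent.*;
import java.util.function.Supplier;

// Sketch of the "backup RPC" pattern: try the primary replica first,
// and only fan out to a secondary if it has not answered in time.
public class HedgedRead {
    static <T> T hedgedCall(Supplier<T> primary, Supplier<T> secondary,
                            long timeoutMs) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            CompletableFuture<T> p = CompletableFuture.supplyAsync(primary, pool);
            try {
                // Fast path: primary answers within the timeout.
                return p.get(timeoutMs, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                // Primary is slow: race it against the secondary replica
                // and take whichever answers first.
                CompletableFuture<T> s = CompletableFuture.supplyAsync(secondary, pool);
                return (T) CompletableFuture.anyOf(p, s).get();
            }
        } finally {
            pool.shutdownNow();
        }
    }
}
```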
Congrats Josh!
Enis
On Mon, Dec 12, 2016 at 11:39 AM, Esteban Gutierrez
wrote:
> Congrats and welcome, Josh!
>
> esteban.
>
>
> --
> Cloudera, Inc.
>
>
> On Sun, Dec 11, 2016 at 10:17 PM, Yu Li wrote:
>
> > Congratulations and welcome!
> >
> > Best
Going back to the original discussion, it seems the conclusion is to
continue with the 1.1 line for now and re-evaluate in a couple of months.
Nick, do you want to keep driving the next 1.1 releases, or do you want to
let Andrew do that? I can help as well if needed.
Enis
On Mon, Nov 7, 2016 at 12:17
I also think that having 1.1 going for a bit longer might still be helpful,
especially if the ITBLL is failing with branch-1.2. Almost all of our
internal testing happens with a 1.1-based code base, so I cannot tell
whether 1.2 / 1.3 has the same stability or not.
Enis
On Fri, Nov 4, 2016 at 5:05
inlined.
On Thu, Oct 27, 2016 at 3:01 PM, Stack <st...@duboce.net> wrote:
> On Fri, Oct 21, 2016 at 3:24 PM, Enis Söztutar <enis@gmail.com> wrote:
>
> > A bit late, but let me give my perspective. This can also be moved to
> jira
> > or dev@ I think.
>
A bit late, but let me give my perspective. This can also be moved to jira
or dev@ I think.
DLR was nice and had pretty good gains for MTTR. However, dealing with
the sequence ids, onlining regions, etc., and the replay paths proved too
difficult in practice. I think the way forward would be
On behalf of the Apache HBase PMC, I am happy to announce that Stephen has
accepted our invitation to become a PMC member of the Apache HBase project.
Stephen has been working on HBase for a couple of years, and has already
been a committer for more than a year. Apart from his contributions in proc v2,
Please also note that m7 IS NOT HBase and has no connection with Apache
HBase at all. Please do not let your vendor tell you otherwise.
Enis
On Thu, Sep 22, 2016 at 12:00 AM, Deepak Khandelwal <
dkhandelwal@gmail.com> wrote:
> Ok thx everyone.
>
> Will check with Mapr
>
> On Wednesday,
Congrats Duo.
Enis
On Wed, Sep 7, 2016 at 8:03 AM, Misty Stanley-Jones
wrote:
> Congratulations, Duo!
>
> > On Sep 6, 2016, at 9:26 PM, Stack wrote:
> >
> > On behalf of the Apache HBase PMC I am pleased to announce that 张铎
> > has accepted our invitation
The LRUBlockCache does not reserve space for the in_memory tier. That space
can be used by the other tiers as well, as long as there is room.
You can read the code at LRUBlockCache.evict() to learn more.
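A toy model of that behavior (a hypothetical simplification, not the real LruBlockCache code): nothing is reserved for the in-memory tier, any tier may fill the whole cache, and eviction drains the lower-priority tiers first.

```java
import java.util.*;

// Toy sketch of tiered LRU eviction: three priority tiers share one
// capacity; eviction prefers SINGLE, then MULTI, then IN_MEMORY.
public class TieredLru {
    final Deque<String> single = new ArrayDeque<>();
    final Deque<String> multi = new ArrayDeque<>();
    final Deque<String> inMemory = new ArrayDeque<>();
    final int capacity;

    TieredLru(int capacity) { this.capacity = capacity; }

    int size() { return single.size() + multi.size() + inMemory.size(); }

    void add(Deque<String> tier, String block) {
        tier.addLast(block);
        // No reserved space: only evict once the whole cache is over
        // capacity, taking from the lowest-priority non-empty tier.
        while (size() > capacity) {
            if (!single.isEmpty()) single.removeFirst();
            else if (!multi.isEmpty()) multi.removeFirst();
            else inMemory.removeFirst();
        }
    }
}
```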
Enis
On Thu, Jun 16, 2016 at 1:21 AM, WangYQ wrote:
> in hbase
You should probably read
https://blogs.apache.org/hbase/entry/scan_improvements_in_hbase_1 first.
In HBase-1.1 and later code bases, you can call Scan.setAllowPartialResults(true)
to instruct the ClientScanner to give you partial results. In this case,
you can use Result.isPartial() to stitch together
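The stitching loop can be sketched against a minimal stand-in type — `PartialResult` below is a hypothetical placeholder for `org.apache.hadoop.hbase.client.Result`, and the real loop keys off `Result.isPartial()`:

```java
import java.util.*;

// Stand-in for Result: a row key, some cells, and a "partial" flag.
class PartialResult {
    final String row; final List<String> cells; final boolean partial;
    PartialResult(String row, List<String> cells, boolean partial) {
        this.row = row; this.cells = cells; this.partial = partial;
    }
}

public class Stitcher {
    // Accumulate cells until a non-partial Result marks the end of a row,
    // then emit the stitched row and start accumulating the next one.
    static List<List<String>> stitch(List<PartialResult> results) {
        List<List<String>> rows = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (PartialResult r : results) {
            current.addAll(r.cells);
            if (!r.partial) {            // last chunk of this row
                rows.add(current);
                current = new ArrayList<>();
            }
        }
        return rows;
    }
}
```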
Disabling the table should not be needed. From the stripe compaction
perspective, applying this to a disabled table versus via an online alter
is no different at all. The "hbase.online.schema.update.enable"
property guarded against some possible race conditions that were fixed a
long time ago.
We
Welcome Mikhail.
Enis
On Thu, May 26, 2016 at 12:19 PM, Gary Helmling wrote:
> Welcome Mikhail!
>
> On Thu, May 26, 2016 at 11:47 AM Ted Yu wrote:
>
> > Congratulations, Mikhail !
> >
> > On Thu, May 26, 2016 at 11:30 AM, Andrew Purtell
This is great. Thanks James for organizing!
Enis
On Thu, May 19, 2016 at 2:50 PM, James Taylor
wrote:
> The inaugural PhoenixCon will take place 9am-1pm on Wed, May 25th (at
> Salesforce @ 1 Market St, SF), the day after HBaseCon. We'll have two
> tracks: one for Apache
Phoenix does not support HBase-1.2 clusters right now. Only the 4.8 release
of Phoenix will support running with HBase-1.2.
See https://issues.apache.org/jira/browse/PHOENIX-2833
What version of Phoenix did you deploy? It might be the case that Phoenix
coprocessors are just throwing
>
> Also FWIW. I'd be curious to hear how many Phoenix users are using 0.98
>> versus 1.0 and up, besides the folks at Salesforce, whom I know well
>> (smile). And, more generally, who in greater HBase land is still on 0.98
>> and won't move off this year.
>>
>
From our customers' perspective,
Makes sense to drop the branch for HBase-1.0.x.
I had proposed it here before:
http://search-hadoop.com/m/9UY0h2XrnGW1d3OBF1=+DISCUSS+Drop+branch+for+HBase+1+0+
Enis
On Thu, Apr 21, 2016 at 8:06 AM, Andrew Purtell
wrote:
> HBase announced at the last 1.0 release
Phoenix maintains a column with an empty value because, unlike in a
row-oriented RDBMS, a NULL column is not represented explicitly in HBase; it
is implicit in the cell being absent altogether.
Here is the explanation from phoenix.apache.org:
For CREATE TABLE, an empty key value will also be added
We do not publish per-release docs other than for master and 0.94. 0.94 is
a special case, since 0.96+ contains some major changes.
The release artifacts for every release contain the book in the tarball if
you want to browse through it.
Enis
On Wed, Mar 2, 2016 at 4:00 PM, Cosmin Lehene
For compaction configurations, you can also set it per table OR even per
column family.
In java, you can use
HTableDescriptor.setConfiguration() or HColumnDescriptor.setConfiguration()
to set the specific configuration values that overrides the ones set in
hbase-site.xml. For compaction and
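From the HBase shell, the same kind of per-table or per-family override can be applied with an online alter; the table and property names below are examples — any compaction key can go in the CONFIGURATION map:

```ruby
# Per-table override:
alter 'my_table', CONFIGURATION => {'hbase.hstore.compaction.min' => '5'}

# Per-column-family override:
alter 'my_table', {NAME => 'cf', CONFIGURATION => {'hbase.hstore.compaction.min' => '5'}}
```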
> Operation category WRITE is not supported in state standby
This looks to be coming from a NN that is in Standby state (or safe mode?).
Did you check whether the underlying HDFS is healthy? Is this an HA
configuration, and does hbase-site.xml contain the correct NN configuration?
Enis
On Tue, Feb 16, 2016
The row key byte[] you are passing in Get() has a length of 0. The HBase
data model does not allow a 0-length row key; it must be at least 1 byte.
The 0-byte row key is reserved for internal usage (to designate empty start
and end keys).
In your storm topology, you are probably passing a row key that
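A defensive check in the topology code catches this before the RPC is ever made (a hypothetical helper; the empty byte[] is what HBase reserves for open-ended start/stop keys):

```java
// Reject the 0-length row key that HBase reserves for internal use.
public class RowKeys {
    static byte[] requireValidRowKey(byte[] rowKey) {
        if (rowKey == null || rowKey.length == 0) {
            throw new IllegalArgumentException(
                "row key must be at least 1 byte; empty keys are reserved");
        }
        return rowKey;
    }
}
```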
HBase Team is pleased to announce the immediate release of HBase 1.0.3.
Download it from your favorite Apache mirror [1] or maven repository.
HBase 1.0.3 is the next “patch” release in the 1.0.x release line and
supersedes all earlier 1.0.x releases. According to HBase’s semantic
versioning guide (See
ade there.
>
> Checked logs.
>
> Poked around the package, all looks good
>
> St.Ack
>
>
> On Wed, Jan 20, 2016 at 12:29 AM, Enis Söztutar <e...@apache.org> wrote:
>
> > I am pleased to announce that the second release candidate for the
> releas
Tomorrow is the last day for this RC vote. Can we get some more votes,
please? This will be the last scheduled release in the 1.0 line.
Thanks,
Enis
On Mon, Jan 25, 2016 at 1:33 PM, Enis Söztutar <e...@apache.org> wrote:
> Here is the UT build for the RC:
>
>
> https://builds.a
Here is the UT build for the RC:
https://builds.apache.org/view/H-L/view/HBase/job/HBase-1.0.3RC1/2/testReport/
All tests passed on the second run.
Enis
On Sat, Jan 23, 2016 at 8:00 PM, Enis Söztutar <e...@apache.org> wrote:
> Gentle reminder for the vote.
>
> On Wed, Jan 20,
Gentle reminder for the vote.
On Wed, Jan 20, 2016 at 2:40 PM, Enis Söztutar <e...@apache.org> wrote:
> Here is my official +1. I executed the same tests from 1.1.3RC for
> 1.0.3RC.
>
> - checked crcs, sigs.
>
> - checked tarball layouts, files, jar files, etc.
>
>
-downstreamer
- run local mode, shell smoke tests, LTT.
- Put it up on a 4 node cluster, ran LTT.
Everything looks nominal.
On Tue, Jan 19, 2016 at 8:29 PM, Enis Söztutar <e...@apache.org> wrote:
> I am pleased to announce that the second release candidate for the release
> 1.
Normally this path: /hbase/data/default/TestTable/TestTable/
d5b64ad96deb3b47467db97669c009fb
should be /hbase/data/default/TestTable/d5b64ad96deb3b47467db97669c009fb.
Meaning that the table descriptor somehow was written under the wrong
folder (there should be only one TestTable in the path).
Some of these properties were changed or deprecated, and new ones were
added, between different Apache versions. The Apache documentation lists the
latest configuration examples, which should work with hbase-1.1+.
Enis
On Thu, Nov 19, 2015 at 8:49 AM, Melvin Kanasseril <
melvin.kanasse...@sophos.com> wrote:
The regionservers file is used only by scripts like start-hbase.sh and
stop-hbase.sh. That is why the documentation refers to it with "if you
rely on ssh to start your daemons". These scripts execute the start or
stop daemon request over SSH. That is also why you need to have a list of
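The file itself is just one hostname per line; for example, a conf/regionservers for a three-node cluster (hostnames are placeholders):

```
rs1.example.com
rs2.example.com
rs3.example.com
```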
It is highly unlikely for a client to cause the server side to abort. It
is possible that the regionservers are aborting due to some other problem.
The regionservers will reject client connection requests if there is an RPC
version mismatch.
1.x and 0.98 client and servers have been tested to be
You should also set authentication to kerberos.
Enis
On Tue, Nov 3, 2015 at 9:00 PM, kumar r wrote:
> Yes I have set all this property,
>
> I got user ticket using kinit command and when trying to run *user_permission
> or grant *command in hbase shell, getting
Hi Artem,
The 1.0 API BufferedMutator should cover the use cases where HTableUtil or
HTable.setAutoFlush(false) was previously used.
BufferedMutator already groups the mutations per region server under the
hood (AsyncProcess) and sends the buffered mutations in the background.
There
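The buffering behavior can be sketched with a toy stand-in (a hypothetical simplification — the real BufferedMutator batches per region server and flushes asynchronously via AsyncProcess):

```java
import java.util.*;
import java.util.function.Consumer;

// Toy sketch of client-side write buffering: mutations accumulate in a
// buffer and are handed to a flush function in batches once the buffer
// passes a size threshold; close() drains whatever is left.
public class MiniBufferedMutator {
    private final List<String> buffer = new ArrayList<>();
    private final int flushSize;
    private final Consumer<List<String>> flusher; // e.g. one RPC per batch

    MiniBufferedMutator(int flushSize, Consumer<List<String>> flusher) {
        this.flushSize = flushSize;
        this.flusher = flusher;
    }

    void mutate(String put) {
        buffer.add(put);
        if (buffer.size() >= flushSize) flush();
    }

    void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    void close() { flush(); } // close() always drains the buffer
}
```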
Agreed that we should not change the declared interface for TRR in patch
releases. Ugly, but we can rethrow as RuntimeException or ignore in 1.1 and
before.
I think this is also a blocker:
https://issues.apache.org/jira/browse/HBASE-14474
Enis
On Wed, Sep 23, 2015 at 3:50 PM, Nick Dimiduk
The HBase Team is pleased to announce the immediate release of HBase
1.0.2.
Download it from your favorite Apache mirror [1] or maven repository.
HBase 1.0.2 is the next “patch” release in the 1.0.x release line and
supersedes all previous 1.0.x releases. According to HBase’s semantic
Congrats, well deserved.
Enis
On Fri, Aug 21, 2015 at 11:12 AM, Devaraj Das d...@hortonworks.com wrote:
Congratulations Stephen!
On Aug 20, 2015, at 7:10 PM, Andrew Purtell apurt...@apache.org wrote:
On behalf of the Apache HBase PMC, I am pleased to announce that Stephen
Jiang has
I've just seen the thread in question, and I also feel that action has to
be taken, because this type of behavior is unacceptable. It is also not the
first strike if my memory serves me.
Moderation is fine if we have volunteers. Otherwise +1 for a temporary ban.
Enis
On Tue, Jun 30, 2015 at
ITBLL requires at least 1M per mapper. You can run with fewer mappers and
numKeys = 1M.
Enis
On Thu, Jun 25, 2015 at 4:23 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
2015-06-24 22:29 GMT-04:00 Enis Söztutar enis@gmail.com:
Also, I tried to run
Slight correction: ITBLL needs numKeys to be a multiple of 1M. See the
javadoc and ascii art in the code.
Enis
On Thu, Jun 25, 2015 at 1:47 PM, Enis Söztutar enis@gmail.com wrote:
ITBLL requires at least 1M per mapper. You can run with fewer mappers and
numKeys = 1M.
Enis
Also, I tried to run IntegrationTestBigLinkedList and it fails:
2015-06-24 19:06:11,644 ERROR [main]
test.IntegrationTestBigLinkedList$Verify: Expected referenced count does
not match with actual referenced count. expected referenced=100
,actual=0
What are the command line arguments
:29 PM, Enis Söztutar enis@gmail.com wrote:
Also, I tried to run IntegrationTestBigLinkedList and it fails:
2015-06-24 19:06:11,644 ERROR [main]
test.IntegrationTestBigLinkedList$Verify: Expected referenced count does
not match with actual referenced count. expected referenced=100
Yeah, for coprocessors, what Andrew said. You have to make minor changes.
From your repo, I was able to build:
HW10676:hbase-deps-test$ ./build.sh
:compileJava
Download
https://repository.apache.org/content/repositories/orgapachehbase-1078/org/apache/hbase/hbase/1.1.0/hbase-1.1.0.pom
Download
I guess you should mention that the RC is created from branch-1.1.0, as
opposed to branch-1.1.
I was looking at
https://people.apache.org/~ndimiduk/1.0.0_1.1.0RC1_compat_report.html and
was surprised by the HTable warning about RegionLocator. Did you
cherry-pick the commit from master or
Here is my +1.
checked sigs, crcs
built the src tarball with hadoop 2.3, 2.4.0, 2.5.0, 2.5.1, 2.5.2, 2.6.0 and
2.7.0 (did not try 2.2)
run local mode
simple tests from shell
Build with downstreamer
checked dir layouts
checked jar files
checked version, tag,
Checked the documentation.
- a resolution to the RegionScanner interface change, if deemed necessary
Sorry, I confused RegionScanner with ResultScanner. RegionScanner is not a
Public class; it is only for coprocessors. I think we do not need a fix here.
On a totally unrelated note, I was going over the ThrottlingException
Here are my tests so far:
checked sigs, crcs
built the src tarball with hadoop 2.3, 2.4.0, 2.5.0, 2.5.1, 2.5.2 and 2.6.0.
The build with Hadoop-2.2.0 is broken, as noted in the previous mail. I don’t
think we should sink the RC for this.
run local mode
simple tests from shell
Build with downstreamer
checked dir
, wasn't sure why
people.apache.org wasn't picking up the new sub key. I've just uploaded to
pgp.mit.edu just now.
See also http://people.apache.org/~ndimiduk/KEY
On Mon, May 4, 2015 at 6:42 PM, Enis Söztutar enis@gmail.com wrote:
Nick did you upload your keys to MIT servers?
I am
Nick did you upload your keys to MIT servers?
I am not able to verify the sig.
gpg --list-keys
pub 4096R/8644EEB6 2014-03-11 [expires: 2016-04-14]
uid Nick Dimiduk ndimi...@apache.org
uid Nick Dimiduk ndimi...@gmail.com
sub 4096R/D2DCE494 2014-03-11
This is a nice topic. Let's put it in the ref guide.
HBase on Azure FS is GA, and there has already been some work toward
supporting HBase on the Hadoop native driver.
From this thread, my gathering is that HBase should run on HDFS, MaprFS,
IBM GPFS, Azure WASB (and maybe Isilon, etc.).
I had a
The build is broken with Hadoop-2.2 because mini-kdc is not found:
[ERROR] Failed to execute goal on project hbase-server: Could not resolve
dependencies for project org.apache.hbase:hbase-server:jar:1.1.0: Could not
find artifact org.apache.hadoop:hadoop-minikdc:jar:2.2
We are saying that 1.1
In case this is HBASE-11234, HDP-2.2 releases contain the fix.
Enis
On Thu, Apr 23, 2015 at 12:06 PM, Ted Yu yuzhih...@gmail.com wrote:
I think Dejan was referring to HBASE-11234
Cheers
On Thu, Apr 23, 2015 at 8:28 AM, Dejan Menges dejan.men...@gmail.com
wrote:
Hi,
This is a known
Moving this to the user mailing list; the issues mailing list is for jira issues.
Did you start zookeeper, and follow the guide at
https://hbase.apache.org/book.html#quickstart ?
Enis
On Tue, Apr 21, 2015 at 11:36 AM, Bo Fu b...@uchicago.edu wrote:
Hi,
I’m a beginner of HBase. I’m recently
It is also possible that HTTP resources are not being loaded. I run into
this daily because of an extension I use, which refuses to load unsafe
scripts on the HTTPS links I default to.
Usually, web sites also host the CSS and JavaScript files they refer to
instead of hot-linking them. Maybe
The configuration and logs seem normal. Agreed that it seems the shutdown
hook binding failed.
Do you see anything in the .out files? What about the classpath?
Enis
On Fri, Mar 27, 2015 at 2:23 AM, Dejan Menges dejan.men...@gmail.com
wrote:
Sorry, I meant to set it to $hbase.tmp.dir/local or
@Madeleine,
The folder gets cleaned regularly by a chore in the master. When a WAL file
is not needed any more for recovery purposes (when HBase can guarantee it
has flushed all the data in the WAL file), it is moved to the oldWALs
folder for archival. The log stays there until all other references
The HBase Team is pleased to announce the immediate release of HBase 1.0.0.
Download it from your favorite Apache mirror [1] or maven repository.
HBase 1.0.0 is the next stable release, and the start of semantically
versioned releases (see [2]).
The 1.0.0 release has three goals:
1) to lay a stable
Yes, this looks like HBASE-10499, but without more logs from the region
server it is hard to tell.
The next version of HDP-2.1 is already scheduled to contain HBASE-10499. If
you want, we can continue at the HDP forums.
Enis
On Tue, Feb 24, 2015 at 10:38 AM, Brian Jeltema
sharing. Everything is explicit now.
Enis
On Tue, Feb 17, 2015 at 11:50 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, Enis Söztutar
You wrote:
You are right that the constructor new HTable(Configuration, ..) will
share the underlying connection if same configuration object is used
Hi,
You are right that the constructor new HTable(Configuration, ..) will share
the underlying connection if the same configuration object is used. The
Connection is a heavyweight object that holds the zookeeper connection, rpc
client, socket connections to multiple region servers, the master, and the
We do have
Admin.getClusterStatus().getLoad(serverName).getInfoServerPort(), which
may be what you want.
Enis
On Wed, Jan 21, 2015 at 5:05 PM, Stack st...@duboce.net wrote:
On Wed, Jan 21, 2015 at 4:49 PM, Stack st...@duboce.net wrote:
On Wed, Jan 21, 2015 at 3:03 PM, Talat Uyarer
Online in this context is HBase cluster being online, not individual
regions. For the merge process, the regions go briefly offline similar to
how splits work. It should be on the order of seconds.
Enis
On Wed, Jan 21, 2015 at 10:26 AM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at
The HBase Team is pleased to announce the immediate release of HBase 0.99.2.
Download it from your favorite Apache mirror [1] or maven repository.
THIS RELEASE IS NOT INTENDED FOR PRODUCTION USE, and does not contain any
backwards or forwards compatibility guarantees (even within minor versions
The HBase Team is pleased to announce the immediate release of HBase 0.99.0.
Download it from your favorite Apache mirror [1] or maven repository.
THIS RELEASE IS NOT INTENDED FOR PRODUCTION USE, and does not contain any
backwards or forwards compatibility guarantees (even within minor versions
Agreed, this seems like an hdfs issue unless hbase itself does not close
the hfiles properly. But judging from the fact that you were able to
circumvent the problem by reducing the cache size, that does seem
unlikely.
I don't think the local block reader will be notified when a file/block
Hi Keegan,
Unfortunately, at the time of the module split in 0.96, we could not
completely decouple mapreduce classes from the server dependencies. I think
we actually need two modules to be extracted out: one is hbase-mapreduce
(probably a separate module from the client module) and hbase-storage for
Sorry I could not get on this sooner. I do typically run a list of tests
from my checklist to verify the release, which is why I sometimes do not
vote on releases. Let me do the dutiful for this.
Agreed with the general sentiment. Please reconsider.
Enis
On Mon, May 12, 2014 at 3:59 PM, Stack
Here is my late +1 for the release.
- Checked checksums
- Checked gpg signature (gpg --verify )
- Checked included documentation book.html, etc.
- Running unit tests
- Started in local mode, run LoadTestTool
- checked hadoop libs in h1 / h2
- build src with both hadoop 1 and 2
- checked
Hey,
Indeed it is a viable approach. Some HBase deployments use the
master-master replication model across DCs. The consistency semantics will
obviously depend on the application use case.
However, there is no out-of-the-box client to do across-DC requests. But
wrapping the HBase client in
Hi,
In the dev thread [1], HBase developers are considering dropping support
for Hadoop-1 in the future releases 0.99 and HBase-1.0. This is a heads up,
so that you can plan ahead if you are choosing to go with the 0.96.x and
0.98.x releases.
Hadoop-2.2 was released last October, and it is superior in
Indeed we need some pom mockery to be able to do that. It would be good
though.
Some history:
https://issues.apache.org/jira/browse/HBASE-5341
and related
https://issues.apache.org/jira/browse/HBASE-6929
Enis
On Tue, Mar 4, 2014 at 5:03 PM, James Taylor giacomotay...@gmail.com wrote:
That'd
From the log message, it seems that you did not run the upgrade.
Can you try with the instructions Ted sent?
Enis
On Wed, Mar 5, 2014 at 12:39 AM, Richard Chen cxd3...@gmail.com wrote:
Thanks for the suggestion. But when I looked at the JIRA, it shows
- Fix Version/s:0.98.0
winutils.exe is the native implementation for some of the utilities that
hdfs and hadoop clients require. For the hbase client and hadoop client to
work properly, you have to have it installed properly.
You can build it locally: http://wiki.apache.org/hadoop/Hadoop2OnWindows.
Enis
On Fri, Feb
We've also recently updated
http://hbase.apache.org/book/ops.capacity.html which contains similar
numbers, and some more details on the items to consider for sizing.
Enis
On Sat, Feb 8, 2014 at 10:12 PM, Ramu M S ramu.ma...@gmail.com wrote:
Thanks Lars.
We were in the process of building
You can find the manual for HDP here:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.5.0/bk_system-admin-guide/content/ch_hadoop-ha.html
I think you have to configure your hbase root dir and client side hdfs
configuration.
Enis
On Wed, Jan 22, 2014 at 12:22 AM, jaksky
Nice test!
There are a couple of things here:
(1) HFileReader reads only one file, whereas an HRegion reads multiple
files (into the KeyValueHeap) to do a merge scan. So, although there is
only one file, there is some overhead of doing a merge-sorted read from
multiple files in the region. For
If the split takes too long (longer than 30 secs), I would say you may have
too many store files in the region. Split has to write two tiny files per
store file. The other thing may be that the region has to be closed before
the split. Thus it has to do a flush. If it cannot complete the flush in time,
it
You can change the log4j configuration for your deployment as you see fit.
DRFA is however a better default for the general case.
You might also want to ensure that you are not running with DEBUG logging
enabled in a production cluster.
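For reference, a DRFA appender definition of the kind HBase ships in conf/log4j.properties looks like this (values are illustrative; check the file in your own distribution):

```properties
# Daily Rolling File Appender: one log file per day.
log4j.rootLogger=INFO,DRFA
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```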
Enis
On Mon, Oct 14, 2013 at 11:01 PM, sreenivasulu y
Hi guys,
I just wanted to give a heads up on upcoming bay area user and dev meetups
which will happen on the same day, October 24th. ( special thanks Stack for
pushing this.)
The user meetup will start at 6:30, and the talks scheduled so far are:
+ Steven Noels will talk about using the Lily
The HBase shell is just the JRuby shell with some custom methods imported.
Are you running with powershell or cmd? You can just boot up jruby to see
whether there is any difference there.
Enis
On Wed, Oct 2, 2013 at 7:03 AM, Sznajder ForMailingList
bs4mailingl...@gmail.com wrote:
Hi
I am
You can look at the HBase in Action book, which contains a whole chapter
on an example GIS system on HBase.
Enis
On Fri, Oct 4, 2013 at 1:01 AM, Adrien Mogenet adrien.moge...@gmail.com wrote:
If you mean insert and query spatial data, look at algorithms that are
distributed databases
Right now we do not have what you suggest.
Eric has created an issue for this:
https://issues.apache.org/jira/browse/HBASE-8016
I think it makes a lot of sense, especially enabling HRegion as a library
to work on top of shared hdfs and building a simple layer to embed the
client side, etc.
The
Congrats and welcome aboard.
On Wed, Sep 11, 2013 at 10:08 AM, Jimmy Xiang jxi...@cloudera.com wrote:
Congrats!
On Wed, Sep 11, 2013 at 9:54 AM, Stack st...@duboce.net wrote:
Hurray for Rajesh!
On Wed, Sep 11, 2013 at 9:17 AM, ramkrishna vasudevan
Hi,
Please join me in welcoming Nick as our new addition to the list of
committers. Nick is exceptionally good with user-facing issues, and has
done major contributions in mapreduce related areas, hive support, as well
as 0.96 issues and the new and shiny data types API.
Nick, as tradition, feel
As long as there is interest in 0.94, we will care for 0.94. However, when
0.96.0 comes out, it will be marked as the next stable release, so I expect
that we would promote newcomers to that branch.
Any committer can propose any branch and release candidate any time, so if
there are road blocks for
Hi,
Agreed with what Nick said. There is also an MSI based installation for
HBase as a part of HDP-1.3 package. You can check it out here:
http://hortonworks.com/products/hdp-windows/
Enis
On Tue, Aug 20, 2013 at 2:54 PM, Nick Dimiduk ndimi...@gmail.com wrote:
Hi Andrew,
I don't think the
I've seen a similar stack trace in some test as well, and opened the issue
https://issues.apache.org/jira/browse/HBASE-8912 for tracking this.
This looks like a problem in AssignmentManager that fails to recognize a
valid state transition, but I did not have the time to look into it
further.
Bryan,
3.6x improvement seems exciting. The ballpark difference between HBase scan
and hdfs scan is in that order, so it is expected I guess.
I plan to get back to the trunk patch, add more tests etc next week. In the
mean time, if you have any changes to the patch, pls attach the patch.
Enis
Hi,
Regarding running raw scans on top of Hfiles, you can try a version of the
patch attached at https://issues.apache.org/jira/browse/HBASE-8369, which
enables exactly this. However, the patch is for trunk.
In that, we open one region from snapshot files in each record reader, and
run a scan
Hi,
HDFS has two interfaces for durability: hflush and hsync:
Hflush() : Flush the data packet down the datanode pipeline. Wait for
ack’s.
Hsync() : Flush the data packet down the pipeline. Have datanodes execute
FSYNC equivalent. Wait for ack’s.
There is some work on adding a Durability API in
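The same two durability levels exist in plain java.io, which makes the distinction easy to demo locally. This is an analogy, not the HDFS API: flush() pushes buffered bytes toward the OS (like hflush getting data out of client buffers), while FileDescriptor.sync() additionally forces them to the storage device (like hsync's FSYNC on the datanodes).

```java
import java.io.*;

// Demo of the flush-vs-sync durability distinction using local files.
public class DurabilityDemo {
    public static void write(File f, byte[] data, boolean sync) throws IOException {
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(data);
            out.flush();                  // ~ hflush: data out of user-space buffers
            if (sync) out.getFD().sync(); // ~ hsync: force data to the device
        }
    }
}
```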
Hi,
Interesting use case. I think it depends on how many jobIds you expect to
have. If it is on the order of thousands, I would caution against going the
one-table-per-jobId approach, since for every table there is some master
overhead, as well as file structures in hdfs. If jobId's are