Congrats, Misty, and thanks for all your efforts!
On Fri, Sep 22, 2017 at 3:57 PM Umesh Agashe wrote:
> Congratulations Misty!
>
>
>
> On Fri, Sep 22, 2017 at 11:41 AM, Esteban Gutierrez
> wrote:
>
> > That's awesome! Congratulations, Misty!
> >
> >
>
A relogin from the keytab will happen in
RpcClientImpl.Connection.handleSaslConnectionFailure(). So if the Thrift
server fails to establish a connection to a regionserver to relay a client
request, it should perform a relogin from the configured keytab. This is a
bit indirect though, and there
On behalf of the Apache HBase PMC, I am pleased to announce that Ashu
Pachauri has accepted the PMC's invitation to become a committer on the
project. We appreciate all of Ashu's generous contributions thus far and
look forward to his continued involvement.
Congratulations and welcome, Ashu!
+1 to EOL, and thanks to Andrew for all of the RM'ing.
On Mon, Apr 10, 2017 at 12:27 PM Ted Yu wrote:
> +1
>
> Andrew has done tremendous work.
>
> On Mon, Apr 10, 2017 at 12:17 PM, Mikhail Antonov
> wrote:
>
> > +1 to EOL 0.98.
> >
> > Thanks Andrew
On Tue, Apr 4, 2017 at 11:00 AM Stack wrote:
>
> What's the recommended approach to avoid or reduce the delay between when
> > HBase starts sending the response and when the application can act on it?
>
>
> As is, Cells are indivisible as are 'responses' when we promise a
>
>
> A jira sounds like a good idea. Even if this is buried somewhere, it's
> clearly not prominent enough.
>
>
+1. Clarifying this in the javadoc and reference guide seems like a good
idea.
Did you try throwing CoprocessorException or making your custom exception a
subclass of it? These should be carried through to the client.
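A minimal sketch of such a subclass (the class name and use case are made up for illustration):

import org.apache.hadoop.hbase.coprocessor.CoprocessorException;

// Hypothetical custom exception; because it extends CoprocessorException,
// it should be propagated back to the client rather than swallowed.
public class QuotaExceededException extends CoprocessorException {
  public QuotaExceededException(String msg) {
    super(msg);
  }
}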
Yes, for exceptions outside of this hierarchy, there is no way to know if
the exception is recoverable or not, so the safe route is chosen and either
the
>
> I'm not deeply familiar with the AssignmentManager. I see when we process
> split rollbacks in onRegionSplit() we only call regionOffline() on
> daughters if they are known to exist. However when processing merge
> rollbacks in the else case of onRegionMerge() we unconditionally call
>
>
> The behavior: Looks like failed split/compaction rollback: row(s) in META
> without HRegionInfo, regions deployed without valid meta entries (at
> first), regions on HDFS without valid meta entries (later, after RS
> carrying them are killed by chaos), holes in the region chain leading to
>
Welcome Mikhail!
On Thu, May 26, 2016 at 11:47 AM Ted Yu wrote:
> Congratulations, Mikhail !
>
> On Thu, May 26, 2016 at 11:30 AM, Andrew Purtell
> wrote:
>
> > On behalf of the Apache HBase PMC I am pleased to announce that Mikhail
> > Antonov has
The effect of setting this to false is that, if any of your coprocessors
throw unexpected exceptions, instead of aborting, the region server will
log an error and remove the coprocessor from the list of loaded
coprocessors on the region / region server / master.
This allows HBase to continue
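The snippet above doesn't name the setting; assuming it refers to hbase.coprocessor.abortonerror, a sketch of the hbase-site.xml entry would be:

<property>
  <name>hbase.coprocessor.abortonerror</name>
  <!-- false: log and unload a misbehaving coprocessor instead of aborting the server -->
  <value>false</value>
</property>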
Welcome, Jerry!
On Wed, Apr 1, 2015 at 12:04 PM Richard Ding pigu...@gmail.com wrote:
Congratulations!
On Wed, Apr 1, 2015 at 11:28 AM, Demai Ni nid...@gmail.com wrote:
Jerry, congratulations! well deserved
On Wed, Apr 1, 2015 at 11:23 AM, Esteban Gutierrez este...@cloudera.com
Proving it to yourself is sometimes the hardest part!
On Mon, Mar 2, 2015 at 2:11 PM Nick Dimiduk ndimi...@gmail.com wrote:
Gary to the rescue! Does it still count as being right even if you cannot
prove it for yourself? ;)
On Mon, Mar 2, 2015 at 2:06 PM, Gary Helmling ghelml...@gmail.com
Sorry Kristoffer, but I believe my previous statement was mistaken. I
cannot find a location where the timestamp is taken into account at the
StoreFile level. I thought the above statement about metadata from the
HFile headers was correct, but I cannot locate the code that takes such
Fantastic work! Congrats everyone!
On Tue Feb 24 2015 at 9:45:24 AM Esteban Gutierrez este...@cloudera.com
wrote:
Wow! Congrats, all!
--
Cloudera, Inc.
On Tue, Feb 24, 2015 at 9:41 AM, Jerry He jerry...@gmail.com wrote:
Congratulations on the milestone!
2) is more expensive than 1).
I'm wondering if we could use Compaction Coprocessor for 2)? HBaseHUT
needs to be able to grab N rows and merge them into 1, delete those N rows,
and just write that 1 new row. This N could be several thousand rows.
Could Compaction Coprocessor really be used
by DefaultCompactor:
ScanType scanType = request.isAllFiles() ?
    ScanType.COMPACT_DROP_DELETES : ScanType.COMPACT_RETAIN_DELETES;
BTW ScanType is currently marked InterfaceAudience.Private
Should it be marked LimitedPrivate ?
Cheers
On Fri, Jan 9, 2015 at 12:19 PM, Gary
Yes, you can use the org.apache.hadoop.hbase.util.VersionInfo class.
From Java code, you can use VersionInfo.getVersion(). From shell
scripts, you can just run 'hbase version' and parse the output.
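A minimal sketch of the Java approach (the surrounding main() is just scaffolding):

import org.apache.hadoop.hbase.util.VersionInfo;

public class PrintHBaseVersion {
  public static void main(String[] args) {
    // Prints the HBase version string the client jar was built with.
    System.out.println(VersionInfo.getVersion());
  }
}

And from the command line:

hbase version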
On Wed, Nov 12, 2014 at 1:37 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Hi,
Is there
Hi Tom,
First off, are you talking about a region endpoint (vs. master
endpoint or region server endpoint)?
As long as you are talking about a region endpoint, the endpoint
coprocessor can be configured as a table coprocessor, the same as a
RegionObserver. You can see an example and description
from HDFS, or just the table-based
one?
--Tom
On Mon, Oct 27, 2014 at 3:31 PM, Gary Helmling ghelml...@gmail.com
wrote:
Hi Tom,
First off, are you talking about a region endpoint (vs. master
endpoint or region server endpoint)?
As long as you are talking about a region endpoint
Hi all,
I'm happy to announce the 0.3.0 release of Tephra.
This release is a renaming of the project from Continuuity Tephra to
Cask Tephra, and includes the following changes:
* All packages have changed from com.continuuity.tephra to co.cask.tephra
* The Maven group ID has changed from
What do you have HBASE_HEAPSIZE set to in hbase-env.sh? Is it
possible that you're overcommitting memory and the instance is
swapping? Just a shot in the dark, but I see that the m3.2xlarge
instance has 30G of memory vs. 15G for c3.2xlarge.
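For reference, a sketch of the relevant line in hbase-env.sh (the value is in MB; 8000 is only an example, sized to leave headroom for the OS and other processes):

# Maximum heap for HBase daemons, in MB.
export HBASE_HEAPSIZE=8000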
On Wed, Sep 17, 2014 at 3:28 PM, Ted Yu
Authentication is only performed during RPC connection setup. So
there isn't really a concept of token expiration for an existing RPC
connection. The connection will be authenticated (will not expire)
for as long as it's held open. When it's closed and re-opened, it
should pick up the latest
Hi all,
I'm happy to announce the 0.2.1 release of Tephra.
Tephra provides globally consistent transactions on top of Apache
HBase by leveraging HBase's native data versioning to provide
multi-versioned concurrency control (MVCC) for transactional reads and
writes. With MVCC capability, each
Hi Parth,
The code that you outline here would just return credentials containing
tokens that have already been obtained for the given user.
As I understand it, what you are trying to do is have Storm do secure
impersonation in order to obtain a delegation token on behalf of another
user, which
I don’t think we need to support older versions of HBase. However there is
one thing that still bugs me. How does token renewal work here? Generally
in HDFS I have seen that you have to pass in the renewer user as an
argument when you obtain a token. Here as renew user is not passed I am
Hi Jianwei,
You may also want to take a look at the generic client transaction API
being proposed in HBASE-11447:
https://issues.apache.org/jira/browse/HBASE-11447
I think it would be useful to have the Themis perspective there, and
whether the proposed API meets your needs and requirements.
Hi Cheney,
Did you obtain kerberos credentials before running your program, either by
calling kinit before running the program, or by calling
UserGroupInformation.loginFromKeytab() in your code?
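A sketch of the keytab-based login; the principal and keytab path are placeholders, and the method referred to above corresponds to UserGroupInformation.loginUserFromKeytab() in the Hadoop API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Point the Hadoop security layer at the (kerberos-enabled) configuration.
    UserGroupInformation.setConfiguration(conf);
    // Log in from a keytab instead of relying on a prior kinit.
    UserGroupInformation.loginUserFromKeytab(
        "client/host.example.com@EXAMPLE.COM",    // placeholder principal
        "/etc/security/keytabs/client.keytab");   // placeholder keytab path
    // ... open HBase connections after the login succeeds ...
  }
}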
On Tue, Jul 1, 2014 at 8:44 AM, Cheney Sun sun.che...@gmail.com wrote:
Hello all,
I have setup a
On Wed, Jul 2, 2014 at 1:20 AM, Gary Helmling ghelml...@gmail.com
wrote:
Hi Cheney,
Did you obtain kerberos credentials before running your program, either
by
calling kinit before running the program, or by calling
UserGroupInformation.loginFromKeytab() in your code
Hi Demai,
Yes, even when using hbase.security.authentication=simple in 0.94, you need
to use SecureRpcEngine. The default WritableRpcEngine does not pass the
username to the server at all, which can obviously cause problems for
authorization.
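A sketch of the hbase-site.xml setting that selects the secure engine on a 0.94 cluster (set on both clients and servers, and assuming a security-enabled build as discussed elsewhere in this thread):

<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>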
--gh
On Fri, Jun 20, 2014 at 10:21 AM, Demai Ni
As Anoop described, region observers don't use ZK directly. Can you
describe more of what you are trying to do in your coprocessor -- how / why
you are connecting to zookeeper, or even provide sample code from your
coprocessor implementation?
On Fri, Mar 21, 2014 at 10:43 AM, Anoop John
For HBase 0.94, you need a version of HBase built with the security
profile to get SecureRpcEngine and other security classes. I'm not sure
that the published releases on maven central actually include this.
However, it's easy to build yourself; just add -Psecurity to the mvn
command line to
It looks like how the CREATE permission is applied changed with HBASE-6188,
which removed the concept of a table owner. Prior to HBASE-6188, the
disable/enable table permission checks required either:
* ADMIN permission
or
* the user is the table owner AND has the CREATE permission
I believe
Yes, the pre/post method calls for the Observer hooks (RegionObserver for
postPut()) are executed synchronously on the RPC calling path. So the
RegionServer will not return the response to the client until your
postPut() method has returned. In general, this means that for best
performance you
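A minimal sketch of such a hook, using the 0.94-era signature (the class is hypothetical; the point is that anything slow here delays the client's response):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class CheapPostPutObserver extends BaseRegionObserver {
  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Put put, WALEdit edit, boolean writeToWAL) throws IOException {
    // Runs on the RPC handler thread: the client does not see its response
    // until this returns, so keep the work minimal or hand it off to a queue.
  }
}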
To grant privileges to a group, just prefix the group name with '@' in the
grant command. For example, to grant global read/write privileges to the
group mygroup in the shell, you would use:
grant '@mygroup', 'RW'
On Wed, Jan 8, 2014 at 7:59 PM, takeshi takeshi.m...@gmail.com wrote:
Hi All,
Hi Andy,
I'm afraid you will have to ask MapR then what is supported. MapR M7 is a
proprietary application. It is _not_ Apache HBase.
On Fri, Dec 13, 2013 at 2:27 PM, ados1...@gmail.com ados1...@gmail.com wrote:
Am using MapR M7 HBase distribution (
Hi Asaf,
Thank you for your response. The RPC server in my application is a singleton
instance. It is started in a region observer and works as a single server
in the HRegionServer, just like the RPC servers brought up in the RS's main()
method. It is not attached to any table or regions,
You are welcome to not use coprocessors.
IMHO, the current implementation is DOA, primarily because it runs in the
same JVM as the RS.
(I'll have to see if I can open a JIRA and make comments.)
There has been a JIRA for out of process coprocessors for quite some time:
The coprocessor class is of course still in memory on the
regionserver,
That was kinda my point.
You can't remove the class from the RS until you do a rolling restart.
Yes, understood.
However, your original statement that "You can't remove a coprocessor"
needed some clarification, in
You can't remove a coprocessor.
Well, you can, but that would require a rolling restart.
It still exists and is still loaded.
Assuming we are talking about RegionObserver coprocessors here, when a
coprocessor throws an exception (other than IOException), it is either:
a) removed from the
Your DemoObserver is not being invoked because DemoEndpoint is opening a
scanner directly on the region:
RegionCoprocessorEnvironment env =
    (RegionCoprocessorEnvironment) getEnvironment();
InternalScanner scanner = env.getRegion().getScanner(scan);
The RegionObserver.postScannerNext() hook
Congrats, Rajesh!
On Wed, Sep 11, 2013 at 11:09 AM, Enis Söztutar e...@apache.org wrote:
Congrats and welcome aboard.
On Wed, Sep 11, 2013 at 10:08 AM, Jimmy Xiang jxi...@cloudera.com wrote:
Congrats!
On Wed, Sep 11, 2013 at 9:54 AM, Stack st...@duboce.net wrote:
Hurray for
Ben George,
The arguments you provide when configuring the coprocessor should be
present in the Configuration object exposed through
CoprocessorEnvironment. So, for example, in your RegionObserver.start()
method, you should be able to do:
public void start(CoprocessorEnvironment e) throws
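A sketch of reading such an argument inside start(); the configuration key my.coprocessor.arg is made up:

import java.io.IOException;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;

public class ConfiguredObserver extends BaseRegionObserver {
  private int threshold;

  @Override
  public void start(CoprocessorEnvironment e) throws IOException {
    // Arguments supplied when configuring the coprocessor surface in the
    // environment's Configuration.
    threshold = e.getConfiguration().getInt("my.coprocessor.arg", 100);
  }
}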
To further isolate the problem, try doing some simple commands from the
hbase shell after obtaining kerberos credentials:
1) kinit
2) hbase shell
3) in hbase shell:
create 'testtable', 'f'
put 'testtable', 'r1', 'f:col1', 'val1'
get 'testtable', 'r1'
If these all work, then the HBase code
Endpoint coprocessors can be loaded on a single table. They are no
different from RegionObservers in this regard. Both are instantiated per
region by RegionCoprocessorHost. You should be able to load the
coprocessor by setting it as a table attribute. If it doesn't seem to be
loading, check
to restart RS. It would be nice to have APIs
to load the Endpoint coprocessor dynamically.
Kim
On Fri, Jul 12, 2013 at 9:18 AM, Gary Helmling ghelml...@gmail.com
wrote:
Endpoint coprocessors can be loaded on a single table. They are no
different from RegionObservers in this regard. Both
Is the HMaster process running correctly on the cluster? Between the
missing cluster ID and meta region not being available, it looks like
HMaster may not have fully initialized.
Alternately, if HMaster is running correctly, did you override the default
value for zookeeper.znode.parent in your
Does your NameAndDistance class implement org.apache.hadoop.io.Writable?
If so, it _should_ be serialized correctly. There was a past issue
handling generic types in coprocessor endpoints, but that was fixed way
back (long before 0.94.2). So, as far as I know, this should all be
working,
A single class can act as both a RegionObserver and an endpoint. The
Base... classes are just there for convenience.
To implement both, for example, you could:
1) have your class extend BaseRegionObserver, override postPut(), etc
2) define an interface that extends CoprocessorProtocol. the
As others mention HBASE-6870 is about coprocessorExec() always scanning the
full .META. table to determine region locations. Is this what you mean or
are you talking about your coprocessor always scanning your full user table?
If you want to limit the scan within regions in your user table,
Hi Rami,
One thing to note for RegionObservers, is that each table region gets its
own instance of each configured coprocessor. So if your cluster has N
regions per region server, with your RegionObserver loaded on all tables,
then each region server will have N instances of your coprocessor.
Hi Jeff,
Yeah that is pretty bad. User should definitely be implementing equals()
and hashCode(). Thanks for tracking this down and reporting it.
I opened https://issues.apache.org/jira/browse/HBASE-8222
Gary
On Fri, Mar 29, 2013 at 11:41 AM, Jeff Whiting je...@qualtrics.com wrote:
After
To expand on what Himanshu said, your endpoint is doing an unbounded scan
on the region, so with a region with a lot of rows it's taking more than 60
seconds to run to the region end, which is why the client side of the call
is timing out. In addition you're building up an in memory list of all
I profiled it and getStartKeysInRange is taking all the time. Recall I'm
running 0.92.1. I think these factors are consistent with
https://issues.apache.org/jira/browse/HBASE-5492, which was fixed in
0.92.3.
We'll be upgrading soon, so I'll be able to verify the perf issue is gone.
So should we close HBASE-5492 as a dup?
Yes, that would make sense. Done.
I'm running some experiments to understand where to use coprocessors. One
interesting scenario is computing distinct values. I ran performance tests
with two distinct value implementations: one using endpoint coprocessors,
and one using just scans (computing distinct values client side only).
Check your logs for whether your end-point coprocessor is hitting
zookeeper on every invocation to figure out the region start key.
Unfortunately (at least last time I checked), the default way of invoking
an end point coprocessor doesn't use the meta cache. You can go through a
combination
I see this is HBASE-6870. I thought that sounded familiar.
On Mon, Mar 4, 2013 at 6:23 PM, Gary Helmling ghelml...@gmail.com wrote:
Check your logs for whether your end-point coprocessor is hitting
zookeeper on every invocation to figure out the region start key.
Unfortunately (at least
Congrats, Sergey! Great work!
On Fri, Feb 22, 2013 at 2:10 PM, Enis Söztutar enis@gmail.com wrote:
Congrats. Well deserved.
On Fri, Feb 22, 2013 at 1:57 PM, Andrew Purtell apurt...@apache.org
wrote:
Congratulations Sergey!
On Fri, Feb 22, 2013 at 1:39 PM, Ted Yu
You can also use the service-level authorization support to control which
users/groups are allowed to connect at all. It's configured via
hbase-policy.xml in the conf/ directory and functions similarly to the HDFS
implementation:
http://hadoop.apache.org/docs/r1.0.4/service_level_auth.html
But
Congrats, Devaraj!
On Thu, Feb 7, 2013 at 5:36 AM, Nicolas Liochon nkey...@gmail.com wrote:
Congrats, Devaraj!
On Thu, Feb 7, 2013 at 2:26 PM, Marcos Ortiz mlor...@uci.cu wrote:
Congratulations, Devaraj.
On 02/07/2013 02:20 AM, Lars George wrote:
Congrats! Welcome aboard.
If you're writing a junit test that spins up a mini cluster to test the
coprocessor, then there's no need to deploy the jar into HDFS just for
testing. The coprocessor class should already be on your test classpath.
In your test's setup method, you just need to either: a) add the
coprocessor
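A sketch of option (a), loading the observer through the mini cluster's configuration (the class name used below is a placeholder for the coprocessor under test):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MyObserverTest {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    Configuration conf = TEST_UTIL.getConfiguration();
    // Load the coprocessor on every user region in the mini cluster;
    // replace with your coprocessor's fully-qualified class name.
    conf.set("hbase.coprocessor.region.classes", "com.example.MyObserver");
    TEST_UTIL.startMiniCluster();
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}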
Will the CoprocessorEnvironment reference in the start() method be
instanceof RegionCoprocessorEnvironment too?
No. It will be a reference to RegionEnvironment. This is not a public class,
so you won't be able to do the casting.
Since RegionEnvironment implements RegionCoprocessorEnvironment,
Congrats Matteo and Chunhui! Keep up the good work!
On Wed, Jan 2, 2013 at 5:24 PM, Manoj Babu manoj...@gmail.com wrote:
Congratulations Matteo and Chunhui!
Cheers!
Manoj.
On Thu, Jan 3, 2013 at 5:18 AM, lars hofhansl lhofha...@yahoo.com wrote:
Congrats Matteo and Chunhui, glad to
I'm not familiar with happybase, but with the recent conversion of
coprocessor endpoints to protocol buffer services in trunk, it should be
possible to implement calling endpoints from other languages that protobufs
support. There is a ticket to enable endpoint calls over the REST gateway:
It sounds like you want to have the coprocessor expose its own
metrics as part of the HBase metrics? If that's right, can you
describe some of the metrics you might want to expose?
We could possibly provide hooks to publish metrics through the
CoprocessorEnvironment, which could then get pushed
have HBASE-6505, which allows
you to share state between RegionObservers (and Endpoints) within the
same RegionServer.
-- Lars
From: Gary Helmling ghelml...@gmail.com
To: user@hbase.apache.org
Sent: Thursday, August 30, 2012 1:59 PM
Subject: Re: Allocating
Endpoint coprocessors are loaded and run within the HBase RegionServer
process. Your endpoint coprocessors will be running on the region
servers hosting the regions for the table(s) on which the coprocessor
is configured.
So the way to allocate more memory is by setting either HBASE_HEAPSIZE
or
I could repost the up and running with secure hadoop one. But it's
kind of out of date at this point. I remember, back when the site was
still up, getting some comments on it about things that had already
changed in the 0.20.20X releases.
I can take a look and see how bad it is.
On Thu, May
What is the best way to profile some co-processor code (running on the
regionserver)? If you have done it successfully, what tips can you
offer, and what unexpected problems did you encounter?
It depends on what exactly you want to look at, but ultimately I don't
think it's too different from
org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException:
org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching
handler for protocol org.apache.hadoop.hbase.coprocessor.AggregateProtocol
in region transactions,,1335223974116.e9190687f8a74b5083b39b6e5bd55705.
The
Hi Anil,
Does this mean that
hbase.coprocessor.region.classes is not a client side configuration? I am
just curious to know why it was not working when i was setting the conf
through code.
That is correct. This is a server-side only configuration. Setting
it on the client side will have
Currently endpoint coprocessors are only callable via the java client.
Please do open a JIRA describing what you would like to see here. If
you'd like to try working up a patch, that would be even better!
On Mon, Mar 19, 2012 at 11:03 AM, Ben West bwsithspaw...@yahoo.com wrote:
Hi all,
We
and Happy New Year to you guys,
Royston (and Tom).
(HBase 0.92, Hadoop 1.0)
-Original Message-
From: Gary Helmling [mailto:ghelml...@gmail.com]
Sent: 23 December 2011 18:06
To: user@hbase.apache.org
Subject: Re: AggregateProtocol Help
Hi Tom,
The test
Hi Tom,
The test code is not really the best guide for configuration.
To enable the AggregateProtocol on all of your tables, add this to the
hbase-site.xml for the servers in your cluster:
<property>
  <name>hbase.coprocessor.user.region.classes</name>
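The snippet is cut off here; a sketch of the complete property, assuming only the standard aggregate endpoint is being loaded:

<property>
  <name>hbase.coprocessor.user.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>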
Yes, HBASE-4510 broke running on one set of conditions, now the fix
in HBASE-4680 seems to have broken another.
Are the safe mode related changes from HBASE-4510 really necessary
right now? Would it be possible to wait for HDFS-2413, when we have a
real API for checking safe mode? Or do we
At the same time, it might be simpler to get your customers/operators
to fix their ntp setups.
Not having synchronized clocks throughout the cluster will cause
problems in other areas as well. It will make it very difficult to
correlate events in different server logs when troubleshooting
Also, make sure that you're either setting a stop row on the scan, or
if you're using a filter, try wrapping it in a WhileMatchFilter. This
tells the scanner it can stop as soon as the filter starts rejecting
rows. Otherwise you can wind up getting back just the data you
expect, but still
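A sketch of both options (row keys are made up; pick whichever fits the access pattern):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.filter.WhileMatchFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedScanExample {
  public static Scan buildScan() {
    Scan scan = new Scan();
    scan.setStartRow(Bytes.toBytes("user123|"));
    // Option 1: bound the scan explicitly with a stop row.
    scan.setStopRow(Bytes.toBytes("user123~"));
    // Option 2: wrap the filter so the scanner stops at the first rejected row.
    scan.setFilter(new WhileMatchFilter(new PrefixFilter(Bytes.toBytes("user123|"))));
    return scan;
  }
}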
I think the key part is this:
java.io.IOException: All datanodes 10.33.100.74:50010 are bad. Aborting...
at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2680)
at
Working and consistent hostname resolution is a requirement for running an
HBase cluster. Usually the easiest way to do this is with DNS. You can
also use a hosts file, but you need to make sure the hosts file includes all
cluster hosts and have a way of synchronizing it throughout the cluster.
Since this is fairly off-topic at this point, I'll keep it short. The
simple
rule for Dynamo goes like this: if (R + W > N and W >= Quorum), then you're
guaranteed a consistent result always. You get eventual consistency if
W >= Quorum. If W < Quorum, then you can get inconsistent data that must be
That led me to question: Should a RegionObserver be allowed to interfere
with the system tables?
Yes.
This is critical for the security implementation, for example. We need to
perform authorization checks on access to -ROOT- and .META. If this were
disallowed, then security couldn't be
Hi Lars,
Should then all RPC triggered by a coprocessor be avoided (and hence the use
of the env-provided HTableInterface be generally discouraged)?
I would generally avoid making synchronous RPC calls within a direct
coprocessor call path (blocking a handler thread waiting on the response).
On Wed, Aug 10, 2011 at 10:46 PM, lars hofhansl lhofha...@yahoo.com wrote:
I guess there could either be a {pre|post}Multi on RegionObserver (although
HRegionServer.multi does a lot of munging).
Or maybe a general {pre|post}Request with no arguments - in which case it
would be at least
On Thu, Aug 11, 2011 at 2:20 PM, Allan Yan hailun...@gmail.com wrote:
Hello,
1. Scan s = new Scan();
2. s.addFamily(myFamily);
3. s.setStartRow(startRow);
4. Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
BinaryPrefixComparator(startRow));
5. s.setFilter(rowFilter);
Thanks Lars. That's a remnant of HBASE-3065.
Yes please open an issue with the patch. We'll get it in.
On Fri, Aug 5, 2011 at 7:01 PM, lars hofhansl lhofha...@yahoo.com wrote:
I noticed in HBase trunk I always get this error in the shell:
ERROR: undefined method `getRecoverableZookeeper'
Is it possible that you have mismatched versions of either the hbase jar or
hadoop jar on the ycsb client versus the servers? In almost all cases where
I've run into mysterious rpc hangs right off the bat it's been attributable
to forgetting to update a jar file or an older version still being
Coprocessors are currently only in trunk. They will be in the 0.92 release
once we get that out. There's no set date for that, but personally I'll be
trying to help get it out sooner than later.
On Mon, Jul 25, 2011 at 7:37 AM, Michel Segel michael_se...@hotmail.com wrote:
Which release(s)
:
We currently run on the cloudera stack. Would this be something that we can
pull, compile, and plug right into that stack?
- Original Message -
From: Gary Helmling ghelml...@gmail.com
To: user@hbase.apache.org
Sent: Monday, July 25, 2011 2:02:50 PM
Subject: Re: Fanning out hbase
I wasn't at the day-after presentation, but I believe these are the slides?
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B2c-FWyLSJBCN2E5MTdmOGMtY2U5NS00NmEwLWE2NmItZTYxOTI0MTJmMzU5&hl=en_US
On Tue, Jul 19, 2011 at 10:29 AM, Stack st...@duboce.net wrote:
Here is the issue:
All excellent points here in terms of tuning! For the higher-level question
about using a table as a queue, I just wanted to add in a link to the Lily
guys' rowlog library, since it does exactly that:
http://www.lilyproject.org/lily/about/playground/hbaserowlog.html
On Tue, Jul 19, 2011 at
Hi Claudio,
The Hadoop 0.20-security-append branch is what we're using to develop HBase
security features (since those need both Hadoop security and append).
It's a mashup of two different Apache Hadoop branches -- 0.20.203 for
security and 0.20-append for the append support. To my knowledge
On Mon, Jul 18, 2011 at 9:25 AM, David Capwell dcapw...@yahoo-inc.com wrote:
HBase does work on Hadoop with Security, but you will need the following in
your hbase-site.xml
<property>
  <name>hbase.master.keytab.file</name>
  <value>/path/to/keytab/hbase.keytab</value>
</property>
<property>
Hi Bill,
The current security code supports per-column-qualifier ACLs, though not the
pattern matching approach you describe. It's simply an exact match on
column qualifier.
As an alternative (which would work with the current code), you could
segment each set of access patterns to a separate
Hi Francis,
First a word of warning -- Hadoop 0.20.203 does not include the append
support that HBase needs to avoid data loss in the case of region server
failure. I'd _strongly_ recommend you look at running CDH3 (which contains
both append support and security) for the moment. There may be
Hi Oleg,
A TTL configuration will apply whether you use only 1 version or many
versions. If all the KeyValue timestamps in a row are older than the
configured TTL, then the row is effectively deleted at the next major
compaction.
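For example (a sketch; the table and family names are made up), a seven-day TTL can be set in seconds from the shell when creating the table:

create 'events', {NAME => 'f', VERSIONS => 1, TTL => 604800}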
It sounds like the TTL functionality will do exactly what you
Yes, there's no separate insert/update distinction in HBase. Both are
simply handled as Puts.
For the hooks exposed to coprocessors during regular data operations, see:
http://svn.apache.org/viewvc/hbase/trunk/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java?view=markup
Hi Joerg,
Try changing the table attribute name to COPROCESSOR$1 -- it is currently
case sensitive (we should probably change that).
After doing that, look for lines like the following in the region server
log:
Load coprocessor ... from HTD of tablename successfully.
or
attribute '...' has
I'm now using CDH3u0 on a 16-node cluster (hdp0-hdp15).
The configuration is below.
hdp0: zk + master + region + nn + dn + jt + tt
hdp1: zk + master + region + snn + dn + tt
hdp2: zk + region + dn + tt
hdp3 to hdp15: region + dn + tt
I would also look at the memory configuration for your