I remember one case where my browsers just hung fetching things from the
bootstrap CDN. I assumed it was due to some issue on their side because
it eventually resolved itself. I was a little surprised that those
resources weren't just being hosted at Apache (which would hopefully
have a 100%
`UserGroupInformation ugi =
UserGroupInformation.loginUserFromKeytabAndReturnUGI(...)`
and
`UserGroupInformation.setLoginUser(ugi)`
Should be sufficient. You may also need to use a `UGI.doAs()`, e.g.
ugi.doAs(new PrivilegedExceptionAction<Void>() {
public Void run() {
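For completeness, here is a minimal, self-contained sketch of that pattern (the
principal, keytab path, and the work inside run() are placeholders, not values
from this thread):

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder principal and keytab path.
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "hbase-client@EXAMPLE.COM", "/etc/security/keytabs/client.keytab");
    UserGroupInformation.setLoginUser(ugi);

    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      public Void run() throws Exception {
        // Create the HBase Connection and issue requests here; RPCs made
        // inside run() use the Kerberos credentials loaded from the keytab.
        return null;
      }
    });
  }
}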
I think you are worried about nothing, Ganesh.
If you want to drop (delete) the entire table, just disable and drop it
from the shell. This operation is not going to have a significant impact
on your cluster (save a few flush'es). This would only happen if you
have had recent writes to this
llion rows per minute and stores for 15 days. So around 1.125 billion
rows total.
On Fri, Feb 3, 2017 at 12:52 PM, Josh Elser<els...@apache.org> wrote:
I think you are worried about nothing, Ganesh.
If you want to drop (delete) the entire table, just disable and drop it
from the shell
Akshat Mahajan wrote:
b) Every table in our Hbase is associated with a unique collection of items,
and all tables exhibit the same column families (2 column families). After
Hadoop runs, we no longer require the tables, so we delete them. Individual
rows are never removed; instead, entire
st that it can spawn zookeeper transactions and even lock the zookeeper
> nodes. Is there any concern around zookeeper functionality when dropping
> large HBase tables.
>
> Thanks again for taking the time to respond to my questions!
>
> Ganesh
>
>
>
> On Fri, Feb 3, 2017 at 1:
e.printStackTrace();
return null;
}
}
}
Please guide me on how I can generate code for the C++ client and the server
which will be responsible for calling my DAL.
PS: I am using Java 8
HBase 1.1.2
Thanks
Manjeet
On Mon, Jan 23, 2017 at 10:26 PM, Josh Elser<els...@apache.org> wrot
(-cc dev)
Might you be able to be more specific in the context of your question?
What kind of requirements do you have?
Chetan Khatri wrote:
Hello Community,
I am working with HBase 1.2.4 , what would be the best approach to do
Incremental load from HBase to Hive ?
Thanks.
cached on the client? Also on the same topic,
is a Thrift server assisting this process in any shape or form, so as to make
its presence necessary?
Is there anything else that the Thrift server might be contributing
positively?
From: Josh Elser<els...@apache.
Would recommend that you brush up on your understanding of the HBase
architecture.
Clients do not receive table data from the HBase Master at any point.
This is purely a RegionServer operation.
http://hbase.apache.org/book.html#_architecture
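As a tiny illustration (the table and row names are made up), the standard
client read path below resolves the Region location from hbase:meta and then
issues the Get directly to the hosting RegionServer; the Master never serves
the data:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {
      // The Get is served by the RegionServer hosting the Region for "row1".
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println(result);
    }
  }
}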
jeff saremi wrote:
I'd like to understand if
.
From: Josh Elser<els...@apache.org>
Sent: Monday, January 30, 2017 1:05 PM
To: user@hbase.apache.org
Subject: Re: Performance: HBase Native Java Client versus Thrift Java client
Right you are, Jeff. The Master is just coordinating where these Regions
are hosted (among
Would suggest you look at the full context of that sentence.
*Higher caching values will enable faster scanners but will eat up more
memory and some calls of next may take longer and longer times when the
cache is empty*
When the caching value is large, you will have to block to fill the
res hbck -repair and it seemed more invasive on the
entire cluster health.)
Thanks,
Ganesh
On Sat, Feb 4, 2017 at 11:20 AM Josh Elser<els...@apache.org> wrote:
Ganesh,
Just drop the table. You are worried about nothing.
On Feb 3, 2017 16:51, "Ganesh Viswanathan"<gan...@gmail.com&
Gaurhari,
This is a publicly-archived mailing list. As such, I don't believe
anyone is going to be upset with intellectual property concerns.
If anything, you could take a look at the ASF's trademark policy[1].
This likely covers any sensitive issues.
[1]
Manjeet Singh wrote:
I have come to know about Thrift, but as per my understanding the Thrift
client and server should be in the same language; Thrift only gives direct
access to other languages (correct me if I am wrong)
You are incorrect. One of the features of Thrift is that the client and
server can
I would suggest that you ask vendor-specific questions via the vendor's
channel in the future.
Add the following repository to your Maven project[1] and use the correct
version for the artifact (e.g. 1.1.2.2.4.2.0-258) to pull in the correct
dependencies for your project.
[1]
Sean Busbey wrote:
This would be a very big breaking change in release numbering that goes
against our compatibility guidelines. We only drop support for Java
versions on major releases.
If we want features that require newer jdks sooner, we should make major
releases sooner.
+1 On this
The default configuration is (likely) only using one thread for
performing compactions (really, one thread for small and one thread for
big compactions). As such, yes, compactions in one RegionServer would be
performed serially.
But, don't forget, you probably have multiple RegionServers
Did some quick searches out of curiosity which state that unix
filesystem permissions should be sufficient (the hbase user would not
need to be in the hdfs group).
Is the permission on /var/run/hadoop-hdfs set correctly? (hbase user
could do that same `ls`)
Nick Dimiduk wrote:
That closing
Hi,
I don't believe all versions of Javadoc are published on the website.
Something similar to the following should build it for you locally.
1. git checkout rel/1.1.1
2. `mvn clean install -DskipTests javadoc:aggregate site assembly:single`
Rajeshkumar J wrote:
Hi,
We are moving from
In hindsight, you're probably right, Ted. I was just copying from the
book and forgot to remove that execution :)
Ted Yu wrote:
If I am not mistaken, the 'assembly:single' goal is not needed for building
the site.
Cheers
On Sun, Nov 6, 2016 at 12:57 PM, Josh Elser<josh.el...@gmail.
eers
On Sun, Nov 6, 2016 at 8:44 PM, Josh Elser<josh.el...@gmail.com>
wrote:
In hindsight, you're probably right, Ted. I was just copying from
the
book
and forgot to remove that execution :)
Ted Yu wrote:
If I am not mistaken, the 'assembly:single' goal is not needed for
building
the s
Thanks, all. I'm looking forward to continuing to work with you all!
Nick Dimiduk wrote:
On behalf of the Apache HBase PMC, I am pleased to announce that Josh Elser
has accepted the PMC's invitation to become a committer on the project. We
appreciate all of Josh's generous contributions thus
ZooKeeper servers should only be deployed in numbers that can form
majorities: 1, 3, 5, etc. If you have only two ZooKeeper servers, a majority
cannot be formed once either one fails, so the second server adds no fault
tolerance.
Would recommend that you use only one ZK server if you only have two nodes.
Vincent Fontana wrote:
I started
Behind the scenes, the ClientScanner is buffering results from the
previous RPC. Ignoring multiple RegionServers for now, the caching value
denotes the number of records that were fetched by the ClientScanner in
an RPC. When the buffered results are consumed by your client, a new RPC
will be
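A small sketch of tuning this per-scan (the value 500 is arbitrary and the
Table handle is assumed to already be open):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanCachingSketch {
  static void scanWithCaching(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setCaching(500); // rows fetched per RPC and buffered by the ClientScanner
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // Consume rows; a new RPC is issued when the client-side buffer empties.
      }
    }
  }
}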
Just another C# library which can be used with Apache Phoenix:
https://github.com/Azure/hdinsight-phoenix-sharp
Dima says it best in how to determine which to use.
Manjeet Singh wrote:
Hi All,
I have to find out which way of querying HBase will give the best results;
the options are as below
assigning Zero value for this property
On Thu, Dec 29, 2016 at 1:33 AM, Josh Elser<els...@apache.org> wrote:
Most likely, since you gave a nonsensical value, HBase used a default
value instead of the one you provided. Since you have not shared the
version of HBase which you are using, I would rec
Ganesh Viswanathan wrote:
3) When all the nodes in the cluster were restarted with Ambari, locality
dropped to ~13% and Hbase was almost non-responsive. Only triggering a
manual major compaction seems to help improve the locality after this. But
the data-locality increase is very gradual (about
Yes, the expectation is that when using the Phoenix secondary-indexing
feature, you also must use Phoenix to maintain the accuracy of the
indices with the data table.
However, even without secondary-indexing, why would you not use Phoenix
APIs to update? If there are deficiencies in the
, Dec 27, 2016 at 8:05 PM, Josh Elser<els...@apache.org> wrote:
hbase.client.scanner.timeout.period is a timeout specifically for RPCs
that come from the HBase Scanner classes (e.g. ClientScanner) while
hbase.rpc.timeout is the default timeout for any RPC. I b
hbase.client.scanner.timeout.period is a timeout specifically for RPCs
that come from the HBase Scanner classes (e.g. ClientScanner) while
hbase.rpc.timeout is the default timeout for any RPC. I believe that the
hbase.client.scanner.timeout.period is also used by the RegionServers to
define
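As a sketch, both properties can be raised on the client-side Configuration
(the two-minute value is arbitrary):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TimeoutConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Timeout applied to scanner RPCs issued by the ClientScanner.
    conf.setInt("hbase.client.scanner.timeout.period", 120000);
    // Default timeout applied to any other HBase RPC.
    conf.setInt("hbase.rpc.timeout", 120000);
    // Pass 'conf' to ConnectionFactory.createConnection(conf) afterwards.
  }
}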
Hi Anders,
Your investigation is surprising to me! I would guess that it is
unintended that the auth_to_local rules would not be applied, and that
the realm removal is just done as a "convenience".
If you have the interest in fixing up the code, I'd be happy to review
it and help shepherd
Margus -- have you found/read
https://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
I think you're probably chasing a red-herring in the ZK logs. Most
likely, it's an issue that needs to be addressed in the HBase
configuration/tuning side.
If it's happening nightly, I'd guess that
You may also be experiencing HDFS trying to re-replicate the blocks from
the datanode you decommissioned? I'm not sure if your DN decommission
process would have done that already.
I'm not aware of anything that you can pull off the shelf to maintain
locality. Is decommissioning nodes that
With Apache HBase 1.0-based applications, you must use Protocol Buffers
2.5 in its standard package (e.g. com.google.protobuf). If you want to
use another version of protobuf, you would have to shade and relocate
that dependency.
In newer versions of HBase (2.0 I believe it will land), does
Oops! Thanks for the shaded-client version correction, Sean!
Sean Busbey wrote:
If upgrading is an option, for HBase 1.1+ applications you can use a
newer Protocol Buffers by relying on hbase-shaded-client as a
dependency.
On Sun, Mar 19, 2017 at 12:04 PM, Josh Elser<els...@apache.org>
If it's helpful to state it in generic terms: specifying a range of
HBase timestamps is *only* a post-filter (server-side) and *never* a
primary search criteria.
In other words, searching by the HBase timestamp is a full-table scan
(exhaustive search). While the timestamp can be nice for certain
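To make that concrete, a small sketch follows (the one-hour window is
arbitrary); the time range trims which cells come back, but the scan still
visits every row in the table:

import java.io.IOException;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class TimeRangeScanSketch {
  static ResultScanner scanLastHour(Table table) throws IOException {
    long now = System.currentTimeMillis();
    Scan scan = new Scan();
    // Server-side post-filter only: every row is still examined.
    scan.setTimeRange(now - 3600 * 1000L, now);
    return table.getScanner(scan);
  }
}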
Use jstack on the AM or Mapper to figure out what it's stuck doing.
FWIW, `conf.set("hbase.zookeeper.quorum", "localhost");` will not work
if you have more than one node.
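For example, a minimal sketch with a multi-node quorum (hostnames are
placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuorumConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Comma-separated list of every ZooKeeper host in the quorum.
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
  }
}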
evelina dumitrescu wrote:
The Hadoop version that I use is 2.7.1 and the Hbase version is 1.2.5.
I can do any operation
Might you try upgrading to the latest on the HBase 1.2 line? ESAPI has
been removed from HBase because it is incompatibly licensed:
https://issues.apache.org/jira/browse/HBASE-16317
There was also https://issues.apache.org/jira/browse/HBASE-16298 which
seems to be your reported issue, but it
Minwoo -- any chance you could share some details about that Hadoop
issue? Something on JIRA?
On Tue, Aug 15, 2017 at 8:11 PM, Kang Minwoo wrote:
> My team member tries to resolve this issue.
> And He found it is a Hadoop Issue.
> He tries to test purging FD.
>
> Best
TaskMonitor is probably relying on the fact that it's invoked inside of
the RegionServer/Master process (note the maven module and the fact that
it's a template file for the webUI).
You would have to invoke an RPC to the server to get the list of Tasks.
I'm not sure if such an RPC already
HBCK can do this, something like `hbase hbck -summary `.
This wouldn't be easily machine-consumable, but the content should be there.
On 7/7/17 3:48 PM, jeff saremi wrote:
Is there a command line option that would give us a list of offline Regions for
a table? or a list of all regions and
On 7/12/17 11:03 AM, Robert Yokota wrote:
In case anyone is interested, I wrote a blog revisiting the HBase
application archetypes as presented by Lars George and Jonathan Hsieh:
https://yokota.blog/2017/07/12/hbase-application-archetypes-redux/
Nice write-up! Thanks for sharing.
I'm pleased to announce yet another PMC addition in the form of Devaraj
Das. One of the "old guard" in the broader Hadoop umbrella, he's also a
long-standing member in our community. We all look forward to the
continued contributions and project leadership.
Please join me in welcoming
On behalf of the Apache HBase PMC, I'm pleased to announce that Mike
Drob has accepted the PMC's invitation to become a committer.
Mike has been doing some great things lately in the project and this is
a simple way that we can express our thanks. As my boss likes to tell
me: the reward for a
Some specificity (as I still remember it too vividly)
https://issues.apache.org/jira/browse/HADOOP-11710
Our Sean got this one fixed for 2.6.1, and that would be why using HDFS
transparent encryption with 2.6.0 will flat-out not work :)
On 8/18/17 1:35 PM, Ted Yu wrote:
Please see the 'Hadoop
That's a good explanation, Kevin! It's also good to keep in mind that
the ResultScanner implementation is not reading data in parallel. You
have many servers to read data from, but you're only communicating with
one at a time.
Also, remember that HBase stores its data in sorted-order. The
You issue a Get request for the rowkey you're looking for (checking for
existence) or you use a Scanner to read all rowkeys (data) in the table.
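A small sketch of both approaches (the Table handle and rowkey are assumed to
exist):

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RowKeyLookupSketch {
  // Point lookup: check whether a specific rowkey exists.
  static boolean rowExists(Table table, String rowKey) throws Exception {
    return table.exists(new Get(Bytes.toBytes(rowKey)));
  }

  // Full scan: walk every rowkey in the table.
  static void printAllRowKeys(Table table) throws Exception {
    try (ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result r : scanner) {
        System.out.println(Bytes.toString(r.getRow()));
      }
    }
  }
}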
Developer Brasil wrote:
How to find the rowkey of a table in HBASE
Let me follow along there too. I ran into this one while rebase'ing some
changes on Master. There wasn't a way (that I saw) to create the
necessary objects that didn't trigger a deprecation warning :)
On 6/19/17 11:55 PM, Biju N wrote:
1. HBASE-18241
The assumption is that one of those three copies of the HDFS blocks
comprising your HFiles is stored on the local datanode.
That is what the major compaction process guarantees.
On 5/26/17 9:59 AM, Rajeshkumar J wrote:
I have seen in the code that while creating input splits they are also
Lalit,
Typically, questions about "vendor products" are best reserved for their
respective forums. This question is not relevant to the Apache HBase
community.
Please consider asking your question on
https://community.hortonworks.com/ instead.
- Josh
On 9/19/17 2:17 AM, Lalit Jadhav
You may find Apache Phoenix to be of use as you explore your requirements.
Phoenix provides a much higher-level API which provides logic to build
composite rowkeys (e.g. primary key constraints over multiple columns)
for you automatically. This would help you iterate much faster as you
better
FWIW, last I looked into this,
https://issues.apache.org/jira/browse/HBASE-15154 would be the long-term
solution to the Master also requiring the MaxDirectMemorySize
configuration (even when it is not acting as a RegionServer).
Obviously, it's a lower priority fix as there is a simple
Please be patient when waiting for a response. Apache HBase is made up
of community volunteers. Your expectation should be on the order of many
hours (although, we strive to be faster).
I'd recommend that you begin by looking at a newer release than 0.94.2
as there have likely been over 50
e
indexes until your heart's content (space permitting of course).
-Original Message-
From: Andrzej [mailto:borucki_andr...@wp.pl]
Sent: Wednesday, August 30, 2017 11:03 AM
To: user@hbase.apache.org
Subject: Re: Fast search by any column
W dniu 30.08.2017 o 19:54, Dave Birdsall pisze:
As
Redundant power supplies are probably your next-best bet for running
without fsync (hsync). Something that can prevent a node from going down
hard will mitigate this issue for the most part.
The importance of this is often a multi-variable equation. The small
chance for data loss that exists
Thanks for sharing, Sahil.
A couple of thoughts at a glance:
* You should add a LICENSE to your project so people know how they can
(re)use your project.
* You have a dependency against 1.0.3, and I see at least one thing that
will not work against 2.0.0. Would be great if you wrote up what
The most reliable way (read-as, likely to continue working across HBase
releases) would probably be to implement a custom ReplicationEndpoint.
This would abstract away the logic behind "tail'ing of WALs" and give
you some nicer APIs to leverage. Beware that this would still be a
rather
Yep, you got it :)
Easy doc fix we can get in place.
On 5/14/18 2:25 PM, Kevin Risden wrote:
Looks like this might have triggered
https://issues.apache.org/jira/browse/HBASE-20581
Kevin Risden
On Mon, May 14, 2018 at 8:46 AM, Kevin Risden wrote:
We are using HDP 2.5
You shouldn't be putting the phoenix-client.jar on the HBase server
classpath.
There is the phoenix-server.jar, which is specifically built
to be included in HBase (to avoid issues such as these).
Please remove all phoenix-client jars and provide the
phoenix-5.0.0-server jar
There is no such artifact with the groupId & artifactId
org.apache.hbase:hbase for Apache HBase. I assume it would be the same for CDH.
You need the test jar from hbase-server if you want the
HBaseTestingUtility class.
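Once that test jar is on the test classpath, a rough sketch of typical usage
looks like this (the table and family names are placeholders):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(); // in-process ZK, HDFS and HBase
    try {
      Table table = util.createTable(TableName.valueOf("t"), Bytes.toBytes("cf"));
      // ... exercise the code under test against 'table' ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}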
On 1/5/18 10:23 AM, Debraj Manna wrote:
Cross posting from
-
Hey Kevin!
Looks like you got some good changes in here.
IMO, the HBase Thrift2 "implementation" makes more sense to me (I'm sure
there was a reason for having HTTP be involved at one point, but Thrift
today has the ability to do all of this RPC work for us). I'm not sure
what the HBase API
Hi Andrew,
Yes. The answer is, of course, that you should see consistent results
from HBase if there are no mutations in flight to that table. Whether
you're reading "current" or "back-in-time", as long as you're not
dealing with raw scans (where compactions may persist delete
tombstones),
This sounds like something I've seen in the past but was unable to get
past. I think I was seeing it when the hbase-shaded-client was on the
classpath. Could you see if the presence of that artifact makes a
difference one way or another?
On 2/22/18 12:52 PM, sahil aggarwal wrote:
Yes, it is
The Apache Phoenix PMC is happy to announce the release of Phoenix
5.0.0-alpha for Apache Hadoop 3 and Apache HBase 2.0. The release is
available for download at here[1].
Apache Phoenix enables OLTP and operational analytics in Hadoop for low
latency applications by combining the power of
Use `mvn package`, not `compile`.
On 6/21/18 10:41 AM, Andrzej wrote:
W dniu 21.06.2018 o 19:01, Andrzej pisze:
Is there any alternative for controlling HBase quickly from C++ sources?
Or is there a Java client?
The Native Client C++ (HBASE-14850) sources are old and mismatched with the
folly library (Futures.h)
Now I
Nothing in here indicates why the RegionServers actually failed.
If the RegionServer crashed, there is very likely a log message at
FATAL. You want to find that to understand what actually caused it.
On 8/13/18 4:22 PM, Adep, Karankumar (ETW - FLEX) wrote:
Hi,
Region Server Crashes with
(-cc user@hbase, +bcc user@hbase)
How about the rest of the stacktrace? You didn't share the cause.
On 8/20/18 1:35 PM, Mich Talebzadeh wrote:
This was working fine before my Hbase upgrade to 1.2.6
I have Hbase version 1.2.6 and Phoenix
version apache-phoenix-4.8.1-HBase-1.2-bin
This
Thanks, Umesh. Seems like you're saying it's not a problem now, but
you're not sure if it would become a problem. Regardless of that, it's a
goal to not be version-specific (and thus, we can have generic hbck-v1
and hbck-v2 tools). LMK if I misread, please :)
One more thought, it would be
As I've been trying to explain in Slack:
1. Are you including the salt in the data that you are writing, such
that you are spreading the data across all Regions per their boundaries?
Or, as I think you are, just creating split points with this arbitrary
"salt" and not including it when you
If it was related to maxClientCnxns, you would see sessions being
torn-down and recreated in HBase on that node, as well as a clear
message in the ZK server log that it's denying requests because the
number of outstanding connections from that host exceeds the limit.
ConnectionLoss is a
on
rowkey
create 'TEST_TABLE','si',{ NAME => 'si', COMPRESSION => 'SNAPPY' }
alter 'TEST_TABLE', { NAME => 'si', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
alter 'TEST_TABLE', {NAME => 'si', COMPRESSION => 'SNAPPY' }
Thanks
Manjeet Singh
On Fri, Aug 31, 2018 at 6:49 PM, Josh Elser w
1. Yes
2. HDFS NN pressure, read slow down, general poor performance
3. Default configuration is weekly, if you don't explicitly know some
reasons why weekly doesn't work, this is what you should follow ;)
4. No
I would be surprised if you need to do anything special with S3, but I
don't know
, 2018 at 9:11 PM Josh Elser wrote:
As I've been trying to explain in Slack:
1. Are you including the salt in the data that you are writing, such
that you are spreading the data across all Regions per their boundaries?
Or, as I think you are, just creating split points with this arbitrary
"
There's the Region Normalizer which I'd presume would be in an HBase 1.4
release
https://issues.apache.org/jira/browse/HBASE-13103
On 8/30/18 3:50 PM, Austin Heyne wrote:
I'm using HBase 1.4.4 (AWS/EMR) and I'm looking for an automated
solution because I believe there are going to be a few
Please be patient in getting a response to questions you post to this
list, as we're all volunteers.
On 9/8/18 2:16 AM, onmstester onmstester wrote:
Hi, Currently I'm using Apache Cassandra as the backend for my RESTful application. Having a cluster of 30 nodes (each having 12 cores, 64 GB RAM and 6
You might also need hbase.wal.meta_provider=filesystem (if you haven't
already realized that)
On 7/2/18 5:43 PM, Andrey Elenskiy wrote:
hbase.wal.provider = filesystem
Seems to fix it, but would be nice to actually try the fanout wal with
hadoop 2.8.4.
On Mon, Jul 2, 2018 at 1:03 PM, Andrey
Unless you are including the date+time in the rowKey yourself, no.
HBase has exactly one index for fast lookups, and that is the rowKey.
Any other query operation is (essentially) an exhaustive search.
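If "last row" means last in rowkey (sort) order, a reversed scan is one way to
get it; if it means the most recently written row, you still need to encode the
time in the rowkey yourself. A rough sketch of the former:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class LastRowByKeySketch {
  // Returns the last row in rowkey order, not the most recently written row.
  static Result lastRow(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setReversed(true); // walk the table from the highest rowkey downward
    scan.setCaching(1);
    try (ResultScanner scanner = table.getScanner(scan)) {
      return scanner.next();
    }
  }
}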
On 7/11/18 12:07 PM, Ming wrote:
Hi, all,
Is there a way to get the last row
There was an HBase RPC connection from a client at the host (identified
by the IP:port you redacted). IIRC, the "read count=-1" is essentially
saying that the server tried to read from the socket and read no data
which means that the client has hung up. There were 33 other outstanding
HBase
(-to dev, +bcc dev, +to user)
Hi Stefano,
Moving your question over to the user@ mailing list as it's not so much
about development of HBase, instead development when using HBase.
Q1: what do you mean by the "latest field"? Are you talking about the
latest version of a Cell for a column
Hi folks!
A gentle reminder that the HBaseCon 2018 call for proposals remains open
for just one more week -- until April 16th. The event is held in San
Jose, CA on June 18th.
We've had some great proposals already submitted, but we look forward
to many, many more. All levels of complexity,
Oh, and the most important part:
Submit your talks here: https://easychair.org/conferences/?conf=hbasecon2018
On 4/9/18 10:26 PM, Josh Elser wrote:
Hi folks!
A gentle reminder that the HBaseCon 2018 call for proposals remains open
for just one more week -- until April 16th. The event is held
The HBaseCon 2018 call for proposals is scheduled to close Monday, April
16th. If you have an idea for a talk, make sure you get it submitted ASAP!
Submit your talks at https://easychair.org/conferences/?conf=hbasecon2018
If you need more information, please see
We've received some requests to extend the CFP a few more days. The new
day of closing will be this Friday 2018/04/20, end of day.
Please keep them coming in!
On 4/15/18 9:23 PM, Josh Elser wrote:
The HBaseCon 2018 call for proposals is scheduled to close Monday, April
16th. If you have
This question is better asked on the Phoenix users list.
The phoenix-client.jar is the one you need and is unique from the
phoenix-core jar. Logging frameworks are likely not easily
relocated/shaded to avoid issues which is why you're running into this.
Can you provide the error you're
Yes, you can bulk load into a table which already contains data.
The ideal case is that you generate HFiles which map exactly to the
distribution of Regions on your HBase cluster. However, given that we
know that Region boundaries can change, the bulk load client
(LoadIncrementalHFiles) has
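A rough sketch of that client-side call, assuming an HBase 1.x-era API (the
paths and table name are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName name = TableName.valueOf("my_table");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
      // If Region boundaries changed since the HFiles were written, the loader
      // splits the files to fit the current Regions before handing them off.
      loader.doBulkLoad(new Path("/tmp/hfiles"), admin, table, locator);
    }
  }
}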
All,
It's my pleasure to announce the 2.1.0 release of the Apache HBase
Thirdparty project. This project is used by the Apache HBase project to
encapsulate a number of dependencies that HBase relies upon and ensure
that they are properly isolated from HBase users, e.g. Google Protocol
All,
I'm pleased to announce HBaseCon 2018 which is to be held in San Jose,
CA on June 18th.
A call for proposals is available now[1], and we encourage all HBase
users and developers to contribute a talk and plan to attend the event
(however, event registration is not yet available).
ion=3.4.6 but in pom we have 3.4.10. I am
gonna
try rebuilding it with 3.4.10.
On 23 February 2018 at 00:29, Josh Elser <els...@apache.org> wrote:
This sounds like something I've seen in the past but was unable to
get
past. I think I was seeing it when the hbase-shaded-client was on
the
CVE-2018-8025 describes an issue in Apache HBase that affects the
optional "Thrift 1" API server when running over HTTP. There is a
race-condition which could lead to authenticated sessions being
incorrectly applied to users, e.g. one authenticated user would be
considered a different user or
First off: You're on EMR? What version of HBase are you using? (Maybe
Zach or Stephen can help here too). Can you figure out the
RegionServer(s) which are stuck opening these PENDING_OPEN regions? Can
you get a jstack/thread-dump from those RS's?
In terms of how the system is supposed to work:
Please do not cross-post lists. I've dropped dev@hbase.
This doesn't seem like a replication issue. As you have described it, it
reads more like a data-correctness issue. However, I'd guess that it's
more related to timestamps rather than be an issue on your cluster.
If there was no error,
's getting these references from.
-Austin
On 09/30/2018 02:38 PM, Josh Elser wrote:
First off: You're on EMR? What version of HBase are you using? (Maybe
Zach or Stephen can help here too). Can you figure out the
RegionServer(s) which are stuck opening these PENDING_OPEN regions?
Can you get a jstack/thread
previous assignments.
Like I said, I don't recommend manually rewriting the hbase:meta table
but it did work for us.
Thanks,
Austin
On 10/01/2018 01:28 PM, Josh Elser wrote:
That seems pretty wrong. The Master should know that old RS's are no
longer alive and not try to assign regions to them
not mistaken the Normalizer will keep the same number of regions,
but will make their sizes uniform, right? So if the goal is to reduce the number of
regions, the Normalizer might not help?
JMS
Le ven. 31 août 2018 à 09:16, Josh Elser a écrit :
There's the Region Normalizer which I'd presume would
That thread is a part of the ThreadPool that HConnection uses and that
thread is simply waiting for a task to execute. It's not indicative of
any problem.
See how the thread is inside of a call to LinkedBlockingQueue#poll()
On 9/28/18 3:02 AM, Lalit Jadhav wrote:
While load testing in the
Hi Davis,
I don't think we have a release planned yet for the hbase-connectors
library. I know our mighty Stack has been doing lots of the heavy
lifting around lately.
If you're interested/willing, I'm sure we'd all be grateful if you have
the cycles to help out testing what we have in the
CVE-2019-0212: HBase REST Server incorrect user authorization
Description: In all previously released Apache HBase 2.x versions,
authorization was incorrectly applied to users of the HBase REST server.
Requests sent to the HBase REST server were executed with the
permissions of the REST
There are just *two weeks* remaining to submit abstracts for NoSQL Day
2019, in Washington D.C. on May 21st. Abstracts are due April 19th.
https://dataworkssummit.com/nosql-day-2019/
Abstracts don't need to be more than a paragraph or two. Please take the time
sooner rather than later to submit your
Looks like your RegionServer process might have died if you can't
connect to its RPC port.
Did you look in the RegionServer log for any mention of an ERROR or
FATAL log message?
On 4/4/19 8:20 AM, melank...@synergentl.com wrote:
I have installed Hadoop single node