Hello,
My region servers die at regular intervals for unknown reasons.
I restarted HBase and region servers continued to die.
I worked around it by deleting the old WALs.
Now I'm going through the logs and trying to find the cause.
But I do not know where to look.
Please let me know what I need to watch in
HBase and HDFS.
Tail and post HBase logs from the
region server and from the active HBase master to help debug the root cause.
> On Mon, Feb 6, 2017 at 5:27 PM Kang Minwoo <minwoo.k...@outlook.com>
> wrote:
>
> > Hello,
> >
> >
> > My region servers die at regular intervals for unknown reasons.
In short, there is already connection sharing underneath.
FYI
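A minimal standalone sketch of that sharing pattern: one process-wide instance created lazily and handed out to every thread. The `Connection` class below is a placeholder for `org.apache.hadoop.hbase.client.Connection` so the demo runs without a cluster; in real code the holder would call `ConnectionFactory.createConnection(conf)`, and each thread would open a short-lived `Table` from the shared connection and close it after use.

```java
public class SharedConnectionDemo {
    // Placeholder standing in for org.apache.hadoop.hbase.client.Connection,
    // which is heavyweight and thread-safe, so one per process is the norm.
    static class Connection { }

    // Initialization-on-demand holder: the JVM guarantees the static
    // initializer runs exactly once, even under concurrent first access.
    private static class Holder {
        // Real code: ConnectionFactory.createConnection(conf)
        static final Connection INSTANCE = new Connection();
    }

    public static Connection getConnection() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every caller, on any thread, sees the same shared instance.
        System.out.println(getConnection() == getConnection()); // true
    }
}
```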
On Fri, Feb 3, 2017 at 10:45 PM, Kang Minwoo <minwoo.k...@outlook.com>
wrote:
> Good morning.
>
>
> I'm using hbase-client version 1.2.4.
>
> My client environment is multithreaded.
>
> I shared a connection, but I w
My region server dies at regular intervals for unknown reasons.
The 'Premature EOF from inputStream' log was at INFO level - it may not be
critical.
Please pastebin more of region server log when you reply.
Was there long pause prior to 2017-02-08 11:08:11,878 ?
Thanks
On Thu, Feb 9, 2017 at 5:59 PM, Kang
region server and master around this time ?
Please also check hdfs health.
> On Feb 9, 2017, at 3:44 AM, Kang Minwoo <minwoo.k...@outlook.com> wrote:
>
> The DataNode hit a java.io.IOException: Premature EOF from inputStream
> error.
>
> This error seems to have killed the region server.
o open it.
Pastebin the relevant snippet of region server log pertaining to the
attempted open of the region.
Thanks
On Tue, Feb 7, 2017 at 7:17 PM, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> Yes. I agree with you.
> But I can not upgrade right away.
>
> The problem is that
When do you get this Exception?
On Tuesday, August 30, 2016, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> Hello HBase users.
>
>
> While I used the HBase client library in Java, I got an
> OutOfOrderScannerNextException.
>
> Here is the stack trace.
>
>
the "hbase.client.rpc.compressor" option.
My HBase Java client works well now.
Thanks a lot for your comment; it was helpful for debugging.
I have a suggestion: if HBase showed the raw exception (not wrapped), it would be
more helpful for debugging.
Yours sincerely,
Minwoo
____
From:
To: user@hbase.apache.org
Subject: Re: How to deal OutOfOrderScannerNextException
Any reason to not use the 1.2.2 client library? You're likely hitting a
compatibility issue.
On Tuesday, August 30, 2016, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> Hi Dima Spivak,
>
>
> Thanks for taking an interest in my problem.
Gotcha. Unfortunately, we don't guarantee compatibility between that client
and the HBase server you're running, so the only way to solve the issue
will probably be to upgrade your client dependencies.
On Tuesday, August 30, 2016, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> Because I have a
Hello HBase users.
While I used the HBase client library in Java, I got an
OutOfOrderScannerNextException.
Here is the stack trace.
--
java.lang.RuntimeException: org.apache.hadoop.hbase.DoNotRetryIOException:
Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
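As the exception message itself hints, this typically appears when the client times out mid-scan, retries, and then calls next() with a call sequence the server no longer expects. One common mitigation is to fetch fewer rows per next() RPC so each round trip completes within the timeout; a sketch of the hbase-site.xml entry (the value is illustrative, not a recommendation):

```xml
<property>
  <!-- Rows fetched per scanner next() RPC; smaller values keep each
       round trip short at the cost of more RPCs. -->
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>
```

The same knob can also be set per scan in client code via Scan.setCaching(int).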
losing
(CloseRegionHandler.java#L110)?
Which release are you using ?
Maybe related: HBASE-13592
On Sun, Mar 19, 2017 at 8:22 PM, Kang Minwoo <minwoo.k...@outlook.com>
wrote:
> Yes, it happened in my cluster.
>
>
> [RegionServer LOG]
>
>
org
Subject: Re: Why IOException occur when region server is closing
(CloseRegionHandler.java#L110)?
See HBASE-4270
Did you see this happen in your cluster ?
If so, mind sharing related log snippets ?
Cheers
On Sun, Mar 19, 2017 at 7:50 PM, Kang Minwoo <minwoo.k...@outlook.com>
wrote:
Hello!
In this code
(https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java#L110),
a region server can throw an IOException while it is closing.
Why does an IOException occur here?
If I want to know the specific reason, where should I look?
Hi, I am Minwoo.
I am interested in the HBase architecture,
so I read Architecting HBase Applications.
The book says that reducing HBase's dependency on ZooKeeper is work in progress
for HBase 2.0.
I want to know the reason.
Why is HBase 2.0 working to reduce its dependency on ZooKeeper?
Hi, I am Minwoo.
I am interested in the HBase architecture,
so these days I am reading Architecting HBase Applications.
A team member of mine is trying to resolve this issue,
and he found that it is a Hadoop issue.
He is testing purging the FD.
Best regards,
Minwoo Kang
From: Kang Minwoo <minwoo.k...@outlook.com>
Sent: Friday, August 11, 2017 11:18:58 AM
To: user@hbase.apache.org
Subject:
Hello, HBase Users.
These days, my team is upgrading HBase to version 1.2.6 and testing it in our
product.
In the meantime, my team is improving fault handling for real machine faults.
My servers use JBOD, so my team decided to use the DataNode volume failure
threshold feature in HDFS.
My team set an option that is data
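For reference, the HDFS feature referred to here is controlled by dfs.datanode.failed.volumes.tolerated; a sketch of the hdfs-site.xml entry (the value 1 is only an example):

```xml
<property>
  <!-- Number of volumes that may fail before the DataNode shuts itself down.
       The default is 0: any volume failure stops the DataNode. -->
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>1</value>
</property>
```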
Hello, HBase Users.
While watching the HMaster log, I found NullPointerException logs.
2017-07-06 09:05:02,579 DEBUG [,1498445640728_ChoreService_1]
cleaner.CleanerChore: Removing: hdfs://*** from archive
2017-07-06 09:05:02,585 ERROR [,1498445640728_ChoreService_2]
replicators = queues.getListOfReplicators();
for (String replicator: replicators) {
replicators seems to be null.
Can you log a JIRA (and attach redacted log if possible) ?
Cheers
On Thu, Jul 6, 2017 at 5:10 PM, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> I am using
I am using 1.2.5 (revision=d7b05f79dee10e0ada614765bb354b93d615a157)
Yes, I see NPE repeatedly.
This occurs every minute.
Should I fix it?
Best regards,
Minwoo Kang
From: Kang Minwoo <minwoo.k...@outlook.com>
Sent: Thursday, July 6, 2017 9:29:35 AM
Hi, Users.
I store a lot of logs in HBase.
However, reading the logs is too slow compared to Hive ORC files.
I know that HBase is slower than the Hive ORC format,
but the problem is that it is far too slow:
HBase is about 6 times slower.
Is there a good way to speed up HBase's reads?
noticeable
exceptions.
Best Regards,
Yu
On 23 May 2018 at 14:19, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> @Duo Zhang
> This means that you're writing too fast and memstore has reached its upper
> limit. Is the flush and compaction fine at RS side?
>
> -> No, flush t
In HRegion#internalFlushCacheAndCommit
there is the following code:

synchronized (this) {
  notifyAll(); // FindBugs NN_NAKED_NOTIFY
}

One question:
where is the lock acquired?
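To answer the question directly: the lock is acquired by the synchronized (this) block itself. Entering the block takes the HRegion object's intrinsic monitor, which is exactly what notifyAll() requires (hence the FindBugs naked-notify suppression rather than a missing-lock bug). A JDK-only standalone sketch showing that notifyAll() is only legal while that monitor is held:

```java
public class MonitorDemo {
    // Calling notifyAll() without owning the monitor fails at runtime.
    static boolean throwsWithoutMonitor() {
        Object lock = new Object();
        try {
            lock.notifyAll();
            return false;
        } catch (IllegalMonitorStateException expected) {
            return true;
        }
    }

    // Entering synchronized(lock) acquires the monitor, making notifyAll()
    // legal -- the same thing the synchronized (this) block in
    // internalFlushCacheAndCommit does for the region object.
    static boolean legalWithMonitor() {
        Object lock = new Object();
        synchronized (lock) {
            lock.notifyAll();
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(throwsWithoutMonitor()); // true
        System.out.println(legalWithMonitor());     // true
    }
}
```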
Best regards,
Minwoo Kang
From: Kang Minwoo <minwo
This means the memstore has reached its upper
limit. Are the flush and compaction fine at the RS side?
2018-05-23 10:20 GMT+08:00 Kang Minwoo <minwoo.k...@outlook.com>:
> attach client exception and stacktrace.
>
> I've looked more.
> It seems to be the reason why it takes 1290 seconds to flush in the Regi
src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L2424-L2508
Best regards,
Minwoo Kang
보낸 사람: Kang Minwoo <minwoo.k...@outlook.com>
보낸 날짜: 2018년 5월 23일 수요일 15:16
받는 사람: user@hbase.apache.org
제목: Re: can not write to HBase
I am using salted row keys.
Hello, Users
My HBase client stops working after printing the following logs.
Call exception, tries=23, retries=35, started=291277 ms ago, cancelled=false,
msg=row '{row}' on table '{table}' at region={region}, hostname={hostname},
seqNum=100353531
There are no special logs in the Master and
2018-05-23 8:17 GMT+08:00 Kang Minwoo <minwoo.k...@outlook.com>:
> Hello, Users
>
> My HBase client does not work after print the following logs.
>
> Call exception, tries=23, retries=35, started=291277 ms ago,
> cancelled=false, msg=row '{row}' on table '{table}' at region=
Do you have more than
15-20 regions for that table?
Sent from my iPhone
> On May 22, 2018, at 9:52 PM, Kang Minwoo <minwoo.k...@outlook.com> wrote:
>
> I think the HBase flush is too slow,
> so the memstore reached its upper limit.
>
> The flush took about 30 min.
> I don't know why the flush is so slow.
Best regards,
Minwoo Kang
From: Kang Minwoo <minwoo.k...@outlook.com>
Sent: Wednesday, May 23, 2018 16:53
To: Hbase-User
Subject: Re: can not write to HBase
In HRegion#internalFlushCacheAndCommit
There is the following code:

synchronized (this) {
  notifyAll(); // FindBugs NN_NAKED_NOTIFY
}
Best regards,
Minwoo Kang
From: saint@gmail.com <saint@gmail.com> on behalf of Stack <st...@duboce.net>
Sent: Thursday, May 24, 2018 01:33
To: Hbase-User
Subject: Re: How to improve HBase read performance.
On Wed, May 16, 2018 at 7:30 PM, Kang Minwoo <minwoo.k...@outlook.c
rtutay <mortu...@23andme.com>
wrote:
> This ticket: https://issues.apache.org/jira/browse/HBASE-20459 was fixed
> in
> the latest version of HBase, upgrading to latest may help with performance
>
> On Wed, May 16, 2018 at 3:55 AM, Kang Minwoo <minwoo.k...@outlook.com>
>
Hello, Users
I recently ran into an unusual situation:
a cell in the result does not contain a column family.
I thought the cell was the smallest unit in which data could be transferred in
HBase,
but a cell without a column family means the cell is not the smallest unit.
Am I wrong?
It occurred in
To: hbase-user
Subject: Re: Odd cell result
Which connector do you use for Spark 2.1.2 ?
Is there any code snippet which may reproduce what you experienced ?
Which hbase release are you using ?
Thanks
On Fri, Jun 8, 2018 at 1:50 AM, Kang Minwoo wrote:
> Hello, Users
>
> I rec
(tableName), scan)
rdd.count()
or use a Spark-HBase connector which encapsulates the details
Regards
On Sat, Jun 9, 2018 at 8:48 AM, Kang Minwoo wrote:
> 1) I am using just InputFormat. (I do not know it is the right answer to
> the question.)
>
> 2) code snippet
>
For MapReduce you can create a custom TableInputFormat that generates one
split per region (or per prefix) with the salted ranges and pass that to
your job configuration (e.g., for MapReduce or Spark).
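The per-bucket range expansion described above can be sketched in plain Java (class and method names here are mine, for illustration); a custom TableInputFormat would emit one scan/split per returned pair:

```java
import java.util.ArrayList;
import java.util.List;

public class SaltedRanges {
    // Prepend a one-byte salt to a key.
    static byte[] salted(int bucket, byte[] key) {
        byte[] out = new byte[key.length + 1];
        out[0] = (byte) bucket;
        System.arraycopy(key, 0, out, 1, key.length);
        return out;
    }

    // Expand a logical [start, stop) range into one [start, stop) pair per
    // salt bucket; each pair would become one scan/split in the job.
    static List<byte[][]> expand(byte[] start, byte[] stop, int buckets) {
        List<byte[][]> ranges = new ArrayList<>();
        for (int b = 0; b < buckets; b++) {
            ranges.add(new byte[][] { salted(b, start), salted(b, stop) });
        }
        return ranges;
    }

    public static void main(String[] args) {
        List<byte[][]> r = expand("20180607".getBytes(), "20180608".getBytes(), 4);
        System.out.println(r.size());       // 4
        System.out.println(r.get(2)[0][0]); // 2 (salt byte of the third range)
    }
}
```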
On Thu, Jun 7, 2018 at 4:05 AM, Kang Minwoo wrote:
> Sorry for the late reply.
>
> The row key
. ^^)
-
https://www.evernote.com/shard/s167/sh/39eb6b44-25e7-4e61-ad2a-a0d1b076c7d1/159db49e3e49b189
Best regards,
Jeongdae Kim
On Thu, May 24, 2018 at 1:22 PM, Kang Minwoo
wrote:
> I have a same error on today.
> thread dump is here.
>
>
>
> Thread 286
I left a comment on JIRA.
( https://issues.apache.org/jira/browse/HBASE-15871 )
Best regards,
Minwoo Kang
From: Sean Busbey
Sent: Tuesday, May 29, 2018 23:12
To: user@hbase.apache.org
Subject: Re: can not write to HBase
On Tue, May 29, 2018 at 1:25 AM, Kang
Subject: Re: How to improve HBase read performance.
HBase performance is highly dependent on the query and row key format.
Can you share a few row keys and the query format? Also, what encoding are you
using?
On Thu, May 24, 2018 at 8:38 PM, Kang Minwoo
wrote:
> 5B logs a day?
> => Yes, 5B/day
>
>
Hello Users,
I am looking forward to the HBase 1.2.7 release.
Do you know when 1.2.7 will be released?
Best regards,
Minwoo Kang
encoding for the column family where this error occurred
pastebin of more of the region server log prior to the StackOverflowError
(after redaction)
release of hadoop for the hdfs cluster
non-default config which may be related
Thanks
On Sat, Jan 6, 2018 at 4:36 PM, Kang Minwoo <minwoo.k...@outlook.c
Hello,
I have hit a StackOverflowError in a region server.
The detailed error log is here...
The HBase version is 1.2.6.
DAYS:36,787 DEBUG [regionserver/longCompactions]
regionserver.CompactSplitThread: Not compacting xxx. because compaction request
was cancelled
DAYS:36,787 DEBUG
Hello Users,
These days, I am setting up a new HBase cluster.
While testing the cluster, I found that the compaction queue constantly increases
(over 9000).
I am worried about this situation
and would appreciate your advice.
Best regards,
Minwoo Kang
Hello, Everyone
When I check the HBase compactionQueueLength metric, some RegionServers'
compactionQueueLength is too high,
so I checked the RegionServer log.
There is "regionserver.CompactSplitThread: Small Compaction requested: system;
Because: MemStoreFlusher; compaction_queue=(8034:1),
Hello, Users
Our filter is a row key filter, so the scan time limit does not work.
The related issue is https://issues.apache.org/jira/browse/HBASE-19818.
Is it possible to backport HBASE-19818 to the 1.2.x branch?
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: 2018
Hello, All.
What is the difference between hbase.client.scanner.timeout.period and
hbase.rpc.timeout?
If I don't want a timeout during a long-running scan, should I leave
hbase.client.scanner.timeout.period unset?
Best regards,
Minwoo Kang
hbase.client.scanner.timeout.period and
hbase.rpc.timeout?
Please refer to our refguide
<http://hbase.apache.org/book.html#config_timeouts> or HBASE-17449
<http://hbase.apache.org/book.html#config_timeouts> for details. Hope this
information helps.
Best Regards,
Yu
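For reference, a sketch of the two settings side by side in hbase-site.xml: hbase.rpc.timeout bounds a single RPC call, while hbase.client.scanner.timeout.period bounds the gap between scanner calls and also drives the server-side scanner lease. The values shown are the 1.2-era defaults (60 s each), included only as an example:

```xml
<property>
  <!-- Upper bound on a single client RPC call. -->
  <name>hbase.rpc.timeout</name>
  <value>60000</value>
</property>
<property>
  <!-- Scanner-specific timeout; also drives the server-side scanner lease. -->
  <name>hbase.client.scanner.timeout.period</name>
  <value>60000</value>
</property>
```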
On 16 July 2018 at 14:20, Kang Minwoo wrote:
I tried backporting the patch to branch-1. There are just a few rejects. Would
you be up for creating a backport issue and attaching a version of this
patch that fits branch-1?
Thanks,
S
On Thu, Jul 19, 2018 at 10:12 PM Kang Minwoo
wrote:
> Hello, Users
>
> Our filter is row key filter. So scan t
Hello,
I am not clear about heartbeats.
After HBase introduced progress heartbeats for long-running scanners [1], I think
the client should not get a SocketTimeoutException,
because the client knows that the scanner in the region server is still working.
But when I executed a scan with a filter (many consecutive
Hello.
Occasionally, when closing a region, the RS_CLOSE_REGION thread is unable to
acquire a lock and stays in the WAITING state.
(These days, the cluster load has increased.)
So the region stays in the PENDING_CLOSE state.
The thread holding the lock is an RPC handler.
If you have any good tips on
The thread is running (or stuck) somewhere, so the
close-region thread can't obtain the write lock. You can look closely at
your thread dump.
The handler thread you pasted above is just a thread that can't obtain the
read lock, since the close thread is waiting for the write lock.
Best Regards
Allan Yang
Kang Minwoo
Hello, All
I changed the Hadoop NameNode manually, because I had my reasons.
After that, some region servers went down.
Error logs below.
HBase version: 1.2.6
Hadoop version: 2.7.3
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot
create file ${hbase hdfs path}. Name node is in
Hello Users,
I have a question.
My client complains to me that an HBase scan spends too much time,
so I started to debug.
I profiled the HBase client application using hprof.
The app spent most of its time in the stack trace below.
something to do.
You should exclude threads like these from your analysis.
On 2/26/19 8:32 AM, Kang Minwoo wrote:
> Hello Users,
>
> I have a question.
>
> My client complains to me, HBase scan spent too much time.
> So I started to debug.
>
> I profiled the HBase Client
The meta scan is very slow.
When I invoked the `regionLocator.getAllRegionLocations()` method, it threw an
`org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the
locations` exception.
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: February 27, 2019
Hello, Users.
I wonder what the benefit is of using the HBase Spark Connector instead of
TableInputFormat.
Best regards,
Minwoo Kang
I found the problem.
It is because of HBASE-18665 [1].
Best regards,
Minwoo Kang
[1]: https://issues.apache.org/jira/browse/HBASE-18665
From: Kang Minwoo
Sent: Thursday, February 28, 2019 11:33
To: user@hbase.apache.org
Subject: Re: HBase client spent most
Scanning meta should not take more than a second.
On 2/27/19 12:34 AM, Kang Minwoo wrote:
> MetaScan is so slow.
> When I invoked `regionLocator.getAllRegionLocations()` method, It throw
> `org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the
> locations` Exception.
>
On Mon, Mar 11, 2019 at 04:22, Kang Minwoo
wrote:
> Hello Users.
>
> ---
> HBase version is 1.2.9
> ---
>
> I wonder whether this region operation is intended.
>
> I set "hbase.regionserver.optionalcacheflushinterval" slightly shorter
> than the default s
Hello Users.
---
HBase version is 1.2.9
---
I wonder whether this region operation is intended.
I set "hbase.regionserver.optionalcacheflushinterval" slightly shorter than the
default setting,
so a column family with old edits is flushed after a random delay.
If the flush queue has a flush request caused by old edits, a
Hello Users,
When I load a dynamic coprocessor, if the table already has a coprocessor of the
same class, the coprocessor fails to load,
because the same coprocessor class cannot be loaded twice.
So I should unload the old version before loading the new version of the
coprocessor.
But the coprocessor has a mission-critical
Hello, Users.
I use JBOD for the DataNodes. Sometimes a disk in a DataNode has a problem.
At first, I shut down every instance, including the DataNode and region server,
on the machine that has the disk problem.
But that is not a good solution, so I improved the process.
When I detect a disk problem
g.
On Tue, May 28, 2019 at 9:39 PM Kang Minwoo wrote:
> Hello, Users.
>
> I use JBOD for data node. Some times the disk in the data node has a
> problem.
>
> The first time, I shut down all instance include data node and region
> server in the machine that has a disk problem.
>
, and then enable the table.
Or another way is to upload the coprocessor jar to another place, and
update the table descriptor to point to the new place. I think this could
be done by code, as you can completely replace the old coprocessor config.
Not sure if this is easy to do through shell.
Kang Minwoo wrote:
Is it difficult to
control the uploading part?
Kang Minwoo wrote on Tuesday, May 14, 2019 at 3:41 PM:
> Thank you for your reply.
>
> I tried to update the table descriptor using
> HTableDescriptor#setValue(byte[], byte[]).
> The table descriptor changed successfully,
> but the region doesn't
configs, so usually the
code will be
HTableDescriptor htd = admin.getTableDescriptor(tableName);
htd.setCoprocessor or htd.setValue
admin.modifyTable(htd);
Kang Minwoo wrote on Wednesday, May 15, 2019 at 10:51 AM:
> Thanks! I didn't know that.
> The HBaseAdmin.modifyTable method looks like it overwrites the pr
Hello User.
(HBase version: 1.2.9)
Recently, I have been testing DoNotRetryIOException.
I expected that when a RegionServer sends a DoNotRetryIOException (or
AccessDeniedException), the client does not retry.
But in Spark or MR, the client retries even though it receives an
AccessDeniedException.
Here is a
Why is the "doNotRetry" value in RemoteWithExtrasException not used?
____
From: Kang Minwoo
Sent: Tuesday, May 7, 2019 18:23
To: user@hbase.apache.org
Subject: Why HBase client retry even though AccessDeniedException
Hello User.
(HBase version: 1.2.9)
Rece
/browse/HBASE-17170
On Tue, May 7, 2019 at 10:33 AM Josh Elser wrote:
> Sounds like a bug to me.
>
> On 5/7/19 5:52 AM, Kang Minwoo wrote:
> > Why do not use "doNotRetry" value in RemoteWithExtrasException?
> >
> > ________
> >
11, 2019 at 11:38 PM Kang Minwoo wrote:
>
> Hello, User.
>
> While I build HBase from the source. I got an error that is License errors
> detected, for more detail find ERROR in
> /hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE.
>
Hello, User.
While building HBase from source, I got an error: "License errors
detected, for more detail find ERROR in
/hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE".
My build environment is CentOS 6.3.
The command is mvn -DskipTests
e-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE
>
> And then search for the details around the text "ERROR"
>
> On Fri, Jul 12, 2019, 00:46 Kang Minwoo wrote:
>
> > I try to build HBase 2.1.5
> > error is..
> >
> > [WARNING] Rule 0
assembly:single
---
From: Kang Minwoo
Sent: Monday, July 15, 2019 18:31
To: user@hbase.apache.org
Subject: Failed to create assembly
Hello Users,
I tried to build HBase 2.1.5, but I got a failure in the Apache HBase - Assembly
project.
The error message is below.
---
[INFO
Hello Users,
I tried to build HBase 2.1.5, but I got a failure in the Apache HBase - Assembly
project.
The error message is below.
---
[INFO] Apache HBase - External Block Cache SUCCESS [ 1.750 s]
[INFO] Apache HBase - Assembly FAILURE [ 11.643 s]
[INFO]
Thanks. I wanted to build the full tarball with the site docs,
so I split my script into three build steps.
The build was a success.
From: Stack
Sent: Tuesday, July 16, 2019 00:48
To: Hbase-User
Subject: Re: Failed to create assembly
On Mon, Jul 15, 2019 at 2:31 AM Kang
, Feb 24, 2020 at 8:20 PM Kang Minwoo wrote:
> Hello Users.
>
> Is there any way to check the system stop is requested in
> performCompaction over time?
>
> When the region got a close request, the region should wait there is no
> compaction and flush.
> However, in performC
Hello Users.
Is there any way to check, over time, whether a system stop has been requested in
performCompaction?
When a region gets a close request, the region should wait until there is no
compaction or flush in progress.
However, the performCompaction method only checks periodically, based on bytes
written.
If the bytes written are too
Hello, Users.
I use HBase version 1.2.9.
However, version 1.2.9 has reached EOL, so I am preparing a major version upgrade.
In my case, every client is on version 1.2.9.
There are too many clients, so I cannot upgrade the clients' major version.
Therefore, old clients using 1.2.9 must connect to the new HBase
I looked at Apache Omid and Apache Tephra.
They seem to be dead.
Are these projects still being improved?
Best regards,
Minwoo Kang
From: Kang Minwoo
Sent: Friday, January 10, 2020 15:37
To: hbase-user
Subject: Re: How to avoid write hot spot, While using cross row
delete requests.
Best regards,
Minwoo Kang
From: ramkrishna vasudevan
Sent: Thursday, January 30, 2020 14:07
To: Kang Minwoo
Cc: Hbase-User; Stack
Subject: Re: Extremely long flush times
Hi Minwoo Kang
Any updates here? Were you able to overcome the issue?
Is it always inside updateReaders? Is there a bad file or lots of files
to add to the list?
Yours,
S
On Thu, Jan 2, 2020 at 8:34 PM Kang Minwoo wrote:
> Hello Users,
>
> I met an issue that is flush times is too long.
>
> MemStoreFlusher is waiting for a lock.
>
egative).
>
> Last but not least, what about trying Phoenix?
>
>
>
> --
>
> Best regards,
> R.C
>
>
>
> ____
> From: Kang Minwoo
> Sent: 10 January 2020 12:51
> To: user@hbase.apache
Hello, users.
I use the MultiRowMutationEndpoint coprocessor for cross-row transactions.
It has the constraint that the rows must be located in the same region.
I removed the random hash bytes from the row key.
After that, I suffered a write hot spot.
But cross-row transactions are a core feature in my
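One common compromise, since MultiRowMutationEndpoint only needs the rows of a single transaction to share a region: derive the salt deterministically from a group key (for example, the user or entity id) instead of using random bytes. Rows in the same group still get the same prefix and can co-locate, while different groups spread across buckets. A standalone sketch (class and method names are illustrative, and the table would still need to be pre-split on the salt byte):

```java
import java.util.Arrays;

public class GroupSalt {
    // Deterministic one-byte salt from the group key: same group -> same salt
    // -> same key prefix, so the group's rows can share a region.
    static byte saltFor(byte[] groupKey, int buckets) {
        int h = Arrays.hashCode(groupKey) & 0x7fffffff; // force non-negative
        return (byte) (h % buckets);
    }

    // Row key layout: [salt byte][group key][qualifier].
    static byte[] rowKey(byte[] groupKey, byte[] qualifier, int buckets) {
        byte[] out = new byte[1 + groupKey.length + qualifier.length];
        out[0] = saltFor(groupKey, buckets);
        System.arraycopy(groupKey, 0, out, 1, groupKey.length);
        System.arraycopy(qualifier, 0, out, 1 + groupKey.length, qualifier.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] a = rowKey("user42".getBytes(), "order1".getBytes(), 16);
        byte[] b = rowKey("user42".getBytes(), "order2".getBytes(), 16);
        // Same group key -> same salt byte -> the two rows can share a region.
        System.out.println(a[0] == b[0]); // true
    }
}
```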
for the boolean to be set; then it resets. If not, the scan just
goes on.
Regards
Ram
On Fri, Jan 10, 2020 at 10:01 AM Kang Minwoo
wrote:
> Thank you for reply.
>
> All Regions or just the one?
> => just one
>
> Do thread dumps lock thread reading against hdfs every time y
Hello Users,
I hit an issue where the flush times are too long.
MemStoreFlusher is waiting for a lock.
```
"MemStoreFlusher.0"
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x7f0412bddcb8> (a
seems but is available to you in hbase1).
S
2.
https://github.com/saintstack/hbase/blob/branch-1.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java#L51
On Sun, Mar 1, 2020 at 8:49 PM Kang Minwoo
<minwoo.k...@outlook.com> wrote:
HBase version is 1.2.
Additional information:
I am currently using hbase-1, but I am preparing to upgrade to hbase-2.
On Mar 17, 2020, at 11:51, Kang Minwoo
<minwoo.k...@outlook.com> wrote:
Thank you for your kind reply.
I think the solution you gave me is really good.
I didn't know that before, so I took a dif
Hello,
I am working hard on upgrading the HBase cluster these days,
and I want to get some advice about HBase configuration.
(My department's plan is to build a new HBase cluster running version 2.2 and to
fade out the 1.2 cluster.)
Now, I need to compare the HBase configuration between 1.2 and 2.2.4.