Any idea?

Forwarded message
From: onmstester onmstester
To: "user"
Date: Sat, 08 Sep 2018 10:46:25 +0430
Subject: Migrating from Apache Cassandra to Hbase

Hi, Currently I'm using Apache Cassandra as
Upgrading to version "2.0.2" didn't help.
"Fixed" by re-creating the tables, which is not a good solution.
Hopefully in the near future we will have a tool to fix that.
On Mon, Sep 10, 2018 at 2:05 PM Oleg Galitskiy
wrote:
> Yes, now the main issue is the "hole".
>
> Could you give me an example of how I can fetch or scan a section
1. Yes
2. HDFS NameNode pressure, read slowdown, general poor performance
3. The default configuration is weekly; unless you explicitly know of a
reason why weekly doesn't work, that is what you should follow ;)
4. No
I would be surprised if you need to do anything special with S3, but I
don't know
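For reference, the "weekly" default mentioned in (3) is the major compaction interval, controlled by `hbase.hregion.majorcompaction` in hbase-site.xml (milliseconds; 604800000 ms = 7 days). A minimal sketch of setting it explicitly, only if you have a concrete reason to deviate from the default:

```xml
<!-- hbase-site.xml: major compaction interval (604800000 ms = 7 days is the default) -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value>
</property>
```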
It was during a period when the number of client operations was relatively low.
It wasn’t zero, but it was definitely off-peak hours.
On 9/10/18, 12:16 PM, "Ted Yu" wrote:
In the previous stack trace you sent, shortCompactions and longCompactions
threads were not active.
Was
Yes, now the main issue is the "hole".
Could you give me an example of how I can fetch or scan a section of the table?
Thanks.
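One way to scan a section of a table is to bound the scan with start and stop row keys. A minimal sketch using the HBase 2.x Java client (the table name matches the thread; the row keys are made up for illustration, and this of course needs a running cluster and the hbase-client dependency):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanSection {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("some_table"))) {
            // Scan only the row range [row-0100, row-0200); the stop row is exclusive.
            Scan scan = new Scan()
                .withStartRow(Bytes.toBytes("row-0100"))
                .withStopRow(Bytes.toBytes("row-0200"));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```

The rough equivalent in the HBase shell would be `scan 'some_table', {STARTROW => 'row-0100', STOPROW => 'row-0200'}`.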
On Mon, Sep 10, 2018 at 1:53 PM Stack wrote:

> On Mon, Sep 10, 2018 at 1:32 PM Oleg Galitskiy
> wrote:
>
> > Hello,
> >
> > Facing inconsistency issues on HBase 2.0.1:
> > --
> >
> > ERROR: Region { meta => null, hdfs =>
> > hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
> > deployed =>
> >
Hello,
Facing inconsistency issues on HBase 2.0.1:
--
ERROR: Region { meta => null, hdfs =>
hdfs://master:50001/hbase/data/default/some_table/0646d0bee757d0fb0de1529475b5426f,
deployed =>
hbase-region,16020,1536493017073;some_table,,1534195327532.0646d0bee757d0fb0de1529475b5426f.,
replicaId
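As a hedged aside: inconsistencies like the one above can at least be enumerated with the hbck check (which is read-only in HBase 2.x; the later HBCK2 tool is what actually repairs them), for example:

```
hbase hbck -details some_table
```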
In the previous stack trace you sent, the shortCompactions and longCompactions
threads were not active.
Was the stack trace captured during a period when the number of client
operations was low?
If not, can you capture a stack trace during off-peak hours?
Cheers
On Mon, Sep 10, 2018 at 12:08 PM
Hi Ted,
The highest number of filters used is 10, but the average is generally close to
1. Is it possible the CPU usage spike has to do with HBase internal maintenance
operations? It looks like post-upgrade the spike isn’t correlated with the
frequency of reads/writes we are making, because
Hello,
As I understand it, deleted records in HBase files do not get removed
until a major compaction is performed.
I have a few questions regarding major compaction:
1. If I set a TTL and/or a max number of versions, will the records that are
older than the TTL, or the
expired versions,
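For context on question 1: TTL and max versions are set per column family, and cells past the TTL or beyond VERSIONS become eligible for removal at compaction time. A sketch of setting both from the HBase shell (the family name `cf` is made up for illustration):

```
alter 'some_table', {NAME => 'cf', TTL => 86400, VERSIONS => 3}
major_compact 'some_table'
```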
For the second config you mentioned, hbase.master.distributed.log.replay,
see http://hbase.apache.org/book.html#upgrade2.0.distributed.log.replay
FYI
On Mon, Sep 10, 2018 at 8:52 AM sahil aggarwal
wrote:
> Hi,
>
> My cluster has around 50k regions and 130 RS. In case of unclean shutdown,
> the
Hi,
My cluster has around 50k regions and 130 RS. In case of an unclean shutdown,
the cluster takes around 40-50 mins to come up (mostly slow on region
assignment, from observation). Trying to optimize it, I found the following
possible configs:
*hbase.assignment.usezk:* which will co-host the meta table and
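The config being considered could be sketched in hbase-site.xml as below. Note this is a hedged sketch: `hbase.assignment.usezk` belongs to the ZooKeeper-based assignment path of HBase 1.x, and that path was removed in the 2.x assignment-manager rewrite, so verify it applies to your version before relying on it:

```xml
<!-- hbase-site.xml sketch; hbase.assignment.usezk applies to HBase 1.x -->
<property>
  <name>hbase.assignment.usezk</name>
  <value>true</value>
</property>
```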