From what I can see in the source code, the default is actually even lower,
at 100 ms (it can be overridden with hbase.regionserver.hlog.slowsync.ms).

On Tue, Apr 26, 2016 at 3:13 AM, Kevin Bowling <kevin.bowl...@kev009.com>
wrote:

> I see similar log spam while the system has reasonable performance. Was the
> 250ms default chosen with SSDs and 10GbE in mind or something? I guess I'm
> surprised a sync write several times through JVMs to 2 remote datanodes
> would be expected to consistently happen that fast.
>
> Regards,
>
> On Mon, Apr 25, 2016 at 12:18 PM, Saad Mufti wrote:
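For anyone who wants to raise the threshold rather than filter the log, the property above can be set in hbase-site.xml. A minimal sketch; the 500 ms value is only illustrative, not a recommendation:

```xml
<!-- hbase-site.xml: raise the threshold at which FSHLog logs a slow sync -->
<property>
  <name>hbase.regionserver.hlog.slowsync.ms</name>
  <!-- default is 100 ms per the source; 500 here is just an example -->
  <value>500</value>
</property>
```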
Hi,

In our large HBase cluster based on CDH 5.5 in AWS, we're constantly seeing
the following messages in the region server logs:

2016-04-25 14:02:55,178 INFO
org.apache.hadoop.hbase.regionserver.wal.FSHLog: Slow sync cost: 258 ms,
current pipeline:
[DatanodeInfoWithStorage[10.99.182.165:50010,DS-281d4c4f-23bd-4541-bedb-946e57a0f0fd,DISK],
DatanodeInfoWithStorage[10.99.
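Before tuning anything, it helps to know how frequent and how severe these syncs actually are. A minimal sketch in Python against the FSHLog message format quoted above; the `summarize` helper and the second sample line are mine, not from the thread:

```python
import re

# Matches the "Slow sync cost: NNN ms" part of the FSHLog message;
# tolerant of a missing space after the colon, which some logs show.
SLOW_SYNC = re.compile(r"Slow sync cost:\s*(\d+)\s*ms")

def summarize(lines):
    """Return (count, max_ms, mean_ms) over slow-sync log lines."""
    costs = [int(m.group(1)) for line in lines
             for m in SLOW_SYNC.finditer(line)]
    if not costs:
        return (0, 0, 0.0)
    return (len(costs), max(costs), sum(costs) / len(costs))

# Example against the message format from the log excerpt above
# (the second line is a fabricated sample, not a real log entry):
sample = [
    "2016-04-25 14:02:55,178 INFO org.apache.hadoop.hbase."
    "regionserver.wal.FSHLog: Slow sync cost: 258 ms, current pipeline: ...",
    "2016-04-25 14:03:01,002 INFO ... FSHLog: Slow sync cost:143 ms, ...",
]
print(summarize(sample))  # → (2, 258, 200.5)
```

Running this over a full region server log shows quickly whether you have a handful of outliers or a sustained pattern.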
> FSHLog: Slow sync cost: 143 ms, current ...

I have seen these messages in a recent performance test. They showed up when
HDFS was having a hard time catching up (I had issued a lot of put requests).
Check the read/write bytes in the HDFS JMX metrics to confirm. Maybe when you
added the new regionservers, the load balancer kicked in and many
Artem Ervits wrote:

> Hello all, trying to address a sudden change in performance, processing a
> Kafka, Storm, HBase pipeline. I'm seeing the error wal.FSHLog: Slow sync
> cost: 143 ms, current pipeline:, and it started appearing once I added more
> regionservers. Is there a problem with a small Xmx value for the datanode?
> That's what I've found so far in searches. The table is presplit with no
> hotspotting