On Tue, Aug 16, 2016 at 9:27 AM, Sterfield wrote:
> ...
> On the corresponding RS, at the same time, there's a message about a big
> flush, but not with so much memory in the memstore. Also, I don't see any
> warning that could explain why the memstore grew so large (nothin
> This is a well-known issue over in tsdb-land. IIRC, they are working on an
> alternative to the once-an-hour compression. See what they say over there,
> Guillaume.
> Thanks,
> St.Ack

Thanks for the tips. I'll check on the OpenTSDB side and come back here with
what I find.
I have one last question
On Fri, Aug 12, 2016 at 1:22 AM, Sterfield wrote:
> ...
> I've reached the OpenTSDB guys in order to know what's being done exactly
> when doing a compaction. With that information, it may be possible to tune
> HBase so that it handles the load correctly. To me, it seems that there's a
> big scan, then

>> We saw this as well at Splice Machine. This led us to run compactions in
>> Spark.
It would be great to see how this is done. Moving one of HBase's internals
to an external process is something out of the box, but I'm eager to see it.
Looks interesting.
Regards,
Ram
On Thu, Aug 11, 2016 at 8:43 PM, John
Hi John,
So does that mean that you were not able to handle the additional
compaction load by tuning HBase?
I've reached the OpenTSDB guys in order to know what's being done exactly
when doing a compaction. With that information, it may be possible to tune
HBase so that it handles the load correctly. To me,
We saw this as well at Splice Machine. This led us to run compactions in
Spark. Once we did this, we saw the compaction effects go away almost entirely.
Here is a link to our code.
https://github.com/splicemachine/spliceengine/blob/73640a81972ef5831c1ea834ac9ac22f5b3428db/hbase_sql/src/main/ja
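The idea of moving compaction out of the region server, as described above, can be sketched conceptually as an external merge of sorted runs. This is only a toy stand-in for the Spark-based compaction linked above, not Splice Machine's actual code; the data layout (key, sequence number, value) is a simplifying assumption.

```python
# Conceptual sketch of compaction as an external merge: combine
# several runs sorted by key into one run, keeping only the newest
# (highest sequence number) value per key. Hypothetical data model,
# not HBase's or Splice Machine's actual implementation.
import heapq

def compact(runs):
    # Each run must be sorted by (key, -seq) so the newest version
    # of a key is seen first during the merge.
    merged = heapq.merge(*runs, key=lambda kv: (kv[0], -kv[1]))
    out, last_key = [], object()
    for key, seq, value in merged:
        if key != last_key:  # first (newest) entry for a key wins
            out.append((key, value))
            last_key = key
    return out

runs = [[("a", 1, "old"), ("b", 1, "x")], [("a", 2, "new")]]
# compact(runs) -> [("a", "new"), ("b", "x")]
```

Running the merge in a separate process (or cluster) keeps the CPU and I/O cost of rewriting files off the serving path, which is the effect described in the message above.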
And it's gone [1]. No more spikes in the writes/reads, no more OpenTSDB
errors. So I think it's safe to assume that OpenTSDB compaction generates
some additional load that is not handled very well by HBase, and is
therefore causing the issues I'm mentioning.
It also seems that the MVCC er
Hello,

> Hi,
>
> Thanks for your answer.
>
> I'm currently testing OpenTSDB + HBase, so I'm generating thousands of HTTP
> POST on OpenTSDB in order to write data points (currently up to 300k/s).
> OpenTSDB is only doing increment / append (AFAIK)

How many nodes, or is that
Hi,
Thanks for your answer.
I'm currently testing OpenTSDB + HBase, so I'm generating thousands of HTTP
POST on OpenTSDB in order to write data points (currently up to 300k/s).
OpenTSDB is only doing increment / append (AFAIK)
If I have understood your answer correctly, some write ops are queued
Ya, it comes with the write workload, not with concurrent reads.
Once the write is done (memstore write and WAL write), we mark the MVCC
operation corresponding to it as complete and wait for a global read point
to advance to at least this point. (Every write op will have a number
corresponding
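The scheme described above can be illustrated with a toy sketch (not HBase's actual code; the class and method names are made up): each write gets a monotonically increasing number, and the global read point only advances over a prefix of completed writes, so readers never see a later write before an earlier, still-pending one.

```python
# Toy model of the MVCC behavior described above: writes get
# sequence numbers, and the read point advances only past the
# contiguous prefix of completed writes.
class SimpleMvcc:
    def __init__(self):
        self.next_write_number = 1
        self.pending = {}    # write number -> completed?
        self.read_point = 0  # readers see writes <= read_point

    def begin_write(self):
        n = self.next_write_number
        self.next_write_number += 1
        self.pending[n] = False
        return n

    def complete_write(self, n):
        self.pending[n] = True
        # Advance the read point over the completed prefix only.
        while self.pending.get(self.read_point + 1):
            del self.pending[self.read_point + 1]
            self.read_point += 1

mvcc = SimpleMvcc()
a, b = mvcc.begin_write(), mvcc.begin_write()
mvcc.complete_write(b)       # b is done, but a is still pending
assert mvcc.read_point == 0  # readers cannot see b yet
mvcc.complete_write(a)
assert mvcc.read_point == 2  # now both writes are visible
```

This also shows why a slow write can stall visibility of later, already-finished writes: the read point waits for the laggard, which matches the "wait for a global read point to advance" step above.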
I'm also interested in an answer here. We see this from time to time in our
production HBase clusters (non-opentsdb). It seems to be related to
contention under heavy reads or heavy writes. But it's not clear what the
impact is here.
On Fri, Aug 5, 2016 at 5:14 AM Sterfield wrote:
Hi,
I'm currently testing HBase 1.2.1 + OpenTSDB. For that, I'm generating a
high load of HTTP PUTs on OpenTSDB, which then writes to HBase. Currently,
I'm able to feed 300k data points per second, and I'm trying to achieve
higher speeds.
I have also activated JMX on both Master and Region servers i
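The write load described in this thread can be sketched with a minimal client that batches data points into OpenTSDB's HTTP `/api/put` endpoint. The JSON shape follows OpenTSDB's documented put API; the host, port, metric name, and tag values here are hypothetical examples.

```python
# Minimal sketch of writing data points to OpenTSDB over HTTP.
# Assumes an OpenTSDB instance at localhost:4242 (hypothetical).
import json
import urllib.request

def put_points(points, url="http://localhost:4242/api/put"):
    # /api/put accepts a JSON array of data points in one request,
    # which is how high write rates are usually batched.
    body = json.dumps(points).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status  # OpenTSDB returns 204 on success

points = [{"metric": "sys.cpu.user",
           "timestamp": 1470000000,
           "value": 42.0,
           "tags": {"host": "web01"}}]
# put_points(points)  # requires a running OpenTSDB instance
```

Batching many points per POST (rather than one point per request) is what makes rates on the order of hundreds of thousands of points per second reachable from a modest number of client threads.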