AFAIK, there is no database that could survive 4x larger data I/O,
not even ElasticSearch/OpenSearch.

We are working on BanyanDB, a new database with much better I/O; we hope
to provide this one day, but not for now.


Sheng Wu 吴晟
Twitter, wusheng1108

dafang <13240156...@163.com> wrote on Mon, Nov 28, 2022 at 15:02:
>
> Hi Sheng Wu,
> Recently we have a requirement for second-level metrics, and we don't want
> to store these metrics in our db. So we want to intercept during metrics
> aggregation in the OAP (at that point there would be two data streams:
> 1. minute-granularity data written to ES; 2. second-granularity data
> written to other storage). But we found that metrics have their time bucket
> set to minute granularity when they are created, so we have no way to
> extend it, and we need your help.
>
>
> To avoid any ambiguity, restated (translated from the original Chinese):
> Recently we have a requirement to collect monitoring data at second-level
> granularity, but we don't want to store this second-level data in ES. So we
> want to intercept at the point where the data is aggregated (there would
> then be two data streams: 1. minute-granularity data written to ES;
> 2. second-granularity data written to other storage). But reading the
> source code, we found that metrics have their timebucket truncated to
> minute granularity at creation time, so there is no way for us to extend it
> afterwards. We are out of ideas and need your help.
>
>
> Yours
> 2022.11.28
