I can see the requirements, and I know why this is being asked.
But in reality we have to make a trade-off: if SkyWalking costs
too much, it would be hard for end users.

Sheng Wu 吴晟
Twitter, wusheng1108

dafang <13240156...@163.com> wrote on Mon, Nov 28, 2022 at 16:04:
>
> OK, I got it, thank you~
>
> On 2022-11-28 15:53:32, "Sheng Wu" <wu.sheng.841...@gmail.com> wrote:
> >I don't mean anything about trace/log.
> >This is not about the size of the data. From the database perspective,
> >IOPS is the key. You are asking for 4x IOPS for every metric index.
> >
> >Sheng Wu 吴晟
> >Twitter, wusheng1108
> >
> >dafang <13240156...@163.com> wrote on Mon, Nov 28, 2022 at 15:47:
> >>
> >> I mean only metrics data, not trace data. Shouldn't metrics data be much
> >> smaller than trace data? If so, the data would only increase several times.
> >> I have found that one of our metrics indexes (such as jvm_old_gc_count) is
> >> only 20G per day per shard. It would only increase 4 times, to 80G, while
> >> the trace index is more than 200G per shard per day. So I thought it would
> >> work properly.
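
The sizing estimate in the message above is simple arithmetic; as a sketch (the 20G/day, 4x, and 200G figures are the ones quoted in this thread, assumed rather than independently measured):

```java
public class StorageEstimate {
    // Daily size per shard of one metrics index today (figure from the thread).
    static final double METRIC_GB_PER_DAY = 20.0;
    // Claimed growth factor when moving to finer-grained metrics.
    static final int GROWTH_FACTOR = 4;
    // Daily trace index size per shard, quoted for comparison.
    static final double TRACE_GB_PER_DAY = 200.0;

    // Projected metrics index size after the growth: 20 * 4 = 80 GB/day/shard,
    // still well below the quoted trace index size.
    static double projectedMetricGbPerDay() {
        return METRIC_GB_PER_DAY * GROWTH_FACTOR;
    }

    public static void main(String[] args) {
        System.out.printf("metrics: %.0f GB/day/shard, trace: %.0f GB/day/shard%n",
                projectedMetricGbPerDay(), TRACE_GB_PER_DAY);
    }
}
```

As Sheng Wu points out next, on-disk size is not the whole story: the write IOPS multiplier is what stresses the database.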
> >>
> >> On 2022-11-28 15:40:42, "Sheng Wu" <wu.sheng.841...@gmail.com> wrote:
> >> >AFAIK, there is no database that could live with 4 times the data I/O,
> >> >not even ElasticSearch/OpenSearch.
> >> >
> >> >We are working on the new BanyanDB, which is much better at I/O; we hope
> >> >to provide this capability one day, but not for now.
> >> >
> >> >
> >> >Sheng Wu 吴晟
> >> >Twitter, wusheng1108
> >> >
> >> >dafang <13240156...@163.com> wrote on Mon, Nov 28, 2022 at 15:02:
> >> >>
> >> >> Hi Sheng Wu:
> >> >> Recently we have a requirement for second-level metrics, and we don't
> >> >> want to save these metrics into es. So we want to intercept at the point
> >> >> where metrics are aggregated in the OAP (at that point there would be
> >> >> two data streams: 1. minute-level data written to es; 2. second-level
> >> >> data written to other storage). But we found that the timebucket of a
> >> >> metric is set to minute granularity when it is created, so we have no
> >> >> way to extend it, and we need your help~
> >> >>
> >> >>
> >> >> In case the above is unclear, let me restate: recently we have a
> >> >> requirement to collect monitoring data at second-level granularity, but
> >> >> we don't want to store this second-level data in es. So we want to
> >> >> intercept at the aggregation point (there would then be two data
> >> >> streams: 1. minute-level data written to es; 2. second-level data
> >> >> written to other storage). But reading the source code, we found that
> >> >> the timebucket is truncated to minute granularity when a metric is
> >> >> generated, so we cannot extend it afterwards. We are out of ideas and
> >> >> need your help.
> >> >>
> >> >>
> >> >> Yours
> >> >> 2022.11.28
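
The truncation described in the original question can be illustrated with a minimal, self-contained sketch. This is not SkyWalking's actual TimeBucket implementation; the class and method names are invented for illustration, and it only assumes the long-encoded `yyyyMMddHHmm` bucket shape (e.g. 202211280753) discussed in the thread, plus the hypothetical second-level `yyyyMMddHHmmss` variant an interceptor would need to preserve before minute truncation happens:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimeBucketSketch {
    private static final DateTimeFormatter MINUTE =
            DateTimeFormatter.ofPattern("yyyyMMddHHmm").withZone(ZoneOffset.UTC);
    private static final DateTimeFormatter SECOND =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss").withZone(ZoneOffset.UTC);

    // Minute-level time bucket: the granularity applied when a metric is
    // created, which discards the seconds component.
    static long minuteBucket(long epochMillis) {
        return Long.parseLong(MINUTE.format(Instant.ofEpochMilli(epochMillis)));
    }

    // Hypothetical second-level bucket: what a custom interceptor would have
    // to capture before the minute truncation, to feed a second data stream.
    static long secondBucket(long epochMillis) {
        return Long.parseLong(SECOND.format(Instant.ofEpochMilli(epochMillis)));
    }

    public static void main(String[] args) {
        long ts = 1669621980500L; // 2022-11-28T07:53:00.5Z
        System.out.println(minuteBucket(ts)); // → 202211280753
        System.out.println(secondBucket(ts)); // → 20221128075300
    }
}
```

Once the metric's bucket is stored as the minute-level long, the seconds are unrecoverable, which is why the interception would have to happen before aggregation rather than after.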
