Hi,

The performance of the tool should be improved. I think a user has no patience if the tool takes more than a few seconds (at most a few minutes) for a workload with hundreds of millions of time series.

Best,
-----------------------------------
Xiangdong Huang
School of Software, Tsinghua University

Tianan Li <[email protected]> wrote on Wed, Jul 17, 2019 at 4:57 PM:
> Hi Jialin,
>
> Actually, I have written a document showing how to use this tool, and it
> has been submitted in this PR:
> https://github.com/apache/incubator-iotdb/pull/256
>
> Best Regards,
> —————————————————
> Tianan Li
> School of Software, Tsinghua University
>
>> On Jul 17, 2019, at 4:44 PM, Jialin Qiao <[email protected]> wrote:
>>
>> Hi Tianan,
>>
>> Nice work, this is very useful!
>>
>> Could you add an example (command) showing how to use this tool?
>>
>> Thanks,
>> --
>> Jialin Qiao
>> School of Software, Tsinghua University
>>
>>> -----Original Message-----
>>> From: "Tianan Li" <[email protected]>
>>> Sent: 2019-07-17 14:57:51 (Wednesday)
>>> To: [email protected]
>>> Cc:
>>> Subject: Memory estimation tool
>>>
>>> Hi,
>>>
>>> Recently, I participated in the development of the dynamic parameter
>>> adapter module [https://github.com/apache/incubator-iotdb/pull/232],
>>> which can dynamically adjust system parameters according to load
>>> conditions. Now that we have that module, I think we can build
>>> something even cooler: a memory estimation tool. When a user provides
>>> the number of storage groups and timeseries in their workload, the
>>> tool can report the minimum write memory needed to sustain that
>>> workload.
>>>
>>> I think this tool is very useful, and I have started developing it
>>> [https://github.com/apache/incubator-iotdb/pull/256].
>>>
>>> Anyone interested in this tool is welcome to discuss it here.
>>>
>>> Best Regards,
>>> —————————————————
>>> Tianan Li
>>> School of Software, Tsinghua University
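A back-of-the-envelope version of the estimate Tianan describes could be a closed-form calculation over the workload parameters; a formula like this evaluates instantly even for hundreds of millions of timeseries, which would address the performance concern above. This is only a hypothetical sketch: the class name, method, and per-unit byte costs below are invented placeholders, not the actual model or constants used by the tool in PR #256.

```java
// Hypothetical closed-form memory estimator (a sketch, NOT the actual
// logic of the tool in apache/incubator-iotdb PR #256).
public class MemoryEstimator {
    // Assumed average costs in bytes -- placeholders, not measured values.
    static final long BYTES_PER_STORAGE_GROUP = 64L * 1024 * 1024; // e.g. per-group write buffer
    static final long BYTES_PER_TIMESERIES = 2L * 1024;            // e.g. schema + chunk metadata

    /** Estimated minimum write memory, in bytes, for the given workload. */
    static long estimateBytes(long storageGroups, long timeseries) {
        return storageGroups * BYTES_PER_STORAGE_GROUP
             + timeseries * BYTES_PER_TIMESERIES;
    }

    public static void main(String[] args) {
        // 10 storage groups, 100 million timeseries: the arithmetic is O(1),
        // so the answer comes back immediately regardless of workload size.
        long bytes = estimateBytes(10, 100_000_000L);
        System.out.println("Estimated minimum write memory: "
                + bytes / (1024 * 1024 * 1024) + " GB");
    }
}
```

A real estimator would of course have to derive the per-unit costs from IoTDB's actual memtable and metadata structures rather than from fixed constants, but the closed-form shape is what keeps it fast.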
