As the makers of Promscale, we're very attuned to the needs of effective Prometheus deployments. With that in mind, one thing to consider with Timestream is that ingest performance from a single client appears to be a current limitation. The author of this adapter doesn't mention his setup or how many metrics he was trying to ingest per minute or second.
In recent benchmarks by CrateDB (https://crate.io/a/amazon-timestream-first-impressions/) and Timescale (not yet published), Timestream appears to sustain a consistent ingest rate of only 500-800 metrics/second from a single client, especially when many attributes are involved. Higher throughput requires a streaming service (e.g. Kinesis) or additional clients. In our tests using the open-source Time Series Benchmark Suite (https://github.com/timescale/tsbs), we ended up using 10 EC2 clients to import data for about 36 hours and were only able to achieve an effective 5,000 metrics/sec, meaning clients averaged ~550 metrics/sec. So definitely test your throughput and make sure the system can keep up with your ingest rate.

Also, while storage is really cheap, queries are billed by the number of GB scanned, so your bill is likely to grow over time unless you're diligent about removing older data.

On Tuesday, November 24, 2020 at 11:40:03 AM UTC-5 Stuart Clark wrote:
> On 24/11/2020 14:06, '[email protected]' via Prometheus Users wrote:
> > the guy that wrote the adapter suggests that a Grafana plugin would be
> > used to read the information from Timestream in AWS.
>
> Yes, but that doesn't help for alerting, recording rules, etc. which are
> in Prometheus.
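For what it's worth, here is the back-of-the-envelope arithmetic behind the two points above, as a quick sketch. The throughput figures are from our TSBS run; the query-cost inputs (the $0.01/GB rate, panel count, refresh interval, and GB scanned per query) are illustrative assumptions only, so check current AWS pricing and your own workload before relying on them:

```python
# 1) Ingest throughput: 10 EC2 clients, ~5,000 metrics/sec effective aggregate.
clients = 10
aggregate_metrics_per_sec = 5_000
per_client = aggregate_metrics_per_sec / clients
print(f"average per-client ingest: ~{per_client:.0f} metrics/sec")

# 2) Query cost: Timestream bills per GB scanned. All values below are
#    hypothetical assumptions, not measurements -- check current pricing.
price_per_gb = 0.01          # USD per GB scanned (assumed rate)
queries_per_day = 20 * 288   # hypothetical: 20 panels refreshing every 5 min
gb_per_query = 0.5           # hypothetical: grows as retained data grows
daily_cost = queries_per_day * gb_per_query * price_per_gb
print(f"query cost: ~${daily_cost:.2f}/day (~${daily_cost * 30:.0f}/month)")
```

The point of the second calculation is that query cost scales with data scanned, not with query count alone, so efficient retention and query patterns matter.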

