Regarding testing, I think having end-to-end tests is more important than
area-targeted ones. Obviously the latter help, but I don't think they should
be the deciding factor.

I quite liked the Quiver tool that was shared with me a while back (I think
Clebert shared a link). It's very similar to our internal testing rig, so much
so that I started using it instead of our own, especially for non-Java client
testing :)

I wonder if there's a way to automate it, so that a set of perf stats
(latency histogram and throughput) can be produced for a build, given the same
kit (build agent). After all, this is what an end user cares about.
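To make that concrete, here's a minimal sketch of the kind of per-build summary such a run could spit out (the numbers and names are made up, not from our rig; for the latency side something like HdrHistogram would be the proper tool rather than this hand-rolled percentile):

```java
import java.util.Arrays;

// Illustrative only: derive the headline numbers a perf run would report
// (throughput plus a latency percentile summary) from raw per-message latencies.
public class PerfSummary {

    // Latency value at the given percentile, using the nearest-rank method.
    // Expects the input array to be sorted ascending.
    static long percentile(long[] sortedLatencies, double pct) {
        int idx = (int) Math.ceil(pct / 100.0 * sortedLatencies.length) - 1;
        return sortedLatencies[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        // fake sample: per-message latencies in microseconds from a 1s run
        long[] latenciesMicros = {120, 95, 110, 400, 105, 98, 101, 99, 2500, 103};
        long elapsedMillis = 1_000;

        Arrays.sort(latenciesMicros);
        double throughput = latenciesMicros.length / (elapsedMillis / 1000.0);

        System.out.println("throughput msg/s: " + throughput);
        System.out.println("p50 us: " + percentile(latenciesMicros, 50));
        System.out.println("p99 us: " + percentile(latenciesMicros, 99));
        System.out.println("max us: " + latenciesMicros[latenciesMicros.length - 1]);
    }
}
```

Same kit plus the same summary format is what makes build-to-build comparison meaningful.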

Obviously I wouldn't run it on every build, but I'd look to reserve it for
pre-tag builds or weekly/monthly runs, to ensure perf isn't regressed and to
validate any complicated changes or perf improvements like yours. It could
also be run manually, but easily and locally. This way we don't end up with a
bunch of one-off or bespoke tests either.


Re SSDs, my personal feeling is that this is the way of the data centre (HDDs
will start being relegated to glacial or non-transactional storage). Many,
many apps are already built or tuned heavily for SSDs. As long as any feature
is behind a toggle, so it can be turned off or on, I see no reason not to
implement features targeted at specific hardware like SSDs.
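A sketch of what I mean by a toggle; the property name "journal.ssd-optimized" is purely illustrative, not an existing setting:

```java
// Hypothetical sketch: an SSD-targeted code path that can be switched off,
// defaulting to the conservative HDD-safe path. The property name
// "journal.ssd-optimized" is made up for illustration.
public class JournalConfig {

    private final boolean ssdOptimized;

    public JournalConfig() {
        this.ssdOptimized = Boolean.parseBoolean(
            System.getProperty("journal.ssd-optimized", "false"));
    }

    public int writeParallelism() {
        // SSDs handle concurrent writes well; spinning disks generally do not.
        return ssdOptimized ? 8 : 1;
    }
}
```

The point being: off by default, opt in when you know the hardware.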

My two cents and thoughts.

Re SSDs, have a poke around the Aerospike documents; they got early access to
Intel Optane to tune their engine (a few Google searches can turn up some
interesting details). I really think they're one of the leaders in this area.
The other one is the well-known RocksDB.

For me it's about parallelism and about no longer treating an SSD like an HDD:
enterprise drives come with capacitor-backed RAM caches worth exploiting,
while RAID controllers and SATA add too much overhead, etc. Also keep in mind
the new features coming up, like non-volatile RAM with Optane 3D XPoint.
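As a rough illustration of the parallelism point (plain JDK NIO, nothing Artemis-specific; the sizes are arbitrary): keeping several writes in flight at distinct offsets lets the device service them concurrently, instead of the one-at-a-time pattern an HDD-era design assumes.

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

// Illustrative sketch only: submit several writes at distinct offsets before
// waiting on any of them, so the SSD sees a queue of concurrent requests.
public class ParallelWrites {
    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("journal", ".dat");
        int blockSize = 4096;
        int inFlight = 8;

        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                file, StandardOpenOption.WRITE)) {
            List<Future<Integer>> pending = new ArrayList<>();
            for (int i = 0; i < inFlight; i++) {
                ByteBuffer block = ByteBuffer.allocate(blockSize);
                // submit without waiting: the device sees 8 concurrent requests
                pending.add(ch.write(block, (long) i * blockSize));
            }
            for (Future<Integer> f : pending) {
                f.get(); // reap completions only after all have been submitted
            }
        }
        System.out.println("wrote " + Files.size(file) + " bytes");
        Files.delete(file);
    }
}
```

On an HDD this pattern buys you nothing (the head still seeks serially); on an SSD the internal parallelism can absorb it.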

Here's a less low-level but interesting read as well:
https://www.oreilly.com/ideas/how-flash-changes-the-design-of-database-storage-engines

> On 8 May 2017, at 19:12, nigro_franz <[email protected]> wrote:
> 
> In the meantime, I'm searching good articles to add other disk I/O limiter
> strategies, maybe based on IOPS credits consumption.
> 
> A possible idea about an IOPS limiter: 
> - a single writer/appender/sync thread to perform the real writes & flush
> - a bounded queue of bytes (I've already designed a couple pretty fast) to
> submit any requests 
> - a proper batch strategy on the appender/sync thread based on the current
> IOPS and unflushed writes
> 
> The bounded queue/ringbuffer will be useful to propagate backpressure
> "naturally" to the request producer, bounding the max latency and with
> bounded memory footprint.
> 
> What do you think?
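
Sounds sensible to me. To check I follow, here's a rough stdlib-only sketch of that shape (all names hypothetical, and writeAndFlush is just a stand-in for the real journal write + fsync):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Rough sketch of the single-appender + bounded-queue idea: producers block on
// a bounded queue (that's the backpressure); one appender thread drains
// whatever has accumulated and issues it as a single write/flush, so under
// load each flush covers a bigger batch and IOPS stay roughly constant.
public class BatchingAppender implements Runnable {

    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);
    volatile int flushes; // how many real device flushes were issued

    public void submit(byte[] record) throws InterruptedException {
        queue.put(record); // blocks when full: natural backpressure
    }

    // Processes one batch; returns its size (exposed here for the sketch).
    int processOnce() throws InterruptedException {
        List<byte[]> batch = new ArrayList<>();
        batch.add(queue.take());  // wait for at least one record
        queue.drainTo(batch);     // then grab everything else pending
        writeAndFlush(batch);     // one IOP for the whole batch
        return batch.size();
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                processOnce();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void writeAndFlush(List<byte[]> batch) {
        flushes++; // stand-in for the real journal write + fsync
    }
}
```

The take() + drainTo() pair is the batch strategy in its simplest form: light load means one record per flush, heavy load means bigger batches per flush, and the bounded put() caps both latency and memory footprint as you say.
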
> 
> Right now I've found only this article:
> https://engineering.linkedin.com/blog/2016/05/designing-ssd-friendly-applications-for-better-application-perfo
> But it doesn't take into account sync operations and is specific only to
> SSDs!!
> If any of you have something good on the argument, please share :)
> 
> --
> View this message in context: 
> http://activemq.2283324.n4.nabble.com/Adapting-TimedBuffer-and-NIO-Buffer-Pooling-tp4725727p4725780.html
> Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.
