Yes, you can! The kernel documentation for read/write limits actually uses
/dev/null in the examples :)
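
Kidding aside, on the QoS sub-question from the thread: as far as I know, Nova's
libvirt driver can apply front-end disk I/O throttles via flavor extra specs.
A sketch (flavor name and numbers are purely illustrative):

```shell
# Hypothetical flavor; the quota:* extra specs map to libvirt <iotune> throttles.
openstack flavor create --vcpus 4 --ram 8192 --disk 100 m1.throttled

# Cap total disk I/O for instances of this flavor at 5000 IOPS and 200 MB/s.
openstack flavor set m1.throttled \
    --property quota:disk_total_iops_sec=5000 \
    --property quota:disk_total_bytes_sec=209715200
```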

But more seriously: while we have not architected specifically for high
performance, for the past few years we have used a zpool of cheap spinning
disks with one or two SSDs for caching. We have ZFS configured for
deduplication, which helps for the base images but not so much for ephemeral
disks.
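
For reference, a setup along those lines might look something like this
(pool name, device names, and vdev layout are illustrative, not our exact
config):

```shell
# Hypothetical pool: spinning disks for capacity, SSDs for caching.
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Add one SSD as an L2ARC read cache and another as a SLOG (sync write log).
zpool add tank cache nvme0n1
zpool add tank log nvme1n1

# Enable deduplication on the dataset holding the base images.
zfs set dedup=on tank/images
```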

If you have a standard benchmark command in mind, I'd be happy to post the
results. Maybe others could do the same so we could build up some kind of
comparison matrix?
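
For example, something like the following fio invocation could serve as a
baseline (just a suggestion, not an agreed standard; the directory and
parameters are placeholders):

```shell
# Mixed 70/30 random read/write test with direct I/O, 4k blocks.
fio --name=randrw --directory=/var/lib/nova/instances/bench \
    --rw=randrw --rwmixread=70 --bs=4k --size=4G \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting
```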

On Wed, Jun 13, 2018 at 8:18 AM, Blair Bethwaite <blair.bethwa...@gmail.com>
wrote:

> Hi Jay,
>
> Ha, I'm sure there's some wisdom hidden behind the trolling here?
>
> Believe me, I have tried to push these sorts of use-cases toward volume or
> share storage, but in the research/science domain there is often more
> accessible funding available to throw at infrastructure stop-gaps than at
> software engineering (parallelism is hard). PS: when I say ephemeral, I
> don't necessarily mean they aren't doing backups or otherwise caring that
> they have 100+TB of data on a standalone host.
>
> PPS: I imagine you can set QoS limits on /dev/null these days via CPU
> cgroups...
>
> Cheers,
>
>
> On Thu., 14 Jun. 2018, 00:03 Jay Pipes, <jaypi...@gmail.com> wrote:
>
>> On 06/13/2018 09:58 AM, Blair Bethwaite wrote:
>> > Hi all,
>> >
>> > Wondering if anyone can share experience with architecting Nova KVM
>> > boxes for large-capacity, high-performance storage? We have some
>> > particular use-cases that want both high IOPS and large-capacity local
>> > storage.
>> >
>> > In the past we have used bcache with an SSD-based RAID0 in write-through
>> > caching mode in front of a hardware (PERC) backed RAID volume. This
>> > seemed to work OK, but we never really gave it a hard time. I guess if
>> > we followed a similar pattern today we would use lvmcache (or are people
>> > still using bcache with confidence?) with a few TB of NVMe and an NL-SAS
>> > array with write cache.
>> >
>> > Is the collective wisdom to use LVM-based instances for these
>> > use-cases? Putting a host filesystem with qcow2-based disk images on it
>> > can't help performance-wise... Though we have not used LVM-based
>> > instance storage before, are there any significant gotchas? And
>> > furthermore, is it possible to set IO QoS limits on these?
>>
>> I've found /dev/null to be the fastest ephemeral storage system, bar none.
>>
>> Not sure if you can set QoS limits on it though.
>>
>> Best,
>> -jay
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>