Index pool on Aerospike?

Building OSDs on PRAM might be a lot less work than trying to ensure 
consistency on backing storage while still serving out of RAM and not syncing 
every transaction.
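For reference, the per-deletion latency budget quoted below falls straight out of the 100M-deletes-per-day figure; a quick back-of-envelope check (plain arithmetic, no Ceph-specific assumptions):

```python
# Back-of-envelope check of the ~800 microsecond figure discussed below:
# 100M committed deletions per day leaves well under a millisecond each.
deletes_per_day = 100_000_000
seconds_per_day = 24 * 60 * 60                 # 86,400 s
rate = deletes_per_day / seconds_per_day       # ~1,157 deletions/s sustained
budget_us = 1_000_000 / rate                   # ~864 microseconds per deletion
print(f"{rate:.0f} deletions/s, {budget_us:.0f} us budget each")
```

That ~864µs is a sustained average with zero headroom, which is why any peak traffic, creations, or maintenance work eats into it immediately.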

> On Jul 18, 2023, at 14:31, Peter Grandi <p...@ceph.list.sabi.co.uk> wrote:
> 
>>>>> [...] S3 workload, that will need to delete 100M file
>>>>> daily [...]
> 
>>> [...] average (what about peaks?) around 1,200 committed
>>> deletions per second (across the traditional 3 metadata
>>> OSDs) sustained, that may not leave a lot of time for file
>>> creation, writing or reading. :-)[...]
> 
>>>> [...] So many people seem to think that distributed (or
>>>> even local) filesystems (and in particular their metadata
>>>> servers) can sustain the same workload as high volume
>>>> transactional DBMSes. [...]
> 
>> Index pool distributed over a large number of NVMe OSDs?
>> Multiple, dedicated RGW instances that only run LC?
> 
> As long as that guarantees a total maximum network+write
> latency of well below 800µs across all of them that might
> result in a committed rate of a deletion every 800µs (and there
> are no peaks and the metadata server only does deletions and
> does not do creations or opens or any "maintenance" operations
> like checks and backups). :-)
> 
> Sometimes I suggest somewhat seriously entirely RAM based
> metadata OSDs, which given a suitable environment may be
> feasible. But I still wonder why "So many people seem to think
> ... can sustain the same workload as high volume transactional
> DBMSes" :-).
> _______________________________________________
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io