>>>> [...] S3 workload that will need to delete 100M files
>>>> daily [...]

>> [...] average (what about peaks?) around 1,200 committed
>> deletions per second (across the traditional 3 metadata
>> OSDs) sustained, that may not leave a lot of time for file
>> creation, writing or reading. :-)[...]

>>> [...] So many people seem to think that distributed (or
>>> even local) filesystems (and in particular their metadata
>>> servers) can sustain the same workload as high volume
>>> transactional DBMSes. [...]

> Index pool distributed over a large number of NVMe OSDs?
> Multiple, dedicated RGW instances that only run LC?

As long as that guarantees a total maximum network+write
latency of well below 800µs across all of them, it might result
in a committed rate of one deletion every 800µs (provided there
are no peaks, and the metadata servers only do deletions and no
creations, opens, or any "maintenance" operations like checks
and backups). :-)
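
For the record, the back-of-envelope arithmetic behind that
800µs figure (a minimal sketch, assuming the deletions have to
commit one after another):

    # 100M deletions/day, serialized, gives the per-op budget.
    deletions_per_day = 100_000_000
    seconds_per_day = 24 * 60 * 60              # 86,400
    rate = deletions_per_day / seconds_per_day  # ~1,157/s, "around 1,200"
    budget_us = 1_000_000 / rate                # ~864µs per deletion
    print(f"{rate:.0f} del/s, {budget_us:.0f} µs budget per deletion")

So the end-to-end latency per committed deletion has to stay
below roughly 860µs just to keep up, with no headroom left over
for anything else.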

Sometimes I somewhat seriously suggest entirely RAM-based
metadata OSDs, which, given a suitable environment, may be
feasible. But I still wonder why "So many people seem to think
... can sustain the same workload as high volume transactional
DBMSes" :-).