On Wed, Sep 23, 2020 at 3:44 AM <vita...@yourcmc.ru> wrote:
>
> Hi!
>
> After almost a year of development in my spare time, I present my own 
> software-defined block storage system: Vitastor - https://vitastor.io
>
> I designed it to be similar to Ceph in many ways: it also has Pools, PGs, 
> OSDs, different coding schemes, rebalancing and so on. However, it's much 
> simpler and much faster. In a test cluster with SATA SSDs it achieved a Q1T1 
> latency of 0.14ms, which is especially impressive compared to Ceph RBD's 1ms 
> for writes and 0.57ms for reads. In an "iops saturation" parallel-load 
> benchmark it reached 895k read / 162k write iops, compared to Ceph's 
> 480k / 100k on the same hardware, but the most interesting part was CPU 
> usage: Ceph OSDs were using 40 of the 64 CPU cores on each node, while 
> Vitastor was using only 4.
>
> Of course, it's an early pre-release, which means that, for example, it lacks 
> snapshot support and other useful features. However, the base is finished: it 
> works and runs QEMU VMs. I like the design and plan to develop it further.
>
> There are more details in the README file, which is currently served at 
> https://vitastor.io

Very interesting.

Could you please add more details to the README file, as listed below?

1. Network benchmarks: the achievable throughput and latency (the sketch
after this list shows the kind of raw figures I mean).
2. The type of switch you used, and whether any latency tuning was applied;
if so, please describe it.
3. The network MTU.
4. The utilization figures for SSDs and network interfaces during each test.
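For items 1 and 3, something along the lines of the sketch below would be
enough. It is only a rough illustration, not anything from your README: the
peer address, interface name and the iperf3 JSON fields I read are my
assumptions, and iperf3 has to be running as a server ("iperf3 -s") on the
other node.

#!/usr/bin/env python3
# Rough sketch: gather raw network latency, throughput and MTU figures
# for the cluster interconnect. PEER and IFACE are placeholders.
import json
import subprocess

PEER = "10.0.0.2"   # placeholder: address of another storage node
IFACE = "eth0"      # placeholder: cluster-facing network interface

# Round-trip latency: 100 pings, 0.2s apart, summary line only.
ping = subprocess.run(["ping", "-c", "100", "-i", "0.2", "-q", PEER],
                      capture_output=True, text=True, check=True)
print(ping.stdout.strip().splitlines()[-1])  # "rtt min/avg/max/mdev = ..."

# TCP throughput: 10-second iperf3 run with JSON output.
iperf = subprocess.run(["iperf3", "-c", PEER, "-t", "10", "-J"],
                       capture_output=True, text=True, check=True)
bps = json.loads(iperf.stdout)["end"]["sum_received"]["bits_per_second"]
print("throughput: %.2f Gbit/s" % (bps / 1e9))

# MTU of the cluster-facing interface.
with open("/sys/class/net/%s/mtu" % IFACE) as f:
    print("mtu:", f.read().strip())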

Also, given that the scope of the project only includes block storage,
I think it would be fair to ask for a comparison with DRBD 9 and
possibly Linstor, not only with Ceph.
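For the latency part of such a comparison, Q1T1 figures like the ones you
quote usually come from a single fio job with iodepth=1. A minimal sketch of
that kind of run is below; the pool and image names are placeholders, and I'm
assuming a fio build with the rbd ioengine. Pointing the same job at a DRBD
device (ioengine=libaio with a filename= option) should give directly
comparable numbers.

#!/usr/bin/env python3
# Rough sketch: a Q1T1 (queue depth 1, one job) 4k random-write latency test
# against an RBD image, with the result parsed from fio's JSON output.
# Pool/image names are placeholders; requires fio built with rbd support.
import json
import subprocess

cmd = [
    "fio", "--name=q1t1", "--ioengine=rbd",
    "--pool=rbd", "--rbdname=bench",      # placeholders
    "--rw=randwrite", "--bs=4k", "--direct=1",
    "--iodepth=1", "--numjobs=1",
    "--time_based", "--runtime=60",
    "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
write = json.loads(result.stdout)["jobs"][0]["write"]
print("avg write latency: %.3f ms, iops: %.0f"
      % (write["lat_ns"]["mean"] / 1e6, write["iops"]))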

-- 
Alexander E. Patrakov
CV: http://pc.cd/PLz7
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
