Re: [Openstack-operators] Openstack and Ceph

2017-02-17 Thread Alex Hübner
Are these nodes connected to dedicated or shared (in the sense that other
workloads run over them) network switches? How fast (1G, 10G or faster) are
the interfaces? Also, how much RAM are you using? There's a rule of thumb
that says you should dedicate at least 1 GB of RAM for every 1 TB of raw disk
space. How are the clients consuming the storage? Are they virtual machines?
Are you using iSCSI to connect them? Are these clients the same ones you're
testing against your regular SAN storage, and are they positioned in a
similar fashion (i.e. over a steady network channel)? What Ceph version are
you using?
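
To put that rule of thumb in numbers (the disk counts below are purely
hypothetical, just to show the arithmetic):

  # e.g. 12 x 4 TB HDDs per node => ~48 TB raw => budget ~48 GB of RAM for
  # the OSDs alone, on top of what the OS and any other daemons need
  free -g                         # RAM actually present on the node
  lsblk -b -d -o NAME,SIZE,ROTA   # raw size of each disk, in bytes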

Finally, replicas are normally faster than erasure coding, so you're good
on that front. It's *never* a good idea to enable RAID cache, even when it
apparently improves IOPS (the magic of Ceph relies on the cluster, its
network and the number of nodes; don't approach the nodes as if they were
isolated storage servers). Also, RAID0 should only be used as a last resort
for cases where the disk controller doesn't offer a JBOD mode.
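
If it helps, this is roughly what I mean by sticking to a plain replicated
pool (the pool name and PG count below are only placeholders, size them for
your own cluster):

  ceph osd pool create volumes 512 512 replicated   # replicated pool, 512 PGs
  ceph osd pool set volumes size 3                  # three copies of every object
  ceph osd pool set volumes min_size 2              # still serve I/O with two copies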

[]'s
Hubner

On Fri, Feb 17, 2017 at 7:19 AM, Vahric Muhtaryan 
wrote:

> Hello All ,
>
> First, thanks for your answers. Looks like everybody is a Ceph lover :)
>
> I believe you have already run some tests and have some results. Until now
> we have used traditional storage arrays like IBM V7000, XIV or NetApp, and
> we have been very happy with the IOPS we get and with providing the same
> performance to all instances.
>
> We saw that each OSD eats a lot of CPU, and when multiple clients try to
> get the same performance from Ceph it does not look possible; Ceph shares
> everything across the clients and we cannot reach the hardware's raw IOPS
> capacity with Ceph. For example, each SSD can do 90K IOPS, we have three
> per node and six nodes, so we should be getting better results than we do
> now!
>
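> For reference, a direct fio run against the raw device is how we would
> double-check that per-SSD number (the device name below is just an example,
> and the test overwrites data, so only run it against an unused disk):
>
> fio --name=raw-ssd --filename=/dev/sdX --ioengine=libaio --direct=1 \
>     --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
>     --time_based --group_reporting
>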
> Could you please share your hardware configs and IOPS tests, and advise
> whether our expectations are correct or not?
>
> We are using Kraken, almost all debug options are set to 0/0, and we have
> also modified op_tracker and some other ops-related configs.
>
> Our Hardware
>
> 6 x Node
> Each Node Have :
> 2 x Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz (16 cores total, HT enabled)
> 3 SSD + 12 HDD (the SSDs hold the journals, 4 HDDs per SSD)
> Each disk is configured as RAID 0 (we did not see any performance difference
> with the RAID card's JBOD mode, so we continued with RAID 0)
> The RAID card's write-back cache is also enabled, because it adds extra IOPS
> too!
>
> Our Test
>
> It is 100% random write.
> The Ceph pool is configured with 3 replicas. (We did not use 2 because at
> failover time the whole system stalled and we could not come up with good
> tuning for it; some of what we read also said that under high load OSDs can
> go down and come back up again, so we should take care of that too!)
>
> Test Command : fio --randrepeat=1 --ioengine=libaio --direct=1
> --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
> --numjobs=8 --readwrite=randwrite --group_reporting
>
> Achieved IOPS : 35K (single client)
> We tested with up to 10 clients, and Ceph shares the capacity fairly between
> them, at almost 4K IOPS each.
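>
> For completeness, the same test could also be pointed straight at an RBD
> image with fio's rbd engine (the pool and image names below are just
> examples), which takes the guest filesystem out of the picture:
>
> fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
>     --rw=randwrite --bs=4k --iodepth=256 --numjobs=8 --name=rbd-test \
>     --group_reporting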
>
> Thanks
> Regards
> Vahric Muhtaryan
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ceph vs gluster for block

2017-02-16 Thread Alex Hübner
Gluster for block storage is definitely not a good choice, especially for
VMs and OpenStack in general. Also, there are rumors all over the place
that RedHat will start to "phase out" Gluster in favor of CephFS, the "last
frontier" of the so-called "Unicorn Storage" (Ceph does everything). When
it comes to block, there's no better choice than Ceph in every single
scenario I can think of.

[]'s
Hubner

On Thu, Feb 16, 2017 at 4:39 PM, Mike Smith  wrote:

> Same experience here.  Gluster ‘failover’ time was an issue for us as well
> (rebooting one of the Gluster nodes caused unacceptable locking/timeouts for
> a period of time).  Ceph has worked well for us for nova ephemeral storage,
> Cinder volumes and Glance.  Just make sure you stay well ahead of running
> low on disk space!  You never want to run low on a Ceph cluster, because it
> will block writes until you add more disks/OSDs.
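>
> A couple of commands worth keeping an eye on for that (the thresholds are
> whatever makes sense for your cluster):
>
> ceph df              # overall usage per pool
> ceph osd df          # per-OSD utilization, to spot OSDs filling up unevenly
> ceph health detail   # warns once OSDs cross the nearfull/full ratios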
>
> Mike Smith
> Lead Cloud Systems Architect
> Overstock.com 
>
>
>
> On Feb 16, 2017, at 11:30 AM, Jonathan Abdiel Gonzalez Valdebenito <
> jonathan.abd...@gmail.com> wrote:
>
> Hi Vahric!
>
> We tested GlusterFS a few years ago and the latency was high, the IOPS were
> poor and every node had high CPU usage; granted, that was a few years ago.
>
> After a lot of tests with fio we ended up with a Ceph cluster, so my advice
> is to use a Ceph cluster without any doubts.
>
> Regards,
>
> On Thu, Feb 16, 2017 at 1:32 PM Vahric Muhtaryan 
> wrote:
>
>> Hello All ,
>>
>> We have been testing Ceph for a long time, and today we also wanted to test
>> GlusterFS.
>>
>> The interesting thing is that with a single client we cannot get the IOPS we
>> get from the Ceph cluster (from Ceph we get a max of 35K IOPS for 100% random
>> write, while Gluster gave us 15-17K).
>> But interestingly, when we add a second client it gets the same IOPS as the
>> first one, meaning the overall performance doubles. We couldn't test with
>> more clients, but another interesting point is that GlusterFS does not
>> use/eat CPU like Ceph does; only a few percent of the CPU is used.
>>
>> I would like to ask: with OpenStack, does anybody use GlusterFS for instance
>> workloads?
>> Has anybody used both of them in production and can compare, or share
>> experience?
>>
>> Regards
>> Vahric Muhtaryan
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators