Re: AW: AW: KVM storage cluster

2018-02-02 Thread Andrija Panic
From my admittedly brief reading about ScaleIO a few months ago, it uses RAM
or similar for write caching; basically, you write to RAM or some other ultra
fast temporary tier (NVMe, etc.) and the data is later flushed to the durable
part of the storage.

I assume it's the PM1633a, not 1663a? -
http://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZILS1T9HEJH/ (?)
That drive can barely do 35K write IOPS per spec... and in my humble
experience with Samsung, you can hardly ever reach the spec sheet numbers,
even with a locally attached SSD, a local filesystem and plenty of CPU
available.

So the writes must be landing in RAM (back-of-the-envelope, assuming two data
copies across your six drives: ~137K client write IOPS becomes ~274K backend
writes, against roughly 6 x 35K = ~210K IOPS of raw spec, so the drives alone
don't get you there). Make sure you saturate the benchmark enough that the
flushing process kicks in; only then will the numbers be representative of a
cluster under constant IO load.
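
As a rough sketch of what I mean (the device path, working-set size and
runtime below are assumptions - adjust them to your volume), a longer
write-only run should outlast any RAM/NVMe write buffer:

  # hypothetical sustained random-write job meant to outrun the write cache;
  # /dev/scinia, the 256G working set and the 10-minute runtime are assumptions
  fio --name=flush-test --readwrite=randwrite --bs=4k --direct=1 \
      --ioengine=libaio --numjobs=4 --iodepth=128 \
      --filename=/dev/scinia --size=256G \
      --time_based --runtime=600 \
      --norandommap --randrepeat=0 --group_reporting

If the IOPS drop sharply partway through such a run, the cache has filled up
and the flush to the SSDs has become the bottleneck.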

Cheers


On 2 February 2018 at 15:56, Ivan Kudryavtsev 
wrote:

> Swen, the performance looks awesome, but I still wonder where the magic is
> here, because AFAIK Ceph can't even come close, yet Red Hat bets on it...
> Could it be that ScaleIO doesn't wait for replication to complete before
> acknowledging the IO, or that some other trick is in play?
>
> On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
> s.brues...@proio.com> wrote:
>
> > Hi Ivan,
> >
> >
> >
> > it is a 50/50 read-write mix. Here is the fio command I used:
> >
> > fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k --invalidate=1
> > --group_reporting --direct=1 --filename=/dev/scinia --time_based
> > --runtime= --ioengine=libaio --numjobs=4 --iodepth=256 --norandommap
> > --randrepeat=0 --exitall
> >
> >
> >
> > Result was:
> >
> > IO Workload 274.000 IOPS
> >
> > 1,0 GB/s transfer
> >
> > Read Bandwidth 536MB/s
> >
> > Read IOPS 137.000
> >
> > Write Bandwidth 536MB/s
> >
> > Write IOPS 137.000
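
As a quick consistency check on those numbers (my own arithmetic, assuming
the 4k block size from the fio command above and that the MB/s figures are
really MiB/s):

  # bytes per second at 137.000 IOPS x 4 KiB per IO
  echo $(( 137000 * 4096 ))   # 561152000, i.e. ~561 MB/s (~535 MiB/s) per direction

Read and write combined is then roughly 1.07 GiB/s, which matches the
reported ~1,0 GB/s for the 50/50 mix.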
> >
> >
> >
> > If you want me to run a different fio command just send it. My lab is
> > still running.
> >
> >
> >
> > Any idea how I can mount my ScaleIO volume in KVM?
> >
> >
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> >
> >
> > Swen
> >
> >
> >
> > *Von:* Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> > *Gesendet:* Freitag, 2. Februar 2018 02:58
> > *An:* users@cloudstack.apache.org; S. Brüseke - proIO GmbH <
> > s.brues...@proio.com>
> > *Betreff:* Re: AW: KVM storage cluster
> >
> >
> >
> > Hi, Swen. Are you testing with direct IO, or cached/buffered IO? Is it a
> > pure write test, or read/write with a certain mix percentage? I find it
> > hard to believe the deployment can do 250K write IOPS from a single-VM
> > test.
> >
> >
> >
> > On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH" <
> > s.brues...@proio.com> wrote:
> >
> > I am also testing ScaleIO on CentOS 7 with KVM. With a 3-node cluster
> > where each node has 2x 2TB SSDs (Samsung PM1663a), I get 250.000 IOPS
> > when doing a fio test (random 4k).
> > The only problem is that I do not know how to mount the shared volume so
> > that KVM can use it to store VMs on it. Does anyone know how to do that?
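
A minimal sketch of two common ways to consume such a volume from KVM (the
domain name "myvm", the OCFS2 choice and the mount path below are
assumptions, not something taken from this thread):

  # Option 1: hand the raw ScaleIO block device to a single guest
  virsh attach-disk myvm /dev/scinia vdb --cache none --live --persistent

  # Option 2: for a volume shared by several KVM hosts (e.g. as CloudStack
  # "SharedMountPoint" primary storage), put a cluster-aware filesystem on it
  # and mount it at the same path on every node
  mkfs.ocfs2 /dev/scinia            # needs a configured O2CB cluster stack
  mkdir -p /mnt/scaleio
  mount /dev/scinia /mnt/scaleio    # repeat on every KVM host

A plain ext4/xfs filesystem must not be mounted on more than one host at a
time, which is why a clustered filesystem (or per-VM raw devices) is needed
for the shared case.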
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen
> >
> > -Ursprüngliche Nachricht-
> > Von: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Gesendet: Donnerstag, 1. Februar 2018 22:00
> > An: users 
> > Betreff: Re: KVM storage cluster
> >
> >
> > a bit late, but:
> >
> > - for any IO-heavy (even medium) workload, try to avoid Ceph, no offence;
> > it simply takes a lot of $$$ to make Ceph perform under random IO (note
> > that RHEL and the vendors only publish reference architectures with
> > SEQUENTIAL benchmark workloads, not random) - not to mention the huge
> > list of bugs we hit back in the day (basically a single great guy handled
> > the Ceph integration for CloudStack, with not much help from other
> > committers, if I'm not mistaken, afaik...)
> > - NFS gives better performance, but no magic... (it is, however, the best
> > supported option, code-wise and bug-wise :)
> > - and for top-notch performance (at some $$$ cost) SolidFire is the way
> > to go (we have tons of IO-heavy customers, so it really is THE solution
> > for us, after living with Ceph and then NFS on SSDs, etc.), and it
> > provides guaranteed IOPS etc...
> >
> > Cheers.
> >
> > On 7 January 2018 at 22:46, Grégoire Lamodière 
> > wrote:
> >
> > > Hi Vahric,
> > >
> > > Thank you. I will have a look on it.
> > >
> > > Grégoire
> > >
> > >
> > >
> > > Envoyé depuis mon smartphone Samsung Galaxy.
> > >
> > >
> > >  Message d'origine 
> > > De : Vahric MUHTARYAN  Date : 07/01/2018 21:08
> > > (GMT+01:00) À : users@cloudstack.apache.org Objet : Re: KVM storage
> > > cluster
> > >
> > > Hello Grégoire,
> > >
> > > I suggest you look at EMC ScaleIO for block-based operations. It has a
> > > free edition too! And as block storage it works better than Ceph ;)
> > >
> > > Regards
> > > VM
> > >
> > > On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
> > >
> > > Hi Ivan,
> > >
> > > Thank you for your quick reply.
> 

Re: AW: AW: KVM storage cluster

2018-02-02 Thread Ivan Kudryavtsev
Swen, the performance looks awesome, but I still wonder where the magic is
here, because AFAIK Ceph can't even come close, yet Red Hat bets on it...
Could it be that ScaleIO doesn't wait for replication to complete before
acknowledging the IO, or that some other trick is in play?

On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:

> Hi Ivan,
>
>
>
> it is a 50/50 read-write mix. Here is the fio command I used:
>
> fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k --invalidate=1
> --group_reporting --direct=1 --filename=/dev/scinia --time_based
> --runtime= --ioengine=libaio --numjobs=4 --iodepth=256 --norandommap
> --randrepeat=0 --exitall
>
>
>
> Result was:
>
> IO Workload 274.000 IOPS
>
> 1,0 GB/s transfer
>
> Read Bandwidth 536MB/s
>
> Read IOPS 137.000
>
> Write Bandwidth 536MB/s
>
> Write IOPS 137.000
>
>
>
> If you want me to run a different fio command just send it. My lab is
> still running.
>
>
>
> Any idea how I can mount my ScaleIO volume in KVM?
>
>
>
> Mit freundlichen Grüßen / With kind regards,
>
>
>
> Swen
>
>
>
> *Von:* Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> *Gesendet:* Freitag, 2. Februar 2018 02:58
> *An:* users@cloudstack.apache.org; S. Brüseke - proIO GmbH <
> s.brues...@proio.com>
> *Betreff:* Re: AW: KVM storage cluster
>
>
>
> Hi, Swen. Are you testing with direct IO, or cached/buffered IO? Is it a
> pure write test, or read/write with a certain mix percentage? I find it
> hard to believe the deployment can do 250K write IOPS from a single-VM test.
>
>
>
> On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH" <
> s.brues...@proio.com> wrote:
>
> I am also testing ScaleIO on CentOS 7 with KVM. With a 3-node cluster where
> each node has 2x 2TB SSDs (Samsung PM1663a), I get 250.000 IOPS when doing
> a fio test (random 4k).
> The only problem is that I do not know how to mount the shared volume so
> that KVM can use it to store VMs on it. Does anyone know how to do that?
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
> -Ursprüngliche Nachricht-
> Von: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Gesendet: Donnerstag, 1. Februar 2018 22:00
> An: users 
> Betreff: Re: KVM storage cluster
>
>
> a bit late, but:
>
> - for any IO-heavy (even medium) workload, try to avoid Ceph, no offence;
> it simply takes a lot of $$$ to make Ceph perform under random IO (note
> that RHEL and the vendors only publish reference architectures with
> SEQUENTIAL benchmark workloads, not random) - not to mention the huge list
> of bugs we hit back in the day (basically a single great guy handled the
> Ceph integration for CloudStack, with not much help from other committers,
> if I'm not mistaken, afaik...)
> - NFS gives better performance, but no magic... (it is, however, the best
> supported option, code-wise and bug-wise :)
> - and for top-notch performance (at some $$$ cost) SolidFire is the way to
> go (we have tons of IO-heavy customers, so it really is THE solution for
> us, after living with Ceph and then NFS on SSDs, etc.), and it provides
> guaranteed IOPS etc...
>
> Cheers.
>
> On 7 January 2018 at 22:46, Grégoire Lamodière 
> wrote:
>
> > Hi Vahric,
> >
> > Thank you. I will have a look on it.
> >
> > Grégoire
> >
> >
> >
> > Envoyé depuis mon smartphone Samsung Galaxy.
> >
> >
> >  Message d'origine 
> > De : Vahric MUHTARYAN  Date : 07/01/2018 21:08
> > (GMT+01:00) À : users@cloudstack.apache.org Objet : Re: KVM storage
> > cluster
> >
> > Hello Grégoire,
> >
> > I suggest you look at EMC ScaleIO for block-based operations. It has a
> > free edition too! And as block storage it works better than Ceph ;)
> >
> > Regards
> > VM
> >
> > On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
> >
> > Hi Ivan,
> >
> > Thank you for your quick reply.
> >
> > I'll have a look at Ceph and its performance.
> > As you mentioned, 2 DRBD NFS servers could do the job, but if I can
> > avoid using (and maintaining) 2 blades just for passing blocks to NFS,
> > that is even better.
> >
> > Thanks for pointing to ceph.
> >
> > Grégoire
> >
> >
> >
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> > -Message d'origine-
> > De : Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> > Envoyé : dimanche 7 janvier 2018 15:20
> > À : users@cloudstack.apache.org
> > Objet : Re: KVM storage cluster
> >
> > Hi, Grégoire,
> > You could:
> > - use local storage if you like, so every compute node has its own
> > space (one LUN per host);
> > - deploy Ceph on the same compute nodes (distributing the raw
> > devices among the nodes);
> > - dedicate a node as an NFS server (or two servers with DRBD).
> >
> > I don't think a shared filesystem is a good option; even clustered LVM
> > is a big pain.
> >
> > 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière  >:
> >
> >