Re: Time-out when creating a template from a snapshot

2018-02-02 Thread Vladimir Melnik
Thanks for sharing!

I think it's not only an SQL-related issue. I raised the timeout thresholds of
haproxy, and there have been no DB exceptions since then, but something is
terminating the "cp" process on the SSVM, leaving the template incomplete. There
are no messages about the database in the log-file and ACS thinks that the
operation has finished successfully.

I'm pretty sure that haproxy was half of the problem, but the second half is
somewhere inside the SSVM.
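
For anyone hitting the same haproxy half: the directives involved are the
client/server timeouts. A minimal sketch of what raising them looks like
(example values, assuming a stock haproxy.cfg in front of MySQL, not the exact
figures from this setup):

defaults
    timeout connect 10s
    # must exceed the longest snapshot/template operation, since haproxy
    # otherwise cuts the idle MySQL connection mid-job
    timeout client  4h
    timeout server  4h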

On Thu, Feb 01, 2018 at 09:17:51PM +0100, Andrija Panic wrote:
> Vladimir,
> 
> the original error definitely looks like a MySQL timeout (I assume because of
> HAPROXY in the middle), and we also had this setup originally (MGMT server
> using HAPROXY on top of Galera nodes...), but it was confirmed to be an
> issue no matter what we changed on HAproxy or MySQL, and at that time we
> didn't find the solution (MGMT was then set to hit the first Galera node
> directly... a poor man's solution) - the problem is that, it seems, when a
> snapshot starts (and it can take even hours - or a conversion to a template,
> same thing...) Java keeps the DB connection/transaction open the whole time
> (which is a strange approach to my mind, for such long image-converting actions)
> 
> If I'm not wrong, the snapshot-to-template conversion should be done via the
> agent node, not the SSVM?
> Ping here if you find a solution.
> 
> Btw, for some actions with images the real timeout = 2 x the wait parameter :)
> so change that to 2000 and check whether the action fails after 4000 sec.
> 
> 
> 
> On 1 February 2018 at 13:00, Vladimir Melnik  wrote:
> 
> > Thanks a lot, any help will be so much appreciated!
> >
> > On Wed, Jan 31, 2018 at 05:23:25PM +, Nux! wrote:
> > > It's possible there are timeouts being hit somewhere. I'd take this to
> > dev@ to be honest, I am not very familiar with the ssvm internals.
> > >
> > > --
> > > Sent from the Delta quadrant using Borg technology!
> > >
> > > Nux!
> > > www.nux.ro
> > >
> > > - Original Message -
> > > > From: "Vladimir Melnik" 
> > > > To: "users" 
> > > > Sent: Wednesday, 31 January, 2018 12:42:01
> > > > Subject: Re: Time-out when creating a template from a snapshot
> > >
> > > > No, it doesn't seem to be a database-related issue.
> > > >
> > > > This time I haven't got any error messages at all. Moreover, I see this
> > > > template as available in the templates' list and there's the following
> > > > message in the log-file:
> > > > 2018-01-31 14:30:09,862 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-1:ctx-b39838a2 job-1485421 ctx-3b9d9083) (logid:cb74bce4) Complete async job-1485421, jobStatus: SUCCEEDED, resultCode: 0, result: org.apache.cloudstack.api.response.TemplateResponse/template/{"id":"b69c65b3-70d6-4000-934c-fac9e887d3ef","name":"Tucha#2018012713000408","displaytext":"Tucha#2018012713000408","ispublic":false,"created":"2018-01-31T13:30:09+0200","isready":true,"passwordenabled":true,"format":"QCOW2","isfeatured":false,"crossZones":false,"ostypeid":"b5490e1c-bd31-11e6-b74f-06973a00088a","ostypename":"Windows Server 2012 R2 (64-bit)","account":"admin#lite","zoneid":"c8d773fa-76ca-4637-8ecf-88656444fc86","zonename":"z2.tucha13.net","status":"Download Complete","size":375809638400,"templatetype":"USER","hypervisor":"KVM","domain":"ROOT","domainid":"b514ef44-bd2f-11e6-b74f-06973a00088a","isextractable":false,"sourcetemplateid":"ba26d2a9-5e2f-468d-8a38-df71a7811ee8","details":{"memoryOvercommitRatio":"1.0","cpuNumber":"4","cpuSpeed":"2399","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"10","memory":"12288"},"sshkeyenabled":false,"isdynamicallyscalable":false,"tags":[]}
> > > >
> > > > But at the same time I see that the template's file is smaller than the
> > > > snapshot's file:
> > > > -rw-r--r-- 1 root root 311253204992 Jan 31 00:14 /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/snapshots/4391/12401/e7364ecf-56f2-451d-ba2e-537b9465097f
> > > > -rw-r--r-- 1 root root 195583541248 Jan 31 12:30 /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/template/tmpl/3121/473/e7364ecf-56f2-451d-ba2e-537b9465097f.qcow2
> > > >
> > > > The oddest thing is that the "cp" process in the SSVM is being terminated
> > > > exactly an hour after its start. Who would be doing that each time I'm
> > > > trying to create a template? Isn't it being done by some script on the
> > > > SSVM itself?
> > > >
> > > >
> > > > On Mon, Jan 29, 2018 at 05:36:54PM +0200, Vladimir Melnik wrote:
> > > >> Thank you, Lucian! My MySQL timeout thresholds are higher than 1 hour,
> > > >> but there's HAproxy between ACS and MySQL, so I've changed haproxy's
> > > >> timeouts and now will see what happens in an hour :-)
> > > >>
> > > >> On Mon, Jan 29, 2018 at 11:47:31AM +, Nux! wrote:
> > > 

Re: Time-out when creating a template from a snapshot

2018-02-02 Thread Andrija Panic
That might make sense, yes.

Anyway, check that parameter and let us know.
Again, in our case (we use SolidFire, so snapshots are kept on SF, not on
Secondary Storage) the snapshot-to-template copy process is handled by the
agent (KVM host), not the SSVM.
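
For checking/raising it, something along these lines should do (a sketch,
assuming CloudMonkey; "wait" is the global settings name):

cloudmonkey list configurations name=wait
cloudmonkey update configuration name=wait value=2000
# restart the management server so the new value is picked up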

Cheers

On 2 February 2018 at 15:02, Vladimir Melnik  wrote:

> Thanks for sharing!
>
> I think it's not only an SQL-related issue. I raised the timeout thresholds
> of haproxy, and there have been no DB exceptions since then, but something is
> terminating the "cp" process on the SSVM, leaving the template incomplete.
> There are no messages about the database in the log-file and ACS thinks
> that the operation has finished successfully.
>
> I'm pretty sure that haproxy was half of the problem, but the second half
> is somewhere inside the SSVM.

Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Ugo Vasi

Hi Paul,
do I have to destroy console-proxy too?
Could the problem be caused by the certificate chain?
I've got two intermediate certificates between the root and the leaf one;
could this cause problems?
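
One quick way to check what chain the proxy actually serves (a sketch; the
hostname is an example from earlier in the thread):

$ openssl s_client -connect 123-123-123-123.domain.com:443 -showcerts < /dev/null
# the "Certificate chain" section should show the leaf plus both
# intermediates; "unable to get local issuer certificate" usually means
# an intermediate is missing from what was uploaded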


Thanks

On 02/02/2018 13:18, Paul Angus wrote:

Hi Ugo,
Have you destroyed your sec storage VM and let CloudStack recreate it? A
stop-start isn't usually enough to reconfigure certificates.

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue



-Original Message-
From: Ugo Vasi [mailto:ugo.v...@procne.it]
Sent: 02 February 2018 11:37
To: users@cloudstack.apache.org; Benjamin Naber 
Subject: Re: Failing to enable SSL/HTTPS on console proxy vm

Hi Ben,
I'm sure that the DNS is resolving the right IP (aaa-bbb-ccc-ddd.domain.com ->
aaa.bbb.ccc.ddd); I tried with wget using the same src as the iframe (log is masked):

$ wget https://123-123-123-123.domain.com/ajax?token=...(snipped)
--2018-02-02 10:24:23-- https://123-123-123-123.domain.com/ajax?token=...
Resolving 123-123-123-123.domain.com (123-123-123-123.domain.com)...
123.123.123.123
Connecting to 123-123-123-123.domain.com 
(123-123-123-123.domain.com)|123.123.123.123|:443...

here the command hangs until a timeout.



On 02/02/2018 11:43, Benjamin Naber wrote:

Hi Ugo,

you need a DNS record for the public IP address the console proxy has been
allocated.
It should look like this: 80-190-44-22.domain.com, otherwise the iframe is
denied loading in case of an SSL error.
In the global setting "Console proxy url domain" set *.domain.com, restart the
management server, and it should work.

Kind Regards

Ben


Ugo Vasi  wrote on 2 February 2018 at 11:26:


Hi all,
I had the same problem installing the wildcard certificate.

I tried to set the consoleproxy.url.domain in global settings but now
the console interface inside the iframe does not respond...

The DNS records are OK.




On 16/06/2016 18:10, Andy Dills wrote:

I have this working perfectly.

Couple of key things that are not mentioned in the
documentation:

- You need to set consoleproxy.url.domain to *.domain.com for whatever domain 
you're using. Do this before re-uploading your SSL certificate. The SSL upload 
dialogue doesn't set this value as it should.

- You need a wildcard certificate for that domain.

Assuming you setup the proper DNS records, it should then work.

I'm open to follow up questions if anybody is struggling with this.

Thanks,
Andy

Sent from my iPhone


On Jun 16, 2016, at 12:01 PM, Will Stevens  wrote:

We have been having issues with this for as long as I can remember
(on both ACS and CCP).  In order to get it to work you have to
'trust unsafe scripts' or whatever by clicking the shield in the
URL bar in the top right (maybe that is chrome).

I don't know that there is a solution, but if there is, I am all ears...

*Will STEVENS*
Lead Developer

*CloudOps* *| *Cloud Solutions Experts
420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6 w cloudops.com *|*
tw @CloudOps_


On Thu, Jun 16, 2016 at 11:54 AM, Nux!  wrote:

Hi,

Is there any particular voodoo involved in getting the $subject to
work correctly on 4.8.0?
I've uploaded the Comodo wildcard cabundle, crt and key in the
Infrastructure page, the systemvms have rebooted.
They came back fine and nothing dodgy in the logs, but when I open
the console of a VM Firefox will say there are insecure contents
loaded and will not display the terminal ajax thingy.
View source shows an iframe linking http://1.2.3.4 instead of
https://1-2-3-4.wildcarddomain.tld.

Apache HTTPD and Tomcat had no issues with these certs.

Is there something that I am missing?

Thanks


--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro





--

*Ugo Vasi* / System Administrator
ugo.v...@procne.it 




*Procne S.r.l.*
+39 0432 486 523
via Cotonificio, 45
33010 Tavagnacco (UD)
www.procne.it 



Re: AW: AW: KVM storage cluster

2018-02-02 Thread Ivan Kudryavtsev
Swen, the performance looks awesome, but I still wonder where the magic is here,
because AFAIK Ceph can't even come close, yet Red Hat bets
on it... Might it be that ScaleIO doesn't wait for the replication to
complete before acknowledging the IO, or that some other trick is used?

On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:

> Hi Ivan,
>
> it is a 50/50 read-write mix. Here is the fio command I used:
>
> fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k --invalidate=1
> --group_reporting --direct=1 --filename=/dev/scinia --time_based
> --runtime= --ioengine=libaio --numjobs=4 --iodepth=256 --norandommap
> --randrepeat=0 --exitall
>
> Result was:
> IO Workload 274,000 IOPS
> 1.0 GB/s transfer
> Read Bandwidth 536MB/s
> Read IOPS 137,000
> Write Bandwidth 536MB/s
> Write IOPS 137,000
>
>
>
> If you want me to run a different fio command just send it. My lab is
> still running.
>
>
>
> Any idea how I can mount my ScaleIO volume in KVM?
>
>
>
> Mit freundlichen Grüßen / With kind regards,
>
>
>
> Swen
>
>
>
> *From:* Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> *Sent:* Friday, 2 February 2018 02:58
> *To:* users@cloudstack.apache.org; S. Brüseke - proIO GmbH <
> s.brues...@proio.com>
> *Subject:* Re: AW: KVM storage cluster
>
>
>
> Hi, Swen. Do you test with direct or cached ops, or buffered ones? Is it a
> write test, or rw with a certain rw percentage? I can hardly believe the
> deployment can do 250k IOPS of writes with a single-VM test.
>
>
>
> On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH" <
> s.brues...@proio.com> wrote:
>
> I am also testing with ScaleIO on CentOS7 with KVM. With a 3-node cluster
> where each node has 2x 2TB SSD (Samsung PM1663a) I get 250,000 IOPS when
> doing a fio test (random 4k).
> The only problem is that I do not know how to mount the shared volume so
> that KVM can use it to store VMs on it. Does anyone know how to do it?
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
> -Original Message-
> From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Sent: Thursday, 1 February 2018 22:00
> To: users
> Subject: Re: KVM storage cluster
>
>
> a bit late, but:
>
> - for any IO-heavy (even medium...) workload, try to avoid CEPH; no
> offence, it simply takes a lot of $$$ to make CEPH perform in random-IO
> worlds (imagine, RHEL and vendors provide only a reference architecture with
> a SEQUENTIAL benchmark workload, not random) - not to mention a huge list of
> bugs we hit back in the days (simply, one single great guy handled the CEPH
> integration for CloudStack, but otherwise there was not a lot of help from
> other committers, if not mistaken, afaik...)
> - NFS: better performance but not magic... (but best supported, code
> wise, bug-less wise :)
> - and for top notch (costs some $$$) SolidFire is the way to go (we have
> tons of IO-heavy customers, so this is THE solution really, after living with
> CEPH, then NFS on SSDs, etc) and it provides guaranteed IOPS etc...
>
> Cheers.
>
> On 7 January 2018 at 22:46, Grégoire Lamodière 
> wrote:
>
> > Hi Vahric,
> >
> > Thank you. I will have a look on it.
> >
> > Grégoire
> >
> >
> >
> > Envoyé depuis mon smartphone Samsung Galaxy.
> >
> >
> >  Original message 
> > From: Vahric MUHTARYAN  Date: 07/01/2018 21:08
> > (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> > cluster
> >
> > Hello Grégoire,
> >
> > I suggest you look at EMC ScaleIO for block-based operations. It has a
> > free version too! And as block storage it works better than Ceph ;)
> >
> > Regards
> > VM
> >
> > On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
> >
> > Hi Ivan,
> >
> > Thank you for your quick reply.
> >
> > I'll have a look at Ceph and related perfs.
> > As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> > avoid using 2 blades just for passing blocks to NFS, this is even
> > better (and maintaining them as well).
> >
> > Thanks for pointing to ceph.
> >
> > Grégoire
> >
> >
> >
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> > -Original Message-
> > From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> > Sent: Sunday, 7 January 2018 15:20
> > To: users@cloudstack.apache.org
> > Subject: Re: KVM storage cluster
> >
> > Hi, Grégoire,
> > You could have
> > - local storage if you like, so every compute node could have own
> > space (one lun per host)
> > - to have Ceph deployed on the same compute nodes (distribute raw
> > devices among nodes)
> > - to dedicate certain node as NFS server (or two servers with
> > DRBD)
> >
> > I don't think that shared FS is a good option, even clustered LVM
> > is a big pain.
> >
> > 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière  >:
> >

AW: AW: AW: KVM storage cluster

2018-02-02 Thread S. Brüseke - proIO GmbH
Hi Andrija,

you are right, of course it is the Samsung PM1633a. I am not sure if this is really
only RAM. I let the fio command run for more than 30min and the IOPS did not drop.
I am using 6 SSDs in my setup, each with 35,000 IOPS random write max, so
ScaleIO can do 210,000 IOPS (write) at its best. fio shows around 140,000 IOPS
(read) max. The ScaleIO GUI shows me around 45,000 IOPS (read/write combined) per
SSD.

Do you have a different fio command I can run?

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Friday, 2 February 2018 16:04
To: users
Cc: S. Brüseke - proIO GmbH
Subject: Re: AW: AW: KVM storage cluster

From my extremely brief reading on ScaleIO a few months ago, they utilize RAM
or similar for write caching; basically, you write to RAM or another kind of
ultra-fast temp memory (NVMe, etc.) and it is later flushed to the durable part
of the storage.

I assume it's the 1633a, not the 1663a? -
http://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZILS1T9HEJH/ ( ?) This
one can barely do 35K IOPS of writes per spec... and based on my humble
experience with Samsung, you can hardly ever reach that specification, even
with a locally attached SSD and a lot of CPU available... (local filesystem)

So it must be RAM writing for sure... so make sure you saturate the benchmark
enough that the flushing process kicks in, and that the benchmark remains
meaningful for when you later have a constant IO load on the cluster.

Cheers



Re: Network ACL Lists

2018-02-02 Thread Dag Sonstebo
Hi Benjamin,

Not to my knowledge – that would be a security issue in itself since you would 
then announce to any user what ACL rules are in place for other users. 
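
For context, an ACL list is tied to a specific VPC at the API level, which is
why it cannot be shared globally. A sketch with CloudMonkey (the UUID is a
placeholder):

cloudmonkey create networkacllist name=web-tier vpcid=<vpc-uuid> description="web rules"
cloudmonkey list networkacllists vpcid=<vpc-uuid>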

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 02/02/2018, 10:06, "Benjamin Naber"  wrote:

Hi @all,

is there any way to create Network ACL Lists that are globally accessible?

Kind Regards

Benjamin



dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue



Re: AW: AW: KVM storage cluster

2018-02-02 Thread Andrija Panic
From my extremely brief reading on ScaleIO a few months ago, they utilize
RAM or similar for write caching; basically, you write to RAM
or another kind of ultra-fast temp memory (NVMe, etc.) and it is later flushed
to the durable part of the storage.

I assume it's the 1633a, not the 1663a? -
http://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZILS1T9HEJH/ ( ?)
This one can barely do 35K IOPS of writes per spec... and based on my humble
experience with Samsung, you can hardly ever reach that specification, even
with a locally attached SSD and a lot of CPU available... (local filesystem)

So it must be RAM writing for sure... so make sure you saturate the
benchmark enough that the flushing process kicks in, and that the
benchmark remains meaningful for when you later have a constant IO load on the
cluster.

Cheers
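
E.g., one way to do that with fio is to keep the run long and discard the
warm-up phase (a sketch, reusing the same device as in Swen's test; ramp_time
needs a reasonably recent fio):

fio --name=steady --readwrite=randwrite --bs=4k --direct=1 \
    --filename=/dev/scinia --ioengine=libaio --numjobs=4 --iodepth=256 \
    --ramp_time=120 --time_based --runtime=1800 --group_reporting
# the first 2 minutes are excluded from the stats, so an initial burst
# absorbed by a RAM/NVMe write buffer no longer inflates the result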



Re: AW: AW: AW: KVM storage cluster

2018-02-02 Thread Ivan Kudryavtsev
I suppose Andrija is talking about the volume size; it should be much bigger
than the storage host's RAM.
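
I.e., if the test runs against a file instead of the raw device, pin the
working set explicitly (a sketch; the path and the 1T figure are examples, the
point is only that --size should exceed the hosts' combined cache):

fio --name=bigspan --readwrite=randrw --rwmixwrite=50 --bs=4k --direct=1 \
    --filename=/mnt/scaleio/testfile --size=1T \
    --ioengine=libaio --numjobs=4 --iodepth=256 --group_reporting
# with a working set far larger than RAM, the numbers reflect the
# backing SSDs rather than any write-back cache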

On 2 Feb 2018 at 10:17 PM, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:

> Hi Andrija,
>
> you are right, of course it is the Samsung PM1633a. I am not sure if this is
> really only RAM. I let the fio command run for more than 30min and the IOPS
> did not drop.
> I am using 6 SSDs in my setup, each with 35,000 IOPS random write max, so
> ScaleIO can do 210,000 IOPS (write) at its best. fio shows around 140,000
> IOPS (read) max. The ScaleIO GUI shows me around 45,000 IOPS (read/write
> combined) per SSD.
>
> Do you have a different fio command I can run?
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>

RE: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Paul Angus
Hi Ugo,
Have you destroyed your sec storage VM and let CloudStack recreate it? A
stop-start isn't usually enough to reconfigure certificates.
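
E.g. via CloudMonkey (a sketch; CloudStack recreates the system VMs on its own
after a destroy):

cloudmonkey list systemvms systemvmtype=secondarystoragevm
cloudmonkey destroy systemvm id=<ssvm-uuid>
# same procedure for the console proxy: systemvmtype=consoleproxy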

paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue



Re: kvm live volume migration

2018-02-02 Thread Andrija Panic
@Dag, you might want to check with Mike Tutkowski how he implemented this
for the "online storage migration" from other storages (CEPH and NFS are
implemented as sources so far) to SolidFire.

We are doing it exactly the same demo/manual way (this is what Mike sent
me back in the day), so perhaps you want to see how to translate this into
something general (ANY-to-ANY storage migration) inside CloudStack.

Cheers

On 2 February 2018 at 10:28, Dag Sonstebo 
wrote:

> All
>
> I am doing a bit of R&D around this for a client at the moment. I am
> semi-successful in getting live migrations to different storage pools to
> work. The method I’m using is as follows – this does not take into account
> any efficiency optimisation around the disk transfer (which is next on my
> list). The below should answer your question Eric about moving to a
> different location – and I am also working with your steps to see where I
> can improve the following. Keep in mind all of this is external to
> CloudStack – although CloudStack picks up the destination KVM host
> automatically it does not update the volume tables etc., neither does it do
> any housekeeping.
>
> 1) Ensure the same network bridges are up on source and destination –
> these are found with:
>
> [root@kvm1 ~]# virsh dumpxml 9 | grep source
>   
>   
>   
>   
>
> So from this make sure breth1-725 is up on the destination host (do it
> the hard way, or cheat and spin up a VM from the same account and network on
> that host)
>
> 2) Find size of source disk and create stub disk in destination (this part
> can be made more efficient to speed up disk transfer – by doing similar
> things to what Eric is doing):
>
> [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 32M
> cluster_size: 65536
> backing file: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-eaec-11e7-800b-06b4a401075c
>
> ##
>
> [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create -f qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2 size=8589934592 encryption=off cluster_size=65536
> [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 448K
> cluster_size: 65536
>
> 3) Rewrite the new VM XML file for the destination with:
> a) New disk location, in this case this is just a new path (Eric – this
> answers your question)
> b) Different IP addresses for VNC – in this case 10.0.0.1 to 10.0.0.2
> and carry out migration.
>
> [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-b717-0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e 's/10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
>
> [root@kvm1 ~]# virsh migrate --live --persistent --copy-storage-all --xml /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --verbose --abort-on-error
> Migration: [ 25 %]
>
> 4) Once complete delete the source file. This can be done with extra
> switches on the virsh migrate command if need be.
> = = =
>
> In the simplest tests this works – the destination VM remains online and has
> storage in the new location – but it's not persistent – sometimes the
> destination VM ends up in a paused state, and I'm working on how to get
> around this. I also noted virsh migrate has a migrate-setmaxdowntime option
> which I think can be useful here.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 01/02/2018, 20:30, "Andrija Panic"  wrote:
>
> Actually, we have this feature (we call it internally
> online-storage-migration) to migrate volumes from CEPH/NFS to SolidFire
> (thanks to Mike Tutkowski)
>
> There is a libvirt mechanism where basically you start another PAUSED VM on
> another host (same name and same XML file, except the storage volumes are
> pointing to the new storage, different paths, etc., and maybe the VNC
> listening address needs to be changed or so) and then you issue, on the
> original host/VM, the live migrate command with a few parameters... libvirt
> will transparently handle the data copy from the source to the new volumes,
> and after migration the VM will be alive (with a new XML since it has new
> volumes) on the new host, while the original VM on the original host is
> destroyed.
>
> (I can send you the manual for this, which is related to SF, but the idea is
> the same and you can exercise this on e.g. 2 NFS volumes on 2 different
> storages)
>
> This mechanism doesn't exist in ACS in general (AFAIK), except for when
> migrating to SolidFire.
>
> Perhaps community/DEV can help extend 

AW: AW: AW: KVM storage cluster

2018-02-02 Thread S. Brüseke - proIO GmbH
Hi Ivan,

it is a standard installation without any tuning. We are using 2x 10Gbit
interfaces on all servers. I am not really sure how ScaleIO handles the
replication at the moment. I do not have any experience with Ceph either, so I
am unable to compare them.
FYI: if you use 128k instead of 4k blocks, the IOPS drop to 11,000.
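
A rough sanity check (assuming those are aggregate read+write numbers):

11,000 IOPS x 128 KiB ~ 1.4 GB/s
2 x 10 Gbit/s         ~ 2.5 GB/s raw, less in practice

so at 128k it may be the network path (especially once replication traffic is
added) rather than the SSDs that is saturating.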

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
Sent: Friday, 2 February 2018 15:57
To: S. Brüseke - proIO GmbH
Cc: users@cloudstack.apache.org
Subject: Re: AW: AW: KVM storage cluster

Swen, the performance looks awesome, but I still wonder where the magic is
here, because AFAIK Ceph can't even come close, yet Red Hat bets on it...
Might it be that ScaleIO doesn't wait for the replication to complete before
acknowledging the IO, or that some other trick is used?

On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:


Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Andrija Panic
You need to put all the certificates in the chain into the GUI dialog; in 4.8
this is supported in the GUI and made easy (god forbid doing the same work in
4.5 :)

I don't remember ATM, but I believe restarting the MGMT server was also
required or advised, since it builds up the SSL/trust chain (or whatever...),
so make sure you do it rather than not (I vaguely remember that MGMT would not
start due to some hacks I did with SSLs back in the days)
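
For a chain, the certificates go in one by one, root first; a sketch of the
sequence with CloudMonkey (ids/names/files are examples, assuming the
uploadCustomCertificate API):

cloudmonkey upload customcertificate id=1 name=root certificate="$(cat root.crt)" domainsuffix=domain.com
cloudmonkey upload customcertificate id=2 name=intermediate1 certificate="$(cat int1.crt)" domainsuffix=domain.com
cloudmonkey upload customcertificate id=3 name=intermediate2 certificate="$(cat int2.crt)" domainsuffix=domain.com
cloudmonkey upload customcertificate id=4 certificate="$(cat server.crt)" privatekey="$(cat server.key)" domainsuffix=domain.com
# the final call (server cert + key, no "name") triggers the rollout;
# destroy the CPVM/SSVM afterwards so they come back with the new chain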


Apache project page updates needed.

2018-02-02 Thread Ron Wheeler
http://cloudstack.apache.org/ lists upcoming conferences in 2016 and
2017. Nothing is planned for 2018.



--
Ron Wheeler
President
Artifact Software Inc
email: rwhee...@artifact-software.com
skype: ronaldmwheeler
phone: 866-970-2435, ext 102



Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Paul Angus
Sorry, I should have said that; yes, the CPVM is the more important one to
restart for your issue. The SSVM also uses the cert for transfers.


From: Andrija Panic 
Sent: Friday, 2 February 2018 3:13 pm
To: users
Cc: Paul Angus; Benjamin Naber
Subject: Re: Failing to enable SSL/HTTPS on console proxy vm

You need to put all the certificates in the chain into the GUI dialog; in 4.8
this is supported in the GUI and made easy (god forbid doing the same work in
4.5 :)

I don't remember ATM, but I believe restarting the MGMT server was also
required or advised, since it builds up the SSL/trust chain (or whatever...),
so make sure you do it rather than not (I vaguely remember that MGMT would not
start due to some hacks I did with SSLs back in the days)


paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue

On 2 February 2018 at 15:10, Ugo Vasi 
> wrote:
Hi Paul,
do I have to destroy console-proxy too?
Could the problem be caused by certificates' chain?
I've got two intermediate certificates between the root and the leaf one, could 
this cause problems?

Thanks


On 02/02/2018 13:18, Paul Angus wrote:
Hi Ugo,
Have you destroyed your sec storage VM and let CloudStack recreate it.  A 
stop-start isn't usually enough to reconfigure certificates.

paul.an...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue


-Original Message-
From: Ugo Vasi [mailto:ugo.v...@procne.it]
Sent: 02 February 2018 11:37
To: users@cloudstack.apache.org; Benjamin 
Naber >
Subject: Re: Failing to enable SSL/HTTPS on console proxy vm

Hi Ben,
I'm sure that the DNS is resolving the right IP 
(aaa-bbb-ccc-ddd.domain.com -> 
aaa.bbb.ccc.ddd), I tried with wget using the same src of iframe (masquerade 
log):

$ wget https://123-123-123-123.domain.com/ajax?token=...(snipped)
--2018-02-02 10:24:23-- https://123-123-123-123.domain.com/ajax?token=...
Resolving 123-123-123-123.domain.com 
(123-123-123-123.domain.com)...
123.123.123.123
Connecting to 123-123-123-123.domain.com 
(123-123-123-123.domain.com)|123.123.123.123|:443...

here the command hangs until a timeout.



On 02/02/2018 11:43, Benjamin Naber wrote:
Hi Ugo,

you need a DNS record for the public IP address the console proxy has been 
allocated.
It should look like this: 
80-190-44-22.domain.com otherwise the iframe is 
denied loading because of an SSL error.
In the global setting "Console proxy url domain" set 
*.domain.com, then restart the 
management server and it should work.

Kind Regards

Ben

Ugo Vasi wrote on 2 February 2018 
at 11:26:


Hi all,
I had the same problem installing the wildcard certificate.

I tried to set the consoleproxy.url.domain in global settings but now
the console interface inside the iframe does not respond...

The DNS records are OK.




On 16/06/2016 18:10, Andy Dills wrote:
I have this working perfectly.

Couple of key things that are not mentioned in the
documentation:

- You need to set consoleproxy.url.domain to *.domain.com 
for whatever domain you're using. Do this before re-uploading your SSL 
certificate. The SSL upload dialogue doesn't set this value as it should.

- You need a wildcard certificate for that domain.

Assuming you set up the proper DNS records, it should then work.

I'm open to follow up questions if anybody is struggling with this.

Thanks,
Andy

Sent from my iPhone

[...]

cloudstackcollab.org needs updating

2018-02-02 Thread Ron Wheeler
http://us.cloudstackcollab.org/ lists upcoming conferences for 2017 in 
Miami and Brazil.


--
Ron Wheeler
President
Artifact Software Inc
email: rwhee...@artifact-software.com
skype: ronaldmwheeler
phone: 866-970-2435, ext 102



Re: AW: AW: AW: KVM storage cluster

2018-02-02 Thread Andrija Panic
No other FIO command, that is OK; direct=1 and engine=libaio are the critical
ones. I use a very similar setup, except that I prefer to do a pure READ and
later a pure WRITE; I don't like these interleaved settings :)
Also, the critical thing is not IOPS alone but also LATENCY (completion
latency on IO) - make sure to check those.
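
As a rough illustration of that split approach, here is a minimal sketch (not
the exact commands from this thread; /dev/scinia, the job counts and runtimes
are placeholders borrowed from earlier posts, adjust to your device):

  # Hedged sketch: separate pure-read and pure-write fio runs instead of one
  # interleaved randrw job. Writing to the raw device is destructive, so use
  # a scratch volume. fio prints completion latency (clat) stats per run.
  fio --name=randread --filename=/dev/scinia --readwrite=randread --bs=4k \
      --direct=1 --ioengine=libaio --numjobs=4 --iodepth=64 \
      --time_based --runtime=300 --group_reporting

  fio --name=randwrite --filename=/dev/scinia --readwrite=randwrite --bs=4k \
      --direct=1 --ioengine=libaio --numjobs=4 --iodepth=64 \
      --time_based --runtime=300 --group_reporting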

Those 250.000 were combined, so my bad, I did not read it correctly; that
makes it possible for sure to reach. If you write to 6x35K IOPS, that
is still (theoretically) more than what you get - so you are under the spec
(137K vs almost 200K for writes), which sounds realistic and OK I guess.

And yes, the volume size should be larger than RAM in cases where RAM is used
for any kind of buffering/caching, but again I have no idea how this works
with ScaleIO - with direct=1 you avoid writing to the VM's/host's RAM and
write directly to storage over the network, so that is OK.

If you do any other ScaleIO benchmarks or have other results later, I'm
very interested to see them, since I never played with ScaleIO :)

Here is one of the articles (if you can trust it...) showing some CEPH vs
ScaleIO differences
http://cloudscaling.com/blog/cloud-computing/killing-the-storage-unicorn-purpose-built-scaleio-spanks-multi-purpose-ceph-on-performance/


Not meant to start a war over the better one :), but CEPH definitely sucks at
random IO, and even if you have 1000 x 100% sequential streams/writes to
storage, those 1000 streams all become interleaved at the end, becoming
effectively pure RANDOM IO on the storage side.
We have been fighting a long battle with CEPH, and it's just not worth it
for good-performance VMs, simply not.
It is, though, exceptionally nice storage for streaming applications or
massive scaling... again, just my 2 cents after 3 years in production.

Whatever storage you choose, make sure you are not going to regret it on many
different factors (performance? ACS integration good enough? libvirt
driver stable enough, if used, i.e. for CEPH librbd? vendor support?
etc.), since this is the core of your cloud.
Believe me on this :)


On 2 February 2018 at 16:22, Ivan Kudryavtsev 
wrote:

> I suppose Andrija says about the volume size, it should be much bigger than
> storage host RAM.
>
> On 2 Feb 2018 at 10:17 PM, "S. Brüseke - proIO GmbH" <
> s.brues...@proio.com> wrote:
>
> > Hi Andrija,
> >
> > you are right, of course it is Samsung PM1633a. I am not sure if this is
> > really only RAM. I let the fio command run for more than 30min and IOPS
> did
> > not drop.
> > I am using 6 SSDs in my setup, each has 35.000 IOPS random write max, so
> > ScaleIO can do 210.000 IOPS (read) at its best. fio shows around 140.000
> > IOPS (read) max. ScaleIO GUI shows me around 45.000 IOPS (read/write
> > combined) per SSD.
> >
> > Do you have a different fio command I can run?
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen
> >
> > -Original Message-
> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Sent: Friday, 2 February 2018 16:04
> > To: users 
> > Cc: S. Brüseke - proIO GmbH 
> > Subject: Re: AW: AW: KVM storage cluster
> >
> > From my extremely short reading on ScaleIO a few months ago, they are
> > utilizing RAM or similar for write caching, so basically you write to
> RAM
> > or another kind of ultra-fast temporary memory (NVMe, etc.) and later it is
> flushed
> > to the durable part of the storage.
> >
> > I assume its 1633a not 1663a ? -
> > http://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZILS1T9HEJH/ (
> > ?) This one can barely do 35K IOPS of write per spec... and based on my
> > humble experience with Samsung, you can hardly ever reach that
> > specification, even with locally attached SSD and a lot of CPU
> > available...(local filesystem)
> >
> > So it must be RAM writing for sure...so make sure you saturate the
> > benchmark enough, so that the flushing process kicks in, and that the
> > benchmark will make sense when you later have constant IO load on the
> > cluster.
> >
> > Cheers
> >
> >
> > On 2 February 2018 at 15:56, Ivan Kudryavtsev 
> > wrote:
> >
> > > Swen, performance looks awesome, but I still wonder where the magic is
> > > here, because AFAIK Ceph is not capable of even touching that, though
> > > Red Hat bets on it... Might it be that ScaleIO doesn't wait for
> > > replication to complete on IO, or that some other hack is used?
> > >
> > > On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
> > > s.brues...@proio.com> wrote:
> > >
> > > > Hi Ivan,
> > > >
> > > >
> > > >
> > > > it is a 50/50 read-write mix. Here is the fio command I used:
> > > >
> > > > fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k
> > > > --invalidate=1 --group_reporting --direct=1 --filename=/dev/scinia
> > > > --time_based
> > > > --runtime= --ioengine=libaio --numjobs=4 --iodepth=256
> > > > --norandommap
> > > > --randrepeat=0 

Re: AW: AW: AW: KVM storage cluster

2018-02-02 Thread Ivan Kudryavtsev
Andrija, indeed, amen!

On 2 Feb 2018 at 11:14 PM, "Andrija Panic" <
andrija.pa...@gmail.com> wrote:

> No other FIO command, that is OK; direct=1 and engine=libaio are the critical
> ones. I use a very similar setup, except that I prefer to do a pure READ and
> later a pure WRITE; I don't like these interleaved settings :)
> [...]

Re: Migrate system VMs volumes to new storage

2018-02-02 Thread Andrija Panic
If you can afford using (storage) tags, then you can do it that way also.

We have (had) 3 different storages, and at some point all 3 had
different tags - you edit the existing system offering for the CPVM, SSVM, VR
(and/or the compute and data disk offerings), i.e. Service Offerings ->
System Offerings -> System Offering For Secondary Storage VM... and define a
storage tag.
Just make sure you put the same tag on the offering as you put on the new
storage, and then go and destroy the system VMs - they will be automatically
recreated after 1-2 minutes, but since tagging is present they will be created
on the proper storage.
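
A minimal cloudmonkey sketch of the destroy/recreate step (the UUIDs are
placeholders; listSystemVms and destroySystemVm are the underlying API calls):

  # Hedged sketch: list the system VMs, then destroy them one by one.
  # CloudStack recreates each on storage matching the offering's storage tag.
  list systemvms filter=id,name,systemvmtype
  destroy systemvm id=<cpvm-uuid>
  destroy systemvm id=<ssvm-uuid>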






On 24 January 2018 at 16:35, Dag Sonstebo 
wrote:

> Hi Ugo,
>
> If all you are worried about is the system VMs then the easiest and risk
> free option is:
>
> - Configure a new primary storage pool.
> - Disable the old one (you will have to do this with cloudmonkey, it is
> not available through the GUI - something like update storagepool
> enabled=false id=50848ff7-c6aa-3fdd-b487-27899bf2129c; see the sketch
> after this list)
> - Destroy your system VMs and watch them come back online on the new primary
> storage.
> - If for some reason it doesn't work, then just re-enable the old storage
> and do some troubleshooting.
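>
> A minimal sketch of that cloudmonkey step (the pool UUID is the placeholder
> from the example above; updateStoragePool is the underlying API call):
>
>   # Hedged sketch: find the pool id, disable it, and re-enable it later
>   # if the system VMs do not come back on the new pool as expected.
>   list storagepools filter=id,name,state
>   update storagepool enabled=false id=50848ff7-c6aa-3fdd-b487-27899bf2129c
>   update storagepool enabled=true id=50848ff7-c6aa-3fdd-b487-27899bf2129c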
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> From: Özhan Rüzgar Karaman 
> Date: Wednesday, 24 January 2018 at 14:25
> To: Ugo Vasi 
> Cc: "users@cloudstack.apache.org" , Dag
> Sonstebo 
> Subject: Re: Migrate system VMs volumes to new storage
>
> Hi Ugo;
> If you have other VMs running on it, then you cannot remove the primary
> storage. If you want to remove it, you need to migrate the VMs to other
> storage first; once it is empty you can disable & remove it.
>
> So maybe updating the system offerings could help you change the
> reprovisioned SSVMs' disk location.
>
> Thanks
> Özhan
>
> 2018-01-24 17:15 GMT+03:00 Ugo Vasi:
> Hi Özhan,
> can I disable the zone and remove primary storage with running VM on it?
>
>
>
>
> On 24/01/2018 14:56, Özhan Rüzgar Karaman wrote:
> Hi Ugo;
> When you destroy the system VMs, disable the zone, disable and remove the old
> primary storage, add the new primary storage and enable the zone, the new
> system VMs will automatically be provisioned on the new primary storage.
>
> Thanks
> Özhan
>
> 2018-01-24 16:50 GMT+03:00 Ugo Vasi:
> Hi Dag,
> I have to dismiss a primary storage, not the secondary. Do I have to
> create or modify a system offering with a storage tag and destroy the
> system-vm? In this case, how can I be sure that the SVM will be recreated
> using just that system offering?
>
>
>
> On 24/01/2018 14:40, Dag Sonstebo wrote:
> Hi Ugo,
>
> If this is just system VMs you can just make sure the system VM template
> has been copied and is present on your new secondary storage, then disable
> the old secondary storage and destroy the system VMs – they should start
> again (they are stateless so you don’t need to worry about copying image
> files etc.)
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 24/01/2018, 09:56, "Ugo Vasi" wrote:
>
>   Hi all,
>   I have to dismiss a storage NAS where the system VM image files
>   reside (KVM hypervisor on NFS storage).
>   From the web interface I can not do it like normal VMs.
>   The system VMs are all cloned from the same image and have
>   backing files in the same (old) storage, so when copying the files I still
>   have the reference to the storage that I have to remove.
>   How can I proceed?
>   --
>   *Ugo Vasi* / System Administrator
>   ugo.v...@procne.it
>   *Procne S.r.l.*
>   +39 0432 486 523
>   via Cotonificio, 45
>   33010 Tavagnacco (UD)
>   www.procne.it

Re: XenServer Licensing Change - Switch Hypervisors?

2018-02-02 Thread Alessandro Caviglione
Hi all,
I'm also trying to find a solution. Our infrastructure is based on XS 6.5,
which will not be patched for Meltdown and Spectre, so we're considering
creating a new cluster based on a different hypervisor instead of upgrading to
XS 7.2.
In fact, I think everyone here works for a company that has an MS SPLA
agreement in place, so my question is: since we're already paying for the MS
Datacenter license, what do you think about Hyper-V under CloudStack?
I'm trying to compare it with KVM...


Thank you.


On Tue, Jan 9, 2018 at 3:18 AM, Pierre-Luc Dion  wrote:

> Hi Dingo,
>
> That's an interesting answer to the recent Citrix licensing change for
> XenServer; I'll definitely keep an eye on this project!
>
> thanks!
>
>
> *Pierre-Luc DION*
> Architecte de Solution Cloud | Cloud Solutions Architect
> t 855.652.5683
>
> *CloudOps* Votre partenaire infonuagique* | *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6
> w cloudops.com *|* tw @CloudOps_
>
> On Mon, Jan 8, 2018 at 10:13 AM, Nux!  wrote:
>
> > Good luck, Sean. It should be doable.
> > If you're buying new Intel hardware, make sure it supports the invpcid
> cpu
> > flag, or buy AMD Epyc.
> > See my other recent email on the list about performance implications of
> > Meltdown.
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > - Original Message -
> > > From: "Sean Lair" 
> > > To: "users" 
> > > Sent: Sunday, 7 January, 2018 18:20:11
> > > Subject: RE: XenServer Licensing Change - Switch Hypervisors?
> >
> > > Thanks for the reply Nux, yea we originally chose XenServer over KVM
> > because KVM
> > > didn't support all of the VM snapshot functionality of XenServer.
> > >
> > > We are evaluating switching to KVM now...  But don't have a good way of
> > moving
> > > customers over from XenServer host to KVM hosts...
> > >
> > > We are on XenServer 6.5 and with the new Spectre and Meltdown
> > vulnerabilities
> > > not being patched in 6.5...  We may accelerate the move to KVM.
> > >
> > >
> > > -Original Message-
> > > From: Nux! [mailto:n...@li.nux.ro]
> > > Sent: Friday, January 5, 2018 4:22 PM
> > > To: users 
> > > Subject: Re: XenServer Licensing Change - Switch Hypervisors?
> > >
> > > If you have expertise with XenServer and don't mind paying, then it's
> > not a bad
> > > direction to follow. It's a nice HV.
> > > On the long term I think KVM will be a much better solution though.
> > >
> > > --
> > > Sent from the Delta quadrant using Borg technology!
> > >
> > > Nux!
> > > www.nux.ro
> > >
> > > - Original Message -
> > >> From: "Sean Lair" 
> > >> To: "users" 
> > >> Sent: Wednesday, 3 January, 2018 00:53:32
> > >> Subject: XenServer Licensing Change - Switch Hypervisors?
> > >
> > >> It looks like XenServer 7.3 will no longer have the following features
> > >> in the Free Edition.  Is anyone considering moving from Free to
> > >> Standard Edition or possibly to another hyper-visor (like KVM) for
> their
> > >> CloudStack environment?
> > >>
> > >> Thoughts?  We are looking more at KVM at this point, any feature gaps
> > >> we should be aware of?
> > >>
> > >> Free Edition Changes
> > >>
> > >> -  Limited to up to 3 hosts per cluster
> > >>
> > >> -  No Pool High-Availability
> > >>
> > >> -  No Dynamic Memory Control (DMC)
> > >>
> > >> https://www.citrix.com/content/dam/citrix/en_us/documents/product-overview/citrix-xenserver-feature-matrix.pdf
> > >>
> > >> Thanks
> > > > Sean
> >
>


Re: Network ACL Lists

2018-02-02 Thread daniel.herrmann
Hi Benjamin, Hi Dag,

I think in some environments that could make perfect sense.

We are using the software in a private cloud environment and have some 
centrally managed lists of IP networks which are allowed to access internal 
services.

Right now, every service using our private cloud has to maintain those ACLs on 
its own (>200 rule entries). We've written and provided a Python tool that 
allows the customer to manage both their ACL entries and firewall entries (for 
IP addresses in a non-VPC network), and which automatically inserts and regularly 
updates those "Intranet lists". It would be great to have them defined globally 
once, such that each customer can rely on us to update the ACL accordingly.
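
As a rough illustration of what such a sync does, a minimal sketch (not the
actual tool described above; the ACL UUID, the rule numbering and the CIDR
file are placeholders, with createNetworkACL as the underlying API call):

  # Hedged sketch: push a centrally managed CIDR list into one ACL list.
  ACL_ID=<acl-list-uuid>
  RULE=100
  while read CIDR; do
    cloudmonkey create networkacl aclid=$ACL_ID protocol=all \
        cidrlist=$CIDR traffictype=ingress action=allow number=$RULE
    RULE=$((RULE + 1))
  done < intranet-cidrs.txt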

But Benjamin: as of now, I don't think there is such functionality in CS.

Regards
Daniel

-- 
Daniel Herrmann
Network Engineer – Fraunhofer Private Cloud
CCIE #55056 (Routing and Switching)
Cisco CCDP, CCIP; Fluke CCTT
 
Fraunhoferstraße 5, 64283 Darmstadt
Tel.: +49 6151 155346
Mail: daniel.herrm...@zv.fraunhofer.de
 
On 02.02.18, 11:59, "Dag Sonstebo"  wrote:

Hi Benjamin,

Not to my knowledge – that would be a security issue in itself since you 
would then announce to any user what ACL rules are in place for other users. 

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 02/02/2018, 10:06, "Benjamin Naber"  
wrote:

Hi @all,

is there any way to create network ACL lists that are globally accessible?

Kind Regards 

Benjamin



dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 





Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Ugo Vasi

Hi all,
I had the same problem installing the wildcard certificate.

I tried to set the consoleproxy.url.domain in global settings but now 
the console interface inside the iframe does not respond...


The DNS records are OK.




On 16/06/2016 18:10, Andy Dills wrote:

I have this working perfectly.

Couple of key things that are not mentioned in the
documentation:

- You need to set consoleproxy.url.domain to *.domain.com for whatever domain 
you're using. Do this before re-uploading your SSL certificate. The SSL upload 
dialogue doesn't set this value as it should.

- You need a wildcard certificate for that domain.

Assuming you set up the proper DNS records, it should then work.

I'm open to follow up questions if anybody is struggling with this.

Thanks,
Andy

Sent from my iPhone


[...]

--

*Ugo Vasi* / System Administrator
ugo.v...@procne.it 




*Procne S.r.l.*
+39 0432 486 523
via Cotonificio, 45
33010 Tavagnacco (UD)
www.procne.it 









Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Ugo Vasi
I noticed another problem related to this procedure: after loading the 
certificates the console proxy is restarted, while the secondary storage VM 
disconnects and I have to stop and restart it to see it online.




On 02/02/2018 11:26, Ugo Vasi wrote:

Hi all,
I had the same problem installing the wildcard certificate.
[...]

--

*Ugo Vasi* / System Administrator
ugo.v...@procne.it 




*Procne S.r.l.*
+39 0432 486 523
via Cotonificio, 45
33010 Tavagnacco (UD)
www.procne.it 









Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Ugo Vasi

Hi Ben,
I'm sure that the DNS is resolving to the right IP 
(aaa-bbb-ccc-ddd.domain.com -> aaa.bbb.ccc.ddd). I tried with wget using 
the same src as the iframe (values masked in the log):


$ wget https://123-123-123-123.domain.com/ajax?token=...(snipped)
--2018-02-02 10:24:23-- https://123-123-123-123.domain.com/ajax?token=...
Resolving 123-123-123-123.domain.com (123-123-123-123.domain.com)... 
123.123.123.123
Connecting to 123-123-123-123.domain.com 
(123-123-123-123.domain.com)|123.123.123.123|:443...


here the command hangs until a timeout.
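
A quick way to sanity-check DNS and the TLS side from the same machine, as a
minimal sketch (the dashed-IP hostname is the masked placeholder pattern used
above):

  # Check that the wildcard record resolves the dashed-IP hostname:
  dig +short 123-123-123-123.domain.com
  # Check that the console proxy presents the wildcard certificate on 443;
  # if this also hangs, port 443 is likely not reachable/listening at all.
  openssl s_client -connect 123-123-123-123.domain.com:443 \
      -servername 123-123-123-123.domain.com </dev/null | \
      openssl x509 -noout -subject -issuer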



On 02/02/2018 11:43, Benjamin Naber wrote:

Hi Ugo,

you need a DNS record for the public IP address the console proxy has been 
allocated.
It should look like this: 80-190-44-22.domain.com otherwise the iframe is denied 
loading because of an SSL error.
In the global setting "Console proxy url domain" set *.domain.com, then
restart the management server and it should work.

Kind Regards

Ben


Ugo Vasi wrote on 2 February 2018 at 11:26:

Hi all,
I had the same problem installing the wildcard certificate.
[...]

--

*Ugo Vasi* / System Administrator
ugo.v...@procne.it 




*Procne S.r.l.*
+39 0432 486 523
via Cotonificio, 45
33010 Tavagnacco (UD)
www.procne.it 









Re: Failing to enable SSL/HTTPS on console proxy vm

2018-02-02 Thread Benjamin Naber
Hi Ugo,

you need a DNS record for the public IP address the console proxy has been 
allocated.
It should look like this: 80-190-44-22.domain.com otherwise the iframe is denied 
loading because of an SSL error.
In the global setting "Console proxy url domain" set *.domain.com, then
restart the management server and it should work.
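
For the CLI-inclined, a minimal cloudmonkey sketch of that last step (the
domain is a placeholder; updateConfiguration is the underlying API call, and
the management server still needs the restart afterwards):

  # Hedged sketch: set the console proxy URL domain, then verify it.
  update configuration name=consoleproxy.url.domain value=*.domain.com
  list configurations name=consoleproxy.url.domain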

Kind Regards

Ben

> Ugo Vasi wrote on 2 February 2018 at 11:26:
> 
> 
> Hi all,
> I had the same problem installing the wildcard certificate.
> 
> I tried to set the consoleproxy.url.domain in global settings but now 
> the console interface inside the iframe does not respond...
> 
> The DNS records are OK.
> 
> 
> 
> 
> On 16/06/2016 18:10, Andy Dills wrote:
> > [...]
>


AW: AW: KVM storage cluster

2018-02-02 Thread S . Brüseke - proIO GmbH
Hi Ivan,
 
it is a 50/50 read-write mix. Here is the fio command I used:
fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k --invalidate=1 
--group_reporting --direct=1 --filename=/dev/scinia --time_based --runtime= 
--ioengine=libaio --numjobs=4 --iodepth=256 --norandommap --randrepeat=0 
--exitall
 
Result was:
IO workload: 274.000 IOPS
Transfer: 1,0 GB/s
Read bandwidth: 536 MB/s
Read IOPS: 137.000
Write bandwidth: 536 MB/s
Write IOPS: 137.000
 
If you want me to run a different fio command just send it. My lab is still 
running.
 
Any idea how I can mount my ScaleIO volume in KVM?
 
Mit freundlichen Grüßen / With kind regards,
 
Swen
 
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com] 
Sent: Friday, 2 February 2018 02:58
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH 
Subject: Re: AW: KVM storage cluster
 
Hi, Swen. Do you test with direct ops or buffered/cached ones? Is it a write 
test, or read/write with a certain rw percentage? I can hardly believe the 
deployment can do 250k IOs for writing in a single-VM test. 
 
On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH" 
 wrote:
I am also testing with ScaleIO on CentOS 7 with KVM. With a 3-node cluster where 
each node has 2x 2TB SSDs (Samsung PM1663a), I get 250.000 IOPS when doing a fio 
test (random 4k).
The only problem is that I do not know how to mount the shared volume so that 
KVM can use it to store VMs on it. Does anyone know how to do it?

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Thursday, 1 February 2018 22:00
To: users 
Subject: Re: KVM storage cluster

a bit late, but:

- for any IO-heavy (or even medium) workload, try to avoid CEPH; no offence, it 
simply takes a lot of $$$ to make CEPH perform in random IO worlds (imagine, RHEL 
and vendors provide only reference architectures with SEQUENTIAL benchmark 
workloads, not random) - not to mention a huge list of bugs we hit back in the 
days (simply, one single great guy handled the CEPH integration for CloudStack, 
but otherwise not a lot of help from other committers, if not mistaken, afaik...)
- NFS: better performance, but not magic... (though best supported, code-wise and 
bug-wise :)
- and for top notch (costs some $$$), SolidFire is the way to go (we have tons of 
IO-heavy customers, so this is THE solution really, after living with CEPH, then 
NFS on SSDs, etc.) and it provides guaranteed IOPS etc...

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière  wrote:

> Hi Vahric,
>
> Thank you. I will have a look on it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Vahric MUHTARYAN  Date: 07/01/2018 21:08
> (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free edition too! And as block storage it works better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
>
> Hi Ivan,
>
> Thank you for your quick reply.
>
> I'll have a look at Ceph and the related performance.
> As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS, this is even
> better (and avoid maintaining them as well).
>
> Thanks for pointing to ceph.
>
> Grégoire
>
>
>
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -Original Message-
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, 7 January 2018 15:20
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hi, Grégoire,
> You could have
> - local storage if you like, so every compute node could have its own
> space (one LUN per host)
> - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
> - a dedicated node as an NFS server (or two servers with
> DRBD)
>
> I don't think that a shared FS is a good option; even clustered LVM
> is a big pain.
>
> 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :
>
> > Dear all,
> >
> > Since Citrix deeply changed the free version of XenServer 7.3, I
> am in
> > the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> > decided to use HP blades connected to an HP P2000 over multipath SAS
> links.
> >
> > The network part seems fine to me, not so far from what we used to do
> > with Xen.
> > About the storage, I am a little bit confused about the shared
> > mountpoint storage option offered by CS.
> >
> > What would be the good option, in terms of CS, to create a cluster FS
> > using my SAS array?
> > I read somewhere (a Dag SlideShare I think) that 

Re: [fosdem] Anybody going to Fosdem this weekend?

2018-02-02 Thread Daan Hoogland
what he said !!!

On Thu, Feb 1, 2018 at 11:58 PM, Rohit Yadav 
wrote:

> Hi all,
>
> I will be at Fosdem in Brussels this weekend, and I know Daan is going to
> be there too - if you're going it would be lovely to meet you and discuss
> CloudStack among other things, tweet me @rhtyd.
>
> Cheers.
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


-- 
Daan


Network ACL Lists

2018-02-02 Thread Benjamin Naber
Hi @all,

is there any way to create network ACL lists that are globally accessible?

Kind Regards 

Benjamin