Re: AW: KVM storage cluster

2018-02-01 Thread Ivan Kudryavtsev
Hi, Swen. Do you test with direct ops or cached/buffered ones? Is it a pure
write test or rw with a certain rw percentage? I can hardly believe the
deployment can do 250k IOPS for writes with a single-VM test.

On Feb 2, 2018, 4:56, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:

I am also testing ScaleIO on CentOS 7 with KVM. With a 3-node cluster where
each node has 2x 2TB SSDs (Samsung PM1663a), I get 250,000 IOPS when running
a fio test (random 4k).
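
For reference, a fio job along these lines generates that kind of 4k random
load (the parameters and the device path are illustrative, not necessarily the
exact ones used here):

  # /dev/scinia is a placeholder for the mapped ScaleIO volume
  fio --name=rand4k --filename=/dev/scinia --rw=randrw --rwmixwrite=70 \
      --bs=4k --ioengine=libaio --iodepth=32 --numjobs=8 --direct=1 \
      --runtime=60 --time_based --group_reporting
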
The only problem is that I do not know how to mount the shared volume so
that KVM can use it to store VMs. Does anyone know how to do this?

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Thursday, February 1, 2018 22:00
To: users
Subject: Re: KVM storage cluster

a bit late, but:

- for any IO-heavy (or even medium) workload, try to avoid CEPH; no
offence, it simply takes a lot of $$$ to make CEPH perform in random IO
workloads (note that RHEL and other vendors provide only reference
architectures with SEQUENTIAL benchmark workloads, not random) - not to
mention the huge list of bugs we hit back in the day (essentially a single
great guy handled the CEPH integration for CloudStack, but otherwise there
was not a lot of help from other committers, if I'm not mistaken, afaik...)
- NFS gives better performance, but it's not magic... (though it is the best
supported option, code-wise and bug-wise :)
- and for top-notch performance (costs some $$$) SolidFire is the way to go
(we have tons of IO-heavy customers, so this is THE solution really, after
living with CEPH, then NFS on SSDs, etc.) and it provides guaranteed IOPS
etc...

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière  wrote:

> Hi Vahric,
>
> Thank you. I will have a look at it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Vahric MUHTARYAN  Date: 07/01/2018 21:08
> (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free edition too! And as block storage it works better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
>
> Hi Ivan,
>
> Thank you for your quick reply.
>
> I'll have a look at Ceph and its performance characteristics.
> As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS, this is even
> better (and avoids maintaining them as well).
>
> Thanks for pointing to ceph.
>
> Grégoire
>
>
>
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -Original Message-
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, January 7, 2018 15:20
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hi, Grégoire,
> You could have:
> - local storage if you like, so every compute node has its own
> space (one LUN per host)
> - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
> - a dedicated node acting as NFS server (or two servers with
> DRBD)
>
> I don't think a shared FS is a good option; even clustered LVM
> is a big pain.
>
> 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :
>
> > Dear all,
> >
> > Since Citrix deeply changed the free version of XenServer 7.3, I am in
> > the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> > decided to use HP blades connected to an HP P2000 over multipath SAS
> > links.
> >
> > The network part seems fine to me, not so far from what we used to do
> > with Xen.
> > About the storage, I am a little bit confused about the shared
> > mountpoint storage option offered by CS.
> >
> > What would be the right option, in terms of CS, to create a cluster fs
> > using my SAS array?
> > I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> > clustered FS supported by CS. Is that still correct?
> > Does it mean I have to create the GFS2 cluster, make identical mount
> > configuration on all hosts, and use it in CS as NFS?
> > I do not have to add the storage to KVM prior to CS zone creation?
> >
> > Thanks a lot for any help / information.
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> >
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 
>
>
>
>


--

Andrija Panić


- proIO GmbH -
Managing Director: Swen Brüseke
Registered office: Frankfurt am Main

VAT ID no. DE 267 075 918
Register court: Frankfurt am Main - HRB 86239

This e-mail contains confidential and/or legally

Re: Intel meltdown/spectre kvm upgrade results

2018-02-01 Thread Ivan Kudryavtsev
Hi, I don't see spontaneous reboots yet, but I have heard that some people
hit them with the new Intel microcode.
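
For anyone wanting to check a host, something like this shows whether KPTI is
active and which microcode revision is loaded (generic commands, not specific
to this particular setup):

  # is kernel page-table isolation (the Meltdown fix) active?
  dmesg | grep -i 'page tables isolation'
  # which microcode revision is currently loaded?
  grep -m1 microcode /proc/cpuinfo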

On Feb 2, 2018, 3:53, "Andrija Panic" 
wrote:

> Thx Ivan for sharing - no reboot issues because of the problematic Intel
> microcode?
>
> This is just the Meltdown fix for now, and btw congrats on the courage to do
> this so early (since there is no final solution yet).
>
> FYI, CentOS/RHEL has already patched everything (kernel and qemu/libvirt),
> but we are also on Ubuntu..
>
> Cheers
>
> On 17 January 2018 at 21:29, Sebastian Gomez  wrote:
>
> > Good!
> >
> > Thanks for sharing it, nice initiative!
> >
> > We have to upgrade the VMware clusters, but VMware is changing the
> > patch policy daily...
> > Once done I will try to remember to share too.
> >
> >
> >
> > Regards.
> >
> >
> >
> >
> > Atentamente,
> > Sebastián Gómez
> >
> > On Sat, Jan 13, 2018 at 5:59 AM, Ivan Kudryavtsev <
> > kudryavtsev...@bw-sw.com>
> > wrote:
> >
> > > Hi, colleagues,
> > >
> > > just would like to share that yesterday I successfully upgraded my ubuntu
> > > 14.04 kvm cloud to a custom-built linux 4.14.11 kernel with the ubuntu
> > > 2018/01/08 intel cpu microcode update. Compute CPUs - Xeon E5-2670,
> > > Xeon X5650; everything works nicely, no complaints from customers, no
> > > noticeable load change.
> > > Live migration between new and old kernels goes well, back migration too.
> > > It seems that the kvm, libvirt and qemu patches are not there yet for Ubuntu.
> > > Waiting for additional updates. Btw, it is CS 4.3.
> > >
> > > Have a nice migration.
> > >
> >
>
>
>
> --
>
> Andrija Panić
>


[fosdem] Anybody going to Fosdem this weekend?

2018-02-01 Thread Rohit Yadav
Hi all,

I will be at Fosdem in Brussels this weekend, and I know Daan is going to be 
there too - if you're going it would be lovely to meet you and discuss 
CloudStack among other things, tweet me @rhtyd.

Cheers.

rohit.ya...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 



AW: KVM storage cluster

2018-02-01 Thread S . Brüseke - proIO GmbH
I am also testing ScaleIO on CentOS 7 with KVM. With a 3-node cluster where
each node has 2x 2TB SSDs (Samsung PM1663a) I get 250,000 IOPS when doing a fio
test (random 4k).
The only problem is that I do not know how to mount the shared volume so that
KVM can use it to store VMs. Does anyone know how to do this?

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Thursday, February 1, 2018 22:00
To: users
Subject: Re: KVM storage cluster

a bit late, but:

- for any IO-heavy (or even medium) workload, try to avoid CEPH; no offence,
it simply takes a lot of $$$ to make CEPH perform in random IO workloads (note
that RHEL and other vendors provide only reference architectures with SEQUENTIAL
benchmark workloads, not random) - not to mention the huge list of bugs we hit
back in the day (essentially a single great guy handled the CEPH integration for
CloudStack, but otherwise there was not a lot of help from other committers, if
I'm not mistaken, afaik...)
- NFS gives better performance, but it's not magic... (though it is the best
supported option, code-wise and bug-wise :)
- and for top-notch performance (costs some $$$) SolidFire is the way to go (we
have tons of IO-heavy customers, so this is THE solution really, after living with
CEPH, then NFS on SSDs, etc.) and it provides guaranteed IOPS etc...

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière  wrote:

> Hi Vahric,
>
> Thank you. I will have a look at it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Vahric MUHTARYAN  Date: 07/01/2018 21:08
> (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free edition too! And as block storage it works better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
>
> Hi Ivan,
>
> Thank you for your quick reply.
>
> I'll have a look at Ceph and its performance characteristics.
> As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS, this is even
> better (and avoids maintaining them as well).
>
> Thanks for pointing to ceph.
>
> Grégoire
>
>
>
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -Original Message-
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, January 7, 2018 15:20
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hi, Grégoire,
> You could have:
> - local storage if you like, so every compute node has its own
> space (one LUN per host)
> - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
> - a dedicated node acting as NFS server (or two servers with
> DRBD)
>
> I don't think a shared FS is a good option; even clustered LVM
> is a big pain.
>
> 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :
>
> > Dear all,
> >
> > Since Citrix deeply changed the free version of XenServer 7.3, I am in
> > the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> > decided to use HP blades connected to an HP P2000 over multipath SAS
> > links.
> >
> > The network part seems fine to me, not so far from what we used to do
> > with Xen.
> > About the storage, I am a little bit confused about the shared
> > mountpoint storage option offered by CS.
> >
> > What would be the right option, in terms of CS, to create a cluster fs
> > using my SAS array?
> > I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> > clustered FS supported by CS. Is that still correct?
> > Does it mean I have to create the GFS2 cluster, make identical mount
> > configuration on all hosts, and use it in CS as NFS?
> > I do not have to add the storage to KVM prior to CS zone creation?
> >
> > Thanks a lot for any help / information.
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> >
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 
>
>
>
>


-- 

Andrija Panić


- proIO GmbH -
Managing Director: Swen Brüseke
Registered office: Frankfurt am Main

VAT ID no. DE 267 075 918
Register court: Frankfurt am Main - HRB 86239

This e-mail contains confidential and/or legally protected information.
If you are not the intended recipient or have received this e-mail in error,
please notify the sender immediately and destroy this e-mail.
Unauthorized copying and unauthorized disclosure of this e-mail are not

Re: KVM storage cluster

2018-02-01 Thread Andrija Panic
a bit late, but:

- for any IO-heavy (or even medium) workload, try to avoid CEPH; no
offence, it simply takes a lot of $$$ to make CEPH perform in random IO
workloads (note that RHEL and other vendors provide only reference
architectures with SEQUENTIAL benchmark workloads, not random) - not to
mention the huge list of bugs we hit back in the day (essentially a single
great guy handled the CEPH integration for CloudStack, but otherwise there
was not a lot of help from other committers, if I'm not mistaken, afaik...)
- NFS gives better performance, but it's not magic... (though it is the best
supported option, code-wise and bug-wise :)
- and for top-notch performance (costs some $$$) SolidFire is the way to go
(we have tons of IO-heavy customers, so this is THE solution really, after
living with CEPH, then NFS on SSDs, etc.) and it provides guaranteed IOPS
etc...

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière  wrote:

> Hi Vahric,
>
> Thank you. I will have a look at it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Vahric MUHTARYAN 
> Date: 07/01/2018 21:08 (GMT+01:00)
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free edition too! And as block storage it works better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
>
> Hi Ivan,
>
> Thank you for your quick reply.
>
> I'll have a look at Ceph and its performance characteristics.
> As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS, this is even better
> (and avoids maintaining them as well).
>
> Thanks for pointing to ceph.
>
> Grégoire
>
>
>
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -Original Message-
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, January 7, 2018 15:20
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hi, Grégoire,
> You could have:
> - local storage if you like, so every compute node has its own
> space (one LUN per host)
> - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
> - a dedicated node acting as NFS server (or two servers with DRBD)
>
> I don't think a shared FS is a good option; even clustered LVM is a
> big pain.
>
> 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :
>
> > Dear all,
> >
> > Since Citrix deeply changed the free version of XenServer 7.3, I am in
> > the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> > decided to use HP blades connected to an HP P2000 over multipath SAS
> > links.
> >
> > The network part seems fine to me, not so far from what we used to do
> > with Xen.
> > About the storage, I am a little bit confused about the shared
> > mountpoint storage option offered by CS.
> >
> > What would be the right option, in terms of CS, to create a cluster fs
> > using my SAS array?
> > I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> > clustered FS supported by CS. Is that still correct?
> > Does it mean I have to create the GFS2 cluster, make identical mount
> > configuration on all hosts, and use it in CS as NFS?
> > I do not have to add the storage to KVM prior to CS zone creation?
> >
> > Thanks a lot for any help / information.
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> >
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 
>
>
>
>


-- 

Andrija Panić


Re: Intel meltdown/spectre kvm upgrade results

2018-02-01 Thread Andrija Panic
Thx Ivan for sharing - no reboot issues because of the problematic Intel
microcode?

This is just the Meltdown fix for now, and btw congrats on the courage to do
this so early (since there is no final solution yet).

FYI, CentOS/RHEL has already patched everything (kernel and qemu/libvirt),
but we are also on Ubuntu..

Cheers

On 17 January 2018 at 21:29, Sebastian Gomez  wrote:

> Good!
>
> Thanks for sharing it, nice initiative!
>
> We have to upgrade the VMware clusters, but VMware is changing the
> patch policy daily...
> Once done I will try to remember to share too.
>
>
>
> Regards.
>
>
>
>
> Atentamente,
> Sebastián Gómez
>
> On Sat, Jan 13, 2018 at 5:59 AM, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> wrote:
>
> > Hi, colleagues,
> >
> > just would like to share that yesterday I successfully upgraded my ubuntu
> > 14.04 kvm cloud to a custom-built linux 4.14.11 kernel with the ubuntu
> > 2018/01/08 intel cpu microcode update. Compute CPUs - Xeon E5-2670,
> > Xeon X5650; everything works nicely, no complaints from customers, no
> > noticeable load change.
> > Live migration between new and old kernels goes well, back migration too.
> > It seems that the kvm, libvirt and qemu patches are not there yet for Ubuntu.
> > Waiting for additional updates. Btw, it is CS 4.3.
> >
> > Have a nice migration.
> >
>



-- 

Andrija Panić


Re: External DNS

2018-02-01 Thread Andrija Panic
In our VMs, resolv.conf has the internal IP address of the VR as the first
nameserver, then the public ones... (use.external.dns is set to false at the
Zone level - it is a zone-level setting)
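
For illustration only (the addresses are made up), a guest's /etc/resolv.conf
then ends up looking roughly like this:

  # the VR's guest-network address first, then the zone's public resolvers
  nameserver 10.1.1.1
  nameserver 8.8.8.8
  nameserver 8.8.4.4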

On 1 February 2018 at 21:16,  wrote:

> Hello,
>
> we are using advanced networking
>
>
>
> Andrija Panic wrote on 2018-02-01 23:25:
>
> Hi,
>>
>> you didn't write what kind of networking you have - are VMs supposed to use
>> the VR (advanced networking) for DNS (as default) or not?
>>
>> In zone settings, we have set the public DNS to Google's also, and some
>> internal ones.
>> SSVM and CPVM are assigned both the 2 internal and then the 2 external servers
>> (in
>> that order) inside resolv.conf.
>> VRs are assigned only the public DNS in resolv.conf.
>> All VMs are inside a VPC and use the VR as their own DNS server, which further
>> proxies to the internet etc...
>>
>> Best
>>
>> On 23 January 2018 at 10:48,  wrote:
>>
>> Hello guys,
>>>
>>> After installing and configuring CloudStack we have a small problem.
>>>
>>> We can't use external DNS in our VMs. Every VM comes up with our
>>> internal DNS and Google Public DNS. We want to start VMs with
>>> GP DNS only.
>>>
>>> We changed the setting use.external.dns ("Bypass internal dns, use external
>>> dns1 and dns2") to true.
>>> We restarted the management server, the VR and all other systems, but it has
>>> no effect. The VMs are still using our internal DNS and GP. It's very laggy
>>> with our DNS; internet speed is only 10 Mbps.
>>>
>>> CloudStack: 4.8.0
>>> XenServer 6.5
>>>
>>> Anyone have solution?
>>>
>>>
>>>
>>>
>


-- 

Andrija Panić


Re: kvm live volume migration

2018-02-01 Thread Andrija Panic
Actually, we have this feature (we call it internally
online-storage-migration) to migrate volumes from CEPH/NFS to SolidFire
(thanks to Mike Tutkowski).

There is a libvirt mechanism where you basically start another PAUSED VM on
another host (same name and same XML file, except that the storage volumes
point to the new storage, different paths, etc., and maybe the VNC listening
address needs to be changed), and then on the original host/VM you issue the
live migrate command with a few parameters... libvirt will transparently
handle copying the data from the source to the new volumes, and after the
migration the VM will be alive (with the new XML, since it has new volumes) on
the new host, while the original VM on the original host is destroyed.

(I can send you the manual for this - it is related to SF, but the idea is the
same and you can exercise it on e.g. 2 NFS volumes on 2 different
storages.)

This mechanism doesn't exist in ACS in general (AFAIK), except for when
migrating to SolidFire.

Perhaps the community/DEV can help extend Mike's code to do the same work on
different storage types...

Cheers

On 19 January 2018 at 18:45, Eric Green  wrote:

> KVM is able to live migrate entire virtual machines complete with local
> volumes (see 'man virsh') but does require nbd (Network Block Device) to be
> installed on the destination host to do so. It may need installation of
> later libvirt / qemu packages from OpenStack repositories on Centos 6, I'm
> not sure, but just works on Centos 7. In any event, I have used this
> functionality to move virtual machines between virtualization hosts on my
> home network. It works.
>
> What is missing is the ability to live-migrate a disk from one shared
> storage to another. The functionality built into virsh live-migrates the
> volume ***to the exact same location on the new host***, so obviously is
> useless for migrating the disk to a new location on shared storage. I
> looked everywhere for the ability of KVM to live migrate a disk from point
> A to point B all by itself, and found no such thing. libvirt/qemu has the
> raw capabilities needed to do this, but it is not currently exposed as a
> single API via the qemu console or virsh. It can be emulated via scripting
> however:
>
> 1. Pause virtual machine
> 2. Do qcow2 snapshot.
> 3. Detach base disk, attach qcow2 snapshot
> 4. unpause virtual machine
> 5. copy qcow2 base file to new location
> 6. pause virtual machine
> 7. detach snapshot
> 8. unsnapshot qcow2 snapshot at its new location.
> 9. attach new base at new location.
> 10. unpause virtual machine.
>
> Thing is, if that entire process is not built into the underlying
> kvm/qemu/libvirt infrastructure as tested functionality with a defined API,
> there's no guarantee that it will work seamlessly and will continue working
> with the next release of the underlying infrastructure. This is using
> multiple different tools to manipulate the qcow2 file and attach/detach
> base disks to the running (but paused) kvm domain, and would have to be
> tested against all variations of those tools on all supported Cloudstack
> KVM host platforms. The test matrix looks pretty grim.
>
> By contrast, the migrate-with-local-storage process is built into virsh
> and is tested by the distribution vendor and the set of tools provided with
> the distribution is guaranteed to work with the virsh / libvirt/ qemu
> distributed by the distribution vendor. That makes the test matrix for
> move-with-local-storage look a lot simpler -- "is this functionality
> supported by that version of virsh on that distribution? Yes? Enable it.
> No? Don't enable it."
>
> I'd love to have live migration of disks on shared storage with Cloudstack
> KVM, but not at the expense of reliability. Shutting down a virtual machine
> in order to migrate one of its disks from one shared datastore to another
> is not ideal, but at least it's guaranteed reliable.
>
>
> > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> >
> > Hey Marc,
> > It is very interesting that you are going to pick this up for KVM. I am
> > working in a related issue for XenServer [1].
> > If you can confirm that KVM is able to live migrate local volumes to
> other
> > local storage or shared storage I could make the feature I am working on
> > available to KVM as well.
> >
> >
> > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> >
> > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier <
> ma...@exoscale.ch>
> > wrote:
> >
> >> There's a PR waiting to be fixed about live migration with local volume
> for
> >> KVM. So it will come at some point. I'm the one who made this PR but I'm
> >> not using the upstream release so it's hard for me to debug the problem.
> >> You can add yourself to the PR to get notify when things are moving on
> it.
> >>
> >> https://github.com/apache/cloudstack/pull/1709
> >>
> >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green 
> >> wrote:
> >>

Re: External DNS

2018-02-01 Thread mm

Hello,

we are using advanced networking



Andrija Panic wrote on 2018-02-01 23:25:

Hi,

you didn't write what kind of networking you have - are VMs supposed to use
the VR (advanced networking) for DNS (as default) or not?

In zone settings, we have set the public DNS to Google's also, and some
internal ones.
SSVM and CPVM are assigned both the 2 internal and then the 2 external servers
(in that order) inside resolv.conf.
VRs are assigned only the public DNS in resolv.conf.
All VMs are inside a VPC and use the VR as their own DNS server, which further
proxies to the internet etc...

Best

On 23 January 2018 at 10:48,  wrote:


Hello guys,

After installing and configuring CloudStack we have a small problem.

We can't use external DNS in our VMs. Every VM comes up with our
internal DNS and Google Public DNS. We want to start VMs with
GP DNS only.

We changed the setting use.external.dns ("Bypass internal dns, use external
dns1 and dns2") to true.
We restarted the management server, the VR and all other systems, but it has
no effect. The VMs are still using our internal DNS and GP. It's very laggy
with our DNS; internet speed is only 10 Mbps.

CloudStack: 4.8.0
XenServer 6.5

Anyone have solution?







Re: External DNS

2018-02-01 Thread Andrija Panic
Hi,

you didn't write what kind of networking you have - are VMs supposed to use
the VR (advanced networking) for DNS (as default) or not?

In zone settings, we have set the public DNS to Google's also, and some
internal ones.
SSVM and CPVM are assigned both the 2 internal and then the 2 external servers (in
that order) inside resolv.conf.
VRs are assigned only the public DNS in resolv.conf.
All VMs are inside a VPC and use the VR as their own DNS server, which further
proxies to the internet etc...

Best

On 23 January 2018 at 10:48,  wrote:

> Hello guys,
>
> After installing and configuring CloudStack we have a small problem.
>
> We can't use external DNS in our VMs. Every VM comes up with our
> internal DNS and Google Public DNS. We want to start VMs with
> GP DNS only.
>
> We changed the setting use.external.dns ("Bypass internal dns, use external
> dns1 and dns2") to true.
> We restarted the management server, the VR and all other systems, but it has
> no effect. The VMs are still using our internal DNS and GP. It's very laggy
> with our DNS; internet speed is only 10 Mbps.
>
> CloudStack: 4.8.0
> XenServer 6.5
>
> Anyone have solution?
>
>
>


-- 

Andrija Panić


Re: Time-out when creating a template from a snapshot

2018-02-01 Thread Andrija Panic
Vladimir,

the original error definitely looks like a MySQL timeout (I assume because of
HAProxy in the middle). We also had this setup originally (MGMT server
using HAProxy on top of Galera nodes...), but it was confirmed to be an
issue no matter what we changed on HAProxy or MySQL, and at that time we
didn't find a solution (MGMT was then pointed at the first Galera node
directly... a poor man's solution). The problem is that when a snapshot
starts (and it can take hours - same for a conversion to template), Java
seems to keep the DB connection/transaction open the whole time (which is a
strange approach in my head for such long image-converting actions).
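
If HAProxy stays in the path, the knobs that matter are its client/server
timeouts on the MySQL listener; a minimal sketch (section names, addresses and
values are illustrative, not our production config):

  listen mysql-galera
      bind 0.0.0.0:3306
      mode tcp
      balance leastconn
      # raise these well above the longest snapshot/template operation,
      # otherwise the idle DB connection gets cut mid-job
      timeout client 8h
      timeout server 8h
      server galera1 10.0.0.11:3306 check
      server galera2 10.0.0.12:3306 check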

If I'm not wrong, the snapshot-to-template conversion should be done via the
agent node, not the SSVM?
Ping here if you find a solution.

Btw, for some actions with images the real timeout = 2 x the wait parameter :)
so change that to 2000 and check whether the action fails after 4000 sec.
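
For example, something along these lines (CloudMonkey syntax; a management
server restart may be needed for the global setting to take effect) would bump
it:

  # run inside the cloudmonkey shell, or set it in the UI under Global Settings
  update configuration name=wait value=2000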



On 1 February 2018 at 13:00, Vladimir Melnik  wrote:

> Thanks a lot, any help will be so much appreciated!
>
> On Wed, Jan 31, 2018 at 05:23:25PM +, Nux! wrote:
> > It's possible there are timeouts being hit somewhere. I'd take this to
> dev@ to be honest, I am not very familiar with the ssvm internals.
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > - Original Message -
> > > From: "Vladimir Melnik" 
> > > To: "users" 
> > > Sent: Wednesday, 31 January, 2018 12:42:01
> > > Subject: Re: Time-out when creating a template from a snapshot
> >
> > > No, it doesn't seem to be a database-related issue.
> > >
> > > This time I haven't got any error messages at all. Moreover, I see
> this template
> > > as available in the templates' list and there's the following message
> in the
> > > log-file:
> > > 2018-01-31 14:30:09,862 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > > (API-Job-Executor-1:ctx-b39838a2 job-1485421 ctx-3b9d9083)
> (logid:cb74bce4)
> > > Complete async job-1485421, jobStatus: SUCCEEDED, resultCode: 0,
> result:
> > > org.apache.cloudstack.api.response.TemplateResponse/
> template/{"id":"b69c65b3-70d6-4000-934c-fac9e887d3ef","name"
> :"Tucha#2018012713000408","displaytext":"Tucha#
> 2018012713000408","ispublic":false,"created":"2018-01-
> 31T13:30:09+0200","isready":true,"passwordenabled":true,"
> format":"QCOW2","isfeatured":false,"crossZones":false,"
> ostypeid":"b5490e1c-bd31-11e6-b74f-06973a00088a","ostypename":"Windows
> > > Server 2012 R2
> > > (64-bit)","account":"admin#lite","zoneid":"c8d773fa-76ca-
> 4637-8ecf-88656444fc86","zonename":"z2.tucha13.net","status":"Download
> > > Complete","size":375809638400,"templatetype":"USER","
> hypervisor":"KVM","domain":"ROOT","domainid":"b514ef44-
> bd2f-11e6-b74f-06973a00088a","isextractable":false,"
> sourcetemplateid":"ba26d2a9-5e2f-468d-8a38-df71a7811ee8","details":{"
> memoryOvercommitRatio":"1.0","cpuNumber":"4","cpuSpeed":"2399","Message.
> ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"
> 10","memory":"12288"},"sshkeyenabled":false,"
> isdynamicallyscalable":false,"tags":[]}
> > >
> > > But at the same time I see that the template's file is less than the
> snapshot's
> > > file:
> > > -rw-r--r-- 1 root root 311253204992 Jan 31 00:14
> > > /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/
> snapshots/4391/12401/e7364ecf-56f2-451d-ba2e-537b9465097f
> > > -rw-r--r-- 1 root root 195583541248 Jan 31 12:30
> > > /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/
> template/tmpl/3121/473/e7364ecf-56f2-451d-ba2e-537b9465097f.qcow2
> > >
> > > The oddest thing is that the "cp" process in the SSVM is being
> terminated
> > > exactly in an hour after its start. Who would be doing that each time
> I'm
> > > trying to create a template? Isn't it being done by some script at the
> SSVM
> > > itself?
> > >
> > >
> > > On Mon, Jan 29, 2018 at 05:36:54PM +0200, Vladimir Melnik wrote:
> > >> Thank you, Lucian! My MySQL timeout thresholds are higher than 1 hour,
> but
> > >> there's HAproxy between ACS and MySQL, so I've changed haproxy's
> timeouts and
> > >> now will see what happens in an hour :-)
> > >>
> > >> On Mon, Jan 29, 2018 at 11:47:31AM +, Nux! wrote:
> > >> > I'm usually a sucker with these Java errors, but the error coming
> from the jdbc
> > >> > mysql driver makes me think maybe this is related to MySQL timeouts.
> > >> >
> > >> > Can you check your db installation for wait_timeouts,
> interactive_timeout,
> > >> > connect_timeout  and so on, see if any match your 3600 seconds?
> > >> >
> > >> > random search result
> > >> > http://www.supermanhamuerto.com/doku.php?id=java%
> 3athelastpacketsuccessfullyreceivedfromserver
> > >> >
> > >> > hth
> > >> > Lucian
> > >> >
> > >> > --
> > >> > Sent from the Delta quadrant using Borg technology!
> > >> >
> > >> > Nux!
> > >> > www.nux.ro
> > >> >
> > >> > - Original Message -
> > >> > > From: "Vladimir Melnik" 

FW: Apache EU Roadshow 2018 in Berlin

2018-02-01 Thread Paul Angus
Hi Everyone,

I’m cross-posting again…

I think that it would be great if we could have a couple of CloudStack
presentations; maybe we could pair a user story with a development piece to
showcase CloudStack from both perspectives.

Anyone up for this?


paul.an...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 

From: Sharan F [mailto:sha...@apache.org]
Sent: 01 February 2018 14:47
To: committ...@apache.org
Subject: Apache EU Roadshow 2018 in Berlin

Hi Everyone

For those of you who may not have seen the blog post, we will be holding an 
Apache EU Roadshow co-located with FOSS Backstage in Berlin on 13th and 14th 
June 2018. 
https://blogs.apache.org/foundation/entry/the-apache-software-foundation-announces28

As we have limited capacity for tracks, we are focussing on areas and projects 
that can deliver full tracks and also can attract good audiences. (IoT, Cloud, 
Httpd and Tomcat). Our community and Apache Way related talks will be managed 
as part of the FOSS Backstage program.

The CFP for our EU Roadshow event is now open at 
http://apachecon.com/euroadshow18/ and we are looking forward to receiving your 
submissions. I encourage you to please promote this event within your projects.

More details will be coming out soon and you can keep up to date by regularly 
checking http://apachecon.com/ or following @ApacheCon on Twitter.

Thanks
Sharan




Re: Time-out when creating a template from a snapshot

2018-02-01 Thread Vladimir Melnik
Thanks a lot, any help will be so much appreciated!

On Wed, Jan 31, 2018 at 05:23:25PM +, Nux! wrote:
> It's possible there are timeouts being hit somewhere. I'd take this to dev@ 
> to be honest, I am not very familiar with the ssvm internals.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
> > From: "Vladimir Melnik" 
> > To: "users" 
> > Sent: Wednesday, 31 January, 2018 12:42:01
> > Subject: Re: Time-out when creating a template from a snapshot
> 
> > No, it doesn't seem to be a database-related issue.
> > 
> > This time I haven't got any error messages at all. Moreover, I see this 
> > template
> > as available in the templates' list and there's the following message in the
> > log-file:
> > 2018-01-31 14:30:09,862 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > (API-Job-Executor-1:ctx-b39838a2 job-1485421 ctx-3b9d9083) (logid:cb74bce4)
> > Complete async job-1485421, jobStatus: SUCCEEDED, resultCode: 0, result:
> > org.apache.cloudstack.api.response.TemplateResponse/template/{"id":"b69c65b3-70d6-4000-934c-fac9e887d3ef","name":"Tucha#2018012713000408","displaytext":"Tucha#2018012713000408","ispublic":false,"created":"2018-01-31T13:30:09+0200","isready":true,"passwordenabled":true,"format":"QCOW2","isfeatured":false,"crossZones":false,"ostypeid":"b5490e1c-bd31-11e6-b74f-06973a00088a","ostypename":"Windows
> > Server 2012 R2
> > (64-bit)","account":"admin#lite","zoneid":"c8d773fa-76ca-4637-8ecf-88656444fc86","zonename":"z2.tucha13.net","status":"Download
> > Complete","size":375809638400,"templatetype":"USER","hypervisor":"KVM","domain":"ROOT","domainid":"b514ef44-bd2f-11e6-b74f-06973a00088a","isextractable":false,"sourcetemplateid":"ba26d2a9-5e2f-468d-8a38-df71a7811ee8","details":{"memoryOvercommitRatio":"1.0","cpuNumber":"4","cpuSpeed":"2399","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"10","memory":"12288"},"sshkeyenabled":false,"isdynamicallyscalable":false,"tags":[]}
> > 
> > But at the same time I see that the template's file is less than the 
> > snapshot's
> > file:
> > -rw-r--r-- 1 root root 311253204992 Jan 31 00:14
> > /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/snapshots/4391/12401/e7364ecf-56f2-451d-ba2e-537b9465097f
> > -rw-r--r-- 1 root root 195583541248 Jan 31 12:30
> > /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/template/tmpl/3121/473/e7364ecf-56f2-451d-ba2e-537b9465097f.qcow2
> > 
> > The oddest thing is that the "cp" process in the SSVM is being terminated
> > exactly in an hour after its start. Who would be doing that each time I'm
> > trying to create a template? Isn't it being done by some script at the SSVM
> > itself?
> > 
> > 
> > On Mon, Jan 29, 2018 at 05:36:54PM +0200, Vladimir Melnik wrote:
> >> Thank you, Lucian! My MySQL timeout thresholds are higher than 1 hour, but
> >> there's HAproxy between ACS and MySQL, so I've changed haproxy's timeouts 
> >> and
> >> now will see what happens in an hour :-)
> >> 
> >> On Mon, Jan 29, 2018 at 11:47:31AM +, Nux! wrote:
> >> > I'm usually a sucker with these Java errors, but the error coming from 
> >> > the jdbc
> >> > mysql driver makes me think maybe this is related to MySQL timeouts.
> >> > 
> >> > Can you check your db installation for wait_timeouts, 
> >> > interactive_timeout,
> >> > connect_timeout  and so on, see if any match your 3600 seconds?
> >> > 
> >> > random search result
> >> > http://www.supermanhamuerto.com/doku.php?id=java%3athelastpacketsuccessfullyreceivedfromserver
> >> > 
> >> > hth
> >> > Lucian
> >> > 
> >> > --
> >> > Sent from the Delta quadrant using Borg technology!
> >> > 
> >> > Nux!
> >> > www.nux.ro
> >> > 
> >> > - Original Message -
> >> > > From: "Vladimir Melnik" 
> >> > > To: "users" 
> >> > > Sent: Monday, 29 January, 2018 09:29:18
> >> > > Subject: Time-out when creating a template from a snapshot
> >> > 
> >> > > Dear colleagues,
> >> > > 
> >> > > Would anyone be so kind as to help me to find out how to change time 
> >> > > limits for
> >> > > template creation?
> >> > > 
> >> > > When I create a template from a snapshot, I have only an hour to have 
> >> > > it done,
> >> > > otherwise the operation is being terminated exactly after 3600 seconds,
> >> > > but I
> >> > > can't understand why it happens, as my settings seem to be quite
> >> > > "loose":
> >> > > 
> >> > > create.private.template.from.snapshot.wait = 10800
> >> > > secstorage.cmd.execution.time.max = 240
> >> > > vm.job.timeout = 60
> >> > > wait = 1800
> >> > > 
> >> > > Here are the messages I see in the management log-file:
> >> > > 
> >> > > 2018-01-29 10:22:04,029 WARN  [o.a.c.f.j.i.AsyncJobMonitor]
> >> > > (Timer-1:ctx-7a53941f) (logid:e215433a) Task (job-1476131) has been 
> >> > > pending for
> >> > > 3577 seconds
> >> 

Re: Time-out when creating a template from a snapshot

2018-02-01 Thread Vladimir Melnik
No, the primary storage is local (in this case), but the primary storage isn't
involved, as I'm creating a template from a snapshot which resides on
secondary storage.

The snapshot's size is ~300GB.

On Wed, Jan 31, 2018 at 06:18:43PM +, Simon Weller wrote:
> Is your primary storage NFS as well? How big is the disk being snapshotted?
> 
> 
> 
> From: Nux! 
> Sent: Wednesday, January 31, 2018 11:23 AM
> To: users
> Subject: Re: Time-out when creating a template from a snapshot
> 
> It's possible there are timeouts being hit somewhere. I'd take this to dev@ 
> to be honest, I am not very familiar with the ssvm internals.
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
> > From: "Vladimir Melnik" 
> > To: "users" 
> > Sent: Wednesday, 31 January, 2018 12:42:01
> > Subject: Re: Time-out when creating a template from a snapshot
> 
> > No, it doesn't seem to be a database-related issue.
> >
> > This time I haven't got any error messages at all. Moreover, I see this 
> > template
> > as available in the templates' list and there's the following message in the
> > log-file:
> > 2018-01-31 14:30:09,862 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> > (API-Job-Executor-1:ctx-b39838a2 job-1485421 ctx-3b9d9083) (logid:cb74bce4)
> > Complete async job-1485421, jobStatus: SUCCEEDED, resultCode: 0, result:
> > org.apache.cloudstack.api.response.TemplateResponse/template/{"id":"b69c65b3-70d6-4000-934c-fac9e887d3ef","name":"Tucha#2018012713000408","displaytext":"Tucha#2018012713000408","ispublic":false,"created":"2018-01-31T13:30:09+0200","isready":true,"passwordenabled":true,"format":"QCOW2","isfeatured":false,"crossZones":false,"ostypeid":"b5490e1c-bd31-11e6-b74f-06973a00088a","ostypename":"Windows
> > Server 2012 R2
> > (64-bit)","account":"admin#lite","zoneid":"c8d773fa-76ca-4637-8ecf-88656444fc86","zonename":"z2.tucha13.net","status":"Download
> > Complete","size":375809638400,"templatetype":"USER","hypervisor":"KVM","domain":"ROOT","domainid":"b514ef44-bd2f-11e6-b74f-06973a00088a","isextractable":false,"sourcetemplateid":"ba26d2a9-5e2f-468d-8a38-df71a7811ee8","details":{"memoryOvercommitRatio":"1.0","cpuNumber":"4","cpuSpeed":"2399","Message.ReservedCapacityFreed.Flag":"false","cpuOvercommitRatio":"10","memory":"12288"},"sshkeyenabled":false,"isdynamicallyscalable":false,"tags":[]}
> >
> > But at the same time I see that the template's file is less than the 
> > snapshot's
> > file:
> > -rw-r--r-- 1 root root 311253204992 Jan 31 00:14
> > /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/snapshots/4391/12401/e7364ecf-56f2-451d-ba2e-537b9465097f
> > -rw-r--r-- 1 root root 195583541248 Jan 31 12:30
> > /mnt/SecStorage/ea7ebf9a-5195-31ab-be8e-f9348f9fee2b/template/tmpl/3121/473/e7364ecf-56f2-451d-ba2e-537b9465097f.qcow2
> >
> > The oddest thing is that the "cp" process in the SSVM is being terminated
> > exactly in an hour after its start. Who would be doing that each time I'm
> > trying to create a template? Isn't it being done by some script at the SSVM
> > itself?
> >
> >
> > On Mon, Jan 29, 2018 at 05:36:54PM +0200, Vladimir Melnik wrote:
> >> Thank you, Lucian! My MySQL timeout thresholds are higher than 1 hour, but
> >> there's HAproxy between ACS and MySQL, so I've changed haproxy's timeouts 
> >> and
> >> now will see what happens in an hour :-)
> >>
> >> On Mon, Jan 29, 2018 at 11:47:31AM +, Nux! wrote:
> >> > I'm usually a sucker with these Java errors, but the error coming from 
> >> > the jdbc
> >> > mysql driver makes me think maybe this is related to MySQL timeouts.
> >> >
> >> > Can you check your db installation for wait_timeouts, 
> >> > interactive_timeout,
> >> > connect_timeout  and so on, see if any match your 3600 seconds?
> >> >
> >> > random search result
> >> > http://www.supermanhamuerto.com/doku.php?id=java%3athelastpacketsuccessfullyreceivedfromserver
> 
> 
> 
> >> >
> >> > hth
> >> > Lucian
> >> >
> >> > --
> >> > Sent from the Delta quadrant using Borg technology!
> >> >
> >> > Nux!
> >> > www.nux.ro
> >> >
> >> > - Original Message -
> >> > > From: "Vladimir Melnik" 
> >> > > To: "users" 
> >> > > Sent: Monday, 29 January, 2018 09:29:18
> >> > > Subject: Time-out when creating a template from a snapshot
> >> >
> >> > > Dear colleagues,
> >> > >
> >> > > Would anyone be so kind as to help me to find out how to change time 
> >> > > limits for
> >> > > template creation?
> >> > >
> >> > > When I