KVM Live Storage Migration new Features

2020-03-03 Thread Melanie Desaive
Hi all,

I learned that KVM storage live migration is only possible in some rare
cases for our setup using CloudStack 4.11.

Andrija pointed me to the steps necessary to do KVM live storage
migrations on the libvirt layer, doing the database magic
afterwards. This works well. If this is the only way to achieve it, I
will script the necessary steps in Python to automate the task.
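
For anyone wanting to follow along, the manual flow looks roughly like
this (a sketch only, assuming a qcow2 volume and a migration to another
host; all names, paths and IDs are placeholders):

# prepare an empty stub volume of the same virtual size on the target pool
qemu-img create -f qcow2 /mnt/<target-pool>/<volume-uuid> <size>G
# dump the VM definition and edit the disk source paths to the new location
virsh dumpxml i-x-y-VM > /root/i-x-y-VM.xml
# live-migrate the VM together with its disk contents
virsh migrate --live --persistent --copy-storage-all --xml /root/i-x-y-VM.xml \
    i-x-y-VM qemu+tcp://<destination-host>/system --verbose
# finally tell CloudStack where the volume now lives (the "database magic")
mysql -u cloud -p cloud -e "update volumes set pool_id = <new pool id> where id = <volume id>;"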

I realized that several issues concerning KVM live migration will be
handled in the upcoming releases. Maybe an upgrade would solve my issue
and there would be no need to script the libvirt storage migration?

Unfortunately none of the issues I found on GitHub exactly addresses
my case. Could anyone advise me here? Could the update to 4.14 bring us
support for KVM live storage migration?

We are not using NFS but export block storage from our storage system:

We are mounting our storage LUNs over Fibre Channel with clvm/gfs2 and
importing them as SharedMountPoint in CloudStack. At this point we can
still change the architecture to make storage features (like storage
migration) possible in CloudStack.

For reference, these are the issues I found on GitHub:

4.11/#2298: CLOUDSTACK-9620: Enhancements for managed storage
4.12/#2997: Allow KVM VM live migration with ROOT volume on file
storage type
4.13/#3533: KVM local migration issue #3521
4.13/#3424: KVM Volumes: Limit migration of volumes within the same
storage pool.
4.13/#2983: KVM live storage migration intra cluster from NFS source
and destination
n.a./#3508: Enable KVM storage data motion on KVM
hypervisor_capabilities
  
Regards,

Melanie



Re: SystemVM Storage Tags not taken into account?

2019-11-07 Thread Melanie Desaive
Thank you so much!

It worked perfectly for us. We used the procedure to reorganize our storage
and move quite a number of VRs to defined storage pools.

On Wednesday, 06.11.2019, 12:10 +0000, Richard Lawley wrote:
> I wouldn't say this is something we do routinely, mostly to correct
> mistakes at the start.  You could end up with problems if you
> deployed
> VMs based on an old version of a service offering, then changed tags
> in such a way that there was no possible location a VM could start up
> next time.
> 
> However, with a dash of common sense it should be fine to use :)
> 
> On Wed, 6 Nov 2019 at 11:52, Melanie Desaive
>  wrote:
> > Hi Richard,
> > 
> > looks good. Just did an
> > 
> > update network_offerings set service_offering_id = <new service offering id> where id = <network offering id>
> > 
> > and restarted one of the networks from this offering with cleanup.
> > 
> > Comes up nicely and new tags are taken into account.
> > 
> > Do you use this procedure in production to change tags and
> > parameters
> > like cpus, ram?
> > 
> > Could gain lots of flexibility if this is safely possible.
> > 
> > Greetings,
> > 
> > Melanie
> > 
> > Am Montag, den 04.11.2019, 15:45 +0000 schrieb Richard Lawley:
> > > There's nothing in the API or the UI.  We just change it in the
> > > DB.
> > > 
> > > On Mon, 4 Nov 2019 at 13:48, Melanie Desaive
> > >  wrote:
> > > > Hi Richard,
> > > > 
> > > > thank you for this hint.
> > > > 
> > > > I had a look in the database, and yes, all Network Offerings
> > > > in
> > > > the
> > > > table network_offerings still reference the old System/Disk
> > > > offering
> > > > IDs from disk_offering/system_offering.
> > > > 
> > > > Is there an intended way to change
> > > > "network_offerings.service_offering_id" for an existing network
> > > > offering? Would it be ok to update the database? Is there an
> > > > API
> > > > call?
> > > > I did not find anything in the documentation.
> > > > 
> > > > Kind regards,
> > > > 
> > > > Melanie
> > > > 
> > > > 
> > > > 
> > > > Am Freitag, den 01.11.2019, 09:25 + schrieb Richard Lawley:
> > > > > Melanie,
> > > > > 
> > > > > > Maybe the procedure for resetting the System Offering for
> > > > > > Virtual
> > > > > > Routers differs from that for SSVM and CP and I missed some
> > > > > > point?
> > > > > 
> > > > > The System Offering for Virtual Routers is not taken from the
> > > > > same
> > > > > place as SSVM/CP - it's set on the Network Offering instead,
> > > > > so
> > > > > you
> > > > > can have different network offerings with different system
> > > > > offerings.
> > > > > 
> > > > > Regards,
> > > > > 
> > > > > Richard
> > > > > 
> > > > > On Fri, 1 Nov 2019 at 08:33, Melanie Desaive
> > > > >  wrote:
> > > > > > Good morning Andrija,
> > > > > > 
> > > > > > yes, I did restart mgmt. Documentation states that.
> > > > > > 
> > > > > > Interestingly the documentation in
> > > > > > http://docs.cloudstack.apache.org/en/4.11.1.0/adminguide/service_offerings.html#changing-the-default-system-offering-for-system-vms
> > > > > > only mentions resetting the unique_names for the Secondary
> > > > > > Storage
> > > > > > VM
> > > > > > and Console Proxy VM not for the Virtual Routers in the
> > > > > > database.
> > > > > > 
> > > > > > Maybe the procedure for resetting the System Offering for
> > > > > > Virtual
> > > > > > Routers differs from that for SSVM and CP and I missed some
> > > > > > point?
> > > > > > 
> > > > > > Greetings,
> > > > > > 
> > > > > > Melanie
> > > > > > 
> > > > > > Am Donnerstag, den 31.10.2019, 17:19 +0100 schrieb Andrija
> > > > > > Panic:
> > > > > > > tried restarting mgmt after tag change? Usually not

Re: SystemVM Storage Tags not taken into account?

2019-11-06 Thread Melanie Desaive
Hi Richard,

looks good. I just did an

update network_offerings set service_offering_id = <new service offering id> where id = <network offering id>

and restarted one of the networks from this offering with cleanup.

Comes up nicely and new tags are taken into account.
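
In shell form, the complete step was roughly this (a sketch; the IDs and
the network UUID are placeholders, and the restart with cleanup can also
be done from the UI or, e.g., via CloudMonkey):

mysql -u cloud -p cloud -e "update network_offerings set service_offering_id = <new service offering id> where id = <network offering id>;"
# redeploy the VRs so they pick up the new system offering and tags
cloudmonkey restart network id=<network uuid> cleanup=true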

Do you use this procedure in production to change tags and parameters
like cpus, ram?

Could gain lots of flexibility if this is safely possible.

Greetings,

Melanie

On Monday, 04.11.2019, 15:45 +0000, Richard Lawley wrote:
> There's nothing in the API or the UI.  We just change it in the DB.
> 
> On Mon, 4 Nov 2019 at 13:48, Melanie Desaive
>  wrote:
> > Hi Richard,
> > 
> > thank you for this hint.
> > 
> > I had a look in the database, and yes, all Network Offerings in
> > the
> > table network_offerings still reference the old System/Disk
> > offering
> > IDs from disk_offering/system_offering.
> > 
> > Is there an intended way to change
> > "network_offerings.service_offering_id" for an existing network
> > offering? Would it be ok to update the database? Is there an API
> > call?
> > I did not find anything in the documentation.
> > 
> > Kind regards,
> > 
> > Melanie
> > 
> > 
> > 
> > Am Freitag, den 01.11.2019, 09:25 + schrieb Richard Lawley:
> > > Melanie,
> > > 
> > > > Maybe the procedure for resetting the System Offering for
> > > > Virtual
> > > > Routers differs from that for SSVM and CP and I missed some
> > > > point?
> > > 
> > > The System Offering for Virtual Routers is not taken from the
> > > same
> > > place as SSVM/CP - it's set on the Network Offering instead, so
> > > you
> > > can have different network offerings with different system
> > > offerings.
> > > 
> > > Regards,
> > > 
> > > Richard
> > > 
> > > On Fri, 1 Nov 2019 at 08:33, Melanie Desaive
> > >  wrote:
> > > > Good morning Andrija,
> > > > 
> > > > yes, I did restart mgmt. Documentation states that.
> > > > 
> > > > Interestingly the documentation in
> > > > http://docs.cloudstack.apache.org/en/4.11.1.0/adminguide/service_offerings.html#changing-the-default-system-offering-for-system-vms
> > > > only mentions resetting the unique_names for the Secondary
> > > > Storage
> > > > VM
> > > > and Console Proxy VM not for the Virtual Routers in the
> > > > database.
> > > > 
> > > > Maybe the procedure for resetting the System Offering for
> > > > Virtual
> > > > Routers differs from that for SSVM and CP and I missed some
> > > > point?
> > > > 
> > > > Greetings,
> > > > 
> > > > Melanie
> > > > 
> > > > Am Donnerstag, den 31.10.2019, 17:19 +0100 schrieb Andrija
> > > > Panic:
> > > > > tried restarting mgmt after tag change? Usually not required
> > > > > but
> > > > > might be
> > > > > for systemVMs.
> > > > > 
> > > > > On Thu, 31 Oct 2019, 15:21 Melanie Desaive, <
> > > > > m.desa...@mailbox.org>
> > > > > wrote:
> > > > > 
> > > > > > Hi all,
> > > > > > 
> > > > > > I just tried to set up storage tags for System VMs, but the
> > > > > > behaviour
> > > > > > is not as expected. The deployment planner does not seem to
> > > > > > take
> > > > > > the
> > > > > > storage tag into account when deciding over the storage.
> > > > > > 
> > > > > > --
> > > > > > 
> > > > > > The only storage with the tag "SYSTEMVM" is "ACS-LUN-SAS-01"
> > > > > > with id=10
> > > > > > 
> > > > > > mysql> select id,name,tag from storage_pool_view where
> > > > > > cluster_name
> > > > > > =
> > > > > > 'cluster2' and status = 'Up' and tag = 'SYSTEMVM' order by
> > > > > > name,tag;
> > > > > > +----+----------------+----------+
> > > > > > | id | name           | tag      |
> > > > > > +----+----------------+----------+
> > > > > > | 10 | ACS-LUN-SAS-01 | SYSTEMVM |
> > > > > > +----+----------------+----------+
> > > > > > 

Re: SystemVM Storage Tags not taken into account?

2019-11-04 Thread Melanie Desaive
Hi Richard,

thank you for this hint.

I had a look in the database, and yes, all Network Offerings in the
table network_offerings still reference the old System/Disk offering
IDs from disk_offering/system_offering.

Is there an intended way to change
"network_offerings.service_offering_id" for an existing network
offering? Would it be ok to update the database? Is there an API call?
I did not find anything in the documentation.

Kind regards,

Melanie



On Friday, 01.11.2019, 09:25 +0000, Richard Lawley wrote:
> Melanie,
> 
> > Maybe the procedure for resetting the System Offering for Virtual
> > Routers differs from that for SSVM and CP and I missed some point?
> 
> The System Offering for Virtual Routers is not taken from the same
> place as SSVM/CP - it's set on the Network Offering instead, so you
> can have different network offerings with different system offerings.
> 
> Regards,
> 
> Richard
> 
> On Fri, 1 Nov 2019 at 08:33, Melanie Desaive
>  wrote:
> > Good morning Andrija,
> > 
> > yes, I did restart mgmt. Documentation states that.
> > 
> > Interestingly the documentation in
> > http://docs.cloudstack.apache.org/en/4.11.1.0/adminguide/service_offerings.html#changing-the-default-system-offering-for-system-vms
> > only mentions resetting the unique_names for the Secondary Storage
> > VM
> > and Console Proxy VM not for the Virtual Routers in the database.
> > 
> > Maybe the procedure for resetting the System Offering for Virtual
> > Routers differs from that for SSVM and CP and I missed some point?
> > 
> > Greetings,
> > 
> > Melanie
> > 
> > Am Donnerstag, den 31.10.2019, 17:19 +0100 schrieb Andrija Panic:
> > > tried restarting mgmt after tag change? Usually not required but
> > > might be
> > > for systemVMs.
> > > 
> > > On Thu, 31 Oct 2019, 15:21 Melanie Desaive, <
> > > m.desa...@mailbox.org>
> > > wrote:
> > > 
> > > > Hi all,
> > > > 
> > > > I just tried to set up storage tags for System VMs, but the
> > > > behaviour
> > > > is not as expected. The deployment planner does not seem to
> > > > take
> > > > the
> > > > storage tag into account when deciding over the storage.
> > > > 
> > > > --
> > > > 
> > > > The only storage with the tag "SYSTEMVM" is "ACS-LUN-SAS-01"
> > > > with id=10
> > > > 
> > > > mysql> select id,name,tag from storage_pool_view where
> > > > cluster_name
> > > > =
> > > > 'cluster2' and status = 'Up' and tag = 'SYSTEMVM' order by
> > > > name,tag;
> > > > +----+----------------+----------+
> > > > | id | name           | tag      |
> > > > +----+----------------+----------+
> > > > | 10 | ACS-LUN-SAS-01 | SYSTEMVM |
> > > > +----+----------------+----------+
> > > > 1 row in set (0,00 sec)
> > > > 
> > > > --
> > > > 
> > > > I defined the tag "SYSTEMVM" for the System Offering for the
> > > > Virtual
> > > > Routers:
> > > > 
> > > > mysql> select id,name,unique_name,type,state,tags from
> > > > disk_offering
> > > > where type='Service' and state='Active' and unique_name like
> > > > 'Cloud.Com-SoftwareRouter' order by unique_name \G
> > > > *** 1. row ***
> > > >  id: 281
> > > >name: System Offering For Software Router - With Tags
> > > > unique_name: Cloud.Com-SoftwareRouter
> > > >type: Service
> > > >   state: Active
> > > >tags: SYSTEMVM
> > > > 1 row in set (0,00 sec)
> > > > 
> > > > --
> > > > 
> > > > But when I redeploy a virtual Router the deployment planner
> > > > takes
> > > > all
> > > > storages into account. :(
> > > > 
> > > > The log explicitly says "Pools matching tags..." and lists several
> > > > other pools.
> > > > What am I missing?
> > > > 
> > > > --
> > > > ClusterScopeStoragePoolAllocator looking for storage pool
> > > > Looking for pools in dc: 1  pod:1  cluster:3. Disabled pools
> > > > will
> > > > be
> > > > ignored.
> > > > Found pools matching tags: [Pool[7|PreSetup], Pool[9|PreSetup],
> > > > Pool[10|PreSetup], Pool[18|PreSetup]]
> > > > ClusterScopeStoragePoolAllocator returning 3 suitable storage
> > > > pools
> > > > ClusterScopeStoragePoolAllocator looking for storage pool
> > > > Looking for pools in dc: 1  pod:1  cluster:3. Disabled pools
> > > > will
> > > > be
> > > > ignored.
> > > > Found pools matching tags: [Pool[7|PreSetup], Pool[9|PreSetup],
> > > > Pool[10|PreSetup], Pool[18|PreSetup]]
> > > > ClusterScopeStoragePoolAllocator returning 3 suitable storage
> > > > pools
> > > > --
> > > > 
> > > > Kind regards,
> > > > 
> > > > Melanie
> > > > 



Re: SystemVM Storage Tags not taken into account?

2019-11-01 Thread Melanie Desaive
Good morning Andrija,

yes, I did restart mgmt. Documentation states that.

Interestingly the documentation in 
http://docs.cloudstack.apache.org/en/4.11.1.0/adminguide/service_offerings.html#changing-the-default-system-offering-for-system-vms
only mentions resetting the unique_names for the Secondary Storage VM
and Console Proxy VM in the database, not for the Virtual Routers.

Maybe the procedure for resetting the System Offering for Virtual
Routers differs from that for SSVM and CP and I missed some point?

Greetings,

Melanie

On Thursday, 31.10.2019, 17:19 +0100, Andrija Panic wrote:
> tried restarting mgmt after tag change? Usually not required but
> might be
> for systemVMs.
> 
> On Thu, 31 Oct 2019, 15:21 Melanie Desaive, 
> wrote:
> 
> > Hi all,
> > 
> > I just tried to set up storage tags for System VMs, but the
> > behaviour
> > is not as expected. The deployment planner does not seem to take
> > the
> > storage tag into account when deciding over the storage.
> > 
> > --
> > 
> > The only storage with the tag "SYSTEMVM" is "ACS-LUN-SAS-01" with
> > id=10
> > 
> > mysql> select id,name,tag from storage_pool_view where cluster_name
> > =
> > 'cluster2' and status = 'Up' and tag = 'SYSTEMVM' order by
> > name,tag;
> > +----+----------------+----------+
> > | id | name           | tag      |
> > +----+----------------+----------+
> > | 10 | ACS-LUN-SAS-01 | SYSTEMVM |
> > +----+----------------+----------+
> > 1 row in set (0,00 sec)
> > 
> > --
> > 
> > I defined the tag "SYSTEMVM" for the System Offering for the
> > Virtual
> > Routers:
> > 
> > mysql> select id,name,unique_name,type,state,tags from
> > disk_offering
> > where type='Service' and state='Active' and unique_name like
> > 'Cloud.Com-SoftwareRouter' order by unique_name \G
> > *** 1. row ***
> >  id: 281
> >name: System Offering For Software Router - With Tags
> > unique_name: Cloud.Com-SoftwareRouter
> >type: Service
> >   state: Active
> >tags: SYSTEMVM
> > 1 row in set (0,00 sec)
> > 
> > --
> > 
> > But when I redeploy a virtual Router the deployment planner takes
> > all
> > storages into account. :(
> > 
> > The log explicitly says "Pools matching tags..." and lists several
> > other pools.
> > What am I missing?
> > 
> > --
> > ClusterScopeStoragePoolAllocator looking for storage pool
> > Looking for pools in dc: 1  pod:1  cluster:3. Disabled pools will
> > be
> > ignored.
> > Found pools matching tags: [Pool[7|PreSetup], Pool[9|PreSetup],
> > Pool[10|PreSetup], Pool[18|PreSetup]]
> > ClusterScopeStoragePoolAllocator returning 3 suitable storage pools
> > ClusterScopeStoragePoolAllocator looking for storage pool
> > Looking for pools in dc: 1  pod:1  cluster:3. Disabled pools will
> > be
> > ignored.
> > Found pools matching tags: [Pool[7|PreSetup], Pool[9|PreSetup],
> > Pool[10|PreSetup], Pool[18|PreSetup]]
> > ClusterScopeStoragePoolAllocator returning 3 suitable storage pools
> > --
> > 
> > Kind regards,
> > 
> > Melanie
> > 



SystemVM Storage Tags not taken into account?

2019-10-31 Thread Melanie Desaive
Hi all,

I just tried to set up storage tags for System VMs, but the behaviour
is not as expected. The deployment planner does not seem to take the
storage tag into account when deciding over the storage.

--

The only storage with the tag "SYSTEMVM" is "ACS-LUN-SAS-01" with id=10

mysql> select id,name,tag from storage_pool_view where cluster_name =
'cluster2' and status = 'Up' and tag = 'SYSTEMVM' order by name,tag;
+----+----------------+----------+
| id | name           | tag      |
+----+----------------+----------+
| 10 | ACS-LUN-SAS-01 | SYSTEMVM |
+----+----------------+----------+
1 row in set (0,00 sec)

--

I defined the tag "SYSTEMVM" for the System Offering for the Virtual
Routers:

mysql> select id,name,unique_name,type,state,tags from disk_offering
where type='Service' and state='Active' and unique_name like
'Cloud.Com-SoftwareRouter' order by unique_name \G
*************************** 1. row ***************************
         id: 281
       name: System Offering For Software Router - With Tags
unique_name: Cloud.Com-SoftwareRouter
       type: Service
      state: Active
       tags: SYSTEMVM
1 row in set (0,00 sec)

-- 

But when I redeploy a virtual Router the deployment planner takes all
storages into account. :(

The log explicitly says "Pools matching tags..." and lists several
other pools.
What am I missing?

--
ClusterScopeStoragePoolAllocator looking for storage pool
Looking for pools in dc: 1  pod:1  cluster:3. Disabled pools will be
ignored.
Found pools matching tags: [Pool[7|PreSetup], Pool[9|PreSetup],
Pool[10|PreSetup], Pool[18|PreSetup]]
ClusterScopeStoragePoolAllocator returning 3 suitable storage pools
ClusterScopeStoragePoolAllocator looking for storage pool
Looking for pools in dc: 1  pod:1  cluster:3. Disabled pools will be
ignored.
Found pools matching tags: [Pool[7|PreSetup], Pool[9|PreSetup],
Pool[10|PreSetup], Pool[18|PreSetup]]
ClusterScopeStoragePoolAllocator returning 3 suitable storage pools
--
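
One more thing I could check is which system offering (and therefore
which storage tags) is actually resolved for the router, by following the
network offering (a sketch; the offering name is a placeholder):

mysql -u cloud -p cloud -e "
  select n.id, n.name, n.service_offering_id, d.tags
    from network_offerings n
    join disk_offering d on d.id = n.service_offering_id
   where n.name = '<network offering name>';"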

Kind regards,

Melanie




Re: kvm live volume migration

2019-09-26 Thread Melanie Desaive
Hi Andrija,

thank you so much for your support. 

It worked perfectly.

I used the oVirt 4.3 Repository:
yum install 
https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm

And limited the repo to the libvirt and qemu packages:
includepkgs=qemu-* libvirt-*
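
In practice that meant installing the release RPM and then adding the
includepkgs line to the repo file it drops under /etc/yum.repos.d/ (a
sketch; the exact file and section names depend on the oVirt release):

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
# in the repo file(s) under /etc/yum.repos.d/, add to the enabled section(s):
#   includepkgs=qemu-* libvirt-*
yum clean metadata
yum upgrade 'qemu-*' 'libvirt-*'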

I had some issues with the configuration of my secondary storage NFS and had
to enable rpc.statd on the NFS server; otherwise I was not able to
mount ISO images from secondary storage. With the packages from CentOS Base
this was not a problem.

After changing to the oVirt packages, I was able to migrate a volume online
between two storage repositories using the "virsh --copy-storage-all --xml"
mechanism.

Afterwards I updated the CloudStack database, setting pool_id in volumes
to the new storage for the migrated volume, and everything looked
perfect.
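
A quick sanity check after that edit (a sketch with placeholder IDs):

mysql -u cloud -p cloud -e "
  select v.id, v.name, v.pool_id, s.name as pool_name
    from volumes v
    join storage_pool s on s.id = v.pool_id
   where v.id = <volume id>;"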

I am still unsure how far to use this "hack" in production, but I feel
reassured that I now have the option to use this approach for
urgent cases where no VM downtime is possible for a storage migration.

If there is interest, I can offer to translate my notes into English and
provide them to others with the same need.

And @Andrija, you mentioned a "detailed step-by-step .docx guide". I
would really be interested; maybe there is further information I missed.
I would really appreciate it if you could forward this to me.

Greetings,

Melanie

On Wednesday, 25.09.2019, 13:51 +0200, Andrija Panic wrote:
> Hi Melanie,
> 
> so Ubuntu 14.04+  - i.e. 16.04 working fine, 18.04 also being
> supported in
> later releases...
> CentOS7 is THE recommended OS (or more recent Ubuntu) - but yes, RHEL
> makes
> small surprises sometimes (until CentOS 7.1, if not mistaken, they
> also
> didn't provide RBD/Ceph support, only in paid RHEV - won't comment on
> this
> lousy behaviour...)
> 
> Afaik, for KVM specifically, there is no polling of volumes' location, and you
> would
> need to update DB (pay attention also to usage records if that is of
> your
> interest)
> You'll need to test this kind of migration and DB schema thoroughly
> (including changing disk offerings and such in DB, in case your
> source/destination storage solution have different Storage TAGs in
> ACS)
> 
> I'm trying to stay away from any clustered file systems, 'cause when
> they
> break, they break bad...so can't comment there.
> You are using those as preSetup in KVM/CloudStack I guess - if it
> works,
> then all good.
> But...move on I suggest, if possible :)
> 
> Best
> Andrija
> 
> On Wed, 25 Sep 2019 at 13:16, Melanie Desaive <
> m.desa...@heinlein-support.de>
> wrote:
> 
> > Hi Andrija,
> > 
> > thank you so much for your detailed explanation! Looks like my
> > problem can be solved. :)
> > 
> > To summarize the information you provided:
> > 
> > As long as CloudStack does not support volume live migration I
> > could be
> > using
> > 
> > virsh with --copy-storage --xml.
> > 
> > BUT: CentOS7 is lacking necessary features! Bad luck. I started out
> > with CentOS7 as distro.
> > 
> > You suggest, that it could be worth trying the qemu/libvirt
> > packages
> > from the oVirt repository. I will look into this now.
> > 
> > But if that gets complicated: Cloudstack documentation lists
> > CentOS7
> > and Ubuntu 14.04 as supported Distros. Are there other not
> > officially
> > supported Distros/Version I could be using? I wanted to avoid the
> > quite
> > outdated Ubuntu 14.04 and did for that reason decide towards
> > CentOS7.
> > 
> > And another general question: How is CloudStack getting along with
> > the
> > Volumes of its VMs changing the storage repository without being
> > informed about it. Does it get this information through polling, or
> > do
> > I have to manipulate the database?
> > 
> > And to make things clearer: At the moment I am using storage
> > attached
> > through Fibre Channel using clustered LVM logic. Could also be
> > changing
> > to GFS2 on cLVM. Never heard anyone mentioning such a setup by now.
> > Am
> > I the only one running KVM on a proprietary storage system over
> > Fibrechannel, are there limitation/problems to be expected from
> > such a
> > setup?
> > 
> > Greetings,
> > 
> > Melanie
> > 
> > 
> > Am Mittwoch, den 25.09.2019, 11:46 +0200 schrieb Andrija Panic:
> > > So, let me explain.
> > > 
> > > Doing "online storage migration" aka live storage migration is
> > > working for
> > > CEPH/NFS --> SolidFire, starting from 4.11+
> > > Internally it is done in the same way as "vir

Re: kvm live volume migration

2019-09-25 Thread Melanie Desaive
Hi Andrija,

thank you so much for your detailed explanation! Looks like my
problem can be solved. :)

To summarize the information you provided:

As long as CloudStack does not support volume live migration I could be
using 

virsh with --copy-storage --xml.

BUT: CentOS7 is lacking necessary features! Bad luck. I started out
with CentOS7 as distro.

You suggest, that it could be worth trying the qemu/libvirt packages
from the oVirt repository. I will look into this now.

But if that gets complicated: the CloudStack documentation lists CentOS7
and Ubuntu 14.04 as supported distros. Are there other, not officially
supported, distros/versions I could be using? I wanted to avoid the quite
outdated Ubuntu 14.04 and for that reason decided on CentOS7.

And another general question: how does CloudStack get along with the
volumes of its VMs changing the storage repository without being
informed about it? Does it get this information through polling, or do
I have to manipulate the database?

And to make things clearer: at the moment I am using storage attached
through Fibre Channel using clustered LVM logic. We could also change
to GFS2 on cLVM. I have never heard anyone mention such a setup so far. Am
I the only one running KVM on a proprietary storage system over
Fibre Channel, and are there limitations/problems to be expected from such a
setup?

Greetings,

Melanie 


On Wednesday, 25.09.2019, 11:46 +0200, Andrija Panic wrote:
> So, let me explain.
> 
> Doing "online storage migration" aka live storage migration is
> working for
> CEPH/NFS --> SolidFire, starting from 4.11+
> Internally it is done in the same way as "virsh with --copy-storage-
> all
> --xml" in short
> 
> Longer explanation:
> Steps:
> You create new volumes on the destination storage (SolidFire in this
> case),
> set QoS etc - simply prepare the destination volumes (empty volumes
> atm).
> On source host/VM, dump VM XML, edit XML, change disk section to
> point to
> new volume path, protocol, etc - and also the IP address for the VNC
> (cloudstack requirement), save XMLT
> Then you do "virsh with --copy-storage-all --xml. myEditedVM.xml ..."
> stuff
> that does the job.
> Then NBD driver will be used to copy blocks from the source volumes
> to the
> destination volumes while that virsh command is working... (here's my
> demo,
> in details..
> https://www.youtube.com/watch?v=Eo8BuHBnVgg=PLEr0fbgkyLKyiPnNzPz7XDjxnmQNxjJWT=5=2s
> )
> 
> This is yet to be extended/coded to support NFS-->NFS or CEPH-->CEPH
> or
> CEPH/NFS-->CEPH/NFS... should not be that much work, the logic is
> there
> (bit part of the code)
> Also, starting from 4.12, you can actually  (I believe using
> identical
> logic) migrate only ROOT volume that are on the LOCAL storage (local
> disks)
> to another host/local storage - but DATA disks are not supported.
> 
> Now...imagine the feature is there - if using CentOS7, our friends at
> RedHat have removed support for actually using live storage migration
> (unless you are paying for RHEV - but it does work fine on CentOS6,
> and
> Ubuntu 14.04+
> 
> I recall "we" had to use qemu/libvirt from the "oVirt" repo which
> DOES
> (DID) support storage live migration (normal EV packages from the
> Special
> Interest Group (2.12 tested) - did NOT include this...)
> 
> I can send you step-by-step .docx guide for manually mimicking what
> is done
> (in SolidFire, but identical logic for other storages) - but not sure
> if
> that still helps you...
> 
> 
> Andrija
> 
> On Wed, 25 Sep 2019 at 10:51, Melanie Desaive <
> m.desa...@heinlein-support.de>
> wrote:
> 
> > Hi all,
> > 
> > I am currently doing my first steps with KVM as hypervisor for
> > CloudStack. I was shocked to realize that currently live volume
> > migration between different shared storages is not supported with
> > KVM.
> > This is a feature I use intensively with XenServer.
> > 
> > How do you get along with this limitation? I do really expect you
> > to
> > use some workarounds, or do you all only accept vm downtimes for a
> > storage migration?
> > 
> > With my first investigation I found three techniques mentioned and
> > would like to ask for suggestions which to investigate deeper:
> > 
> >  x Eric describes a technique using snapshosts and pauses to do a
> > live
> > storage migration in this mailing list tread.
> >  x Dag suggests using virsh with --copy-storage-all --xml.
> >  x I found articles about using virsh blockcopy for storage live
> > migration.
> > 
> > Greetings,
> > 
> > Melanie
> > 
> > Am Freitag, den 02.02.20

Re: kvm live volume migration

2019-09-25 Thread Melanie Desaive
Hi all,

I am currently doing my first steps with KVM as hypervisor for
CloudStack. I was shocked to realize that currently live volume
migration between different shared storages is not supported with KVM.
This is a feature I use intensively with XenServer.

How do you get along with this limitation? I do really expect you to
use some workarounds, or do you all only accept vm downtimes for a
storage migration?

With my first investigation I found three techniques mentioned and
would like to ask for suggestions on which to investigate deeper:

 x Eric describes a technique using snapshots and pauses to do a live
storage migration in this mailing list thread.
 x Dag suggests using virsh with --copy-storage-all --xml.
 x I found articles about using virsh blockcopy for storage live
migration (a rough sketch follows below).
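
For the blockcopy variant, a minimal sketch of what I found (the disk
target and destination path are placeholders, and depending on the
libvirt version the domain may need to be transient or the copy pivoted
explicitly):

virsh blockcopy <vm-name> vda /mnt/<target-pool>/<new-volume>.qcow2 \
    --wait --verbose --pivot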

Greetings,

Melanie

On Friday, 02.02.2018, 15:55 +0100, Andrija Panic wrote:
> @Dag, you might want to check with Mike Tutkowski, how he implemented
> this
> for the "online storage migration" from other storages (CEPH and NFS
> implemented so far as sources) to SolidFire.
> 
> We are doing exactly the same demo/manual way (this is what Mike has
> sent
> me back in the days), so perhaps you want to see how to translate
> this into
> general things (so ANY to ANY storage migration) inside CloudStack.
> 
> Cheers
> 
> On 2 February 2018 at 10:28, Dag Sonstebo  >
> wrote:
> 
> > All
> > 
> > I am doing a bit of R&D around this for a client at the moment. I
> > am
> > semi-successful in getting live migrations to different storage
> > pools to
> > work. The method I’m using is as follows – this does not take into
> > account
> > any efficiency optimisation around the disk transfer (which is next
> > on my
> > list). The below should answer your question Eric about moving to a
> > different location – and I am also working with your steps to see
> > where I
> > can improve the following. Keep in mind all of this is external to
> > CloudStack – although CloudStack picks up the destination KVM host
> > automatically it does not update the volume tables etc., neither
> > does it do
> > any housekeeping.
> > 
> > 1) Ensure the same network bridges are up on source and destination
> > –
> > these are found with:
> > 
> > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> >   
> >   
> >   
> >   
> > 
> > So from this make sure breth1-725 is up on the destination host
> > (do it
> > the hard way or cheat and spin up a VM from same account and
> > network on
> > that host)
> > 
> > 2) Find size of source disk and create stub disk in destination
> > (this part
> > can be made more efficient to speed up disk transfer – by doing
> > similar
> > things to what Eric is doing):
> > 
> > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-
> > 47ac-a326-5ce3d47d194d
> > file format: qcow2
> > virtual size: 8.0G (8589934592 bytes)
> > disk size: 32M
> > cluster_size: 65536
> > backing file: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-
> > eaec-
> > 11e7-800b-06b4a401075c
> > 
> > ##
> > 
> > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create
> > -f
> > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2
> > size=8589934592 encryption=off cluster_size=65536
> > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info
> > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > file format: qcow2
> > virtual size: 8.0G (8589934592 bytes)
> > disk size: 448K
> > cluster_size: 65536
> > 
> > 3) Rewrite the new VM XML file for the destination with:
> > a) New disk location, in this case this is just a new path (Eric –
> > this
> > answers your question)
> > b) Different IP addresses for VNC – in this case 10.0.0.1 to
> > 10.0.0.2
> > and carry out migration.
> > 
> > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-b717-
> > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e 's/
> > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > 
> > [root@kvm1 ~]# virsh migrate --live --persistent --copy-storage-all 
> > --xml
> > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --verbose
> > --abort-on-error
> > Migration: [ 25 %]
> > 
> > 4) Once complete delete the source file. This can be done with
> > extra
> > switches on the virsh migrate command if need be.
> > = = =
> > 
> > In the simplest tests this works – destination VM remains online
> > and has
> > storage in new location – but it’s not persistent – sometimes the
> > destination VM ends up in a paused state, and I’m working on how to
> > get
> > around this. I also noted virsh migrate has a  migrate-
> > setmaxdowntime which
> > I think can be useful here.
> > 
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> > 
> > On 01/02/2018, 20:30, "Andrija Panic" 
> > wrote:
> 

Re: Physical Network Setup and Labels for mixed XenServer / KVM Infrastructure

2019-08-15 Thread Melanie Desaive
Hi Andrija,

On Thursday, 15.08.2019, 12:54 +0200, Andrija Panic wrote:
> One question though, why are you aiming to use OVS for KVM? i.e. why
> not regular/default Linux bridge?

I am more familiar with OpenVSwitch and I did have the impression that
it gives more configuration options for VLANs. Would you prefer Linux
bridges?

Cheers,

Melanie

-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin




Re: Physical Network Setup and Labels for mixed XenServer / KVM Infrastructure

2019-08-15 Thread Melanie Desaive
Hi Andrija,

On Wednesday, 14.08.2019, 17:57 +0200, Andrija Panic wrote:

> a) I assume you are planning on staying on single "Physical network"
> in ACS?

Yes. But actually I have to admit that I am not aware how more
than one "Physical Network" could be set up for one zone and what that
would mean.

> b) are you planning on reconfiguring your XenServer hosts at all (in
> sense
> of networking) or just thinking adding KVM hosts and defining proper
> KMV
> Traffic Label for each traffic type you have?

Would it be possible to reconfigure the XenServer Host network labels
without taking all XenServer clusters down?

Anyway, I do not plan to reconfigure those. But I do think that, in
case the KVM hosts do their job nicely, it is very likely that we will
completely replace the XenServer infrastructure with KVM in the future.

> Makes sense to have all 4 traffic types targeting different Traffic
> Labels
> (networks/bridges) - since later you can change your underlying
> NICs/cabling infra to stick in more NICs as you suggested.
> I assume zero difference on switch port configuration - VLAN is a
> VLAN,
> whatever bridge/openvSwitch you use on hosts locally.

Good to hear your opinion. That gives me courage to look into it more
thoroughly and give it a try. ;)

> This would an interesting exercise anyway...

If you like, I keep you informed!

Cheers, 

Melanie
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin




Physical Network Setup and Labels for mixed XenServer / KVM Infrastructure

2019-08-14 Thread Melanie Desaive
Hello,

we plan to change from XenServer to KVM virtualisation for Apache
CloudStack 4.11 with advanced networking.

I am currently installing a KVM proof of concept and would like to
integrate two KVM hosts for testing as a new cluster in CloudStack.

I got some questions concerning the networking setup for the zone.

I plan to use OpenVSwitch on the KVM virtualization hosts.

I understand that I now define the labels for the zone's physical
network for KVM once. I now have the opportunity to decide
anew about the division between the different network types (e.g.
public, guest, management, storage). I do not have to keep the
decisions made for XenServer. Is this correct?

For XenServer we use one LACP trunk that carries all traffic. On
this LACP trunk we have one bridge/network named "LACPTRUNK", which is
used for all four different traffic categories.


uuid ( RO): ca88f7f8-2ce0-b3ea-b218-26336ee6496e
  name-label ( RW): LACPTRUNK
name-description ( RW): 2 x 10GBit/s LACP Dynamic über eth2 und
eth3 via openvswitch
  bridge ( RO): xapi1


I think that it could be a good idea to handle this differently in
KVM and would like to ask your opinion.

I am thinking about preparing four bridges on the KVM hosts, one bridge for
each of public, guest, management and storage. The problem is that in
our current hardware configuration I only have one physical (LACP bond)
port on the virtualization hosts. I would like to try the syntax
suggested in

http://docs.cloudstack.apache.org/en/latest/installguide/hypervisor/kvm.html#configure-the-network-using-openvswitch

preparing several "fake bridges" on the main LACP bond interface. I have
never nested OVS bridges up to now; do I have to expect something unexpected?
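
For reference, what I have in mind looks roughly like this (a sketch
only; bridge names, bond members and VLAN IDs are placeholders for our
setup):

ovs-vsctl add-br cloudbr0
ovs-vsctl add-bond cloudbr0 bond0 eth2 eth3 lacp=active
# "fake bridges" on top of cloudbr0, one per traffic type, each on its own VLAN
ovs-vsctl add-br mgmt0 cloudbr0 101
ovs-vsctl add-br storage0 cloudbr0 102
ovs-vsctl add-br public0 cloudbr0 103
# the bridge names would then go into the corresponding KVM traffic labels of the zone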

By splitting up the bridges, I would keep the option open to later set
up virtualization hosts with more physical network interfaces and
dedicate interfaces to traffic types, using a different OVS
configuration for future KVM clusters. Is that correct?

What do I have to keep in mind regarding the surrounding network
infrastructure and VLAN configuration for the switch ports the hosts
are attached to, when assigning more labels for KVM hosts than I used
with XenServer?

Do I have to expect different requirements for the switches? I would
expect the same switch port setup to be valid for KVM and XenServer.

Thank you all and best greetings,

Melanie 



---

-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin




Re: Limited network upload rate

2019-03-18 Thread Melanie Desaive
Hi Fariborz,

what hypervisor are you using? I tested the network limits for
XenServer some months ago and seem to remember that I learned there
are differences in how upload and download rates are realized.

If you are interested, I could look into it once again and share my
findings with you.

Greetings,

Melanie

On Friday, 15.03.2019, 21:41 +0330, Fariborz Navidan wrote:
> Hello,
> 
> My server has 1Gbps network connectivity with no limits and I have
> set
> 500Mbps network rate for the service offering. VMs deployed with this
> service offering seems to have 500Mbps download speed capacity,
> however
> looks like their upload speed rate is limited to 100Mbps by the
> virtual
> router. Also I have network throttling rate to 1 Mbps in the
> zone's
> settings.
> 
> Any idea what's limiting the upload speeds?
> 
> Best Regards
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin



Re: New VP of CloudStack: Paul Angus

2019-03-13 Thread Melanie Desaive
Wow! Great news! Congratulations Paul!

And thanks a lot to Mike!

All my best wishes to all of you!

On Monday, 11.03.2019, 15:16 +0000, Tutkowski, Mike wrote:
> Hi everyone,
> 
> As you may know, the role of VP of CloudStack (Chair of the
> CloudStack PMC) has a one-year term. My term has now come and gone.
> 
> I’m happy to announce that the CloudStack PMC has elected Paul Angus
> as our new VP of CloudStack.
> 
> As many already know, Paul has been an active member of the
> CloudStack Community for over six years now. I’ve worked with Paul on
> and off throughout much of that time and I believe he’ll be a great
> fit for this role.
> 
> Please join me in welcoming Paul as the new VP of Apache CloudStack!
> 
> Thanks,
> Mike
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin



Re: Configure individual DNS Servers for isolated Network

2019-03-13 Thread Melanie Desaive
Hey Dag,

thanks a lot for the swift answer!

Yes, we were aware of the possibility to change the VR's dnsmasq
configuration to achieve the desired effect, but did not like that
solution because it is non-persistent.

I was thinking about manipulating the database, too. Good to hear you
did not succeed with this approach. So I will not dig deeper into that.

I will discuss the options with my colleague!

Wishing you a nice evening!

Greetings,

Melanie


On Wednesday, 13.03.2019, 14:46 +0000, Dag Sonstebo wrote:
> Hi Melanie,
> 
> You can but it's a hack and it is not persistent whatsoever. The
> following will be wiped out every time you restart VR AND every time
> you create a new VM since it is updated on every handshake with ACS
> management:
> 
> If you edit the third line in /etc/dnsmasq.d/cloud.conf on the VR
> this will achieve it:
> 
> dhcp-range=set:interface-eth0-0,10.1.1.1,static
> dhcp-option=tag:interface-eth0-0,15,cs2cloud.internal
> dhcp-option=tag:interface-eth0-0,6,<NEW DNS SERVER HERE>,10.1.1.1,8.8.8.8,8.8.4.4
> dhcp-option=tag:interface-eth0-0,3,10.1.1.1
> dhcp-option=tag:interface-eth0-0,1,255.255.255.0
> 
> Once done do a "service dnsmasq restart", then clear the DHCP lease
> on your guest and request again - this will now pass the new
> nameserver in the DHCP lease.
> 
> For reference - I tried changing the network table entry and this
> does *not* accomplish anything:
> 
> > SELECT id,name,dns1,dns2 FROM cloud.networks where id=207
> > id  | name  | dns1  | dns2  |
> > 207 | dagnet2   | 1.1.1.1   | 9.9.9.9  
> 
> As far as I can see there's no API call to manipulate this either.
> 
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>  
> 
> On 13/03/2019, 13:41, "Melanie Desaive" <
> m.desa...@heinlein-support.de> wrote:
> 
> Hi all,
> 
> for an isolated Network: Is it possible to configure an
> alternative DNS
> Server IP for the network through the API or GUI? 
> 
> I want an individual IP from the isolated network itself to be
> pushed
> as DNS server by the VR DHCP-server. Not the DNS servers defined
> for
> the zone and not the VR's IP itself.
> 
> I know it is kind of a hack, but it would help us a lot to
> circumvent
> the VR for DNS in that use case.
> 
> Greetings,
> 
> Melanie
> -- 
> -- 
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>   
> https://www.heinlein-support.de
>  
> Tel: 030 / 40 50 51 - 62
> Fax: 030 / 40 50 51 - 19
>   
> Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
> 
> 
> 
> 
> dag.sonst...@shapeblue.com 
> www.shapeblue.com
> Amadeus House, Floral Street, London  WC2E 9DPUK
> @shapeblue
>   
>  
> 
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin



Configure individual DNS Servers for isolated Network

2019-03-13 Thread Melanie Desaive
Hi all,

for an isolated Network: Is it possible to configure an alternative DNS
Server IP for the network through the API or GUI? 

I want an individual IP from the isolated network itself to be pushed
as DNS server by the VR's DHCP server, not the DNS servers defined for
the zone and not the VR's IP itself.

I know it is kind of a hack, but it would help us a lot to circumvent
the VR for DNS in that use case.

Greetings,

Melanie
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin



Cleaning up Secondary Storage

2019-01-04 Thread Melanie Desaive
Hi all,

I stumbled over an old paper by Abhinandan Prateek for CCC Miami where
he describes how to clean up secondary storage.

Do I get it right that a:

mysql> select store_id,physical_size,install_path,volume_id,volumes.name
from volume_store_ref left join volumes on volume_store_ref.volume_id =
volumes.id;
+----------+---------------+------------------+-----------+-------------+
| store_id | physical_size | install_path     | volume_id | name        |
+----------+---------------+------------------+-----------+-------------+
|        2 |             0 | volumes/117/1911 |      1911 | imap1_SATA2 |
+----------+---------------+------------------+-----------+-------------+
1 row in set (0,00 sec)

should list all actually used volumes in secondary storage

And that in the above case I can safely delete all directories below
"volumes" except "117"!

Similar with snapshot_store_ref?

That would free up a lot of space, which I really do need! :D

Greetings,

Melanie

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


Re: How to Reorganise Storage Tags

2018-12-21 Thread Melanie Desaive
Hey Andrija,

your suggestions go in the same direction I was guessing. I will test it
thoroughly after the Christmas holidays.

And I like "manually change to some new one - and later via UI"! I had the
same plan!

All the best wishes for Christmas!

Greetings,

Melanie

Sent with BlueMail

On 20 Dec 2018, 17:27, Andrija Panic wrote:
>Hi Melanie,
>
>for root volumes - I'm pretty sure it's like following - sorry long
>copy/paste - here we examine sample VM (which we really did migrate
>from
>NFS/CEPH to SolidFire)
>(comments after code below...)
>
>mysql> select name,service_offering_id from vm_instance where
>uuid="5525edf1-94d7-4678-b484-a3292749c08f";
>+------+---------------------+
>| name | service_offering_id |
>+------+---------------------+
>| VM-X |                 837 |
>+------+---------------------+
>1 row in set (0.01 sec)
>
>mysql> select name,disk_offering_id from volumes where instance_id in
>(select id from vm_instance where
>uuid="5525edf1-94d7-4678-b484-a3292749c08f") and name like "ROOT%";
>+----------+------------------+
>| name     | disk_offering_id |
>+----------+------------------+
>| ROOT-226 |              837 |
>+----------+------------------+
>1 row in set (0.00 sec)
>
>
>mysql> select name,type,tags from disk_offering where id=837;
>+----------------------+---------+------------+
>| name                 | type    | tags       |
>+----------------------+---------+------------+
>| 4vCPU-8GB-SSD-STD-SF | Service | SolidFire1 |
>+----------------------+---------+------------+
>1 row in set (0.00 sec)
>
>mysql> select cpu,speed,ram_size from service_offering where id=837;
>+------+-------+----------+
>| cpu  | speed | ram_size |
>+------+-------+----------+
>|    4 |  2000 |     8192 |
>+------+-------+----------+
>
>So here you see both service offering (compute offering for user VMs,
>or
>really service offering for system VMs) AND the disk offering (ROOT)
>have
>same id of 837 (single offering that we examine here).
>
>So it's should be enough to just
>- change the disk_offering_id in the volumes table (for specific,
>migrated
>root - to point to some offering that targets new storage (by storage
>tags)
>- and later again change properly via UI//API to correct one
>- change service_offering_id in vm_instance table (for specific VM
>whose
>ROOT volume was migrated)
>-these 2 above needs to match obviously..
>
>Later when you change via UI/API the to correct/exact Compute Offering
>the
>way you like it - make sure that both tables are updated accordingly
>with
>new ID - in ACS 4.8 only service_offering_id (vm_instances table) was
>updated to point to new service offering, while disk_offering_id in
>"volume" table was NOT updated - we had an in-house patch for this to
>update this one as well...
>
>Above I always say "manually change to some new one - and later via UI"
> -
>in order to generate proper usage records for final offering chosen  -
>otherwise you can target final offering directly with DB edit...)
>
>Hope I did not confuse you even more :)
>
>Cheers
>andrija
>
>
>
>On Thu, 20 Dec 2018 at 14:29, Melanie Desaive
>
>wrote:
>
>> Hi Andrija,
>>
>> I tested your suggestion and they worked perfectly for data-volumes.
>>
>> Now I am trying to figure out how to change storage tags for
>> root-volumes of VMs and SystemVMs.
>>
>> For the root-volumes of user VMs the storage tag seems to come from
>the
>> service offering, I did not find any relationships to the table
>> disk_offering up to now. Still I am not 100% shure through which
>fields
>> the relationship is defined, and continue researching.
>>
>> For the root-volumes of the system-VMs the easiest way to change
>storage
>> tags seems to define new offering and then destroy/redeploy...
>>
>> If you like I will summarize my knowledge about this issue when I am
>> through with the task..
>>
>> Greetings,
>>
>> Melanie
>>
>>
>>
>> Am 13.12.18 um 19:58 schrieb Andrija Panic:
>> > Hi Melanie,
>> >
>> > I did change it, yes (tags on existing offerings) and no need to
>restart
>> > mgmt, just I once had to wait for a minute or two, but Im sure it
>was me
>> > messed up something at that specific moment.
>> >
>> > Tags are evaluated during creation of the volume only (and
>obviously when
>> > changing offering as you can see) and not relevant later for the
>volume -
>> > vs. i.e. cache mode (writeback etc.) 

Re: How to Reorganise Storage Tags

2018-12-20 Thread Melanie Desaive
Hi Andrija,

I tested your suggestion and they worked perfectly for data-volumes.

Now I am trying to figure out how to change storage tags for
root-volumes of VMs and SystemVMs.

For the root volumes of user VMs the storage tag seems to come from the
service offering; I did not find any relationship to the table
disk_offering up to now. Still, I am not 100% sure through which fields
the relationship is defined, and I continue researching.
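
What I am using to trace the relationship so far is roughly this (a
sketch with a placeholder UUID):

mysql -u cloud -p cloud -e "
  select v.name, v.disk_offering_id, d.tags
    from vm_instance i
    join volumes v on v.instance_id = i.id and v.name like 'ROOT%'
    join disk_offering d on d.id = v.disk_offering_id
   where i.uuid = '<vm uuid>';"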

For the root volumes of the system VMs the easiest way to change storage
tags seems to be to define a new offering and then destroy/redeploy...

If you like I will summarize my knowledge about this issue when I am
through with the task..

Greetings,

Melanie



Am 13.12.18 um 19:58 schrieb Andrija Panic:
> Hi Melanie,
> 
> I did change it, yes (tags on existing offerings) and no need to restart
> mgmt, just I once had to wait for a minute or two, but Im sure it was me
> messed up something at that specific moment.
> 
> Tags are evaluated during creation of the volume only (and obviously when
> changing offering as you can see) and not relevant later for the volume -
> vs. i.e. cache mode (writeback etc.) which is read during starting VM
> (attaching volume to VM in boot process).
> 
> Let me know if I can help more.
> 
> Cheers
> 
> On Thu, Dec 13, 2018, 18:41 Melanie Desaive  wrote:
> 
>> Hi andrija,
>>
>> thanks a lot for your answer.
>>
>> Indeed is absolutely sufficient for me to know that I may change
>> disk_offering_id for a volume. I would assume it is not necessary to shut
>> down/restart the VM or restart management service, but will try tomorrow.
>>
>> I will figure out a suitable path to migrate the volumes to their
>> destination pools and also change the offering to those with the desired
>> tags that way. Absolutely ok for me to do it in two or more steps.
>>
>> Anyone ever changed disk_offering.tags manually?
>>
>> Anyway, happy to see a solution for my task and looking forward to try it
>> out tomorrow.
>>
>> Greetings,
>>
>> Melanie
>>
>> ⁣Gesendet mit BlueMail ​
>>
>> Am 13. Dez. 2018, 17:32, um 17:32, Andrija Panic 
>> schrieb:
>>> Hi Melanie,
>>>
>>> when  moving volume to new storage, when you want to change disk
>>> offering
>>> (or compute offering for ROOT disk...), ACS doesn't allow that - it
>>> lists
>>> only offerings that have same tag as current offering (not good...)
>>>
>>> We have inhouse patch, so that you CAN do that, by making sure to list
>>> all
>>> offergins that have TAG that matches the TAG of the new destination
>>> pool of
>>> the volume (hope Im clear here).
>>>
>>> All volumes read tag from their offering - so just either change
>>> disk_offering_id filed for each moved/migrated volume  to point to same
>>> sized offering on new storage - and then normally change it once more
>>> via
>>> UI to a new once etc - or manualy change to smaller disk offering (DB
>>> edit)
>>> and later via UI/API to correct (same size) disk offering (or bigger if
>>> you
>>> want to really resize)
>>>
>>> I can try to share a patch in a non-developer, copy/paste way - in case
>>> you
>>> want to patch your ACS to support this (as explained at the begining of
>>> the
>>> email...)
>>>
>>> Hope that helps
>>>
>>> Cheers
>>>
>>> On Thu, 13 Dec 2018 at 13:50, Melanie Desaive
>>> 
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> we are currently reorganizing our SAN Setup and I would like to
>>>> introduce new storage tags on my existing volumes.
>>>>
>>>> I was naively assuming to simply change the tags or offering by GUI
>>> or
>>>> API calls.
>>>>
>>>> Does not seem to work. Only way to change the tags, seems to be by
>>> using
>>>> a new disk offering, which is denied, when the tags between old and
>>> new
>>>> offering differ. :( Or am I missing something?
>>>>
>>>> I had a look into the cloud database, and the storage tags, seem to
>>> be
>>>> only stored in
>>>>
>>>>   disk_offering.tags
>>>> and
>>>>   storage_pool_tags.tag
>>>>
>>>> Would it be a valid option for me to update disk_offering.tags by SQL
>>> to
>>>> the desired value or could that break some deeper logic?
>>>>
>>>> Or is there even a better wa

Re: How to Reorganise Storage Tags

2018-12-13 Thread Melanie Desaive
Hi Andrija,

thanks a lot for your answer.

Indeed, it is absolutely sufficient for me to know that I may change
disk_offering_id for a volume. I would assume it is not necessary to shut
down/restart the VM or restart the management service, but I will try tomorrow.

I will figure out a suitable path to migrate the volumes to their destination
pools and also change the offering to one with the desired tags that way. It
is absolutely OK for me to do it in two or more steps.

Anyone ever changed disk_offering.tags manually?

Anyway, happy to see a solution for my task and looking forward to try it out 
tomorrow.

Greetings,

Melanie

Sent with BlueMail

On 13 Dec 2018, 17:32, Andrija Panic wrote:
>Hi Melanie,
>
>when  moving volume to new storage, when you want to change disk
>offering
>(or compute offering for ROOT disk...), ACS doesn't allow that - it
>lists
>only offerings that have same tag as current offering (not good...)
>
>We have inhouse patch, so that you CAN do that, by making sure to list
>all
>offerings that have TAG that matches the TAG of the new destination
>pool of
>the volume (hope Im clear here).
>
>All volumes read tag from their offering - so just either change
>disk_offering_id field for each moved/migrated volume  to point to same
>sized offering on new storage - and then normally change it once more
>via
>UI to a new once etc - or manualy change to smaller disk offering (DB
>edit)
>and later via UI/API to correct (same size) disk offering (or bigger if
>you
>want to really resize)
>
>I can try to share a patch in a non-developer, copy/paste way - in case
>you
>want to patch your ACS to support this (as explained at the begining of
>the
>email...)
>
>Hope that helps
>
>Cheers
>
>On Thu, 13 Dec 2018 at 13:50, Melanie Desaive
>
>wrote:
>
>> Hi all,
>>
>> we are currently reorganizing our SAN Setup and I would like to
>> introduce new storage tags on my existing volumes.
>>
>> I was naively assuming to simply change the tags or offering by GUI
>or
>> API calls.
>>
>> Does not seem to work. Only way to change the tags, seems to be by
>using
>> a new disk offering, which is denied, when the tags between old and
>new
>> offering differ. :( Or am I missing something?
>>
>> I had a look into the cloud database, and the storage tags, seem to
>be
>> only stored in
>>
>>   disk_offering.tags
>> and
>>   storage_pool_tags.tag
>>
>> Would it be a valid option for me to update disk_offering.tags by SQL
>to
>> the desired value or could that break some deeper logic?
>>
>> Or is there even a better way to change the storage tags for existing
>> volumes. (With or without downtime for the VMs)
>>
>> Looking forward to any advice!
>>
>> Greetings,
>>
>> Melanie
>> --
>> --
>>
>> Heinlein Support GmbH
>> Linux: Akademie - Support - Hosting
>>
>> http://www.heinlein-support.de
>> Tel: 030 / 40 50 51 - 0
>> Fax: 030 / 40 50 51 - 19
>>
>> Zwangsangaben lt. §35a GmbHG:
>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
>>
>>
>
>--
>
>Andrija Panić


How to Reorganise Storage Tags

2018-12-13 Thread Melanie Desaive
Hi all,

we are currently reorganizing our SAN setup and I would like to
introduce new storage tags on my existing volumes.

I was naively assuming I could simply change the tags or offering via GUI or
API calls.

That does not seem to work. The only way to change the tags seems to be by
using a new disk offering, which is denied when the tags between the old and
new offering differ. :( Or am I missing something?

I had a look into the cloud database, and the storage tags, seem to be
only stored in

  disk_offering.tags
and
  storage_pool_tags.tag
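
For reference, read-only queries along these lines are what I used for that
check (assuming the standard column names of the cloud database):

  -- sketch: show the tags currently set on offerings and storage pools
  select id, name, tags from disk_offering where removed is null;
  select pool_id, tag from storage_pool_tags;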

Would it be a valid option for me to update disk_offering.tags via SQL to
the desired value, or could that break some deeper logic?

Or is there even a better way to change the storage tags for existing
volumes (with or without downtime for the VMs)?

Looking forward to any advice!

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





IOPS limitation with XenServer as hypervisor

2018-11-26 Thread Melanie Desaive
Hi all,

Do I get it right that there is no way to limit IOPS per volume with
XenServer as the hypervisor? (Using ACS 4.11.)

I tried the settings to limit I/O bandwidth and IOPS per volume on the
hypervisor side with XenServer, and only the bandwidth limitation seems
to have an effect. It seems to me that this is not supported on the
XenServer side at all. Is that correct?

See:
https://bugs.xenserver.org/browse/XSO-580
https://github.com/xapi-project/blktap/issues/241

Are those features working with KVM?

Greetings, Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


Re: Next CloudStack EU user group date

2018-07-09 Thread Melanie Desaive
Hi Ivan,

Sven is a long-time member of the German CloudStack user group, and I expect
him to have a meeting in Leipzig well organized. Our user group here in
Germany is quite an active bunch of people, both competent and open-minded.
So I am looking forward to joining a meetup in Leipzig!

Greetings,

Melanie




Am 09.07.2018 um 13:38 schrieb Ivan Kudryavtsev:
> Hi, Sven, Great! I would love to join if the meetup happens. If the dates
> can be established it will work for me, because visa is required and it
> takes time to apply and get approval...
> 
> Mon, 9 Jul 2018, 18:34 Sven Vogel :
> 
>> Hi Ivan,
>>
>> i would offer our Location in Germany, Leipzig by EWERK
>>
>> www.ewerk.com
>>
>> https://goo.gl/maps/PMQgXcJ73ZC2
>>
>> Greetings
>>
>> Sven Vogel
>>
>> __
>>
>>
>> Sven Vogel
>> Cloud Solutions Architect
>>
>>
>> EWERK RZ GmbH
>> Brühl 24, D-04109 Leipzig
>> P +49 341 42649 - 11
>> F +49 341 42649 - 18
>> s.vo...@ewerk.com
>> www.ewerk.com
>>
>>
>> Geschäftsführer:
>> Dr. Erik Wende, Hendrik Schubert, Frank Richter, Gerhard Hoyer
>> Registergericht: Leipzig HRB 17023
>>
>>
>> Zertifiziert nach:
>> ISO/IEC 27001:2013
>> DIN EN ISO 9001:2015
>> DIN ISO/IEC 2-1:2011
>>
>>
>> EWERK-Blog | LinkedIn | Xing | Twitter | Facebook
>>
>>
>>
>> Am Samstag, den 07/07/2018 um 07:10 schrieb Ivan Kudryavtsev:
>>
>>
>> Hello, guys.
>>
>> Do you have an ideas about the next CS EU User Group meetup date and
>> location? Would like to participate, so want arrange my plans.
>>
> 

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


Re: com.cloud.agent.api.CheckRouterCommand timeout

2018-06-21 Thread Melanie Desaive



Am 21.06.2018 um 17:08 schrieb Daan Hoogland:
> makes sense, well let's hope all breaks soon ;)

I am sure it will break! :D

And then I will get back to you with more questions!

Thanks a lot for taking the time!

> 
> On Thu, Jun 21, 2018 at 2:15 PM, Melanie Desaive <
> m.desa...@heinlein-support.de> wrote:
> 
>> Hi Daan,
>>
>> Am 21.06.2018 um 15:29 schrieb Daan Hoogland:
>>> Melanie, attachments get deleted for this list. Your assumption for the
>>> comm path is right for xen. Did you try and execute the script as it is
>>> called by the proxy script from the host? and capture the return? We had
>> a
>>> bad problem with getting the template version in the past on xen, this
>>> might be similar. That was due to processing of the returned string in
>> the
>>> script.
>>
>> I called both stages of the script manually but at at time, when all was
>> working as expected and the routers where back to MASTER and BACKUP.
>>
>> Looked like:
>>
>> [root@acs-compute-5 ~]# /opt/cloud/bin/router_proxy.sh checkrouter.sh
>> 169.254.1.178
>> Status: BACKUP
>>
>> root@r-2595-VM:~# /opt/cloud/bin/checkrouter.sh
>> Status: BACKUP
>>
>>
>>>
>>> On Thu, Jun 21, 2018 at 1:16 PM, Melanie Desaive <
>>> m.desa...@heinlein-support.de> wrote:
>>>
>>>> Hi Daan,
>>>>
>>>> thanks for your reply.
>>>>
>>>> The latest occurance of our VRs going to UNKNOWN did resolve 24 hours
>>>> after it had occured. Nevertheless I would appreciate some insight into
>>>> how the checkRouter command is handled, as I expect the problem to come
>>>> back again.
>>>> Am 21.06.2018 um 10:39 schrieb Daan Hoogland:
>>>>> Melanie, this depends a bit on the type of hypervisor. The command
>>>> executes
>>>>> the checkrouter.sh script on the virtual router if it reaches it, but
>> it
>>>>> seems your problem is before that. I would look at the network first
>> and
>>>>> follow the path that the execution takes for your hypervisortype.
>>>>
>>>> With Stephans help I figured out the following guess for the path of
>>>> connections for the checkrouter command. Could someone please correct
>>>> me, if my guess is not correct. ;)
>>>>
>>>>  x Management Nodes connects to XenServer hypervisor host via management
>>>> network on port 22 by SSH
>>>>  x On hypervisor host the wrapper script
>>>> "/opt/cloud/bin/router_proxy.sh" is used to call scripts on system VMs
>>>> via link-local IP and port 3922
>>>>  x On the VR the script "/opt/cloud/bin/checkrouter.sh" does the actual
>>>> check.
>>>>
>>>> In our case the API call times out with log messages
>>>>  x Operation timed out: Commands 1063975411966525473 to Host 29 timed
>>>> out after 60
>>>>  x Unable to update router r-2595-VM's status
>>>>  x Redundant virtual router (name: r-2595-VM, id: 2595)  just switch
>>>> from BACKUP to UNKNOWN
>>>>
>>>> To me it seems that this is a timeout that occurs when ACS management is
>>>> waitig for the API call to return. At what stage (management host <->
>>>> virtualization host) or (virutalization host <-> VR> the answer is
>>>> delayed is unclear to me. (SSH Login from virtualization host to VR via
>>>> link-local is working all the time)
>>>>
>>>> And it is unclear to me, why both VRs of the respective network stay in
>>>> UNKNOWN for 24 hours, are accessible via link-local but come back
>>>> immedately after a reboot.
>>>>
>>>> I am happy for any suggestions or explanations in this topic and will
>>>> investigate further as soon, as the problem comes back again.
>>>>
>>>> A portion of our management log for the latest occurance of the problem
>>>> is attached to this email.
>>>>
>>>> Greetings,
>>>>
>>>> Melanie
>>>>
>>>>>
>>>>> On Wed, Jun 20, 2018 at 1:53 PM, Melanie Desaive <
>>>>> m.desa...@heinlein-support.de> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> we have a recurring problem with our virtual routers. By the log
>>>>>> messages it seems that com.cloud.agent.api.CheckRouterCommand runs
>>

Re: com.cloud.agent.api.CheckRouterCommand timeout

2018-06-21 Thread Melanie Desaive
Hi Daan,

Am 21.06.2018 um 15:29 schrieb Daan Hoogland:
> Melanie, attachments get deleted for this list. Your assumption for the
> comm path is right for xen. Did you try and execute the script as it is
> called by the proxy script from the host? and capture the return? We had a
> bad problem with getting the template version in the past on xen, this
> might be similar. That was due to processing of the returned string in the
> script.

I called both stages of the script manually, but at a time when everything
was working as expected and the routers were back to MASTER and BACKUP.

Looked like:

[root@acs-compute-5 ~]# /opt/cloud/bin/router_proxy.sh checkrouter.sh
169.254.1.178
Status: BACKUP

root@r-2595-VM:~# /opt/cloud/bin/checkrouter.sh
Status: BACKUP


> 
> On Thu, Jun 21, 2018 at 1:16 PM, Melanie Desaive <
> m.desa...@heinlein-support.de> wrote:
> 
>> Hi Daan,
>>
>> thanks for your reply.
>>
>> The latest occurance of our VRs going to UNKNOWN did resolve 24 hours
>> after it had occured. Nevertheless I would appreciate some insight into
>> how the checkRouter command is handled, as I expect the problem to come
>> back again.
>> Am 21.06.2018 um 10:39 schrieb Daan Hoogland:
>>> Melanie, this depends a bit on the type of hypervisor. The command
>> executes
>>> the checkrouter.sh script on the virtual router if it reaches it, but it
>>> seems your problem is before that. I would look at the network first and
>>> follow the path that the execution takes for your hypervisortype.
>>
>> With Stephans help I figured out the following guess for the path of
>> connections for the checkrouter command. Could someone please correct
>> me, if my guess is not correct. ;)
>>
>>  x Management Nodes connects to XenServer hypervisor host via management
>> network on port 22 by SSH
>>  x On hypervisor host the wrapper script
>> "/opt/cloud/bin/router_proxy.sh" is used to call scripts on system VMs
>> via link-local IP and port 3922
>>  x On the VR the script "/opt/cloud/bin/checkrouter.sh" does the actual
>> check.
>>
>> In our case the API call times out with log messages
>>  x Operation timed out: Commands 1063975411966525473 to Host 29 timed
>> out after 60
>>  x Unable to update router r-2595-VM's status
>>  x Redundant virtual router (name: r-2595-VM, id: 2595)  just switch
>> from BACKUP to UNKNOWN
>>
>> To me it seems that this is a timeout that occurs when ACS management is
>> waitig for the API call to return. At what stage (management host <->
>> virtualization host) or (virutalization host <-> VR> the answer is
>> delayed is unclear to me. (SSH Login from virtualization host to VR via
>> link-local is working all the time)
>>
>> And it is unclear to me, why both VRs of the respective network stay in
>> UNKNOWN for 24 hours, are accessible via link-local but come back
>> immedately after a reboot.
>>
>> I am happy for any suggestions or explanations in this topic and will
>> investigate further as soon, as the problem comes back again.
>>
>> A portion of our management log for the latest occurance of the problem
>> is attached to this email.
>>
>> Greetings,
>>
>> Melanie
>>
>>>
>>> On Wed, Jun 20, 2018 at 1:53 PM, Melanie Desaive <
>>> m.desa...@heinlein-support.de> wrote:
>>>
>>>> Hi all,
>>>>
>>>> we have a recurring problem with our virtual routers. By the log
>>>> messages it seems that com.cloud.agent.api.CheckRouterCommand runs into
>>>> a timeout and therefore switches to UNKNOWN.
>>>>
>>>> All network traffic through the routers is still working. They can be
>>>> accessed by their link-local IP adresses, and configuration looks good
>>>> at a first sight. But configuration changes through the CloudStack API
>>>> do no longer reach the routers. A reboot fixes the problem.
>>>>
>>>> I would like to investigate a little further but lack understanding
>>>> about how the checkRouter command is trying to access the virtual
>> router.
>>>>
>>>> Could someone point me to some relevant documentation or give a short
>>>> overview how the connection from CS-Management is done and where such an
>>>> timeout could occur?
>>>>
>>>> As background information - the sequence from the management log looks
>>>> kind of this:
>>>>
>>>> ---
>>>>
>>>>  x Every few seconds the com.cloud.agent.

Re: com.cloud.agent.api.CheckRouterCommand timeout

2018-06-21 Thread Melanie Desaive
Hi Daan,

thanks for your reply.

The latest occurrence of our VRs going to UNKNOWN resolved itself 24 hours
after it had occurred. Nevertheless, I would appreciate some insight into
how the checkRouter command is handled, as I expect the problem to come
back again.
Am 21.06.2018 um 10:39 schrieb Daan Hoogland:
> Melanie, this depends a bit on the type of hypervisor. The command executes
> the checkrouter.sh script on the virtual router if it reaches it, but it
> seems your problem is before that. I would look at the network first and
> follow the path that the execution takes for your hypervisortype.

With Stephan's help I figured out the following guess for the path of
connections for the checkrouter command. Could someone please correct
me if my guess is not correct? ;)

 x The management node connects to the XenServer hypervisor host via the
management network on port 22 by SSH.
 x On the hypervisor host the wrapper script
"/opt/cloud/bin/router_proxy.sh" is used to call scripts on system VMs
via the link-local IP and port 3922.
 x On the VR the script "/opt/cloud/bin/checkrouter.sh" does the actual
check.

In our case the API call times out with log messages
 x Operation timed out: Commands 1063975411966525473 to Host 29 timed
out after 60
 x Unable to update router r-2595-VM's status
 x Redundant virtual router (name: r-2595-VM, id: 2595)  just switch
from BACKUP to UNKNOWN

To me it seems that this is a timeout that occurs while ACS management is
waiting for the API call to return. At which stage (management host <->
virtualization host, or virtualization host <-> VR) the answer is
delayed is unclear to me. (SSH login from the virtualization host to the
VR via link-local is working all the time.)

And it is also unclear to me why both VRs of the respective network stay
in UNKNOWN for 24 hours, are accessible via link-local, but come back
immediately after a reboot.

I am happy about any suggestions or explanations on this topic and will
investigate further as soon as the problem comes back.

A portion of our management log for the latest occurrence of the problem
is attached to this email.

Greetings,

Melanie

> 
> On Wed, Jun 20, 2018 at 1:53 PM, Melanie Desaive <
> m.desa...@heinlein-support.de> wrote:
> 
>> Hi all,
>>
>> we have a recurring problem with our virtual routers. By the log
>> messages it seems that com.cloud.agent.api.CheckRouterCommand runs into
>> a timeout and therefore switches to UNKNOWN.
>>
>> All network traffic through the routers is still working. They can be
>> accessed by their link-local IP adresses, and configuration looks good
>> at a first sight. But configuration changes through the CloudStack API
>> do no longer reach the routers. A reboot fixes the problem.
>>
>> I would like to investigate a little further but lack understanding
>> about how the checkRouter command is trying to access the virtual router.
>>
>> Could someone point me to some relevant documentation or give a short
>> overview how the connection from CS-Management is done and where such an
>> timeout could occur?
>>
>> As background information - the sequence from the management log looks
>> kind of this:
>>
>> ---
>>
>>  x Every few seconds the com.cloud.agent.api.CheckRouterCommand returns
>> a state BACKUP or MASTER correctly
>>  x When the problem occurs the log messages change. Some snippets below
>>
>>  x ... Waiting some more time because this is the current command
>>  x ... Waiting some more time because this is the current command
>>  x Could not find exception:
>> com.cloud.exception.OperationTimedoutException in error code list for
>> exceptions
>>  x Timed out on Seq 28-2352567855348137104
>>  x Seq 28-2352567855348137104: Cancelling.
>>  x Operation timed out: Commands 2352567855348137104 to Host 28 timed
>> out after 60
>>  x Unable to update router r-2594-VM's status
>>  x Redundant virtual router (name: r-2594-VM, id: 2594)  just switch
>> from MASTER to UNKNOWN
>>
>>  x Those error messages are now repeated for each following
>> CheckRouterCommand until the virtual router is rebootet
>>
>>
>> Greetings,
>>
>> Melanie
>>
>> --
>> --
>>
>> Heinlein Support GmbH
>> Linux: Akademie - Support - Hosting
>>
>> http://www.heinlein-support.de
>> Tel: 030 / 40 50 51 - 0
>> Fax: 030 / 40 50 51 - 19
>>
>> Zwangsangaben lt. §35a GmbHG:
>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
>>
> 
> 
> 

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


com.cloud.agent.api.CheckRouterCommand timeout

2018-06-20 Thread Melanie Desaive
Hi all,

we have a recurring problem with our virtual routers. By the log
messages it seems that com.cloud.agent.api.CheckRouterCommand runs into
a timeout and therefore switches to UNKNOWN.

All network traffic through the routers is still working. They can be
accessed by their link-local IP addresses, and the configuration looks good
at first sight. But configuration changes through the CloudStack API
no longer reach the routers. A reboot fixes the problem.

I would like to investigate a little further but lack understanding
about how the checkRouter command is trying to access the virtual router.

Could someone point me to some relevant documentation or give a short
overview of how the connection from CS management is made and where such a
timeout could occur?

As background information - the sequence from the management log looks
something like this:

---

 x Every few seconds the com.cloud.agent.api.CheckRouterCommand returns
a state BACKUP or MASTER correctly
 x When the problem occurs the log messages change. Some snippets below

 x ... Waiting some more time because this is the current command
 x ... Waiting some more time because this is the current command
 x Could not find exception:
com.cloud.exception.OperationTimedoutException in error code list for
exceptions
 x Timed out on Seq 28-2352567855348137104
 x Seq 28-2352567855348137104: Cancelling.
 x Operation timed out: Commands 2352567855348137104 to Host 28 timed
out after 60
 x Unable to update router r-2594-VM's status
 x Redundant virtual router (name: r-2594-VM, id: 2594)  just switch
from MASTER to UNKNOWN

 x Those error messages are now repeated for each following
CheckRouterCommand until the virtual router is rebooted


Greetings,

Melanie

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


Re: CloudStack Usage lists expunged Volumes. Is that correct?

2018-05-18 Thread Melanie Desaive
Hi Daan,

Am 18.05.2018 um 10:28 schrieb Daan Hoogland:

> It seems that only 5 of those volumes should have been
> reported between the 16th and the 16th, correct? Can you query for events
> for those volumes in the event table? By the order of events we might be
> able to draw some conclusions.

You asked for a query like the following?

mysql> select description from event where description like '%%' or
description like '%2420%' or description like '%2567%' or description
like '%2719%' or description like '%2920%' or description like '%2528%'
or description like '%2809%' or description like '%2243%' or description
like '%2505%' or description like '%2396%' or description like '%3696%'
or description like '%2239%' or description like '%3174%' or description
like '%3675%' or description like '%2172%' or description like '%2225%'
or description like '%2223%' or description like '%3255%';
Empty set (0.10 sec)

This did not return any records. :(
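
If it helps, I could also query by event type and creation date instead of
matching on the description - something like the following sketch, assuming
the usual type/state/created columns of the event table:

  -- sketch only: filter by volume event type and date instead of description
  select type, state, description, created from event
  where type like 'VOLUME.%' and created >= '2018-05-16' order by created;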

Or are you asking for a different kind of information?

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





CloudStack Usage lists expunged Volumes. Is that correct?

2018-05-17 Thread Melanie Desaive
Hi all,

I am just starting to have a first look at the CloudStack Usage Service in 4.11.

Maybe I am getting something wrong, but to me it seems that the usage
report lists destroyed and expunged volumes. Is that the intended
behaviour?

For an example project it lists the following records:

(rz-admin) :D > list usagerecords startdate=2018-05-16
enddate=2018-05-16 projectid=a5496954-0f3c-4511-b54c-db9aa14ee9ac type=6
filter=description,
count = 18
usagerecord:
++
|  description   |
++
|   Volume Id:  usage time (Template: 471)   |
|   Volume Id: 2420 usage time (Template: 521)   |
| Volume Id: 2567 usage time (DiskOffering: 147) |
| Volume Id: 2719 usage time (DiskOffering: 96)  |
| Volume Id: 2920 usage time (DiskOffering: 95)  |
|   Volume Id: 2528 usage time (Template: 521)   |
|   Volume Id: 2809 usage time (Template: 394)   |
|   Volume Id: 2243 usage time (Template: 467)   |
|   Volume Id: 2505 usage time (Template: 394)   |
|   Volume Id: 2396 usage time (Template: 517)   |
|   Volume Id: 3696 usage time (Template: 521)   |
|   Volume Id: 2239 usage time (Template: 400)   |
| Volume Id: 3174 usage time (DiskOffering: 95)  |
|   Volume Id: 3675 usage time (Template: 521)   |
|   Volume Id: 2172 usage time (Template: 394)   |
|   Volume Id: 2225 usage time (Template: 390)   |
|   Volume Id: 2223 usage time (Template: 471)   |
|   Volume Id: 3255 usage time (Template: 521)   |
++

Querying the IDs in the database I get records for a lot of destroyed and
expunged volumes.

mysql> select distinct id,removed,state from volumes where id= or
id=2420 or id=2567 or id=2719 or id=2920 or id=2528 or id=2809 or
id=2243 or id=2505 or id=2396 or id=3696 or id=2239 or id=3174 or
id=3675 or id=2172 or id=2225 or id=2223 or id=3255 order by id;
+--+-+--+
| id   | removed | state|
+--+-+--+
| 2172 | 2017-08-28 12:37:57 | Destroy  |
|  | 2017-01-30 13:06:40 | Destroy  |
| 2223 | 2017-01-30 12:08:54 | Destroy  |
| 2225 | 2017-01-30 13:06:21 | Destroy  |
| 2239 | 2017-02-09 13:25:26 | Destroy  |
| 2243 | 2017-08-28 12:28:36 | Destroy  |
| 2396 | 2017-05-15 10:13:24 | Destroy  |
| 2420 | 2017-08-28 11:48:57 | Destroy  |
| 2505 | NULL| Ready|
| 2528 | 2017-07-15 08:54:32 | Destroy  |
| 2567 | 2017-08-28 12:27:57 | Destroy  |
| 2719 | 2017-07-15 09:03:36 | Expunged |
| 2809 | 2017-07-25 12:51:50 | Destroy  |
| 2920 | 2017-10-24 12:12:05 | Expunged |
| 3174 | NULL| Ready|
| 3255 | NULL| Ready|
| 3675 | NULL| Ready|
| 3696 | NULL| Ready|
+--+-+--+
18 rows in set (0.00 sec)

I would expect usage to list only the volumes actually provisioned.
Listing "Destroyed" volumes might be expected, but listing "Expunged"
volumes seems wrong to me.

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Re: PV vs HVM guest on XenServer 7.0

2018-04-12 Thread Melanie Desaive
Hi Yiping,


Am 11.04.2018 um 23:30 schrieb Yiping Zhang:

> 
> Why does CloudStack convert just a few rhel 6.x instances into HVM mode, 
> while leaving most in PV mode?
> How would I force them back to PV guests?

To my understanding, CloudStack decides on the virtualization mode
of a VM depending on the value you select under "Details" -> "OS Type".
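
If you want to check which OS type your instances currently have, a query
along these lines should work - only a sketch, assuming the standard
vm_instance and guest_os tables of the cloud database:

  -- sketch: list each non-removed instance with its configured guest OS type
  select vm.instance_name, os.display_name
  from vm_instance vm join guest_os os on vm.guest_os_id = os.id
  where vm.removed is null;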

Greetings,

Melanie

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Can someone reproduce issue on adding NIC to VM on network created through web-UI after upgrade to 4.11?

2018-04-11 Thread Melanie Desaive
Hi all,

on one of our two CloudStack instances we have an issue adding NICs to
VMs after upgrading to 4.11.0.0. On the other instance the problem does
not occur after the upgrade.

Before posting an issue on GitHub I would like to know if someone else
can reproduce the following problem:

Our setup: Advanced Networking, Cloudstack 4.11.0.0 on Ubuntu 14.04 LTS,
MySQL 5.5.59-0ubuntu0.14.04.1

Steps to reproduce issue:

  x Create a new isolated network
  x Wait until network is "implemented"
  x Add NIC on this new network to a VM

=> Textbox: Insufficient capacity when adding NIC to VM[User|i--VM]:
com.cloud.exception.InsufficientAddressCapacityException: Insufficient
address capacityScope=interface com.cloud.dc.DataCenter; id=1
=> No new NIC is added to the VM

For a network created with a CloudMonkey command like the following, NICs
can be added to VMs:

create network displaytext=deleteme-cloudmonkey
name=deleteme-cloudmonkey networkofferingid= zoneid=
projectid= gateway=172.17.1.1 netmask=255.255.252.0
networkdomain=deleteme-cloudmonkey.heinlein-intern.d

Greetings,

Melanie

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Re: Upgrade CloudStack from 4.9.2.0 to 4.11.0

2018-04-06 Thread Melanie Desaive
Hi Dag,

Stephan and I posted the issues we encountered after upgrading to 4.11
on https://github.com/apache/cloudstack/issues.

Those are:

Admin Dashboard System Capacity broken with German Locale #2539
problem adding new shared network NIC to VM "A NIC with this MAC address
exits for network:" #2540
Add "Lets Encrypt CA" Certpath to SSVM Keystore (for cdimage.debian.org)
#2541
CloudStack-Usage Broken after Upgrade from 4.9 to 4.11 #2542
Web-UI creates all isolated Nets with IP range 10.1.1.0/24 #2533
Textbox "Account and projectId can't be specified together" #2543
Password Reset does not work with Isolated Networks with redundant
routers #2544

Still we are very happy with our shiny new 4.11 setup!

Thanks a lot for this great piece of software!

Greetings,

Melanie and Stephan

Am 04.04.2018 um 16:08 schrieb Dag Sonstebo:
> Hi Stephan,
> 
> Thanks for the summary – can you log these as new issues in the new issues 
> tracker https://github.com/apache/cloudstack/issues  please (note not Jira).
> 
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
> 
> On 04/04/2018, 10:39, "Stephan Seitz"  wrote:
> 
> Hi!
> 
> We're currently using XenServer instead of VMware, so I just don't know
> if you really need to build your own packages. Afaik shapeblue's public
> repository has been built with noredist.
> 
> Here's short list (sorry, we didn't report everything to the bugtracker
> by now) of caveats:
> 
> * There's a more precise dashboard (XX.XX% instead of XX%)
> -> Nice, but only works with locale set to EN or C or anything with
> decimalpoints instead of commas :) ... consequently the default
> language of the GUI will also be selected identical to your locale.
> 
> -> Ldap Authentication doesn't work. You need to apply
> https://github.com/apache/cloudstack/pull/2517 to get this working.
> 
> -> Adding a NIC to a running VM (only tested in Advanced Zone,
> Xenserver, Shared Network to add) fails with an "duplicate MAC-address" 
> error. See my post on the ML yesterday.
> 
> -> cloudstack-usage doesn't start since (at least Ubuntu, deb packages)
> the update doesn't clean old libs from /usr/share/cloudstack-
> usage/libs. For us cleanup and reinstall fixed that.
> 
> -> SSVM's java keystore doesn't contain Let's Encrypt Root-CA (maybe
> others are also missing) so don't expect working downloads from
> cdimage.debian.org via https :)
> 
> -> A few nasty popups occur (but can be ignored) in the GUI e.g.
> selecting a project and viewing networks.
> 
> -> A minor documentation bug in the upgrade document: The apt-get.eu
> Repository doesn't contain 4.11 right now. download.cloudstack.org
> does.
> 
> 
> To us none of that problems was a stopper, but your mileage may vary.
> 
> cheers,
> 
> - Stephan
> 
> 
> Am Mittwoch, den 04.04.2018, 11:08 +0200 schrieb Marc Poll Garcia:
> > Hello,
> > 
> > My current infrastructure is Apache Cloudstack 4.9.2 with VMware
> > hosts and
> > the management server on CentOS.
> > 
> > 
> > I'm planning to perform an upgrade from the actual 4.9.2 versión to
> > the
> > latest one.
> > 
> > I found this tutorial from Cloudstack website:
> > 
> > http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html
> > 
> > But i don't know if any of you already did it, and had upgraded the
> > system?
> > I was wondering if anyone had any issues during the execution of the
> > process.
> > 
> > And also if someone can send more info, or another guide to follow or
> > best
> > practice?
> > 
> > We've check it out and found that we need to compile our own
> > cloudstack
> > software because we're using vmware hosts, is it true? any
> > suggestions?
> > 
> > Thanks in advance.
> > 
> > Kind regards.
> > 
> > 
> -- 
> 
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
> 
> http://www.heinlein-support.de
> 
> Tel: 030 / 405051-44
> Fax: 030 / 405051-19
> 
> Zwangsangaben lt. §35a GmbHG: HRB 93818 B / Amtsgericht
> Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein -- Sitz: Berlin
> 
> 
> 
> 
> dag.sonst...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


ACS 4.11 creates isolated Net only with IP Range 10.1.1.0/24

2018-04-04 Thread Melanie Desaive
Hi all,

after upgrading to 4.11 we have the issue that isolated nets created
with the web-UI are always created with the IP range 10.1.1.0/24 - no
matter what values are filled into the fields "Guest Gateway" and "Guest
Netmask".

Creating an isolated network with CloudMonkey works perfectly using the
syntax:

create network displaytext=deleteme-cloudmonkey
name=deleteme-cloudmonkey networkofferingid= zoneid=
projectid= gateway=172.17.1.1 netmask=255.255.252.0
networkdomain=deleteme-cloudmonkey.heinlein-intern.de

Could this be a bug with 4.11? Can someone reproduce this behaviour?

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Re: CloudStack Meetup - Frankfurt, Wednesday 28th of February

2018-02-26 Thread Melanie Desaive
Very nice,

I have just received my train tickets and hotel booking. :)

So I will also join the evening program.

Kind regards,

Melanie

Am 26.02.2018 um 11:58 schrieb Swen - swen.io:
> Hi @all,
> 
> just a reminder for the first German CloudStack Meetup this year in
> Frankfurt. Speeches will be mainly in English, so do not hesitate to visit
> this event even if you are not speaking German!
> 
> https://www.meetup.com/de-DE/german-CloudStack-user-group/events/246861772/
> 
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Redundant router loses VRRP service IP

2017-10-21 Thread Melanie Desaive
Hi all,

I am currently trying to set up an isolated network with redundant
routers in CloudStack 4.9.2, but fail to solve a problem:

Any time I start a virtual machine on the isolated network, the virtual
router in the master role loses its service IP on the internal network.
A simple "service keepalived restart" fixes the IP setup.

/var/log/cloud.log on the respective router shows messages that suggest
the IP is removed on purpose by the script "/opt/cloud/bin/cs/CsAddress.py".

The portion in the log is:

2017-10-21 10:40:44,253  CsHelper.py execute:184 Executing: ip addr show
dev eth0
2017-10-21 10:40:44,265  CsAddress.py is_guest_gateway:657 Checking if
cidr is a gateway for rVPC. IP ==> 10.1.2.1/32 / device ==> eth0
2017-10-21 10:40:44,266  CsAddress.py is_guest_gateway:660 Interface has
the following gateway ==> None
2017-10-21 10:40:44,277  CsAddress.py delete:676 Removed address
10.1.2.1/32 from device eth0
2017-10-21 10:40:44,278  CsAddress.py post_config_change:558 Not able to
setup source-nat for a regular router yet

After looking into CsAddress.py I have the impression that the service
IP is not in the pool of expected IPs for the machine and is therefore
deleted. Maybe I missed some configuration parameter to let CloudStack
know that it should not remove the service IP?

Can someone give some advice?

Greetings,

Melanie

-

Below some data from my configuration that might be helpful:

The network from the API:

melaniedesaive@HS-X201-03 [2001] $ cloudmonkey -p ocl-admin -d json list
networks id=68198cf0-f61f-4dac-9d74-bfa21764717c
projectid=ce960375-6fd2-4e00-add2-9c8a644a24b9 listall=true
{
  "count": 1,
  "network": [
{
  "acltype": "Account",
  "broadcastdomaintype": "Vlan",
  "broadcasturi": "vlan://580",
  "canusefordeploy": true,
  "cidr": "10.1.2.0/24",
  "displaynetwork": true,
  "displaytext": "Netz mit finalem Offering HA expliziter Gateway 2",
  "dns1": "192.168.100.1",
  "dns2": "192.168.100.1",
  "domain": "Temp",
  "domainid": "0a092d9b-b055-4c2f-82e5-4bbd21706273",
  "gateway": "10.1.2.1",
  "id": "68198cf0-f61f-4dac-9d74-bfa21764717c",
  "ispersistent": false,
  "issystem": false,
  "name": "Netz mit finalem Offering HA expliziter Gateway 2",
  "netmask": "255.255.255.0",
  "networkdomain": "meltest.heinlein-intern.de",
  "networkofferingavailability": "Optional",
  "networkofferingconservemode": true,
  "networkofferingdisplaytext": "Offering for Isolated networks with
Source Nat service enabled HA With redundant Routers",
  "networkofferingid": "4aa7e796-d3f0-4696-89ad-708b956ce9c5",
  "networkofferingname":
"DefaultIsolatedNetworkOfferingWithSourceNatServiceHA",
  "physicalnetworkid": "f7a3527c-b5a9-4e04-9d15-5d22fe3c71f9",
  "project": "Mel Diverses",
  "projectid": "ce960375-6fd2-4e00-add2-9c8a644a24b9",
  "related": "68198cf0-f61f-4dac-9d74-bfa21764717c",
  "restartrequired": false,
  "service": [
{
  "capability": [
{
  "canchooseservicecapability": false,
  "name": "RedundantRouter",
  "value": "true"
},
{
  "canchooseservicecapability": false,
  "name": "SupportedSourceNatTypes",
  "value": "peraccount"
}
  ],
  "name": "SourceNat"
},
{
  "name": "PortForwarding"
},
{
  "capability": [
{
  "canchooseservicecapability": false,
  "name": "AllowDnsSuffixModification",
  "value": "true"
}
  ],
  "name": "Dns"
},
{
  "name": "StaticNat"
},
{
  "name": "UserData"
},
{
  "capability": [
{
  "canchooseservicecapability": false,
  "name": "VpnTypes",
  "value": "removeaccessvpn"
},
{
  "canchooseservicecapability": false,
  "name": "SupportedVpnTypes",
  "value": "pptp,l2tp,ipsec"
}
  ],
  "name": "Vpn"
},
{
  "capability": [
{
  "canchooseservicecapability": false,
  "name": "MultipleIps",
  "value": "true"
},
{
  "canchooseservicecapability": false,
  "name": "SupportedTrafficDirection",
  "value": "ingress, egress"
},
{
  "canchooseservicecapability": false,
  "name": "SupportedProtocols",
  "value": "tcp,udp,icmp"
},
{
  "canchooseservicecapability": false,
  "name": "TrafficStatistics",
  "value": "per public ip"
},
{
  "canchooseservicecapability": false,
  "name": 

Re: How to stop running storage migration?

2017-05-14 Thread Melanie Desaive
Hi Swen,

The case I had yesterday was the following:

I am starting a VM and CloudStack decides that there are not enough
resources in the current cluster and starts migrating the volumes to a
different storage. Instead of waiting for the storage migration to
finish, I would prefer to abort the storage migration, make some room in
the old cluster and start the VM there. Is there a chance to do this?

I am using XenServer 6.5.

Greetings,

Melanie

Am 14.05.2017 um 05:24 schrieb S. Brüseke - proIO GmbH:
> Hi Melanie,
> 
> are you talking about a running VM or just a volume you are migrating? What 
> hypervisor are you using?
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen Brüseke
> 
> -Ursprüngliche Nachricht-
> Von: Melanie Desaive [mailto:m.desa...@heinlein-support.de] 
> Gesendet: Samstag, 13. Mai 2017 09:46
> An: users@cloudstack.apache.org
> Betreff: How to stop running storage migration?
> 
> Hi all,
> 
> does anyone know a way to abort a running storage migration without risking 
> corrupted data?
> 
> That information could help me a lot!
> 
> Greetings,
> 
> Melanie
> --
> --
> 
> Heinlein Support GmbH
> Linux: Akademie - Support - Hosting
> 
> http://www.heinlein-support.de
> Tel: 030 / 40 50 51 - 0
> Fax: 030 / 40 50 51 - 19
> 
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
> 
> 
> 
> - proIO GmbH -
> Geschäftsführer: Swen Brüseke
> Sitz der Gesellschaft: Frankfurt am Main
> 
> USt-IdNr. DE 267 075 918
> Registergericht: Frankfurt am Main - HRB 86239
> 
> Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte 
> Informationen. 
> Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich 
> erhalten haben, 
> informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. 
> Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail sind nicht 
> gestattet. 
> 
> This e-mail may contain confidential and/or privileged information. 
> If you are not the intended recipient (or have received this e-mail in error) 
> please notify 
> the sender immediately and destroy this e-mail.  
> Any unauthorized copying, disclosure or distribution of the material in this 
> e-mail is strictly forbidden. 
> 
> 

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





How to stop running storage migration?

2017-05-13 Thread Melanie Desaive
Hi all,

does anyone know a way to abort a running storage migration without
risking corrupted data?

That information could help me a lot!

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Re: Can it happen that XenServer shuts down a VM autonomously?

2017-03-29 Thread Melanie Desaive
Hi Dag,

thanks for you suggestion!

> Have you ruled out that the VM might have been shut down from within the VM 
> guest OS itself? 

And yes of course! Just checked with a test VM and the log messages look
exactly like those from Sunday. Stupid me. :/

Will investigate further in this direction.

Greetings,

Melanie

> 
> On 28/03/2017, 14:48, "Melanie Desaive" <m.desa...@heinlein-support.de> wrote:
> 
> Hi all,
> 
> on Sunday we had an issue, because one VM was unexpetedly down. After
> starting the VM in ACS everything worked fine again.
> 
> After investigating I found out the following situation:
> 
> The ACS Logs point out, that ACS received a poweroff report while the VM
> was expected to be running:
> 
> ---
> 2017-03-26 13:03:38,468 INFO [c.c.v.VirtualMachineManagerImpl]
> (DirectAgentCronJob-260:ctx-905a294a) VM i-86-1412-VM is at Running and
> we received a power-off report while there is no pending jobs on it
> 2017-03-26 13:03:38,470 DEBUG [c.c.a.t.Request]
> (DirectAgentCronJob-260:ctx-905a294a) Seq 28-7473723581621346009:
> Sending { Cmd , MgmtId: 57177340185274, via: 28(acs-compute-6), Ver: v1,
> Flags: 100011,
> 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":true,"vmName":"i-86-1412-VM","executeInSequence":false,"wait":0}}]
> }
> 2017-03-26 13:03:38,470 DEBUG [c.c.a.t.Request]
> (DirectAgentCronJob-260:ctx-905a294a) Seq 28-7473723581621346009:
> Executing: { Cmd , MgmtId: 57177340185274, via: 28(acs-compute-6), Ver:
> v1, Flags: 100011,
> 
> [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":true,"vmName":"i-86-1412-VM","executeInSequence":false,"wait":0}}]
> }
> 2017-03-26 13:03:38,480 DEBUG [c.c.h.x.r.w.x.CitrixStopCommandWrapper]
> (DirectAgent-291:ctx-9b1e2233) 9. The VM i-86-1412-VM is in Stopping state
> ---
> 
> The xensource Log on the Compute node indicates, that the machine was
> stopped:
> 
> ---
> Mar 26 13:00:10 acs-compute-6 xenopsd:
> [debug|acs-compute-6|1|events|xenops] Received an event on managed VM
> e3bad3f3-c49f-873d-943f-bc8a2af365e0
> Mar 26 13:00:10 acs-compute-6 xenopsd:
> [debug|acs-compute-6|1|queue|xenops] Queue.push ["VM_check_state",
> "e3bad3f3-c49f-873d-943f-bc8a2af365e0"] onto
> e3bad3f3-c49f-873d-943f-bc8a2af365e0:[  ]
> Mar 26 13:00:10 acs-compute-6 xenopsd: [debug|acs-compute-6|10||xenops]
> Queue.pop returned ["VM_check_state",
> "e3bad3f3-c49f-873d-943f-bc8a2af365e0"]
> Mar 26 13:00:10 acs-compute-6 xenopsd:
> [debug|acs-compute-6|10|events|xenops] Task 449410 reference events:
> ["VM_check_state", "e3bad3f3-c49f-873d-943f-bc8a2af365e0"]
> Mar 26 13:00:10 acs-compute-6 xenopsd:
> [debug|acs-compute-6|2|xenstore|xenstore_watch] xenstore unwatch
> /vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
> Mar 26 13:00:10 acs-compute-6 xenstored:  A820862  unwatch
> /vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
> /vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
> Mar 26 13:00:10 acs-compute-6 xenstored:  A820862  unwatch
> /vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
> /vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
> Mar 26 13:00:10 acs-compute-6 xenopsd:
> [debug|acs-compute-6|10|events|xenops] VM.shutdown
> e3bad3f3-c49f-873d-943f-bc8a2af365e0
> ---
> 
> Unfortunately I am not able to explain, how the VM.shutdown was
> triggered on XenServer side. Are there known situations, when a
> XenServer does trigger a VM shutdown autonomously?
> 
> Greetings,
> 
> Melanie
> -- 
> --
> 
> Heinlein Support GmbH
> Linux: Akademie - Support - Hosting
> 
> http://www.heinlein-support.de
> Tel: 030 / 40 50 51 - 0
> Fax: 030 / 40 50 51 - 19
> 
> Zwangsangaben lt. §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein  -- Sitz: Berlin
> 
> 
> 
> 
> dag.sonst...@shapeblue.com 
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>   
>  
> 

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Can it happen that XenServer shuts down a VM autonomously?

2017-03-28 Thread Melanie Desaive
Hi all,

on Sunday we had an issue because one VM was unexpectedly down. After
starting the VM in ACS everything worked fine again.

After investigating I found the following situation:

The ACS logs point out that ACS received a power-off report while the VM
was expected to be running:

---
2017-03-26 13:03:38,468 INFO [c.c.v.VirtualMachineManagerImpl]
(DirectAgentCronJob-260:ctx-905a294a) VM i-86-1412-VM is at Running and
we received a power-off report while there is no pending jobs on it
2017-03-26 13:03:38,470 DEBUG [c.c.a.t.Request]
(DirectAgentCronJob-260:ctx-905a294a) Seq 28-7473723581621346009:
Sending { Cmd , MgmtId: 57177340185274, via: 28(acs-compute-6), Ver: v1,
Flags: 100011,
[{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":true,"vmName":"i-86-1412-VM","executeInSequence":false,"wait":0}}]
}
2017-03-26 13:03:38,470 DEBUG [c.c.a.t.Request]
(DirectAgentCronJob-260:ctx-905a294a) Seq 28-7473723581621346009:
Executing: { Cmd , MgmtId: 57177340185274, via: 28(acs-compute-6), Ver:
v1, Flags: 100011,
[{"com.cloud.agent.api.StopCommand":{"isProxy":false,"checkBeforeCleanup":true,"vmName":"i-86-1412-VM","executeInSequence":false,"wait":0}}]
}
2017-03-26 13:03:38,480 DEBUG [c.c.h.x.r.w.x.CitrixStopCommandWrapper]
(DirectAgent-291:ctx-9b1e2233) 9. The VM i-86-1412-VM is in Stopping state
---

The xensource log on the compute node indicates that the machine was
stopped:

---
Mar 26 13:00:10 acs-compute-6 xenopsd:
[debug|acs-compute-6|1|events|xenops] Received an event on managed VM
e3bad3f3-c49f-873d-943f-bc8a2af365e0
Mar 26 13:00:10 acs-compute-6 xenopsd:
[debug|acs-compute-6|1|queue|xenops] Queue.push ["VM_check_state",
"e3bad3f3-c49f-873d-943f-bc8a2af365e0"] onto
e3bad3f3-c49f-873d-943f-bc8a2af365e0:[  ]
Mar 26 13:00:10 acs-compute-6 xenopsd: [debug|acs-compute-6|10||xenops]
Queue.pop returned ["VM_check_state",
"e3bad3f3-c49f-873d-943f-bc8a2af365e0"]
Mar 26 13:00:10 acs-compute-6 xenopsd:
[debug|acs-compute-6|10|events|xenops] Task 449410 reference events:
["VM_check_state", "e3bad3f3-c49f-873d-943f-bc8a2af365e0"]
Mar 26 13:00:10 acs-compute-6 xenopsd:
[debug|acs-compute-6|2|xenstore|xenstore_watch] xenstore unwatch
/vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
Mar 26 13:00:10 acs-compute-6 xenstored:  A820862  unwatch
/vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
/vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
Mar 26 13:00:10 acs-compute-6 xenstored:  A820862  unwatch
/vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
/vm/e3bad3f3-c49f-873d-943f-bc8a2af365e0/rtc/timeoffset
Mar 26 13:00:10 acs-compute-6 xenopsd:
[debug|acs-compute-6|10|events|xenops] VM.shutdown
e3bad3f3-c49f-873d-943f-bc8a2af365e0
---

Unfortunately I am not able to explain how the VM.shutdown was
triggered on the XenServer side. Are there known situations where a
XenServer triggers a VM shutdown autonomously?

Greetings,

Melanie
-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Long downtimes for VMs through automatically triggered storage migration

2016-10-12 Thread Melanie Desaive
Hi all,

my colleague and I are having a dispute about when CloudStack should
automatically trigger storage migrations and what options we have to
control CloudStack's behavior in terms of storage migrations.

We are operating a setup with two XenServer clusters which are combined
into one pod, each cluster with its own independent SRs of type lvmoiscsi.

Unfortunately we hit a XenServer bug which prevented a few VMs from starting
on any compute node. Any time this bug appeared, CloudStack tried to
start the affected VM successively on each node of the current cluster
and afterwards started a storage migration to the second cluster.

We are using the UserDispersing deployment planner.

The decision of the deployment planner to start the storage migration
was very unfortunate for us, mainly because:
 * We are operating some VMs with big data volumes which were
inaccessible for the time the storage migration was running.
 * The SR on the destination cluster did not even have the capacity to
take all volumes of the big VMs. Still, the migration was triggered.

We would like to have some best-practice advice on how others are
preventing long, unplanned downtimes for VMs with huge data volumes
through automated storage migration.

We discussed the topic and came up with the following questions:
 * Is the described behaviour of the deployment planner intentional?
 * Is it possible to prevent some few VMs with huge storage volumes from
automated storage migration and what would be the best way to achieve
this? Could we use storage or host tags for this purpose?
 * Is it possible to globally prevent the deployment planner from
starting storage migrations?
* Are there global settings to achieve this?
* Would we have to adapt the deployment planner?
 * Do we have to rethink our system architecture and avoid huge data
volumes completely?
 * Was the decision to put two clusters into one pod a bad idea?
 * Are there other solutions to our problem?

We would greatly appreciate any advice in the issue!

Best regards,

Melanie

-- 
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin