RE: CloudStack Confirmed Events - 2023

2023-01-09 Thread S . Brüseke - proIO GmbH
Hello Ivet,

we are interested in some kind of sponsoring. We are in the middle of launching 
our proIO Cloud based on CloudStack, and we have been using some form of CS for 
a long time now in our old cloud infrastructure.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

 
proIO GmbH
Kleyerstr. 79 - 89
D-60326 Frankfurt am Main 
 
Mail: s.brues...@proio.com
Tel:  +(49) (0) 69 739049-15
Fax:  +(49) (0) 69 739049-25
Web:  www.proio.com
 
- Support -
Mail: supp...@proio.com
24h:  +(49) (0) 1805 522 855



-----Original Message-----
From: Ivet Petrova
Sent: Monday, 9 January 2023 13:27
To: users@cloudstack.apache.org; Apache CloudStack Marketing
Subject: CloudStack Confirmed Events - 2023

Hi all,

I am happy to announce that our community has confirmed its participation in 
two events in Q1 and Q2 2023.
We will be joining:
- CloudFest in March
- Kubecon in April

I am looking both for volunteers who would like to join the events and also for 
companies who want to support us.
We need support in the following directions:
- sponsorships to cover booth costs
- sponsorships to cover the costs of swag
- as we will present ACS, we need to bring some marketing materials presenting 
user success stories. I would be happy if companies using ACS contacted me to 
prepare success stories about their use cases.
Ideally I need 3-4 new case studies to present at the events.



Kind regards,


 







RE: KVM with shared storage

2018-02-19 Thread S . Brüseke - proIO GmbH
Thank you guys for your replies!
So the only stable option for KVM and shared storage is to use NFS or CEPH at 
the moment?

@Dag: I am evaluating ScaleIO with KVM and it works great! Easy installation, 
good performance and handling. I installed ScaleIO on the same servers that KVM 
is installed on, because I took a hyper-converged approach. We do not have any 
development skills in our company, but I am really interested in ScaleIO 
drivers. Do you know of any other customers/users of CS who are looking for them too?

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Monday, 19 February 2018 15:39
To: users <users@cloudstack.apache.org>
Subject: Re: KVM with shared storage

From my (production) experience a few years ago, even GFS2 as a clustered file 
system was seriously unstable and led to locks, causing the share to become 100% 
unresponsive, and then you go and fix things any way you can (and this was with 
only 3 nodes!!! accessing the share with LIGHT write IO)... all set up by RH 
themselves back in the day (not cloud, some web project)

Avoid at all costs.

Again, I need to refer to the work of Mike Tutkowski from SolidFire, who 
implemented "online storage migration" from NFS/Ceph to SolidFire - I hope that 
someone (devs) could implement this globally, between NFS storages for a start, 
if there is any interest. You can very easily do this migration manually with 
virsh by editing the XML to reference the new volumes on the new storage; then 
you can live migrate the VM while it's running with zero downtime... (so no 
shared storage needed here, but again, live migration works (with storage))

Cheers

On 19 February 2018 at 15:21, Dag Sonstebo <dag.sonst...@shapeblue.com>
wrote:

> Hi Swen,
>
> +1 for Simon’s comments. I did a fair bit of CLVM POC work a year ago and
> ended up concluding it’s just not fit for purpose, it’s too unstable
> ended up concluding it’s just not fit for purpose, it’s too unstable 
> and STONITH was a challenge to say the least. As you’ve seen from our 
> blog article we have had a project in the past using OCFS2 – but it is 
> again a challenge to set up and to get running smoothly + you need to 
> go cherry picking modules which can be tricky since you are into Oracle 
> territory.
>
> For smooth running I recommend sticking to NFS if you can – if not 
> take a look at CEPH or gluster, or ScaleIO as Simon suggested.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 19/02/2018, 13:45, "Simon Weller" <swel...@ena.com.INVALID> wrote:
>
> So CLVM used to be supported (and probably still works). I'd 
> highly recommend you avoid using a clustered file system if you can 
> avoid it. See if your SAN supports the ability to do exclusive locking.
>
> There was chatter on the list a couple of years ago about a 
> scaleio storage driver, but I'm not sure whether there was any 
> movement on that or not.
>
>
> - Si
>
>
> 
> From: S. Brüseke - proIO GmbH <s.brues...@proio.com>
> Sent: Monday, February 19, 2018 4:57 AM
> To: users@cloudstack.apache.org
> Subject: KVM with shared storage
>
> Hi @all,
>
> I am evaluating KVM for our Cloudstack installation. We are using 
> XenServer at the moment. We want to use shared storage so we can do 
> live migration of VMs.
> For our KVM hosts I am using CentOS7 with standard kernel as OS 
> and for shared storage I am evaluating ScaleIO. KVM and ScaleIO 
> installation is working great and I can map a volume to all KVM hosts.
> I end up with a block storage device (/dev/scinia) on all KVM hosts.
>
> As far as I understand I need a cluster filesystem on this device 
> so I can use it on all KVM hosts simultaneously. So my options are 
> ocfs2, gfs2 and clvm. As documented clvm is not supported by 
> Cloudstack and therefore not really an option.
> I found this shapeblue howto (http://www.shapeblue.com/
> installing-and-configuring-an-ocfs2-clustered-file-system/) for ocfs2, 
> but this is for CentOS6 and I am unable to find a working ocfs2 module 
> for
> CentOS7 standard kernel.
>
>
>
>
>

KVM with shared storage

2018-02-19 Thread S . Brüseke - proIO GmbH
Hi @all,

I am evaluating KVM for our CloudStack installation. We are using XenServer at 
the moment. We want to use shared storage so we can do live migration of VMs.
For our KVM hosts I am using CentOS 7 with the standard kernel as the OS, and 
for shared storage I am evaluating ScaleIO. The KVM and ScaleIO installation is 
working great and I can map a volume to all KVM hosts.
I end up with a block storage device (/dev/scinia) on all KVM hosts.

As far as I understand, I need a cluster filesystem on this device so I can use 
it on all KVM hosts simultaneously. So my options are OCFS2, GFS2 and CLVM. As 
documented, CLVM is not supported by CloudStack and is therefore not really an 
option.
I found this ShapeBlue howto 
(http://www.shapeblue.com/installing-and-configuring-an-ocfs2-clustered-file-system/)
for OCFS2, but it is for CentOS 6 and I am unable to find a working OCFS2 
module for the CentOS 7 standard kernel.

So my question is: how are you implementing shared storage with KVM hosts in 
your CloudStack installation if it is not NFS? Thanks for your help!
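For illustration, one way to do this without NFS is CloudStack's 
SharedMountPoint primary storage type for KVM, which expects the same 
filesystem mounted at the same path on every host in the cluster. A minimal 
sketch, assuming an OCFS2 filesystem on /dev/scinia with /mnt/scaleio as the 
common mount point (the o2cb cluster configuration is omitted; all names are 
illustrative):

mkfs.ocfs2 -L scaleio /dev/scinia        # once, from a single host
mkdir -p /mnt/scaleio                    # on every KVM host
mount -t ocfs2 /dev/scinia /mnt/scaleio  # on every KVM host
# then add the primary storage in CloudStack with protocol "SharedMountPoint"
# and path /mnt/scaleio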

Mit freundlichen Grüßen / With kind regards,

Swen







RE: XenServer Licensing Change - Switch Hypervisors?

2018-02-03 Thread S . Brüseke - proIO GmbH
No concern at all, just a lack of knowledge on my side. ;-) How do you mount 
the shared ScaleIO volume on the KVM host? Do you just mount it and use a 
cluster filesystem on it?
 
 
Mit freundlichen Grüßen / With kind regards,
 
Swen
 
From: Alessandro Caviglione [mailto:c.alessan...@gmail.com]
Sent: Saturday, 3 February 2018 12:41
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brues...@proio.com>
Subject: Re: XenServer Licensing Change - Switch Hypervisors?
 
We too were evaluating ScaleIO on XS; now we are moving to KVM or Hyper-V, but 
always with ScaleIO. I don't understand your concern about ScaleIO on KVM.
 
 
On Sat, Feb 3, 2018 at 12:32 PM, S. Brüseke - proIO GmbH <s.brues...@proio.com> 
wrote:
Hi Alessandro,

we are in the same situation with using XenServer 6.5. Besides no longer being 
patched, XS 6.5 has poor memory performance. We are in the middle of evaluating 
KVM with ScaleIO to get rid of XenServer in the near future. I am stuck right 
now on how to mount a ScaleIO volume to a KVM host.
Migrating all VMs will be a pain in the ass, but XenServer is not the way to go 
after all the decisions Citrix made in the past with its hypervisor.

We are more Linux-oriented, so Hyper-V is not an option for us, and as far as I 
know you cannot mix Hyper-V with other hypervisors in the same zone in 
CloudStack. But I am not sure if this limitation is still present in the latest 
stable release.

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Alessandro Caviglione [mailto:c.alessan...@gmail.com]
Sent: Saturday, 3 February 2018 01:10
To: users@cloudstack.apache.org
Subject: Re: XenServer Licensing Change - Switch Hypervisors?

Hi all,
I'm also trying to find a solution. Our infrastructure is based on XS 6.5, 
which will not be patched for Meltdown and Spectre, so we're considering 
creating a new cluster based on a different hypervisor instead of upgrading to 
XS 7.2.
In fact, I think everyone here works for a company that has an MS SPLA 
agreement in place, so my question is: since we're already paying for the MS 
Datacenter license, what do you think about Hyper-V under CloudStack?
I'm trying to compare it against KVM...


Thank you.


On Tue, Jan 9, 2018 at 3:18 AM, Pierre-Luc Dion <pd...@cloudops.com> wrote:

> Hi Dingo,
>
> That's an interesting answer to recent citrix licensing change for
> xenserver, I'll definitely keep an eye on this project!
>
> thanks!
>
>
> *Pierre-Luc DION*
> Architecte de Solution Cloud | Cloud Solutions Architect t
> 855.652.5683
>
> *CloudOps* Votre partenaire infonuagique* | *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6 w cloudops.com *|* tw
> @CloudOps_
>
> On Mon, Jan 8, 2018 at 10:13 AM, Nux! <n...@li.nux.ro> wrote:
>
> > Good luck, Sean. It should be doable.
> > If you're buying new Intel hardware, make sure it supports the
> > invpcid
> cpu
> > flag, or buy AMD Epyc.
> > See my other recent email on the list about performance implications
> > of Meltdown.
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > - Original Message -
> > > From: "Sean Lair" <sl...@ippathways.com>
> > > To: "users" <users@cloudstack.apache.org>
> > > Sent: Sunday, 7 January, 2018 18:20:11
> > > Subject: RE: XenServer Licensing Change - Switch Hypervisors?
> >
> > > Thanks for the reply Nux, yea we originally chose XenServer over
> > > KVM
> > because KVM
> > > didn't support all of the VM snapshot functionality of XenServer.
> > >
> > > We are evaluating switching to KVM now...  But don't have a good
> > > way of
> > moving
> > > customers over from XenServer host to KVM hosts...
> > >
> > > We are on XenServer 6.5 and with the new Spectre and Meltdown
> > vulnerabilities
> > > not being patched in 6.5...  We may accelerate the move to KVM.
> > >
> > >
> > > -Original Message-
> > > From: Nux! [mailto:n...@li.nux.ro]
> > > Sent: Friday, January 5, 2018 4:22 PM
> > > To: users <users@cloudstack.apache.org>
> > > Subject: Re: XenServer Licensing Change - Switch Hypervisors?
> > >
> > > If you have expertise with XenServer and don't mind paying, then
> > > it's
> > not a bad
> > > direction to follow. It's a nice HV.
> > > On the long term I think KVM will be a much better solution though.
> > >
> > > --
> > > Sent from the Delta quadrant using Borg technology!
> > >
> > > Nux!
> > > www.nux.ro
> > >
> > > - Original Message -

RE: XenServer Licensing Change - Switch Hypervisors?

2018-02-03 Thread S . Brüseke - proIO GmbH
Hi Alessandro,

we are in the same situation with using XenServer 6.5. Besides no longer being 
patched, XS 6.5 has poor memory performance. We are in the middle of evaluating 
KVM with ScaleIO to get rid of XenServer in the near future. I am stuck right 
now on how to mount a ScaleIO volume to a KVM host.
Migrating all VMs will be a pain in the ass, but XenServer is not the way to go 
after all the decisions Citrix made in the past with its hypervisor.

We are more Linux-oriented, so Hyper-V is not an option for us, and as far as I 
know you cannot mix Hyper-V with other hypervisors in the same zone in 
CloudStack. But I am not sure if this limitation is still present in the latest 
stable release.

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Alessandro Caviglione [mailto:c.alessan...@gmail.com]
Sent: Saturday, 3 February 2018 01:10
To: users@cloudstack.apache.org
Subject: Re: XenServer Licensing Change - Switch Hypervisors?

Hi all,
I'm also trying to find a solution. Our infrastructure is based on XS 6.5, 
which will not be patched for Meltdown and Spectre, so we're considering 
creating a new cluster based on a different hypervisor instead of upgrading to 
XS 7.2.
In fact, I think everyone here works for a company that has an MS SPLA 
agreement in place, so my question is: since we're already paying for the MS 
Datacenter license, what do you think about Hyper-V under CloudStack?
I'm trying to compare it against KVM...


Thank you.


On Tue, Jan 9, 2018 at 3:18 AM, Pierre-Luc Dion  wrote:

> Hi Dingo,
>
> That's an interesting answer to recent citrix licensing change for 
> xenserver, I'll definitely keep an eye on this project!
>
> thanks!
>
>
> *Pierre-Luc DION*
> Architecte de Solution Cloud | Cloud Solutions Architect t 
> 855.652.5683
>
> *CloudOps* Votre partenaire infonuagique* | *Cloud Solutions Experts
> 420 rue Guy *|* Montreal *|* Quebec *|* H3J 1S6 w cloudops.com *|* tw 
> @CloudOps_
>
> On Mon, Jan 8, 2018 at 10:13 AM, Nux!  wrote:
>
> > Good luck, Sean. It should be doable.
> > If you're buying new Intel hardware, make sure it supports the 
> > invpcid
> cpu
> > flag, or buy AMD Epyc.
> > See my other recent email on the list about performance implications 
> > of Meltdown.
> >
> > --
> > Sent from the Delta quadrant using Borg technology!
> >
> > Nux!
> > www.nux.ro
> >
> > - Original Message -
> > > From: "Sean Lair" 
> > > To: "users" 
> > > Sent: Sunday, 7 January, 2018 18:20:11
> > > Subject: RE: XenServer Licensing Change - Switch Hypervisors?
> >
> > > Thanks for the reply Nux, yea we originally chose XenServer over 
> > > KVM
> > because KVM
> > > didn't support all of the VM snapshot functionality of XenServer.
> > >
> > > We are evaluating switching to KVM now...  But don't have a good 
> > > way of
> > moving
> > > customers over from XenServer host to KVM hosts...
> > >
> > > We are on XenServer 6.5 and with the new Spectre and Meltdown
> > vulnerabilities
> > > not being patched in 6.5...  We may accelerate the move to KVM.
> > >
> > >
> > > -Original Message-
> > > From: Nux! [mailto:n...@li.nux.ro]
> > > Sent: Friday, January 5, 2018 4:22 PM
> > > To: users 
> > > Subject: Re: XenServer Licensing Change - Switch Hypervisors?
> > >
> > > If you have expertise with XenServer and don't mind paying, then 
> > > it's
> > not a bad
> > > direction to follow. It's a nice HV.
> > > On the long term I think KVM will be a much better solution though.
> > >
> > > --
> > > Sent from the Delta quadrant using Borg technology!
> > >
> > > Nux!
> > > www.nux.ro
> > >
> > > - Original Message -
> > >> From: "Sean Lair" 
> > >> To: "users" 
> > >> Sent: Wednesday, 3 January, 2018 00:53:32
> > >> Subject: XenServer Licensing Change - Switch Hypervisors?
> > >
> > >> It looks like XenServer 7.3 will no longer have the following 
> > >> features in the Free Edition.  Is anyone considering moving from 
> > >> Free to Standard Edition or possibly to another hypervisor (like 
> > >> KVM) for
> their
> > >> CloudStack environment?
> > >>
> > >> Thoughts?  We are looking more at KVM at this point, any feature 
> > >> gaps we should be aware of?
> > >>
> > >> Free Edition Changes
> > >>
> > >> -  Limited to up to 3 hosts per cluster
> > >>
> > >> -  No Pool High-Availability
> > >>
> > >> -  No Dynamic Memory Control (DMC)
> > >>
> > >> https://www.citrix.com/content/dam/citrix/en_us/
> documents/product-over
> > >> view/citrix-xenserver-feature-matrix.pdf
> > >>
> > >> Thanks
> > >> Sean
> >
>



RE: RE: RE: KVM storage cluster

2018-02-02 Thread S . Brüseke - proIO GmbH
Hi Andrija,

you are right, of course it is the Samsung PM1633a. I am not sure if this is 
really only RAM. I let the fio command run for more than 30 minutes and the 
IOPS did not drop.
I am using 6 SSDs in my setup, each with 35.000 IOPS random write max, so 
ScaleIO can do 210.000 IOPS (read) at its best. fio shows around 140.000 IOPS 
(read) max. The ScaleIO GUI shows me around 45.000 IOPS (read/write combined) 
per SSD.

Do you have a different fio command I can run?

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Friday, 2 February 2018 16:04
To: users <users@cloudstack.apache.org>
Cc: S. Brüseke - proIO GmbH <s.brues...@proio.com>
Subject: Re: RE: RE: KVM storage cluster

From my extremely short reading on ScaleIO a few months ago, they are utilizing 
RAM or similar for write caching, so basically you write to RAM or some other 
kind of ultra-fast temporary memory (NVMe, etc.) and later it is flushed to the 
durable part of the storage.

I assume it's the 1633a, not the 1663a? -
http://www.samsung.com/semiconductor/ssd/enterprise-ssd/MZILS1T9HEJH/ (?) This 
one can barely do 35K write IOPS per spec... and based on my humble experience 
with Samsung, you can hardly ever reach that specification, even with a locally 
attached SSD and a lot of CPU available... (local filesystem)

So it must be RAM writing for sure... so make sure you saturate the benchmark 
enough that the flushing process kicks in, and the benchmark will then be 
meaningful for when you later have a constant IO load on the cluster.
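For illustration, a longer time-based variation of the fio command from this 
thread, so that any RAM write-back cache has to flush during the measurement 
(the 30-minute duration is an illustrative choice):

fio --name=longtest --readwrite=randrw --rwmixwrite=50 --bs=4k --direct=1 \
    --filename=/dev/scinia --ioengine=libaio --numjobs=4 --iodepth=256 \
    --norandommap --randrepeat=0 --group_reporting --time_based --runtime=1800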

Cheers


On 2 February 2018 at 15:56, Ivan Kudryavtsev <kudryavtsev...@bw-sw.com>
wrote:

> Swen, performance looks awesome, but I still wonder where the magic is
> here, because AFAIK Ceph is not capable of even coming close, yet
> Red Hat bets on it... Might it be that ScaleIO doesn't wait for the
> replication to complete for IO, or that some other hack is used?
>
> On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
> s.brues...@proio.com> wrote:
>
> > Hi Ivan,
> >
> >
> >
> > it is a 50/50 read-write mix. Here is the fio command I used:
> >
> > fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k 
> > --invalidate=1 --group_reporting --direct=1 --filename=/dev/scinia 
> > --time_based
> > --runtime= --ioengine=libaio --numjobs=4 --iodepth=256 
> > --norandommap
> > --randrepeat=0 --exitall
> >
> >
> >
> > Result was:
> >
> > IO Workload 274.000 IOPS
> >
> > 1,0 GB/s transfer
> >
> > Read Bandwidth 536MB/s
> >
> > Read IOPS 137.000
> >
> > Write Bandwidth 536MB/s
> >
> > Write IOPS 137.000
> >
> >
> >
> > If you want me to run a different fio command just send it. My lab 
> > is still running.
> >
> >
> >
> > Any idea how I can mount my ScaleIO volume in KVM?
> >
> >
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> >
> >
> > Swen
> >
> >
> >
> > *From:* Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> > *Sent:* Friday, 2 February 2018 02:58
> > *To:* users@cloudstack.apache.org; S. Brüseke - proIO GmbH <
> > s.brues...@proio.com>
> > *Subject:* Re: RE: KVM storage cluster
> >
> >
> >
> > Hi, Swen. Do you test with direct or cached ops, or buffered ones? Is
> > it a write test or rw with a certain rw percentage? I hardly believe the
> > deployment can do 250k IOs for writing with a single-VM test.
> >
> >
> >
> > On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH" <
> > s.brues...@proio.com> wrote:
> >
> > I am also testing with ScaleIO on CentOS 7 with KVM. With a 3-node
> > cluster where each node has 2x 2TB SSD (Samsung PM1663a) I get
> > 250.000 IOPS when doing a fio test (random 4k).
> > The only problem is that I do not know how to mount the shared
> > volume so that KVM can use it to store VMs on it. Does anyone know how to
> > do it?
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen
> >
> > -----Original Message-----
> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Sent: Thursday, 1 February 2018 22:00
> > To: users <users@cloudstack.apache.org>
> > Subject: Re: KVM storage cluster
> >
> >
> > a bit late, but:
> >
> > - for any IO heavy (medium even...) workload, try to avoid Ceph, no
> > offence; it simply takes a lot of $$$ to make Ceph perform in random
> > IO worlds (imagine, RHEL and vendors provide only reference
> > ar

RE: RE: RE: KVM storage cluster

2018-02-02 Thread S . Brüseke - proIO GmbH
Hi Ivan,

it is a standard installation without any tuning. We are using 2x 10Gbit 
interfaces on all servers. I am not really sure how ScaleIO handles the 
replication at the moment. I do not have any experience with Ceph either, so I 
am unable to compare them.
FYI: If you use 128k blocks instead of 4k, the IOPS drop to 11.000.

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
Sent: Friday, 2 February 2018 15:57
To: S. Brüseke - proIO GmbH <s.brues...@proio.com>
Cc: users@cloudstack.apache.org
Subject: Re: RE: RE: KVM storage cluster

Swen, performance looks awesome, but I still wonder where the magic is here, 
because AFAIK Ceph is not capable of even coming close, yet Red Hat bets on 
it... Might it be that ScaleIO doesn't wait for the replication to complete for 
IO, or that some other hack is used?

On 2 Feb 2018 at 3:19 PM, "S. Brüseke - proIO GmbH" <
s.brues...@proio.com> wrote:

> Hi Ivan,
>
>
>
> it is a 50/50 read-write mix. Here is the fio command I used:
>
> fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k 
> --invalidate=1 --group_reporting --direct=1 --filename=/dev/scinia 
> --time_based
> --runtime= --ioengine=libaio --numjobs=4 --iodepth=256 
> --norandommap
> --randrepeat=0 --exitall
>
>
>
> Result was:
>
> IO Workload 274.000 IOPS
>
> 1,0 GB/s transfer
>
> Read Bandwidth 536MB/s
>
> Read IOPS 137.000
>
> Write Bandwidth 536MB/s
>
> Write IOPS 137.000
>
>
>
> If you want me to run a different fio command just send it. My lab is 
> still running.
>
>
>
> Any idea how I can mount my ScaleIO volume in KVM?
>
>
>
> Mit freundlichen Grüßen / With kind regards,
>
>
>
> Swen
>
>
>
> *From:* Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> *Sent:* Friday, 2 February 2018 02:58
> *To:* users@cloudstack.apache.org; S. Brüseke - proIO GmbH <
> s.brues...@proio.com>
> *Subject:* Re: RE: KVM storage cluster
>
>
>
> Hi, Swen. Do you test with direct or cached ops, or buffered ones? Is
> it a write test or rw with a certain rw percentage? I hardly believe the
> deployment can do 250k IOs for writing with a single-VM test.
>
>
>
> On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH" <
> s.brues...@proio.com> wrote:
>
> I am also testing with ScaleIO on CentOS 7 with KVM. With a 3-node
> cluster where each node has 2x 2TB SSD (Samsung PM1663a) I get 250.000
> IOPS when doing a fio test (random 4k).
> The only problem is that I do not know how to mount the shared volume
> so that KVM can use it to store VMs on it. Does anyone know how to do it?
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
> -----Original Message-----
> From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Sent: Thursday, 1 February 2018 22:00
> To: users <users@cloudstack.apache.org>
> Subject: Re: KVM storage cluster
>
>
> a bit late, but:
>
> - for any IO heavy (medium even...) workload, try to avoid Ceph, no
> offence; it simply takes a lot of $$$ to make Ceph perform in random IO
> worlds (imagine, RHEL and vendors provide only reference architectures
> with SEQUENTIAL benchmark workloads, not random) - not to mention a
> huge list of bugs we hit back in the days (simply, one single great
> guy handled the Ceph integration for CloudStack, but otherwise not a lot
> of help from other committers, if not mistaken, afaik...)
> - NFS: better performance, but not magic... (but most well supported,
> code-wise and bug-wise :)
> - and for top notch (costs some $$$), SolidFire is the way to go (we
> have tons of IO heavy customers, so this is THE solution really, after
> living with Ceph, then NFS on SSDs, etc.) and it provides guaranteed IOPS etc...
>
> Cheers.
>
> On 7 January 2018 at 22:46, Grégoire Lamodière <g.lamodi...@dimsi.fr>
> wrote:
>
> > Hi Vahric,
> >
> > Thank you. I will have a look on it.
> >
> > Grégoire
> >
> >
> >
> > Sent from my Samsung Galaxy smartphone.
> >
> >
> >  Original message 
> > From: Vahric MUHTARYAN <vah...@doruk.net.tr> Date: 07/01/2018 21:08
> > (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> > cluster
> >
> > Hello Grégoire,
> >
> > I suggest you look at EMC ScaleIO for block-based operations. It has
> > a free version too! And as block storage it works better than Ceph ;)
> >
> > Regards
> > VM
> >
> > On 7.01.2018 18:12, "Grégoire Lamodière" <g.lamodi...@dimsi.fr> wrote:
> >
> > 

RE: RE: KVM storage cluster

2018-02-02 Thread S . Brüseke - proIO GmbH
Hi Ivan,
 
it is a 50/50 read-write mix. Here is the fio command I used:
fio --name=test --readwrite=randrw --rwmixwrite=50 --bs=4k --invalidate=1 
--group_reporting --direct=1 --filename=/dev/scinia --time_based --runtime= 
--ioengine=libaio --numjobs=4 --iodepth=256 --norandommap --randrepeat=0 
--exitall
 
Result was:
IO Workload 274.000 IOPS
1,0 GB/s transfer
Read Bandwidth 536MB/s
Read IOPS 137.000
Write Bandwidth 536MB/s
Write IOPS 137.000
 
If you want me to run a different fio command just send it. My lab is still 
running.
 
Any idea how I can mount my ScaleIO volume in KVM?
 
Mit freundlichen Grüßen / With kind regards,
 
Swen
 
From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
Sent: Friday, 2 February 2018 02:58
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brues...@proio.com>
Subject: Re: RE: KVM storage cluster
 
Hi, Swen. Do you test with direct or cached ops, or buffered ones? Is it a 
write test or rw with a certain rw percentage? I hardly believe the deployment 
can do 250k IOs for writing with a single-VM test.
 
On 2 Feb 2018 at 4:56, "S. Brüseke - proIO GmbH"
<s.brues...@proio.com> wrote:
I am also testing with ScaleIO on CentOS 7 with KVM. With a 3-node cluster 
where each node has 2x 2TB SSD (Samsung PM1663a) I get 250.000 IOPS when doing 
a fio test (random 4k).
The only problem is that I do not know how to mount the shared volume so that 
KVM can use it to store VMs on it. Does anyone know how to do it?

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Thursday, 1 February 2018 22:00
To: users <users@cloudstack.apache.org>
Subject: Re: KVM storage cluster

a bit late, but:

- for any IO heavy (medium even...) workload, try to avoid Ceph, no offence; it 
simply takes a lot of $$$ to make Ceph perform in random IO worlds (imagine, 
RHEL and vendors provide only reference architectures with SEQUENTIAL benchmark 
workloads, not random) - not to mention a huge list of bugs we hit back in the 
days (simply, one single great guy handled the Ceph integration for CloudStack, 
but otherwise not a lot of help from other committers, if not mistaken, afaik...)
- NFS: better performance, but not magic... (but most well supported, code-wise 
and bug-wise :)
- and for top notch (costs some $$$), SolidFire is the way to go (we have tons 
of IO heavy customers, so this is THE solution really, after living with Ceph, 
then NFS on SSDs, etc.) and it provides guaranteed IOPS etc...

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière <g.lamodi...@dimsi.fr> wrote:

> Hi Vahric,
>
> Thank you. I will have a look on it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Vahric MUHTARYAN <vah...@doruk.net.tr> Date: 07/01/2018 21:08
> (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free version too! And as block storage it works better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière" <g.lamodi...@dimsi.fr> wrote:
>
> Hi Ivan,
>
> Thank you for your quick reply.
>
> I'll have a look at Ceph and the related perfs.
> As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS, this is even
> better (and maintaining them as well).
>
> Thanks for pointing to ceph.
>
> Grégoire
>
>
>
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -----Original Message-----
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, 7 January 2018 15:20
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hi, Grégoire,
> You could have
> - local storage if you like, so every compute node has its own
> space (one LUN per host)
> - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
> - a certain node dedicated as the NFS server (or two servers with
> DRBD)
>
> I don't think that a shared FS is a good option; even clustered LVM
> is a big pain.
>
> 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière <g.lamodi...@dimsi.fr>:
>
> > Dear all,
> >
> > Since Citrix deeply changed the free version of XenServer 7.3, I am in
> > the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> > decided to use HP blades connected to HP P2000 over multipath SAS links.
> >
> > The network part seems fine to me, not so far from what we use

RE: KVM storage cluster

2018-02-01 Thread S . Brüseke - proIO GmbH
I am also testing with ScaleIO on CentOS 7 with KVM. With a 3-node cluster 
where each node has 2x 2TB SSD (Samsung PM1663a) I get 250.000 IOPS when doing 
a fio test (random 4k).
The only problem is that I do not know how to mount the shared volume so that 
KVM can use it to store VMs on it. Does anyone know how to do it?

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Thursday, 1 February 2018 22:00
To: users
Subject: Re: KVM storage cluster

a bit late, but:

- for any IO heavy (medium even...) workload, try to avoid Ceph, no offence; it 
simply takes a lot of $$$ to make Ceph perform in random IO worlds (imagine, 
RHEL and vendors provide only reference architectures with SEQUENTIAL benchmark 
workloads, not random) - not to mention a huge list of bugs we hit back in the 
days (simply, one single great guy handled the Ceph integration for CloudStack, 
but otherwise not a lot of help from other committers, if not mistaken, afaik...)
- NFS: better performance, but not magic... (but most well supported, code-wise 
and bug-wise :)
- and for top notch (costs some $$$), SolidFire is the way to go (we have tons 
of IO heavy customers, so this is THE solution really, after living with Ceph, 
then NFS on SSDs, etc.) and it provides guaranteed IOPS etc...

Cheers.

On 7 January 2018 at 22:46, Grégoire Lamodière  wrote:

> Hi Vahric,
>
> Thank you. I will have a look on it.
>
> Grégoire
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Vahric MUHTARYAN  Date: 07/01/2018 21:08
> (GMT+01:00) To: users@cloudstack.apache.org Subject: Re: KVM storage
> cluster
>
> Hello Grégoire,
>
> I suggest you look at EMC ScaleIO for block-based operations. It has a
> free version too! And as block storage it works better than Ceph ;)
>
> Regards
> VM
>
> On 7.01.2018 18:12, "Grégoire Lamodière"  wrote:
>
> Hi Ivan,
>
> Thank you for your quick reply.
>
> I'll have a look at Ceph and the related perfs.
> As you mentioned, 2 DRBD NFS servers can do the job, but if I can
> avoid using 2 blades just for passing blocks to NFS, this is even
> better (and maintaining them as well).
>
> Thanks for pointing to ceph.
>
> Grégoire
>
>
>
>
> ---
> Grégoire Lamodière
> T/ + 33 6 76 27 03 31
> F/ + 33 1 75 43 89 71
>
> -----Original Message-----
> From: Ivan Kudryavtsev [mailto:kudryavtsev...@bw-sw.com]
> Sent: Sunday, 7 January 2018 15:20
> To: users@cloudstack.apache.org
> Subject: Re: KVM storage cluster
>
> Hi, Grégoire,
> You could have
> - local storage if you like, so every compute node has its own
> space (one LUN per host)
> - Ceph deployed on the same compute nodes (distribute raw
> devices among the nodes)
> - a certain node dedicated as the NFS server (or two servers with
> DRBD)
>
> I don't think that a shared FS is a good option; even clustered LVM
> is a big pain.
>
> 2018-01-07 21:08 GMT+07:00 Grégoire Lamodière :
>
> > Dear all,
> >
> > Since Citrix deeply changed the free version of XenServer 7.3, I am in
> > the process of PoCing a move of our Xen clusters to KVM on CentOS 7. I
> > decided to use HP blades connected to HP P2000 over multipath SAS links.
> >
> > The network part seems fine to me, not so far from what we used to do
> > with Xen.
> > About the storage, I am a little bit confused about the shared
> > mountpoint storage option offered by CS.
> >
> > What would be the good option, in terms of CS, to create a cluster FS
> > using my SAS array?
> > I read somewhere (a Dag SlideShare, I think) that GFS2 is the only
> > clustered FS supported by CS. Is that still correct?
> > Does it mean I have to create the GFS2 cluster, make identical mount
> > configs on all hosts, and use it in CS as NFS?
> > I do not have to add the storage to KVM prior to CS zone creation?
> >
> > Thanks a lot for any help / information.
> >
> > ---
> > Grégoire Lamodière
> > T/ + 33 6 76 27 03 31
> > F/ + 33 1 75 43 89 71
> >
> >
>
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bitworks.software/ 
>
>
>
>


-- 

Andrija Panić



RE: Quick 1 Question Survey

2017-09-12 Thread S . Brüseke - proIO GmbH
Cloudstack Management = centos6
KVM/XEN = XenServer 6.5 SP1

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Rene Moser [mailto:m...@renemoser.net]
Sent: Tuesday, 12 September 2017 14:13
To: users@cloudstack.apache.org
Subject: Quick 1 Question Survey

What Linux OS and release are you running below your:

* CloudStack/Cloudplatform Management
* KVM/XEN Hypervisor Host

Possible answer example

Cloudstack Management = centos6
KVM/XEN = None, No KVM/XEN

Thanks in advance

Regards
René







RE: somebody experience with bare metal?

2017-09-11 Thread S . Brüseke - proIO GmbH
Hi Harikrishna,

thank you for your response! We are using Juniper switches. I found this here: 
https://www.juniper.net/documentation/en_US/release-independent/junos/topics/topic-map/cloudstack-network-guru-plugin.html
Any experience with it? It looks a little bit outdated.

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Harikrishna Patnala [mailto:harikrishna.patn...@accelerite.com]
Sent: Monday, 11 September 2017 11:01
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brues...@proio.com>
Subject: Re: somebody experience with bare metal?

Hi,

We have a pretty good experience with baremetal deployments in both basic and 
advanced zones.

Yes, as you said, currently CloudStack supports only the Dell Force10 switch 
for dynamic VLAN configuration.
Since this is a plugin model, one can develop their own support for other 
switches. The only requirement is that the switch must support configuring 
VLANs dynamically.

Here is the interface to implement
https://github.com/apache/cloudstack/blob/master/plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalSwitchBackend.java

Regards,
Harikrishna
 

> On 04-Sep-2017, at 3:45 PM, S. Brüseke - proIO GmbH <s.brues...@proio.com> 
> wrote:
> 
> Hello,
> 
> I have 2 questions and hope somebody can share his/her experience with me:
> 1) Does somebody have experience with bare metal servers in an advanced 
> network environment?
> 
> 2) As far as I understand the documentation, bare metal servers will only 
> work with Force10 switches because of the automated network port configuration 
> of the uplink port for the physical servers.
> We are using Juniper EX switches and I found a plugin called Network Guru 
> Plugin from Juniper Networks 
> (http://www.juniper.net/documentation/en_US/release-independent/junos/topics/topic-map/cloudstack-network-guru-plugin.html).
> Does anybody know of or use this plugin?
> 
> Thanks to all!
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> 
> 
> 








RE: cloud-set-guest-password working with systemctl

2017-09-08 Thread S . Brüseke - proIO GmbH
Hi Sebastian,

if you want you can take a look at our preseed and sysprep configs:
https://gitlab.proio.com/s.brueseke/megonacloudtemplates

You can also find ready templates here: http://openvm.eu/.

Thanks again to Nux. :-)

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Sebastian Gomez [mailto:tioc...@gmail.com]
Sent: Thursday, 7 September 2017 20:52
To: users@cloudstack.apache.org
Subject: Re: cloud-set-guest-password working with systemctl

Could you share them, please?

Yes, I agree, perhaps our doc needs an update.


Thanks in advance.



Sincerely,
Sebastián Gómez

On Thu, Sep 7, 2017 at 12:38 PM, Pierre-Luc Dion 
wrote:

> Hi,
>
> On my side we use cloud-init in all our Linux templates. We got it to
> work for password, password-reset, sshkey and sshkey-reset. It also
> supports user-data and provides an easy way to manage a different user than
> root.
>
> I think our community documentation needs an update on this topic...
>
> Cheers,
>
On 28 August 2017 5:29 PM, "Sebastian Gomez"  wrote:
>
I had the same problem as Marc and I could not resolve it on Ubuntu 16.04.
I followed the CloudStack steps to set it up, but the script is not 
running well in our environment.

If I execute the script from the command line, it works fine. During the 
boot process there is some problem with the interaction of systemd and DHCP, 
and the VR is not reachable to reply to this request.
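For illustration, one hedged workaround sketch: order the generated unit after 
the network is online, assuming the unit name cloud-set-guest-password.service 
from the status output quoted later in this thread (whether this fully fixes 
the DHCP race is untested here):

# create a systemd drop-in that delays the script until networking is up
mkdir -p /etc/systemd/system/cloud-set-guest-password.service.d
cat > /etc/systemd/system/cloud-set-guest-password.service.d/override.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload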
>
On the other hand, I found many different versions of this OLD script, 
and I remember that we had to use another community version for 
Ubuntu 14.04 as well. I noticed that this is a very old script that has not 
been modified for years.
As you proposed, I'm trying to do it via cloud-init. Having just returned 
from holidays, I will finish this integration.

Such a simple task as setting up a basic template is becoming a 
nightmare, so I would like to know how people are setting up Ubuntu 
templates:
- SSH key pairs? They would be a good way, but CloudStack 4.9.2 does not 
allow managing SSH key pairs via the GUI, only via the API, so this is 
not an option for less advanced users.
> - Default user/password?
>
>
> I can't believe that we are the only two buddies with this problem.
>
>
>
> Regards.
>
>
>
>
Sincerely,
> Sebastián Gómez
>
> On Mon, Aug 7, 2017 at 3:36 PM, Pierre-Luc Dion 
> wrote:
>
> > As Syed is saying, it's most likely because of systemd. Have you tried
> > using cloud-init? It works well but needs some tuning for password and
> > sshkey reset...
> >
> >
> >
> >> On 7 August 2017 09:33, "Syed Ahmed"  wrote:
> >
> >> Hi,
> >>
> >> Is the password server mentioned in the logs 192.168.155.1 reachable?
> The
> >> password server is hosted on the VR. Sometimes, restarting the 
> >> network will reboot the VR and fix this. If not let us know, we'll 
> >> help you out.
> >>
> >> Thanks,
> >> -Syed
> >>
> >> On Fri, Aug 4, 2017 at 9:33 AM, Marc Poll Garcia < 
> >> marc.poll.gar...@upcnet.es
> >> > wrote:
> >>
> >> > Hello everyone,
> >> >
> >> > last days we are experiencing some issues on our "Ubuntu 16.04"
> >> template,
> >> > created step by step following this tutorial:
> >> >
> >> > http://docs.cloudstack.apache.org/projects/cloudstack-
> >> > administration/en/4.8/templates/_create_linux.html
> >> >
> >> > It is happening on "Ubuntu 16.04.2 LTS \n \l" system and  
> >> > "CloudStack 4.9.2.0" version.
> >> >
> >> >
> >> > Unfortunately we detected that some of the features we had, no 
> >> > longer
> >> work,
> >> > such as the changing password one.
> >> >
> >> > After several tests and reviewing it more deeply, I see that it 
> >> > works "intermittently" and the service sometimes does not start on boot:
> >> >
> >> > /etc/init.d/cloud-set-guest-password status ● 
> >> > cloud-set-guest-password.service - LSB: Init file for Password
> >> Download
> >> > Client
> >> >Loaded: loaded (/etc/init.d/cloud-set-guest-password; bad; 
> >> > vendor
> >> > preset: enabled)
> >> >Active: *failed* (Result: exit-code) since Thu 2017-08-03 
> >> > 16:09:09
> >> CEST;
> >> > 39min ago
> >> >  Docs: man:systemd-sysv-generator(8)
> >> >
> >> > Aug 03 16:08:58 Ubuntu16PassKO systemd[1]: Starting LSB: Init 
> >> > file for Password Download Client...
> >> > Aug 03 16:08:58 Ubuntu16PassKO cloud-set-guest-password[1062]:  *
> >> Starting
> >> > cloud cloud-set-guest-password
> >> > Aug 03 16:09:09 Ubuntu16PassKO cloud[1252]: DHCP file
> >> > (/var/lib/dhcp/dhclient-ens160.leases) exists. No need to restart 
> >> > dhclient.
> >> > Aug 03 16:09:09 Ubuntu16PassKO cloud[1256]: Using DHCP lease from 
> >> > /var/lib/dhcp/dhclient-ens160.leases
> >> > Aug 03 16:09:09 Ubuntu16PassKO cloud[1263]: Found password server 
> >> > IP
> >> > 192.168.155.1 in /var/lib/dhcp/dhclient-ens160.leases
> >> > Aug 03 16:09:09 Ubuntu16PassKO cloud[1264]: Sending request to
> password
> >> > 

RE: Free capacity calculation within ACS

2017-09-05 Thread S . Brüseke - proIO GmbH
Hi Ingo,

did you try to work with host tags? I am not sure if this will solve it, but it 
is worth a try.
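For illustration, the host-tag idea could look like this in CloudMonkey, 
assuming a tag named "parked" and a dedicated compute offering for the parked 
demo machines (the tag, the placeholder ID and the sizes are all illustrative):

# tag the hosts that should carry the parked demo machines
update host id=<host-uuid> hosttags=parked
# create an offering that can only be deployed onto those hosts
create serviceoffering name=demo-parked displaytext="parked demo VMs" \
    cpunumber=8 cpuspeed=2000 memory=32768 hosttags=parked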

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Jochim, Ingo [mailto:ingo.joc...@bitgroup.de]
Sent: Tuesday, 5 September 2017 09:48
To: users@cloudstack.apache.org
Subject: Free capacity calculation within ACS

Hello all,

within our CloudStack environment we would like to park a couple of large 
machines in a powered-off state.
Those machines are demo machines which are only needed sometimes.
Those machines still get included in the capacity. That means we cannot build 
new machines even if there are free resources on the hypervisors.
We don't want to solve it with overcommitment.
Is there a possibility to calculate the free capacity without all the 
powered-off machines?

Currently we have a dirty workaround. We created an offering with 1 core and 
1MB RAM and used that for all parked machines.
But this is not very nice.

Any ideas or comments are welcome.
Thank you.
Ingo






somebody experience with bare metal?

2017-09-04 Thread S . Brüseke - proIO GmbH
Hello,

I have 2 questions and hope somebody can share his/her experience with me:
1) Does somebody have experience with bare metal servers in an advanced network 
environment?

2) As far as I understand the documentation, bare metal servers will only work 
with Force10 switches because of the automated network port configuration of 
the uplink port for the physical servers.
We are using Juniper EX switches and I found a plugin called Network Guru 
Plugin from Juniper Networks 
(http://www.juniper.net/documentation/en_US/release-independent/junos/topics/topic-map/cloudstack-network-guru-plugin.html).
Does anybody know of or use this plugin?

Thanks to all!

Mit freundlichen Grüßen / With kind regards,

Swen







KVM evaluation

2017-08-14 Thread S . Brüseke - proIO GmbH
Hi,

we want to give KVM a chance in our CS installation. Can somebody share 
his/her experience?
- Which version of KVM are you using?
- What host OS are you running KVM on?
- What do you tweak on your KVM installations?

Thank you very much for sharing this information!

Mit freundlichen Grüßen / With kind regards,

Swen







RE: Creating a Cloud-Init Ready Template for CentOS7

2017-08-03 Thread S . Brüseke - proIO GmbH
Hi,

we are using the following config for template creation via virt-install:
https://gitlab.proio.com/s.brueseke/megonacloudtemplates/blob/master/CentOS/centos7.cfg

But take a look at: http://dl.openvm.eu/
You can find templates there.

Mit freundlichen Grüßen / With kind regards,

Swen

-----Original Message-----
From: Imran Ahmed [mailto:im...@eaxiom.net]
Sent: Thursday, 3 August 2017 18:13
To: users@cloudstack.apache.org
Subject: Creating a Cloud-Init Ready Template for CentOS7

Hi all,

Can someone suggest which packages are recommended to create a new 
cloud-init-enabled template for CentOS 7?
For an OpenStack environment we need the packages below:
acpid
cloud-init
cloud-utils-growpart

Also, on the host machine we use virt-sysprep to remove MAC addresses etc.
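For illustration, the equivalent steps for a CloudStack-oriented CentOS 7 
template might look like this (the package set mirrors the list above; the 
libvirt domain name centos7-tpl is illustrative):

# inside the guest
yum -y install acpid cloud-init cloud-utils-growpart
systemctl enable acpid cloud-init
# on the KVM host, after shutting the guest down and before creating the template
virt-sysprep -d centos7-tpl    # strips MAC addresses, SSH host keys, logs, machine-id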

Regards,

Imran







RE: Instance with a larger disk size than Template

2017-08-03 Thread S . Brüseke - proIO GmbH
Hi Imran,

you are talking about 3 different levels here to reach your goal of resizing a 
volume. The first level is the volume itself; this is what you can do within 
CS. After that you need to extend the partition, and then you need to expand 
the filesystem. The last two levels you need to do within the OS of the server.

What we do is use cloud-init within our template to automate this. But our 
templates do not use LVM. Our templates check at boot whether the root volume 
has been extended and then expand the partition and the filesystem.

If you want to know more about it, I can give you more details.
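For illustration, a hedged sketch of the two in-guest levels for Imran's LVM 
case, assuming the CentOS 7 defaults of an XFS root LV "root" in volume group 
"centos" on partition /dev/vda2 (verify with lsblk first; all names are 
assumptions):

growpart /dev/vda 2                      # extend the partition (cloud-utils-growpart)
pvresize /dev/vda2                       # let LVM see the larger partition
lvextend -l +100%FREE /dev/centos/root   # grow the logical volume
xfs_growfs /                             # expand the filesystem (resize2fs for ext4)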

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Imran Ahmed [mailto:im...@eaxiom.net]
Sent: Thursday, 3 August 2017 12:00
To: users@cloudstack.apache.org
Subject: Instance with a larger disk size than Template

Hi All,

I am creating an instance with a 300GB disk from a CentOS 7 template that has 
a 5GB disk (LVM-based).
The issue is that the root LVM partition inside the new VM instance still 
shows 5GB.

The device size (/dev/vda), however, shows 300GB. The question is: what is the 
best strategy to resize the root LVM partition so that I can use all 300GB?

Kind regards,

Imran 







RE: Snapshot and secondary storage utilisation.

2017-07-10 Thread S . Brüseke - proIO GmbH
Hi Makrand,

please take a look at the global setting "snapshot.delta.max". As far as I 
understand, for scheduled snapshots ACS uses deltas to minimize time and 
transferred data. So after the first full snapshot has been taken, each 
following one is only a delta until you hit snapshot.delta.max.
Because ACS needs the full snapshot for as long as you need any of its deltas, 
you will see the VHD files on your secondary storage, but not in the UI.

Hope that helped.
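For illustration, checking and adjusting that setting via CloudMonkey might 
look like this (the value 16 is illustrative; note that some global settings 
only take effect after a management server restart):

list configurations name=snapshot.delta.max
update configuration name=snapshot.delta.max value=16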

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Makrand [mailto:makrandsa...@gmail.com]
Sent: Monday, 10 July 2017 09:14
To: users@cloudstack.apache.org
Subject: Snapshot and secondary storage utilisation.

Hi all,

My setup is: ACS 4.4, XenServer 6.2 SP1, 4TB of secondary storage coming from 
NFS.

I am observing some issues with the way *.vhd* files are stored and cleaned up 
in secondary storage. Let's take the example of VM-813. It has a 250G root 
disk (disk ID 1015). The snapshot is scheduled to happen once every week 
(Saturday night) and is supposed to keep only 1 snapshot. From the GUI I am 
seeing it is only keeping the latest week's snapshot.

But the resource utilization in the CS GUI is increasing day by day. So I just 
ran du -smh and found that there are multiple vhd files of different sizes 
under secondary storage.

Here is a snippet:

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# du -smhh *
1.5K    1002
1.5K    1003
1.5K    1004
243G    1015
1.5K    1114

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# ls -lht *
1015:
total 243G
-rw-r--r-- 1 nobody nogroup  32G Jul  8 21:19 8a7e6580-5191-4eb0-9eb1-3ec8e75ce104.vhd
-rw-r--r-- 1 nobody nogroup  40G Jul  1 21:30 f52b82b0-0eaf-4297-a973-1f5477c10b5e.vhd
-rw-r--r-- 1 nobody nogroup  43G Jun 24 21:35 3dc72a3b-91ad-45ae-b618-9aefb7565edb.vhd
-rw-r--r-- 1 nobody nogroup  40G Jun 17 21:30 c626a9c5-1929-4489-b181-6524af1c88ad.vhd
-rw-r--r-- 1 nobody nogroup  29G Jun 10 21:16 697cf9bd-4433-426d-a4a1-545f03aae3e6.vhd
-rw-r--r-- 1 nobody nogroup  29G Jun  3 21:00 bff859b3-a51c-4186-8c19-1ba94f99f9e7.vhd
-rw-r--r-- 1 nobody nogroup  43G May 27 21:35 127e3f6e-4fa5-45ed-a95d-7d0b850a053d.vhd
-rw-r--r-- 1 nobody nogroup  60G May 20 22:01 619fe1ed-6807-441c-9526-526486d7a6d2.vhd
-rw-r--r-- 1 nobody nogroup  35G May 13 21:23 71b0d6a8-3c93-493f-b82c-732b7a808f6d.vhd
-rw-r--r-- 1 nobody nogroup  31G May  6 21:19 ccbfb3ec-abd8-448c-ba79-36631b227203.vhd
-rw-r--r-- 1 nobody nogroup  32G Apr 29 21:18 52215821-ed4d-4283-9aed-9f9cc5acd5bd.vhd
-rw-r--r-- 1 nobody nogroup  38G Apr 22 21:26 4cb6ea42-8450-493a-b6f2-5be5b0594a30.vhd
-rw-r--r-- 1 nobody nogroup 248G Apr 16 00:44 243f50d6-d06a-47af-ab45-e0b8599aac8d.vhd


I observed the same behavior for the root disks of 4 other VMs. So the number of 
vhds keeps growing on secondary storage, and one will eventually run out of 
secondary storage space.

Simple question:

1) Why is CloudStack creating multiple vhd files? Shouldn't it keep only the 
one vhd on secondary storage defined in the snapshot policy?

Any thoughts? As explained earlier, in the GUI I only see last week's snapshot 
as backed up.



--
Makrand






Re: Recreating SystemVM's

2017-06-15 Thread S . Brüseke - proIO GmbH
I once had a similar problem with my system VMs, and the root cause was that the 
global settings referred to the wrong systemvm template. I am not sure if this 
helps you, but I wanted to mention it.

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Jeremy Peterson [mailto:jpeter...@acentek.net] 
Sent: Thursday, 15 June 2017 01:55
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

Hahaha.  The best response ever. 

I dug through these emails and someone had sort of the same log messages (cannot 
attach network) and blamed XenServer. OK, I'm cool with that, but why oh why is 
it only system VMs?

Jeremy

From: Imran Ahmed [im...@eaxiom.net]
Sent: Wednesday, June 14, 2017 6:22 PM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

Yes,

-Original Message-
From: Jeremy Peterson [mailto:jpeter...@acentek.net]
Sent: Wednesday, June 14, 2017 9:59 PM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

Is there anyone out there reading these messages?

Am I just not seeing responses?

Jeremy


-Original Message-
From: Jeremy Peterson [mailto:jpeter...@acentek.net]
Sent: Wednesday, June 14, 2017 8:12 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

Since this is still happening, I opened an issue: CLOUDSTACK-9960

Jeremy

-Original Message-
From: Jeremy Peterson [mailto:jpeter...@acentek.net]
Sent: Sunday, June 11, 2017 9:10 AM
To: users@cloudstack.apache.org
Subject: Re: Recreating SystemVM's

Any other suggestions?

I am going to schedule the XenServer updates. But this all points back to 
CANNOT_ATTACH_NETWORK.

I've verified nothing is active on the public IP space that those two VMs were 
living on.

Jeremy

From: Jeremy Peterson 
Sent: Friday, June 9, 2017 9:58 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

I see the VMs trying to be created on a host that I just removed from 
maintenance mode to install updates; here are the logs.

I don't see anything that sticks out to me as a failure message.

Jun  9 09:53:54 Xen3 SM: [13068] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13068] ['ifconfig', 'xapi12', '169.254.0.1', 
'netmask', '255.255.0.0']
Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13068] ['ip', 'route', 'add', '169.254.0.0/16', 
'dev', 'xapi12', 'src', '169.254.0.1']
Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13071] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13071] ['ifconfig', 'xapi12', '169.254.0.1', 
'netmask', '255.255.0.0']
Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13071] ['ip', 'route', 'add', '169.254.0.0/16', 
'dev', 'xapi12', 'src', '169.254.0.1']
Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS


Jun  9 09:54:00 Xen3 SM: [13115] on-slave.multi: {'vgName':
'VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2', 'lvName1':
'VHD-68a7-6c40-4aa6-b88e-c798b6fdc04d', 'action1':
'deactivateNoRefcount', 'action2': 'cleanupLock', 'uuid2':
'68a7-6c40-4aa6-b88e-c798b6fdc04d', 'ns2':
'lvm-469b6dcd-8466-3d03-de0e-cc3983e1b6e2'}
Jun  9 09:54:00 Xen3 SM: [13115] LVMCache created for
VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2
Jun  9 09:54:00 Xen3 SM: [13115] on-slave.action 1: deactivateNoRefcount Jun
9 09:54:00 Xen3 SM: [13115] LVMCache: will initialize now Jun  9 09:54:00
Xen3 SM: [13115] LVMCache: refreshing Jun  9 09:54:00 Xen3 SM: [13115] 
['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', 
'/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2']
Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS
Jun  9 09:54:00 Xen3 SM: [13115] ['/usr/sbin/lvchange', '-an',
'/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2/VHD-68a7-6c40-4
aa6-b88e-c798b6fdc04d']
Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS
Jun  9 09:54:00 Xen3 SM: [13115] ['/sbin/dmsetup', 'status',
'VG_XenStorage--469b6dcd--8466--3d03--de0e--cc3983e1b6e2-VHD--68a7--6c40
--4aa6--b88e--c798b6fdc04d']
Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS
Jun  9 09:54:00 Xen3 SM: [13115] on-slave.action 2: cleanupLock

Jun  9 09:54:16 Xen3 SM: [13230] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS
Jun  9 09:54:16 Xen3 SM: [13230] ['ifconfig', 'xapi12', '169.254.0.1', 
'netmask', '255.255.0.0']
Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS
Jun  9 09:54:16 Xen3 SM: [13230] ['ip', 'route', 'add', '169.254.0.0/16', 
'dev', 'xapi12', 'src', '169.254.0.1']
Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS
Jun  9 09:54:19 Xen3 updatempppathd: [15446] The garbage collection routine
returned: 0 Jun  9 09:54:23 Xen3 SM: [13277] ['ip', 'route', 'del', 

Re: How to stop running storage migration?

2017-05-13 Thread S . Brüseke - proIO GmbH
Hi Melanie,

are you talking about a running VM or just a volume you are migrating? Which 
hypervisor are you using?

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-Original Message-
From: Melanie Desaive [mailto:m.desa...@heinlein-support.de] 
Sent: Saturday, 13 May 2017 09:46
To: users@cloudstack.apache.org
Subject: How to stop running storage migration?

Hi all,

does anyone know a way to abort a running storage migration without risking 
corrupted data?

That information could help me a lot!

Greetings,

Melanie
--
--

Heinlein Support GmbH
Linux: Akademie - Support - Hosting

http://www.heinlein-support.de
Tel: 030 / 40 50 51 - 0
Fax: 030 / 40 50 51 - 19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin







Re: XenServer VM no longer starts

2017-02-23 Thread S . Brüseke - proIO GmbH
Hi Martin,

as Abhinandan pointed out in a previous mail, it looks like you hit a bug. Take 
a look at the link he provided in his mail.
Please detach all data disks and try to start the VM. Does that work?
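
If you want to script the detach, a rough cloudmonkey sketch (the UUIDs are 
placeholders):

    list volumes virtualmachineid=<vm-uuid> type=DATADISK
    detach volume id=<volume-uuid>        # repeat for each data disk
    start virtualmachine id=<vm-uuid>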

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com] 
Sent: Thursday, 23 February 2017 13:49
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Subject: Re: XenServer VM no longer starts

Hi!

How can I check that? 

I tried starting the VM; not a single line appeared in the SMlog during that 
attempt.

Thanks,

Martin

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Wednesday, 22 February 2017 12:41
To: users@cloudstack.apache.org
Subject: Re: XenServer VM no longer starts

Hi Martin,

does the volume still exist on primary storage? You can also take a look at the 
SMlog on the XenServer.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com]
Sent: Wednesday, 22 February 2017 12:27
To: users@cloudstack.apache.org
Subject: XenServer VM no longer starts

Hi!

After shutting down a VM for resizing, it no longer starts. The GUI reports 
insufficient capacity (but there's plenty), and in the log I see this:

2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
assigned: 29, adding di sk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a pool 
assigned: 29, addin g disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a pool 
assigned: 29, addin g disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a pool 
assigned: 29, addin g disk to VM
2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
for i-18-2998-VM
2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae created 
for com.cloud.agent.api.to.DiskTO@1d82661a
2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c created 
for com.cloud.agent.api.to.DiskTO@5bfd4418
2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 created 
for com.cloud.agent.api.to.DiskTO@5081b2d6
2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 created 
for com.cloud.agent.api.to.DiskTO@64992bda
2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid The 
device name is invalid
at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
at 
com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
at com.xensource.xenapi.VBD.create(VBD.java:322)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687

Re: XenServer VM no longer starts

2017-02-22 Thread S . Brüseke - proIO GmbH
Hi Martin,

does the volume still exist on primary storage? You can also take a look at the 
SMlog on the XenServer.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Martin Emrich [mailto:martin.emr...@empolis.com] 
Sent: Wednesday, 22 February 2017 12:27
To: users@cloudstack.apache.org
Subject: XenServer VM no longer starts

Hi!

After shutting down a VM for resizing, it no longer starts. The GUI reports 
insufficient capacity (but there's plenty), and in the log I see this:

2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) Checking 
if we need to prepare 4 volumes for VM[User|i-18-2998-VM]
2017-02-22 12:18:40,626 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5050|vm=2998|ROOT], since it already has a pool 
assigned: 29, adding di sk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5051|vm=2998|DATADISK], since it already has a pool 
assigned: 29, addin g disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5052|vm=2998|DATADISK], since it already has a pool 
assigned: 29, addin g disk to VM
2017-02-22 12:18:40,627 DEBUG [o.a.c.e.o.VolumeOrchestrator] 
(Work-Job-Executor-11:ctx-c5cca7da job-70304/job-70306 ctx-a412f4b8) No need to 
recreate the volume: Vol[5053|vm=2998|DATADISK], since it already has a pool 
assigned: 29, addin g disk to VM
2017-02-22 12:18:40,669 DEBUG [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) 1. The VM i-18-2998-VM is in Starting state.
2017-02-22 12:18:40,688 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) Created VM e37afda2-9661-4655-e750-1855b0318787 
for i-18-2998-VM
2017-02-22 12:18:40,710 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD d560c831-29f8-c82b-7e81-778ce33318ae created 
for com.cloud.agent.api.to.DiskTO@1d82661a
2017-02-22 12:18:40,720 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD b083c0c8-31bc-1248-859a-234e276d9b4c created 
for com.cloud.agent.api.to.DiskTO@5bfd4418
2017-02-22 12:18:40,729 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 48701244-a29a-e9ce-f6c3-ed5225271aa7 created 
for com.cloud.agent.api.to.DiskTO@5081b2d6
2017-02-22 12:18:40,737 DEBUG [c.c.a.m.DirectAgentAttache] 
(DirectAgentCronJob-352:ctx-569e5f7b) Ping from 337(esc-fra1-xn011)
2017-02-22 12:18:40,739 DEBUG [c.c.h.x.r.CitrixResourceBase] 
(DirectAgent-469:ctx-d6e5768e) VBD 755de6cb-3994-8251-c0d5-e45cda52ca98 created 
for com.cloud.agent.api.to.DiskTO@64992bda
2017-02-22 12:18:40,744 WARN  [c.c.h.x.r.w.x.CitrixStartCommandWrapper] 
(DirectAgent-469:ctx-d6e5768e) Catch Exception: class 
com.xensource.xenapi.Types$InvalidDevice due to The device name is invalid The 
device name is invalid
at com.xensource.xenapi.Types.checkResponse(Types.java:1169)
at com.xensource.xenapi.Connection.dispatch(Connection.java:395)
at 
com.cloud.hypervisor.xenserver.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:457)
at com.xensource.xenapi.VBD.create(VBD.java:322)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.createVbd(CitrixResourceBase.java:1156)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:121)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixStartCommandWrapper.execute(CitrixStartCommandWrapper.java:53)
at 
com.cloud.hypervisor.xenserver.resource.wrapper.xenbase.CitrixRequestWrapper.execute(CitrixRequestWrapper.java:122)
at 
com.cloud.hypervisor.xenserver.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:1687)
at 
com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:315)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 

Re: Unplanned downtime

2017-02-08 Thread S . Brüseke - proIO GmbH
Hi Vibol,

CS can only bring up VMs on hosts in the same cluster, because only that cluster 
has access to the primary storage holding the VM's data.
So if all hosts of one cluster are gone, there is no way for CS to bring up 
these VMs.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Vibol Vireak [mailto:vibolvir...@gmail.com] 
Sent: Wednesday, 8 February 2017 15:15
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Subject: Re: Unplanned downtime

I just installed CloudStack, with KVM as hypervisor and Ceph primary storage.
I don't have any running instances yet, because I plan to set up an advanced 
network with Open vSwitch for my testing environment. As you mentioned: will 
CloudStack automatically fail over or restart the VMs on another hypervisor if 
we enable HA on an instance?

Thank and Best Regards,

2017-02-08 18:17 GMT+07:00 S. Brüseke - proIO GmbH <s.brues...@proio.com>:

> Hi Vibol,
>
> that depends on your hypervisor setup and your settings in CS. Is the 
> compute offering the VMs are based on HA-enabled? Is your hypervisor cluster 
> HA-enabled?
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
>
> -Original Message-
> From: Vibol Vireak [mailto:vibolvir...@gmail.com]
> Sent: Wednesday, 8 February 2017 12:11
> To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
> Subject: Re: Unplanned downtime
>
> Dear Swen,
>
> Firstly, I really appreciate your response and apologize for the confusing 
> question; I understand your answer, and I just want to make my question 
> clear. If the hypervisor host accidentally crashes at night at 3am (for 
> example), do I need to restart the instances manually as you said, or will 
> CloudStack restart them automatically?
>
> Thank and Best Regards,
>
>
> 2017-02-08 18:00 GMT+07:00 S. Brüseke - proIO GmbH <s.brues...@proio.com>:
>
> > Hi Vibol,
> >
> > I do not get your question 100%. What do you mean by unplanned 
> > hypervisor downtime? Do you mean that a hypervisor crashes and goes 
> > down?
> > CS has tools to restart VMs that should not be down, but you will 
> > have downtime for the VMs between the crash of the hypervisor and the 
> > restart of the VMs on another host. So we are talking about disaster 
> > recovery here. If you need real high availability, you need to build 
> > it at the application level, for example
> > two webservers behind a load balancer.
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen
> >
> >
> > -Original Message-
> > From: Vibol Vireak [mailto:vibolvir...@gmail.com]
> > Sent: Wednesday, 8 February 2017 11:24
> > To: users@cloudstack.apache.org
> > Subject: Unplanned downtime
> >
> > Dear all,
> >
> > Migrating instances is a very good function available in CloudStack, as 
> > presented in the Apache CloudStack documentation. But after spending a 
> > week reading and searching for more information, I found out that 
> > unplanned downtime of a hypervisor can cause the VMs inside to go offline.
> >
> > In general, is there a way to prevent instances going offline because of 
> > unplanned hypervisor downtime? Or is there any automatic migration 
> > available in Apache CloudStack?
> >
> > Thank and Best Regards,
> >
> >

Re: Unplanned downtime

2017-02-08 Thread S . Brüseke - proIO GmbH
Hi Vibol,

that depends on your hypervisor setup and your settings in CS. Is the compute 
offering the VMs are based on HA-enabled? Is your hypervisor cluster HA-enabled?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Vibol Vireak [mailto:vibolvir...@gmail.com] 
Sent: Wednesday, 8 February 2017 12:11
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Subject: Re: Unplanned downtime

Dear Swen,

Firstly, I really appreciate your response and apologize for the confusing 
question; I understand your answer, and I just want to make my question clear. 
If the hypervisor host accidentally crashes at night at 3am (for example), do I 
need to restart the instances manually as you said, or will CloudStack restart 
them automatically?

Thank and Best Regards,


2017-02-08 18:00 GMT+07:00 S. Brüseke - proIO GmbH <s.brues...@proio.com>:

> Hi Vibol,
>
> I do not get your question 100%. What do you mean by unplanned 
> hypervisor downtime? Do you mean that a hypervisor crashes and goes down?
> CS has tools to restart VMs that should not be down, but you will have 
> downtime for the VMs between the crash of the hypervisor and the restart 
> of the VMs on another host. So we are talking about disaster recovery 
> here. If you need real high availability, you need to build it at the 
> application level, for example
> two webservers behind a load balancer.
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
>
> -Original Message-
> From: Vibol Vireak [mailto:vibolvir...@gmail.com]
> Sent: Wednesday, 8 February 2017 11:24
> To: users@cloudstack.apache.org
> Subject: Unplanned downtime
>
> Dear all,
>
> Migrating instances is a very good function available in CloudStack, as 
> presented in the Apache CloudStack documentation. But after spending a week 
> reading and searching for more information, I found out that unplanned 
> downtime of a hypervisor can cause the VMs inside to go offline.
>
> In general, is there a way to prevent instances going offline because of 
> unplanned hypervisor downtime? Or is there any automatic migration 
> available in Apache CloudStack?
>
> Thank and Best Regards,
>
>






Re: Unplanned downtime

2017-02-08 Thread S . Brüseke - proIO GmbH
Hi Vibol,

I do not get your question 100%. What do you mean by unplanned hypervisor 
downtime? Do you mean that a hypervisor crashes and goes down?
CS has tools to restart VMs that should not be down, but you will have downtime 
for the VMs between the crash of the hypervisor and the restart of the VMs on 
another host. So we are talking about disaster recovery here. If you need real 
high availability, you need to build it at the application level, for example 
two webservers behind a load balancer.
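
For the VM-restart side, HA is driven by the compute offering. A hedged 
cloudmonkey sketch of creating an HA-enabled offering (names and sizes are 
placeholders):

    create serviceoffering name=2cpu-4gb-ha displaytext=2cpu-4gb-ha cpunumber=2 cpuspeed=2000 memory=4096 offerha=true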

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Vibol Vireak [mailto:vibolvir...@gmail.com] 
Sent: Wednesday, 8 February 2017 11:24
To: users@cloudstack.apache.org
Subject: Unplanned downtime

Dear all,

Migrating instances is a very good function available in CloudStack, as 
presented in the Apache CloudStack documentation. But after spending a week 
reading and searching for more information, I found out that unplanned downtime 
of a hypervisor can cause the VMs inside to go offline.

In general, is there a way to prevent instances going offline because of 
unplanned hypervisor downtime? Or is there any automatic migration available in 
Apache CloudStack?

Thank and Best Regards,






Re: Issues upgrading to XenServer 7

2017-02-02 Thread S . Brüseke - proIO GmbH
Hi Martin,

here are my 2 cents:
we do not upgrade XenServer in our CS installation at all. What we do is move 
all VMs off one host, eject it from the pool and delete it from CS. After that 
we do a fresh installation of XenServer, create a new pool and add this pool as 
a new cluster to CS. Then we move the VMs via live migration from the old 
cluster to the new one, which frees up the next XenServer in the old pool to 
eject, and so on.
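
Per host, the CloudStack side of this rotation looks roughly like the sketch 
below (all IDs and credentials are placeholders; ejecting from the XenServer 
pool and reinstalling happen outside CloudStack):

    prepare hostformaintenance id=<host-uuid>    # live-migrates VMs off the host
    delete host id=<host-uuid>                   # remove the empty host from CS
    # ... eject from the XenServer pool, reinstall, build the new pool outside CS ...
    add cluster zoneid=<zone-uuid> podid=<pod-uuid> hypervisor=XenServer clustertype=CloudManaged clustername=<new-pool> url=http://<new-pool-master> username=root password=<secret>
    # then, VM by VM, storage live migration into the new cluster:
    migrate virtualmachinewithvolume virtualmachineid=<vm-uuid> hostid=<new-host-uuid>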

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Thursday, 2 February 2017 15:15
To: users@cloudstack.apache.org
Subject: Re: Issues upgrading to XenServer 7

Hi Martin

Check 
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.6/hypervisor/xenserver.html#upgrading-xenserver-versions


Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 02/02/2017, 13:09, "Martin Emrich"  wrote:

Hi!

After getting the upgrade to 4.9.2.0 on my test cloud going, I decided to 
upgrade the hosts from XenServer 6.5 SP1 to XenServer 7. I followed the same 
process as I did many times for upgrades from 6.0 to 6.2 or to 6.5.
Normally CloudStack "reinstalls" the integration scripts under /opt/cloud 
after a XenServer upgrade; I can usually watch it in the logs while it copies 
scripts and other files to the XenServers.

But this time they do not appear, and the host stays in the Alert state.

In the log, I see:

2017-02-02 14:07:07,431 DEBUG [c.c.h.Status] (AgentTaskPool-6:ctx-3c41c4cd) 
(logid:47da929b) Transition:[Resource state = Enabled, Agent event = 
AgentDisconnected, Host id = 2, name = cdsdev-xen5]
2017-02-02 14:07:07,434 WARN  [c.c.r.ResourceManagerImpl] 
(AgentTaskPool-5:ctx-f768a3d8) (logid:aefd7fd0) Unable to connect due to
com.cloud.exception.ConnectionException: Reinitialize agent after setup.
at 
com.cloud.hypervisor.xenserver.discoverer.XcpServerDiscoverer.processConnect(XcpServerDiscoverer.java:627)
at 
com.cloud.agent.manager.AgentManagerImpl.notifyMonitorsOfConnection(AgentManagerImpl.java:564)
at 
com.cloud.agent.manager.AgentManagerImpl.handleDirectConnectAgent(AgentManagerImpl.java:1518)
at 
com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:1902)
at 
com.cloud.resource.ResourceManagerImpl.createHostAndAgent(ResourceManagerImpl.java:2035)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91)
at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at com.sun.proxy.$Proxy161.createHostAndAgent(Unknown Source)
at 
com.cloud.agent.manager.AgentManagerImpl$SimulateStartTask.runInContext(AgentManagerImpl.java:1135)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

How can I motivate CloudStack to install the agent/scripts on the 
XenServers? Or can I do it manually?

Thanks,

Martin




dag.sonst...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
  
 




ApacheCon Miami

2017-01-31 Thread S . Brüseke - proIO GmbH
Hey guys,

does somebody plan to visit ApacheCon (http://us.cloudstackcollab.org/) in Miami 
this May?

Is there a way for our community to get a special price on tickets?

Mit freundlichen Grüßen / With kind regards,

Swen








developing own webinterface

2017-01-30 Thread S . Brüseke - proIO GmbH
Hello,

we want to develop our own webinterface for CS, and I want to ask if somebody 
has done this in the past and can share their experience. Our webinterface will 
use API calls, but which language is best for such a webinterface? We use PHP a 
lot for our web projects, but I am not sure it is the best option here, because 
sometimes you need a runtime to fulfill a chain of API calls with unknown 
outcome.
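
For what it is worth, the API side is language-agnostic. A minimal sketch of 
one signed API call from the shell (the endpoint, API key and secret are 
placeholders); any language with an HMAC-SHA1 library can reproduce this:

    #!/bin/sh
    EP="http://mgmt-server:8080/client/api"
    APIKEY="<api-key>"
    SECRET="<secret-key>"
    # parameters must be sorted alphabetically by name before signing
    PARAMS="apikey=${APIKEY}&command=listVirtualMachines&response=json"
    # sign: lower-case the sorted query string, HMAC-SHA1 it with the secret,
    # base64-encode, then URL-encode the result
    SIG=$(printf '%s' "${PARAMS}" | tr 'A-Z' 'a-z' \
          | openssl dgst -sha1 -hmac "${SECRET}" -binary | openssl base64 \
          | sed 's/+/%2B/g; s/\//%2F/g; s/=/%3D/g')
    curl "${EP}?${PARAMS}&signature=${SIG}"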

Can you please share your experience? Thx!

Mit freundlichen Grüßen / With kind regards,

Swen








Re: Public IaaS advanced zone

2017-01-26 Thread S . Brüseke - proIO GmbH
Hi Chiradeep,

take a look at www.megona.de. The page is in German, but the interface is in 
English too.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Chiradeep Vittal [mailto:chirade...@gmail.com] 
Sent: Thursday, 26 January 2017 05:07
To: users@cloudstack.apache.org
Subject: Public IaaS advanced zone

Can somebody recommend a CloudStack - based public IAAS cloud? I need -
- advanced zone
- API access
- VPC support with multiple subnets
- preferably KVM / XenServer
- credit card payment
- hourly billing

Sent from my iPhone






Re: Template management

2017-01-23 Thread S . Brüseke - proIO GmbH
Hi Dag,

good point! Thank you for bringing it up.
Our situation is that we need to use storage live migration to do XenServer 
updates anyway.

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Monday, 23 January 2017 12:28
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen,

Keep in mind what you are doing during this process – the migration effectively 
merges the disk chain for each VM into a single bigger disk, which will now take 
up a lot more space on the destination than on the source storage pool. This 
won’t matter with a single VM – but if you have multiple VMs using the same 
template, you lose all the benefits of the space savings in the linked-clone 
disk chains. Every VM you do this to now uses the full-size merged disk – no 
disk chains – and as a result you are using a lot more space across your estate.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 23/01/2017, 08:35, "S. Brüseke - proIO GmbH" <s.brues...@proio.com> wrote:

I did some testing and want to share my findings:
When using local storage, a way to delete old templates which are stuck 
because of a XenServer chain is to perform a live migration and move the VM to 
another host. The chain is removed, and after the CS cleanup job has run, the 
template is deleted too. Any idea how we can use this? 

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Thursday, 19 January 2017 15:34
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen,

Assuming you are using advanced zones my idea below would involve:

1) Create a patching account in your CloudStack environment.
2) Spin up your repo clone boxes in this account – and configure these with 
some sort of nightly synch with the RHEL / Ubuntu / CentOS etc. yum 
repositories.
3) On the public IP address for the patching account configure firewalling 
/ NATing to allow anyone from the same public IP range to access the repo boxes.
4) Configure a DNS entry for this IP address on the DNS servers used by 
your CloudStack infrastructure.
5) Configure cloud-init or similar to check for updates on the DNS server 
name – either on reboot or with a cron type job on a specific date of the month.

Just one idea, there will be many ways to do this. The synched repo boxes 
don’t need to be hosted in CloudStack, they could just be hosted externally on 
an IP address accessible from your public range.
The other thing is you probably want your end users to be able to opt in or 
out of this mechanism, so you may want to put in place some user key/values to 
control this. If you wanted you could also rig up some automation where the VM 
is snapshot’ed prior to patching so users have a rollback point.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 19/01/2017, 14:09, "S. Brüseke - proIO GmbH" <s.brues...@proio.com> 
wrote:

Hi Dag,

how can I provide connection to an internal repo for all networks in my 
CS installation by default?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Thursday, 19 January 2017 14:41
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen,

If you wanted to do this on boot with cloud-init or a similar mechanism 
you would actually engineer the solution such that an internet connection 
wasn’t required. If you have every VM updating over the internet you end up 
paying for a lot of unnecessary bandwidth. You would instead make sure you have 
internal cloned patch repositories which you synchronize hourly/daily  - which 
means all user VMs only pull patches on the internal network. You could even 
“eat your own dogfood/drink your own champagne” and host this on one of the 
accounts in the same CloudStack infrastructure – then simply set up connection 
on the public network. That way the update traffic isn’t ever leaving your 
switches per se.

Not sure how AWS etc. do this, but they have deep pockets…

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

    On 19/01/2017, 13:31, "S. Brüseke - proIO GmbH" <s.brues...@proio.com> 
wrote:

@Dag: Thanks for the confirmation and for the link.

@Rene: Of course it is the user's responsibility, but we want to 
provide a VM with the latest updates each time you deploy a new VM. :-) I know 
that cloud-init can do this on boot, but

Re: Template management

2017-01-23 Thread S . Brüseke - proIO GmbH
I did some testing and want to share my findings:
When using local storage, a way to delete old templates which are stuck because 
of a XenServer chain is to perform a live migration and move the VM to another 
host. The chain is removed, and after the CS cleanup job has run, the template 
is deleted too. Any idea how we can use this? 
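
A hedged cloudmonkey sketch of triggering that (the UUIDs are placeholders; 
migrate virtualmachinewithvolume is the storage live migration call for running 
VMs on XenServer):

    list hosts virtualmachineid=<vm-uuid>       # suitable migration targets
    migrate virtualmachinewithvolume virtualmachineid=<vm-uuid> hostid=<target-host-uuid>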

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Thursday, 19 January 2017 15:34
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen,

Assuming you are using advanced zones my idea below would involve:

1) Create a patching account in your CloudStack environment.
2) Spin up your repo clone boxes in this account – and configure these with 
some sort of nightly synch with the RHEL / Ubuntu / CentOS etc. yum 
repositories.
3) On the public IP address for the patching account configure firewalling / 
NATing to allow anyone from the same public IP range to access the repo boxes.
4) Configure a DNS entry for this IP address on the DNS servers used by your 
CloudStack infrastructure.
5) Configure cloud-init or similar to check for updates on the DNS server name 
– either on reboot or with a cron type job on a specific date of the month.

Just one idea, there will be many ways to do this. The synched repo boxes don’t 
need to be hosted in CloudStack, they could just be hosted externally on an IP 
address accessible from your public range.
The other thing is you probably want your end users to be able to opt in or out 
of this mechanism, so you may want to put in place some user key/values to 
control this. If you wanted you could also rig up some automation where the VM 
is snapshot’ed prior to patching so users have a rollback point.
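
A rough sketch of the nightly synch from step 2, assuming CentOS-style repos 
mirrored into a web root served to the guest networks (the paths and repo IDs 
are assumptions):

    # /etc/cron.d/repo-sync on the repo clone box
    0 2 * * * root reposync -p /var/www/html/repos --repoid=base --repoid=updates && createrepo /var/www/html/repos/updates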

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 19/01/2017, 14:09, "S. Brüseke - proIO GmbH" <s.brues...@proio.com> wrote:

Hi Dag,

how can I provide connection to an internal repo for all networks in my CS 
installation by default?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Thursday, 19 January 2017 14:41
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen,

If you wanted to do this on boot with cloud-init or a similar mechanism you 
would actually engineer the solution such that an internet connection wasn’t 
required. If you have every VM updating over the internet you end up paying for 
a lot of unnecessary bandwidth. You would instead make sure you have internal 
cloned patch repositories which you synchronize hourly/daily  - which means all 
user VMs only pull patches on the internal network. You could even “eat your 
own dogfood/drink your own champagne” and host this on one of the accounts in 
the same CloudStack infrastructure – then simply set up connection on the 
public network. That way the update traffic isn’t ever leaving your switches 
per se.

Not sure how AWS etc. do this, but they have deep pockets…

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 19/01/2017, 13:31, "S. Brüseke - proIO GmbH" <s.brues...@proio.com> 
wrote:

@Dag: Thanks for the confirmation and for the link.

@Rene: Of course it is the user's responsibility, but we want to 
provide a VM with the latest updates each time you deploy a new VM. :-) I know 
that cloud-init can do this on boot, but what if the network has no internet 
connection?

Does anybody know how AWS or DigitalOcean is handling this?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Rene Moser [mailto:m...@renemoser.net] 
Sent: Thursday, 19 January 2017 11:03
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen

    On 01/19/2017 10:04 AM, S. Brüseke - proIO GmbH wrote:

> I am really interested in other solutions and workflows, so please 
> shoot. :-)

We decided not to do template updates for "system updates", or to minimize 
them (1-2 updates per year), for two main reasons:

1. It is the user's responsibility to keep systems up to date anyway.
2. Using cfg management and/or cloud-init makes it easy to update 
systems.

Regards
René



Re: Template management

2017-01-19 Thread S . Brüseke - proIO GmbH
Hi Dag,

how can I provide a connection to an internal repo for all networks in my CS 
installation by default?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Dag Sonstebo [mailto:dag.sonst...@shapeblue.com] 
Sent: Thursday, 19 January 2017 14:41
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen,

If you wanted to do this on boot with cloud-init or a similar mechanism you 
would actually engineer the solution such that an internet connection wasn’t 
required. If you have every VM updating over the internet you end up paying for 
a lot of unnecessary bandwidth. You would instead make sure you have internal 
cloned patch repositories which you synchronize hourly/daily  - which means all 
user VMs only pull patches on the internal network. You could even “eat your 
own dogfood/drink your own champagne” and host this on one of the accounts in 
the same CloudStack infrastructure – then simply set up connection on the 
public network. That way the update traffic isn’t ever leaving your switches 
per se.

Not sure how AWS etc. do this, but they have deep pockets…

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 19/01/2017, 13:31, "S. Brüseke - proIO GmbH" <s.brues...@proio.com> wrote:

@Dag: Thanks for the confirmation and for the link.

@Rene: Of course it is the user's responsibility, but we want to provide a 
VM with the latest updates each time you deploy a new VM. :-) I know that 
cloud-init can do this on boot, but what if the network has no internet 
connection?

Does anybody know how AWS or DigitalOcean is handling this?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Rene Moser [mailto:m...@renemoser.net] 
Sent: Thursday, 19 January 2017 11:03
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen

On 01/19/2017 10:04 AM, S. Brüseke - proIO GmbH wrote:

> I am really interested in other solutions and workflows, so please 
> shoot. :-)

We decided not to do template updates for "system updates", or to minimize them 
(1-2 updates per year), for two main reasons:

1. It is the user's responsibility to keep systems up to date anyway.
2. Using cfg management and/or cloud-init makes it easy to update systems.
systems.

Regards
René







dag.sonst...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue
  
 







Re: Template management

2017-01-19 Thread S . Brüseke - proIO GmbH
@Dag: Thanks for the confirmation and for the link.

@Rene: Of course it is the user's responsibility, but we want to provide a VM 
with the latest updates each time a new VM is deployed. :-) I know that 
cloud-init can do this on boot, but what if the network has no internet 
connection?

Does anybody know how AWS or DigitalOcean handles this?

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Rene Moser [mailto:m...@renemoser.net] 
Sent: Thursday, 19 January 2017 11:03
To: users@cloudstack.apache.org
Subject: Re: Template management

Hi Swen

On 01/19/2017 10:04 AM, S. Brüseke - proIO GmbH wrote:

> I am really interested in other solutions and workflows, so please 
> shoot. :-)

We decided not to do template updates for "system updates", or to minimize them 
(1-2 updates per year), for two main reasons:

1. It is the user's responsibility to keep systems up to date anyway.
2. Using cfg management and/or cloud-init makes it easy to update systems.

Regards
René






Template management

2017-01-19 Thread S . Brüseke - proIO GmbH
Hey guys,

I have a question regarding templates and how you manage them in your CS 
installation.

We are planning to create new templates for Debian, CentOS and Ubuntu each 
month to keep them up to date, because we have a short lifecycle for servers. 
Does anybody do the same or have other workflows for this?

One big downside of this (as far as I understand CS) is that our primary 
storage (we are using XenServer) gets filled up with templates: CS does not 
delete a template until all VMs created from it are expunged. Can somebody 
confirm this?

I am really interested in other solutions and workflows, so please shoot. :-)

Mit freundlichen Grüßen / With kind regards,

Swen







Re: API migrateVirtualMachine does not respect affinity group assignment

2016-11-09 Thread S . Brüseke - proIO GmbH
We ran into this "problem" too. Here are my 2 cents:

The API call should respect affinity groups, but an administrator should be 
able to force a migration (force=true).
As an administrator you cannot control (or keep in mind) all affinity groups 
when you need to evacuate a host. At the moment you can run into the situation 
that you migrate a VM to a host where another VM of the same affinity group is 
running. When you then stop this VM, you are unable to start it again, because 
the affinity group kicks in.
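
To make the idea concrete, a hedged cloudmonkey sketch (the UUIDs are 
placeholders, and the force flag in the last line is only the proposed 
parameter; it does not exist today):

    list affinitygroups virtualmachineid=<vm-uuid>   # what the migration would violate
    list hosts virtualmachineid=<vm-uuid>            # hosts CloudStack deems suitable
    migrate virtualmachine virtualmachineid=<vm-uuid> hostid=<host-uuid>
    # proposed: migrate virtualmachine virtualmachineid=<vm-uuid> hostid=<host-uuid> force=true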

Mit freundlichen Grüßen / With kind regards,

Swen


-Original Message-
From: Marc-Aurèle Brothier [mailto:ma...@exoscale.ch] 
Sent: Wednesday, 9 November 2016 08:41
To: users@cloudstack.apache.org
Subject: Re: API migrateVirtualMachine does not respect affinity group 
assignment

IMHO it's something desirable, because in case of emergency it's better to 
migrate a VM to a host that does not satisfy the anti-affinity group rather 
than leaving the VM on a host that must be shut down, for example, and losing 
the VM. It's up to the admin to keep this transgression as short as possible.
Those migration API calls are always done by an admin, and therefore should 
handle such cases, which is not very complicated. I have a python script that 
does the job (
https://gist.github.com/marcaurele/dc1774b1ea13d81be702faf235bf2afe) for live 
migration, for example.

On Wed, Nov 9, 2016 at 2:47 AM, Simon Weller  wrote:

> Can you open a jira issue on this?
>
> Simon Weller/ENA
> (615) 312-6068
>
> -Original Message-
> From: Yiping Zhang [yzh...@marketo.com]
> Received: Tuesday, 08 Nov 2016, 8:03PM
> To: users@cloudstack.apache.org [users@cloudstack.apache.org]
> Subject: API migrateVirtualMachine does not respect affinity group 
> assignment
>
> Hi,
>
> It seems that the API migrateVirtualMachine does not respect 
> instance’s affinity group assignment.  Is this intentional?
>
> To reproduce:
>
> Assigning two VM instances running on different hosts, say v1 running 
> on
> h1 and v2 running on h2, to the same affinity group.  In GUI, it won’t 
> let you migrate v1 and v2 to the same host, but if you use 
> cloudmonkey,  you are able to move both instances to h1 or h2 with 
> migrateVirtualMachine API call.
>
> IMHO, the API call should return with an error message that the 
> migration is prohibited by affinity group assignment. However, if the 
> current behavior is desirable in some situations, then a parameter 
> like ignore-affinity-group=true should be passed to the API call (or 
> vice versa, depending on which behavior is chosen as the default)
>
> Yiping
>






AW: AW: AW: AW: Shared Network

2016-10-19 Thread S . Brüseke - proIO GmbH
I am sorry, but I have no idea. Maybe a bug.

Mit freundlichen Grüßen / With kind regards,

Swen

-Ursprüngliche Nachricht-
Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv] 
Gesendet: Mittwoch, 19. Oktober 2016 12:14
An: users@cloudstack.apache.org
Betreff: Re: AW: AW: AW: Shared Network

Well there is no "ERROR" message, but log says:

2016-10-19 13:07:37,669 DEBUG [c.c.a.ApiServlet]
(catalina-exec-11:ctx-810c5388) (logid:91e4d99e) ===START===
84.237.231.244 -- GET
command=createNetwork=a0dab64d-c81c-4030-bbf9-c62157cdbad9=bf738b2a-143c-433b-8e29-d266daa3c28a=d8a8e83e-4afd-4f15-abb8-fcd53d5a61a6=Public%20Network=Public%20IP%20for%20Instance=1=domain=X.X.X.1=255.255.255.0=X.X.X.2=X.X.X.29=json&_=1476871657611
2016-10-19 13:07:37,721 DEBUG [c.c.n.g.BigSwitchBcfGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network, the physical isolation type is not BCF_SEGMENT
2016-10-19 13:07:37,722 DEBUG [o.a.c.n.c.m.ContrailGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network
2016-10-19 13:07:37,722 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) design called
2016-10-19 13:07:37,723 DEBUG [c.c.n.g.MidoNetGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network, the physical isolation type is not MIDO
2016-10-19 13:07:37,724 DEBUG [c.c.n.g.NiciraNvpGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network
2016-10-19 13:07:37,725 DEBUG [o.a.c.n.o.OpendaylightGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network
2016-10-19 13:07:37,726 DEBUG [c.c.n.g.OvsGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network
2016-10-19 13:07:37,744 DEBUG [c.c.a.t.Request]
(StatsCollector-1:ctx-f02ca0a3) (logid:20b8a596) Seq
27-3923479700369858755: Received:  { Ans: , MgmtId: 95537004648, via: 
27(xs4.failiem.lv), Ver: v1, Flags: 10, { GetHostStatsAnswer } }
2016-10-19 13:07:37,750 DEBUG [o.a.c.n.g.SspGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) SSP not 
configured to be active
2016-10-19 13:07:37,752 DEBUG [c.c.n.g.BrocadeVcsGuestNetworkGuru]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Refusing to 
design this network
2016-10-19 13:07:37,753 DEBUG [o.a.c.e.o.NetworkOrchestrator]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Releasing lock 
for Acct[f0e76687-48f1-11e6-a9f1-00163e44393e-system]
2016-10-19 13:07:37,755 DEBUG [c.c.c.ConfigurationManagerImpl]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Access granted to 
Acct[f0e7fdf6-48f1-11e6-a9f1-00163e44393e-admin] to zone:9 by 
AffinityGroupAccessChecker
2016-10-19 13:07:37,758 DEBUG [c.c.u.d.T.Transaction]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) Rolling back the 
transaction: Time = 47 Name =  catalina-exec-11; called by
-TransactionLegacy.rollback:879-TransactionLegacy.removeUpTo:822-TransactionLegacy.close:646-Transaction.execute:43-NetworkServiceImpl.commitNetwork:1344-NetworkServiceImpl.createGuestNetwork:1307-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:57-DelegatingMethodAccessorImpl.invoke:43-Method.invoke:606-AopUtils.invokeJoinpointUsingReflection:317-ReflectiveMethodInvocation.invokeJoinpoint:183
2016-10-19 13:07:37,766 INFO  [c.c.a.ApiServer]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) The IP range with 
tag: vlan://1 in zone LTC-DC has overlapped with the subnet. 
Please specify a different gateway/netmask.
2016-10-19 13:07:37,768 DEBUG [c.c.a.ApiServlet]
(catalina-exec-11:ctx-810c5388 ctx-924d0503) (logid:91e4d99e) ===END===
84.237.231.244 -- GET
command=createNetwork=a0dab64d-c81c-4030-bbf9-c62157cdbad9=bf738b2a-143c-433b-8e29-d266daa3c28a=d8a8e83e-4afd-4f15-abb8-fcd53d5a61a6=Public%20Network=Public%20IP%20for%20Instance=1=domain=X.X.X.1=255.255.255.0=X.X.X.2=X.X.X.29=json&_=1476871657611
2016-10-19 13:07:39,026 DEBUG [c.c.a.ApiServlet]
(catalina-exec-15:ctx-16903321) (logid:36cd14d7) ===START===
84.237.231.244 -- GET
command=listNetworks=e657df59-c99d-4c82-ab8f-07ee4f4b20f3=true=json=true&_=1476871658973

Jānis Andersons
http://serveri.failiem.lv
http://files.fm
http://failiem.lv
mobile: +371 26606064
j...@failiem.lv

On 19.10.2016 09:58, S. Brüseke - proIO GmbH wrote:
> Can you please post the error mesg from management server log?
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
>
> -Ursprüngliche Nachricht-
> Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv]
> Gesendet: Dienstag, 18. Oktober 2016 18:10
> An: users@cloudstack.apache.org
> Betreff: Re: AW: AW: Shared Network
>
> Yup.
> I added fist range from x.x.x.30 to x.x.x.178 

AW: AW: AW: Shared Network

2016-10-19 Thread S . Brüseke - proIO GmbH
Can you please post the error message from the management server log?

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv] 
Gesendet: Dienstag, 18. Oktober 2016 18:10
An: users@cloudstack.apache.org
Betreff: Re: AW: AW: Shared Network

Yup.
I added the first range from x.x.x.30 to x.x.x.178 with gw x.x.x.1 and netmask 
255.255.255.0. And when I try to add a second range for the shared network 
from x.x.x.2 to x.x.x.29 with gw x.x.x.1 and netmask 255.255.255.0, I get the 
previous error.

Jānis Andersons
http://serveri.failiem.lv
http://files.fm
http://failiem.lv
mobile: +371 26606064
j...@failiem.lv

On 18.10.2016 19:01, S. Brüseke - proIO GmbH wrote:
> Did you used a different start ip?
>
> I have added the first range from x.x.x.2 to x.x.x.150 and a second from 
> x.x.x.151 to x.x.x.254.
> vlan gateway and netmask are the same.
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
>
> -Ursprüngliche Nachricht-
> Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv]
> Gesendet: Dienstag, 18. Oktober 2016 17:59
> An: users@cloudstack.apache.org
> Betreff: Re: AW: Shared Network
>
> Well there is a problem that i get error:
> The IP range with tag: vlan://1 in zone LTC-DC has overlapped with the 
> subnet. Please specify a different gateway/netmask.
>
> Problem is that I already added x.x.x.x/24 subnet
>
> --
> Janis
>
> On 18.10.2016 18:47, S. Brüseke - proIO GmbH wrote:
>> Just add the other range to your cloudstack.
>>
>> Mit freundlichen Grüßen / With kind regards,
>>
>> Swen
>>
>>
>> -Ursprüngliche Nachricht-
>> Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv]
>> Gesendet: Dienstag, 18. Oktober 2016 17:41
>> An: users@cloudstack.apache.org
>> Betreff: Shared Network
>>
>> I have added 150 IP addresses from one subnet to my ACS with Advanced 
>> networking.
>> Is it possible to add Shared Network with remaining IP addresses from the 
>> same subnet?
>>
>>
>> --
>> Janis Andersons
>>
>>
>>
>
>
>
>







AW: AW: Shared Network

2016-10-18 Thread S . Brüseke - proIO GmbH
Did you use a different start IP?

I have added the first range from x.x.x.2 to x.x.x.150 and a second from 
x.x.x.151 to x.x.x.254.
vlan gateway and netmask are the same.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv] 
Gesendet: Dienstag, 18. Oktober 2016 17:59
An: users@cloudstack.apache.org
Betreff: Re: AW: Shared Network

Well, there is a problem: I get the error:
The IP range with tag: vlan://1 in zone LTC-DC has overlapped with the subnet. 
Please specify a different gateway/netmask.

The problem is that I already added the x.x.x.x/24 subnet.

--
Janis

On 18.10.2016 18:47, S. Brüseke - proIO GmbH wrote:
> Just add the other range to your cloudstack.
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
>
> -Ursprüngliche Nachricht-
> Von: Jānis Andersons | Failiem.lv [mailto:j...@failiem.lv]
> Gesendet: Dienstag, 18. Oktober 2016 17:41
> An: users@cloudstack.apache.org
> Betreff: Shared Network
>
> I have added 150 IP addresses from one subnet to my ACS with Advanced 
> networking.
> Is it possible to add Shared Network with remaining IP addresses from the 
> same subnet?
>
>
> --
> Janis Andersons
>
>
>
>
>







AW: Storagemigration / Primary Storages

2016-05-13 Thread S . Brüseke - proIO GmbH
I am not sure whether the workflow deletes a volume after it has been copied 
to another storage, but I know that there is a cleanup job. Take a look at the 
global setting:

storage.cleanup.interval
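
You can check and adjust it via cloudmonkey, e.g. (a sketch):

list configurations name=storage.cleanup.interval
update configuration name=storage.cleanup.interval value=3600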

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Stephan Seitz [mailto:s.se...@secretresearchfacility.com] 
Gesendet: Freitag, 13. Mai 2016 10:12
An: users@cloudstack.apache.org
Betreff: Re: Storagemigration / Primary Storages

Sanjeev,

thanks for your response. As you said, CS will delete the volumes from the 
source storage, but I'd expect that to happen immediately after a successful 
migration.
I don't think this happened correctly. Is there an easy way to track a 
CS volume id down to the Xen VBDs, the Xen VDI and the respective LV (LVMoHBA)?
Then I could check removal tasks against the LVs.
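
One way to cross-check (a sketch; it assumes the default "cloud" database and 
an LVM-based SR, so verify the names on your setup):

mysql -u cloud -p cloud -e "SELECT id, name, path, state FROM volumes WHERE id=<cs-volume-id>;"
# on XenServer the volume's "path" is the VDI uuid:
xe vdi-list uuid=<vdi-uuid> params=all
xe vbd-list vdi-uuid=<vdi-uuid>
# on LVHD SRs the VDI is backed by an LV named VHD-<vdi-uuid>:
lvs | grep <vdi-uuid>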

Thanks in advance!

- Stephan

On Fr, 2016-05-13 at 06:05 +, Sanjeev Neelarapu wrote:
> Hi Stephan,
> 
> Once the volume migration is successful then only CS will delete it 
> from the source storage. Please make sure that there are no issues 
> with volume migrations.
> 
> Best Regards,
> Sanjeev N
> Chief Product Engineer, Accelerite
> Off: +91 40 6722 9368 | EMail: sanjeev.neelar...@accelerite.com
> 
> 
> -Original Message-
> From: Stephan Seitz [mailto:s.se...@secretresearchfacility.com]
> Sent: Thursday, May 12, 2016 9:49 PM
> To: users@cloudstack.apache.org
> Subject: Storagemigration / Primary Storages
> 
> Hi!
> 
> We're currently migrating volumes from one to another storage with the 
> goal to get the source LUN freed to finally remove the whole storage.
> This runs w/ ACS 4.8 and XenServer 6.5 with attached FC-Storages.
> 
> Interestingly, the free space not only decreases (as expected) on the 
> target LUN. Also the source LUN is running full during this progress.
> 
> By now, I did'nt dug too deep, but maybe anyone had seen this issue.
> too? And maybe could give a hint for the reason? ;)
> 
> What we had was:
> SAS-LUN   w/ Tag SAS
> SATA-LUN w/ Tag SATA
> 
> Every offering is configured with the respective Tags.
> 
> What we prepared:
> SAS-LUN2 w/ Tags SAS,SASNEW
> SATA-LUN2 w/ Tags SATA,SATAMEW
> SAS-LUN w/ Tag SASOLD (changed from SAS) SATA-LUN w/ Tag SATAOLD 
> (changed from SATA)
> 
> Most of the volumes are migrated live via cloudmonkey as simple as:
> 
> migrate volume volumeid=[somevolume-on-"old"-lun] storageid=SATA-LUN2 
> livemigrate=true
> 
> Some of the migration-jobs ran into ACS timouts until we changed 
> job.cancel.threshold.minutes to 240 (some of the bigger volumes took 
> some amount of time).
> 
> Thanks for any suggestions.
> 
> - Stephan
> 
> 
> 
> 
> 
> 






AW: Migrate system VMs from a cluster to another

2016-04-06 Thread S . Brüseke - proIO GmbH
Hi Ugo,

put the hosts of the old cluster into maintenance mode and CS will start the 
system VMs on another cluster. There will be a short downtime.
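
Via cloudmonkey that is roughly (a sketch, the id is a placeholder):

prepare hostformaintenance id=<host-uuid>
# and once the host is evacuated / serviced:
cancel hostmaintenance id=<host-uuid>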

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Ugo Vasi [mailto:ugo.v...@procne.it] 
Gesendet: Mittwoch, 6. April 2016 09:29
An: users@cloudstack.apache.org
Betreff: Migrate system VMs from a cluster to another

Hi all,
I have to dispose of the servers in a cluster and migrate all the VMs to a new 
one, but I have a problem with the system VMs.

The servers in the two clusters are different, so I cannot do a live migration.

I tried to stop and restart the console-proxy and, after a couple of tries, it 
ended up in the new cluster.

The SecondaryStorageVm always starts on the same host. I tried to create a new 
offering with a host tag for a host in the new cluster, but I cannot assign 
it: while the VM is stopped it is not possible to change its parameters, and 
the VM starts again automatically.

Regards, Ugo

-- 

   U g o   V a s i
   P r o c n e  s.r.l.
   via Cotonificio 45  33010 Tavagnacco IT
   phone: +390432486523 fax: +390432486523








AW: snapshot cleanup on hypervisor primary storage

2016-04-05 Thread S . Brüseke - proIO GmbH
XenServer is not deleting snapshots automatically when you delete a vm; 
XenCenter asks you which snapshots you want to delete.
I do not think this is a problem of our environment; I really think this 
happens in every CS / XenServer installation.

Can somebody confirm this?

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Pavan Bandarupally [mailto:pavan.bandarupa...@accelerite.com] 
Gesendet: Dienstag, 5. April 2016 14:03
An: S. Brüseke - proIO GmbH; users@cloudstack.apache.org
Betreff: RE: snapshot cleanup on hypervisor primary storage

Hi Swen,

CloudStack just hands the snapshot operation over to XenServer via XenServer 
commands. As mentioned earlier, if the volume (VDI/VBD) is deleted, all the 
associated snapshots will be cleaned up by XenServer. 

My assumption here is that the volume you deleted from CloudStack was not 
deleted on the XenServer, and hence the snapshots are not getting deleted. 
Can you please check on XenServer whether the VDI chain (for the deleted 
volume) is still present in the SR?

I am not exactly aware of any changes in functionality since 4.5.1, but you 
can try one thing to isolate whether the issue is with the CS version or 
XenServer: create a standalone VM on XenServer and take a snapshot of the VDI, 
then delete the VDI and check whether the snapshots are deleted.

Regards,
Pavan
-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Tuesday, April 05, 2016 4:54 PM
To: Pavan Bandarupally; users@cloudstack.apache.org
Subject: AW: snapshot cleanup on hypervisor primary storage

Hi Pavan,

XenServer is not deleting the snapshots even after doing a rescan:

Apr  5 13:20:59 cp-test-xs-2 SMGC: [24074] No work, exiting Apr  5 13:20:59 
cp-test-xs-2 SMGC: [24074] In cleanup

I am really not sure why it should do that in the first place. Can you please 
tell me more about it? Please keep in mind that we are using CS 4.5.1, not the 
newest version.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Pavan Bandarupally [mailto:pavan.bandarupa...@accelerite.com]
Gesendet: Dienstag, 5. April 2016 13:14
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: snapshot cleanup on hypervisor primary storage

Hi Swen,

Yes, if you have deleted the volume, the snapshots of that volume should be 
cleaned from primary storage by XenServer. 

Can you please run an sr scan and check if they are getting cleaned up? 
(Running sr scan will manually trigger the GC on XenServer, which should clean 
these up if the GC has not run for some reason.)
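
For example (a sketch, the uuid is a placeholder):

xe sr-list name-label=<primary-storage-name> params=uuid
xe sr-scan uuid=<sr-uuid>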

Regards,
Pavan
-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
Sent: Tuesday, April 05, 2016 4:05 PM
To: users@cloudstack.apache.org
Subject: AW: snapshot cleanup on hypervisor primary storage

Hi Pavan,

snapshots are working fine. Are you sure that snapshots on primary storage 
should be deleted?

I did some testing and observed the following:

First volume-snapshot of an instance
1. you create a volume-snapshot in CS UI
2. XenServer is taking a snapshot of the vm
3. XenServer is mounting secondary storage
4. XenServer is copying snapshot to secondary storage
5. XenServer is unmounting secondary storage

Second (or more) volume-snapshot of an instance
1. you create a volume-snapshot in CS UI
2. XenServer is taking a snapshot of the vm and uses the first snapshot as 
parent (you are unable to see this in XenCenter because the parent snapshot 
will be hidden)
3. XenServer is mounting secondary storage
4. XenServer is copying only the delta of the snapshot to secondary storage, 
into the same directory as the first snapshot
5. XenServer is unmounting secondary storage


If you delete such a snapshot via XenCenter, all following volume-snapshots 
are going to fail because of the missing parent!

If you migrate the vm to another XenServer host and create a volume-snapshot, 
it will work. The reason is that CS starts from the beginning here and handles 
it as a first snapshot. CS also uses a new folder on secondary storage for 
this volume-snapshot.

The last snapshot on XenServer will always be there, because CS needs it as 
the parent for the next one and XenServer is creating a vhd chain.

The questions here are:
1. When I delete an instance, is the snapshot on XenServer still needed?
I now think it can be deleted even if the volume-snapshot is still on 
secondary storage.

2. When I delete all volume-snapshots of the snapshot chain in the CS UI, will 
CS delete the snapshots on XenServer?
As far as I can see, no.

3. Who is cleaning up the snapshots on XenServer's primary storage when you do 
a storage migration (normal and live) to another primary storage?
Right now, nobody is doing this, and if you use storage migration a lot (with 
local storage you will use it a lot), then you end up with GBs of unwanted 
data on your primary storages.

Mit freundlichen Grüßen / With kind

AW: snapshot cleanup on hypervisor primary storage

2016-04-05 Thread S . Brüseke - proIO GmbH
Hi Pavan,

XenServer is not deleting the snapshots even after doing a rescan:

Apr  5 13:20:59 cp-test-xs-2 SMGC: [24074] No work, exiting
Apr  5 13:20:59 cp-test-xs-2 SMGC: [24074] In cleanup

I am really not sure why it should do that in the first place. Can you please 
tell me more about it? Please keep in mind that we are using CS 4.5.1, not the 
newest version.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Pavan Bandarupally [mailto:pavan.bandarupa...@accelerite.com] 
Gesendet: Dienstag, 5. April 2016 13:14
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: snapshot cleanup on hypervisor primary storage

Hi Swen,

Yes, if you have deleted the volume, the snapshots of that volume should be 
cleaned from primary storage by XenServer. 

Can you please run an sr scan and check if they are getting cleaned up? 
(Running sr scan will manually trigger the GC on XenServer, which should clean 
these up if the GC has not run for some reason.)

Regards,
Pavan
-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Tuesday, April 05, 2016 4:05 PM
To: users@cloudstack.apache.org
Subject: AW: snapshot cleanup on hypervisor primary storage

Hi Pavan,

snapshots are working fine. Are you sure that snapshots on primary storage 
should be deleted?

I did some testing and observed the following:

First volume-snapshot of an instance
1. you create a volume-snapshot in CS UI
2. XenServer is taking a snapshot of the vm
3. XenServer is mounting secondary storage
4. XenServer is copying snapshot to secondary storage
5. XenServer is unmounting secondary storage

Second (or more) volume-snapshot of an instance
1. you create a volume-snapshot in CS UI
2. XenServer is taking a snapshot of the vm and uses the first snapshot as 
parent (you are unable to see this in XenCenter because the parent snapshot 
will be hidden)
3. XenServer is mounting secondary storage
4. XenServer is copying only the delta of the snapshot to secondary storage, 
into the same directory as the first snapshot
5. XenServer is unmounting secondary storage


If you delete such a snapshot via XenCenter, all following volume-snapshots 
are going to fail because of the missing parent!

If you migrate the vm to another XenServer host and create a volume-snapshot, 
it will work. The reason is that CS starts from the beginning here and handles 
it as a first snapshot. CS also uses a new folder on secondary storage for 
this volume-snapshot.

The last snapshot on XenServer will always be there, because CS needs it as 
the parent for the next one and XenServer is creating a vhd chain.

The questions here are:
1. When I delete an instance, is the snapshot on XenServer still needed?
I now think it can be deleted even if the volume-snapshot is still on 
secondary storage.

2. When I delete all volume-snapshots of the snapshot chain in the CS UI, will 
CS delete the snapshots on XenServer?
As far as I can see, no.

3. Who is cleaning up the snapshots on XenServer's primary storage when you do 
a storage migration (normal and live) to another primary storage?
Right now, nobody is doing this, and if you use storage migration a lot (with 
local storage you will use it a lot), then you end up with GBs of unwanted 
data on your primary storages.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Pavan Bandarupally [mailto:pavan.bandarupa...@accelerite.com]
Gesendet: Dienstag, 5. April 2016 11:17
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: snapshot cleanup on hypervisor primary storage

Hi Swen,

I don't think this is expected. The snapshots should get cleaned from Primary 
Storage by XenServer. Can you please check if the snapshots are usable or 
corrupted ? 

Regarding deletion of snapshots on expunging the VM: that is not expected, 
because we keep the snapshots (in the secondary store) for further usage; 
templates / volumes can be created from the snapshots irrespective of whether 
the VM from whose disk the snapshot was created still exists.

Regards,
Pavan

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
Sent: Tuesday, April 05, 2016 1:21 PM
To: users@cloudstack.apache.org
Subject: snapshot cleanup on hypervisor primary storage

Hey guys,

we are using CS 4.5 with XenServer 6.5 SP1 and observed a behavior with 
volume-snapshots that will fill up your primary storage over time:

Workflow:
1. create an instance
2. create a volume-snapshot of the ROOT-disk of that instance
3. delete instance and expunge it
4. check primary storage of XenServer. The latest snapshot of each 
volume-snapshot will stay on primary storage and is not being deleted even 
after waiting for storage.cleanup.interval

Can somebody reproduce this?

As far as I understand the workflow of volume-snapshots, CS will ask XenServer 
to do a snapshot of the vm and then copy this snapshot to secondary storage. 
But why

AW: snapshot cleanup on hypervisor primary storage

2016-04-05 Thread S . Brüseke - proIO GmbH
Hi Pavan,

snapshots are working fine. Are you sure that snapshots on primary storage 
should be deleted?

I did some testing and observed the following:

First volume-snapshot of an instance
1. you create a volume-snapshot in CS UI
2. XenServer is taking a snapshot of the vm
3. XenServer is mounting secondary storage
4. XenServer is copying snapshot to secondary storage
5. XenServer is unmounting secondary storage

Second (or more) volume-snapshot of an instance
1. you create a volume-snapshot in CS UI
2. XenServer is taking a snapshot of the vm and uses the first snapshot as 
parent (you are unable to see this in XenCenter because the parent snapshot 
will be hidden)
3. XenServer is mounting secondary storage
4. XenServer is copying only the delta of the snapshot to secondary storage, 
into the same directory as the first snapshot
5. XenServer is unmounting secondary storage
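
To inspect the resulting chain on the host you can use vhd-util; a sketch for 
an LVM-based SR (adjust the path pattern for NFS/EXT SRs):

vhd-util scan -f -m "VHD-*" -l VG_XenStorage-<sr-uuid> -p
# file-based SRs, something like:
vhd-util scan -f -m "/var/run/sr-mount/<sr-uuid>/*.vhd" -p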


If you delete such a snapshot via XenCenter, all following volume-snapshots 
are going to fail because of the missing parent!

If you migrate the vm to another XenServer host and create a volume-snapshot, 
it will work. The reason is that CS starts from the beginning here and handles 
it as a first snapshot. CS also uses a new folder on secondary storage for 
this volume-snapshot.

The last snapshot on XenServer will always be there, because CS needs it as 
the parent for the next one and XenServer is creating a vhd chain.

The questions here are:
1. When I delete an instance, is the snapshot on XenServer still needed?
I now think it can be deleted even if the volume-snapshot is still on 
secondary storage.

2. When I delete all volume-snapshots of the snapshot chain in the CS UI, will 
CS delete the snapshots on XenServer?
As far as I can see, no.

3. Who is cleaning up the snapshots on XenServer's primary storage when you do 
a storage migration (normal and live) to another primary storage?
Right now, nobody is doing this, and if you use storage migration a lot (with 
local storage you will use it a lot), then you end up with GBs of unwanted 
data on your primary storages.


Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Pavan Bandarupally [mailto:pavan.bandarupa...@accelerite.com] 
Gesendet: Dienstag, 5. April 2016 11:17
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: snapshot cleanup on hypervisor primary storage

Hi Swen,

I don't think this is expected. The snapshots should get cleaned from Primary 
Storage by XenServer. Can you please check if the snapshots are usable or 
corrupted ? 

Regarding deletion of snapshots on expunging the VM: that is not expected, 
because we keep the snapshots (in the secondary store) for further usage; 
templates / volumes can be created from the snapshots irrespective of whether 
the VM from whose disk the snapshot was created still exists.

Regards,
Pavan

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Tuesday, April 05, 2016 1:21 PM
To: users@cloudstack.apache.org
Subject: snapshot cleanup on hypervisor primary storage

Hey guys,

we are using CS 4.5 with XenServer 6.5 SP1 and observed a behavior with 
volume-snapshots that will fill up your primary storage over time:

Workflow:
1. create an instance
2. create a volume-snapshot of the ROOT-disk of that instance
3. delete instance and expunge it
4. check primary storage of XenServer. The latest snapshot of each 
volume-snapshot will stay on primary storage and is not being deleted even 
after waiting for storage.cleanup.interval

Can somebody reproduce this?

As far as I understand the workflow of volume-snapshots, CS will ask XenServer 
to do a snapshot of the vm and then copy this snapshot to secondary storage. 
But why is CS not deleting the snapshot on primary storage after a successful 
copy to secondary storage? Is this a "broken" workflow, or is there a reason 
for it?

Is this the same behavior in newer releases of CS?

Mit freundlichen Grüßen / With kind regards,

Swen










snapshot cleanup on hypervisor primary storage

2016-04-05 Thread S . Brüseke - proIO GmbH
Hey guys,

we are using CS 4.5 with XenServer 6.5 SP1 and observed a behavior with 
volume-snapshots that will fill up your primary storage over time:

Workflow:
1. create an instance
2. create a volume-snapshot of the ROOT-disk of that instance
3. delete instance and expunge it
4. check primary storage of XenServer. The latest snapshot of each 
volume-snapshot will stay on primary storage and is not being deleted even 
after waiting for storage.cleanup.interval

Can somebody reproduce this?

As far as I understand the workflow of volume-snapshots, CS will ask XenServer 
to do a snapshot of the vm and then copy this snapshot to secondary storage. 
But why is CS not deleting the snapshot on primary storage after a successful 
copy to secondary storage? Is this a "broken" workflow, or is there a reason 
for it?

Is this the same behavior in newer releases of CS?

Mit freundlichen Grüßen / With kind regards,

Swen








AW: Storage decommissioning

2016-04-04 Thread S . Brüseke - proIO GmbH
Hi,

what do you mean? XenServer is taking care of the metadata backup if you 
enabled it. As far as I know XenServer will update the metadata.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Alessandro Caviglione [mailto:c.alessan...@gmail.com] 
Gesendet: Sonntag, 3. April 2016 23:20
An: users@cloudstack.apache.org
Betreff: Re: Storage decommissioning

Ok guys... we're about to finish the migration of the primary storage... but I 
have a question.
Since we've moved all the instances' volumes (ROOT and DATA) through ACS 
(4.5), should we consider anything about the VMs' metadata?
We're running XS 6.2 SP1.

Will moving the volumes also move the metadata?

Regards.

On Thu, Mar 24, 2016 at 1:13 PM, Alessandro Caviglione < 
c.alessan...@gmail.com> wrote:

> Thanks for your feedback, I am now a bit more relaxed! :-
>
> On Thu, Mar 24, 2016 at 12:07 PM, S. Brüseke - proIO GmbH < 
> s.brues...@proio.com> wrote:
>
>> Don't hesitate to ask if you have more question. We did the exact 
>> same job. We migrated primary and secondary storage. Secondary was 
>> horror, primary is not a problem at all. ;-)
>>
>>
>>
>> Mit freundlichen Grüßen / With kind regards,
>>
>>
>>
>> Swen
>>
>>
>>
>> *Von:* Alessandro Caviglione [mailto:c.alessan...@gmail.com]
>> *Gesendet:* Donnerstag, 24. März 2016 12:06
>> *An:* S. Brüseke - proIO GmbH
>> *Betreff:* Re: Storage decommissioning
>>
>>
>>
>> Ok, great!
>>
>> I'll inform you about the operations... :)
>>
>>
>>
>> On Thu, Mar 24, 2016 at 11:45 AM, S. Brüseke - proIO GmbH < 
>> s.brues...@proio.com> wrote:
>>
>> You do not need to move them. If you put the primary storage they are 
>> located into maintenance than CS will recreate VRs and System VMs on 
>> another primary storage.
>> This will include a short downtime for each VR, but CS will do this 
>> step by step; VR after VR
>>
>> Mit freundlichen Grüßen / With kind regards,
>>
>> Swen
>>
>>
>> -Ursprüngliche Nachricht-
>> Von: Alessandro Caviglione [mailto:c.alessan...@gmail.com]
>> Gesendet: Donnerstag, 24. März 2016 11:15
>> An: users@cloudstack.apache.org
>> Betreff: Re: Storage decommissioning
>>
>>
>> Hi,
>> thank you for your suggests... I'm going to move VMs ROOT and DATA to 
>> the new storage using CS GUI.
>> Our environment is CS 4.5.2 with XenServer 6.2 SP1.
>> I think that the problem shoud arrive when we've to move secondary 
>> storage but one step at time.
>> Just another question How can I move VR and SSVM??
>>
>> On Thu, Mar 24, 2016 at 6:30 AM, Sanjeev Neelarapu < 
>> sanjeev.neelar...@accelerite.com> wrote:
>>
>> > Hi,
>> >
>> > Decommissioning of primary storage should not be a problem. Since 
>> > we can migrate all the disks to additional primary storage in the cluster.
>> > Coming to artifacts stored in the secondary storage, if we make the 
>> > templates as public, all the templates will be available in 
>> > additional secondary storages within the zone. For snapshots and 
>> > volumes may have to rsync and change the image_store id in all the 
>> > relevant db tables.
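
A rough sketch of the db side mentioned above (table names are from the 4.x 
schema; back up the database and verify them against your version first):

mysql -u cloud -p cloud <<'SQL'
UPDATE template_store_ref SET store_id = <new_store_id> WHERE store_id = <old_store_id>;
UPDATE snapshot_store_ref SET store_id = <new_store_id> WHERE store_id = <old_store_id>;
UPDATE volume_store_ref SET store_id = <new_store_id> WHERE store_id = <old_store_id>;
SQL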
>> >
>> > Best Regards,
>> > Sanjeev N
>> > Chief Product Engineer, Accelerite
>> > Off: +91 40 6722 9368 | EMail: sanjeev.neelar...@accelerite.com
>> >
>> >
>> > -Original Message-
>> > From: ilya [mailto:ilya.mailing.li...@gmail.com]
>> > Sent: Thursday, March 24, 2016 1:20 AM
>> > To: users@cloudstack.apache.org
>> > Subject: Re: Storage decommissioning
>> >
>> > I'd strongly suggest you play this out in the lab environment that 
>> > mimics what you need to do.
>> >
>> > On 3/23/16 12:47 PM, ilya wrote:
>> > > Alessandro,
>> > >
>> > > You told us nothing about your environment and setup. No downtime 
>> > > is only possible with specific hypervisors - like ESX and perhaps Xen.
>> > >
>> > > Regards
>> > > ilya
>> > >
>> > >
>> > > On 3/22/16 3:29 PM, Alessandro Caviglione wrote:
>> > >> Hi guys,
>> > >> I'm writing just to ask for advice...
>> > >> We're decommissionig our storage and we need to move all the VMs 
>> > >> from old storage to this new one.
>> > >> In CS we've defined a new primary storage and I think we've to 
>> > >> migrate all ROO

AW: CloudStack HA

2016-03-30 Thread S . Brüseke - proIO GmbH
Hi Martins,

you need to check XenServer logs. CS will not reboot any hypervisor.
XenServer will also reboot in some situations where Dom0 has no resources (CPU, 
RAM) left. Which version of XS are you using?

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Mārtiņš Jakubovičs [mailto:martins-li...@hostnet.lv] 
Gesendet: Mittwoch, 30. März 2016 10:14
An: users@cloudstack.apache.org
Betreff: CloudStack HA

Hello,

This morning I faced an unexpected problem: one of the XenServer hosts 
rebooted. I checked the logs and it looks like it was due to a network issue, 
but the question is why the host rebooted itself. The CloudStack XS pool is 
not HA-enabled. And as far as I know, in ACS 4.3.2 CloudStack did not manage 
host HA, or am I wrong?

Mar 30 07:00:33 cloudstack-1 heartbeat: Potential problem with
/var/run/sr-mount/858d490c-38e8-0f44-2840-c6acb98c3ae9/hb-b81b5d17-dea8-4257-a9b5-30b52229cc68:
 
not reachable since 65 seconds
Mar 30 07:00:33 cloudstack-1 heartbeat: Problem with
/var/run/sr-mount/858d490c-38e8-0f44-2840-c6acb98c3ae9/hb-b81b5d17-dea8-4257-a9b5-30b52229cc68:
 
not reachable for 65 seconds, rebooting system!

[root@cloudstack-1 ~]# xe pool-list params=all | grep ha-
   ha-enabled ( RO): false
 ha-configuration ( RO):
ha-statefiles ( RO):
 ha-host-failures-to-tolerate ( RW): 0
   ha-plan-exists-for ( RO): 0
  ha-allow-overcommit ( RW): false
 ha-overcommitted ( RO): false

So does ACS manage some kind of host HA?

XenServer 6.2
ACS 4.3.2

Best regards,
Martins







AW: Storage decommissioning

2016-03-23 Thread S . Brüseke - proIO GmbH
Hi Alessandro,

moving ROOT and DATA volumes should be no problem and can be done via the 
web interface. Some hypervisors can do this even without downtime. Which 
hypervisor do you use?

Moving your secondary storage is difficult, because there is no automation as 
far as I know. But we are not using the latest version of CS.
Can you move the IP of the secondary storage to the new storage? If so, then I 
would do:

1. rsync all files
2. stop the management service
3. wait and make sure all CS tasks are finished
4. do one more rsync
5. shut down the old storage
6. configure the same IP on the new storage
7. start the management service
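
The rsync part as a sketch (the mount points are placeholders):

rsync -avH --progress /mnt/secondary-old/ /mnt/secondary-new/
# after stopping the management server and letting jobs drain:
rsync -avH --delete /mnt/secondary-old/ /mnt/secondary-new/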


Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Alessandro Caviglione [mailto:c.alessan...@gmail.com] 
Gesendet: Dienstag, 22. März 2016 23:30
An: users@cloudstack.apache.org
Betreff: Storage decommissioning

Hi guys,
I'm writing just to ask for advice...
We're decommissioning our storage and we need to move all the VMs from the old 
storage to this new one.
In CS we've defined a new primary storage, and I think we have to migrate all 
ROOT and DATA disks from the old PRIMARY to the new PRIMARY; this should be 
done without interruption of the service.
The new storage will also host SECONDARY storage, so we also need to move 
templates, snapshots and other things.
How can we do it?

Thank you!!






AW: Persisting Source IP on Load Balancers

2016-03-19 Thread S . Brüseke - proIO GmbH
Hi Len,

I am not aware of a solution for ssl traffic here. 
A workaround would be to remove the ssl load balancing from the VR and create 
an nginx instance doing the ssl lb.
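
A minimal nginx sketch for that (server names, ips and cert paths are 
placeholders):

upstream web_backend {
    server 10.1.1.10:80;
    server 10.1.1.11:80;
}
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;
    location / {
        proxy_pass http://web_backend;
        # hand the real client ip to the backends
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}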

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Len Bellemore [mailto:len.bellem...@alternativenetworks.com] 
Gesendet: Mittwoch, 16. März 2016 18:22
An: S. Brüseke - proIO GmbH; users@cloudstack.apache.org
Betreff: RE: Persisting Source IP on Load Balancers

Thanks Swen,

OK, then I suppose my next question would be, could I then terminate the SSL on 
the virtual router, and then follow your suggestion?

Thanks
Len

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: 16 March 2016 16:53
To: users@cloudstack.apache.org
Cc: Bellemore, Len - Data Analytics
Subject: AW: Persisting Source IP on Load Balancers

Hi Len,

you need to change the LogFormat on the target servers behind the LB.

If you are using apache2 do the this:

1. open your apache2 conf file
2. add "LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %O" common_lb" to 
the LogFormat section
3. open your vhost file and swap "common" to "common_lb" in your CustomLog line.
4. Restart apache2

Now you can see the client IP in the log.

This will only work with http and not with https traffic, because the LB 
cannot look inside https traffic.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Len Bellemore [mailto:len.bellem...@alternativenetworks.com]
Gesendet: Mittwoch, 16. März 2016 16:15
An: users@cloudstack.apache.org
Betreff: Persisting Source IP on Load Balancers

Hi Guys,

Does anyone know if it is possible to preserve the source IP that is coming in 
to servers behind the virtual router load balancer?

In my web servers' logs, every connection comes from the virtual router.

Thanks
Len






AW: Persisting Source IP on Load Balancers

2016-03-19 Thread S . Brüseke - proIO GmbH
Hi Len,

you need to change the LogFormat on the target servers behind the LB.

If you are using apache2 do the this:

1. open your apache2 conf file
2. add "LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %O" common_lb" to 
the LogFormat section
3. open your vhost file and swap "common" to "common_lb" in your CustomLog line.
4. Restart apache2

Now you can see the client IP in the log.
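
Put together, the relevant bits look roughly like this (paths and names are 
placeholders):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %O" common_lb
<VirtualHost *:80>
    ServerName www.example.com
    CustomLog ${APACHE_LOG_DIR}/access.log common_lb
</VirtualHost>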

This will only work with http and not with https traffic, because the LB 
cannot look inside https traffic.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Len Bellemore [mailto:len.bellem...@alternativenetworks.com] 
Gesendet: Mittwoch, 16. März 2016 16:15
An: users@cloudstack.apache.org
Betreff: Persisting Source IP on Load Balancers

Hi Guys,

Does anyone know if it is possible to preserve the source IP that is coming in 
to servers behind the virtual router load balancer?

In my web servers' logs, every connection comes from the virtual router.

Thanks
Len






AW: AW: AW: AW: AW: AW: cloud-init and user-data/meta-data

2016-03-14 Thread S . Brüseke - proIO GmbH
Thank you very much! :-)

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Nux! [mailto:n...@li.nux.ro] 
Gesendet: Montag, 14. März 2016 15:29
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: Re: AW: AW: AW: AW: AW: cloud-init and user-data/meta-data

Sorry for image
http://storage3.static.itmages.com/i/16/0314/h_1457965735_1160614_2a9ad21577.png

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
> To: users@cloudstack.apache.org
> Sent: Monday, 14 March, 2016 14:03:12
> Subject: AW: AW: AW: AW: AW: cloud-init and user-data/meta-data

> Hi,
> 
> can someone provide the following curl request from inside an instance 
> in a CS
> 4.8 installation?
> 
> curl http://10.1.1.1/latest/meta-data/
> 
> where 10.1.1.1 is your VR of the network the instance is in.
> 
> Thank you very much!
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> -Ursprüngliche Nachricht-
> Von: Nux! [mailto:n...@li.nux.ro]
> Gesendet: Freitag, 11. März 2016 12:32
> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
> Betreff: Re: AW: AW: AW: AW: cloud-init and user-data/meta-data
> 
> Yes indeed.
> Put a script in /var/lib/cloud/scripts/per-boot, it will be executed 
> at every boot (hopefully after networking is up).
> In it you could inspect /var/lib/dhcp/blah for the full hostname or 
> domain name and amend /etc/hosts as you need.
> 
> HTH
> Lucian
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>> To: users@cloudstack.apache.org
>> Sent: Friday, 11 March, 2016 10:59:58
>> Subject: AW: AW: AW: AW: cloud-init and user-data/meta-data
> 
>> But DHCP will not alter /etc/hosts as far as I know.
>> 
>> By per-boot hack you mean to write a script and run it via cloud-init 
>> at boot time?
>> 
>> Mit freundlichen Grüßen / With kind regards,
>> 
>> Swen
>> 
>> 
>> -Ursprüngliche Nachricht-
>> Von: Nux! [mailto:n...@li.nux.ro]
>> Gesendet: Freitag, 11. März 2016 11:54
>> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
>> Betreff: Re: AW: AW: AW: cloud-init and user-data/meta-data
>> 
>> I am using cloud-init in the templates, but as I said I have not 
>> looked that much into it. I just let DHCP do its job.
>> I would look at the per-boot scripts hack. :)
>> 
>> --
>> Sent from the Delta quadrant using Borg technology!
>> 
>> Nux!
>> www.nux.ro
>> 
>> - Original Message -
>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>> To: users@cloudstack.apache.org
>>> Sent: Friday, 11 March, 2016 10:50:04
>>> Subject: AW: AW: AW: cloud-init and user-data/meta-data
>> 
>>> To me it looks like CS needs to extend the user-data/meta-data with 
>>> "hostname".
>>> 
>>> Are you using cloud-init in your templates? How to you edit 
>>> /etc/hosts with correct values?
>>> 
>>> Mit freundlichen Grüßen / With kind regards,
>>> 
>>> Swen
>>> 
>>> 
>>> -Ursprüngliche Nachricht-
>>> Von: Nux! [mailto:n...@li.nux.ro]
>>> Gesendet: Freitag, 11. März 2016 11:38
>>> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
>>> Betreff: Re: AW: AW: cloud-init and user-data/meta-data
>>> 
>>> Swen,
>>> 
>>> I am not very familiar with it, but unless they have a proper module 
>>> in place to deal with DHCP as they do with the hosts file, you can 
>>> always add a script in /var/lib/cloud/scripts/per-boot to check the 
>>> dhcp info in /var/lib/dhcp and perform stuff based on it.
>>> Not exactly kosher, but hey ..
>>> 
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>> 
>>> Nux!
>>> www.nux.ro
>>> 
>>> - Original Message -
>>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>>> To: users@cloudstack.apache.org
>>>> Sent: Friday, 11 March, 2016 10:34:22
>>>> Subject: AW: AW: cloud-init and user-data/meta-data
>>> 
>>>> yes, but cloud-init is rewriting the /etc/hosts file and it does 
>>>> not have access to DHCP information or does it?
>>>> 
>>>> Mit freundliche

AW: AW: AW: AW: AW: cloud-init and user-data/meta-data

2016-03-14 Thread S . Brüseke - proIO GmbH
Hi,

can someone run the following curl request from inside an instance in a CS 4.8 
installation and post the output?

curl http://10.1.1.1/latest/meta-data/

where 10.1.1.1 is your VR of the network the instance is in.

Thank you very much!
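
In case it helps with comparing, here is a small loop that dumps every key 
the VR returns (an untested sketch; 10.1.1.1 again stands in for the VR, and 
the key list may differ between CS versions):

  # list all available meta-data keys, then fetch the value of each one
  for key in $(curl -s http://10.1.1.1/latest/meta-data/); do
      echo "$key = $(curl -s http://10.1.1.1/latest/meta-data/$key)"
  done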

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Nux! [mailto:n...@li.nux.ro] 
Gesendet: Freitag, 11. März 2016 12:32
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: Re: AW: AW: AW: AW: cloud-init and user-data/meta-data

Yes indeed.
Put a script in /var/lib/cloud/scripts/per-boot, it will be executed at every 
boot (hopefully after networking is up).
In it you could inspect /var/lib/dhcp/blah for the full hostname or domain name 
and amend /etc/hosts as you need.
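
Something along these lines would do it - an untested sketch, assuming ISC 
dhclient with its leases under /var/lib/dhcp (paths differ per distro), and 
the script name is just an example:

  #!/bin/sh
  # /var/lib/cloud/scripts/per-boot/fix-hosts.sh (hypothetical name)
  # Read the domain-name option from the newest dhclient lease and
  # rewrite the 127.0.1.1 line in /etc/hosts with the full fqdn.
  DOMAIN=$(grep -h 'option domain-name ' /var/lib/dhcp/dhclient*.leases 2>/dev/null \
           | tail -1 | sed 's/.*"\(.*\)".*/\1/')
  HOST=$(hostname)
  [ -n "$DOMAIN" ] && sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $HOST.$DOMAIN $HOST/" /etc/hosts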

HTH
Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
> To: users@cloudstack.apache.org
> Sent: Friday, 11 March, 2016 10:59:58
> Subject: AW: AW: AW: AW: cloud-init and user-data/meta-data

> But DHCP will not alter /etc/hosts as far as I know.
> 
> By per-boot hack you mean to write a script and run it via cloud-init 
> at boot time?
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> -Ursprüngliche Nachricht-
> Von: Nux! [mailto:n...@li.nux.ro]
> Gesendet: Freitag, 11. März 2016 11:54
> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
> Betreff: Re: AW: AW: AW: cloud-init and user-data/meta-data
> 
> I am using cloud-init in the templates, but as I said I have not 
> looked that much into it. I just let DHCP do its job.
> I would look at the per-boot scripts hack. :)
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>> To: users@cloudstack.apache.org
>> Sent: Friday, 11 March, 2016 10:50:04
>> Subject: AW: AW: AW: cloud-init and user-data/meta-data
> 
>> To me it looks like CS needs to extend the user-data/meta-data with 
>> "hostname".
>> 
>> Are you using cloud-init in your templates? How do you edit
>> /etc/hosts with correct values?
>> 
>> Mit freundlichen Grüßen / With kind regards,
>> 
>> Swen
>> 
>> 
>> -Ursprüngliche Nachricht-
>> Von: Nux! [mailto:n...@li.nux.ro]
>> Gesendet: Freitag, 11. März 2016 11:38
>> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
>> Betreff: Re: AW: AW: cloud-init and user-data/meta-data
>> 
>> Swen,
>> 
>> I am not very familiar with it, but unless they have a proper module 
>> in place to deal with DHCP as they do with the hosts file, you can 
>> always add a script in /var/lib/cloud/scripts/per-boot to check the 
>> dhcp info in /var/lib/dhcp and perform stuff based on it.
>> Not exactly kosher, but hey ..
>> 
>> --
>> Sent from the Delta quadrant using Borg technology!
>> 
>> Nux!
>> www.nux.ro
>> 
>> - Original Message -
>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>> To: users@cloudstack.apache.org
>>> Sent: Friday, 11 March, 2016 10:34:22
>>> Subject: AW: AW: cloud-init and user-data/meta-data
>> 
>>> yes, but cloud-init is rewriting the /etc/hosts file and it does not 
>>> have access to DHCP information or does it?
>>> 
>>> Mit freundlichen Grüßen / With kind regards,
>>> 
>>> Swen
>>> 
>>> 
>>> -Ursprüngliche Nachricht-
>>> Von: Nux! [mailto:n...@li.nux.ro]
>>> Gesendet: Freitag, 11. März 2016 11:31
>>> An: S. Brüseke - proIO GmbH
>>> Cc: users@cloudstack.apache.org
>>> Betreff: Re: AW: cloud-init and user-data/meta-data
>>> 
>>> Ah I see.
>>> Well, afaik the fqdn should be provided via DHCP, right?
>>> 
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>> 
>>> Nux!
>>> www.nux.ro
>>> 
>>> - Original Message -
>>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>>> To: "Nux!" <n...@li.nux.ro>, users@cloudstack.apache.org
>>>> Sent: Friday, 11 March, 2016 10:29:41
>>>> Subject: AW: cloud-init and user-data/meta-data
>>> 
>>>> Hi Lucian,
>>>> 
>>>> no, local-hostname will provide only the hostname without the 
>>>> domain name (network name).
>>>> But this is how it is documented in cloud-init.

AW: AW: AW: AW: cloud-init and user-data/meta-data

2016-03-11 Thread S . Brüseke - proIO GmbH
But DHCP will not alter /etc/hosts as far as I know.

By per-boot hack you mean to write a script and run it via cloud-init at boot 
time?

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Nux! [mailto:n...@li.nux.ro] 
Gesendet: Freitag, 11. März 2016 11:54
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: Re: AW: AW: AW: cloud-init and user-data/meta-data

I am using cloud-init in the templates, but as I said I have not looked that 
much into it. I just let DHCP do its job.
I would look at the per-boot scripts hack. :)

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
> To: users@cloudstack.apache.org
> Sent: Friday, 11 March, 2016 10:50:04
> Subject: AW: AW: AW: cloud-init and user-data/meta-data

> To me it looks like CS needs to extend the user-data/meta-data with 
> "hostname".
> 
> Are you using cloud-init in your templates? How do you edit /etc/hosts
> with correct values?
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> -Ursprüngliche Nachricht-
> Von: Nux! [mailto:n...@li.nux.ro]
> Gesendet: Freitag, 11. März 2016 11:38
> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
> Betreff: Re: AW: AW: cloud-init and user-data/meta-data
> 
> Swen,
> 
> I am not very familiar with it, but unless they have a proper module 
> in place to deal with DHCP as they do with the hosts file, you can 
> always add a script in /var/lib/cloud/scripts/per-boot to check the 
> dhcp info in /var/lib/dhcp and perform stuff based on it.
> Not exactly kosher, but hey ..
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>> To: users@cloudstack.apache.org
>> Sent: Friday, 11 March, 2016 10:34:22
>> Subject: AW: AW: cloud-init and user-data/meta-data
> 
>> yes, but cloud-init is rewriting the /etc/hosts file and it does not 
>> have access to DHCP information or does it?
>> 
>> Mit freundlichen Grüßen / With kind regards,
>> 
>> Swen
>> 
>> 
>> -Ursprüngliche Nachricht-
>> Von: Nux! [mailto:n...@li.nux.ro]
>> Gesendet: Freitag, 11. März 2016 11:31
>> An: S. Brüseke - proIO GmbH
>> Cc: users@cloudstack.apache.org
>> Betreff: Re: AW: cloud-init and user-data/meta-data
>> 
>> Ah I see.
>> Well, afaik the fqdn should be provided via DHCP, right?
>> 
>> --
>> Sent from the Delta quadrant using Borg technology!
>> 
>> Nux!
>> www.nux.ro
>> 
>> - Original Message -
>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>> To: "Nux!" <n...@li.nux.ro>, users@cloudstack.apache.org
>>> Sent: Friday, 11 March, 2016 10:29:41
>>> Subject: AW: cloud-init and user-data/meta-data
>> 
>>> Hi Lucian,
>>> 
>>> no, local-hostname will provide only the hostname without the domain 
>>> name (network name).
>>> But this is how it is documented in cloud-init:
>>> https://github.com/number5/cloud-init/blob/master/doc/examples/cloud
>>> -
>>> c
>>> onfig.txt
>>> line 433 - 439
>>> 
>>> But I need the whole fqdn.
>>> 
>>> Mit freundlichen Grüßen / With kind regards,
>>> 
>>> Swen
>>> 
>>> 
>>> -Ursprüngliche Nachricht-
>>> Von: Nux! [mailto:n...@li.nux.ro]
>>> Gesendet: Freitag, 11. März 2016 11:22
>>> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
>>> Betreff: Re: cloud-init and user-data/meta-data
>>> 
>>> Hi Swen,
>>> 
>>> From that page:
>>> "local-hostname. The hostname of the VM"
>>> 
>>> Isn't that what you need?
>>> 
>>> Lucian
>>> 
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>> 
>>> Nux!
>>> www.nux.ro
>>> 
>>> - Original Message -
>>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>>> To: users@cloudstack.apache.org
>>>> Sent: Friday, 11 March, 2016 09:27:09
>>>> Subject: cloud-init and user-data/meta-data
>>> 
>>>> Hi to all,
>>>> 
>>>> I am testing cloud-init in our CS installation.
>>>> 
>>>> I want to recreate /etc/hosts on first boot so I use "manage_etc_hosts: true".

AW: AW: AW: cloud-init and user-data/meta-data

2016-03-11 Thread S . Brüseke - proIO GmbH
To me it looks like CS needs to extend the user-data/meta-data with "hostname".

Are you using cloud-init in your templates? How do you edit /etc/hosts with 
correct values?

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Nux! [mailto:n...@li.nux.ro] 
Gesendet: Freitag, 11. März 2016 11:38
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: Re: AW: AW: cloud-init and user-data/meta-data

Swen,

I am not very familiar with it, but unless they have a proper module in place 
to deal with DHCP as they do with the hosts file, you can always add a script 
in /var/lib/cloud/scripts/per-boot to check the dhcp info in /var/lib/dhcp and 
perform stuff based on it.
Not exactly kosher, but hey ..

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
> To: users@cloudstack.apache.org
> Sent: Friday, 11 March, 2016 10:34:22
> Subject: AW: AW: cloud-init and user-data/meta-data

> yes, but cloud-init is rewriting the /etc/hosts file and it does not 
> have access to DHCP information or does it?
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> -Ursprüngliche Nachricht-
> Von: Nux! [mailto:n...@li.nux.ro]
> Gesendet: Freitag, 11. März 2016 11:31
> An: S. Brüseke - proIO GmbH
> Cc: users@cloudstack.apache.org
> Betreff: Re: AW: cloud-init and user-data/meta-data
> 
> Ah I see.
> Well, afaik the fqdn should be provided via DHCP, right?
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>> To: "Nux!" <n...@li.nux.ro>, users@cloudstack.apache.org
>> Sent: Friday, 11 March, 2016 10:29:41
>> Subject: AW: cloud-init and user-data/meta-data
> 
>> Hi Lucian,
>> 
>> no, local-hostname will provide only the hostname without the domain 
>> name (network name).
>> But this is how it is documented in cloud-init:
>> https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-
>> c
>> onfig.txt
>> line 433 - 439
>> 
>> But I need the whole fqdn.
>> 
>> Mit freundlichen Grüßen / With kind regards,
>> 
>> Swen
>> 
>> 
>> -Ursprüngliche Nachricht-
>> Von: Nux! [mailto:n...@li.nux.ro]
>> Gesendet: Freitag, 11. März 2016 11:22
>> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
>> Betreff: Re: cloud-init and user-data/meta-data
>> 
>> Hi Swen,
>> 
>> From that page:
>> "local-hostname. The hostname of the VM"
>> 
>> Isn't that what you need?
>> 
>> Lucian
>> 
>> --
>> Sent from the Delta quadrant using Borg technology!
>> 
>> Nux!
>> www.nux.ro
>> 
>> - Original Message -
>>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>>> To: users@cloudstack.apache.org
>>> Sent: Friday, 11 March, 2016 09:27:09
>>> Subject: cloud-init and user-data/meta-data
>> 
>>> Hi to all,
>>> 
>>> I am testing cloud-init in our CS installation.
>>> 
>>> I want to recreate /etc/hosts on first boot so I use "manage_etc_hosts: 
>>> true".
>>> The recreation works but it looks like the fqdn is not provided by CS.
>>> As far as I understand cloud-init uses user-data/meta-data to get 
>>> all needed information. For fqdn it uses meta-data "hostname"
>>> (https://github.com/number5/cloud-init/blob/master/doc/examples/clou
>>> d
>>> -
>>> config.txt
>>> line 441 - 445).
>>> 
>>> Now my problem: CS does not provide "hostname" in meta-data.
>>> (http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data).
>>> Where do I get the correct domain name from?
>>> 
>>> Mit freundlichen Grüßen / With kind regards,
>>> 
>>> Swen
>>> 
>>> 
>>> 
>>> 

AW: AW: cloud-init and user-data/meta-data

2016-03-11 Thread S . Brüseke - proIO GmbH
yes, but cloud-init is rewriting the /etc/hosts file and it does not have 
access to DHCP information or does it?

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Nux! [mailto:n...@li.nux.ro] 
Gesendet: Freitag, 11. März 2016 11:31
An: S. Brüseke - proIO GmbH
Cc: users@cloudstack.apache.org
Betreff: Re: AW: cloud-init and user-data/meta-data

Ah I see.
Well, afaik the fqdn should be provided via DHCP, right?

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
> To: "Nux!" <n...@li.nux.ro>, users@cloudstack.apache.org
> Sent: Friday, 11 March, 2016 10:29:41
> Subject: AW: cloud-init and user-data/meta-data

> Hi Lucian,
> 
> no, local-hostname will provide only the hostname without the domain 
> name (network name).
> But this is how it is documented in cloud-init:
> https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-c
> onfig.txt
> line 433 - 439
> 
> But I need the whole fqdn.
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> -Ursprüngliche Nachricht-
> Von: Nux! [mailto:n...@li.nux.ro]
> Gesendet: Freitag, 11. März 2016 11:22
> An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
> Betreff: Re: cloud-init and user-data/meta-data
> 
> Hi Swen,
> 
> From that page:
> "local-hostname. The hostname of the VM"
> 
> Isn't that what you need?
> 
> Lucian
> 
> --
> Sent from the Delta quadrant using Borg technology!
> 
> Nux!
> www.nux.ro
> 
> - Original Message -
>> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
>> To: users@cloudstack.apache.org
>> Sent: Friday, 11 March, 2016 09:27:09
>> Subject: cloud-init and user-data/meta-data
> 
>> Hi to all,
>> 
>> I am testing cloud-init in our CS installation.
>> 
>> I want to recreate /etc/hosts on first boot so I use "manage_etc_hosts: 
>> true".
>> The recreation works but it looks like the fqdn is not provided by CS.
>> As far as I understand cloud-init uses user-data/meta-data to get all 
>> needed information. For fqdn it uses meta-data "hostname"
>> (https://github.com/number5/cloud-init/blob/master/doc/examples/cloud
>> -
>> config.txt
>> line 441 - 445).
>> 
>> Now my problem: CS does not provide "hostname" in meta-data.
>> (http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data).
>> Where do I get the correct domain name from?
>> 
>> Mit freundlichen Grüßen / With kind regards,
>> 
>> Swen
>> 
>> 
>> 
>> 
> 
> 



AW: cloud-init and user-data/meta-data

2016-03-11 Thread S . Brüseke - proIO GmbH
Hi Lucian,

no, local-hostname will provide only the hostname without the domain name 
(network name).
But this is how it is documented in cloud-init:
https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config.txt 
line 433 - 439

But I need the whole fqdn.

Mit freundlichen Grüßen / With kind regards,

Swen


-Ursprüngliche Nachricht-
Von: Nux! [mailto:n...@li.nux.ro] 
Gesendet: Freitag, 11. März 2016 11:22
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: Re: cloud-init and user-data/meta-data

Hi Swen,

From that page:
"local-hostname. The hostname of the VM"

Isn't that what you need?

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -----
> From: "S. Brüseke - proIO GmbH" <s.brues...@proio.com>
> To: users@cloudstack.apache.org
> Sent: Friday, 11 March, 2016 09:27:09
> Subject: cloud-init and user-data/meta-data

> Hi to all,
> 
> I am testing cloud-init in our CS installation.
> 
> I want to recreate /etc/hosts on first boot so I use "manage_etc_hosts: true".
> The recreation works but it looks like the fqdn is not provided by CS.
> As far as I understand cloud-init uses user-data/meta-data to get all 
> needed information. For fqdn it uses meta-data "hostname"
> (https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-
> config.txt
> line 441 - 445).
> 
> Now my problem: CS does not provide "hostname" in meta-data.
> (http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data).
> Where do I get the correct domain name from?
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> 
> 
> 






cloud-init and user-data/meta-data

2016-03-11 Thread S . Brüseke - proIO GmbH
Hi to all,

I am testing cloud-init in our CS installation.

I want to recreate /etc/hosts on first boot so I use "manage_etc_hosts: true". 
The recreation works but it looks like the fqdn is not provided by CS.
As far as I understand cloud-init uses user-data/meta-data to get all needed 
information. For fqdn it uses meta-data "hostname" 
(https://github.com/number5/cloud-init/blob/master/doc/examples/cloud-config.txt
 line 441 - 445). 

Now my problem: CS does not provide "hostname" in meta-data. 
(http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/virtual_machines.html#user-data-and-meta-data).
 Where do I get the correct domain name from?
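
As a stop-gap I am experimenting with composing the fqdn myself at boot time, 
assuming the VR hands the domain out via DHCP (untested sketch; 10.1.1.1 
stands for the VR):

  # local-hostname comes from the meta-data, the domain from the resolver
  HOST=$(curl -s http://10.1.1.1/latest/meta-data/local-hostname)
  DOMAIN=$(dnsdomainname)   # derived from the resolver config pushed via DHCP
  echo "$HOST.$DOMAIN"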

Mit freundlichen Grüßen / With kind regards,

Swen








glibc vulnerable (CVE-2015-7547)

2016-02-22 Thread S . Brüseke - proIO GmbH
Hi,

is the latest system VM template vulnerable to CVE-2015-7547 
(https://security-tracker.debian.org/tracker/CVE-2015-7547)?
I cannot find anything about it on the mailing list or the CS website.
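
For now I am checking the running system VMs by hand with something like this 
(the system VM template is Debian-based; compare the version shown against 
the fixed version listed in the Debian tracker):

  # inside the system VM: show the installed glibc package and version
  dpkg -l libc6 | awk '/^ii/ {print $2, $3}'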

Mit freundlichen Grüßen / With kind regards,

Swen








overprovisioning of local storage

2016-02-01 Thread S . Brüseke - proIO GmbH
Hi,

we are using CS 4.4.0 and it looks like we are unable to overprovision local 
storage. We configured storage.overprovisioning.factor with 4 and are using 
ext local storage on XenServer pools. Is this a known limitation/bug? And if 
so, does anybody know whether this has been fixed in newer versions?

Thanks for any help and clarification on this!
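
For anyone who wants to reproduce this, these are the numbers I compare (a 
hedged sketch via cloudmonkey; with a factor of 4 the allocatable space 
should be roughly 4x disksizetotal):

  # show total vs. allocated capacity per primary storage pool
  cloudmonkey list storagepools filter=name,disksizetotal,disksizeallocated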

Mit freundlichen Grüßen / With kind regards,

Swen








AW: Working Debian template

2016-01-07 Thread S . Brüseke - proIO GmbH
Hi Glenn,
 
thank you for your reply. Can you please check whether /etc/hosts is updated 
in your templates? As far as my testing goes, it is not updated after the 
first boot of an instance created from a template; it still shows a 
"127.0.1.1" entry. After the second boot it adds a correct entry at the end 
of the file, but still leaves the "127.0.1.1" entry in /etc/hosts.
 
Mit freundlichen Grüßen / With kind regards,
 
Swen Brüseke
 
 
proIO GmbH   
Kleyerstr. 79 - 89 / Tor 13   
D-60326 Frankfurt am Main 
 
Mail: s.brues...@proio.com
Tel:  +(49) (0) 69 739049-15
Fax:  +(49) (0) 69 739049-25  
Web:  www.proio.com
 
- Support -
Mail: supp...@proio.com
24h:  +(49) (0) 1805 522 855
 
Von: Glenn Wagner [mailto:glenn.wag...@shapeblue.com] 
Gesendet: Donnerstag, 7. Januar 2016 09:43
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: Re: Working Debian template
 
Hi,

We have created a few of these for projects, using the same documents. 
If you want a quick template you can try:
http://dl.openvm.eu/cloudstack/debian/vanilla/old/x86_64/

Thanks
Glenn

 
 
 
 
 
Glenn Wagner
Senior Consultant, ShapeBlue
s: +27 21 527 0091 | m: +27 73 917 4111
e: glenn.wag...@shapeblue.com | w: www.shapeblue.com
a: 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130, South Africa
 
 
 
____________
From: S. Brüseke - proIO GmbH <s.brues...@proio.com>
Sent: Thursday, January 7, 2016 3:09 AM
To: users@cloudstack.apache.org
Subject: Working Debian template

Hi,

has someone created a working template with Debian 7 for CS?
I used 
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.6/templates.html#creating-a-linux-template
 but still have a problem with /etc/hosts.
I also tried https://gist.github.com/makuk66/6379642 and the fork at 
https://gist.github.com/srics/8e59654004fc38bed9fe

But I can still see "127.0.1.1 localhost.cs2cloud.internal localhost" and no 
replacement with the DHCP IP and the actual hostname; the hostname itself 
was changed to the correct one.

Any ideas?

Mit freundlichen Grüßen / With kind regards,

Swen






Working Debian template

2016-01-06 Thread S . Brüseke - proIO GmbH
Hi,

has someone created a working template with Debian 7 for CS?
I used 
http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.6/templates.html#creating-a-linux-template
 but still have a problem with /etc/hosts.
I also tried https://gist.github.com/makuk66/6379642 and the fork at 
https://gist.github.com/srics/8e59654004fc38bed9fe

But I can still see "127.0.1.1 localhost.cs2cloud.internal localhost" and no 
replacement with the DHCP IP and the actual hostname; the hostname itself 
was changed to the correct one.

Any ideas?
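
What I am trying now before converting the machine into a template - just an 
experiment, not a confirmed fix:

  # inside the template VM, right before shutting it down for templating:
  sed -i '/^127\.0\.1\.1/d' /etc/hosts   # drop the stale entry baked into the image
  rm -rf /var/lib/cloud/instances/*      # force cloud-init to treat the next boot as a new instance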

Mit freundlichen Grüßen / With kind regards,

Swen








Problem with storage live migration

2015-12-18 Thread S . Brüseke - proIO GmbH
Hi,

we ran into a problem with storage live migration in CloudStack and need your 
help to verify this.
We are using XenServer with local storage.

Steps to reproduce:
1. create an instance from an ISO (not template!) on local storage
2. while instance is running migrate it to another XenServer via UI
3. please check if instance was migrated successfully

Problem:
The instance and volume are migrated to the other XenServer, but then the 
instance is shut down immediately and the UI gives an error message. After 
some minutes CS changes the status of the instance from "Running" to 
"Stopped", and if you try to start it, it will fail.
You can see that the volume is on the new XenServer local storage, but CS 
did not change pool_id and path in the volumes DB table. It looks to me like 
a broken workflow. If you create an instance from a template, everything 
works fine!

We are using CS 4.3.2. Can someone please test this in a newer release?
Thanks for helping us out!
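
To see the inconsistency directly, this is the query we run on the 
management server DB (column names as they appear in our 4.3 schema; 
<vm_id> is a placeholder):

  mysql -u cloud -p cloud -e \
    "SELECT id, name, pool_id, path FROM volumes WHERE instance_id = <vm_id>;"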

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







AW: AW: Deleting Compute Offering

2015-08-18 Thread S . Brüseke - proIO GmbH
Hi Suneel,

thank you very much for that link!

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-Ursprüngliche Nachricht-
Von: mvs babu [mailto:mvsbabu0...@outlook.com] 
Gesendet: Montag, 17. August 2015 14:08
An: users@cloudstack.apache.org
Betreff: RE: AW: Deleting Compute Offering

Hello Swen,

The link below will clarify your doubt:

http://cloudstack-administration.readthedocs.org/en/latest/service_offerings.html#modifying-or-deleting-a-service-offering

Thank you,
Suneel Mallela.

 From: s.brues...@proio.com
 To: users@cloudstack.apache.org
 Subject: AW: Deleting Compute Offering
 Date: Mon, 17 Aug 2015 12:32:11 +0200
 
 Hi Somesh,
 
 thank you very much for testing!
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen Brüseke
 
 -Ursprüngliche Nachricht-
 Von: Somesh Naidu [mailto:somesh.na...@citrix.com]
 Gesendet: Freitag, 14. August 2015 18:12
 An: users@cloudstack.apache.org
 Betreff: RE: Deleting Compute Offering
 
 That is correct, you should be able to delete a compute offering even if
 there are active VMs using it. I just performed a quick test; migrate and
 stop/start operations work fine. I even performed a Reinstall VM operation
 (which recreates a VM from its template) and it worked fine.
 
 Regards,
 Somesh
 
 
 -Original Message-
 From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
 Sent: Friday, August 14, 2015 5:04 AM
 To: users@cloudstack.apache.org
 Subject: AW: Deleting Compute Offering
 
 Hi Vadim,
 
 I am not 100% sure, but I think I was able to delete a compute offering while
 VMs were still using it.
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen
 
 -Ursprüngliche Nachricht-
 Von: Vadim Kimlaychuk [mailto:vadim.kimlayc...@elion.ee]
 Gesendet: Donnerstag, 13. August 2015 15:06
 An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
 Betreff: RE: Deleting Compute Offering
 
 Hello Swen,
 
  If I am not mistaken, you can't delete an offering if it is in use. You
 will get an error. First you must assign a new offering and then
 remove the old one.
 
 Vadim.
 
 -Original Message-
 From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
 Sent: Thursday, August 13, 2015 1:29 PM
 To: users@cloudstack.apache.org
 Subject: Deleting Compute Offering
 
 Hi,
 
 I need to delete a Compute Offering because it is using a storage tag of a 
 primary storage which we want to get rid of in the future and I do not want 
 users to still be able to deploy new VMs on this primary storage. Of course 
 it is easy to delete the offering, but what will happen with existing VMs 
 using this offering?
 Will I be still able to use live migration and start/stop them?
 We are using CS 4.3.
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen Brüseke
 
 
 

AW: Deleting Compute Offering

2015-08-17 Thread S . Brüseke - proIO GmbH
Hi Somesh,

thank you very much for testing!

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-Ursprüngliche Nachricht-
Von: Somesh Naidu [mailto:somesh.na...@citrix.com] 
Gesendet: Freitag, 14. August 2015 18:12
An: users@cloudstack.apache.org
Betreff: RE: Deleting Compute Offering

That is correct, you should be able to delete a compute offering even if 
there are active VMs using it. I just performed a quick test; migrate and 
stop/start operations work fine. I even performed a Reinstall VM operation 
(which recreates a VM from its template) and it worked fine.

Regards,
Somesh


-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
Sent: Friday, August 14, 2015 5:04 AM
To: users@cloudstack.apache.org
Subject: AW: Deleting Compute Offering

Hi Vadim,

I am not 100% sure, but I think I was able to delete a compute offering while 
VMs were still using it.

Mit freundlichen Grüßen / With kind regards,

Swen

-Ursprüngliche Nachricht-
Von: Vadim Kimlaychuk [mailto:vadim.kimlayc...@elion.ee]
Gesendet: Donnerstag, 13. August 2015 15:06
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: Deleting Compute Offering

Hello Swen,

If I am not mistaken, you can't delete an offering if it is in use. You 
will get an error. First you must assign a new offering and then remove the 
old one.

Vadim.

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
Sent: Thursday, August 13, 2015 1:29 PM
To: users@cloudstack.apache.org
Subject: Deleting Compute Offering

Hi,

I need to delete a Compute Offering because it is using a storage tag of a 
primary storage which we want to get rid of in the future and I do not want 
users to still be able to deploy new VMs on this primary storage. Of course it 
is easy to delete the offering, but what will happen with existing VMs using 
this offering?
Will I be still able to use live migration and start/stop them?
We are using CS 4.3.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







AW: Deleting Compute Offering

2015-08-14 Thread S . Brüseke - proIO GmbH
Hi Vadim,

I am not 100% sure, but I think I was able to delete a compute offering while 
VMs were still using it.

Mit freundlichen Grüßen / With kind regards,

Swen

-Ursprüngliche Nachricht-
Von: Vadim Kimlaychuk [mailto:vadim.kimlayc...@elion.ee] 
Gesendet: Donnerstag, 13. August 2015 15:06
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: Deleting Compute Offering

Hello Swen,

If I am not mistaken, you can't delete an offering if it is in use. You 
will get an error. First you must assign a new offering and then remove the 
old one.

Vadim.

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Thursday, August 13, 2015 1:29 PM
To: users@cloudstack.apache.org
Subject: Deleting Compute Offering

Hi,

I need to delete a Compute Offering because it is using a storage tag of a 
primary storage which we want to get rid of in the future and I do not want 
users to still be able to deploy new VMs on this primary storage. Of course it 
is easy to delete the offering, but what will happen with existing VMs using 
this offering?
Will I be still able to use live migration and start/stop them?
We are using CS 4.3.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







Deleting Compute Offering

2015-08-13 Thread S . Brüseke - proIO GmbH
Hi,

I need to delete a Compute Offering because it uses a storage tag of a 
primary storage that we want to retire, and I do not want users to be able 
to deploy new VMs on this primary storage any longer. Of course it is easy 
to delete the offering, but what will happen to existing VMs using this 
offering?
Will I still be able to use live migration and start/stop them?
We are using CS 4.3.
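
For reference, this is how I would move an existing VM to a replacement 
offering first (a sketch based on my reading of the API docs; 
changeServiceForVirtualMachine requires the VM to be stopped, and the UUIDs 
are placeholders):

  cloudmonkey stop virtualmachine id=<vm_uuid>
  cloudmonkey change serviceforvirtualmachine id=<vm_uuid> serviceofferingid=<new_offering_uuid>
  cloudmonkey start virtualmachine id=<vm_uuid>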

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







storage migration of vm on local storage

2015-05-29 Thread S . Brüseke - proIO GmbH
Hi,

the documentation is a little confusing about this. Is it possible in CS 4.3 
(XenServer 6.2 SP1 as hypervisor) to migrate a VM that is on local storage 
without stopping it?

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke








system VMs on local storage

2015-05-28 Thread S . Brüseke - proIO GmbH
Hi,

we are thinking about using only local storage in our CS installation. We 
are using XenServer 6.2 SP1 as hypervisor.
What about System VMs? Is it okay to change system.vm.use.local.storage to 
true and then migrate all System VMs and Router VMs to local storage?
Is it wise to put System VMs on local storage?

Thanks for sharing your experience!
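
For the record, the change itself would look like this (a sketch; as far as 
I understand, the setting is global and needs a management server restart 
before it takes effect):

  cloudmonkey update configuration name=system.vm.use.local.storage value=true
  # restart the management server, then destroy SSVM/CPVM so they are
  # recreated on local storage; CS brings them back automatically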


Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







AW: system VMs on local storage

2015-05-28 Thread S . Brüseke - proIO GmbH
Hi Glenn,

will CS create the SystemVMs after a XenServer failure automatically?
We do not want to use shared storage at all.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke


-Ursprüngliche Nachricht-
Von: Glenn Wagner [mailto:glenn.wag...@shapeblue.com] 
Gesendet: Donnerstag, 28. Mai 2015 11:13
An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Betreff: RE: system VMs on local storage

Hi,

I honestly would not put the System VMs on local storage in a production 
environment; if you have a XenServer failure the System VMs will go down, 
and you will need to recreate them, causing downtime.


Glenn Wagner
Senior Consultant, South Africa



Phone: +27 21 527 0091 | Mobile: +27 73 917 4111

glenn.wag...@shapeblue.com | www.shapeblue.com | Twitter: @shapeBlue
ShapeBlue SA (Pty) Ltd, 2nd Floor, Oudehuis Centre, 122 Main Rd, Somerset West, Cape Town 7130

-Original Message-
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
Sent: 28 May 2015 10:58 AM
To: users@cloudstack.apache.org
Subject: system VMs on local storage

Hi,

we are thinking about using only local storage in our CS installation. We are 
using XenServer 6.2 SP1 as hypervisor.
What about System VMs? Is it okay to change system.vm.use.local.storage to 
true and then migrate all System VMs and Router VMs to local storage?
Is it wise to put System VMs on local storage?

Thanks for sharing your experience!


Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







AW: AW: system VMs on local storage

2015-05-28 Thread S . Brüseke - proIO GmbH
thank you! So it would make sense to keep an NFS shared storage only for 
SystemVMs? Can we use our secondary storage for this?

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

 
proIO GmbH   
Kleyerstr. 79 - 89 / Tor 13   
D-60326 Frankfurt am Main 
 
Mail: s.brues...@proio.com
Tel:  +(49) (0) 69 739049-15
Fax:  +(49) (0) 69 739049-25  
Web:  www.proio.com
 
- Support -
Mail: supp...@proio.com
24h:  +(49) (0) 1805 522 855


-Ursprüngliche Nachricht-
Von: Prashant Kumar Mishra [mailto:prashantkumar.mis...@citrix.com] 
Gesendet: Donnerstag, 28. Mai 2015 14:59
An: users@cloudstack.apache.org
Betreff: Re: AW: system VMs on local storage

1. To recreate a router VM you need to stop/start any existing VM or deploy 
a new VM.
2. If you are changing storage (shared to local or local to shared) you need 
to destroy the existing router VM and follow step 1.


On 5/28/15, 6:12 PM, S. Brüseke - proIO GmbH s.brues...@proio.com
wrote:

How do you recreate VRs?

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-Ursprüngliche Nachricht-
Von: Koushik Das [mailto:koushik@citrix.com]
Gesendet: Donnerstag, 28. Mai 2015 14:30
An: users@cloudstack.apache.org
Betreff: Re: system VMs on local storage

SSVM and CPVM will be started simultaneously.
But virtual routers won't get created automatically.


On 28-May-2015, at 5:51 PM, S. Brüseke - proIO GmbH 
s.brues...@proio.com
 wrote:

 Hi Prashant,
 
 so the only downside is that it will take some time until CS has
recreated all SystemVMs after a host crash? Will CS create SystemVMs one by one?
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen Brüseke
 
 
 -Ursprüngliche Nachricht-
 Von: Prashant Kumar Mishra [mailto:prashantkumar.mis...@citrix.com]
 Gesendet: Donnerstag, 28. Mai 2015 14:12
 An: users@cloudstack.apache.org
 Betreff: Re: AW: system VMs on local storage
 
Yes, if your zone is in the Enabled state and a system VM goes down, CS 
will try to bring it up on shared/local storage depending on the current 
system.vm.use.local.storage setting.
 
 On 5/28/15, 5:19 PM, S. Brüseke - proIO GmbH s.brues...@proio.com
 wrote:
 
 Hi Glenn,
 
 will CS create the SystemVMs after a XenServer failure automatically?
 We do not want to use shared storage at all.
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen Brüseke
 
 
 -Ursprüngliche Nachricht-
 Von: Glenn Wagner [mailto:glenn.wag...@shapeblue.com]
 Gesendet: Donnerstag, 28. Mai 2015 11:13
 An: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
 Betreff: RE: system VMs on local storage
 
 Hi,
 
 I honestly would not put the System VMs on local storage in a
 production environment; if you have a XenServer failure the System
 VMs will go down, and you will need to recreate them, causing downtime.
 
 
 Glenn Wagner
 Senior Consultant, South Africa
 
 
 
 Phone: +27 21 527 0091 | Mobile: +27 73 917 4111
 
 glenn.wag...@shapeblue.com | www.shapeblue.com | Twitter:@shapeBlue 
 ShapeBlue SA (Pty) Ltd, 2nd Floor, Oudehuis Centre, 122 Main Rd, 
 Somerset West, Cape Town 7130
 
 -Original Message-
 From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com]
 Sent: 28 May 2015 10:58 AM
 To: users@cloudstack.apache.org
 Subject: system VMs on local storage
 
 Hi,
 
 We are thinking about using only local storage in our CS installation.
 We are using XenServer 6.2 SP1 as the hypervisor.
 What about system VMs? Is it okay to change system.vm.use.local.storage 
 to true and then migrate all system VMs and router VMs to local storage?
 Is it wise to put system VMs on local storage?
 
 Thanks for sharing your experience!
 
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen Brüseke
 
 
 

enable local storage

2015-05-28 Thread S . Brüseke - proIO GmbH
Hi,

I installed new disks in some of our XenServer hosts and added the RAID to the 
XenServer host. Now I need to add this new storage to CS.
I enabled local storage for the zone, but I am still unable to see the new 
local storage as primary storage. Is there some kind of cron job which needs 
to run before I see this storage on the infrastructure tab in the CS UI? Does 
the local storage need to use a specific name, or can I use a random name as 
the name-label inside XenServer?

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke








Re: enable local storage

2015-05-28 Thread S . Brüseke - proIO GmbH
Is it possible to attach only specific local storage to CS? We are using a 
RAID1 for the XenServer installation, and during installation XS created local 
storage out of the rest of this RAID1. We installed a new RAID5 with SSDs and 
named it localSSD. This is the local storage I want CS to install instances on.
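
To double-check which pools CloudStack has actually picked up, a sketch along 
the same lines (third-party "cs" Python client, placeholder credentials; the 
pool name should follow the XenServer SR name-label, e.g. localSSD):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt-server:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# List primary storage pools; local pools are host-scoped.
pools = api.listStoragePools()
for p in pools.get('storagepool', []):
    print(p['name'], p.get('scope'), p.get('hostid'))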

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Koushik Das [mailto:koushik@citrix.com] 
Sent: Thursday, 28 May 2015 15:57
To: users@cloudstack.apache.org
Subject: Re: enable local storage

After enabling local storage for a zone, a management server restart is 
required for local storage to be discovered on already-added hosts. For new 
hosts it gets added at the time the host is added.
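
A sketch of both steps (third-party "cs" Python client, placeholder 
credentials and zone id):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt-server:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# Enable local storage for the zone.
api.updateZone(id='replace-with-your-zone-uuid', localstorageenabled='true')

# Then restart the management server on its host so local storage on
# already-added hosts is discovered, e.g.:
#   service cloudstack-management restart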

On 28-May-2015, at 7:03 PM, S. Brüseke - proIO GmbH s.brues...@proio.com 
wrote:

 Hi,
 
 I installed new disks in some of our XenServer hosts and added the RAID to 
 the XenServer host. Now I need to add this new storage to CS.
 I enabled local storage for the zone, but I am still unable to see the new 
 local storage as primary storage. Is there some kind of cron job which needs 
 to run before I see this storage on the infrastructure tab in the CS UI? 
 Does the local storage need to use a specific name, or can I use a random 
 name as the name-label inside XenServer?
 
 Mit freundlichen Grüßen / With kind regards,
 
 Swen Brüseke
 
 
 
 
 
 








Re: cpu overprovisioning

2015-04-22 Thread S . Brüseke - proIO GmbH
Dashboard is not wrong. You only have these resources left in your CS.

Here is an example, as far as I remember:
Dashboard shows 100 GHz / 180 GHz (CPU overprovisioning is 1; 3 hosts, each 
24x 2.5 GHz)
You deploy 10 VMs with 1 GHz each
Dashboard shows 110 GHz / 180 GHz (CPU overprovisioning is 1; 3 hosts, each 
24x 2.5 GHz)
You change CPU overprovisioning to 4
Dashboard shows 440 GHz / 720 GHz (CPU overprovisioning is 4; 3 hosts, each 
24x 2.5 GHz)
You stop and start all VMs
Dashboard shows 110 GHz / 720 GHz
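
A quick check of those numbers, matching the formula mentioned elsewhere in 
this thread (host capacity = number of CPUs * GHz * overprovisioning factor):

hosts, cores, ghz = 3, 24, 2.5

# Raw cluster capacity without overprovisioning.
total = hosts * cores * ghz            # 180.0 GHz

# With a factor of 4 the dashboard total becomes:
total_over = total * 4                 # 720.0 GHz

# Allocated capacity is scaled the same way until the VMs are restarted:
allocated = 100 + 10 * 1               # 110 GHz after deploying 10 x 1 GHz VMs
allocated_over = allocated * 4         # 440 GHz shown right after the change

print(total, total_over, allocated, allocated_over)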

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Abhinandan Prateek [mailto:abhinandan.prat...@shapeblue.com] 
Sent: Wednesday, 22 April 2015 06:35
To: users@cloudstack.apache.org
Subject: Re: cpu overprovisioning

Can we fix the dashboard to reflect the available capacity given an overcommit?


 On 22-Apr-2015, at 9:56 am, Bharat Kumar bharat.ku...@citrix.com wrote:

 Also, in case of a change in the overcommit value, the total capacity will 
 change instantaneously, but the used resources and the total available will 
 only change when the capacity checker thread runs. The interval at which 
 this thread runs can be changed in the global settings.

 Thanks,
 Bharat.

 On 22-Apr-2015, at 9:50 am, Bharat Kumar bharat.ku...@citrix.com wrote:

 Hi,

 The change in the CPU overcommit factor will not change the amount of 
 resources that are available to you. It will change the way you want to use 
 the free resources available at that time. It won't change what was already 
 allocated.

 For example, if you have deployed VMs with an overcommit of say 2, and after 
 a while you change the overcommit to 3, the used resources will be scaled 
 based on the change.
 Changing the overcommit cannot create resources. It can only change 
 the way you want to use the free resources. So after the change 
 in overcommit you can deploy 3 times more VMs (of a given service offering) 
 than usual, using the resources that are free at the time of changing the 
 overcommit value. The previously allocated resources cannot be freed until 
 you restart the VMs.

 This link explains how the resource allocation calculations are made:
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/CPU+and+RAM+Overcommit

 Thanks,
 Bharat.


 On 21-Apr-2015, at 11:20 pm, Rafael Weingärtner 
 rafaelweingart...@gmail.com wrote:

 I have not looked at the code, but that seems odd: restarting VM 
 instances to update the resource usage/availability when changing 
 the overprovisioning factor.

 On Tue, Apr 21, 2015 at 12:51 PM, S. Brüseke - proIO GmbH 
 s.brues...@proio.com wrote:

 Please try to stop and start the VM. Maybe reboot is not working 
 here. You should start to see more free CPU after the first VM.

 Mit freundlichen Grüßen / With kind regards,

 Swen Brüseke

 -----Original Message-----
 From: Ugo Vasi [mailto:ugo.v...@procne.it]
 Sent: Tuesday, 21 April 2015 16:40
 To: users@cloudstack.apache.org
 Subject: Re: Re: cpu overprovisioning

 Hi Swen,
 I tried rebooting 2 VMs but the amount of CPU MHz has not changed. Do I 
 have to restart all VMs before I see a change?


 On 21/04/2015 16:16, S. Brüseke - proIO GmbH wrote:
 This is because the overprovisioning factor is attached to the instance 
 too, and this factor only changes after rebooting the instance.
 So after a change of cpu.overprovisioning.factor you need to reboot 
 all instances on this cluster which were running before the change was made.

 Mit freundlichen Grüßen / With kind regards,

 Swen Brüseke

 -----Original Message-----
 From: Ugo Vasi [mailto:ugo.v...@procne.it]
 Sent: Tuesday, 21 April 2015 16:06
 To: users@cloudstack.apache.org
 Subject: Re: cpu overprovisioning

 If I change the overprovisioning ratio in the cluster settings
 (Infrastructure - Clusters - <cluster name> - Settings), the dashboard shows 
 the overprovisioned amount of GHz (324 GHz) as expected, but after a 
 while the sum of GHz is multiplied by the same factor again, making it 
 meaningless...

 On 21/04/2015 15:26, Bharat Kumar wrote:
 In the case of CloudStack, the CPU capacity for a host is: number of CPUs * 
 GHz * overprovisioning factor.

 Total capacity = sum of the capacities of the individual hosts.

 Thanks,
 Bharat.

 On 21-Apr-2015, at 6:49 pm, Abhinandan Prateek 
 abhinandan.prat...@shapeblue.com wrote:

 Available CPU will be 3 times the actual CPU when over provisioned by 
 3.

 The UI not showing the over provisioned value seems like a bug.

 -abhi


 On 21-Apr-2015, at 6:34 pm, Ugo Vasi ugo.v...@procne.it wrote:

 Hi all,
 we have a cluster of three machines with 16 CPUs at 2.2 GHz each and we 
 have set CPU overprovisioning to 3.

 I would expect that the CPU system capacity of the dashboard

Re: cpu overprovisioning

2015-04-21 Thread S . Brüseke - proIO GmbH
This is because the overprovisioning factor is attached to the instance too, 
and this factor only changes after rebooting the instance.
So after a change of cpu.overprovisioning.factor you need to reboot all 
instances on this cluster which were running before the change was made.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Ugo Vasi [mailto:ugo.v...@procne.it] 
Sent: Tuesday, 21 April 2015 16:06
To: users@cloudstack.apache.org
Subject: Re: cpu overprovisioning

If I change the overprovisioning ratio in the cluster settings
(Infrastructure - Clusters - <cluster name> - Settings), the dashboard shows 
the overprovisioned amount of GHz (324 GHz) as expected, but after a while the 
sum of GHz is multiplied by the same factor again, making it meaningless...

On 21/04/2015 15:26, Bharat Kumar wrote:
 In the case of CloudStack, the CPU capacity for a host is: number of CPUs * 
 GHz * overprovisioning factor.

 Total capacity = sum of the capacities of the individual hosts.

 Thanks,
 Bharat.
   
 On 21-Apr-2015, at 6:49 pm, Abhinandan Prateek 
 abhinandan.prat...@shapeblue.com wrote:

 Available CPU will be 3 times the actual CPU when over provisioned by 3.

 The UI not showing the over provisioned value seems like a bug.

 -abhi


 On 21-Apr-2015, at 6:34 pm, Ugo Vasi ugo.v...@procne.it wrote:

 Hi all,
 we have a cluster of three machines with 16 CPUs at 2.2 GHz each and we have 
 set CPU overprovisioning to 3.

 I would expect the CPU system capacity on the dashboard to appear as the sum 
 of the CPUs' megahertz multiplied by three; instead I get the real sum 
 (108 GHz).

 I do not understand if this overprovisioning allows me to allocate more MHz 
 than the real ones, as the documentation says, since in reality the virtual 
 machines occupy on average only 20% of the computing power.

 thanks in advance



Re: Re: cpu overprovisioning

2015-04-21 Thread S . Brüseke - proIO GmbH
Please try to stop and start the VM. Maybe reboot is not working here. You 
should start to see more free CPU after the first VM.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Ugo Vasi [mailto:ugo.v...@procne.it] 
Sent: Tuesday, 21 April 2015 16:40
To: users@cloudstack.apache.org
Subject: Re: Re: cpu overprovisioning

Hi Swen,
I tried rebooting 2 VMs but the amount of CPU MHz has not changed. Do I have 
to restart all VMs before I see a change?


 On 21/04/2015 16:16, S. Brüseke - proIO GmbH wrote:
 This is because the overprovisioning factor is attached to the instance 
 too, and this factor only changes after rebooting the instance.
 So after a change of cpu.overprovisioning.factor you need to reboot all 
 instances on this cluster which were running before the change was made.

 Mit freundlichen Grüßen / With kind regards,

 Swen Brüseke

 -----Original Message-----
 From: Ugo Vasi [mailto:ugo.v...@procne.it]
 Sent: Tuesday, 21 April 2015 16:06
 To: users@cloudstack.apache.org
 Subject: Re: cpu overprovisioning

 If I change the overprovisioning ratio in the cluster settings
 (Infrastructure - Clusters - <cluster name> - Settings), the dashboard shows 
 the overprovisioned amount of GHz (324 GHz) as expected, but after 
 a while the sum of GHz is multiplied by the same factor again, making it 
 meaningless...

 On 21/04/2015 15:26, Bharat Kumar wrote:
 In the case of CloudStack, the CPU capacity for a host is: number of CPUs * 
 GHz * overprovisioning factor.

 Total capacity = sum of the capacities of the individual hosts.

 Thanks,
 Bharat.

 On 21-Apr-2015, at 6:49 pm, Abhinandan Prateek 
 abhinandan.prat...@shapeblue.com wrote:

 Available CPU will be 3 times the actual CPU when over provisioned by 3.

 The UI not showing the over provisioned value seems like a bug.

 -abhi


 On 21-Apr-2015, at 6:34 pm, Ugo Vasi ugo.v...@procne.it wrote:

 Hi all,
 we have a cluster of three machines with 16 CPUs at 2.2 GHz each and we have 
 set CPU overprovisioning to 3.

 I would expect the CPU system capacity on the dashboard to appear as the sum 
 of the CPUs' megahertz multiplied by three; instead I get the real sum 
 (108 GHz).

 I do not understand if this overprovisioning allows me to allocate more 
 MHz than the real ones, as the documentation says, since in reality the 
 virtual machines occupy on average only 20% of the computing power.

 thanks in advance



Re: cpu overprovisioning

2015-04-21 Thread S . Brüseke - proIO GmbH
By instance I mean VM.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Rafael Weingartner [mailto:rafaelweingart...@gmail.com] 
Sent: Tuesday, 21 April 2015 16:28
To: users@cloudstack.apache.org
Subject: Re: cpu overprovisioning

When you say instance, did you mean CS web application instance?

On Tue, Apr 21, 2015 at 11:16 AM, S. Brüseke - proIO GmbH  
s.brues...@proio.com wrote:

  This is because the overprovisioning factor is attached to the 
  instance too, and this factor only changes after rebooting the instance.
  So after a change of cpu.overprovisioning.factor you need to reboot 
  all instances on this cluster which were running before the change was made.

 Mit freundlichen Grüßen / With kind regards,

 Swen Brüseke

  -----Original Message-----
  From: Ugo Vasi [mailto:ugo.v...@procne.it]
  Sent: Tuesday, 21 April 2015 16:06
  To: users@cloudstack.apache.org
  Subject: Re: cpu overprovisioning

  If I change the overprovisioning ratio in the cluster settings
  (Infrastructure - Clusters - <cluster name> - Settings), the dashboard 
  shows the overprovisioned amount of GHz (324 GHz) as expected, but after 
  a while the sum of GHz is multiplied by the same factor again, making it 
  meaningless...

  On 21/04/2015 15:26, Bharat Kumar wrote:
  In the case of CloudStack, the CPU capacity for a host is: number of CPUs 
  * GHz * overprovisioning factor.

  Total capacity = sum of the capacities of the individual hosts.
 
  Thanks,
  Bharat.
 
  On 21-Apr-2015, at 6:49 pm, Abhinandan Prateek 
 abhinandan.prat...@shapeblue.com wrote:
 
  Available CPU will be 3 times the actual CPU when over provisioned by 3.
 
  The UI not showing the over provisioned value seems like a bug.
 
  -abhi
 
 
  On 21-Apr-2015, at 6:34 pm, Ugo Vasi ugo.v...@procne.it wrote:
 
  Hi all,
  we have a cluster of three machines with 16 CPUs at 2.2 GHz each and we 
  have set CPU overprovisioning to 3.

  I would expect the CPU system capacity on the dashboard to appear as the 
  sum of the CPUs' megahertz multiplied by three; instead I get the real 
  sum (108 GHz).

  I do not understand if this overprovisioning allows me to allocate more 
  MHz than the real ones, as the documentation says, since in reality the 
  virtual machines occupy on average only 20% of the computing power.
 
  thanks in advance
 
 

volume download link will not be deleted

2015-04-07 Thread S . Brüseke - proIO GmbH
Hi,

we are using CS 4.3.0.2 and I think we found a bug.

1. Create a volume out of a snapshot
2. Extract (download) the volume via UI
3. Delete volume

If you do this, the garbage collector will not delete the symlink on the 
secondary storage in /var/www/html/userdata/.
If you do not delete the volume before the garbage collector runs, the symlink 
will be removed!

Is this a known bug?
Can somebody test this on CS 4.5.1?

Thank you for your help!

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke







Re: volume download link will not be deleted

2015-04-07 Thread S . Brüseke - proIO GmbH
Hi Rajani,
 
here it is: https://issues.apache.org/jira/browse/CLOUDSTACK-8370
 
Mit freundlichen Grüßen / With kind regards,
 
Swen Brüseke
 
 
 
From: Rajani Karuturi [mailto:raj...@apache.org] 
Sent: Tuesday, 7 April 2015 16:20
To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH
Subject: Re: volume download link will not be deleted
 
Bug exists on 4.5.1 as well. Can you log it in Jira?

~Rajani
 
On Tue, Apr 7, 2015 at 5:50 PM, S. Brüseke - proIO GmbH s.brues...@proio.com 
wrote:
Hi,

we are using CS 4.3.0.2 and I think we found a bug.

1. Create a volume out of a snapshot
2. Extract (download) the volume via UI
3. Delete volume

If you do this, the garbage collector will not delete the symlink on the 
secondary storage in /var/www/html/userdata/.
If you do not delete the volume before the garbage collector runs, the symlink 
will be removed!

Is this a known bug?
Can somebody test this on CS 4.5.1?

Thank you for your help!

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke






Re: Workflow for extracting a snapshot

2015-04-02 Thread S . Brüseke - proIO GmbH
Hi,

It looks like the link will not be deleted.
As far as I understand, CS creates a symlink on the secondary storage VM in 
/var/www/html/userdata/ pointing to the volume itself.
Is there any kind of garbage collection which deletes the symlink? I consider 
this a security risk, because the volume will be downloadable forever.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Ahmad Emneina [mailto:aemne...@gmail.com] 
Sent: Wednesday, 1 April 2015 17:05
To: Cloudstack users mailing list; S. Brüseke - proIO GmbH
Subject: Re: Workflow for extracting a snapshot

That looks like the correct flow; snapshots can't be downloaded directly. The 
download URL expires in about 30 minutes (I could be wrong here).

On Wed, Apr 1, 2015 at 5:02 AM, S. Brüseke - proIO GmbH  s.brues...@proio.com 
wrote:

 Hi,

 can somebody tell me what the best workflow is to extract snapshots? 
 As far as I know a direct extraction of a snapshot is not possible.
 The VM needs to run all the time during the extraction!

 Here is my workflow:
 1. create snapshot of volume
 2. create new volume via snapshot
 3. download new volume
 4. delete new volume

 Another question: for how long will the download link be valid, and 
 will the link be deleted after a while?
 Thank you for help!

 Mit freundlichen Grüßen / With kind regards,

 Swen Brüseke













Re: Workflow for extracting a snapshot

2015-04-02 Thread S . Brüseke - proIO GmbH
Hi,

it took some time, but here is some info I do not want to keep back:

The lifetime of a URL generated when we click on "download template" depends 
on two parameters in the global settings in CCP: extract.url.cleanup.interval 
and extract.url.expiration.interval.

Once that many seconds have passed after the creation of the URL, it is 
expired. Say the value of 'extract.url.expiration.interval' is 4 hours and we 
generate a URL by clicking "download template" now; then the URL is valid for 
4 hours from now. Once it expires, it is cleaned up the next time the cleanup 
thread runs. Since the cleanup thread runs as per 
'extract.url.cleanup.interval', say every 2 hours, this could happen 
immediately after the URL has expired, or up to 2 hours after the time of 
expiry.

So the minimum life of a URL = extract.url.expiration.interval
Maximum life of a URL = extract.url.expiration.interval + 
extract.url.cleanup.interval

I will confirm this info after some testing.
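
For anyone who wants to check their own values, here is a small sketch reading 
both settings through the API (third-party "cs" Python client; endpoint and 
keys are placeholders):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt-server:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

def setting(name):
    # Fetch one global setting and return its value (in seconds).
    cfg = api.listConfigurations(name=name)
    return int(cfg['configuration'][0]['value'])

expiration = setting('extract.url.expiration.interval')
cleanup = setting('extract.url.cleanup.interval')

print('minimum URL life: %d s' % expiration)
print('maximum URL life: %d s' % (expiration + cleanup))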

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

 


-----Original Message-----
From: S. Brüseke - proIO GmbH [mailto:s.brues...@proio.com] 
Sent: Thursday, 2 April 2015 10:33
To: users@cloudstack.apache.org; aemne...@gmail.com
Subject: Re: Workflow for extracting a snapshot

Hi,

It looks like the link will not be deleted.
As far as I understand, CS creates a symlink on the secondary storage VM in 
/var/www/html/userdata/ pointing to the volume itself.
Is there any kind of garbage collection which deletes the symlink? I consider 
this a security risk, because the volume will be downloadable forever.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Ahmad Emneina [mailto:aemne...@gmail.com]
Sent: Wednesday, 1 April 2015 17:05
To: Cloudstack users mailing list; S. Brüseke - proIO GmbH
Subject: Re: Workflow for extracting a snapshot

That looks like the correct flow; snapshots can't be downloaded directly. The 
download URL expires in about 30 minutes (I could be wrong here).

On Wed, Apr 1, 2015 at 5:02 AM, S. Brüseke - proIO GmbH  s.brues...@proio.com 
wrote:

 Hi,

 can somebody tell me what the best workflow is to extract snapshots? 
 As far as I know a direct extraction of a snapshot is not possible.
 The VM needs to run all the time during the extraction!

 Here is my workflow:
 1. create snapshot of volume
 2. create new volume via snapshot
 3. download new volume
 4. delete new volume

 Another question: for how long will the download link be valid, and 
 will the link be deleted after a while?
 Thank you for help!

 Mit freundlichen Grüßen / With kind regards,

 Swen Brüseke















Workflow for extracting a snapshot

2015-04-01 Thread S . Brüseke - proIO GmbH
Hi,

can somebody tell me what the best workflow is to extract snapshots? As far as 
I know a direct extraction of a snapshot is not possible.
The VM needs to run all the time during the extraction!

Here is my workflow:
1. create snapshot of volume
2. create new volume via snapshot
3. download new volume
4. delete new volume
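
For reference, the same four steps sketched against the API (third-party "cs" 
Python client; all ids and credentials below are placeholders, and every call 
here is asynchronous, so real code would poll each returned job id with 
queryAsyncJobResult before the next step):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt-server:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# 1. Create a snapshot of the source volume (placeholder uuid).
snap_job = api.createSnapshot(volumeid='source-volume-uuid')

# 2. Create a new volume from that snapshot (placeholder uuid; in real
#    code it comes from the job result of step 1).
vol_job = api.createVolume(name='extract-tmp', snapshotid='snapshot-uuid')

# 3. Request a download URL for the new volume.
url_job = api.extractVolume(id='new-volume-uuid',
                            zoneid='zone-uuid', mode='HTTP_DOWNLOAD')

# 4. Delete the temporary volume once the download has finished.
api.deleteVolume(id='new-volume-uuid')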

Another question: for how long will the download link be valid, and will the 
link be deleted after a while?
Thank you for your help!

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke



- proIO GmbH -
Geschäftsführer: Swen Brüseke
Sitz der Gesellschaft: Frankfurt am Main

USt-IdNr. DE 267 075 918
Registergericht: Frankfurt am Main - HRB 86239

Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich erhalten 
haben, 
informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. 
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail sind nicht 
gestattet. 

This e-mail may contain confidential and/or privileged information. 
If you are not the intended recipient (or have received this e-mail in error) 
please notify 
the sender immediately and destroy this e-mail.  
Any unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden. 




[USAGE] usagetype 6 not providing virtualmachineid (CLOUDSTACK-8348)

2015-03-27 Thread S . Brüseke - proIO GmbH
Hello,

listUsageRecords does not provide the virtualmachineid with usage type 6 
(Volume), but in my opinion this is needed for correctly billing an instance. 
The API call listVolumes does provide the virtualmachineid, so the information 
is in the system, but it is not exposed by usage. What is your opinion on 
this? What would we need to do to get this information into usage?
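
Until then, the missing id can be joined in on the client side. A minimal 
sketch (third-party "cs" Python client, placeholder credentials; it assumes 
the usage record's usageid holds the volume uuid, and note that listVolumes 
only reflects the current attachment):

from cs import CloudStack

api = CloudStack(endpoint='http://mgmt-server:8080/client/api',
                 key='API_KEY', secret='SECRET_KEY')

# Build a map: volume uuid -> currently attached VM uuid.
volumes = api.listVolumes(listall=True)
vm_of = {v['id']: v.get('virtualmachineid')
         for v in volumes.get('volume', [])}

# Attach it to each volume usage record (usage type 6 = Volume).
records = api.listUsageRecords(startdate='2015-03-01',
                               enddate='2015-03-31', type=6)
for rec in records.get('usagerecord', []):
    # usageid is assumed to be the volume uuid here.
    rec['virtualmachineid'] = vm_of.get(rec.get('usageid'))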

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

