CentOS KVM systemvm issue

2014-09-12 Thread John Skinner
I have found that on CloudStack 4.2+ (since we changed to using the virtio
socket to send data to the system VM), cloud-early-config fails when running on
CentOS 6.x KVM hosts. On new system VM creation there is a high chance of
success, but still a chance of failure. Once the system VM has been created, a
simple reboot will cause start-up to fail every time. This has been confirmed in
2 separate CloudStack 4.2 environments: one running CentOS 6.3 KVM and another
running CentOS 6.2 KVM. It can be fixed with a simple modification to the
get_boot_params function in the cloud-early-config script: wrap the
"while read line" loop inside another while loop that retries as long as
$cmd is still an empty string.
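Sketched out, the retry wrapper looks something like this. This is a minimal,
self-contained sketch, not the actual patch: the device path and boot-args
string are illustrative, and the virtio-serial port is simulated with a regular
file so the snippet can run anywhere.

```shell
#!/bin/sh
# Sketch of the proposed fix: keep re-reading the port until we actually
# get a non-empty command string. The real script reads the boot args from
# a virtio-serial device; here a plain file stands in for it.

PORT="${PORT:-/tmp/fake_vport}"   # stand-in for the virtio-serial device

get_boot_params() {
  cmd=""
  # Outer retry loop: on CentOS 6.x the first read can come back empty,
  # so loop until $cmd is non-empty instead of giving up after one pass.
  while [ -z "$cmd" ]; do
    while read -r line; do
      cmd="$line"
    done < "$PORT"
  done
  printf '%s\n' "$cmd"
}

# Demo: simulate the host side writing boot args to the port.
printf 'template=domP type=secstorage\n' > "$PORT"
result="$(get_boot_params)"
echo "$result"
```

The key point is only the outer while; the inner read loop is unchanged from
the original script's structure.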

This is a pretty nasty issue for anyone running CloudStack 4.2+ on CentOS 6.x.

John Skinner
Appcore

XenServer Snapshots

2014-05-20 Thread John Skinner
 that are active for that volume that
XenServer sees inside of the VHD chain is the most recent, ID 5259. I am
assuming that XenServer coalesces the other 3 into the active volume during its
coalesce process (correct me if I am wrong here).

So, if my theory is correct, that looks good. Next, we check what is actually
on secondary storage:

[root@pd1-xh4 37316]# ls
261bb966-cff6-433b-a167-ac42e1b50d1e.vhd  
5489fb4d-96b2-4eb6-820b-cb9aab2bc207.vhd  
adeed41f-340e-4670-b6be-6c90d14f3a6d.vhd
4ddf21eb-a862-4c6e-8971-69d79824d1e3.vhd  
77b00655-8f74-4326-88d1-919c9ebaf587.vhd

All 5 of the snapshots exist on secondary storage, including the one that
CloudStack has marked as removed. When does CloudStack decide it is time to
clean up secondary storage? According to the database this snapshot is removed,
but it is clearly still there. Since CloudStack keeps deltas for XenServer on
secondary storage, I am assuming it needs to retain the snapshot it claims is
removed so that it can coalesce the deltas into a full snapshot once that
threshold is reached. I am just curious how long the snapshot will exist on
secondary storage and when I should expect it to be removed. Is this based on
the "number of snapshots to keep between full snapshots" setting in the global
settings?
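As a quick manual audit, one could diff the VHDs on disk against the snapshot
UUIDs the database still considers live. A hedged sketch follows; the directory
layout and UUIDs are fabricated for illustration, and on a real system the
"live" list would come from the snapshots table in the CloudStack database
(entries whose removed column is NULL):

```shell
#!/bin/sh
# Compare .vhd files in a secondary-storage directory against a list of
# UUIDs CloudStack still considers live; anything left over is a candidate
# for (eventual) cleanup. All names below are made up.

dir="$(mktemp -d)"
live="aaaa1111 bbbb2222"          # UUIDs the DB says are not removed

# Simulate secondary storage: two live snapshots plus one "removed" one.
for u in aaaa1111 bbbb2222 cccc3333; do
  : > "$dir/$u.vhd"
done

orphans=""
for f in "$dir"/*.vhd; do
  u="$(basename "$f" .vhd)"
  case " $live " in
    *" $u "*) ;;                  # still referenced in the DB, keep
    *) orphans="$orphans $u" ;;   # on disk but marked removed in the DB
  esac
done
echo "orphans:$orphans"
rm -rf "$dir"
```

On a real deployment, a file flagged as an orphan here would be exactly the
situation described above: removed in the database but still present on disk.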

John Skinner
Senior Systems Administrator | Appcore

Office +1.800.735.7104 | Direct +1.515.612.7783  
john.skin...@appcore.com  |  www.appcore.com



Re: VPC VPN Multiple Connections to Same Gateway

2014-05-16 Thread John Skinner
Excellent! Thanks everyone.

John

On May 14, 2014, at 12:59 PM, Sheng Yang sh...@yasker.org wrote:

 Hi John,
 
 This has been addressed as
 https://issues.apache.org/jira/browse/CLOUDSTACK-5501
 
 The fix would be in 4.4 and after.
 
 Thanks!
 
 --Sheng
 
 On Tue, May 13, 2014 at 2:38 PM, John Skinner john.skin...@appcore.comwrote:
 
 Hey list -
 
 Having an issue with VPCs and site-to-site VPNs on CloudStack 4.2. We
 have an account that has a VPC set up in 2 zones within the cloud. In zone
 A, they have created the VPN gateway and setup a connection back to their
 office. In zone B, they are trying to re-create that same VPN connection to
 their office but it is failing. CloudStack is not letting them use that
 same VPN gateway because it is already in use. They are also unable to
 create a new gateway with the same settings because the gateway IP address
 is already in the system.
 
 It looks like with CloudStack 4.2 we are unable to create multiple
 connections to the same gateway (1 connection from each zone). I have
 reviewed the notes for 4.3 and also looked at issues in Jira and do not see
 a duplicate of this anywhere, so I do not believe it has been reported.
 
 Is anyone else able to reproduce this to check my sanity?
 
 



VPC VPN Multiple Connections to Same Gateway

2014-05-13 Thread John Skinner
Hey list -

Having an issue with VPCs and site-to-site VPNs on CloudStack 4.2. We have
an account that has a VPC set up in 2 zones within the cloud. In zone A, they
have created the VPN gateway and setup a connection back to their office. In 
zone B, they are trying to re-create that same VPN connection to their office 
but it is failing. CloudStack is not letting them use that same VPN gateway 
because it is already in use. They are also unable to create a new gateway with 
the same settings because the gateway IP address is already in the system. 

It looks like with CloudStack 4.2 we are unable to create multiple connections 
to the same gateway (1 connection from each zone). I have reviewed the notes 
for 4.3 and also looked at issues in Jira and do not see a duplicate of this
anywhere, so I do not believe it has been reported.

Is anyone else able to reproduce this to check my sanity?



GlusterFS QEMU libgfapi

2013-07-15 Thread John Skinner
Is there any way to use GlusterFS with the native QEMU libgfapi, so we do not
have to use FUSE to access the shares? Or are there any plans to build libgfapi
QEMU support into CloudStack in the future?

Thanks,

John





Re: GlusterFS QEMU libgfapi

2013-07-15 Thread John Skinner
Thanks, Wido.

I am not a programmer per se, but I am going to pull the code down and have a
look to see if I can figure it out. I know some Java guys, so I may be able to
get some help on that end.

Thanks,

John

On Jul 15, 2013, at 9:36 AM, Wido den Hollander w...@widodh.nl wrote:

 Hi John,
 
 On 07/15/2013 04:31 PM, John Skinner wrote:
 Is there any way to use GlusterFS with the native QEMU libgfapi so we do not 
 have to use Fuse to access the shares? Or are there any plans to build 
 libgfapi QEMU support into CloudStack in the future?
 
 
 As for now there is no way to use libgfapi with Qemu/KVM in CloudStack, nor 
 are there any plans to implement this.
 
 Patches are welcome though! Would be great to see this be written.
 
 Wido
 
 Thanks,
 
 John
 
 
 
 



Re: GlusterFS QEMU libgfapi

2013-07-15 Thread John Skinner
Wido,

Are you sure about that? I know libgfapi itself is C, but I thought GlusterFS
was now supported in both libvirt and QEMU (1.0.1+ and 1.3+, respectively).

1.0.1: Dec 17 2012
Features:
Introduce virtlockd daemon (Daniel P. Berrange),
parallels: add disk and network device support (Dmitry Guryanov),
Add virDomainSendProcessSignal API (Daniel P. Berrange),
Introduce virDomainFSTrim() public API (Michal Privoznik),
add fuse support for libvirt lxc (Gao feng),
Add Gluster protocol as supported network disk backend (Harsh Prateek Bora),
various snapshot improvements (Peter Krempa, Eric Blake)
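For reference, the "Gluster protocol as supported network disk backend" item
above maps to a libvirt domain disk element along these lines. The host and
volume names here are made up; treat this as a sketch of the 1.0.1+ syntax, not
a tested configuration:

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- name is volume-name/path-within-volume on the Gluster server -->
  <source protocol='gluster' name='volname/image.raw'>
    <host name='gluster.example.com' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

This is the libgfapi path: QEMU talks to Gluster directly, with no FUSE mount
involved.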

Thanks,

John

On Jul 15, 2013, at 9:56 AM, Wido den Hollander w...@widodh.nl wrote:

 Hi John,
 
 On 07/15/2013 04:52 PM, John Skinner wrote:
 Thanks, Wido.
 
 I am not a programmer per se, but I am going to pull the code down and have 
 a look to see if I can figure it out. I know some java guys so may be able 
 to get some help on that end.
 
 
 It won't be only Java code, but also C code to manage the GlusterFS storage 
 pool in libvirt: http://libvirt.org/storage.html
 
 Currently GlusterFS isn't supported in libvirt as a storage pool, but the 
 CloudStack agent relies on that.
 
 It might be possible to do without libvirt, but I'm not sure how that would 
 work out.
 
 Ceph and RBD are my thing, I'm not a GlusterFS expert.
 
 Wido
 
 Thanks,
 
 John
 
 On Jul 15, 2013, at 9:36 AM, Wido den Hollander w...@widodh.nl wrote:
 
 Hi John,
 
 On 07/15/2013 04:31 PM, John Skinner wrote:
 Is there any way to use GlusterFS with the native QEMU libgfapi so we do 
 not have to use Fuse to access the shares? Or are there any plans to build 
 libgfapi QEMU support into CloudStack in the future?
 
 
 As for now there is no way to use libgfapi with Qemu/KVM in CloudStack, nor 
 are there any plans to implement this.
 
 Patches are welcome though! Would be great to see this be written.
 
 Wido
 
 Thanks,
 
 John
 
 
 
 
 
 



Re: GlusterFS QEMU libgfapi

2013-07-15 Thread John Skinner
I dug a little deeper and found that it IS supported, as a valid pool format
type of the NETFS storage type (see below, from libvirt.org). Now, not being
familiar with how CloudStack handles storage, I was thinking that under
cloud-plugin-hypervisor-kvm > src > com.cloud.hypervisor.kvm.storage >
LibvirtStorageAdaptor.java I could create a pool type similar to the one for
NFS, with the required information for using GlusterFS. Is this assumption
correct?

Thanks,

John 

Valid volume format types
The valid volume types are the same as for the directory pool type.

Network filesystem pool
This is a variant of the filesystem pool. Instead of requiring a local block 
device as the source, it requires the name of a host and path of an exported 
directory. It will mount this network filesystem and manage files within the 
directory of its mount point. It will default to using NFS as the protocol.

Example pool input

  <pool type='netfs'>
    <name>virtimages</name>
    <source>
      <host name='nfs.example.com'/>
      <dir path='/var/lib/virt/images'/>
    </source>
    <target>
      <path>/var/lib/virt/images</path>
    </target>
  </pool>
Valid pool format types
The network filesystem pool supports the following formats:

auto - automatically determine format
nfs
glusterfs
cifs
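Concretely, a GlusterFS-backed netfs pool definition might look something like
the following. The host, volume, and mount-point names are hypothetical; note
also that the netfs glusterfs format mounts the volume through the GlusterFS
FUSE client, so this path would not by itself get us native libgfapi access:

```xml
<pool type='netfs'>
  <name>gluster-primary</name>
  <source>
    <host name='gluster.example.com'/>
    <!-- the exported Gluster volume name, given as a path -->
    <dir path='/volname'/>
    <format type='glusterfs'/>
  </source>
  <target>
    <path>/mnt/gluster-primary</path>
  </target>
</pool>
```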

On Jul 15, 2013, at 10:08 AM, John Skinner john.skin...@appcore.com wrote:

 Wido,
 
 Are you sure on that? I know the libgfapi is in C. But I thought GlusterFS 
 was now supported in both libvirt and qemu (1.0.1+, 1.3; respectively).
 
 1.0.1: Dec 17 2012
 Features:
 Introduce virtlockd daemon (Daniel P. Berrange),
 parallels: add disk and network device support (Dmitry Guryanov),
 Add virDomainSendProcessSignal API (Daniel P. Berrange),
 Introduce virDomainFSTrim() public API (Michal Privoznik),
 add fuse support for libvirt lxc (Gao feng),
 Add Gluster protocol as supported network disk backend (Harsh Prateek Bora),
 various snapshot improvements (Peter Krempa, Eric Blake)
 Thanks,
 
 John
 
 On Jul 15, 2013, at 9:56 AM, Wido den Hollander w...@widodh.nl wrote:
 
 Hi John,
 
 On 07/15/2013 04:52 PM, John Skinner wrote:
 Thanks, Wido.
 
 I am not a programmer per se, but I am going to pull the code down and have 
 a look to see if I can figure it out. I know some java guys so may be able 
 to get some help on that end.
 
 
 It won't be only Java code, but also C code to manage the GlusterFS storage 
 pool in libvirt: http://libvirt.org/storage.html
 
 Currently GlusterFS isn't supported in libvirt as a storage pool, but the 
 CloudStack agent relies on that.
 
 It might be possible to do without libvirt, but I'm not sure how that would 
 work out.
 
 Ceph and RBD are my thing, I'm not a GlusterFS expert.
 
 Wido
 
 Thanks,
 
 John
 
 On Jul 15, 2013, at 9:36 AM, Wido den Hollander w...@widodh.nl wrote:
 
 Hi John,
 
 On 07/15/2013 04:31 PM, John Skinner wrote:
 Is there any way to use GlusterFS with the native QEMU libgfapi so we do 
 not have to use Fuse to access the shares? Or are there any plans to 
 build libgfapi QEMU support into CloudStack in the future?
 
 
 As for now there is no way to use libgfapi with Qemu/KVM in CloudStack, 
 nor are there any plans to implement this.
 
 Patches are welcome though! Would be great to see this be written.
 
 Wido
 
 Thanks,
 
 John