Re: Experience with clustered/shared filesystems based on SAN storage on KVM?

2021-10-29 Thread Ignazio Cassano
Hi Mauro, what would you like to store on the clustered file system?
If you want to use it for virtual machine disks, I think NFS is a good solution.
A clustered file system could be useful if your virtualization nodes have a lot
of local disks.
I usually prefer to use a NAS or a SAN.
If you have a SAN you can use iSCSI with clustered logical volumes.
Each logical volume can host a virtual machine volume, and clustered LVM can
handle the locking.
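A minimal sketch of that approach (all names and addresses are placeholders; on
older cluster stacks the locking daemon is clvmd, on recent LVM versions it is
lvmlockd instead, and a running cluster stack such as corosync is assumed):

  # on every node: log in to the SAN LUN (assume it appears as /dev/sdb)
  iscsiadm -m discovery -t sendtargets -p 10.0.0.50
  iscsiadm -m node -T iqn.2021-10.local.san:lun0 -p 10.0.0.50 --login
  # enable cluster-wide LVM locking (clvmd variant shown)
  lvmconf --enable-cluster
  service clvmd start
  # create the shared volume group once, marked clustered
  pvcreate /dev/sdb
  vgcreate -cy vg01 /dev/sdb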
Ignazio



On Thu 28 Oct 2021 at 14:02, Mauro Ferraro - G2K Hosting <mferr...@g2khosting.com> wrote:

> Hi,
>
> We are trying to set up a lab with ACS 4.16 and Linstor. As soon as we
> finish the tests we can share our results. Has anyone already tried this
> technology?
>
> Regards,
>
> On 28/10/2021 at 02:34, Pratik Chandrakar wrote:
> > Since NFS alone doesn't offer HA, what do you recommend for HA NFS?
> >
> > On Thu, Oct 28, 2021 at 7:37 AM Hean Seng  wrote:
> >
> >> I had similar considerations when I started exploring Cloudstack, but in
> >> reality a clustered filesystem is not easy to maintain. You seem to have a
> >> choice of OCFS2 or GFS2; GFS2 is hard to maintain on Red Hat, and OCFS2 is
> >> recently only maintained in Oracle Linux. I believe you do not want to
> >> choose a solution that is very proprietary. Thus a plain SAN or iSCSI is
> >> not really a direct solution here, unless you want to encapsulate it in
> >> NFS facing the Cloudstack storage.
> >>
> >> It works well on both CEPH and NFS, but performance-wise NFS is better. And
> >> all the documentation and features you see in Cloudstack work perfectly
> >> on NFS.
> >>
> >> If you choose CEPH, you may have to accept some performance
> >> degradation.
> >>
> >>
> >>
> >> On Thu, Oct 28, 2021 at 12:44 AM Leandro Mendes 
> >> wrote:
> >>
> >>> I've been using Ceph in prod for volumes for some time. Note that although
> >>> I have had several cloudstack installations, this one runs on top of
> >>> Cinder, but it basically translates to libvirt and rados.
> >>>
> >>> It is totally stable, and performance IMHO is enough for virtualized
> >>> services.
> >>>
> >>> IO might suffer some penalty due to the data replication inside Ceph.
> >>> For Elasticsearch, for instance, the degradation would be a bit worse, as
> >>> there is replication on the application side as well, but IMHO, unless you
> >>> need extremely low latency it would be ok.
> >>>
> >>>
> >>> Best,
> >>>
> >>> Leandro.
> >>>
> >>> On Thu, Oct 21, 2021, 11:20 AM Brussk, Michael <michael.bru...@nttdata.com>
> >>> wrote:
> >>>
>  Hello community,
> 
>  today I need your experience and know-how with clustered/shared
>  filesystems based on SAN storage to be used with KVM.
>  We need to evaluate a clustered/shared filesystem based on SAN
>  storage (no NFS or iSCSI), but we do not have any know-how or experience
>  with this.
>  Therefore I would like to ask if there are any production environments out
>  there based on SAN storage on KVM.
>  If so, which clustered/shared filesystem are you using, and what is your
>  experience with it (stability, reliability, maintainability, performance,
>  usability, ...)?
>  Furthermore, if you have had to choose between SAN storage and CEPH in the
>  past, I would also like to hear about your considerations and results :)
> 
>  Regards,
>  Michael
> 
> >>
> >> --
> >> Regards,
> >> Hean Seng
> >>
> >
>


cloudstack 4.4.2 opendaylight url

2015-07-02 Thread Ignazio Cassano
Hi guys,
we are experimenting with cloudstack SDN and opendaylight, and we did not find
any documentation about its setup.
We found some videos on the internet, but it is not clear how we must
configure opendaylight with cloudstack. For example:

which URL must we specify in the cloudstack Service Provider settings for
opendaylight?

We found documentation about OpenStack Neutron but nothing about ACS.
Regards
Ignazio


Cloudstack 4.4.2 kvm ovstunnel.log error

2015-06-04 Thread Ignazio Cassano
Hi guys, we installed a testing environment with cloudstack 4.4.2. In this
testing environment we have two centos 6.6 nodes with ovs 2.3.1.
Provisioning a virtual machine with a network service offering based on ovs,
we saw that an ovs bridge was automatically configured on the kvm nodes, but
no gre tunnel port was created within it.
Reading ovstunnel.log we discovered an error: the gre tunnel was created
and immediately deleted, as you can see in the following ovstunnel.log
lines:

2015-06-01 16:15:25,736 - bridge OVSTunnel3093 for creating tunnel -
VERIFIED
2015-06-01 16:15:25,736 - Executing:['/usr/bin/ovs-vsctl', 'add-port',
'OVSTunnel3093', 't3093-1-4', '--', 'set', 'interface', 't3093-1-4',
'type=gre', 'options:key=3093', 'options:remote_ip=192.168.10.2']
2015-06-01 16:15:25,750 - Executing:['/usr/bin/ovs-vsctl', 'get', 'port',
't3093-1-4', 'interfaces']
2015-06-01 16:15:25,756 - Executing:['/usr/bin/ovs-vsctl', 'get',
'interface', '0e739122-dc8c-4de7-a91e-1ddbdae7c82e', 'options:key']
2015-06-01 16:15:25,761 - Executing:['/usr/bin/ovs-vsctl', 'get',
'interface', '0e739122-dc8c-4de7-a91e-1ddbdae7c82e', 'options:remote_ip']
2015-06-01 16:15:25,767 - Tunnel interface validated:['/usr/bin/ovs-vsctl',
'get', 'interface', '0e739122-dc8c-4de7-a91e-1ddbdae7c82e',
'options:remote_ip']
2015-06-01 16:15:25,767 - Executing:['/usr/bin/ovs-vsctl', 'get',
'interface', '0e739122-dc8c-4de7-a91e-1ddbdae7c82e', 'ofport']
2015-06-01 16:15:25,773 - Executing:['/usr/bin/ovs-vsctl', 'get', 'bridge',
'OVSTunnel3093', 'other_config:is-ovs-tun-network']
2015-06-01 16:15:25,778 - Executing:['/usr/bin/ovs-vsctl', 'get', 'bridge',
'OVSTunnel3093', 'other_config:is-ovs_vpc_distributed_vr_network']
2015-06-01 16:15:25,784 - The command exited with the error code: 1 (stderr
output:ovs-vsctl: no key is-ovs_vpc_distributed_vr_network in Bridge
record OVSTunnel3093 column other_config
)
2015-06-01 16:15:25,784 - An unexpected error occured. Rolling back
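The failure is just the last ovs-vsctl invocation above: the script treats a
missing optional other_config key as fatal. A hedged way to inspect and work
around it by hand (whether the plugin expects "False" here is an assumption):

  # see which other_config keys the bridge actually carries
  ovs-vsctl list bridge OVSTunnel3093
  # query the key without erroring when it is absent
  ovs-vsctl --if-exists get bridge OVSTunnel3093 \
      other_config:is-ovs_vpc_distributed_vr_network
  # hypothetical workaround: pre-seed the key the script expects
  ovs-vsctl set bridge OVSTunnel3093 \
      other_config:is-ovs_vpc_distributed_vr_network=False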


RE: cloudstack kvm openvswitch

2015-06-02 Thread Ignazio Cassano
Yesterday we tried to install acs 4.4.2 and we found ovs in the network
service provider panel.
It never appears in acs 4.5.1.
Now we have to verify whether a gre tunnel is automatically created with an ovs
network offering.
We are trying with centos 6.6 and ovs 2.3.1: we suggest keeping the ovs
kernel module supplied by centos, because the module built from ovs 2.3.1
causes kernel panics.
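To check which openvswitch kernel module is actually loaded (the paths below
reflect common packaging conventions, not a guarantee):

  modinfo openvswitch | grep -E '^(filename|version)'
  # a distro-shipped module normally sits under
  # /lib/modules/$(uname -r)/kernel/..., while one built from the ovs 2.3.1
  # source tree typically lands under .../extra/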
 On 01/06/2015 09:31, Vadim Kimlaychuk vadim.kimlayc...@elion.ee wrote:

 Hello Ignazio,

I haven't been working with KVM and OVS for a while, but my
 previous experience with this plugin was unfortunate. I will put some personal
 thoughts here and do not pretend to be 100% correct on all points:
1. There are a number of OVS versions, and I don't know which one(s)
 are officially supported. The documentation says 1.9 (if you patch it), but
 I have successfully used 2.x versions without errors, so I think the
 documentation is outdated. Still, I was not sure that it is properly
 supported at the CS level.
2. There are a number of problems with plugin activation since
 version 4.4. Look here for example:
 https://issues.apache.org/jira/browse/CLOUDSTACK-7446
3. I've been using OVS with VLAN isolation and have seen it
 working with CS properly. What exactly plugin activation does -- I don't
 know. Maybe the plugin should add some more network offerings. But since it
 was broken I was not able to test it.
4. Didn't try GRE. Have nothing to say here.
5. The overall feeling -- this functionality is not yet
 mature. It also lacks documentation. Use it at your own risk.

 Regards,

 Vadim

 -Original Message-
 From: Ignazio Cassano [mailto:ignaziocass...@gmail.com]
 Sent: Saturday, May 30, 2015 12:33 PM
 To: users@cloudstack.apache.org; r...@remi.nl
 Subject: Re: cloudstack kvm openvswitch

 Hi guys, last week we tried to install cloudstack 4.5.1 with two kvm
 nodes, defining an advanced zone with gre isolation.
 Ovs never appears among the network service providers, but guest vlans are
 automatically created on ovs.
 Since the gre tunnel between nodes was not automatically generated, we set
 it up manually, and now VMs can communicate between nodes.
 Why does ovs not appear in the network service provider panel?
 It should not appear if vlan isolation had been chosen in the
 configuration, but we chose gre!
 Why was the gre tunnel not generated?
 Another question concerns the availability of ovs controllers:
 has anyone tried to configure any opensource openvswitch controller with
 cloudstack?
 Many thanks
 Ignazio
 On 24/04/2015 10:50, Remi Bergsma r...@remi.nl wrote:

  Hi,
 
  I recently worked on KVM with Open vSwitch, but controlled by NSX (aka
  Nicira). Apart from the controller and some settings, it should be the
  same.
 
  Some pointers:
  http://docs.cloudstack.apache.org/en/latest/networking/ovs-plugin.html
  https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+with+OpenVSwitch
 
  http://blog.remibergsma.com/2015/04/04/adding-a-kvm-cluster-with-nsx-networking-to-cloudstack/
  (skip the NSX stuff)
 
  It is no longer necessary to compile ovs yourself. I worked with CentOS 6
  and 7 and Ubuntu 14.04 out of the box. There is some extra work to get STT
  to work, but since you use GRE instead you can ignore it.
 
  This is a script I wrote to help setup KVM/OVS/CloudStack in my lab to
  be able to do quick POCs. It could help you too in setting up the
  bridges and
  interfaces:
  https://github.com/remibergsma/openvswitch-kvm-config
 
  To answer your question about the tunnels: in my case they are created
  on the fly. I would just give it a go and see if ovs creates any tunnels.
 
  Hope this helps.
 
  Regards,
  Remi
 
 
 
  2015-04-24 7:50 GMT+02:00 Ignazio Cassano ignaziocass...@gmail.com:
 
   Thanks a lot.
   What I need to know is how I must prepare kvm nodes for openvswitch:
   for example if I must create a gre tunnel between nodes or if it is
   enough adding an address on each openvswitch  bridge.
   Regards
   On 24/04/2015 06:22, Sanjeev N sanj...@apache.org wrote:
  
Hi ,
   
We can't avoid configuring vlans on physical switches even if we use
OVS instead of bridge mode on KVM. One option would be to use GRE
isolation instead of VLAN-based when creating the physical network in a
zone.
   
-Sanjeev
   
On Thu, Apr 23, 2015 at 4:05 PM, Ignazio Cassano 
   ignaziocass...@gmail.com

wrote:
   
 Hi all,
 I would like to install a new cloudstack infrastructure with kvm nodes
 but I also would like to use OVS to configure guest vlans, avoiding
 configuring them on physical switches.
 I am planning to use three kvm nodes but I do not know how to prepare
 them to use OVS: must I create a gre tunnel between nodes?
 Regards

   
  
 



Re: cloudstack kvm openvswitch

2015-05-30 Thread Ignazio Cassano
Hi guys, last week we tried to install cloudstack 4.5.1 with two kvm
nodes, defining an advanced zone with gre isolation.
Ovs never appears among the network service providers, but guest vlans are
automatically created on ovs.
Since the gre tunnel between nodes was not automatically generated, we set
it up manually, and now VMs can communicate between nodes.
Why does ovs not appear in the network service provider panel?
It should not appear if vlan isolation had been chosen in the
configuration, but we chose gre!
Why was the gre tunnel not generated?
Another question concerns the availability of ovs controllers:
has anyone tried to configure any opensource openvswitch controller with
cloudstack?
Many thanks
Ignazio
On 24/04/2015 10:50, Remi Bergsma r...@remi.nl wrote:

 Hi,

 I recently worked on KVM with Open vSwitch, but controlled by NSX (aka
 Nicira). Apart from the controller and some settings, it should be the
 same.

 Some pointers:
 http://docs.cloudstack.apache.org/en/latest/networking/ovs-plugin.html
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+with+OpenVSwitch

 http://blog.remibergsma.com/2015/04/04/adding-a-kvm-cluster-with-nsx-networking-to-cloudstack/
 (skip the NSX stuff)

 It is no longer necessary to compile ovs yourself. I worked with CentOS 6 and
 7 and Ubuntu 14.04 out of the box. There is some extra work to get STT to
 work, but since you use GRE instead you can ignore it.

 This is a script I wrote to help setup KVM/OVS/CloudStack in my lab to be
 able to do quick POCs. It could help you too in setting up the bridges and
 interfaces:
 https://github.com/remibergsma/openvswitch-kvm-config

 To answer your question about the tunnels: in my case they are created on
 the fly. I would just give it a go and see if ovs creates any tunnels.

 Hope this helps.

 Regards,
 Remi



 2015-04-24 7:50 GMT+02:00 Ignazio Cassano ignaziocass...@gmail.com:

  Thanks a lot.
  What I need to know is how I must prepare kvm nodes for openvswitch: for
  example if I must create a gre tunnel between nodes or if it is enough
  adding an address on each openvswitch  bridge.
  Regards
  On 24/04/2015 06:22, Sanjeev N sanj...@apache.org wrote:
 
   Hi ,
  
   We can't avoid configuring vlans on physical switches even if we use OVS
   instead of bridge mode on KVM. One option would be to use GRE isolation
   instead of VLAN-based when creating the physical network in a zone.
  
   -Sanjeev
  
   On Thu, Apr 23, 2015 at 4:05 PM, Ignazio Cassano 
  ignaziocass...@gmail.com
   
   wrote:
  
Hi all,
I would like to install a new cloudstack infrastructure with kvm nodes
but I also would like to use OVS to configure guest vlans, avoiding
configuring them on physical switches.
I am planning to use three kvm nodes but I do not know how to prepare
them to use OVS: must I create a gre tunnel between nodes?
Regards
   
  
 



cloudstack 4.5.1 ovs on kvm

2015-05-29 Thread Ignazio Cassano
Hi guys,
We read the cloudstack documentation carefully to activate gre isolation on
kvm nodes.
The first issue we found: ovs does not appear in the network service
provider panel.
The second issue we found: the gre tunnel between kvm nodes is not
automatically created, even after migrating a vm; so a vm on the first node
cannot ping a vm on the second node.

We solved this by manually creating a gre tunnel between the kvm nodes.
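For reference, a manual GRE port between two OVS bridges looks roughly like
this (bridge name and peer addresses are placeholders, not our actual values):

  # on node 1, pointing at node 2
  ovs-vsctl add-port br-tun gre1 -- set interface gre1 type=gre options:remote_ip=192.168.10.2
  # on node 2, the mirror image pointing back at node 1
  ovs-vsctl add-port br-tun gre1 -- set interface gre1 type=gre options:remote_ip=192.168.10.1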

Any suggestion is welcome.
Regards
Ignazio and Gianpiero


Re: cloudstack kvm openvswitch

2015-04-24 Thread Ignazio Cassano
I'll try asap.
Many thanks
On 24/04/2015 10:50, Remi Bergsma r...@remi.nl wrote:

 Hi,

 I recently worked on KVM with Open vSwitch, but controlled by NSX (aka
 Nicira). Apart from the controller and some settings, it should be the
 same.

 Some pointers:
 http://docs.cloudstack.apache.org/en/latest/networking/ovs-plugin.html
 https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+with+OpenVSwitch

 http://blog.remibergsma.com/2015/04/04/adding-a-kvm-cluster-with-nsx-networking-to-cloudstack/
 (skip the NSX stuff)

 It is no longer necessary to compile ovs yourself. I worked with CentOS 6 and
 7 and Ubuntu 14.04 out of the box. There is some extra work to get STT to
 work, but since you use GRE instead you can ignore it.

 This is a script I wrote to help setup KVM/OVS/CloudStack in my lab to be
 able to do quick POCs. It could help you too in setting up the bridges and
 interfaces:
 https://github.com/remibergsma/openvswitch-kvm-config

 To answer your question about the tunnels: in my case they are created on
 the fly. I would just give it a go and see if ovs creates any tunnels.

 Hope this helps.

 Regards,
 Remi



 2015-04-24 7:50 GMT+02:00 Ignazio Cassano ignaziocass...@gmail.com:

  Thanks a lot.
  What I need to know is how I must prepare kvm nodes for openvswitch: for
  example if I must create a gre tunnel between nodes or if it is enough
  adding an address on each openvswitch  bridge.
  Regards
  On 24/04/2015 06:22, Sanjeev N sanj...@apache.org wrote:
 
   Hi ,
  
   We can't avoid configuring vlans on physical switches even if we use OVS
   instead of bridge mode on KVM. One option would be to use GRE isolation
   instead of VLAN-based when creating the physical network in a zone.
  
   -Sanjeev
  
   On Thu, Apr 23, 2015 at 4:05 PM, Ignazio Cassano 
  ignaziocass...@gmail.com
   
   wrote:
  
Hi all,
I would like to install a new cloudstack infrastructure with kvm nodes
but I also would like to use OVS to configure guest vlans, avoiding
configuring them on physical switches.
I am planning to use three kvm nodes but I do not know how to prepare
them to use OVS: must I create a gre tunnel between nodes?
Regards
   
  
 



cloudstack kvm openvswitch

2015-04-23 Thread Ignazio Cassano
Hi all,
I would like to install a new cloudstack infrastructure with kvm nodes, but
I also would like to use OVS to configure guest vlans, avoiding configuring
them on physical switches.
I am planning to use three kvm nodes but I do not know how to prepare them to
use OVS: must I create a gre tunnel between nodes?
Regards


Re: cloudstack kvm openvswitch

2015-04-23 Thread Ignazio Cassano
Thanks a lot.
What I need to know is how I must prepare the kvm nodes for openvswitch: for
example, whether I must create a gre tunnel between nodes or whether it is
enough to add an address on each openvswitch bridge.
Regards
On 24/04/2015 06:22, Sanjeev N sanj...@apache.org wrote:

 Hi ,

 We can't avoid configuring vlans on physical switches even if we use OVS
 instead of bridge mode on KVM. One option would be to use GRE isolation
 instead of VLAN-based when creating the physical network in a zone.

 -Sanjeev

 On Thu, Apr 23, 2015 at 4:05 PM, Ignazio Cassano ignaziocass...@gmail.com
 
 wrote:

  Hi all,
  I would like to install a new cloudstack infrastructure with kvm nodes
  but I also would like to use OVS to configure guest vlans, avoiding
  configuring them on physical switches.
  I am planning to use three kvm nodes but I do not know how to prepare them
  to use OVS: must I create a gre tunnel between nodes?
  Regards
 



Re: kvm virtio disk windows vm

2015-03-19 Thread Ignazio Cassano
Many thanks

Ignazio
On 19/03/2015 10:49, Andrija Panic andrija.pa...@gmail.com wrote:

 Choose Windows PV as the OS type when deploying a new VM/template.
 Prior to this, install the VirtIO drivers from the Fedora site inside your
 templates...
 Works like a charm...
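A hedged way to script this suggestion via the API, e.g. with cloudmonkey
(the UUIDs are placeholders, and the exact OS-type description string varies
by CloudStack version):

  # find the id of the paravirtualized Windows OS type
  cloudmonkey list ostypes keyword="Windows PV"
  # point the template at it; VMs deployed from it then get virtio disks
  cloudmonkey update template id=<template-uuid> ostypeid=<ostype-uuid>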

 On 19 March 2015 at 10:41, Ignazio Cassano ignaziocass...@gmail.com
 wrote:

  Hi all,
  I'd like to know if it is possible to force a kvm windows guest to use a
  virtio disk.
  Any howto using the api or hook scripts?
  Regards
  Ignazio
 



 --

 Andrija Panić



kvm virtio disk windows vm

2015-03-19 Thread Ignazio Cassano
Hi all,
I'd like to know if it is possible to force a kvm windows guest to use a
virtio disk.
Any howto using the api or hook scripts?
Regards
Ignazio


Re: kvm virtio disk windows vm

2015-03-19 Thread Ignazio Cassano
Andrija, does it work on all cloudstack versions?

Ignazio
On 19/03/2015 10:49, Andrija Panic andrija.pa...@gmail.com wrote:

 Choose Windows PV as the OS type when deploying a new VM/template.
 Prior to this, install the VirtIO drivers from the Fedora site inside your
 templates...
 Works like a charm...

 On 19 March 2015 at 10:41, Ignazio Cassano ignaziocass...@gmail.com
 wrote:

  Hi all,
  I'd like to know if it is possible to force a kvm windows guest to use a
  virtio disk.
  Any howto using the api or hook scripts?
  Regards
  Ignazio
 



 --

 Andrija Panić



Re: KVM + VMware (and ceph)

2014-06-11 Thread Ignazio Cassano
Hi all, I would like to build the same configuration but using iSCSI instead
of NFS.
I could set up two physical servers (for high availability) with Ubuntu and
the LIO iSCSI target.
On the Ceph side these servers will use rbd, and on the VMware side they will
export the rbd disks as LUNs.
I would like to verify whether there is a performance degradation versus the
kvm rbd integration.
Has anyone tested this configuration yet?
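A rough sketch of the LIO side via targetcli (pool, image, IQN, and portal
address are placeholders):

  # map the rbd image on the gateway host, then back an iscsi lun with it
  rbd map vmware/lun0
  targetcli /backstores/block create name=rbd-lun0 dev=/dev/rbd/vmware/lun0
  targetcli /iscsi create iqn.2014-06.local.lab:rbd-gw
  targetcli /iscsi/iqn.2014-06.local.lab:rbd-gw/tpg1/luns create /backstores/block/rbd-lun0
  targetcli /iscsi/iqn.2014-06.local.lab:rbd-gw/tpg1/portals create 10.0.0.10
  targetcli saveconfig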

Regards


2014-06-11 10:38 GMT+02:00 Andrei Mikhailovsky and...@arhont.com:

 Hi guys,

 This is what I've done to get a failover nfs storage solution with
 XenServer on top of ceph.

 1. Setup two rbd volumes (one for vm images and another one for shared nfs
 state folder). Make sure you disable rbd caching on the nfs servers.
 2. Install ucarp package on two servers which will be the nfs servers.
 These servers will need to have access to the rbd volumes that you've set
 up in step 1.
 3. Configure ucarp to be master/slave on the servers. Choose a virtual IP
 address that will be shared between the nfs servers.
 4. Configure ucarp vif-up/vif-down scripts on both servers to a)map/unmap
 rbd volumes, b)mount/unmount rbd volumes, c) start/stop nfs service
 5. Add nfs server using the virtual IP you've chosen as a storage to
 XenServer/ACS
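A minimal sketch of what steps 3 and 4 can look like (interface, addresses,
image name, and script paths are all placeholders; ucarp passes the interface
and VIP to the up/down scripts as arguments, but they are hard-coded here for
clarity):

  # on each nfs server; same vhid (-v) and password (-p) on both,
  # real address via -s, shared virtual IP via -a
  ucarp -i eth0 -s 10.0.0.11 -v 42 -p secret -a 10.0.0.100 \
        --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

  # /etc/ucarp/vip-up.sh -- runs on the node that becomes master
  #!/bin/sh
  rbd map nfs-data                              # image from step 1
  mount /dev/rbd/rbd/nfs-data /export/primary
  service nfs-kernel-server start
  ip addr add 10.0.0.100/24 dev eth0

  # /etc/ucarp/vip-down.sh -- mirror image, runs when demoted
  #!/bin/sh
  ip addr del 10.0.0.100/24 dev eth0
  service nfs-kernel-server stop
  umount /export/primary
  rbd unmap /dev/rbd/rbd/nfs-data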

 This way I am able to perform maintenance tasks on the nfs servers without
 the need to shutdown the running vms. The switching between master and
 slave takes about 5 seconds or so and this doesn't seem to impact vm
 performance. I've done some basic failover tests and the setup is working
 okay.

 I would really appreciate your feedback and thoughts on my setup. Does it
 look like a viable solution for a production env?

 Cheers

 Andrei


 - Original Message -

 From: Gerolamo Valcamonica gerolamo.valcamon...@overweb.it
 To: users@cloudstack.apache.org
 Sent: Tuesday, 10 June, 2014 11:24:13 AM
 Subject: Re: KVM + VMware (and ceph)

 Thanks to all

 (OT:
 I'm investigating with Inktank about VMware support for CEPH.
 If I find something new I will inform you.)

 Gerolamo Valcamonica


 On 09/06/2014 22:30, ilya musayev wrote:
  Gerolamo,
 
  As previously noted, you can mix and match with some degree of
  segregation (i.e. templates must be different).
 
  I've not tried mixing KVM + VMware recently, but I see no reason why
  you cannot do that; if I recall correctly, I did so a year ago or
  so, when I first installed cloudstack.
 
  As for Ceph, I have a somewhat similar setup: I have beefy vSphere
  hypervisors with about 1.5TB of SSD drives on each hypervisor and
  10Gb NICs, which until recently have been idle. I'm setting up a Ceph
  cluster on these and will front it with iSCSI to the ESX hosts as VMFS.
  You can also look into presenting Ceph as NFS to vmware.
 
  Regards,
  ilya
 
 
 
  On 6/9/14, 8:17 AM, Gerolamo Valcamonica wrote:
  Hi Everybody,
 
  I have a production environment of cloudstack 4.3 based on KVM hosts
  and CEPH storage.
 
  It's a good solution for me and I have good performance on both the
  compute and storage side.
 
  But now I have an explicit customer request for a VMware environment, so
  I'm investigating it.
 
  Here are my questions:
  - Can I have a mixed environment of KVM + VMware vSphere Essentials Plus
  Kit under Cloudstack?
  - Can I have a mixed networking environment, so that I can, for
  example, have frontend VMs on KVM and backend VMs on VMware, for
  the same customer?
  (- Third, but off-topic, question: can I have VMware hosts and CEPH
  storage?)
 
  Is there someone with a similar environment who can give me suggestions
  about this?
 
 


 --
 Gerolamo Valcamonica
 Overweb Srl






Re: KVM + VMware (and ceph)

2014-06-09 Thread Ignazio Cassano
Hi, I think you could export an rbd device as an iSCSI LUN and let VMware
use it.
In this configuration you obtain a replicated LUN.
 On 09/06/2014 17:18, Gerolamo Valcamonica gerol...@pyder.com wrote:

 Hi Everybody,

 I have a production environment of cloudstack 4.3 based on KVM hosts and
 CEPH storage.

 It's a good solution for me and I have good performance on both the compute
 and storage side.

 But now I have an explicit customer request for a VMware environment, so I'm
 investigating it.

 Here are my questions:
 - Can I have a mixed environment of KVM + VMware vSphere Essentials Plus Kit
 under Cloudstack?
 - Can I have a mixed networking environment, so that I can, for example,
 have frontend VMs on KVM and backend VMs on VMware, for the same
 customer?
 (- Third, but off-topic, question: can I have VMware hosts and CEPH
 storage?)

 Is there someone with a similar environment who can give me suggestions about
 this?

 --
 Gerolamo Valcamonica



Re: Cloudstack ceph primary storage

2014-04-30 Thread Ignazio Cassano
Many, many thanks.
Do you know anything about integration with the Gluster block device
(libgfapi) as well?

2014-04-30 9:54 GMT+02:00 Wido den Hollander w...@widodh.nl:

 Since CloudStack 4.0 Primary Storage using Ceph's RBD is supported.

 I'd recommend at least 4.2; when 4.3.1 comes out, it includes fixes for
 some small bugs.

 Stability wise it's just fine since that is all handled by KVM/Qemu and
 librbd.

 I made some improvements in deployment times, those will go into 4.4

 For now I could recommend running with Ubuntu since that has the best Ceph
 support. 14.04 (the new LTS) has all you need.

 Wido


 On 04/29/2014 10:40 PM, Sebastien Goasguen wrote:

 Wido, cc'ed, has done the integration; he will be able to answer you.

 -sebastien


 On Apr 29, 2014, at 2:19 PM, Ignazio Cassano ignaziocass...@gmail.com
 wrote:

 Hi all,
 does cloudstack support the ceph remote block device (rbd) as primary storage?
 I checked the documentation and did not find any reference, while the ceph
 documentation says that support is available from cloudstack 4.0.
 PS:
 I am asking about kvm hypervisors.

 Regards
 Ignazio





Re: Cloudstack ceph primary storage

2014-04-30 Thread Ignazio Cassano
Many thanks.
I am going to test rbd :-)
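Before wiring it into CloudStack, a quick sanity check that qemu on the KVM
nodes was built with librbd (assumes /etc/ceph/ceph.conf plus a client keyring
are in place and a pool named rbd exists):

  qemu-img create -f raw rbd:rbd/cs-test 1G   # fails fast if qemu lacks rbd support
  qemu-img info rbd:rbd/cs-test
  rbd rm cs-test                              # clean up the test image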
Ignazio


2014-04-30 10:12 GMT+02:00 Nux! n...@li.nux.ro:

 On 30.04.2014 09:09, Ignazio Cassano wrote:

 Many, many thanks.
 Do you know anything about integration with the Gluster block device
 (libgfapi) as well?


 It works with 4.3, but you need to patch it manually; proper support is
 coming in v4.4.

 Lucian

 --
 Sent from the Delta quadrant using Borg technology!

 Nux!
 www.nux.ro



Cloudstack ceph primary storage

2014-04-29 Thread Ignazio Cassano
Hi all,
does cloudstack support the ceph remote block device (rbd) as primary storage?
I checked the documentation and did not find any reference, while the ceph
documentation says that support is available from cloudstack 4.0.
PS:
I am asking about kvm hypervisors.

Regards
Ignazio


Re: Cloudstack ceph primary storage

2014-04-29 Thread Ignazio Cassano
Many thanks
On 30/04/2014 01:20, Jonathan Gowar j...@whiteheat.org.uk wrote:

 Not Wido... but I used Ceph as primary storage on CS 4.2 and 4.3 with
 KVM; it works well. Here is a good reference of Wido's that I used:


 http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/

 On Tue, 2014-04-29 at 16:40 -0400, Sebastien Goasguen wrote:
  Wido, cc'ed, has done the integration; he will be able to answer you.
 
  -sebastien
 
 
  On Apr 29, 2014, at 2:19 PM, Ignazio Cassano ignaziocass...@gmail.com
 wrote:
 
   Hi all,
   does cloudstack support the ceph remote block device (rbd) as primary storage?
   I checked the documentation and did not find any reference, while the ceph
   documentation says that support is available from cloudstack 4.0.
   PS:
   I am asking about kvm hypervisors.
  
   Regards
   Ignazio
 





Re: Cloudstack with iscsi storage

2014-04-26 Thread Ignazio Cassano
You must not create GFS or any other filesystem, because with CLVM the virtual
machine disks are created directly on logical volumes.
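In other words, each guest disk is a bare logical volume that the hypervisor
attaches directly; a hypothetical example (volume group and LV names are
placeholders):

  # roughly what gets created per guest disk
  lvcreate -L 20G -n vm-root-0001 vg01
  # the guest is then pointed at the block device itself,
  # e.g. /dev/vg01/vm-root-0001, with no filesystem layered on top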
 On 25/04/2014 20:02, Nux! n...@li.nux.ro wrote:

 On 25.04.2014 17:59, rammohan ganapavarapu wrote:

 Do I have to create GFS or an ext filesystem on that LV, or can I just present
 the logical volume itself (/dev/mapper/vg01_vol01)?



 You need to configure CLVM, check for tutorials on the web.



 --
 Sent from the Delta quadrant using Borg technology!

 Nux!
 www.nux.ro



Re: How many vms per primary storage can offer best performance?

2013-07-04 Thread Ignazio Cassano
Hi, I think nfs is not a good solution here.
Try clvm over iscsi or fc.
Regards
On 04/07/2013 18:26, Conrad Geiger cgei...@it1solutions.com wrote:

 I would also say that 8 spindles for 15-20 VMs is low.  You are going to
 run out of iops.


 Sent from my Verizon Wireless 4G LTE Smartphone



  Original message 
 From: Ahmad Emneina aemne...@gmail.com
 Date: 07/04/2013 9:10 AM (GMT-05:00)
 To: Cloudstack users mailing list users@cloudstack.apache.org
 Subject: Re: How many vms per primary storage can offer best performance?


 I would google NFS tuning and test changes atomically. Changes range from
 the kernel level up through the switches (frame sizing), as well as
 introducing bonding. YMMV here; NFS tuning is in large part trial and error.
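Typical starting points for that trial and error (all values are illustrative,
not a recommendation for this specific setup):

  # server side, /etc/exports: async helps small-file workloads at a safety cost
  /export/primary  10.0.0.0/24(rw,async,no_subtree_check,no_root_squash)
  # client mount options: large rsize/wsize over tcp
  mount -t nfs -o rsize=1048576,wsize=1048576,tcp,hard 10.0.0.5:/export/primary /mnt/primary
  # jumbo frames end to end (hosts and switch ports must all agree)
  ip link set eth0 mtu 9000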


 On Thu, Jul 4, 2013 at 5:26 AM, WXR 1485739...@qq.com wrote:

  I use an NFS share as primary storage; the NFS share is on a RAID10 volume
  of 8 SATA HDDs.
  The network link is gigabit ethernet. The switch is a Dell PowerConnect.
 
  When I create just 15-20 vm instances and start them (without running any
  software on them), I find the disk IO performance of the vms is very low.
  If a file copy job on a pc needs 10 minutes, the same job on a vm needs
  20 minutes.
 
  I don't know if this is normal, and I want to know the correct configuration
  of the primary storage; I need suggestions from those with enough experience.



Re: Problems with Security Groups over CloudStack 4.0.1 with XenServer 6.0.2 and Basic Zone

2013-04-04 Thread Ignazio Cassano
Ciao Sergio, I suggest using Advanced zones instead of Basic.
I do not know CS4 very well, but in previous versions Advanced zones had a
lot more features.
Ciao
Ignazio
PS: let me know how this new version is.


2013/4/4 Sergio Tonani sergio.ton...@csi.it

 Hi all, I am trying CloudStack 4.0.1 with XenServer 6.0.2 in a Basic
 Zone...
 Security Groups do not work.
 I followed all the instructions in the manual. The CSP is installed, and the
 host network works in bridge mode.
 I have another cluster with KVM that works fine.

 On the XenServer host, CS doesn't write any ebtables rules, nor iptables
 rules. On the KVM host the ebtables and iptables rules are populated
 correctly.

 The log file management-server.log shows these messages when I create a new
 instance in a security group:

 2013-04-04 15:02:03,611 WARN [xen.resource.CitrixResourceBase]
 (DirectAgent-214:null) Host 10.102.90.3 cannot do bridge firewalling
 2013-04-04 15:02:03,612 DEBUG [agent.manager.DirectAgentAttache]
 (DirectAgent-214:null) Seq 8-949355071: Response Received:
 2013-04-04 15:02:03,612 DEBUG [agent.transport.Request]
 (DirectAgent-214:null)
 Seq 8-949355071: Processing: { Ans: , MgmtId: 218022145849384, via: 8,
 Ver: v1,
 Flags: 110,

 [{SecurityGroupRuleAnswer:{logSequenceNumber:1,vmId:13,reason:CANNOT_BRIDGE_FIREWALL,result:false,details:Host
 10.102.90.3 cannot do bridge firewalling,wait:0}}] }
 2013-04-04 15:02:03,615 DEBUG [network.security.SecurityGroupListener]
 (DirectAgent-214:null) Failed to program rule
 com.cloud.agent.api.SecurityGroupRuleAnswer into host 8 due to Host
 10.102.90.3
 cannot do bridge firewalling and updated jobs
 2013-04-04 15:02:03,615 DEBUG [network.security.SecurityGroupListener]
 (DirectAgent-214:null) Not retrying security group rules for vm 13 on
 failure
 since host 8 cannot do bridge firewalling
 2013-04-04 15:02:03,617 DEBUG [network.security.SecurityGroupListener]
 (DirectAgent-214:null) Failed to program rule
 com.cloud.agent.api.SecurityGroupRuleAnswer into host 8 due to Host
 10.102.90.3
 cannot do bridge firewalling and updated jobs
 2013-04-04 15:02:03,617 DEBUG [network.security.SecurityGroupListener]
 (DirectAgent-214:null) Not retrying security group rules for vm 13 on
 failure
 since host 8 cannot do bridge firewalling

 Where could I start to troubleshoot SecurityGroups on XenServer? Any
 suggestions?
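Two hedged things worth checking on the XenServer host, since both relate to
how CloudStack detects bridge-firewalling support:

  # security groups need the bridge backend, not openvswitch
  cat /etc/xensource/network.conf       # should print "bridge"
  xe-switch-network-backend bridge      # switch and reboot if it prints "openvswitch"
  # iptables must see bridged traffic
  sysctl net.bridge.bridge-nf-call-iptables   # expected to be 1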

  __
  Sergio Tonani