Re: [Users] Gluster question

2014-02-05 Thread Maurice James
OK, I think I got it now. What I meant by NFS on each host was: in oVirt you
can set up an NFS storage domain on each host and have it available to all
hosts in the cluster.

 

 

From: Andrew Lau [mailto:and...@andrewklau.com] 
Sent: Wednesday, February 05, 2014 9:25 PM
To: Maurice James
Cc: users
Subject: Re: [Users] Gluster question

 

I'm not sure what you mean by NFS on each host, but you'll need some way to at
least ensure the data is available, be that replicated Gluster, a centralized
SAN, etc.

 

On Thu, Feb 6, 2014 at 1:21 PM, Maurice James <midnightst...@msn.com> wrote:

Hmm. So in that case would I be able to drop the Gluster setup, use NFS on
each host, and make sure power fencing is enabled? Will that still achieve
fault tolerance, or is a replicated gluster still required?

 

From: Andrew Lau [mailto:and...@andrewklau.com]
Sent: Wednesday, February 05, 2014 9:17 PM
To: Maurice James
Cc: users
Subject: Re: [Users] Gluster question

 

There was another recent post about this, but to sum up:

 

You must have power fencing to support VM HA; otherwise there'll be an issue
with the engine not knowing whether the VM is still running, so it won't bring
it up on a new host, to avoid data corruption. Also make sure you have your
quorum set up properly, based on your replication scenario, so you can
withstand one host being lost.

 

I don't believe they'll "keep running" as such when the host is
lost, but they would restart on another host. At least that's what I've
noticed in my case.

 

On Thu, Feb 6, 2014 at 1:04 PM, Maurice James <midnightst...@msn.com> wrote:

I currently have a new setup running oVirt 3.3.3. I have a Gluster storage
domain with roughly 2.5TB of usable space. Gluster is installed on the same
systems as the oVirt hosts. The host breakdown is as follows:

 

Ovirt DC:

4 hosts in the cluster. Each host has 4 physical disks in a RAID 5. Each
disk is 500GB. With the OS installed and configured I end up with 1.2TB of
usable space left for my data volume

 

Gluster volume:

4 bricks with 1.2TB of space per brick (Distribute Replicate leaves me with
about 2.5TB in the storage domain)

 

 

Does this setup give me enough fault tolerance to survive losing a host and
have my HA VMs automatically move to an available host and keep running?

 


___
Users mailing list
Users@ovirt.org  
http://lists.ovirt.org/mailman/listinfo/users

 

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Gluster question

2014-02-05 Thread Andrew Lau
I'm not sure what you mean by NFS on each host, but you'll need some way to at
least ensure the data is available, be that replicated Gluster, a centralized
SAN, etc.

On Thu, Feb 6, 2014 at 1:21 PM, Maurice James  wrote:

> Hmm. So in that case would I be able to drop the Gluster setup, use NFS on
> each host, and make sure power fencing is enabled? Will that still achieve
> fault tolerance, or is a replicated gluster still required?
>
>
>
> *From:* Andrew Lau [mailto:and...@andrewklau.com]
> *Sent:* Wednesday, February 05, 2014 9:17 PM
> *To:* Maurice James
> *Cc:* users
> *Subject:* Re: [Users] Gluster question
>
>
>
> There was another recent post about this, but to sum up:
>
>
>
> You must have power fencing to support VM HA; otherwise there'll be an issue
> with the engine not knowing whether the VM is still running, so it won't
> bring it up on a new host, to avoid data corruption. Also make sure you have
> your quorum set up properly, based on your replication scenario, so you can
> withstand one host being lost.
>
>
>
> I don't believe they'll "keep running" as such when the host
> is lost, but they would restart on another host. At least that's what
> I've noticed in my case.
>
>
>
> On Thu, Feb 6, 2014 at 1:04 PM, Maurice James 
> wrote:
>
> I currently have a new setup running oVirt 3.3.3. I have a Gluster storage
> domain with roughly 2.5TB of usable space. Gluster is installed on the
> same systems as the oVirt hosts. The host breakdown is as follows:
>
>
>
> Ovirt DC:
>
> 4 hosts in the cluster. Each host has 4 physical disks in a RAID 5. Each
> disk is 500GB. With the OS installed and configured I end up with 1.2TB of
> usable space left for my data volume
>
>
>
> Gluster volume:
>
> 4 bricks with 1.2TB of space per brick (Distribute Replicate leaves me
> with about 2.5TB in the storage domain)
>
>
>
>
>
> Does this setup give me enough fault tolerance to survive losing a host
> and have my HA VMs automatically move to an available host and keep running?
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Gluster question

2014-02-05 Thread Maurice James
Hmm. So in that case would I be able to drop the Gluster setup, use NFS on
each host, and make sure power fencing is enabled? Will that still achieve
fault tolerance, or is a replicated gluster still required?

 

From: Andrew Lau [mailto:and...@andrewklau.com] 
Sent: Wednesday, February 05, 2014 9:17 PM
To: Maurice James
Cc: users
Subject: Re: [Users] Gluster question

 

There was another recent post about this, but to sum up:

 

You must have power fencing to support VM HA; otherwise there'll be an issue
with the engine not knowing whether the VM is still running, so it won't bring
it up on a new host, to avoid data corruption. Also make sure you have your
quorum set up properly, based on your replication scenario, so you can
withstand one host being lost.

 

I don't believe they'll "keep running" as such when the host is
lost, but they would restart on another host. At least that's what I've
noticed in my case.

 

On Thu, Feb 6, 2014 at 1:04 PM, Maurice James <midnightst...@msn.com> wrote:

I currently have a new setup running oVirt 3.3.3. I have a Gluster storage
domain with roughly 2.5TB of usable space. Gluster is installed on the same
systems as the oVirt hosts. The host breakdown is as follows:

 

Ovirt DC:

4 hosts in the cluster. Each host has 4 physical disks in a RAID 5. Each
disk is 500GB. With the OS installed and configured I end up with 1.2TB of
usable space left for my data volume

 

Gluster volume:

4 bricks with 1.2TB of space per brick (Distribute Replicate leaves me with
about 2.5TB in the storage domain)

 

 

Does this setup give me enough fault tolerance to survive losing a host and
have my HA VMs automatically move to an available host and keep running?

 


___
Users mailing list
Users@ovirt.org  
http://lists.ovirt.org/mailman/listinfo/users

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Gluster question

2014-02-05 Thread Andrew Lau
There was another recent post about this, but to sum up:

You must have power fencing to support VM HA; otherwise there'll be an issue
with the engine not knowing whether the VM is still running, so it won't bring
it up on a new host, to avoid data corruption. Also make sure you have your
quorum set up properly, based on your replication scenario, so you can
withstand one host being lost.

I don't believe they'll "keep running" as such when the host is
lost, but they would restart on another host. At least that's what I've
noticed in my case.
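
For reference, on a replicated Gluster volume the quorum settings mentioned
above are usually applied from the gluster CLI along these lines (VOLNAME is
a placeholder; exact option names depend on the Gluster version):

# client-side quorum: writes only allowed while a majority of the replica
# set is reachable
gluster volume set VOLNAME cluster.quorum-type auto
# server-side quorum: bricks are stopped if the trusted pool loses quorum
gluster volume set VOLNAME cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%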


On Thu, Feb 6, 2014 at 1:04 PM, Maurice James  wrote:

> I currently have a new setup running oVirt 3.3.3. I have a Gluster storage
> domain with roughly 2.5TB of usable space. Gluster is installed on the
> same systems as the oVirt hosts. The host breakdown is as follows:
>
>
>
> Ovirt DC:
>
> 4 hosts in the cluster. Each host has 4 physical disks in a RAID 5. Each
> disk is 500GB. With the OS installed and configured I end up with 1.2TB of
> usable space left for my data volume
>
>
>
> Gluster volume:
>
> 4 bricks with 1.2TB of space per brick (Distribute Replicate leaves me
> with about 2.5TB in the storage domain)
>
>
>
>
>
> Does this setup give me enough fault tolerance to survive losing a host
> and have my HA VMs automatically move to an available host and keep running?
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Import VMware

2014-02-05 Thread Maurice James
Ok. Thanks. Sounds do-able

 

From: Ted Miller [mailto:tmil...@hcjb.org] 
Sent: Wednesday, February 05, 2014 3:44 PM
To: Maurice James; users@ovirt.org
Subject: RE: [Users] Import VMware

 

From: Maurice James <midnightst...@msn.com>

 

Do you know of the best way to get a vmware guest into ovirt without
virt-v2v by chance?

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Ted Miller

 

>> On 2/4/2014 10:49 AM, Maurice James wrote:

>>> Is it possible to import VMware images into oVirt 3.3, or is a running
ESX instance still required?

 

>> This bug https://rhn.redhat.com/errata/RHBA-2013-1749.html officially
withdrew support for importing image files directly, because it didn't
always work.

>> Ted Miller 


 

> From: Maurice James <midnightst...@msn.com>

 

> Do you know of the best way to get a vmware guest into ovirt without
virt-v2v by chance?

 



No.  I had a gluster + sanlock problem take out my ovirt cluster (2 hosts),
and I only have it partially back up.  My dozen VMs are currently available
only when my (dual boot) hardware isn't running oVirt.  Or, to put it the
other way, I can only run oVirt when I can take down the VMWare group,
because I don't have spare hardware.  Working on rebuilding one VM in KVM
today (VMWare copy had a problem).

The only way I have heard of succeeding is to use ESX/ESXi or the "hollow pig"
method: create a VM in oVirt, including the hard drive, replace the hard drive
file with the file from VMWare (or otherwise get the data into the file), and
fiddle with the VM hardware & settings until it runs.

Ted Miller
Elkhart, IN

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Quotas

2014-02-05 Thread Maurice James
Thanks for the input.

-Original Message-
From: Gilad Chaplik [mailto:gchap...@redhat.com] 
Sent: Wednesday, February 05, 2014 3:48 AM
To: Maurice James
Cc: users@ovirt.org
Subject: Re: [Users] Quotas

Hi Maurice, 

With quota you can set a hard limit on the number of CPUs consumed by the user,
and by that limit the number of VMs; the other quota parameters (memory and
storage) can be unlimited.

Although these videos are for version 3.1 they are pretty useful, since there
haven't been any drastic changes since then:
http://www.youtube.com/watch?v=zazJ_fW05Qk
http://www.youtube.com/watch?v=y8qUoRImimY

you are more than welcome to send your progress/use-case/any other question :-)

Thanks,
Gilad.

- Original Message -
> From: "Maurice James" 
> To: users@ovirt.org
> Sent: Wednesday, February 5, 2014 3:47:51 AM
> Subject: [Users] Quotas
> 
> 
> 
> How do I limit the number of VMs that a user can create using quotas?
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Gluster question

2014-02-05 Thread Maurice James
I currently have a new setup running oVirt 3.3.3. I have a Gluster storage
domain with roughly 2.5TB of usable space. Gluster is installed on the same
systems as the oVirt hosts. The host breakdown is as follows:

 

Ovirt DC:

4 hosts in the cluster. Each host has 4 physical disks in a RAID 5. Each
disk is 500GB. With the OS installed and configured I end up with 1.2TB of
usable space left for my data volume

 

Gluster volume:

4 bricks with 1.2TB of space per brick (Distribute Replicate leaves me with
about 2.5TB in the storage domain)
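
For reference, the arithmetic behind that number: in a distributed-replicated
volume with replica 2, the four 1.2TB bricks form two replica pairs, so the
usable capacity is roughly 4 x 1.2TB / 2 = 2.4TB, in line with the ~2.5TB
reported. Such a volume would typically be created along these lines (host
names and brick paths are placeholders):

gluster volume create datavol replica 2 \
    host1:/bricks/data host2:/bricks/data \
    host3:/bricks/data host4:/bricks/data
gluster volume start datavol

With replica 2, bricks are paired in the order given, so host1/host2 mirror
each other and host3/host4 mirror each other.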

 

 

Does this setup give me enough fault tolerance to survive losing a host and
have my HA VMs automatically move to an available host and keep running?

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Mixing tagged and untagged VLANs

2014-02-05 Thread Trey Dockendorf
I attempted to set ovirtmgmt to untagged and make it a VM network,
but that would not save.

The configuration:

ovirtmgmt:
 - Display Network
 - Migration Network
 - VM Network
 - NO VLAN

ipmi:
 - VM Network
 - VLAN 2

When I go into a host's Network Interfaces and attempt to sync the
removal of VLAN 1 from ovirtmgmt I get the following message:

"Cannot setup Networks. The following Network Interfaces can have only
a single VM Logical Network, or at most one non-VM Logical Network
and/or several VLAN Logical Networks: eth0."

So simply having one untagged and the rest tagged doesn't work.  Is
that expected?

The use case here is simply to minimize cabling and hardware
requirements as our current hypervisors have two GbE interfaces and a
single IB interface.  Our connectivity requirements are to 3 networks,
public (direct connected to switches we have no control over for
campus) and private (internal LAN) and our dedicated IPMI /
Out-of-Band network (internal LAN).  To interface VMs with all 3
networks and only 2 uplinks we used a tagged VLAN on our primary
private network switch to link up our IPMI network, as long as the
interface is tagged with VLAN 2. The private network continues to work
as-is by remaining untagged.

At the moment since the untagged ovirtmgmt + tagged ipmi logical
network isn't allowed in oVirt we've had to explicitly set VLAN 1
(private) to tagged for specific ports on our switch.  While this
isn't a huge issue, to us it's an unnecessary complication of
something that "should" work.  I'm able to achieve the desired
untagged + tagged setup using bridges in Linux when configured by
hand, so I know that it technically can be done.
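
For illustration, the hand-built setup referred to above (one untagged bridge
plus one VLAN-tagged bridge on the same NIC) looks roughly like this; the
interface and bridge names are placeholders:

# untagged network: plain bridge carrying eth0 directly
brctl addbr ovirtmgmt
brctl addif ovirtmgmt eth0
# tagged network: 802.1Q sub-interface for VLAN 2, bridged separately
ip link add link eth0 name eth0.2 type vlan id 2
brctl addbr ipmi
brctl addif ipmi eth0.2
ip link set eth0 up
ip link set eth0.2 up
ip link set ovirtmgmt up
ip link set ipmi up

This works at the Linux level; oVirt's own network validation is stricter,
as the error message quoted above shows.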

Thanks
- Trey

On Tue, Feb 4, 2014 at 2:11 AM, Assaf Muller  wrote:
>> Is it not possible to have multiple untagged VLAN networks associated
> to one interface in oVirt?
>
> No, not at this time.
>
> You can have one untagged network and N tagged networks on the same device,
> but only up to one untagged network.
>
> If you need multiple untagged networks on a single device then you're very
> welcome to report an RFE :)
>
>
> Assaf Muller, Cloud Networking Engineer
> Red Hat
>
> - Original Message -
> From: "Trey Dockendorf" 
> To: "users" 
> Sent: Monday, February 3, 2014 10:45:44 PM
> Subject: [Users] Mixing tagged and untagged VLANs
>
> Using 3.3.2 I seem unable to mix tagged and untagged VLANs on a single
> interface.  I'm trying to put the following logical networks on a
> host's eth0.
>
> ovirtmgmt:
>  - Display Network
>  - Migration Network
>  - NOT VM Network
>  - NO VLAN
>
> private:
>  - VM network
>  - NO VLAN
>
> ipmi:
>  - VM Network
>  - VLAN 2
>
> In the host's network setup ovirtmgmt is already linked to eth0.  If I
> attach 'ipmi (VLAN 2)' then try and attach 'private' the message is
> "Cannot have more than one non-VLAN network on one interface".  Same
> occurs if I try and attach 'private' when only 'ovirtmgmt' is assigned
> to eth0.
>
> Is it not possible to have multiple untagged VLAN networks associated
> to one interface in oVirt?
>
> Thanks
> - Trey
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Import VMware

2014-02-05 Thread Ted Miller
From: Maurice James 

Do you know of the best way to get a vmware guest into ovirt without virt-v2v 
by chance?

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Ted 
Miller

>> On 2/4/2014 10:49 AM, Maurice James wrote:

>>> Is it possible to import VMware images into oVirt 3.3, or is a running ESX
>>> instance still required?

>> This bug https://rhn.redhat.com/errata/RHBA-2013-1749.html officially 
>> withdrew support for importing image files directly, because it didn't 
>> always work.

>> Ted Miller


> From: Maurice James 

> Do you know of the best way to get a vmware guest into ovirt without virt-v2v 
> by chance?



No.  I had a gluster + sanlock problem take out my ovirt cluster (2 hosts), and 
I only have it partially back up.  My dozen VMs are currently available only 
when my (dual boot) hardware isn't running oVirt.  Or, to put it the other way, 
I can only run oVirt when I can take down the VMWare group, because I don't 
have spare hardware.  Working on rebuilding one VM in KVM today (VMWare copy 
had a problem).

The only way I have heard of succeeding is to use ESX/ESXi or the "hollow pig"
method: create a VM in oVirt, including the hard drive, replace the hard drive
file with the file from VMWare (or otherwise get the data into the file), and
fiddle with the VM hardware & settings until it runs.
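
As one way to "get data into the file", the VMWare disk can usually be
converted with qemu-img; the paths, image names and target format below are
placeholders and depend on how the oVirt disk was created:

# convert the VMware disk and write it over the empty disk created for
# the new oVirt VM
qemu-img convert -p -f vmdk -O raw guest-flat.vmdk \
    /path/to/storage/domain/images/<image-id>/<volume-id>
# sanity-check the result
qemu-img info /path/to/storage/domain/images/<image-id>/<volume-id>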

Ted Miller
Elkhart, IN

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] A/B network setup

2014-02-05 Thread Pat Pierson
I am having some issues wrapping my head around this, but what I am trying
to set up is an A/B testing environment with a 3-node cluster. Each node has
2 NICs, 1 for ovirtmgmt and 1 for the VLANed A/B network. I guess what I am
trying to understand is whether oVirt is tagging the VLANs I set up and
properly passing that to the switch. This would allow me to have multiple
subnets on a host that the switch can also see as multiple VLANs, right?
I would still have to configure the switch to allow that port access to
both VLANs.
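
For reference, whether the host is actually sending tagged frames can be
checked on the host itself (NIC names below are placeholders):

# VLAN devices created for tagged logical networks (e.g. eth1.2 for VLAN 2)
cat /proc/net/vlan/config
ip -d link show eth1.2
# watch for 802.1Q-tagged frames leaving the uplink
tcpdump -nn -e -i eth1 vlan

If the frames leave the NIC tagged, the switch port only needs to be
configured to carry those VLANs (tagged/trunk), as assumed above.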

-- 
Patrick Pierson
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] failed nested vm only with spice and not vnc

2014-02-05 Thread Gianluca Cecchi
I replicated the problem with the same environment but attached to an iSCSI
storage domain,
so the Gluster part is not involved.
As soon as I run a VM on the host, the VM goes into a paused state,
and in the host's messages I see:


Feb  5 19:22:45 localhost kernel: [16851.192234] cgroup: libvirtd
(1460) created nested cgroup for controller "memory" which has
incomplete hierarchy support. Nested cgroups may change behavior in
the future.
Feb  5 19:22:45 localhost kernel: [16851.192240] cgroup: "memory"
requires setting use_hierarchy to 1 on the root.
Feb  5 19:22:46 localhost kernel: [16851.228204] device vnet0 entered
promiscuous mode
Feb  5 19:22:46 localhost kernel: [16851.236198] ovirtmgmt: port
2(vnet0) entered forwarding state
Feb  5 19:22:46 localhost kernel: [16851.236208] ovirtmgmt: port
2(vnet0) entered forwarding state
Feb  5 19:22:46 localhost kernel: [16851.591058] qemu-system-x86:
sending ioctl 5326 to a partition!
Feb  5 19:22:46 localhost kernel: [16851.591074] qemu-system-x86:
sending ioctl 80200204 to a partition!
Feb  5 19:22:46 localhost vdsm vm.Vm WARNING
vmId=`7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0`::_readPauseCode
unsupported by libvirt vm
Feb  5 19:22:47 localhost avahi-daemon[449]: Registering new address
record for fe80


And in qemu.log for the VM:

2014-02-05 18:22:46.280+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c6i -S -machine
pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-6,serial=421F4B4A-9A4C-A7C4-54E5-847BF1ADE1A5,uuid=7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6i.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-02-05T18:22:45,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/----/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-drive 
file=/rhev/data-center/mnt/blockSD/f741671e-6480-4d7b-b357-8cf6e8d2c0f1/images/0912658d-1541-4d56-945b-112b0b074d29/67aaf7db-4d1c-42bd-a1b0-988d95c5d5d2,if=none,id=drive-virtio-disk0,format=qcow2,serial=0912658d-1541-4d56-945b-112b0b074d29,cache=none,werror=stop,rerror=stop,aio=native
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:17,bus=pci.0,addr=0x3,bootindex=3
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.com.redhat.rhevm.vdsm,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.org.qemu.guest_agent.0,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
KVM: unknown exit, hardware reason 3
EAX=0094 EBX=6e44 ECX=000e EDX=0511
ESI=0002 EDI=6df8 EBP=6e08 ESP=6dd4
EIP=3ffe1464 EFL=0017 [APC] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010   00c09300 DPL=0 DS   [-WA]
CS =0008   00c09b00 DPL=0 CS32 [-RA]
SS =0010   00c09300 DPL=0 DS   [-WA]
DS =0010   00c09300 DPL=0 DS   [-WA]
FS =0010   00c09300 DPL=0 DS   [-WA]
GS =0010   00c09300 DPL=0 DS   [-WA]
LDT=   8200 DPL=0 LDT
TR =   8b00 DPL=0 TSS32-busy
GDT= 000fd3a8 0037
IDT= 000fd3e6 
CR0=0011 CR2= CR3= CR4=
DR0= DR1= DR2=
DR3=
DR6=0ff0 DR7=0400
EFER=
Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 05 00 00 eb 01 ec <49>
83 f9 ff 75 f9 c3 57 5

Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm

2014-02-05 Thread Alon Bar-Lev


- Original Message -
> From: "Gianluca Cecchi" 
> To: "Alon Bar-Lev" 
> Cc: "Itamar Heim" , "users" 
> Sent: Wednesday, February 5, 2014 8:29:49 PM
> Subject: Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm
> 
> On Wed, Feb 5, 2014 at 7:19 PM, Alon Bar-Lev  wrote:
> >
> >
> > - Original Message -
> >> From: "Itamar Heim"
> 
> >>
> >> engine will deploy the latest vdsm the host sees based on the repos
> >> configured on the host?
> >
> > indeed.
> 
> I thought some sort of archive of rpm packages was "embedded" in and delivered
> from the engine itself... and I thought it was in the ovirt-host-deploy
> package on the engine.
> I didn't have any ovirt repo configured on the host at the deploy stage. Why
> not use the engine's repo?
> 
> Gianluca
> 

I also suggested this a while ago...
However, the current implementation is to use the system repo.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm

2014-02-05 Thread Gianluca Cecchi
On Wed, Feb 5, 2014 at 7:19 PM, Alon Bar-Lev  wrote:
>
>
> - Original Message -
>> From: "Itamar Heim"

>>
>> engine will deploy the latest vdsm the host sees based on the repos
>> configured on the host?
>
> indeed.

I thought some sort of archive of rpm packages was "embedded" in and delivered
from the engine itself... and I thought it was in the ovirt-host-deploy
package on the engine.
I didn't have any ovirt repo configured on the host at the deploy stage. Why
not use the engine's repo?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm

2014-02-05 Thread Alon Bar-Lev


- Original Message -
> From: "Itamar Heim" 
> To: "Gianluca Cecchi" , "users" 
> Sent: Wednesday, February 5, 2014 8:17:38 PM
> Subject: Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm
> 
> On 02/05/2014 06:40 PM, Gianluca Cecchi wrote:
> > Hello,
> > fedora 19 engine 3.3.3 with ovirt-host-deploy-1.1.3-1.fc19.noarch
> >
> > when I deploy a fedora 19 host from web admin gui I get on it
> > vdsm-4.13.0-11.fc19.x86_64
> > (it seems updated at november 2013)
> >
> > If after that I explicitly enable ovirt 3.3.3 yum repo and run yum
> > update on host it correctly passes to :
> >
> > vdsm-4.13.3-3.fc19.x86_64
> >
> > Gianluca
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> 
> engine will deploy the latest vdsm the host sees based on the repos
> configured on the host?

indeed.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm

2014-02-05 Thread Itamar Heim

On 02/05/2014 06:40 PM, Gianluca Cecchi wrote:

Hello,
fedora 19 engine 3.3.3 with ovirt-host-deploy-1.1.3-1.fc19.noarch

when I deploy a fedora 19 host from the web admin GUI I get
vdsm-4.13.0-11.fc19.x86_64 on it
(it seems to date from November 2013).

If after that I explicitly enable the oVirt 3.3.3 yum repo and run yum
update on the host, it correctly updates to:

vdsm-4.13.3-3.fc19.x86_64

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



engine will deploy the latest vdsm the host sees based on the repos 
configured on the host?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ovirt 3.3.3 host deploy push an "old" vdsm

2014-02-05 Thread Gianluca Cecchi
Hello,
fedora 19 engine 3.3.3 with ovirt-host-deploy-1.1.3-1.fc19.noarch

when I deploy a fedora 19 host from the web admin GUI I get
vdsm-4.13.0-11.fc19.x86_64 on it
(it seems to date from November 2013).

If after that I explicitly enable the oVirt 3.3.3 yum repo and run yum
update on the host, it correctly updates to:

vdsm-4.13.3-3.fc19.x86_64
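
For reference, the check and update on the host amount to something like
this (which repo provides the newer vdsm depends on the ovirt-release
package installed on the host):

# see whether any oVirt repo is enabled on the host
yum repolist enabled | grep -i ovirt
# once the oVirt 3.3.3 repo is enabled, pull the newer vdsm
yum update vdsm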

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] New iSCSI domain problem when first input wrong password

2014-02-05 Thread Gianluca Cecchi
Hello,
I have a fedora 19 engine based on 3.3 final.
I installed a fedora 19 host (configured as an infrastructure server) and
then deployed it from the web GUI. Then a yum update of the host updated vdsm
(see the separate thread I'm going to open for this).

I create an iSCSI DC and add that host to it.
When I try to create the new iSCSI storage domain (a software iSCSI target on
a CentOS 6.5 server), I initially input the wrong CHAP password, so the
discovery goes OK while authentication fails. Then I put in the correct
password; it apparently goes on (I see the exposed LUN) but then gives an error.

In the webadmin GUI I have:
2014-Feb-05, 16:57
Failed to attach Storage Domain OV01 to Data Center ISCSI. (User:
admin@internal)

2014-Feb-05, 16:57
Failed to attach Storage Domains to Data Center ISCSI. (User: admin@internal)

2014-Feb-05, 16:57
The error message for connection 192.168.230.101
iqn.2013-09.local.localdomain:c6iscsit.target11 (LUN 1IET_00010001)
returned by VDSM was: Failed to setup iSCSI subsystem

2014-Feb-05, 16:57
Storage Domain OV01 was added by admin@internal

A workaround to get it activated is then (see logs at 18:11):

- put the host in maintenance
- reactivate it
- attach the SD to the DC
- OK (but see the watermark message below, which I don't understand)


2014-Feb-05, 18:13
The system has reached the 80% watermark on the VG metadata area size
on OV01. This is due to a high number of Vdisks or large Vdisks size
allocated on this specific VG.

2014-Feb-05, 18:13
Storage Domains were attached to Data Center ISCSI by admin@internal

2014-Feb-05, 18:13
Storage Domain OV01 (Data Center ISCSI) was activated by admin@internal

2014-Feb-05, 18:13
Storage Pool Manager runs on Host ovnode03 (Address: 192.168.33.44).

2014-Feb-05, 18:12
Data Center is being initialized, please wait for initialization to complete.

2014-Feb-05, 18:12
State was set to Up for host ovnode03.

2014-Feb-05, 18:11
Host ovnode03 was activated by admin@internal.

2014-Feb-05, 18:11
Host ovnode03 was switched to Maintenance mode by admin@internal.

See my vdsm.log and supervdsm.log
here
https://drive.google.com/file/d/0BwoPbcrMv8mval9XMnN4ZGoxa00/edit?usp=sharing

https://drive.google.com/file/d/0BwoPbcrMv8mveC1KQmo1dFAtNzQ/edit?usp=sharing

Possibly an initially wrong password is not handled well when the correct
one is entered afterwards, as I see some tracebacks.
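
For reference, the CHAP credentials can be verified by hand outside oVirt
with iscsiadm, using the portal and IQN from the events above (USER and
SECRET are placeholders):

iscsiadm -m discovery -t sendtargets -p 192.168.230.101
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 \
    -p 192.168.230.101 -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 \
    -p 192.168.230.101 -o update -n node.session.auth.username -v USER
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 \
    -p 192.168.230.101 -o update -n node.session.auth.password -v SECRET
iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 \
    -p 192.168.230.101 --login

If a manual login succeeds with the same credentials, that would point at
how the earlier failed attempt is handled rather than at the target itself.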


Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Sven Kieske
Well, I didn't know the exact background for this code,
and I can understand it from a management perspective, but from
a sysadmin perspective it is useless (it does not prevent anything
against an informed "attacker") and may even lead to false security
assumptions ("nobody can mess with libvirt, it's all authenticated").

But thanks for pointing out the reasoning behind this, I still don't
like it, but I can understand it.

(Funny side fact: the very first thing we did, when we found that
libvirt just allows authenticated access was to find out how to
create our own user, and every admin asks at first: how can I access
libvirt, when something goes wrong?)


Am 05.02.2014 13:45, schrieb Dan Kenigsberg:
> On Wed, Feb 05, 2014 at 09:50:04AM +, Sven Kieske wrote:
>> I can confirm that vdsm@ovirt does work.
>>
>> However, I have the strong feeling that
>> the password in /etc/pki/vdsm/keys/libvirt_password
>> is static for all installations.
>>
>> And gerrit proves me right:
>>
>> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirt_password;h=09e60bce9bc401bb8943154f7cb9cb08bd0f49da;hb=refs/heads/master
>>
>> So what is the purpose of authentication when that information
>> is public?
>>
>> I created a BZ for this:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1061639
>>
>> PS: I hope, whoever coded this, feels a little bit ashamed
>> and perhaps buys a good book on writing secure code and reads it..
> 
> I feel ashamed, but not due to the "security" issue here.
> 
> Vdsm uses a unix domain socket to connect to libvirtd. That socket is
> owned by vdsm, so that only vdsm and root can use it. There is no
> security reason to use a password at all.
> 
> I am ashamed for caving in and adding an obfuscation layer, designed
> only to deter local administrators from messing with libvirt under the
> feet of ovirt. This little hurdle does not deter from messing with qemu
> directly, but I suppose that qemu's command line does a good job anyway.
> 
> Red Hat support folks repeatedly claim that this hurdle is more
> effective than putting a release note warning of the dangers in direct
> libvirt access.
> 
> Dan.
> 
> 
> 
> 
> 

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Putting a Gluster host into an ISCSI DC

2014-02-05 Thread Gianluca Cecchi
On Wed, Feb 5, 2014 at 4:10 PM, Itamar Heim  wrote:
> Are the volumes hosted on the host?

Yes, but I have put down all resources related to it
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Putting a Gluster host into an ISCSI DC

2014-02-05 Thread Itamar Heim
Are the volumes hosted on the host?
On Feb 5, 2014 12:18 PM, "Gianluca Cecchi" 
wrote:

> Hello,
> in a test infra I would to temporarily put a Host that is part of
> Gluster DC and has gluster volumes active into an ISCSI DC.
>
> I put Gluster DC in maintenance, stop glustervolume and put both hosts
> in maintenance.
> Then I try to edit host and put into the ISCSI DC.
> But I receive error
>
> Error while executing action:
>
> node02:
>
> Cannot Edit host. Server having Gluster volume
>
> Is there any basic check so that I cannot do this at all?
> Any tip?
>
> Thanks,
> Gianluca
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] My first wiki page

2014-02-05 Thread Juan Pablo Lorier
Thank you, I'll update the wiki in a few days to include this info (and
I'll use it myself).

Regards,

El 05/02/14 04:43, Sahina Bose escribió:


On 02/03/2014 07:18 PM, Juan Pablo Lorier wrote:

Hi,

I've created my first wiki page and I'd like someone to review it and
tell me if there's something that needs to be changed (besides it does
not have any style yet).
The URL is
http://www.ovirt.org/oVirt_Wiki:How_to_change_Gluster%27s_network_interface
Regards,



Firstly, thanks for putting this information up!

Some comments -

1. When you use different IP addresses for engine-to-gluster-host
communication (say IP1) and gluster-to-gluster communication (say IP2),
operations from the oVirt engine like add brick or remove brick would fail
(as the brick is added with IP1, which gluster does not understand).


To work around this, it is better to use an FQDN both for registering
the host with the engine and also to peer probe the host from the gluster CLI.
You could have multiple IP addresses on the host resolve to the same
FQDN.


2. To reuse a brick directory, gluster provides the force option
during volume creation as well as when adding bricks. This is available
from gluster 3.5 upwards.
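
For reference, the two points above look roughly like this on the gluster
CLI (host names, volume name and brick paths are placeholders; the force
option for reusing a brick directory needs gluster 3.5 as noted):

# 1. probe peers by FQDN so the engine and gluster resolve the same name
gluster peer probe gluster2.example.com
# 2. reuse existing brick directories with the force option
gluster volume create myvol replica 2 \
    gluster1.example.com:/export/brick1 \
    gluster2.example.com:/export/brick1 force
gluster volume add-brick myvol \
    gluster3.example.com:/export/brick1 \
    gluster4.example.com:/export/brick1 force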


[Adding Vijay to correct me, if I'm wrong]






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Dan Kenigsberg
On Wed, Feb 05, 2014 at 02:25:08PM +0100, Itamar Heim wrote:
> On 02/05/2014 09:27 AM, Sven Kieske wrote:
> >Well, I just tested this.
> >and I can't connect to virsh with this information.
> >I guess the mentioned user vdsm@rhevh might not be the actual one
> >used in 3.3.2 anymore? (mail is from 2012 and mentions rhev, so..)
> 
> well, that's a question for vdsm-devel

Yes, vdsm@rhevh is a remnant of the past (that can still be seen on the
RHEV product). I believe that it can be very easily dropped from there,
too. (And `/usr/sbin/saslpasswd2 -p -a libvirt -d vdsm@ovirt` moved to
vdsm-tool, for symmetry.)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Will this two node concept scale and work?

2014-02-05 Thread Jorick Astrego



On Wed, 2014-02-05 at 08:49 -0500, Yedidyah Bar David wrote:
> 
> 
> From: "ml ml" 
> To: Users@ovirt.org
> Sent: Wednesday, February 5, 2014 12:45:55 PM
> Subject: [Users] Will this two node concept scale and work?
> 
> 
> 
> Hello List,
> 
> 
> 
> my aim is to host multiple VMs which are redundant and are
> highly available. It should also scale well.
> 
> 
> 
> I think usually people just buy a fat iSCSI Storage and attach
> this. In my case it should scale well from very small nodes to
> big ones.
> 
> Therefore an iSCSI Target will bring a lot of overhead (10GBit
> Links and two Paths, and really i should have a 2nd Hot
> Standby SAN, too). This makes scalability very hard.
> 
> 
> 
> 
> This post is also not meant to be an iSCSI discussion.
> 
> 
> 
> Since oVirt does not support DRBD out of the box i came up
> with my own concept:
> 
> 
> http://oi62.tinypic.com/2550xg5.jpg
> 
> 
> 
> As far as i can tell i have the following advantages:
> 
> 
> - i can start with two simple cheap nodes
> 
> - i could add more disks to my nodes. Maybe even a SSD as a
> dedicated drbd resource.
> 
> - i can connect the two nodes directly to each other with
> bonding or infiniband. i dont need a switch or something
> between it.
> 

> 
> Downside:
> ---
> 
> - i always need two nodes (as a couple)
> 
> 
> 
> Will this setup work for me. So far i think i will be quite
> happy with it.
> 
> Since the DRBD Resources are shared in dual primary mode i am
> not sure if ovirt can handle it. It is not allowed to write to
> a vm disk at the same time.
> 
> 
> I don't know ovirt enough to comment on that.
> 
> 
> I did play in the past with drbd and libvirt (virsh).
> Note that having both nodes primary all the time for all resources is
> calling for a disaster. In any case of split brain, for any reason,
> drbd
> will not know what to do.
> 

I second that, had many problems without proper fencing and even with
fencing.

> What I did was to allow both to be primary, but had only one primary
> most of the time (per resource). I wrote a script to do migration,
> which
> made both primary for the duration of the migration (required by qemu)
> and then moved the source to secondary when migration finished. This
> way you still have a chance for a disaster, if there is a problem
> (split
> brain, node failure) during a migration. So if you decide to go this
> way,
> carefully plan and test to see that it works well for you. One source
> for
> a split brain, for me, at the time, was buggy nic drivers and bad
> bonding
> configuration. So test that well too if applicable.
> 
> 
> The approach I took seems similar to "DRBD on LV level" in [1], but
> with custom scripts and without ovirt.
> 
> 
> You might be able to make ovirt do this for you with hooks. Didn't try
> that.\


You could use DRBD 9, but I haven't tested it extensively yet. DRBD 9 has
primary-on-write, so you have both sides passive until one of the
nodes wants to write; it should automatically become primary then. This
has been done by Linbit to decrease split brain and to expand to more than
two nodes.

http://www.drbd.org/users-guide-9.0/s-automatic-promotion.html

But I don't know why it shouldn't work; maybe not with the node image,
but you can make a node out of a normal RHEL/CentOS/Fedora install.

One problem I always have with drbd and RHEL/CentOS is that when you
don't pay for the Linbit support, you don't get access to the repo, and
drbd is an additional option on RHEL. On CentOS and Fedora the version
is always lagging behind, so I have to compile the kernel module
every time for a new version or kernel update.

> 
> An obvious downside to this approach is that if one node in a pair is
> down, the other has no backup now. If you have multiple nodes and
> external shared storage, multiple nodes can be down with no disruption
> to service if the remaining nodes are capable enough.
> 
> 
> [1] http://www.ovirt.org/Features/DRBD
> 
> 
> Best regards,
> -- 
> 
> Didi
> 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Will this two node concept scale and work?

2014-02-05 Thread Yedidyah Bar David
> From: "ml ml" 
> To: Users@ovirt.org
> Sent: Wednesday, February 5, 2014 12:45:55 PM
> Subject: [Users] Will this two node concept scale and work?

> Hello List,

> my aim is to host multiple VMs which are redundant and are highly available. It
> should also scale well.

> I think usually people just buy a fat iSCSI Storage and attach this. In my
> case it should scale well from very small nodes to big ones.
> Therefore an iSCSI Target will bring a lot of overhead (10GBit Links and two
> Paths, and really i should have a 2nd Hot Standby SAN, too). This makes
> scalability very hard.

> This post is also not meant to be an iSCSI discussion.

> Since oVirt does not support DRBD out of the box i came up with my own
> concept:

> http://oi62.tinypic.com/2550xg5.jpg

> As far as i can tell i have the following advantages:
> 
> - i can start with two simple cheap nodes
> - i could add more disks to my nodes. Maybe even a SSD as a dedicated drbd
> resource.
> - i can connect the two nodes directly to each other with bonding or
> infiniband. I don't need a switch or anything in between.

> Downside:
> ---
> - i always need two nodes (as a couple)

> Will this setup work for me? So far I think I will be quite happy with it.
> Since the DRBD resources are shared in dual-primary mode I am not sure if
> oVirt can handle it. It is not allowed to write to a VM disk at the same
> time.

I don't know ovirt enough to comment on that. 

I did play in the past with drbd and libvirt (virsh). 
Note that having both nodes primary all the time for all resources is 
calling for a disaster. In any case of split brain, for any reason, drbd 
will not know what to do. 

What I did was to allow both to be primary, but had only one primary 
most of the time (per resource). I wrote a script to do migration, which 
made both primary for the duration of the migration (required by qemu) 
and then moved the source to secondary when migration finished. This 
way you still have a chance for a disaster, if there is a problem (split 
brain, node failure) during a migration. So if you decide to go this way, 
carefully plan and test to see that it works well for you. One source for 
a split brain, for me, at the time, was buggy nic drivers and bad bonding 
configuration. So test that well too if applicable. 
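
For illustration, a minimal sketch of that migration flow (not the actual
script; resource, VM and host names are placeholders and all error handling
is omitted):

#!/bin/bash
RES=vm_disk0   # drbd resource backing the VM's disk
VM=myvm        # libvirt domain name
DEST=node2     # migration destination host

# promote the destination so both sides are primary for the copy phase
ssh "$DEST" drbdadm primary "$RES"

# live-migrate the guest; qemu needs the disk writable on both ends
virsh migrate --live "$VM" "qemu+ssh://$DEST/system"

# migration finished: demote the now-unused source back to secondary
drbdadm secondary "$RES"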

The approach I took seems similar to " DRBD on LV level " in [1], but 
with custom scripts and without ovirt. 

You might be able to make ovirt do this for you with hooks. Didn't try that. 

An obvious downside to this approach is that if one node in a pair is 
down, the other has no backup now. If you have multiple nodes and 
external shared storage, multiple nodes can be down with no disruption 
to service if the remaining nodes are capable enough. 

[1] http://www.ovirt.org/Features/DRBD 

Best regards, 
-- 
Didi 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Networking. Hosted Setup. All in One Host. Hetzner

2014-02-05 Thread Dan Kenigsberg
On Wed, Feb 05, 2014 at 06:36:32AM -0500, Assaf Muller wrote:
> Could you explain further why does the host need to do any routing?

 From what I gather, the hosting service (Hetzner) allows only IP traffic
out of the box, but Peter may correct me.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Itamar Heim

On 02/05/2014 09:27 AM, Sven Kieske wrote:

Well, I just tested this.
and I can't connect to virsh with this information.
I guess the mentioned user vdsm@rhevh might not be the actual one
used in 3.3.2 anymore? (mail is from 2012 and mentions rhev, so..)


well, that's a question for vdsm-devel



or can't libvirt manage multiple authenticated users?
Because I registered my own using:

saslpasswd2 -a libvirt USERNAME

which still works.

Am 05.02.2014 09:15, schrieb Sven Kieske:

It's actually step 5 :)

Am 05.02.2014 07:50, schrieb Itamar Heim:

(step 6 explains where the user/password for libvirt/virsh are.






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Will this two node concept scale and work?

2014-02-05 Thread ml ml
Hello Ron,

thanks for your reply.

This post is also not meant to be a iscsi discussion.
>>
>> Since oVirt does not support DRBD out of the box i came up with my own
>> concept:
>>
>
> check out posix storage domain.
> If it supports gluster you might be able to use it for DRBD.
>

Sorry, I don't quite understand how POSIX will work with gluster here.

What would the architecture look like, and on what layer would it
replicate?
With "gluster", GlusterFS comes to my mind. How will drbd come into
play here?

Thanks a lot!

Mario
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Dan Kenigsberg
On Tue, Feb 04, 2014 at 08:20:26PM +, martin.tru...@cspq.gouv.qc.ca wrote:
> Hi,
> 
> I want to configure NAT in Ovirt with this procedure :
> http://lists.ovirt.org/pipermail/users/2012-April/001751.html
> 
> But I don't have the login password of virsh and for the PosgreSQL database.
> 
> I used the last version of oVirt with default installation with engine-setup

Note that now it is possible to use vdsm-hook-extnet instead of changing
Engine's database (step 12 and forth). You do not have your NAT network
defined in oVirt, but you can define a vNIC Profile with the property
extnet=natbr0 and whatever network you have defined (say ovirtmgmt).

When you attach a vnic of a VM to your vNIC profile and start the
VM, the hook script kicks into action and points the vnic to natbr0.
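
For reference, the natbr0 referenced above would be a NATed libvirt network
defined on the host; a minimal sketch (addresses and ranges are placeholders):

cat > natbr0.xml <<'EOF'
<network>
  <name>natbr0</name>
  <forward mode='nat'/>
  <bridge name='natbr0' stp='on' delay='0'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.10' end='192.168.200.100'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define natbr0.xml
virsh net-autostart natbr0
virsh net-start natbr0

On a vdsm host, virsh may ask for the SASL credentials discussed elsewhere
in this thread.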

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Dan Kenigsberg
On Wed, Feb 05, 2014 at 09:50:04AM +, Sven Kieske wrote:
> I can confirm that vdsm@ovirt does work.
> 
> However, I have the strong feeling that
> the password in /etc/pki/vdsm/keys/libvirt_password
> is static for all installations.
> 
> And gerrit proves me right:
> 
> http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirt_password;h=09e60bce9bc401bb8943154f7cb9cb08bd0f49da;hb=refs/heads/master
> 
> So what is the purpose of authentication when that information
> is public?
> 
> I created a BZ for this:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1061639
> 
> PS: I hope, whoever coded this, feels a little bit ashamed
> and perhaps buys a good book on writing secure code and reads it..

I feel ashamed, but not due to the "security" issue here.

Vdsm uses a unix domain socket to connect to libvirtd. That socket is
owned by vdsm, so that only vdsm and root can use it. There is no
security reason to use a password at all.

I am ashamed for caving in and adding an obfuscation layer, designed
only to deter local administrators from messing with libvirt under the
feet of ovirt. This little hurdle does not deter from messing with qemu
directly, but I suppose that qemu's command line does a good job anyway.

Red Hat support folks repeatedly claim that this hurdle is more
effective than putting a release note warning of the dangers in direct
libvirt access.

Dan.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Putting a Gluster host into an ISCSI DC

2014-02-05 Thread Kanagaraj


On 02/05/2014 04:48 PM, Gianluca Cecchi wrote:

Hello,
in a test infra I would to temporarily put a Host that is part of
Gluster DC and has gluster volumes active into an ISCSI DC.

I put Gluster DC in maintenance, stop glustervolume and put both hosts
in maintenance.
Then I try to edit host and put into the ISCSI DC.
But I receive error

Error while executing action:

node02:

Cannot Edit host. Server having Gluster volume


As you said above, if you have volumes in the cluster, the volumes need to
be deleted before moving the hosts to a different cluster.





Is there any basic check so that I cannot do this at all?
Any tip?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Networking. Hosted Setup. All in One Host. Hetzner

2014-02-05 Thread Assaf Muller
Could you explain further why does the host need to do any routing?


Assaf Muller, Cloud Networking Engineer 
Red Hat 

- Original Message -
From: "Dan Kenigsberg" 
To: "Peter Styk" , amul...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, February 5, 2014 1:23:54 PM
Subject: Re: [Users] Networking. Hosted Setup. All in One Host. Hetzner

On Thu, Jan 16, 2014 at 11:51:25PM +, Peter Styk wrote:
> Greetings,
> 
> I'm writing here as to share some of my findings about hosting with
> Hetzner. All in one setups on single remote host can be tricky. Provider
> mounted an extra /29 subnet to the main host but none is routed by default
> and host has to become router itself. At the same time single mistake in
> bridging configuration and lost access results in need for re-bootstrap.
> It's still tempting to try and with many trials I eventually got to see
> guests talking to the net.
> 
> Scenario 1: Working. Package bridge-utils, oVirt engine, setup bridge,
> VDSM, add host to engine, add routing to host routing table. Networking by
> trial and error. Still something is not right. Occasionally on ping out I'm
> getting "Redirect Host (New nexthop" messages.
> http://styk.tv/wp-content/uploads/2014/01/oVirtHosted1_almost_working.png

Unfortunately, I fail to understand what can be hampering your routing
there. Assaf, do you have a guess?

Which version of ovirt have you been using? Now, with source-routing
implemented into ovirt-3.3, there is a danger in setting your own
content into route-, as it would be overwritten if  is
reconfigured via Engine.

> 
> Scenario 2: Dreaming. Private network with private router/dhcp/nat. Private
> 10.0.0.0/24 network. No problems with routing as gateway 10.0.0.1 would be
> on the same subnet. Thought of using pfSense but can't seem to bring up an
> instance with two network cards on two different networks. I thought this
> would be easy.
> Go to Networks, click create new network, type private, save
> ok. then go to new instance. point at iso, attach two network cards. save
> ok. Launch "Host did not satisfy internal filter Network" No idea what that
> is. Maybe I don't understand how this works.

I do not understand where having an instance with two nics fail. Is the
"Host did not satisfy internal filter Network" message coming from
Hetzner management, or oVirt's?

> I even tried removing
> ovirtmgmt network and leaving private network by itself. Tried with all 3
> network card types (rtl8139/e1000/VirtIO)
> http://styk.tv/wp-content/uploads/2014/01/oVirtHosted2_preferred.png

Could you explain how you configured your private network? In my
experience, your easiest option is to define a dummy interface
ip link add name dummy_private type dummy
and set up a normal oVirt network on top of it, as if it were a true
nic.

> 
> Either way if you have a minute or two please take a look at both attached
> diagrams. Deliberately making it difficult by forcing all elements on
> single box in hosted environment.
> 
> Maybe there is a way to have this all installed with Neutron or vSwitch on
> the same box or is that pushing it?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Networking. Hosted Setup. All in One Host. Hetzner

2014-02-05 Thread Dan Kenigsberg
On Thu, Jan 16, 2014 at 11:51:25PM +, Peter Styk wrote:
> Greetings,
> 
> I'm writing here as to share some of my findings about hosting with
> Hetzner. All in one setups on single remote host can be tricky. Provider
> mounted an extra /29 subnet to the main host but none is routed by default
> and host has to become router itself. At the same time single mistake in
> bridging configuration and lost access results in need for re-bootstrap.
> It's still tempting to try and with many trials I eventually got to see
> guests talking to the net.
> 
> Scenario 1: Working. Package bridge-utils, oVirt engine, setup bridge,
> VDSM, add host to engine, add routing to host routing table. Networking by
> trial and error. Still something is not right. Occasionally on ping out I'm
> getting "Redirect Host (New nexthop" messages.
> http://styk.tv/wp-content/uploads/2014/01/oVirtHosted1_almost_working.png

Unfortunately, I fail to understand what can be hampering your routing
there. Assaf, do you have a guess?

Which version of ovirt have you been using? Now, with source-routing
implemented into ovirt-3.3, there is a danger in setting your own
content into route-, as it would be overwritten if  is
reconfigured via Engine.

> 
> Scenario 2: Dreaming. Private network with private router/dhcp/nat. Private
> 10.0.0.0/24 network. No problems with routing as gateway 10.0.0.1 would be
> on the same subnet. Thought of using pfSense but can't seem to bring up an
> instance with two network cards on two different networks. I thought this
> would be easy.
> Go to Networks, click create new network, type private, save
> ok. then go to new instance. point at iso, attach two network cards. save
> ok. Launch "Host did not satisfy internal filter Network" No idea what that
> is. Maybe I don't understand how this works.

I do not understand where having an instance with two nics fail. Is the
"Host did not satisfy internal filter Network" message coming from
Hetzner management, or oVirt's?

> I even tried removing
> ovirtmgmt network and leaving private network by itself. Tried with all 3
> network card types (rtl8139/e1000/VirtIO)
> http://styk.tv/wp-content/uploads/2014/01/oVirtHosted2_preferred.png

Could you explain how you configured your private network? In my
experience, your easiest option is to define a dummy interface
ip link add name dummy_private type dummy
and set up a normal oVirt network on top of it, as if it were a true
nic.
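
For reference, the complete sequence for that is along these lines (the
interface name is just an example):

modprobe dummy                    # make sure the dummy module is available
ip link add name dummy_private type dummy
ip link set dummy_private up

after which the network can be attached to dummy_private from the engine's
Setup Host Networks dialog as if it were a physical NIC.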

> 
> Either way if you have a minute or two please take a look at both attached
> diagrams. Deliberately making it difficult by forcing all elements on
> single box in hosted environment.
> 
> Maybe there is a way to have this all installed with Neutron or vSwitch on
> the same box or is that pushing it?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Putting a Gluster host into an ISCSI DC

2014-02-05 Thread Gianluca Cecchi
Hello,
in a test infra I would to temporarily put a Host that is part of
Gluster DC and has gluster volumes active into an ISCSI DC.

I put Gluster DC in maintenance, stop glustervolume and put both hosts
in maintenance.
Then I try to edit host and put into the ISCSI DC.
But I receive error

Error while executing action:

node02:

Cannot Edit host. Server having Gluster volume

Is there any basic check so that I cannot do this at all?
Any tip?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Will this two node concept scale and work?

2014-02-05 Thread Dafna Ron

see in line


On 02/05/2014 10:45 AM, ml ml wrote:

Hello List,

my aim is to host multiple VMs which are redundant and are highly
available. It should also scale well.


I'm assuming you are talking about an HA cluster, since redundant VMs and HA
VMs are a contradiction :)


I think usually people just buy a fat iSCSI Storage and attach this. 
In my case it should scale well from very small nodes to big ones.
Therefore an iSCSI Target will bring a lot of overhead (10GBit Links 
and two Paths, and really i should have a 2nd Hot Standby SAN, too). 
This makes scalability very hard.


This post is also not meant to be an iSCSI discussion.

Since oVirt does not support DRBD out of the box i came up with my own 
concept:


check out posix storage domain.
If it supports gluster you might be able to use it for DRBD.



http://oi62.tinypic.com/2550xg5.jpg

As far as i can tell i have the following advantages:

- i can start with two simple cheap nodes
- i could add more disks to my nodes. Maybe even a SSD as a dedicated 
drbd resource.
- i can connect the two nodes directly to each other with bonding or 
infiniband. i dont need a switch or something between it.


Downside:
---
- i always need two nodes (as a couple)

Will this setup work for me? So far I think I will be quite happy with it.
Since the DRBD resources are shared in dual-primary mode I am not sure
if oVirt can handle it. It is not allowed to write to a VM disk at the
same time.


It's not true that you cannot write to the same VM disk at the same time -
there is a shared disk option.




The concept from Linbit
(http://www.linbit.com/en/company/news/333-high-available-virtualization-at-a-most-reasonable-price)
seems too much of an overhead with the iSCSI layer and pacemaker setup.
It's just too much for such a simple task.


Please tell me that this concept is great and will work and scale well.
Otherwise i am also thankful for any hints or critical ideas.


Thanks a lot,
Mario


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Engine-setup Fail

2014-02-05 Thread Kevin Tibi
Yes, after the download with yum, engine-setup works without failing. I went
up from 3.4.0 to 3.5.0.

I can send you log files from when engine-setup fails and from when it works.
I have just one ovirt-engine, so I can't reproduce this.

Kev.


2014-02-05 Yedidyah Bar David :

> That's actually bad news, not good news, because now we have no clue what
> the actual problem was :-)
>
> Anyway, principally this is unsupported. Usually this is done only during
> engine-setup
> when the engine is down - in order to prevent the new engine from accessing
> the old database.
>
> After doing this you should immediately run engine-setup. Did you? Was it
> successful?
>
> Any chance you'll be able to try and reproduce this (on some other system
> if needed)?
> For the record, yours is the first such report I have heard of. I still can't
> imagine how this can be a bug in ovirt, but what do I know :-(
>
> Thanks!
> --
> Didi
>
> --
>
> *From: *"Kevin Tibi" 
> *To: *"Alon Bar-Lev" 
> *Cc: *"users" 
> *Sent: *Wednesday, February 5, 2014 8:37:27 AM
>
> *Subject: *Re: [Users] Engine-setup Fail
>
> Works ! Thx Alon.
>
>
> 2014-02-04 Alon Bar-Lev :
>
>> Can you please try to use yum for that, I do not think it is an issue of
>> ovirt...
>>
>> yum --disableplugin=versionlock update ovirt-engine
>>
>> - Original Message -
>> > From: "Kevin Tibi" 
>> > To: "Yedidyah Bar David" 
>> > Cc: "users" 
>> > Sent: Tuesday, February 4, 2014 5:25:42 PM
>> > Subject: Re: [Users] Engine-setup Fail
>> >
>> > It works if I download it myself. But when I put the file in the yum
>> > cache, engine-setup downloads it again and fails.
>> >
>> > Maybe I put the file in the wrong directory.
>> >
>> >
>> > 2014-02-04 Yedidyah Bar David < d...@redhat.com > :
>> >
>> >
>> >
>> > What happens if you try downloading manually? E.g.
>> > wget
>> >
>> http://resources.ovirt.org/releases/nightly/rpm/Fedora/19/noarch/ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch.rpm
>> > --
>> > Didi
>> >
>> >
>> >
>> >
>> > From: "Kevin Tibi" < kevint...@hotmail.com >
>> > To: "Alon Bar-Lev" < alo...@redhat.com >
>> > Cc: "users" < users@ovirt.org >
>> > Sent: Tuesday, February 4, 2014 3:51:18 PM
>> >
>> > Subject: Re: [Users] Engine-setup Fail
>> >
>> > Doesn't work :/ Same failure
>> >
>> >
>> > 2014-02-04 Alon Bar-Lev < alo...@redhat.com > :
>> >
>> >
>> >
>> > I really do not understand what's wrong; it is not a problem on the
>> > resources.ovirt.org side as far as I can see.
>> >
>> > Please try:
>> >
>> > # yum clean all
>> > # yum clean dbcache
>> >
>> > Then try again...
>> >
>> > - Original Message -
>> > > From: "Kevin Tibi" < kevint...@hotmail.com >
>> > > To: "Alon Bar-Lev" < alo...@redhat.com >
>> > > Cc: "users" < users@ovirt.org >
>> > > Sent: Tuesday, February 4, 2014 3:23:12 PM
>> > > Subject: Re: [Users] Engine-setup Fail
>> > >
>> > > Thx Alon, I tried your solution but :
>> > >
>> > > [ ERROR ] Yum [u'Des erreurs ont \xe9t\xe9 rencontr\xe9e durant le
>> > > t\xe9l\xe9chargement des paquets.',
>> > >
>> u'ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch:
>> > > failure:
>> > >
>> noarch/ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch.rpm
>> > > from ovirt-nightly: [Errno 256] No more mirrors to try.\nhttp://
>> > >
>> resources.ovirt.org/releases/nightly/rpm/Fedora/19/noarch/ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch.rpm
>> > > :
>> > > [Errno 14] HTTP Error 416 - Requested Range Not Satisfiable']
>> > > [ ERROR ] Failed to execute stage 'Package installation': [u'Des
>> erreurs
>> > > ont \xe9t\xe9 rencontr\xe9e durant le t\xe9l\xe9chargement des
>> paquets.',
>> > >
>> u'ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch:
>> > > failure:
>> > >
>> noarch/ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch.rpm
>> > > from ovirt-nightly: [Errno 256] No more mirrors to try.\nhttp://
>> > >
>> resources.ovirt.org/releases/nightly/rpm/Fedora/19/noarch/ovirt-engine-userportal-3.5.0-0.0.master.20140130171441.git9257b30.fc19.noarch.rpm
>> > > :
>> > > [Errno 14] HTTP Error 416 - Requested Range Not Satisfiable']
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > 2014-02-04 Alon Bar-Lev < alo...@redhat.com >:
>> > >
>> > > >
>> > > > HTTP Error 416 - Requested Range Not Satisfiable
>> > > >
>> > > >
>> > > > Please use resources.ovirt.org and not ovirt.ovirt.org at you repo
>> > > > configuration.
>> > > >
>> > > >
>> > > > - Original Message -
>> > > > > From: "Kevin Tibi" < kevint...@hotmail.com >
>> > > > > To: "users" < users@ovirt.org >
>> > > > > Sent: Tuesday, February 4, 2014 3:03:01 PM
>> > > > > Subject: [Users] Engine-setup Fail
>> > > > >
>> > > > > Hi all,
>> > > > >
>> > > > > I am trying to update my ovirt-engine. I use the nightly repo. For
>> > > > > the moment, I am on 3.4.
>> > > > > When I try to go up to 3.5, it fails while downloading the package:
>> > > > >
>> > > > > [ INFO ] Yum Down

[Users] Will this two node concept scale and work?

2014-02-05 Thread ml ml
Hello List,

my aim is to host multiple VMs which are redundant and highly available.
It should also scale well.

I think usually people just buy a fat iSCSI storage array and attach it. In my
case it should scale well from very small nodes to big ones.
Therefore an iSCSI target brings a lot of overhead (10 GBit links and two
paths, and really I should have a second hot-standby SAN, too). This makes
scalability very hard.

This post is also not meant to be an iSCSI discussion.

Since oVirt does not support DRBD out of the box, I came up with my own
concept:

http://oi62.tinypic.com/2550xg5.jpg

As far as I can tell I have the following advantages:

- I can start with two simple, cheap nodes.
- I could add more disks to my nodes, maybe even an SSD as a dedicated
DRBD resource.
- I can connect the two nodes directly to each other with bonding or
InfiniBand; I don't need a switch or anything in between.

Downside:
---
- I always need two nodes (as a pair).

Will this setup work for me? So far I think I will be quite happy with it.
Since the DRBD resources are shared in dual-primary mode, I am not sure if
oVirt can handle it; writing to the same VM disk from both sides at the same
time is not allowed.
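For what it's worth, the dual-primary part would look roughly like this in the
DRBD resource config (8.4-style syntax; host names, devices and addresses are
made up):

  resource r0 {
    net {
      protocol C;                # synchronous replication, required for dual-primary
      allow-two-primaries yes;   # both nodes may hold the device in the Primary role
    }
    on nodeA {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.1:7789;
      meta-disk internal;
    }
    on nodeB {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.2:7789;
      meta-disk internal;
    }
  }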

The Linbit concept (
http://www.linbit.com/en/company/news/333-high-available-virtualization-at-a-most-reasonable-price)
seems like too much overhead, with the iSCSI layer and Pacemaker setup. It is
just too much for such a simple task.

Please tell me that this concept is great and will work and scale well.
Otherwise I am also thankful for any hints or critical ideas.


Thanks a lot,
Mario
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Sven Kieske
I can confirm that vdsm@ovirt does work.

However, I have the strong feeling that
the password in /etc/pki/vdsm/keys/libvirt_password
is static for all installations.

And gerrit proves me right:

http://gerrit.ovirt.org/gitweb?p=vdsm.git;a=blob;f=vdsm/libvirt_password;h=09e60bce9bc401bb8943154f7cb9cb08bd0f49da;hb=refs/heads/master

So what is the purpose of authentication when that information
is public?

I created a BZ for this:

https://bugzilla.redhat.com/show_bug.cgi?id=1061639
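In the meantime it should at least be possible to replace the shipped password
by hand with saslpasswd2, something like (untested):

  saslpasswd2 -a libvirt vdsm@ovirt

though vdsm itself presumably reads the password from
/etc/pki/vdsm/keys/libvirt_password, so that file would need updating as well.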

PS: I hope whoever coded this feels a little bit ashamed and perhaps buys a
good book on writing secure code and reads it...

On 05.02.2014 09:55, Joop wrote:
> On 5-2-2014 9:27, Sven Kieske wrote:
>> Well, I just tested this.
>> and I can't connect to virsh with this information.
>> I guess the mentioned user vdsm@rhevh might not be the actual one
>> used in 3.3.2 anymore? (mail is from 2012 and mentions rhev, so..)
>>
>>
> vdsm@ovirt seems to work :-)
> 
> Joop


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] failed nestd vm only with spice and not vnc

2014-02-05 Thread Gianluca Cecchi
Hello,
I am reviving this thread with a subject more in line with the real problem.
The previous thread's subject was
"unable to start vm in 3.3 and f19 with gluster"

and began here on ovirt users mailing list:
http://lists.ovirt.org/pipermail/users/2013-September/016628.html

Now I have updated everything to the final 3.3.3 and I see that the problem is
still there.

So now I have updated Fedora 19 hosts that are themselves VMs (virtual hw
version 9) inside a vSphere 5.1 infrastructure.

The CPU of the ESX host is an E7-4870 and the cluster in oVirt is defined as
"Intel Nehalem Family"


On oVirt host VM
[root@ovnode01 qemu]# rpm -q libvirt qemu-kvm
libvirt-1.0.5.9-1.fc19.x86_64
qemu-kvm-1.4.2-15.fc19.x86_64

[root@ovnode01 qemu]# uname -r
3.12.9-201.fc19.x86_64

flags of cpuinfo:
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp
lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable
nonstop_tsc aperfmperf pni monitor vmx ssse3 cx16 sse4_1 sse4_2 x2apic
popcnt lahf_lm ida arat epb dtherm tpr_shadow vnmi ept vpid

[root@ovnode01 ~]# vdsClient -s localhost getVdsCapabilities
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6344c23973df'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:6344c23973df'
bondings = {'bond0': {'addr': '',
  'cfg': {},
  'hwaddr': '32:5c:6a:20:cd:21',
  'ipv6addrs': [],
  'mtu': '1500',
  'netmask': '',
  'slaves': []}}
bridges = {'ovirtmgmt': {'addr': '192.168.33.41',
 'cfg': {'BOOTPROTO': 'none',
 'DEFROUTE': 'yes',
 'DELAY': '0',
 'DEVICE': 'ovirtmgmt',
 'GATEWAY': '192.168.33.15',
 'IPADDR': '192.168.33.41',
 'NETMASK': '255.255.255.0',
 'NM_CONTROLLED': 'no',
 'ONBOOT': 'yes',
 'STP': 'no',
 'TYPE': 'Bridge'},
 'gateway': '192.168.33.15',
 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'],
 'ipv6gateway': '::',
 'mtu': '1500',
 'netmask': '255.255.255.0',
 'ports': ['eth0', 'vnet1'],
 'stp': 'off'},
   'vlan172': {'addr': '',
   'cfg': {'DEFROUTE': 'no',
   'DELAY': '0',
   'DEVICE': 'vlan172',
   'NM_CONTROLLED': 'no',
   'ONBOOT': 'yes',
   'STP': 'no',
   'TYPE': 'Bridge'},
   'gateway': '0.0.0.0',
   'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'],
   'ipv6gateway': '::',
   'mtu': '1500',
   'netmask': '',
   'ports': ['ens256.172', 'vnet0'],
   'stp': 'off'}}
clusterLevels = ['3.0', '3.1', '3.2', '3.3']
cpuCores = '4'
cpuFlags =
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,mmx,fxsr,sse,sse2,ss,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,nopl,xtopology,tsc_reliable,nonstop_tsc,aperfmperf,pni,monitor,vmx,ssse3,cx16,sse4_1,sse4_2,x2apic,popcnt,lahf_lm,ida,arat,epb,dtherm,tpr_shadow,vnmi,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270'
cpuModel = 'Intel(R) Xeon(R) CPU E7- 4870  @ 2.40GHz'
cpuSockets = '4'
cpuSpeed = '2394.000'
cpuThreads = '4'
emulatedMachines = ['pc',
'q35',
'isapc',
'pc-0.10',
'pc-0.11',
'pc-0.12',
'pc-0.13',
'pc-0.14',
'pc-0.15',
'pc-1.0',
'pc-1.1',
'pc-1.2',
'pc-1.3',
'none']
guestOverhead = '65'
hooks = {}
kvmEnabled = 'true'
lastClient = '127.0.0.1'
lastClientIface = 'lo'
management_ip = '0.0.0.0'
memSize = '16050'
netConfig

Re: [Users] Quotas

2014-02-05 Thread Doron Fediuck
- Original Message -
> From: "Maurice James" 
> To: users@ovirt.org
> Sent: Wednesday, February 5, 2014 3:47:51 AM
> Subject: [Users] Quotas
> 
> 
> 
> How do I limit the number of VMs that a user can create using quotas?
> 

Hi Maurice.
Quota supports the resource level (storage, CPU, RAM) rather than VM count.
So please open an RFE with your needs so we'll be able to add it to the
requested tasks.

We can try to hack it using a scheduling filter, but I'd rather have a
proper support for such a request.

Doron

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Joop

On 5-2-2014 9:27, Sven Kieske wrote:

Well, I just tested this.
and I can't connect to virsh with this information.
I guess the mentioned user vdsm@rhevh might not be the actual one
used in 3.3.2 anymore? (mail is from 2012 and mentions rhev, so..)



vdsm@ovirt seems to work :-)

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Quotas

2014-02-05 Thread Gilad Chaplik
Hi Maurice, 

With quota you can set a hard limit on the number of CPUs consumed by the
user, and by that limit the number of VMs; the other quota parameters (memory
and storage) can be left unlimited.
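For example (numbers made up): with a quota hard limit of 8 vCPUs, a user whose
VMs are each defined with 2 vCPUs can run at most 4 of them at a time.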

Although these videos are for version 3.1, they are still pretty useful,
since there haven't been any drastic changes since then:
http://www.youtube.com/watch?v=zazJ_fW05Qk
http://www.youtube.com/watch?v=y8qUoRImimY

you are more than welcome to send your progress/use-case/any other question :-)

Thanks, 
Gilad.

- Original Message -
> From: "Maurice James" 
> To: users@ovirt.org
> Sent: Wednesday, February 5, 2014 3:47:51 AM
> Subject: [Users] Quotas
> 
> 
> 
> How do I limit the number of VMs that a user can create using quotas?
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt engine and vdsm on same node

2014-02-05 Thread Hans Emmanuel
Thanks for your inputs .


On Wed, Feb 5, 2014 at 2:04 PM, Yedidyah Bar David  wrote:

> - Original Message -
> > From: "Sandro Bonazzola" 
> > To: "Hans Emmanuel" , users@ovirt.org
> > Sent: Wednesday, February 5, 2014 10:22:28 AM
> > Subject: Re: [Users] ovirt engine and vdsm on same node
> >
> > Il 05/02/2014 09:18, Hans Emmanuel ha scritto:
> > > Can we install ovirt engine and vdsm on same node ?
> >
> > Sure, you can use all-in-one plugin for having both running on the same
> node.
> > In 3.4.0 you'll be able also to run ovirt engine inside a VM using
> > hosted-engine feature.
>
> Also note that doing this manually, not using the all-in-one plugin, is
> unsupported and might cause problems (iirc _will_ cause). Meaning: Do _not_
> do the following: install and setup engine, then add the host it's on as a
> "host".
>
> >
> > > If so any down side ?
> >
> > You'll have less resources available for VMs on the host running the
> engine.
>
> It's also a bit more complex, and generally considered suitable only for
> very
> small installations (just one machine) etc. With hosted-engine, even if
> you start
> with a single host, if you add one in the future, you'll be able to
> migrate the
> engine VM to the other host and have HA for it.
> --
> Didi
>



-- 
*Hans Emmanuel*

*NOthing to FEAR but something to FEEL..*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt engine and vdsm on same node

2014-02-05 Thread Yedidyah Bar David
- Original Message -
> From: "Sandro Bonazzola" 
> To: "Hans Emmanuel" , users@ovirt.org
> Sent: Wednesday, February 5, 2014 10:22:28 AM
> Subject: Re: [Users] ovirt engine and vdsm on same node
> 
> Il 05/02/2014 09:18, Hans Emmanuel ha scritto:
> > Can we install ovirt engine and vdsm on same node ?
> 
> Sure, you can use all-in-one plugin for having both running on the same node.
> In 3.4.0 you'll be able also to run ovirt engine inside a VM using
> hosted-engine feature.

Also note that doing this manually, not using the all-in-one plugin, is
unsupported and might cause problems (iirc _will_ cause). Meaning: Do _not_
do the following: install and setup engine, then add the host it's on as a
"host".

> 
> > If so any down side ?
> 
> You'll have less resources available for VMs on the host running the engine.

It's also a bit more complex, and generally considered suitable only for very
small installations (just one machine) etc. With hosted-engine, even if you 
start
with a single host, if you add one in the future, you'll be able to migrate the
engine VM to the other host and have HA for it.
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Sven Kieske
Well, I just tested this,
and I can't connect to virsh with this information.
I guess the mentioned user vdsm@rhevh might not be the actual one
used in 3.3.2 anymore? (The mail is from 2012 and mentions RHEV, so..)

Or can libvirt not manage multiple authenticated users?
Because I registered my own using:

saslpasswd2 -a libvirt USERNAME

which still works.
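For reference, checking this looks roughly like the following here (the sasldb
path is the usual default from /etc/sasl2/libvirt.conf):

  sasldblistusers2 -f /etc/libvirt/passwd.db    # list the accounts libvirt knows
  virsh -c qemu:///system list --all            # prompts for username/password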

On 05.02.2014 09:15, Sven Kieske wrote:
> It's actually step 5 :)
> 
> On 05.02.2014 07:50, Itamar Heim wrote:
>> (step 6 explains where the user/password for libvirt/virsh are.
> 

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt engine and vdsm on same node

2014-02-05 Thread Sandro Bonazzola
On 05/02/2014 09:18, Hans Emmanuel wrote:
> Can we install ovirt engine and vdsm on same node ?

Sure, you can use the all-in-one plugin to have both running on the same node.
In 3.4.0 you will also be able to run the ovirt engine inside a VM using the
hosted-engine feature.
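If I remember the package name correctly, that boils down to something like:

  yum install ovirt-engine-setup-plugin-allinone
  engine-setup    # answer yes when asked whether to configure VDSM on this host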

> If so any down side ?

You'll have fewer resources available for VMs on the host running the engine.

> 
> -- 
> *Hans Emmanuel*
> 
> /NOthing to FEAR but something to FEEL../
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ovirt engine and vdsm on same node

2014-02-05 Thread Hans Emmanuel
Can we install ovirt engine and vdsm on same node ?
If so any down side ?

-- 
*Hans Emmanuel*

*NOthing to FEAR but something to FEEL..*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] NAT configuration

2014-02-05 Thread Sven Kieske
It's actually step 5 :)

On 05.02.2014 07:50, Itamar Heim wrote:
> (step 6 explains where the user/password for libvirt/virsh are.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Storage server

2014-02-05 Thread Itamar Heim

On 02/04/2014 03:10 PM, Elad Ben Aharon wrote:

Actually, the best way to do it would be to create another storage domain and
migrate all the VMs' disks to it (you can do this while the VMs are running);
then you won't suffer any downtime.


Notice this will create a snapshot for your VMs and may change them from
raw to qcow. Also, this is not supported for shared disks.



If you don't have the option to do so, the best way would be to shut down the
VMs, put the storage domain into 'Maintenance' and then perform the storage
server network change. When the connection to the storage is fixed, activate
the domain and start the VMs again manually.


That's assuming the domain was added using a DNS name and not an IP address,
and that DNS expiration/caching on all hosts will pick up the new IP address.
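A quick way to check what the hosts actually have recorded for the target
(just a sketch) is, on each host:

  iscsiadm -m node       # static node records - the portal used at discovery time
  iscsiadm -m session    # portals of the currently logged-in sessions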





- Original Message -
From: "Koen Vanoppen" 
To: "Elad Ben Aharon" , users@ovirt.org
Sent: Tuesday, February 4, 2014 2:57:57 PM
Subject: Re: [Users] Storage server

For the moment, nothing is going on. The change is planned for Thursday. So,
correct me if I'm wrong, you are saying that the fastest and easiest way is to
suspend all the machines, make the change on the storage server, rediscover
the iSCSI server and then bring the VMs back up?

Kind regards,
Koen
On Feb 4, 2014 1:47 PM, "Elad Ben Aharon"  wrote:


Hi,

I suppose that those VMs are now in 'Paused' state, correct me if I'm
wrong.
After connectivity with storage comes back, all VMs which couldn't detect
their disks should be resumed from 'Paused' to 'Up' state automatically. It
shouldn't take them a long time to change their state (a few minutes).

- Original Message -
From: "Koen Vanoppen" 
To: users@ovirt.org
Sent: Tuesday, February 4, 2014 2:32:46 PM
Subject: [Users] Storage server

Dear All,

At work, the network guys have discovered that they made a mistake in their
network configuration... The storage server that we use as the iSCSI target in
oVirt is in the wrong VLAN, and therefore has the wrong IP address...

What is the best way to make sure that all of our VMs (40) that are using
this iSCSI storage will come up again after the storage server has changed
its IP?
What are the best steps to take...?
We are using oVirt Engine Version: 3.3.1-2.el6

Kind Regards,

Koen

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users