Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-14 Thread Sahina Bose
Are the ovirt-ha-agent and ovirt-ha-broker services running on all the 
nodes? If they are, check the agent.log and broker.log for errors.
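For reference, a quick way to confirm the services and check the HA logs on each node (a sketch assuming the default ovirt-hosted-engine-ha log locations):

  systemctl status ovirt-ha-agent ovirt-ha-broker
  tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
  tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log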


On 06/14/2016 05:29 PM, Charles Kozler wrote:
Anyone have any other possible information on this? I've noticed this 
issue before and usually it just takes a bit of time for the cluster 
to 'settle' after some node reboots, but it's been a few days and it's 
still marked as stale.




--== Host 1 status ==--

Status up-to-date  : False
Hostname   : njsevcnp01
Host ID: 1
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 260dbf06
Host timestamp : 327


--== Host 2 status ==--

Status up-to-date  : False
Hostname   : njsevcnp02
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 25da07df
Host timestamp : 3030


--== Host 3 status ==--

Status up-to-date  : False
Hostname   : njsevcnp03
Host ID: 3
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : c67818cb
Host timestamp : 10877406


&& vdsClient on node2 showing hosted engine is up on node 2
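For reference, a listing like the one below is typically produced on the host with something along these lines (the -s flag assumes vdsm is configured with SSL, which is the default):

  vdsClient -s 0 list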

48207078-8cb0-413c-8984-40aa772f4d94
Status = Up
nicModel = rtl8139,pv
statusTime = 4540044460
emulatedMachine = pc
pid = 30571
vmName = HostedEngine
devices = [{'device': 'memballoon', 'specParams': {'model': 'none'}, 
'type': 'balloon', 'alias': 'balloon0'}, {'alias': 'scsi0', 
'deviceId': '17f10db1-2e9e-4422-9ea5-61a628072e29', 'address': 
{'slot': '0x04', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 
'function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi', 'type': 
'controller'}, {'device': 'usb', 'alias': 'usb', 'type': 'controller', 
'deviceId': '9be34ac0-7d00-4a95-bdfe-5b328fc1355b', 'address': 
{'slot': '0x01', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 
'function': '0x2'}}, {'device': 'ide', 'alias': 'ide', 'type': 
'controller', 'deviceId': '222629a8-0dd6-4e8e-9b42-43aac314c0c2', 
'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x', 'type': 
'pci', 'function': '0x1'}}, {'device': 'virtio-serial', 'alias': 
'virtio-serial0', 'type': 'controller', 'deviceId': 
'7cbccd04-853a-408f-94c2-5b10b641b7af', 'address': {'slot': '0x05', 
'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}}, 
{'device': 'vnc', 'specParams': {'spiceSecureChannels': 
'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', 
'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 
'pv', 'macAddr': '00:16:3e:16:83:91', 'linkActive': True, 'network': 
'ovirtmgmt', 'alias': 'net0', 'deviceId': 
'3f679659-142c-41f3-a69d-4264d7234fbc', 'address': {'slot': '0x03', 
'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 
'device': 'bridge', 'type': 'interface', 'name': 'vnet0'}, {'address': 
{'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 
'function': '0x0'}, 'volumeInfo': {'domainID': 
'c6323975-2966-409d-b9e0-48370a513a98', 'volType': 'path', 
'leaseOffset': 0, 'volumeID': 'aa66d378-5a5f-490c-b0ab-993b79838d95', 
'leasePath': 
'/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt36__engine/c6323975-2966-409d-b9e0-48370a513a98/images/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95.lease', 
'imageID': '8518ef4a-7b17-4291-856c-81875ba4e264', 'path': 
'/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt36__engine/c6323975-2966-409d-b9e0-48370a513a98/images/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95'}, 
'index': '0', 'iface': 'virtio', 'apparentsize': '10737418240', 
'imageID': '8518ef4a-7b17-4291-856c-81875ba4e264', 'readonly': 
'False', 'shared': 'exclusive', 'truesize': '6899802112', 'type': 
'disk', 'domainID': 'c6323975-2966-409d-b9e0-48370a513a98', 'reqsize': 
'0', 'format': 'raw', 'deviceId': 
'8518ef4a-7b17-4291-856c-81875ba4e264', 'poolID': 
'----', 'device': 'disk', 'path': 
'/var/run/vdsm/storage/c6323975-2966-409d-b9e0-48370a513a98/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95', 
'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID': 
'aa66d378-5a5f-490c-b0ab-993b79838d95', 'alias': 'virtio-disk0', 
'volumeChain': [{'domainID': 'c6323975-2966-409d-b9e0-48370a513a98', 
'volType': 'path', 'leaseOffset': 0, 'volumeID': 
'aa66d378-5a5f-490c-b0ab-993b79838d95', 

[ovirt-users] RHEV-M installation failure

2016-06-14 Thread Grant Lowe
Hi all,

I'm trying to install an RHEV-M image on an RHEV-V hypervisor. When I do, the 
installation finishes with this error:

2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/consoleUUID=str:'e6bed88f-2b2e-4d1d-8e63-40a404a75a12'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/emulatedMachine=str:'rhel6.5.0'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/nicUUID=str:'a3bdab54-72a2-404b-b899-a9e93d04c56f'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/ovfArchive=str:'/data/ovirt-hosted-engine-setup/rhevm-appliance-20160515.0-1.x86_64.rhevm.ova'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/subst=dict:'{}'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmBoot=str:'disk'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmCDRom=NoneType:'None'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmMACAddr=str:'00:16:3e:69:8b:6b'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmMemSizeMB=int:'4096'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmUUID=str:'b25144f2-cbdb-4d18-a344-171d5d631ef2'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVEHOSTED_VM/vmVCpus=str:'2'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
OVESETUP_CORE/offlinePackager=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfDisabledPlugins=list:'[]'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfExpireCache=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfRollback=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/dnfpackagerEnabled=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/keepAliveInterval=int:'30'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumDisabledPlugins=list:'[]'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumEnabledPlugins=list:'[]'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumExpireCache=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumRollback=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
PACKAGER/yumpackagerEnabled=bool:'False'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/clockMaxGap=int:'5'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/clockSet=bool:'False'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/reboot=bool:'False'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/rebootAllow=bool:'True'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:510 ENV 
SYSTEM/rebootDeferTime=int:'10'
2016-06-14 15:00:21 DEBUG otopi.context context.dumpEnvironment:514 ENVIRONMENT 
DUMP - END
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:142 Stage 
pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-06-14 15:00:21 INFO otopi.context context.runSequence:427 Stage: 
Termination
2016-06-14 15:00:21 DEBUG otopi.context context.runSequence:431 STAGE terminate
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:142 Stage 
terminate METHOD 
otopi.plugins.ovirt_hosted_engine_setup.core.misc.Plugin._terminate
2016-06-14 15:00:21 ERROR otopi.plugins.ovirt_hosted_engine_setup.core.misc 
misc._terminate:170 Hosted Engine deployment failed: this system is not 
reliable, please check the issue, fix and redeploy
2016-06-14 15:00:21 DEBUG otopi.plugins.otopi.dialog.human 
dialog.__logString:219 DIALOG:SEND Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160614145427-u8mxun.log
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:142 Stage 
terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:142 Stage 
terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:148 condition 
False
2016-06-14 15:00:21 DEBUG otopi.context context._executeMethod:142 Stage 
terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
[root@rhev-v data]
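Since the final "not reliable" message is generic, the first actionable error is usually earlier in the setup log it points to; one way to locate it (using the log path reported above):

  grep -iE 'error|fail' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160614145427-u8mxun.log | head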

So what do I need to do? I'm 

Re: [ovirt-users] Hosted Engine Woes

2016-06-14 Thread Simone Tiraboschi
On Tue, Jun 14, 2016 at 8:45 PM, Nic Seltzer  wrote:
> Hello!
>
> I'm looking for someone who can help me out with a hosted-engine setup that
> I have. I experienced a power event a couple of weeks ago. Initially, things
> seemed to have come back fine, but the other day, I noticed that one of the
> nodes for the cluster was down. I tried to drop it into maintenance mode
> (which never completed) and reboot it then "Confirm the Host has been
> rebooted". Neither of these steps allowed the host to re-enter the cluster.
> Has anyone encountered this? At this point, I'd like to reboot the
> hosted-engine, but I can't find documentation instructing me on "how". I'm

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status # poll till the VM is down
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none

> also open to other suggestions or references to documentation that will help
> triage my issue.
>
> Thanks!
>
>
>
> nic
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Dan Yasny
On Jun 14, 2016 5:37 PM, "Fernando Frediani" 
wrote:
>
> Hi Nir,
>
> I wouldn't say that the performance coming from LVM is significantly
better than from a filesystem if the latter is well built. In VMware, the
performance of a VMDK running on top of VMFS5 versus an RDM shows no
significant gain one over the other. I've always preferred to have machines
in a filesystem for the ease of management. In some cases, with hundreds of
them in a single filesystem, I never faced performance issues. The bottleneck
normally comes down to the storage architecture (storage controller, RAID
config, etc.).
>
> The multipath is certainly a plus that helps in certain cases.
>

Extended scalability (200-node clusters) and no bottlenecks around SCSI-3
persistent reservations (PR) are another couple of pluses.

> I guess the answer to my original question is clear. If I want to use
block storage shared among different hosts there is no choice in oVirt
other than LVM.
> In a particular case I have storage shared via a kind of internal SAS
backplane to all servers. The only alternative to that would be to dedicate a
server to own the storage and export it as NFS, but in that case there
would be some losses in terms of hardware and reliability.

If your SAS storage is exposed to multiple hosts and presents the same WWID
to all clients, you can set it up as FC, as long as multipath can detect it.
DDAS like the Dell MD3000 works great this way.
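A quick sanity check on each host is to confirm that multipath sees the LUN and that the WWID matches everywhere, for example:

  multipath -ll

If the same WWID shows up on every host, the LUN can then be added as an FC storage domain.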

>
> Thanks
> Fernando
>
>
> On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani <
fernando.fredi...@upx.com.br> wrote:
>>>
>>> Hi Nir,
>>> Thanks for clarification.
>>>
>>> Answering your questions: The intent was to use a POSIX-like filesystem
>>> similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice for how
>>> the block storage is presented to multiple servers. Yes, I heard about GFS2
>>> scaling issues in the past, but thought they had been resolved nowadays;
>>> it seems not.
>>>
>>> I had the impression that qcow2 images have both thin-provisioning and
>>> snapshot capabilities.
>>
>> Yes, using file-based storage, you have both snapshots and thin provisioning;
>> this is the most reliable way to get thin provisioning in oVirt.
>>
>> But then you pay for the file system overhead, whereas in block storage the qemu
>> image is using the LV directly.
>>
>> In block storage we use multipath, so if you have multiple NICs and networks,
>> you get better reliability and performance.
>>
>>> Regarding LVM, I don't like the idea of having VMs buried inside an LVM volume,
>>> nor the idea of troubleshooting LVM volumes when necessary. Dealing with
>>> qcow2 images for every VM separately makes things much easier for doing
>>> several tasks. I would say that people coming from VMware would prefer to
>>> deal with a VMDK rather than an RDM LUN. On the other hand, I have nothing to
>>> say about LVM performance.
>>
>> LVM has its own issues with many LVs on the same VG - we recommend using
>> up to 350 LVs per VG. If you need more, you need to use another VG.
>>
>> The best would be to try both and use the best storage for the particular
>> use case.
>>
>> Nir
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Ryan Mahoney
I tried a few different additions of ip route to the host, but I am unable
to connect/ping to the NFS server once I add them.  I first tried adding it
to the bond0 dev, and then to one of the logical interfaces within that
bond, but neither worked.  I also get an error of "rtnetlink answers:
network is unreachable" when I include a gateway in the ip route add
command.  If I leave the gateway off, it will allow me to add the route,
but it does not work as a valid route. I do not have an IP address on the
logical network that I am attempting to use on the bonded connection; the
manager does not like it when I try to assign one.

On Tue, Jun 14, 2016 at 2:04 PM, Ryan Mahoney <
r...@beaconhillentertainment.com> wrote:

> And I should be able to give ip route bond0 as the NIC for the 10gbe bond?
> Or do I have to give it both individual NICs in the bond separately?
>
> On Tue, Jun 14, 2016 at 1:35 PM, Charles Kozler 
> wrote:
>
>> Set a static route to the storage to go through the NIC(s) you want it to
>>
>> ip route add x.x.x.x/32 dev <interface> via <gateway>
>>
>> where x.x.x.x/32 is the IP of the NFS server
>>
>> On Tue, Jun 14, 2016 at 1:30 PM, Ryan Mahoney <
>> r...@beaconhillentertainment.com> wrote:
>>
>>> Yes, Fernando this is exactly what I'm asking for
>>>
>>> On Tue, Jun 14, 2016 at 1:25 PM, Fernando Frediani <
>>> fernando.fredi...@upx.com.br> wrote:
>>>
 I guess what the colleague wants to know is how to specify an interface
 in a different VLAN on top of the 10Gb LACP in order for the NFS
 traffic to flow.
 In the VMware world that would be a vmkernel interface, so a new
 network/interface with a different IP address than management (ovirtmgmt).

 Fernando


 On 14/06/2016 13:52, Ryan Mahoney wrote:

 Right, but how do you specify which network the nfs traffic is using?

 On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer 
 wrote:

> On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
>  wrote:
> > On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage
> is
> > currently setup for Management, Display, VM and Migration. I also
> have a 2
> > 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen
> or so
> > VLANS needed for the various VM's to access.  I have NFS storage
> mounted to
> > the Data Center, and I would like to know how I check/specify which
> network
> > connection ovirt is using for that NFS storage.  I want to make sure
> it is
> > utilizing the 10gbe bond on each host vs using the 1gbe connection.
>
> We don't configure anything regarding the network used for NFS storage,
> so it works
> just like any other NFS mount you create yourself.
>
> Nir
>



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>> --
>>
>> *Charles Kozler*
>> *Vice President, IT Operations*
>>
>> FIX Flyer, LLC
>> 225 Broadway | Suite 1600 | New York, NY 10007
>> 1-888-349-3593
>> http://www.fixflyer.com 
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi Nir,

I wouldn't say that the performance coming from LVM is significantly 
better than from a filesystem if the latter is well built. In VMware, the 
performance of a VMDK running on top of VMFS5 versus an RDM shows no 
significant gain one over the other. I've always preferred to have 
machines in a filesystem for the ease of management. In some cases, with 
hundreds of them in a single filesystem, I never faced performance issues. 
The bottleneck normally comes down to the storage architecture (storage 
controller, RAID config, etc.).


The multipath is certainly a plus that helps in certain cases.

I guess the answer to my original question is clear. If I want to use 
block storage shared among different hosts there is no choice in oVirt 
other than LVM.
In a particular case I have storage shared via a kind of internal SAS 
backplane to all servers. The only alternative to that would be to dedicate 
a server to own the storage and export it as NFS, but in that case there 
would be some losses in terms of hardware and reliability.


Thanks
Fernando

On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani 
 wrote:

Hi Nir,
Thanks for clarification.

Answering your questions: The intent was to use a POSIX-like filesystem
similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice for how
the block storage is presented to multiple servers. Yes, I heard about GFS2
scaling issues in the past, but thought they had been resolved nowadays; it
seems not.

I had the impression that qcow2 images have both thin-provisioning and
snapshot capabilities.

Yes, using file-based storage, you have both snapshots and thin provisioning;
this is the most reliable way to get thin provisioning in oVirt.

But then you pay for the file system overhead, whereas in block storage the qemu
image is using the LV directly.

In block storage we use multipath, so if you have multiple NICs and networks,
you get better reliability and performance.


Regarding LVM, I don't like the idea of having VMs buried inside an LVM volume,
nor the idea of troubleshooting LVM volumes when necessary. Dealing with
qcow2 images for every VM separately makes things much easier for doing
several tasks. I would say that people coming from VMware would prefer to
deal with a VMDK rather than an RDM LUN. On the other hand, I have nothing to
say about LVM performance.

LVM has its own issues with many LVs on the same VG - we recommend using
up to 350 LVs per VG. If you need more, you need to use another VG.

The best would be to try both and use the best storage for the particular
use case.

Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Nir Soffer
On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani
 wrote:
> Hi Nir,
> Thanks for clarification.
>
> Answering your questions: The intent was to use a POSIX-like filesystem
> similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice for how
> the block storage is presented to multiple servers. Yes, I heard about GFS2
> scaling issues in the past, but thought they had been resolved nowadays; it
> seems not.
>
> I had the impression that qcow2 images have both thin-provisioning and
> snapshot capabilities.

Yes, using file-based storage, you have both snapshots and thin provisioning;
this is the most reliable way to get thin provisioning in oVirt.

But then you pay for the file system overhead, whereas in block storage the qemu
image is using the LV directly.

In block storage we use multipath, so if you have multiple NICs and networks,
you get better reliability and performance.

> Regarding LVM, I don't like the idea of having VMs buried inside an LVM volume,
> nor the idea of troubleshooting LVM volumes when necessary. Dealing with
> qcow2 images for every VM separately makes things much easier for doing
> several tasks. I would say that people coming from VMware would prefer to
> deal with a VMDK rather than an RDM LUN. On the other hand, I have nothing to
> say about LVM performance.

LVM has its own issues with many LVs on the same VG - we recommend using
up to 350 LVs per VG. If you need more, you need to use another VG.

The best would be to try both and use the best storage for the particular
use case.

Nir
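As a rough way to see how close a block storage domain is to that limit, you can count the LVs per VG on a host, for example:

  lvs --noheadings -o vg_name | sort | uniq -c

Each block storage domain corresponds to one VG, so the per-VG count approximates the number of disk volumes and snapshots it holds.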
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi Nir,
Thanks for clarification.

Answering your questions: The intent was to use a POSIX-like filesystem 
similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice for 
how the block storage is presented to multiple servers. Yes, I heard 
about GFS2 scaling issues in the past, but thought they had been resolved 
nowadays; it seems not.


I had the impression that qcow2 images have both thin-provisioning and 
snapshot capabilities.


Regarding LVM, I don't like the idea of having VMs buried inside an LVM 
volume, nor the idea of troubleshooting LVM volumes when necessary. 
Dealing with qcow2 images for every VM separately makes things much 
easier for doing several tasks. I would say that people coming from 
VMware would prefer to deal with a VMDK rather than an RDM LUN. On the 
other hand, I have nothing to say about LVM performance.


Best
Fernando


On 14/06/2016 16:35, Nir Soffer wrote:

On Tue, Jun 14, 2016 at 8:59 PM, Fernando Frediani
 wrote:

Hi there,

I see that supported storage types in oVirt are: iSCSI, FCoE, NFS, Local and
Gluster.

We support iSCSI, FC, FCoE, NFS, Gluster, Ceph, Local and any POSIX-like
shared file system.


Specifically speaking about iSCSI and FCoE, I see they use LVM at the block
storage level to store the virtual machines.

To be more specific, we use LVM to create volumes. Each virtual machine disk
uses one volume and an additional volume for each snapshot.


I just wanted to understand why the choice was to have LVM

What would you use instead?


and if that is
the only option at the moment.

This is the only option for block storage if you need snapshots or thin
provisioning.

If a preallocated disk without snapshots is good enough for you, you
can attach a LUN directly to a VM. This will give the best performance.


Was it ever considered to have something like GFS2 or OCFS2, comparable to
VMFS5, with VMs running as qcow2 or raw files on top of it?

Any POSIX-compatible file system can be used, with raw or qcow2 files.

You can use GFS2, but I heard that it does not scale well.


I don't like LVM and have a strong preference for anything related to
storage that doesn't use it, which is the reason I'm looking for a different way to
use block storage without it having to be LVM.

You can use one of the file-based storage options, or Ceph.

What's wrong with LVM?


Nir


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage types in oVirt

2016-06-14 Thread Nir Soffer
On Tue, Jun 14, 2016 at 8:59 PM, Fernando Frediani
 wrote:
> Hi there,
>
> I see that supported storage types in oVirt are: iSCSI, FCoE, NFS, Local and
> Gluster.

We support iSCSI, FC, FCoE, NFS, Gluster, Ceph, Local and any POSIX-like
shared file system.

> Specifically speaking about iSCSI and FCoE, I see they use LVM at the block
> storage level to store the virtual machines.

To be more specific, we use LVM to create volumes. Each virtual machine disk
uses one volume and an additional volume for each snapshot.

> I just wanted to understand why the choice was to have LVM

What would you use instead?

> and if that is
> the only option at the moment.

This is the only option for block storage if you need snapshots or thin
provisioning.

If a preallocated disk without snapshots is good enough for you, you
can attach a LUN directly to a VM. This will give the best performance.

> Was it ever considered to have something like GFS2 or OCFS2, comparable to
> VMFS5, with VMs running as qcow2 or raw files on top of it?

Any POSIX-compatible file system can be used, with raw or qcow2 files.

You can use GFS2, but I heard that it does not scale well.

> I don't like LVM and have a strong preference for anything related to
> storage that doesn't use it, which is the reason I'm looking for a different way to
> use block storage without it having to be LVM.

You can use one of the file-based storage options, or Ceph.

What's wrong with LVM?


Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine Woes

2016-06-14 Thread Nic Seltzer
Hello!

I'm looking for someone who can help me out with a hosted-engine setup that
I have. I experienced a power event a couple of weeks ago. Initially,
things seemed to have come back fine, but the other day, I noticed that one
of the nodes for the cluster was down. I tried to drop it into maintenance
mode (which never completed) and reboot it then "Confirm the Host has been
rebooted". Neither of these steps allowed the host to re-enter the cluster.
Has anyone encountered this? At this point, I'd like to reboot the
hosted-engine, but I can't find documentation instructing me on "how". I'm
also open to other suggestions or references to documentation that will
help triage my issue.

Thanks!


nic
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Ryan Mahoney
And I should be able to give ip route bond0 as the NIC for the 10gbe bond?
Or do I have to give it both individual NICs in the bond separately?

On Tue, Jun 14, 2016 at 1:35 PM, Charles Kozler 
wrote:

> Set a static route to the storage to go through the NIC(s) you want it to
>
> ip route add x.x.x.x/32 dev <interface> via <gateway>
>
> where x.x.x.x/32 is the IP of the NFS server
>
> On Tue, Jun 14, 2016 at 1:30 PM, Ryan Mahoney <
> r...@beaconhillentertainment.com> wrote:
>
>> Yes, Fernando this is exactly what I'm asking for
>>
>> On Tue, Jun 14, 2016 at 1:25 PM, Fernando Frediani <
>> fernando.fredi...@upx.com.br> wrote:
>>
>>> I guess what the colleague wants to know is how to specify an interface
>>> in a different VLAN on top of the 10Gb LACP in order for the NFS
>>> traffic to flow.
>>> In the VMware world that would be a vmkernel interface, so a new
>>> network/interface with a different IP address than management (ovirtmgmt).
>>>
>>> Fernando
>>>
>>>
>>> On 14/06/2016 13:52, Ryan Mahoney wrote:
>>>
>>> Right, but how do you specify which network the nfs traffic is using?
>>>
>>> On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer  wrote:
>>>
 On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
  wrote:
 > On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
 > currently setup for Management, Display, VM and Migration. I also
 have a 2
 > 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen
 or so
 > VLANS needed for the various VM's to access.  I have NFS storage
 mounted to
 > the Data Center, and I would like to know how I check/specify which
 network
 > connection ovirt is using for that NFS storage.  I want to make sure
 it is
 > utilizing the 10gbe bond on each host vs using the 1gbe connection.

 We don't configure anything regarding the network used for NFS storage, so
 it works
 just like any other NFS mount you create yourself.

 Nir

>>>
>>>
>>>
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>
> --
>
> *Charles Kozler*
> *Vice President, IT Operations*
>
> FIX Flyer, LLC
> 225 Broadway | Suite 1600 | New York, NY 10007
> 1-888-349-3593
> http://www.fixflyer.com 
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Storage types in oVirt

2016-06-14 Thread Fernando Frediani

Hi there,

I see that supported storage types in oVirt are: iSCSI, FCoE, NFS, Local 
and Gluster.
Specifically speaking about iSCSI and FCoE, I see they use LVM at the 
block storage level to store the virtual machines.


I just wanted to understand why the choice was to have LVM and if that 
is the only option at the moment.


Was it ever considered to have something like GFS2 or OCFS2, comparable 
with VMFS5, with VMs running as qcow2 or raw files on top of it?


I don't like LVM and have a strong preference for anything related to 
storage that doesn't use it, which is the reason I'm looking for a different 
way to use block storage without it having to be LVM.


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Charles Kozler
Set a static route to the storage to go through the NIC(s) you want it to

ip route add x.x.x.x/32 dev <interface> via <gateway>

where x.x.x.x/32 is the IP of the NFS server
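For example, with a hypothetical NFS server at 10.10.10.5 reachable through a tagged interface bond0.100 and a gateway 10.10.10.1 on that VLAN (all three values are placeholders, not taken from the setup discussed here):

ip route add 10.10.10.5/32 dev bond0.100 via 10.10.10.1

For the route to actually work, the host also needs an address on that network; otherwise the kernel rejects the gateway with a "network is unreachable" error.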

On Tue, Jun 14, 2016 at 1:30 PM, Ryan Mahoney <
r...@beaconhillentertainment.com> wrote:

> Yes, Fernando this is exactly what I'm asking for
>
> On Tue, Jun 14, 2016 at 1:25 PM, Fernando Frediani <
> fernando.fredi...@upx.com.br> wrote:
>
>> I guess what the colleague wants to know is how to specify an interface in
>> a different VLAN on top of the 10Gb LACP in order for the NFS traffic
>> to flow.
>> In the VMware world that would be a vmkernel interface, so a new
>> network/interface with a different IP address than management (ovirtmgmt).
>>
>> Fernando
>>
>>
>> On 14/06/2016 13:52, Ryan Mahoney wrote:
>>
>> Right, but how do you specify which network the nfs traffic is using?
>>
>> On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer  wrote:
>>
>>> On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
>>>  wrote:
>>> > On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
>>> > currently setup for Management, Display, VM and Migration. I also have
>>> a 2
>>> > 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen or
>>> so
>>> > VLANS needed for the various VM's to access.  I have NFS storage
>>> mounted to
>>> > the Data Center, and I would like to know how I check/specify which
>>> network
>>> > connection ovirt is using for that NFS storage.  I want to make sure
>>> it is
>>> > utilizing the 10gbe bond on each host vs using the 1gbe connection.
>>>
>>> We don't configure anything regarding the network used for NFS storage, so
>>> it works
>>> just like any other NFS mount you create yourself.
>>>
>>> Nir
>>>
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 

*Charles Kozler*
*Vice President, IT Operations*

FIX Flyer, LLC
225 Broadway | Suite 1600 | New York, NY 10007
1-888-349-3593
http://www.fixflyer.com 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Ryan Mahoney
Yes, Fernando this is exactly what I'm asking for

On Tue, Jun 14, 2016 at 1:25 PM, Fernando Frediani <
fernando.fredi...@upx.com.br> wrote:

> I guess what the colleague wants to know is how to specify an interface in
> a different VLAN on top of the 10Gb LACP in order for the NFS traffic
> to flow.
> In the VMware world that would be a vmkernel interface, so a new
> network/interface with a different IP address than management (ovirtmgmt).
>
> Fernando
>
>
> On 14/06/2016 13:52, Ryan Mahoney wrote:
>
> Right, but how do you specify which network the nfs traffic is using?
>
> On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer  wrote:
>
>> On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
>>  wrote:
>> > On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
>> > currently setup for Management, Display, VM and Migration. I also have
>> a 2
>> > 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen or
>> so
>> > VLANS needed for the various VM's to access.  I have NFS storage
>> mounted to
>> > the Data Center, and I would like to know how I check/specify which
>> network
>> > connection ovirt is using for that NFS storage.  I want to make sure it
>> is
>> > utilizing the 10gbe bond on each host vs using the 1gbe connection.
>>
>> We don't configure anything regarding the network used for NFS storage, so
>> it works
>> just like any other NFS mount you create yourself.
>>
>> Nir
>>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Fernando Frediani
I guess what the colleague wants to know is how to specify an interface 
in a different VLAN on top of the 10Gb LACP in order for the NFS 
traffic to flow.
In the VMware world that would be a vmkernel interface, so a new 
network/interface with a different IP address than management (ovirtmgmt).


Fernando

On 14/06/2016 13:52, Ryan Mahoney wrote:

Right, but how do you specify which network the nfs traffic is using?

On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer > wrote:


On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
> wrote:
> On my hosts, I have configured a 1gbe nic for ovirtmgmt whose
usage is
> currently setup for Management, Display, VM and Migration. I
also have a 2
> 10gbe nics bonded LACP which are VLAN tagged and assigned the
dozen or so
> VLANS needed for the various VM's to access.  I have NFS storage
mounted to
> the Data Center, and I would like to know how I check/specify
which network
> connection ovirt is using for that NFS storage.  I want to make
sure it is
> utilizing the 10gbe bond on each host vs using the 1gbe connection.

We don't configure anything regarding the network used for NFS
storage, so it works
just like any other NFS mount you create yourself.

Nir




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Pavel Gashev
Ryan,

You can check it with the following shell command:
# ip route get x.x.x.x
where x.x.x.x is an IP address of your NFS storage.
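For instance, with a made-up storage address and interface name purely for illustration, the output names the outgoing device, which shows whether the 10gbe bond or the 1gbe management NIC is being used:

# ip route get 10.10.10.5
10.10.10.5 dev bond0.100 src 10.10.10.21
    cache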

On 14/06/16 19:52, "users-boun...@ovirt.org on 
behalf of Ryan Mahoney" 
 on behalf of 
r...@beaconhillentertainment.com> 
wrote:

Right, but how do you specify which network the nfs traffic is using?

On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer 
> wrote:
On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
> 
wrote:
> On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
> currently setup for Management, Display, VM and Migration. I also have a 2
> 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen or so
> VLANS needed for the various VM's to access.  I have NFS storage mounted to
> the Data Center, and I would like to know how I check/specify which network
> connection ovirt is using for that NFS storage.  I want to make sure it is
> utilizing the 10gbe bond on each host vs using the 1gbe connection.

We don't configure anything regarding the network used for NFS storage, so it works
just like any other NFS mount you create yourself.

Nir

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Ryan Mahoney
Right, but how do you specify which network the nfs traffic is using?

On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer  wrote:

> On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
>  wrote:
> > On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
> > currently setup for Management, Display, VM and Migration. I also have a
> 2
> > 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen or so
> > VLANS needed for the various VM's to access.  I have NFS storage mounted
> to
> > the Data Center, and I would like to know how I check/specify which
> network
> > connection ovirt is using for that NFS storage.  I want to make sure it
> is
> > utilizing the 10gbe bond on each host vs using the 1gbe connection.
>
> We don't configure anything regarding the network used for NFS storage, so it
> works
> just like any other NFS mount you create yourself.
>
> Nir
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Nir Soffer
On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
 wrote:
> On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
> currently setup for Management, Display, VM and Migration. I also have a 2
> 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen or so
> VLANS needed for the various VM's to access.  I have NFS storage mounted to
> the Data Center, and I would like to know how I check/specify which network
> connection ovirt is using for that NFS storage.  I want to make sure it is
> utilizing the 10gbe bond on each host vs using the 1gbe connection.

We don't configure anything regarding the network used for NFS storage, so it works
just like any other NFS mount you create yourself.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt training request

2016-06-14 Thread Dan Yasny
If you want something official, I think RH318, the RHCVA course should
cover everything you need.

There are also two (slightly out of date) books available:
https://www.amazon.ca/Getting-Started-Alexey-Lesovsky-2013-11-22/dp/B01FGLUZMA/ref=sr_1_1?ie=UTF8=1465915525=8-1=ovirt
https://www.amazon.ca/Getting-Started-Red-Enterprise-Virtualization/dp/1782167404/ref=sr_1_2?s=books=UTF8=1465915578=1-2=red+hat+enterprise+virtualization

For anything else, there's plenty of documentation available

On Tue, Jun 14, 2016 at 10:34 AM, Gonzalo Faramiñan 
wrote:

> Hello everyone,
> this is my very first message on this users list. I'm new to oVirt, and
> would like to know about online training/courses about it.
> Courses can be taught in Spanish or English. Any info would be really
> appreciated.
>
> Thank you!
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt training request

2016-06-14 Thread Gonzalo Faramiñan
Hello everyone,
this is my very first message on this users list. I'm new to oVirt, and
would like to know about online training/courses about it.
Courses can be taught in Spanish or English. Any info would be really
appreciated.

Thank you!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] which NIC/network NFS storage is using

2016-06-14 Thread Ryan Mahoney
On my hosts, I have configured a 1gbe NIC for ovirtmgmt whose usage is
currently set up for Management, Display, VM and Migration. I also have 2
10gbe NICs bonded with LACP which are VLAN tagged and assigned the dozen or so
VLANs needed for the various VMs to access.  I have NFS storage mounted to
the Data Center, and I would like to know how I check/specify which network
connection oVirt is using for that NFS storage.  I want to make sure it is
utilizing the 10gbe bond on each host vs using the 1gbe connection.

Thanks,
Ryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host kernel upgrade

2016-06-14 Thread Rafael Almeida
"kernel is hot patched ", mm which version of CentOS use your? i use: 
ovirt-engine-3.6.6.2 over centos 3.10.0-327.18.2.el7.x86_64 and 
periodically updates the kernel.


I've seen kpatch and ksplice. What is your implementation?

greetings
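For reference, a quick way to compare the running kernel with the newest installed one on a stock CentOS 7 host (this shows whether a reboot, or a live-patching tool, has actually brought the new kernel into use):

  uname -r
  rpm -q kernel | sort -V | tail -n 1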



On 06/14/2016 08:14 AM, Nathanaël Blanchet wrote:
Since el7, you don't need to reboot anymore after your kernel upgrade, 
kernel is hot patched.


On 14/06/2016 15:09, Rafael Almeida wrote:

Great, thnx


On 06/13/2016 06:12 PM, Nir Soffer wrote:

On Tue, Jun 14, 2016 at 1:12 AM, Rafael Almeida
 wrote:
Hello, friends, is it safe to reboot my host after updating the kernel
in my
CentOS 7.2 x64? The oVirt engine 3.6 runs over this CentOS on an
independent host. What is the frequency at which the
hosts/hypervisors communicate with the oVirt engine?
The hypervisors do not communicate with the engine; the engine communicates 
with them,

so you can safely reboot the engine host.

Nir






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host kernel upgrade

2016-06-14 Thread Nathanaël Blanchet
Since el7, you don't need to reboot anymore after your kernel upgrade, 
kernel is hot patched.


On 14/06/2016 15:09, Rafael Almeida wrote:

Great, thnx


On 06/13/2016 06:12 PM, Nir Soffer wrote:

On Tue, Jun 14, 2016 at 1:12 AM, Rafael Almeida
 wrote:

Hello, friends, is it safe to reboot my host after updating the kernel in my
CentOS 7.2 x64? The oVirt engine 3.6 runs over this CentOS on an
independent host. What is the frequency at which the
hosts/hypervisors communicate with the oVirt engine?
The hypervisors do not communicate with the engine; the engine communicates 
with them,

so you can safely reboot the engine host.

Nir





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem with Getty and Serial Consoles

2016-06-14 Thread Francesco Romani
- Original Message -
> From: "Christophe TREFOIS" 
> To: "users" 
> Sent: Monday, June 13, 2016 5:05:28 PM
> Subject: [ovirt-users] Problem with Getty and Serial Consoles
> 
> Dear all,
> 
> I am running 3.6.6 and am able to select a Console, however, the screen is
> black.
> 
> On the hypervisor, I try to start getty service manually, and end up with
> following error in journal
> 
> Jun 13 17:01:01 elephant-server.lcsb.uni.lu systemd[1]: Stopping
> user-0.slice.
> -- Subject: Unit user-0.slice has begun shutting down
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit user-0.slice has begun shutting down.
> Jun 13 17:01:37 elephant-server.lcsb.uni.lu systemd[1]: Job
> dev-hvc0.device/start timed out.
> Jun 13 17:01:37 elephant-server.lcsb.uni.lu systemd[1]: Timed out waiting for
> device dev-hvc0.device.
> -- Subject: Unit dev-hvc0.device has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit dev-hvc0.device has failed.
> --
> -- The result is timeout.
> Jun 13 17:01:37 elephant-server.lcsb.uni.lu systemd[1]: Dependency failed for
> Serial Getty on hvc0.
> -- Subject: Unit serial-getty@hvc0.service has failed
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit serial-getty@hvc0.service has failed.
> 
> I am running CentOS 7.2.
> 
> Does anybody have some pointers on what could be the issue here?

Hi!

Make sure you added the right parameter to the kernel command line, as per

"""
TL;DR: To make use of a serial console, just use console=ttyS0 on the kernel 
command line, and systemd will automatically start a getty on it for you.
"""

nice full tutorial on http://0pointer.de/blog/projects/serial-console.html
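On a CentOS 7.2 host the usual way to add that parameter is via grub (a sketch assuming the default grub2/BIOS layout; adjust the config path for UEFI):

  # append console=ttyS0 to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
  grub2-mkconfig -o /boot/grub2/grub.cfg
  reboot

After the reboot, systemd should start a serial-getty on that console automatically.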

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-14 Thread Charles Kozler
Anyone have any other possible information on this? I've noticed this issue
before and usually it just takes a bit of time for the cluster to 'settle'
after some node reboots, but it's been a few days and it's still marked as
stale.



--== Host 1 status ==--

Status up-to-date  : False
Hostname   : njsevcnp01
Host ID: 1
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 260dbf06
Host timestamp : 327


--== Host 2 status ==--

Status up-to-date  : False
Hostname   : njsevcnp02
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : 25da07df
Host timestamp : 3030


--== Host 3 status ==--

Status up-to-date  : False
Hostname   : njsevcnp03
Host ID: 3
Engine status  : unknown stale-data
Score  : 0
stopped: True
Local maintenance  : False
crc32  : c67818cb
Host timestamp : 10877406


&& vdsClient on node2 showing hosted engine is up on node 2

48207078-8cb0-413c-8984-40aa772f4d94
Status = Up
nicModel = rtl8139,pv
statusTime = 4540044460
emulatedMachine = pc
pid = 30571
vmName = HostedEngine
devices = [{'device': 'memballoon', 'specParams': {'model': 'none'},
'type': 'balloon', 'alias': 'balloon0'}, {'alias': 'scsi0', 'deviceId':
'17f10db1-2e9e-4422-9ea5-61a628072e29', 'address': {'slot': '0x04', 'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device':
'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'usb',
'alias': 'usb', 'type': 'controller', 'deviceId':
'9be34ac0-7d00-4a95-bdfe-5b328fc1355b', 'address': {'slot': '0x01', 'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x2'}}, {'device':
'ide', 'alias': 'ide', 'type': 'controller', 'deviceId':
'222629a8-0dd6-4e8e-9b42-43aac314c0c2', 'address': {'slot': '0x01', 'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x1'}}, {'device':
'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller',
'deviceId': '7cbccd04-853a-408f-94c2-5b10b641b7af', 'address': {'slot':
'0x05', 'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function':
'0x0'}}, {'device': 'vnc', 'specParams': {'spiceSecureChannels':
'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
'displayIp': '0'}, 'type': 'graphics', 'port': '5900'}, {'nicModel': 'pv',
'macAddr': '00:16:3e:16:83:91', 'linkActive': True, 'network': 'ovirtmgmt',
'alias': 'net0', 'deviceId': '3f679659-142c-41f3-a69d-4264d7234fbc',
'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x', 'type':
'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name':
'vnet0'}, {'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x',
'type': 'pci', 'function': '0x0'}, 'volumeInfo': {'domainID':
'c6323975-2966-409d-b9e0-48370a513a98', 'volType': 'path', 'leaseOffset':
0, 'volumeID': 'aa66d378-5a5f-490c-b0ab-993b79838d95', 'leasePath':
'/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt36__engine/c6323975-2966-409d-b9e0-48370a513a98/images/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95.lease',
'imageID': '8518ef4a-7b17-4291-856c-81875ba4e264', 'path':
'/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt36__engine/c6323975-2966-409d-b9e0-48370a513a98/images/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95'},
'index': '0', 'iface': 'virtio', 'apparentsize': '10737418240', 'imageID':
'8518ef4a-7b17-4291-856c-81875ba4e264', 'readonly': 'False', 'shared':
'exclusive', 'truesize': '6899802112', 'type': 'disk', 'domainID':
'c6323975-2966-409d-b9e0-48370a513a98', 'reqsize': '0', 'format': 'raw',
'deviceId': '8518ef4a-7b17-4291-856c-81875ba4e264', 'poolID':
'----', 'device': 'disk', 'path':
'/var/run/vdsm/storage/c6323975-2966-409d-b9e0-48370a513a98/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95',
'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID':
'aa66d378-5a5f-490c-b0ab-993b79838d95', 'alias': 'virtio-disk0',
'volumeChain': [{'domainID': 'c6323975-2966-409d-b9e0-48370a513a98',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
'aa66d378-5a5f-490c-b0ab-993b79838d95', 'leasePath':
'/rhev/data-center/mnt/nas01:_volume1_vm__os_ovirt36__engine/c6323975-2966-409d-b9e0-48370a513a98/images/8518ef4a-7b17-4291-856c-81875ba4e264/aa66d378-5a5f-490c-b0ab-993b79838d95.lease',
'imageID': 

Re: [ovirt-users] Neutron appliance & Cloud-init - no bootable device

2016-06-14 Thread Yaniv Dary
Dan, can you refer to the right person?

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Wed, Jun 1, 2016 at 12:44 AM, Sandro Bonazzola 
wrote:

>
> On 31/May/2016 17:35, "Yaniv Dary"  wrote:
> >
> > The appliance is not supported for a few versions now.
>
> So, can someone remove it from our glance repo and mark the feature as
> deprecated on the website?
> Thanks
>
> >
> > Yaniv Dary
> > Technical Product Manager
> > Red Hat Israel Ltd.
> > 34 Jerusalem Road
> > Building A, 4th floor
> > Ra'anana, Israel 4350109
> >
> > Tel : +972 (9) 7692306
> > 8272306
> > Email: yd...@redhat.com
> > IRC : ydary
> >
> >
> > On Sun, May 29, 2016 at 6:25 PM, Bill Bill  wrote:
> >>
> >> Hello,
> >>
> >>
> >>
> >> Trying to configure the Neutron appliance; however, when running
> cloud-init to attempt to set a root password or another user password, the
> VM simply boots to a “no bootable device” state. Without cloud-init the VM
> boots; however, I have no idea what the default root login is.
> >>
> >>
> >>
> >> I’ve also tried booting into single user mode which also fails – has
> this Neutron appliance and cloud-init been tested for v 3.6?
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to build and install ovirt to the Product Environment

2016-06-14 Thread Martin Perina
On Tue, Jun 14, 2016 at 2:26 AM, Dewey Du  wrote:

> Yes, the RPMs run well. But I want to build from source and install it in
> production also.
>

Hi,

in that case I'd recommend building RPMs from source and installing them. But
be aware that the engine is only one part of the whole set of RPMs that the oVirt
project contains. Anyway, if you want to build an RPM from source, please take a
look at README.adoc in the root directory; in short, here are the steps:

  make dist
  rpmbuild -ts ovirt-engine-X.Y.Z.tar.gz
  yum-builddep <name>.src.rpm   # replace <name>.src.rpm with the real name of the .src.rpm from the previous step
  rpmbuild -tb ovirt-engine-X.Y.Z.tar.gz

Created RPMs are stored in $HOME/rpmbuild/RPMS


Be aware that if you want to install those RPMs you will still need other
RPMs from the oVirt project, like otopi, ovirt-host-deploy, ovirt-setup-lib,
ovirt-engine-extension-aaa-jdbc and others. Building all of them is quite a
huge task, which is why I recommended the RPM installation.
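Installing the locally built packages afterwards might then look something like this (a sketch; the exact file names depend on the version you built):

  yum install $HOME/rpmbuild/RPMS/noarch/*.rpm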

Martin Perina


>
> On Tue, Jun 14, 2016 at 12:36 AM, Martin Perina 
> wrote:
>
>>
>>
>> On Mon, Jun 13, 2016 at 6:27 PM, Nir Soffer  wrote:
>>
>>> For such issues better use de...@ovirt.org mailing list:
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>> Nir
>>>
>>> On Mon, Jun 13, 2016 at 6:58 PM, Dewey Du  wrote:
>>> > To build and install ovirt-engine in your home folder under the
>>> ovirt-engine
>>> > directory, execute the following command:
>>> >
>>> > $ make clean install-dev PREFIX="${PREFIX}"
>>> >
>>> > What about installing a production environment? Is the following command
>>> > right?
>>>
>>
>> Do you want to use oVirt in production? If so, then I'd highly recommend
>> using the latest stable version installed from RPMs. More info can be found at
>>
>> http://www.ovirt.org/download/
>>
>> Martin Perina
>> ​
>>
>>
>>> >
>>> > $ make clean install PREFIX="${PREFIX}"
>>> >
>>> >
>>> > ___
>>> > Users mailing list
>>> > Users@ovirt.org
>>> > http://lists.ovirt.org/mailman/listinfo/users
>>> >
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users