[ovirt-users] Failed to activate Storage Domain infra (Data Center Default) by admin

2016-05-13 Thread William Kwan
hi all,
I still have the lock error, which I need some help to resolve. I found some
posts that used the sanlock command, but I'm not sure if it fits my case.
Thanks in advance.

libvirtError: Failed to acquire lock: No space left on device
Thread-49::DEBUG::2016-05-13 22:02:49,693::vm::2779::vm.Vm::(setDownStatus)
vmId=`bd48bba5-4109-4a58-ac89-832add1f4de4`::Changed state to Down: Failed to
acquire lock: No space left on device

I wasn't able to bring up the hosted-engine on Node1 and I got the above error. 
 I was able to run `hosted-engine --start-pool` on Node2, but not Node1.
# hosted-engine --start-pool
Connecting Storage Pool
Starting SPM
f5f78fb8-42e3-46e5-93c2-6c3b00914048
Activating Storage Domain
Resource timeout: ()


With the "Storage pool" up on Node2, I was able to start the hosted-engine.  
But when I try to active the Master storage domain called 'infra'
"Host cannot access the Storage Domain(s)  attached to the Data Center 
Default. Setting Host state to Non-Operational""Failed to connect Host to 
Storage Pool Default"
Tried again and I would get "Failed to activate Storage Domain infra (Data 
Center Default)"

Thanks
W


Re: [ovirt-users] no network with matching name 'vdsm-ovirtmgmt'

2016-05-13 Thread William Kwan
Hi all,
There was an issue with sudoers syntax. Vdsm didn't start correctly, so port
54321 wasn't available. After fixing it, I got some errors on locks. I'm not
sure how this should be fixed correctly.
Thread-49::DEBUG::2016-05-13 22:02:49,689::libvirtconnection::143::root::(wrapper)
Unknown libvirterror: ecode: 38 edom: 42 level: 2 message: Failed to acquire
lock: No space left on device
Thread-49::DEBUG::2016-05-13 22:02:49,690::vm::2287::vm.Vm::(_startUnderlyingVm)
vmId=`bd48bba5-4109-4a58-ac89-832add1f4de4`::_ongoingCreations released
Thread-49::ERROR::2016-05-13 22:02:49,690::vm::2324::vm.Vm::(_startUnderlyingVm)
vmId=`bd48bba5-4109-4a58-ac89-832add1f4de4`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2264, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3328, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Failed to acquire lock: No space left on device
Thread-49::DEBUG::2016-05-13 22:02:49,693::vm::2779::vm.Vm::(setDownStatus)
vmId=`bd48bba5-4109-4a58-ac89-832add1f4de4`::Changed state to Down: Failed to
acquire lock: No space left on device (code=1)



sanlock.log:
2016-05-13 22:09:58-0400 4283 [16627]: cmd 9 target pid 17793 not found
2016-05-13 22:09:58-0400 4284 [16632]: r4 cmd_acquire 2,10,17793 invalid
lockspace found -1 failed 0 name 50816073-c2eb-4d9d-be72-30ad62bf85f2
I did a cluster heal.  File systems seem to be fine.  I see this lockspace file
for hosted-engine.  Does it have anything to do with the issue?
# find /gluster -name '*lock*'
/gluster/engine/brick1/50816073-c2eb-4d9d-be72-30ad62bf85f2/ha_agent/hosted-engine.lockspace
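Before reaching for sanlock commands, a quick sanity check on that lockspace
file might help. A minimal sketch (my own, not an official tool; the expected
size of roughly 1 MiB is an assumption on my part):

import os

# path taken from the find output above
path = ('/gluster/engine/brick1/50816073-c2eb-4d9d-be72-30ad62bf85f2'
        '/ha_agent/hosted-engine.lockspace')
st = os.stat(path)
# sanlock's "No space left on device" can show up when the lockspace
# itself is missing, empty, or unreadable rather than the disk being full
print('%s: %d bytes' % (path, st.st_size))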


Thanks
Will


Re: [ovirt-users] no network with matching name 'vdsm-ovirtmgmt'

2016-05-13 Thread William Kwan
OK. I should change the subject to "failed to start oVirt 3.5".
I found a bugzilla page with the following, and things seem to work fine after
I replaced 'dhcp' with 'none' on two networks: "Fixed by manually removing
'dhcp' from /var/lib/vdsm/persistence/netconf/nets/$NETWORK"
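For anyone scripting the same repair, a sketch of the fix done programmatically;
the JSON layout of the persistence file is an assumption on my part, and the
network name is a placeholder:

import json

# hypothetical network name; the thread uses $NETWORK as a placeholder
path = '/var/lib/vdsm/persistence/netconf/nets/ovirtmgmt'
with open(path) as f:
    conf = json.load(f)
# flip the boot protocol from 'dhcp' to 'none', as the bugzilla fix describes
if conf.get('bootproto') == 'dhcp':
    conf['bootproto'] = 'none'
    with open(path, 'w') as f:
        json.dump(conf, f, indent=4)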
When I try to bring up the hosted engine with `hosted-engine --vm-start`, I get:
# hosted-engine --vm-start
Connection to localhost:54321 refused
Still working on 'recovery'.  
Will
On Friday, May 13, 2016 7:56 PM, William Kwan  wrote:
 

Hi all,
I have a problem with my 3.5 setup with hosted-engine.  After a reboot, I can't
get things started.  When I try to start vdsm manually, it says "no network with
matching name 'vdsm-ovirtmgmt'".
The bridge "ovirtmgmt" is up and the underlying bond0 is good.  I'm not sure
where to start.  Can anyone shed some light on where to start to fix this?
Much appreciated.
Thanks
Will
# /etc/init.d/vdsmd start
initctl: Job is already running: libvirtd
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running upgraded_version_check
vdsm: Running check_is_configured
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running unified_network_persistence_upgrade
vdsm: Running restore_nets
libvirt: Network Driver error : Network not found: no network with matching name 'vdsm-ovirtmgmt'
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsm-restore-net-config", line 137, in <module>
    restore()
  File "/usr/share/vdsm/vdsm-restore-net-config", line 123, in restore
    unified_restoration()
  File "/usr/share/vdsm/vdsm-restore-net-config", line 69, in unified_restoration
    setupNetworks(nets, bonds, connectivityCheck=False, _inRollback=True)
  File "/usr/share/vdsm/network/api.py", line 680, in setupNetworks
    implicitBonding=True, _netinfo=_netinfo, **d)
  File "/usr/share/vdsm/network/api.py", line 226, in wrapped
    ret = func(**attrs)
  File "/usr/share/vdsm/network/api.py", line 315, in addNetwork
    netEnt.configure(**options)
  File "/usr/share/vdsm/network/models.py", line 169, in configure
    self.configurator.configureBridge(self, **opts)
  File "/usr/share/vdsm/network/configurators/ifcfg.py", line 88, in configureBridge
    ifup(bridge.name, bridge.ipConfig.async)
  File "/usr/share/vdsm/network/configurators/ifcfg.py", line 819, in ifup
    rc, out, err = _ifup(iface)
  File "/usr/share/vdsm/network/configurators/ifcfg.py", line 808, in _ifup
    raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
network.errors.ConfigNetworkError: (29, 'Determining IP information for DMZ... failed.')
vdsm: stopped during execute restore_nets task (task returned with error code 1).
vdsm start                                                 [FAILED]



Re: [ovirt-users] Problems exporting VM's

2016-05-13 Thread Luciano Natale
OK! I'll do that!

On Wed, May 11, 2016 at 9:30 AM, Nir Soffer  wrote:

> On Sun, May 8, 2016 at 3:14 AM, Luciano Natale  wrote:
> > Hi everyone. I've been having trouble when exporting VMs. I get an error
> > when moving the image. I've created a whole new storage domain exclusively
> > for this issue, and the same thing happens. It's not always the same VM
> > that fails, but once it fails on a certain storage domain, I cannot export
> > it anymore. Please tell me which logs are relevant so I can post them, and
> > any other relevant information I can provide, and maybe someone can help
> > me get through this problem.
> >
> > Ovirt version is 3.5.4.2-1.el6.
>
> Please upgrade to latest 3.5 version, and report if this issue still
> exists there.
>
> Nir
>



-- 
Luciano Natale


[ovirt-users] no network with matching name 'vdsm-ovirtmgmt'

2016-05-13 Thread William Kwan
Hi all,
I have a problem with my 3.5 setup with hosted-engine.  After a reboot, I can't
get things started.  When I try to start vdsm manually, it says "no network with
matching name 'vdsm-ovirtmgmt'".
The bridge "ovirtmgmt" is up and the underlying bond0 is good.  I'm not sure
where to start.  Can anyone shed some light on where to start to fix this?
Much appreciated.
Thanks
Will
# /etc/init.d/vdsmd start
initctl: Job is already running: libvirtd
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running upgraded_version_check
vdsm: Running check_is_configured
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running unified_network_persistence_upgrade
vdsm: Running restore_nets
libvirt: Network Driver error : Network not found: no network with matching name 'vdsm-ovirtmgmt'
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsm-restore-net-config", line 137, in <module>
    restore()
  File "/usr/share/vdsm/vdsm-restore-net-config", line 123, in restore
    unified_restoration()
  File "/usr/share/vdsm/vdsm-restore-net-config", line 69, in unified_restoration
    setupNetworks(nets, bonds, connectivityCheck=False, _inRollback=True)
  File "/usr/share/vdsm/network/api.py", line 680, in setupNetworks
    implicitBonding=True, _netinfo=_netinfo, **d)
  File "/usr/share/vdsm/network/api.py", line 226, in wrapped
    ret = func(**attrs)
  File "/usr/share/vdsm/network/api.py", line 315, in addNetwork
    netEnt.configure(**options)
  File "/usr/share/vdsm/network/models.py", line 169, in configure
    self.configurator.configureBridge(self, **opts)
  File "/usr/share/vdsm/network/configurators/ifcfg.py", line 88, in configureBridge
    ifup(bridge.name, bridge.ipConfig.async)
  File "/usr/share/vdsm/network/configurators/ifcfg.py", line 819, in ifup
    rc, out, err = _ifup(iface)
  File "/usr/share/vdsm/network/configurators/ifcfg.py", line 808, in _ifup
    raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
network.errors.ConfigNetworkError: (29, 'Determining IP information for DMZ... failed.')
vdsm: stopped during execute restore_nets task (task returned with error code 1).
vdsm start                                                 [FAILED]


Re: [ovirt-users] Hosted Engine Appliance hostname causing issues in production environment

2016-05-13 Thread Langley, Robert
I’m not quite sure about configuring the appliance with cloud-init. I think I 
might know what you mean. Maybe that is the part of the setup that did not 
happen for me.
I ended up creating my own hosted engine with a CDROM ISO to get me going today.
Thank you.

From: Simone Tiraboschi [mailto:stira...@redhat.com]
Sent: Friday, May 13, 2016 8:20 AM
To: Langley, Robert 
Cc: users 
Subject: Re: [ovirt-users] Hosted Engine Appliance hostname causing issues in 
production environment


On 13 May 2016 at 1:29 AM, "Langley, Robert" wrote:
>
> Hello;
>
>
>
> I discovered, the hard way, that the appliance is configured with the 
> hostname of localhost.localdomain.localdomain
>
> It seems that something did not go well with hosted-engine --deploy

Did you try configuring the appliance with cloud-init?
Can you please attach hosted-engine-setup logs?

> I can go into the engine VM and edit the hostname, but if for any reason I 
> need to power off the engine VM, I get in trouble. Because then the hostname 
> goes back to localhost.localdomain.localdomain, and both DHCP & DNS within our 
> enterprise/production environment end up assuming that is the hostname for the 
> IP address reference or record. And it seems that other Linux systems (not 
> the ones I administer; none of my almost 50 systems are affected. Of those, I 
> have about 5 Linux/UNIX based systems attached to the production network) are 
> going offline because of this.
>
> I assume they do not have their HOSTS file configured correctly, but they 
> (the other administrators) are not listening to me. I have already told them 
> what to look for, in order to fix the issue on their own systems, but they’ve 
> ended up with this now for the third time.
>
> This is because I work for a local government and we have various 
> administrators for the various agencies within our County government.
>
> I have unplugged my initial physical host from our County network and I think 
> I’m going to end up bringing up a physical engine host, then work on 
> converting that to a VM for a hosted engine. I hope that is going to work. I 
> tried bringing up the OVA of the appliance within VirtualBox, but that 
> doesn’t work. Is there another virtual software I can bring up within a 
> Windows PC to simply edit the hostname, then export the appliance again?
>
> I think the appliance for download needs to be changed to have something 
> other than “localhost.localdomain.localdomain” as the hostname in the 
> hosted-engine appliance. I downloaded the OVA file from the 
> Jenkins.ovirt.org site.
>
>
>
> Sincerely,
>
> -Robert
>
>


Re: [ovirt-users] creation of lun disks

2016-05-13 Thread Fabrice Bacchella

> On 13 May 2016 at 20:04, Juan Hernández wrote:
> 
> On 05/13/2016 04:21 PM, Fabrice Bacchella wrote:
>> I'm trying to generate a LUN disk using the Python SDK.
>> 
>> My code can be found
>> at https://github.com/fbacchella/ovirtcmd/blob/master/ovlib/disks/__init__.py
>> 
>> If I try to see the content of an existing disk and test with:
>> print disk.get_type()
>> print disk.get_storage_type()
>> 
>> I'm getting :
>> None
>> lun
>> 
>> Now I try to create another LUN disk with
>> kwargs['storage_type'] = 'lun'
>> kwargs['type_'] = 'system'
>> 
>> I'm getting:
>> 
>>> POST /api/disks HTTP/1.1
>>> 
>>>system
>>>virtio-scsi
>>>
>>>
>>>
>>>lun
>>> 
>> ...
>> < 
>> < Incomplete parameters
>> < Storage [type] required for invoke0
>> < 
>> 
>> Changing to
>>kwargs['type_'] = None
>> changes nothing; I'm still getting the same error message.
>> 
>>> POST /api/disks HTTP/1.1
>>> 
>>>virtio-scsi
>>>
>>>
>>>
>>>lun
>>> 
>> ...
>> < 
>> < Incomplete parameters
>> < Storage [type] required for invoke0
>> < 
>> 
>> 
>> What did I do wrong? There is nothing about that in the logs.
>> 
> 
> When creating a LUN disk you need to specify the type (fcp or iscsi)
> inside the "lun_storage" element, so you need to send an XML document
> like this:
> 
>   <disk>
>     <interface>virtio</interface>
>     <lun_storage>
>       <type>fcp</type>
>       <logical_unit id="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2"/>
>     </lun_storage>
>   </disk>
> 
> To do that with the Python SDK you need the following code:
> 
>   params.Disk(
>     interface='virtio',
>     lun_storage=params.Storage(
>       type_='fcp',
>       logical_unit=[
>         params.LogicalUnit(
>           id='0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2',
>         ),
>       ],
>     ),
>   )
> 

Ok thanks, but now I'm checking the RSDL (I should have done that earlier) and
see the following parameters:

disk.lun_storage.type
disk.lun_storage.logical_unit
logical_unit.id
logical_unit.address
logical_unit.port
logical_unit.target




Given the output for an existing SAN disk, I see:


SHP_MSA_2040_SAS_00c0ff26285a209135570100
HP
MSA 2040 SAS
33
1099511627776
0
dd98b206-08e4-48c0-9795-d10bc7581a95


There are no address, port, or target values; those are needed only for iSCSI.
I tried without them and everything was fine, so I think they are not required.
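For reference, a minimal sketch of the fcp variant I would expect to work for
this SAS LUN, reusing the LUN id from the listing above (an untested
assumption, mirroring the structure Juan posted):

# only the id is needed for FC/SAS; address/port/target are iSCSI-only
params.Disk(
    interface='virtio',
    lun_storage=params.Storage(
        type_='fcp',
        logical_unit=[
            params.LogicalUnit(
                id='SHP_MSA_2040_SAS_00c0ff26285a209135570100',
            ),
        ],
    ),
)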




Re: [ovirt-users] Problem remove snapshot

2016-05-13 Thread Marcelo Leandro
I'm going to try these steps:
- power off the VMs
- delete the snapshots

Thanks.



2016-05-13 17:45 GMT-03:00 Greg Padgett :

> On 05/13/2016 02:42 PM, Marcelo Leandro wrote:
>
>> Hello Greg,
>>  You can help me?
>>
>> Thanks.
>>
>
> Hi Marcelo,
>
> I see in your engine log that one of the images can't be found when a
> merge is attempted:
>
>   VDSErrorException: Failed to MergeVDS, error = Drive image file could
>   not be found, code = 13
>
> There were some recent fixes to Live Merge that help it progress beyond
> this particular error; see [1] for details on that.  The fix is in 3.6.6.
>
> It might be helpful to check the vdsm log to see what happened on the host
> when the first merge on that snapshot was attempted.  It should be in the
> Host04 log around time 2016-05-11 19:33:43.
>
> Thanks,
> Greg
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1327140
>
>
>
> 2016-05-11 19:56 GMT-03:00 Marcelo Leandro > >:
>>
>>
>> hello,
>>
>> I have a problem with another VM now:
>> the VM contains two disks; only a09bfb5d-3922-406d-b4e0-daafad96ffec
>> is marked illegal in the snapshot tab.
>>
>> information from the database:
>>
>> imagestatus | storage_pool_id | storage_domain_id | image_group_id | image_guid | parentid
>> 1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 0dc658da-feed-498b-bfd2-3950f7198e11 | e94d711c-b50b-43bd-a67c-cb4643808d9d | b9478a6c-28c5-403b-a889-226d16d399a5 | ----
>> 1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | 05aab51a-8342-4d1f-88c4-6d733da5959a | 023110fa-7d24-46ec-ada8-d617d7c2adaf | a09bfb5d-3922-406d-b4e0-daafad96ffec
>> 4 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | 05aab51a-8342-4d1f-88c4-6d733da5959a | a09bfb5d-3922-406d-b4e0-daafad96ffec | ----
>>
>> error msg:
>> Failed to delete snapshot 'Backup the VM' for VM 'vm-name'.
>>
>>
>> vm-disk-info.py output:
>>
>> Disk e94d711c-b50b-43bd-a67c-cb4643808d9d
>> (sd:0dc658da-feed-498b-bfd2-3950f7198e11)
>> Volumes:
>>  b9478a6c-28c5-403b-a889-226d16d399a5
>>  Disk 05aab51a-8342-4d1f-88c4-6d733da5959a
>> (sd:6e5cce71-3438-4045-9d54-607123e0557e)
>> Warning: volume 023110fa-7d24-46ec-ada8-d617d7c2adaf is in chain but
>> illegal
>>  Volumes:
>>  a09bfb5d-3922-406d-b4e0-daafad96ffec
>>
>> engine.log :
>>
>> https://www.dropbox.com/s/yfsaexpzdz3ngm1/engine.rar?dl=0
>>
>> Thanks.
>>
>>
>> 2016-05-11 18:39 GMT-03:00 Greg Padgett > >:
>>
>>
>> On 05/11/2016 09:49 AM, Marcelo Leandro wrote:
>>
>> hello,
>> I have a problem deleting one snapshot.
>> I see the error:
>> Due to partial snapshot removal, Snapshot 'Backup the VM' of
>> VM 'vmname'
>> now contains only the following disks: 'vmdiskname'.
>>
>>
>> Failed to delete snapshot 'Backup the VM' for VM 'vmname'.
>>
>>
>> Hi Marcelo,
>>
>> Can you try to remove the snapshot again and attach the engine log
>> if it fails?
>>
>> Thanks,
>> Greg
>>
>> information in db:
>>
>> imagestatus | storage_pool_id | storage_domain_id | image_group_id | image_guid | parentid
>> 1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab | 1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4 | 6008671c-2f9d-48de-94f3-cc7d6273de31
>> 4 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab | 6008671c-2f9d-48de-94f3-cc7d6273de31 | ----
>>
>>
>> output the script vm-disk-info.py
>>
>>
>>Disk bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab
>> 

Re: [ovirt-users] VMs seems to be slow, while snapshot delete is happening

2016-05-13 Thread Greg Padgett

On 05/12/2016 03:46 AM, SATHEESARAN wrote:

Hi All,

I have created a VM with 60GB of disk space.
I started I/O inside the VM; after some time, when the disk was 4GB full,
I took a snapshot of the disk from the UI.

I/O kept going, and the new image file (the overlay file) reached 15GB in
size.

Now I deleted the snapshot.
I think this would initiate block merges.
During this time, my I/O was slower, until the snapshot delete completed.

It's not very slow, but noticeably slower.
Is this a known behavior?


Hi Satheesaran,

It sounds like this was a live merge, i.e. snapshot removal while the VM was
running.  This is currently implemented as a block commit, where data from
the snapshot (your 15GB overlay, in this case) is merged down.

For the snapshot deletion to complete, this means that your concurrent I/O and
the data already in the overlay have to be written into the next-lower
image.  So in short, yes, depending on the storage system backing your VM,
I/O may slow down a bit until completion.
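To illustrate the mechanism (a hedged sketch, not oVirt's actual call site;
the VM name and disk target are placeholders), a live merge boils down to a
libvirt block commit:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('myvm')  # hypothetical VM name
# merge the active overlay down into its backing image; passing None for
# base/top lets libvirt pick the active layer and the end of its chain
dom.blockCommit('vda', None, None, 0,
                libvirt.VIR_DOMAIN_BLOCK_COMMIT_ACTIVE)
# in real code, wait until the block job reports "ready", then pivot:
dom.blockJobAbort('vda', libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)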


HTH,
Greg


Advance thanks,
Satheesaran S


Re: [ovirt-users] RemoveDiskSnapshots error's on engine log

2016-05-13 Thread Greg Padgett

On 05/11/2016 05:32 PM, Ivo Rütsche wrote:


Hi List

At the moment we are replacing a filer, and we have to move the VMs to
another filer. We have a lot of problems with it. Sometimes we have to
restart the engine, sometimes we have to move the image around, and
sometimes we don't have a chance to remove the snapshots. When the VM is
running, most of the time the snapshot removal never starts.

Now, i found these errors every second in the engine.log since today:

2016-05-11 23:25:29,102 ERROR
[org.ovirt.engine.core.bll.CommandsFactory]
(DefaultQuartzScheduler_Worker-90) [] Error in invocating CTOR of
command 'RemoveDiskSnapshots': null
2016-05-11 23:25:29,102 ERROR
[org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl]
(DefaultQuartzScheduler_Worker-90) [] Failed to invoke scheduled method
invokeCallbackMethods: null
2016-05-11 23:25:30,104 ERROR
[org.ovirt.engine.core.bll.CommandsFactory]
(DefaultQuartzScheduler_Worker-66) [] Error in invocating CTOR of
command 'RemoveDiskSnapshots': null
2016-05-11 23:25:30,104 ERROR
[org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl]
(DefaultQuartzScheduler_Worker-66) [] Failed to invoke scheduled method
invokeCallbackMethods: null
2016-05-11 23:25:31,105 ERROR
[org.ovirt.engine.core.bll.CommandsFactory]
(DefaultQuartzScheduler_Worker-76) [] Error in invocating CTOR of
command 'RemoveDiskSnapshots': null
2016-05-11 23:25:31,105 ERROR
[org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl]
(DefaultQuartzScheduler_Worker-76) [] Failed to invoke scheduled method
invokeCallbackMethods: null

I tried everything, but I can't find the problem.

Does anyone have an idea where I should search?


Hi Ivo,

I'm not sure what would cause this to start happening after a successful
deployment, but perhaps enabling debug logging would help narrow it down.
See [1] for instructions on that.


Greg

[1] 
http://old.ovirt.org/OVirt_Engine_Development_Environment#Enable_DEBUG_log_-_Runtime_Change.3B_No_Restart




Thank's for help.

Ivo



Re: [ovirt-users] Problem remove snapshot

2016-05-13 Thread Greg Padgett

On 05/13/2016 02:42 PM, Marcelo Leandro wrote:

Hello Greg,
 You can help me?

Thanks.


Hi Marcelo,

I see in your engine log that one of the images can't be found when a 
merge is attempted:


  VDSErrorException: Failed to MergeVDS, error = Drive image file could
  not be found, code = 13

There were some recent fixes to Live Merge that help it progress beyond
this particular error; see [1] for details on that.  The fix is in 3.6.6.


It might be helpful to check the vdsm log to see what happened on the host 
when the first merge on that snapshot was attempted.  It should be in the 
Host04 log around time 2016-05-11 19:33:43.


Thanks,
Greg

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1327140




2016-05-11 19:56 GMT-03:00 Marcelo Leandro >:

hello,

I have a problem with another VM now:
the VM contains two disks; only a09bfb5d-3922-406d-b4e0-daafad96ffec
is marked illegal in the snapshot tab.

information from the database:

imagestatus | storage_pool_id | storage_domain_id | image_group_id | image_guid | parentid
1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 0dc658da-feed-498b-bfd2-3950f7198e11 | e94d711c-b50b-43bd-a67c-cb4643808d9d | b9478a6c-28c5-403b-a889-226d16d399a5 | ----
1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | 05aab51a-8342-4d1f-88c4-6d733da5959a | 023110fa-7d24-46ec-ada8-d617d7c2adaf | a09bfb5d-3922-406d-b4e0-daafad96ffec
4 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | 05aab51a-8342-4d1f-88c4-6d733da5959a | a09bfb5d-3922-406d-b4e0-daafad96ffec | ----

error msg:
Failed to delete snapshot 'Backup the VM' for VM 'vm-name'.


vm-disk-info.py output:

Disk e94d711c-b50b-43bd-a67c-cb4643808d9d
(sd:0dc658da-feed-498b-bfd2-3950f7198e11)
Volumes:
 b9478a6c-28c5-403b-a889-226d16d399a5
 Disk 05aab51a-8342-4d1f-88c4-6d733da5959a
(sd:6e5cce71-3438-4045-9d54-607123e0557e)
Warning: volume 023110fa-7d24-46ec-ada8-d617d7c2adaf is in chain but
illegal
 Volumes:
 a09bfb5d-3922-406d-b4e0-daafad96ffec

engine.log :

https://www.dropbox.com/s/yfsaexpzdz3ngm1/engine.rar?dl=0

Thanks.


2016-05-11 18:39 GMT-03:00 Greg Padgett >:

On 05/11/2016 09:49 AM, Marcelo Leandro wrote:

hello,
I have a problem deleting one snapshot.
I see the error:
Due to partial snapshot removal, Snapshot 'Backup the VM' of
VM 'vmname'
now contains only the following disks: 'vmdiskname'.


Failed to delete snapshot 'Backup the VM' for VM 'vmname'.


Hi Marcelo,

Can you try to remove the snapshot again and attach the engine log
if it fails?

Thanks,
Greg

information in db:

imagestatus | storage_pool_id | storage_domain_id | image_group_id | image_guid | parentid
1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab | 1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4 | 6008671c-2f9d-48de-94f3-cc7d6273de31
4 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab | 6008671c-2f9d-48de-94f3-cc7d6273de31 | ----


output of the vm-disk-info.py script:


   Disk bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab
(sd:6e5cce71-3438-4045-9d54-607123e0557e)
Warning: volume 1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4 is in
chain but illegal
  Volumes:
  6008671c-2f9d-48de-94f3-cc7d6273de31
  1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4


output of the md5sum command:
volume: 1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4
4c05c8258ed5746da6ac2cdb80c8974e  1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4

 

Re: [ovirt-users] Problem remove snapshot

2016-05-13 Thread Marcelo Leandro
Hello Greg,
You can help me?

Thanks.

2016-05-11 19:56 GMT-03:00 Marcelo Leandro :

> hello,
>
> I have a problem with another VM now:
> the VM contains two disks; only a09bfb5d-3922-406d-b4e0-daafad96ffec is
> marked illegal in the snapshot tab.
>
> information from the database:
>
> imagestatus | storage_pool_id | storage_domain_id | image_group_id | image_guid | parentid
> 1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 0dc658da-feed-498b-bfd2-3950f7198e11 | e94d711c-b50b-43bd-a67c-cb4643808d9d | b9478a6c-28c5-403b-a889-226d16d399a5 | ----
> 1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | 05aab51a-8342-4d1f-88c4-6d733da5959a | 023110fa-7d24-46ec-ada8-d617d7c2adaf | a09bfb5d-3922-406d-b4e0-daafad96ffec
> 4 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | 05aab51a-8342-4d1f-88c4-6d733da5959a | a09bfb5d-3922-406d-b4e0-daafad96ffec | ----
>
> error msg:
> Failed to delete snapshot 'Backup the VM' for VM 'vm-name'.
>
> vm-disk-info.py output:
>
> Disk e94d711c-b50b-43bd-a67c-cb4643808d9d
> (sd:0dc658da-feed-498b-bfd2-3950f7198e11)
> Volumes:
> b9478a6c-28c5-403b-a889-226d16d399a5
> Disk 05aab51a-8342-4d1f-88c4-6d733da5959a
> (sd:6e5cce71-3438-4045-9d54-607123e0557e)
> Warning: volume 023110fa-7d24-46ec-ada8-d617d7c2adaf is in chain but
> illegal
> Volumes:
> a09bfb5d-3922-406d-b4e0-daafad96ffec
>
> engine.log :
>
> https://www.dropbox.com/s/yfsaexpzdz3ngm1/engine.rar?dl=0
>
> Thanks.
>
>
> 2016-05-11 18:39 GMT-03:00 Greg Padgett :
>
>> On 05/11/2016 09:49 AM, Marcelo Leandro wrote:
>>
>>> hello,
>>> I have a problem deleting one snapshot.
>>> I see the error:
>>> Due to partial snapshot removal, Snapshot 'Backup the VM' of VM 'vmname'
>>> now contains only the following disks: 'vmdiskname'.
>>>
>>>
>>> Failed to delete snapshot 'Backup the VM' for VM 'vmname'.
>>>
>>
>> Hi Marcelo,
>>
>> Can you try to remove the snapshot again and attach the engine log if it
>> fails?
>>
>> Thanks,
>> Greg
>>
>> information in db:
>>
>>> imagestatus | storage_pool_id | storage_domain_id | image_group_id | image_guid | parentid
>>> 1 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab | 1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4 | 6008671c-2f9d-48de-94f3-cc7d6273de31
>>> 4 | 77e24b20-9d21-4952-a089-3c5c592b4e6d | 6e5cce71-3438-4045-9d54-607123e0557e | bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab | 6008671c-2f9d-48de-94f3-cc7d6273de31 | ----
>>>
>>>
>>> output the script vm-disk-info.py
>>>
>>>
>>>   Disk bfb1e6b5-7a06-41e4-b8b4-2e237ed0c7ab
>>> (sd:6e5cce71-3438-4045-9d54-607123e0557e)
>>> Warning: volume 1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4 is in chain but
>>> illegal
>>>  Volumes:
>>>  6008671c-2f9d-48de-94f3-cc7d6273de31
>>>  1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4
>>>
>>>
>>> output the  command md5sum :
>>> volume :   1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4
>>> 4c05c8258ed5746da6ac2cdb80c8974e  1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4
>>>
>>> ec74d4d8189a5ea24720349868ff7ab7  1bb7bd08-5ee8-4a3b-8978-c15e15d1c5e4
>>>
>>> the volume changed
>>>
>>> output the command md5sum :
>>> volume: 6008671c-2f9d-48de-94f3-cc7d6273de31
>>>
>>> 86daf229ad0b65fb8be3a3de6b0ee840  6008671c-2f9d-48de-94f3-cc7d6273de31
>>>
>>> 86daf229ad0b65fb8be3a3de6b0ee840  6008671c-2f9d-48de-94f3-cc7d6273de31
>>>
>>> the volume did not change
>>>
>>> Thanks.
>>>
>>>
>>>
>>
>


Re: [ovirt-users] creation of lun disks

2016-05-13 Thread Juan Hernández
On 05/13/2016 04:21 PM, Fabrice Bacchella wrote:
> I'm trying to generate a LUN disk using the Python SDK.
> 
> My code can be found
> at https://github.com/fbacchella/ovirtcmd/blob/master/ovlib/disks/__init__.py
> 
> If I try to see the content of an existing disk and test with:
> print disk.get_type()
> print disk.get_storage_type()
> 
> I'm getting :
> None
> lun
> 
> Now I try to create another LUN disk with
> kwargs['storage_type'] = 'lun'
> kwargs['type_'] = 'system'
> 
> I'm getting:
> 
>> POST /api/disks HTTP/1.1
>> 
>> system
>> virtio-scsi
>> 
>> 
>> 
>> lun
>> 
> ...
> < 
> < Incomplete parameters
> < Storage [type] required for invoke0
> < 
> 
> Changing to
> kwargs['type_'] = None
> changes nothing; I'm still getting the same error message.
> 
>> POST /api/disks HTTP/1.1
>> 
>> virtio-scsi
>> 
>> 
>> 
>> lun
>> 
> ...
> < 
> < Incomplete parameters
> < Storage [type] required for invoke0
> < 
> 
> 
> What did I do wrong? There is nothing about that in the logs.
> 

When creating a LUN disk you need to specify the type (fcp or iscsi)
inside the "lun_storage" element, so you need to send an XML document
like this:

  <disk>
    <interface>virtio</interface>
    <lun_storage>
      <type>fcp</type>
      <logical_unit id="0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2"/>
    </lun_storage>
  </disk>

To do that with the Python SDK you need the following code:

  params.Disk(
      interface='virtio',
      lun_storage=params.Storage(
          type_='fcp',
          logical_unit=[
              params.LogicalUnit(
                  id='0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2',
              ),
          ],
      ),
  )
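For completeness, a minimal sketch of submitting that disk with the v3 Python
SDK (the engine URL and credentials are placeholders):

from ovirtsdk.api import API
from ovirtsdk.xml import params

# connect to the engine's REST API; insecure=True skips CA verification
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)
api.disks.add(params.Disk(
    interface='virtio',
    lun_storage=params.Storage(
        type_='fcp',
        logical_unit=[params.LogicalUnit(
            id='0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2')],
    ),
))
api.disconnect()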

-- 
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
3ºD, 28016 Madrid, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.


Re: [ovirt-users] Adding another host to my cluster

2016-05-13 Thread Gervais de Montbrun
Hi Nir,

Thank you for your input.

It was suggested that I try starting vdsm from the shell so I could see the
output, in an effort to solve my issue:
http://lists.ovirt.org/pipermail/users/2016-May/039690.html

Cheers,
Gervais



On May 13, 2016, at 12:12 PM, Nir Soffer  wrote:

On Fri, May 13, 2016 at 3:37 PM, Gervais de Montbrun
 wrote:

Hi Charles,

I think the problem I am having is due to the setup failing and not
something in vdsm configs as I have never gotten this server to start up
properly and the BRIDGE ethernet interface + ovirt routes are not setup.

I put the logs here:
https://www.dropbox.com/sh/5ugyykqh1lgru9l/AACXxRYWr3tgd0WbBVFW5twHa?dl=0

hosted-engine--deploy-logs.zip # Logs from when I tried to deploy and it
failed
vdsm.tar.gz # /var/log/vdsm

Output from running vdsm from the command line:

[root@cultivar2 log]# su -s /bin/bash vdsm


This cannot work unless supervdsmd is running...

[vdsm@cultivar2 log]$ python /usr/share/vdsm/vdsm
(PID: 6521) I am the actual vdsm 4.17.26-1.el7
cultivar2.grove.silverorange.com (3.10.0-327.el7.x86_64)
VDSM will run with cpu affinity: frozenset([1])
/usr/bin/taskset --all-tasks --pid --cpu-list 1 6521 (cwd None)
SUCCESS:  = '';  = 0
Starting scheduler vdsm.Scheduler
started
Run and protect:
registerDomainStateChangeCallback(callbackFunc=)
Run and protect: registerDomainStateChangeCallback, Return response: None
Trying to connect to Super Vdsm
Preparing MOM interface
Using named unix socket /var/run/vdsm/mom-vdsm.sock
Unregistering all secrests
trying to connect libvirt
recovery: started
Setting channels' timeout to 30 seconds.
Starting VM channels listener thread.
Listening at 0.0.0.0:54321
Adding detector 
recovery: completed in 0s
Adding detector 
Starting executor
Starting worker jsonrpc.Executor/0
Worker started
Starting worker jsonrpc.Executor/1
Worker started
Starting worker jsonrpc.Executor/2
Worker started
Starting worker jsonrpc.Executor/3
Worker started
Starting worker jsonrpc.Executor/4
Worker started
Starting worker jsonrpc.Executor/5
Worker started
Starting worker jsonrpc.Executor/6
Worker started
Starting worker jsonrpc.Executor/7
Worker started
XMLRPC server running
Starting executor
Starting worker periodic/0
Worker started
Starting worker periodic/1
Worker started
Starting worker periodic/2
Worker started
Starting worker periodic/3
Worker started
trying to connect libvirt
Panic: Connect to supervdsm service failed: [Errno 2] No such file or
directory
Traceback (most recent call last):
File "/usr/share/vdsm/supervdsm.py", line 78, in _connect
 utils.retry(self._manager.connect, Exception, timeout=60, tries=3)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 959, in retry
 return func()
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 500, in
connect
 conn = Client(self._address, authkey=self._authkey)
File "/usr/lib64/python2.7/multiprocessing/connection.py", line 173, in
Client
 c = SocketClient(address)
File "/usr/lib64/python2.7/multiprocessing/connection.py", line 308, in
SocketClient
 s.connect(address)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
 return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory


Vdsm tries to connect to supervdsmd on startup, and if it is not running
it will fail.

You can do:

systemctl start supervdsmd

And then you can run vdsmd from the shell.

But why do you need to run vdsm from the shell?

Nir


Re: [ovirt-users] Hosted Engine Appliance hostname causing issues in production environment

2016-05-13 Thread Simone Tiraboschi
On 13 May 2016 at 1:29 AM, "Langley, Robert" wrote:
>
> Hello;
>
>
>
> I discovered, the hard way, that the appliance is configured with the
hostname of localhost.localdomain.localdomain
>
> It seems that something did not go well with hosted-engine --deploy

Did you try configuring the appliance with cloud-init?
Can you please attach hosted-engine-setup logs?
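For reference, a minimal cloud-init user-data sketch that would give the
appliance a real hostname (the values are placeholders, and the exact way
hosted-engine-setup feeds this to the appliance may differ):

#cloud-config
hostname: engine
fqdn: engine.example.com
manage_etc_hosts: true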

> I can go into the engine VM and edit the hostname, but if for any reason
I need to power off the engine VM, I get in trouble. Because then the
hostname goes back to localhost.localdomain.localdomain, and both DHCP & DNS
within our enterprise/production environment end up assuming that is the
hostname for the IP address reference or record. And it seems that other Linux
systems (not the ones I administer; none of my almost 50 systems are
affected. Of those, I have about 5 Linux/UNIX based systems attached to the
production network) are going offline because of this.
>
> I assume they do not have their HOSTS file configured correctly, but they
(the other administrators) are not listening to me. I have already told
them what to look for, in order to fix the issue on their own systems, but
they’ve ended up with this now for the third time.
>
> This is because I work for a local government and we have various
administrators for the various agencies within our County government.
>
> I have unplugged my initial physical host from our County network and I
think I’m going to end up bringing up a physical engine host, then work on
converting that to a VM for a hosted engine. I hope that is going to work.
I tried bringing up the OVA of the appliance within VirtualBox, but that
doesn’t work. Is there another virtual software I can bring up within a
Windows PC to simply edit the hostname, then export the appliance again?
>
> I think the appliance for download needs to be changed to have something
other than “localhost.localdomain.localdomain” as the hostname in the
hosted-engine appliance. I downloaded the OVA file from the
Jenkins.ovirt.org site.
>
>
>
> Sincerely,
>
> -Robert
>
>


Re: [ovirt-users] Adding another host to my cluster

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 3:37 PM, Gervais de Montbrun
 wrote:
> Hi Charles,
>
> I think the problem I am having is due to the setup failing and not
> something in vdsm configs as I have never gotten this server to start up
> properly and the BRIDGE ethernet interface + ovirt routes are not setup.
>
> I put the logs here:
> https://www.dropbox.com/sh/5ugyykqh1lgru9l/AACXxRYWr3tgd0WbBVFW5twHa?dl=0
>
> hosted-engine--deploy-logs.zip # Logs from when I tried to deploy and it
> failed
> vdsm.tar.gz # /var/log/vdsm
>
> Output from running vdsm from the command line:
>
> [root@cultivar2 log]# su -s /bin/bash vdsm

This cannot work unless supervdsmd is running...

> [vdsm@cultivar2 log]$ python /usr/share/vdsm/vdsm
> (PID: 6521) I am the actual vdsm 4.17.26-1.el7
> cultivar2.grove.silverorange.com (3.10.0-327.el7.x86_64)
> VDSM will run with cpu affinity: frozenset([1])
> /usr/bin/taskset --all-tasks --pid --cpu-list 1 6521 (cwd None)
> SUCCESS:  = '';  = 0
> Starting scheduler vdsm.Scheduler
> started
> Run and protect:
> registerDomainStateChangeCallback(callbackFunc= 0x381b158>)
> Run and protect: registerDomainStateChangeCallback, Return response: None
> Trying to connect to Super Vdsm
> Preparing MOM interface
> Using named unix socket /var/run/vdsm/mom-vdsm.sock
> Unregistering all secrests
> trying to connect libvirt
> recovery: started
> Setting channels' timeout to 30 seconds.
> Starting VM channels listener thread.
> Listening at 0.0.0.0:54321
> Adding detector 
> recovery: completed in 0s
> Adding detector 
> Starting executor
> Starting worker jsonrpc.Executor/0
> Worker started
> Starting worker jsonrpc.Executor/1
> Worker started
> Starting worker jsonrpc.Executor/2
> Worker started
> Starting worker jsonrpc.Executor/3
> Worker started
> Starting worker jsonrpc.Executor/4
> Worker started
> Starting worker jsonrpc.Executor/5
> Worker started
> Starting worker jsonrpc.Executor/6
> Worker started
> Starting worker jsonrpc.Executor/7
> Worker started
> XMLRPC server running
> Starting executor
> Starting worker periodic/0
> Worker started
> Starting worker periodic/1
> Worker started
> Starting worker periodic/2
> Worker started
> Starting worker periodic/3
> Worker started
> trying to connect libvirt
> Panic: Connect to supervdsm service failed: [Errno 2] No such file or
> directory
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsm.py", line 78, in _connect
> utils.retry(self._manager.connect, Exception, timeout=60, tries=3)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 959, in retry
> return func()
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 500, in
> connect
> conn = Client(self._address, authkey=self._authkey)
>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 173, in
> Client
> c = SocketClient(address)
>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 308, in
> SocketClient
> s.connect(address)
>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
> return getattr(self._sock,name)(*args)
> error: [Errno 2] No such file or directory

Vdsm tries to connect to supervdsmd on startup, and if it is not running
it will fail.

You can do:

systemctl start supervdsmd

And then you can run vdsmd from the shell.

But why do you need to run vdsm from the shell?

Nir


Re: [ovirt-users] Adding another host to my cluster

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 3:53 PM, Charles Tassell  wrote:
> Hi Gervais,
>
> >   Okay, I see two problems: there are some leftover directories causing
> > issues, and for some reason VDSM seems to be trying to bind to a port
> > something is already running on (probably an older version of VDSM.)  Try
> removing the duplicate dirs (rmdir
> /var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286 and
> /rhev/data-center/mnt - if they aren't empty don't rm -rf them because they
> might be mounted from your production servers.  Just mv -i them to /root or
> somewhere.)

You should not touch directories under /run/vdsm/storage or /rhev/data-center,
they should not have any negative effect on the system.

>   Next shutdown the vdsm service with "service vdsm stop" (I think, might be
> service stop vdsm, I don't use CentOS much) and kill any running vdsm
> processes (ps ax |grep vdsm)  The error that I saw was:
>
> MainThread::ERROR::2016-05-13 08:58:38,262::clientIF::128::vds::(__init__)
> failed to init clientIF, shutting down storage dispatcher
> MainThread::ERROR::2016-05-13 08:58:38,289::vdsm::171::vds::(run) Exception
> raised
> Traceback (most recent call last):
>   File "/usr/share/vdsm/vdsm", line 169, in run
> serve_clients(log)
>   File "/usr/share/vdsm/vdsm", line 102, in serve_clients
> cif = clientIF.getInstance(irs, log, scheduler)
>   File "/usr/share/vdsm/clientIF.py", line 193, in getInstance
> cls._instance = clientIF(irs, log, scheduler)
>   File "/usr/share/vdsm/clientIF.py", line 123, in __init__
> self._createAcceptor(host, port)
>   File "/usr/share/vdsm/clientIF.py", line 201, in _createAcceptor
> port, sslctx)
>   File "/usr/share/vdsm/protocoldetector.py", line 170, in __init__
> sock = _create_socket(host, port)
>   File "/usr/share/vdsm/protocoldetector.py", line 40, in _create_socket
> server_socket.bind(addr[0][4])
>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
> return getattr(self._sock,name)(*args)
> error: [Errno 98] Address already in use

Please open a bug - this should never happen.
https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm
oVirt team: Infra
Severity: High

Vdsm must set the socket option socket.SO_REUSEADDR
before binding.

Ensuring that only one vdsm instance is running must be
done elsewhere if needed, for example using a lock file.
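A minimal sketch of the suggested fix (the port is vdsm's 54321, taken from
the logs above):

import socket

server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# set SO_REUSEADDR before bind(), so a restarted daemon can rebind while
# the old socket still lingers in TIME_WAIT
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(('0.0.0.0', 54321))
server_socket.listen(5)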

Nir


[ovirt-users] creation of lun disks

2016-05-13 Thread Fabrice Bacchella
I'm trying to generate a LUN disk using the Python SDK.

My code can be found at 
https://github.com/fbacchella/ovirtcmd/blob/master/ovlib/disks/__init__.py

If I try to see the content of an existing disk and test with:
print disk.get_type()
print disk.get_storage_type()

I'm getting :
None
lun

Now I try to create another LUN disk with
kwargs['storage_type'] = 'lun'
kwargs['type_'] = 'system'

I'm getting:

> POST /api/disks HTTP/1.1
> 
> system
> virtio-scsi
> 
> 
> 
> lun
> 
...
< 
< Incomplete parameters
< Storage [type] required for invoke0
< 

Changing to
kwargs['type_'] = None
changes nothing; I'm still getting the same error message.

> POST /api/disks HTTP/1.1
> 
> virtio-scsi
> 
> 
> 
> lun
> 
...
< 
< Incomplete parameters
< Storage [type] required for invoke0
< 


What did I do wrong? There is nothing about that in the logs.






Re: [ovirt-users] [moVirt] Add several datacenters

2016-05-13 Thread Tomas Jelinek


- Original Message -
> From: "Nicolas Ecarnot" 
> To: "users" 
> Sent: Friday, May 13, 2016 3:54:03 PM
> Subject: [ovirt-users] [moVirt] Add several datacenters
> 
> Hello,
> 
> I finally was able to correctly connect to one of our DC with moVirt, so
> I can show off and it is so cool.
> But I find no way to add several datacenters.

If you mean by "datacenter" the datacenters inside one ovirt installation then 
it should work and you should see VMs from all of them (with ability to filter 
by cluster).

If you mean several ovirt installations than it is not implemented, but it is 
one of the things we would like to implement.
So I have opened an issue for it: https://github.com/matobet/moVirt/issues/174

> 
> May someone help please?
> 
> --
> Nicolas ECARNOT


Re: [ovirt-users] Trying to understand the mixed storage configuration options.

2016-05-13 Thread Jason Ziemba
Nir,

The use cases of my current configuration are a mix of duplicated data
clusters (Percona/Consul/Couchbase) and ephemeral systems (web servers)
that work perfectly when 'locked' to a local host.  These are built in such
a way that losing a physical host reduces redundancy but isn't outwardly
service degrading.  The other side is the duplicated databases (namely
Percona), which would induce a fairly high level of NAS bandwidth that is
avoided when they leverage the local storage on their host system.

The goal would be to house the highly redundant and/or ephemeral systems on
the hosts' local storage, whereas guests that need HA, or one-offs, would be
housed on the NAS, allowing seamless HA or online migration.

On Fri, May 13, 2016 at 7:04 AM, Nir Soffer  wrote:

> On Thu, May 12, 2016 at 5:50 AM, Jason Ziemba  wrote:
> > I'm fairly new to oVirt (coming from ProxMox) and trying to wrap my head
> > around the mixed (local/NAS) data domain options that are available.
> >
> > I'm trying to configure a set of systems to have local storage, as their
> > primary data storage domain, though also want to have the ability to
> have a
> > NAS based data domain for guests that are 'mobile' between hosts.
> Currently
> > I'm able to do one or the other, but not both (so it seems).
> >
> > When I put all of the systems in to a single cluster (or single
> data-center)
> > I'm able to have the shared data domain, though have only found the
> ability
> > to configure one system for local storage (not all of them).   When I
> split
> > them out in to separate data centers, they all have their local data
> domain
> > working, but only a single dc is able to access the shared data domain
> at a
> > time.
> >
> > Am I missing something along the way (probably fairly obvious) that does
> > exactly what I'm outlining, or is this functionality not available by
> > design?
>
> This is not available by design.
>
> Can you explain the use case, why do you need to use local storage as
> your primary data storage?
>
> How are you going to migrate your vms if the primary storage is local?
>
> How are you going to start the vms on another host after host failures,
> if the vm storage is on the failed host, and the last state of that disk
> is not available or even lost?
>
> Nir
>


[ovirt-users] [moVirt] Add several datacenters

2016-05-13 Thread Nicolas Ecarnot

Hello,

I finally was able to correctly connect to one of our DC with moVirt, so 
I can show off and it is so cool.

But I find no way to add several datacenters.

Can someone help, please?

--
Nicolas ECARNOT


Re: [ovirt-users] Trying to understand the mixed storage configuration options.

2016-05-13 Thread Pavel Gashev
Nir,

Basically, almost any server has local storage. It could be reliable,
RAID-backed storage. It would be great to allow using it simultaneously with
shared storage.

Other virtualization platforms like VMware/Hyper-V support local and shared
storage simultaneously. If your VM is on local storage, it's tied to a host,
just like when a VM has attached local device(s). VM migration moves disks
between hosts as well.

An obvious solution is to export local storage via NFS to make it shared. This
would allow dropping «local storage datacenters» as a class.

The TODO list would be the following (a rough sketch of step 1 follows the list):
1. VDSM: add creating NFS exports
2. Engine: create NFS exports before local storage activation, and update
NFS exports after adding/removing hosts
3. Engine: add moving disks between storages during (just before, just after)
VM migration.
4. Engine: add constraints to avoid dangerous situations - disallow changing a
VM's startup host, disallow moving disks to another local storage without VM
migration, etc.
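A rough sketch of step 1 under these assumptions (this is not existing VDSM
code; the path, client network, and export options are illustrative):

import subprocess

def export_local_storage(path, clients='192.168.1.0/24'):
    # exportfs comes from nfs-utils; this publishes the local storage
    # directory over NFS so other hosts in the cluster can mount it
    subprocess.check_call(['exportfs', '-o', 'rw,no_root_squash,async',
                           '%s:%s' % (clients, path)])

export_local_storage('/data/local-storage')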

On 13/05/16 14:04, "users-boun...@ovirt.org on behalf of Nir Soffer" 
 wrote:

>On Thu, May 12, 2016 at 5:50 AM, Jason Ziemba  wrote:
>> I'm fairly new to oVirt (coming from ProxMox) and trying to wrap my head
>> around the mixed (local/NAS) data domain options that are available.
>>
>> I'm trying to configure a set of systems to have local storage, as their
>> primary data storage domain, though also want to have the ability to have a
>> NAS based data domain for guests that are 'mobile' between hosts.  Currently
>> I'm able to do one or the other, but not both (so it seems).
>>
>> When I put all of the systems in to a single cluster (or single data-center)
>> I'm able to have the shared data domain, though have only found the ability
>> to configure one system for local storage (not all of them).   When I split
>> them out in to separate data centers, they all have their local data domain
>> working, but only a single dc is able to access the shared data domain at a
>> time.
>>
>> Am I missing something along the way (probably fairly obvious) that does
>> exactly what I'm outlining, or is this functionality not available by
>> design?
>
>This is not available by design.
>
>Can you explain the use case, why do you need to use local storage as
>your primary data storage?
>
>How are you going to migrate your vms if the primary storage is local?
>
>How are you going to start the vms on another host after host failures,
>if the vm storage is on the failed host, and the last state of that disk
>is not available or even lost?
>
>Nir


Re: [ovirt-users] Adding another host to my cluster

2016-05-13 Thread Charles Tassell

Hi Gervais,

  Okay, I see two problems: there are some leftover directories causing
issues, and for some reason VDSM seems to be trying to bind to a port
something is already running on (probably an older version of VDSM).
Try removing the duplicate dirs (rmdir
/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286 and
/rhev/data-center/mnt - if they aren't empty, don't rm -rf them because
they might be mounted from your production servers.  Just mv -i them to
/root or somewhere.)


  Next shutdown the vdsm service with "service vdsm stop" (I think, 
might be service stop vdsm, I don't use CentOS much) and kill any 
running vdsm processes (ps ax |grep vdsm)  The error that I saw was:


MainThread::ERROR::2016-05-13 
08:58:38,262::clientIF::128::vds::(__init__) failed to init clientIF, 
shutting down storage dispatcher
MainThread::ERROR::2016-05-13 08:58:38,289::vdsm::171::vds::(run) 
Exception raised

Traceback (most recent call last):
  File "/usr/share/vdsm/vdsm", line 169, in run
serve_clients(log)
  File "/usr/share/vdsm/vdsm", line 102, in serve_clients
cif = clientIF.getInstance(irs, log, scheduler)
  File "/usr/share/vdsm/clientIF.py", line 193, in getInstance
cls._instance = clientIF(irs, log, scheduler)
  File "/usr/share/vdsm/clientIF.py", line 123, in __init__
self._createAcceptor(host, port)
  File "/usr/share/vdsm/clientIF.py", line 201, in _createAcceptor
port, sslctx)
  File "/usr/share/vdsm/protocoldetector.py", line 170, in __init__
sock = _create_socket(host, port)
  File "/usr/share/vdsm/protocoldetector.py", line 40, in _create_socket
server_socket.bind(addr[0][4])
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use

If you get the same error, do a netstat -lnp and compare it to the same 
from a working box to see if something else is running on the VDSM port.
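Along the same lines, a tiny check (my own sketch) to confirm whether
something already listens on vdsm's port 54321:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# connect_ex returns 0 when something accepts the connection
busy = (s.connect_ex(('127.0.0.1', 54321)) == 0)
s.close()
print('port 54321 is %s' % ('in use' if busy else 'free'))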



On 2016-05-13 09:37 AM, Gervais de Montbrun wrote:

Hi Charles,

I think the problem I am having is due to the setup failing and not 
something in vdsm configs as I have never gotten this server to start 
up properly and the BRIDGE ethernet interface + ovirt routes are not 
setup.


I put the logs here: 
https://www.dropbox.com/sh/5ugyykqh1lgru9l/AACXxRYWr3tgd0WbBVFW5twHa?dl=0


hosted-engine--deploy-logs.zip  # Logs from when I tried to deploy and it failed

vdsm.tar.gz  # /var/log/vdsm

Output from running vdsm from the command line:

[root@cultivar2 log]# su -s /bin/bash vdsm
[vdsm@cultivar2 log]$ python /usr/share/vdsm/vdsm
(PID: 6521) I am the actual vdsm 4.17.26-1.el7
cultivar2.grove.silverorange.com (3.10.0-327.el7.x86_64)
VDSM will run with cpu affinity: frozenset([1])
/usr/bin/taskset --all-tasks --pid --cpu-list 1 6521 (cwd None)
SUCCESS:  = '';  = 0
Starting scheduler vdsm.Scheduler
started
Run and protect:
registerDomainStateChangeCallback(callbackFunc=)
Run and protect: registerDomainStateChangeCallback, Return
response: None
Trying to connect to Super Vdsm
Preparing MOM interface
Using named unix socket /var/run/vdsm/mom-vdsm.sock
Unregistering all secrests
trying to connect libvirt
recovery: started
Setting channels' timeout to 30 seconds.
Starting VM channels listener thread.
Listening at 0.0.0.0:54321 
Adding detector 
recovery: completed in 0s
Adding detector 
Starting executor
Starting worker jsonrpc.Executor/0
Worker started
Starting worker jsonrpc.Executor/1
Worker started
Starting worker jsonrpc.Executor/2
Worker started
Starting worker jsonrpc.Executor/3
Worker started
Starting worker jsonrpc.Executor/4
Worker started
Starting worker jsonrpc.Executor/5
Worker started
Starting worker jsonrpc.Executor/6
Worker started
Starting worker jsonrpc.Executor/7
Worker started
XMLRPC server running
Starting executor
Starting worker periodic/0
Worker started
Starting worker periodic/1
Worker started
Starting worker periodic/2
Worker started
Starting worker periodic/3
Worker started
trying to connect libvirt
Panic: Connect to supervdsm service failed: [Errno 2] No such file
or directory
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsm.py", line 78, in _connect
utils.retry(self._manager.connect, Exception, timeout=60, tries=3)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 959,
in retry
return func()
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line
500, in connect
conn = Client(self._address, authkey=self._authkey)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line
173, in Client
c = SocketClient(address)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line
308, 

Re: [ovirt-users] Changing of master storage domain

2016-05-13 Thread Nicolas Ecarnot

On 13/05/2016 12:06, Nir Soffer wrote:

Nir,

Would there be any benefit, and thus any plan, to add a feature to be able to
select the master domain, the same way we can do it with the SPM?


I don't know about any benefit.


There could be cases where we would want to free a storage domain from being
master, but without putting it into maintenance.


Like?


Several times, I had to do maintenance actions on the SAN side which 
were neither SUPPOSED to break anything nor be visible to oVirt, but in 
case of failure, I would have been glad to choose which storage domain 
to preserve at all costs, without putting any SD into maintenance.


These are often external actions not driven by oVirt, but with 
oVirt as a potential side-effect victim (other examples relate to 
network modifications).




We are working now on removing the master domain (part of spm removal). It is
unlikely that we will add any feature related to the master domain.


Well, I couldn't hope for more; that will be perfect.

4.x ?

Thank you Nir.

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding another host to my cluster

2016-05-13 Thread Gervais de Montbrun
Hi Charles,

I think the problem I am having is due to the setup failing, not to
something in the vdsm configs, as I have never gotten this server to start up
properly and the BRIDGE ethernet interface + oVirt routes are not set up.

I put the logs here:
https://www.dropbox.com/sh/5ugyykqh1lgru9l/AACXxRYWr3tgd0WbBVFW5twHa?dl=0

hosted-engine--deploy-logs.zip # Logs from when I tried to deploy and it
failed
vdsm.tar.gz # /var/log/vdsm

Output from running vdsm from the command line:

[root@cultivar2 log]# su -s /bin/bash vdsm
[vdsm@cultivar2 log]$ python /usr/share/vdsm/vdsm
(PID: 6521) I am the actual vdsm 4.17.26-1.el7
cultivar2.grove.silverorange.com (3.10.0-327.el7.x86_64)
VDSM will run with cpu affinity: frozenset([1])
/usr/bin/taskset --all-tasks --pid --cpu-list 1 6521 (cwd None)
SUCCESS:  = '';  = 0
Starting scheduler vdsm.Scheduler
started
Run and protect:
registerDomainStateChangeCallback(callbackFunc=)
Run and protect: registerDomainStateChangeCallback, Return response: None
Trying to connect to Super Vdsm
Preparing MOM interface
Using named unix socket /var/run/vdsm/mom-vdsm.sock
Unregistering all secrests
trying to connect libvirt
recovery: started
Setting channels' timeout to 30 seconds.
Starting VM channels listener thread.
Listening at 0.0.0.0:54321
Adding detector 
recovery: completed in 0s
Adding detector 
Starting executor
Starting worker jsonrpc.Executor/0
Worker started
Starting worker jsonrpc.Executor/1
Worker started
Starting worker jsonrpc.Executor/2
Worker started
Starting worker jsonrpc.Executor/3
Worker started
Starting worker jsonrpc.Executor/4
Worker started
Starting worker jsonrpc.Executor/5
Worker started
Starting worker jsonrpc.Executor/6
Worker started
Starting worker jsonrpc.Executor/7
Worker started
XMLRPC server running
Starting executor
Starting worker periodic/0
Worker started
Starting worker periodic/1
Worker started
Starting worker periodic/2
Worker started
Starting worker periodic/3
Worker started
trying to connect libvirt
Panic: Connect to supervdsm service failed: [Errno 2] No such file or
directory
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsm.py", line 78, in _connect
utils.retry(self._manager.connect, Exception, timeout=60, tries=3)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 959, in retry
return func()
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 500, in
connect
conn = Client(self._address, authkey=self._authkey)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line 173, in
Client
c = SocketClient(address)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line 308, in
SocketClient
s.connect(address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
Killed
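
(Since the panic above is about the supervdsm control socket, a quick sanity
check on the host may help; this assumes the default EL7 service name and
socket path, so adjust if yours differ:)

systemctl status supervdsmd
ls -l /var/run/vdsm/svdsm.sock   # the socket vdsm tries to connect to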


Thanks for the help. It's really appreciated.

Cheers,
Gervais

On Fri, May 13, 2016 at 12:55 AM, Charles Tassell 
wrote:

> Hi Gervais,
>
>   Hmm, can you tar up the logfiles (/var/log/vdsm/* on the host you are
> installing on) and put them somewhere to look at?  Also, I found that
> starting VDSM from the command line is useful as it sometimes spits out
> error messages that don't show up in the logs.  I think the command I used
> was:
> su -s /bin/bash vdsm
> python /usr/share/vdsm/vdsm
>
> My problem was that I customized the logging settings in /etc/vdsm/*conf
> to try and tone down the debugging stuff and had a syntax error.
>
>
> On 16-05-12 10:24 PM, Gervais de Montbrun wrote:
>
> Hi Charles,
>
> Thanks for the suggestion.
>
> I cleaned up again using the bash script from the
> recoving-from-failed-install link below, then reinstalled (yum install
> ovirt-hosted-engine-setup).
>
> I enabled NetworkManager and firewalld as you suggested. The install stops
> very early on with an error:
> [ ERROR ] Failed to execute stage 'Programs detection': hosted-engine
> cannot be deployed while NetworkManager is running, please stop and disable
> it before proceeding
>
> I disabled and stopped NetworkManager and tried again. Same result. :(
>
> Any more guesses?
>
> Cheers,
> Gervais
>
>
>
> On May 12, 2016, at 9:08 PM, Charles Tassell  wrote:
>
> Hey Gervais,
>
> Try enabling NetworkManager and firewalld before doing the hosted-engine
> --deploy.  I have run into problems with oVirt trying to perform tasks on
> hosts where firewalld is disabled, so maybe you are running into a similar
> problem.  Also, I think the setup script will disable NetworkManager if it
> needs to.  I know I didn't manually disable it on any of the boxes I
> installed on.
>
> On 16-05-12 04:49 PM, users-requ...@ovirt.org wrote:
>
> Message: 1
> Date: Thu, 12 May 2016 14:22:12 -0300
> From: Gervais de Montbrun 
> To: Wee Sritippho 
> Cc: users 
> Subject: Re: [ovirt-users] Adding another host to my cluster
> Message-ID: 

Re: [ovirt-users] Failed to migrate a VMware through virt-v2v included in oVirt 3.6

2016-05-13 Thread Julián Tete
For oVirt 3.6.5

On Fri, May 13, 2016 at 02:08, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

> On 12 May 2016, at 22:10, Julián Tete  wrote:
>
> Fixed in oVirt 3.6.5 :')
>
> VMware 5.1.0
>
> Thank you People
>
> How to change the state of this to CLOSED ?
>
>
> it already is - for v2v
> https://bugzilla.redhat.com/show_bug.cgi?id=1305526 and for ovirt
> https://bugzilla.redhat.com/show_bug.cgi?id=1292096
>
> Thanks,
> michal
>
>
> 2016-04-01 3:15 GMT-05:00 Michal Skrivanek :
>
>>
>> > On 31 Mar 2016, at 09:50, Nisim Simsolo  wrote:
>> >
>> > Can you please provide us the VDSM log which includes the v2v error
>> message?
>> > According to your information, it's hard to know where the issue begins:
>> > 1. Add external provider: are you using "Any data center" or specific
>> data center in the "data center" dropdown?
>> > There is a bug related to it when using "any data center" (
>> https://bugzilla.redhat.com/show_bug.cgi?id=1293591 - v2v: external
>> provider "test" button failed when using "any data center" value.)
>> > 2. vSphere Client Version 5.1.0: We have tested VMware migration from
>> vSphere 5.5 and above.
>> > 3. can´t parse the Cluster data: there are also some known bugs on this
>> issue,
>> > so either you can upgrade your libvirt and libguestfs build (fixed from
>> libvirt-1.3.1-1.el7 and libguestfs-1.32.2-2.el7) or move the VMware source
>> VM to be directly under the data center.
>> > Bugs related to this issue:
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1263574 - vpx: Include
>> dcpath output in libvirt XML
>>
>> this has been released [1] yesterday. It may take a day or two to be
>> available for CentOS
>>
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1292437 - Backport
>> virt-v2v pull dcpath from libvirt 
>>
>> you can temporarily use this[2] repo to try virt-v2v 1.32, or wait a few
>> more days for a fix in 7.2
>>
>> Thanks,
>> michal
>>
>>
>> [1] https://rhn.redhat.com/errata/RHBA-2016-0555.html
>> [2] https://people.redhat.com/~rjones/libguestfs-RHEL-7.3-preview/
>>
>> >
>> > Thanks
>> >
>> > Nisim Simsolo
>> > QE -Testing Engineer
>> > IRC: nsimsolo
>> > int phone - 8272305
>> > mobile - 054-4779934
>> >
>> > - Original Message -
>> > | From: "Julián Tete" 
>> > | To: "Nisim Simsolo" 
>> > | Cc: users@ovirt.org
>> > | Sent: Wednesday, March 30, 2016 11:13:04 PM
>> > | Subject: Re: [ovirt-users] Failed to migrate a VMware through
>> virt-v2v included in oVirt 3.6
>> > |
>> > | *VMware:*srvesx01 VMware ESXi 5.1.0, 2191751
>> > |
>> > | vSphere Client Version 5.1.0
>> > |
>> > | VMware vCenter Server Version 5.1.0
>> > |
>> > |
>> > | *My oVirt DataCenter:*
>> > |
>> > | Version 3.6.1.3-1.el7.centos
>> > |
>> > | *Cluster:*
>> > |
>> > | Compatibility Version: 3.6
>> > |
>> > | Cluster CPU Type: Intel SandyBridge Family
>> > |
>> > | Emulated Machine: pc-i440fx-rhel7.2.0
>> > |
>> > | *Host:*
>> > |
>> > | SELinux mode: Permissive
>> > |
>> > | OS Version: RHEL - 7 - 2.1511.el7.centos.2.10
>> > |
>> > | Kernel Version: 4.3.0 - 1.el7.elrepo.x86_64
>> > |
>> > | KVM Version: 2.3.0 - 31.el7_2.3.1
>> > |
>> > | LIBVIRT Version: libvirt-1.2.17-13.el7_2.2
>> > |
>> > | VDSM Version: vdsm-4.17.13-0.el7.centos
>> > |
>> > | SPICE Version: 0.12.4 - 15.el7
>> > |
>> > | CEPH Version: librbd1-0.80.7-3.el7
>> > |
>> > | Manufacturer: HP
>> > |
>> > | CPU Model: Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz
>> > |
>> > | CPU Cores per Socket: 8
>> > |
>> > | Family: ProLiant
>> > |
>> > | CPU Type: Intel SandyBridge Family
>> > |
>> > | CPU Threads per Core: 2 (SMT Enabled)
>> > |
>> > | Product Name: ProLiant BL460c Gen8
>> > |
>> > | CPU Sockets: 2
>> > |
>> > | rpm -qa | grep virt-v2v
>> > |
>> > | virt-v2v-1.28.1-1.55.el7.centos.x86_64
>> > |
>> > | rpm -qa | grep libguestfs-winsupport
>> > |
>> > | libguestfs-winsupport-7.2-1.el7.x86_64
>> > |
>> > | #
>> > |
>> > | R//
>> > |
>> > | Yes, I installed virt-v2v in all hosts of the Cluster and in the
>> Engine :P
>> > |
>> > | The OS of the machine to migrate is CentOS 6.6
>> > |
>> > | When I try to use the GUI to add an External Provider (VMware) I get
>> the
>> > | error message:
>> > |
>> > | *Failed to retrieve VMs information from external server
>> > | vpx://192.168.0.129/CNSCDatacenter/srvesx01?no_verify=1
>> > | *
>> > |
>> > | But when I use the command:
>> > |
>> > | virsh -c vpx://
>> > |
>> Administrator@192.168.0.129/CNSCDatacenter/Cluster_CNSC/srvesx01?no_verify=1
>> > |
>> > | I can connect to VMware.
>> > |
>> > | I can´t parse the Cluster data (Cluster_CNSC) in the GUI to add an
>> External
>> > | Provider (VMware)
>> > |
>> > | :/
>> > |
>> > | Thanks in advance
>> > |
>> > |
>> > | 2016-03-30 3:23 GMT-05:00 Nisim Simsolo :
>> > |
>> > | > Hi
>> > | >
>> > | > Can you please 

Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Grundmann, Christian
Hi,
it works now,

Thx a lot

-Original Message-
From: Nir Soffer [mailto:nsof...@redhat.com] 
Sent: Friday, May 13, 2016 14:00
To: Grundmann, Christian 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] volume_utilization_chunk_mb not working

On Fri, May 13, 2016 at 2:53 PM, Grundmann, Christian 
 wrote:
> Oh, i see
>
> Is there a way to reload the config without going to maintenance mode?

You can restart the service.

On the spm this will abort operations like copying disks and the like; this is safe.

Nir

>
> Thx
> Christian
>
> -Original Message-
> From: Nir Soffer [mailto:nsof...@redhat.com]
> Sent: Friday, May 13, 2016 13:52
> To: Grundmann, Christian 
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] volume_utilization_chunk_mb not working
>
> On Fri, May 13, 2016 at 2:32 PM, Grundmann, Christian 
>  wrote:
>> Hi,
>> @ This looks correct if this is in the [irs] section of the configuration.
>> It is
>>
>> cat /etc/vdsm/vdsm.conf
>> [vars]
>> ssl = true
>>
>> [addresses]
>> management_port = 54321
>>
>> [irs ]
>
> This is the "irs " section, not the "irs" section :-)
>
>> volume_utilization_percent=15
>> volume_utilization_chunk_mb=4048
>>
>> do I need a space before and after the = ?
>
> No, it is just more human friendly this way.
>
>> @ We support initialSize argument when creating volumes - used when importing 
>> external vms (v2v), but it is not exposed in the ui.
>> If you think exposing it in the ui is a useful feature, you can file a 
>> bug:
>> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>>
>> I don't need it in the ui, but it would be nice to have it as 
>> tuneable
>
> I checked the code, and we actually do use volume_utilization_chunk_mb when 
> we create the initial disk, so getting the configuration to work will solve 
> your issue.
>
> Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] IsolatedNetworks

2016-05-13 Thread Winfried de Heiden
Hi All,

As described on http://old.ovirt.org/Features/IsolatedNetworks,
Isolated networks in one of proposed features of oVirt 4.0:

"The current host networking api (up to ovirt-engine-3.6) requires a
network to be configured on top of a network interface.

In order to configure a local network on the host the user had to
create a dummy interface to which the network was attached.

The Isolated Networks feature aims to configure a local network on the 
host which isn't connected to any network interface and allows vms 
which are connected to it to communicate with each other.

Isolated Networks will be limited to VM networks only and only MTU 
should be relevant in that network definition for the created isolated 

network."

Hence, I should be able to create a host-only network without an
underlying NIC; sounds nice for my @home Cloud...

However, running oVirt 4.0, it is still complaining that a NIC is missing. I
might have configured something wrong. How do I create an isolated
network? Is there any documentation yet?

Kind regards,

Winny

 ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 2:53 PM, Grundmann, Christian
 wrote:
> Oh, i see
>
> Is there a way to reload the config without going to maintenance mode?

You can restart the service.

On the spm this will abort operations like copying disks and the like; this
is safe.
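
(For reference, on an EL7 host with systemd that is typically, assuming the
standard unit name:)

systemctl restart vdsmd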

Nir

>
> Thx
> Christian
>
> -Original Message-
> From: Nir Soffer [mailto:nsof...@redhat.com]
> Sent: Friday, May 13, 2016 13:52
> To: Grundmann, Christian 
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] volume_utilization_chunk_mb not working
>
> On Fri, May 13, 2016 at 2:32 PM, Grundmann, Christian 
>  wrote:
>> Hi,
>> @ This looks correct if this is in the [irs] section of the configuration.
>> It is
>>
>> cat /etc/vdsm/vdsm.conf
>> [vars]
>> ssl = true
>>
>> [addresses]
>> management_port = 54321
>>
>> [irs ]
>
> This is the "irs " section, not the "irs" section :-)
>
>> volume_utilization_percent=15
>> volume_utilization_chunk_mb=4048
>>
>> do I need a space before and after the = ?
>
> No, it is just more human friendly this way.
>
>> @ We support initialSize argument when creating volumes - used when importing 
>> external vms (v2v), but it is not exposed in the ui.
>> If you think exposing it in the ui is a useful feature, you can file a 
>> bug:
>> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>>
>> I don't need it in the ui, but it would be nice to have it as tuneable
>
> I checked the code, and we actually do use volume_utilization_chunk_mb when 
> we create the initial disk, so getting the configuration to work will solve 
> your issue.
>
> Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Grundmann, Christian
Oh, i see

Is there a way to reload the config without going to maintenance mode?

Thx
Christian

-Original Message-
From: Nir Soffer [mailto:nsof...@redhat.com] 
Sent: Friday, May 13, 2016 13:52
To: Grundmann, Christian 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] volume_utilization_chunk_mb not working

On Fri, May 13, 2016 at 2:32 PM, Grundmann, Christian 
 wrote:
> Hi,
> @ This looks correct if this is in the [irs] section of the configuration.
> It is
>
> cat /etc/vdsm/vdsm.conf
> [vars]
> ssl = true
>
> [addresses]
> management_port = 54321
>
> [irs ]

This is the "irs " section, not the "irs" section :-)

> volume_utilization_percent=15
> volume_utilization_chunk_mb=4048
>
> do I need a space before and after the = ?

No, it is just more human friendly this way.

> @ We support initialSize argument when creating volumes - used when importing 
> external vms (v2v), but it is not exposed in the ui.
> If you think exposing it in the ui is a useful feature, you can file a 
> bug:
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>
> I don't need it in the ui, but it would be nice to have it as tuneable

I checked the code, and we actually do use volume_utilization_chunk_mb when we 
create the initial disk, so getting the configuration to work will solve your 
issue.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 2:32 PM, Grundmann, Christian
 wrote:
> Hi,
> @ This looks correct if this is in the [irs] section of the configuration.
> It is
>
> cat /etc/vdsm/vdsm.conf
> [vars]
> ssl = true
>
> [addresses]
> management_port = 54321
>
> [irs ]

This is the "irs " section, not the "irs" section :-)

> volume_utilization_percent=15
> volume_utilization_chunk_mb=4048
>
> do I need a space before and after the = ?

No, it is just more human friendly this way.
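
(For reference, a corrected sketch of the file quoted above; the section
header must be exactly [irs], with no trailing space inside the brackets:)

[vars]
ssl = true

[addresses]
management_port = 54321

[irs]
volume_utilization_percent = 15
volume_utilization_chunk_mb = 4048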

> @ We support initialSize argument when creating volumes - used when importing 
> external vms (v2v), but it is not exposed in the ui.
> If you think exposing it in the ui is a useful feature, you can file a 
> bug:
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
>
> I don't need it in the ui, but it would be nice to have it as tuneable

I checked the code, and we actually do use volume_utilization_chunk_mb
when we create the initial disk, so getting the configuration to work
will solve your issue.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Grundmann, Christian
Hi,
@ This looks correct if this is in the [irs] section of the configuration.
It is 

cat /etc/vdsm/vdsm.conf
[vars]
ssl = true

[addresses]
management_port = 54321

[irs ]
volume_utilization_percent=15
volume_utilization_chunk_mb=4048

do I need a space before and after the = ?

@ We support initialSize argument when creating volumes - used when importing 
external vms (v2v), but it is not exposed in the ui.
If you think exposing it in the ui is a useful feature, you can file a 
bug:
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

I don't need it in the ui, but it would be nice to have it as tuneable


Thx 
Christian

-Original Message-
From: Nir Soffer [mailto:nsof...@redhat.com] 
Sent: Friday, May 13, 2016 13:24
To: Grundmann, Christian 
Cc: users@ovirt.org
Subject: Re: [ovirt-users] volume_utilization_chunk_mb not working

On Fri, May 13, 2016 at 1:34 PM, Grundmann, Christian 
 wrote:
> Hi
>
> I create a lot of Thin Disk VMs from templates every day, and get 
> „paused due to no storage space error“ and „paused due to unknown 
> storage error“ on a few of them.
>
> Sometimes the VMs are resumed a few seconds later, but sometimes not; 
> in that case I can resume them myself.
>
>
>
> I tried to set
>
> volume_utilization_percent=15
>
> volume_utilization_chunk_mb=4048

This looks correct if this is in the [irs] section of the configuration.

[irs]

# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
# volume_utilization_percent = 50

# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
# volume_utilization_chunk_mb = 1024


To get the correct configuration format, you can do:
python /usr/lib/python2.7/site-packages/vdsm/config.py

>
>
>
> in /etc/vdsm/vdsm.conf on every Host
>
>
>
> But the Initial Size of the Disk is always 1GB and then increments to 
> 2GB

Please share the vdsm.conf file with these settings.

>
>
>
> 4GB would be the maximum Size I need before I destroy the VM. So if 
> the initial Size would be 4GB the pausing shouldn’t happen anymore

The initial volume size is always 1GiB, regardless of these settings.

After creating the 1GiB lv, we extend the lv by volume_utilization_chunk_mb
megabytes.

So your setting will result in a 1GiB lv, and after you write about 150MiB, it 
will be extended to 5GiB.

So I guess that you want to use
volume_utilization_chunk_mb = 3072

Which will give you 4GiB lv after the first extend.

We support initialSize argument when creating volumes - used when importing 
external vms (v2v), but it is not exposed in the ui.

If you think exposing it in the ui is a useful feature, you can file a 
bug:
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 1:34 PM, Grundmann, Christian
 wrote:
> 4GB would be the maximum Size I need before I destroy the VM. So if the
> initial Size would be 4GB the pausing shouldn’t happen anymore

Maybe you want to use preallocated disks instead?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 1:34 PM, Grundmann, Christian
 wrote:
> Hi
>
> I create a lot of Thin Disk VMs from templates every day, and get „paused
> due to no storage space error“ and „paused due to unknown storage error“ on
> a few of them.
>
> Sometimes the VMs are resumed a few seconds later, but sometimes not; in
> that case I can resume them myself.
>
>
>
> I tried to set
>
> volume_utilization_percent=15
>
> volume_utilization_chunk_mb=4048

This looks correct if this is in the [irs] section of the configuration.

[irs]

# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
# volume_utilization_percent = 50

# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
# volume_utilization_chunk_mb = 1024


To get the correct configuration format, you can do:
python /usr/lib/python2.7/site-packages/vdsm/config.py

>
>
>
> in /etc/vdsm/vdsm.conf on every Host
>
>
>
> But the Initial Size of the Disk is always 1GB and then increments to 2GB

Please share the vdsm.conf file with these settings.

>
>
>
> 4GB would be the maximum Size I need before I destroy the VM. So if the
> initial Size would be 4GB the pausing shouldn’t happen anymore

The initial volume size is always 1GiB, regardless of these settings.

After creating the 1GiB lv, we extend the lv by volume_utilization_chunk_mb
megabytes.

So your setting will result in a 1GiB lv, and after you write about
150MiB, it will be extended to 5GiB.

So I guess that you want to use
volume_utilization_chunk_mb = 3072

Which will give you 4GiB lv after the first extend.
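
(A back-of-the-envelope sketch of that math in Python, assuming the rule
described above: an extension is triggered once allocation crosses
volume_utilization_percent of the current lv size, and each extension adds
volume_utilization_chunk_mb; an illustration of the numbers, not vdsm's
actual code:)

def first_extend(initial_mb=1024, percent=15, chunk_mb=4048):
    # Allocation crossing percent% of the lv size triggers the extension.
    trigger_mb = initial_mb * percent / 100.0  # ~150 MiB written
    # The lv then grows by one chunk.
    new_size_mb = initial_mb + chunk_mb
    return trigger_mb, new_size_mb

print(first_extend())               # chunk_mb=4048 -> 5072 MiB, ~5 GiB lv
print(first_extend(chunk_mb=3072))  # chunk_mb=3072 -> 4096 MiB, exactly 4 GiB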

We support initialSize argument when creating volumes - used when importing
external vms (v2v), but it is not exposed in the ui.

If you think exposing it in the ui is a useful feature, you can
file a bug:
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trying to understand the mixed storage configuration options.

2016-05-13 Thread Nir Soffer
On Thu, May 12, 2016 at 5:50 AM, Jason Ziemba  wrote:
> I'm fairly new to oVirt (coming from ProxMox) and trying to wrap my head
> around the mixed (local/NAS) data domain options that are available.
>
> I'm trying to configure a set of systems to have local storage, as their
> primary data storage domain, though also want to have the ability to have a
> NAS based data domain for guests that are 'mobile' between hosts.  Currently
> I'm able to do one or the other, but not both (so it seems).
>
> When I put all of the systems in to a single cluster (or single data-center)
> I'm able to have the shared data domain, though have only found the ability
> to configure one system for local storage (not all of them).   When I split
> them out in to separate data centers, they all have their local data domain
> working, but only a single dc is able to access the shared data domain at a
> time.
>
> Am I missing something along the way (probably fairly obvious) that does
> exactly what I'm outlining, or is this functionality not available by
> design?

This is not available by design.

Can you explain the use case, why do you need to use local storage as
your primary data storage?

How are you going to migrate your vms if the primary storage is local?

How are you going to start the vms on another host after host failures,
if the vm storage is on the failed host, and the last state of that disk
is not available or even lost?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trying to understand the mixed storage configuration options.

2016-05-13 Thread Nir Soffer
On Thu, May 12, 2016 at 10:50 AM, Gianluca Cecchi
 wrote:
> On Thu, May 12, 2016 at 4:50 AM, Jason Ziemba  wrote:
>>
>> I'm fairly new to oVirt (coming from ProxMox) and trying to wrap my head
>> around the mixed (local/NAS) data domain options that are available.
>>
>> I'm trying to configure a set of systems to have local storage, as their
>> primary data storage domain, though also want to have the ability to have a
>> NAS based data domain for guests that are 'mobile' between hosts.  Currently
>> I'm able to do one or the other, but not both (so it seems).
>>
>> When I put all of the systems in to a single cluster (or single
>> data-center) I'm able to have the shared data domain, though have only found
>> the ability to configure one system for local storage (not all of them).
>> When I split them out in to separate data centers, they all have their local
>> data domain working, but only a single dc is able to access the shared data
>> domain at a time.
>>
>> Am I missing something along the way (probably fairly obvious) that does
>> exactly what I'm outlining, or is this functionality not available by
>> design?
>>
>> Any assistance/guidance is greatly appreciated.
>>
>> ___
>>
>>
>
> Already asked about one month ago. See thread here:
> http://lists.ovirt.org/pipermail/users/2016-April/038911.html
>
> The last comment by Neil was to provide reasons for this need, as probably
> it is not on the roadmap.
> But 4.0 version is only at alpha stage so we can influence it, if we push.

No chance for 4.0.

It is unlikely that we will work on it before removing the spm and the
master domain. Without spm and master domain, this change should be
easier.

> Actually this was already asked in 2013, and Itamar wrote at the time that the
> team was working on eliminating this limit... I don't know what exactly the
> design limitation was from a technical point of view. See the thread with the
> question from (another one... ;-) Jason  here:
> http://lists.ovirt.org/pipermail/users/2013-July/015400.html
>
> and Itamar final comment here:
> http://lists.ovirt.org/pipermail/users/2013-July/015413.html

Having access to all storage domains from all hosts in a dc is the basic
design assumption. Having some domains which are accessible only from
some hosts is a major change.

> I'm in favor of having the chance to configure inter-mixed storage, local and
> not, especially (but not only) for testing purposes, where you have plenty of
> storage you cannot dedicate to oVirt VMs now.
> The workaround is to have it seen as NFS storage, but that makes sense only
> for a one-host configuration in my opinion, and it overloads the network when
> it is not necessary.
>
> Can we vote for it? Do we need to open an RFE?

I think we have one, and you can vote on the bug (I don't have the bug
number).

> BTW: I think inspiration should also come from what the leaders are doing
> (in the positive sense) and from what's new for vSphere 6 here:
> https://www.vmware.com/files/pdf/vsphere/VMW-WP-vSPHR-Whats-New-6-0-PLTFRM.pdf
>
> you find it explicitly inside the "VMware vSphere Fault Tolerance
> Enhancements" section, so at a critical infrastructure point:
>
> "
> There have also been enhancements in how vSphere FT handles storage. It now
> creates a complete copy of the entire virtual machine, resulting in total
> protection for virtual machine storage in addition to compute and memory. It
> also increases the options for storage by enabling the files of the primary
> and secondary virtual machines to be stored on shared as well as local
> storage. This results in increased protection, reduced risk, and improved
> flexibility.
> "

Can you explain what they are doing and how it can benefit ovirt?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] volume_utilization_chunk_mb not working

2016-05-13 Thread Grundmann, Christian
Hi
I create a lot of Thin Disk VMs from templates every day, and get "paused due 
to no storage space error" and "paused due to unknown storage error" on a few 
of them.
Sometimes the VMs are resumed a few seconds later, but sometimes not; in that 
case I can resume them myself.

I tried to set
volume_utilization_percent=15
volume_utilization_chunk_mb=4048

in /etc/vdsm/vdsm.conf on every Host

But the Initial Size of the Disk is always 1GB and then increments to 2GB

4GB would be the maximum Size I need before I destroy the VM. So if the initial 
Size would be 4GB the pausing shouldn't happen anymore

Thx a lot for your help
Christian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing of master storage domain

2016-05-13 Thread Nir Soffer
On Fri, May 13, 2016 at 10:35 AM, Nicolas Ecarnot  wrote:
> On 13/05/2016 09:22, Nir Soffer wrote:
>>
>> On Thu, May 12, 2016 at 6:24 PM, Roman A. Chukov
>>  wrote:
>>>
>>> Hello there,
>>>
>>> I have a question about the algorithm of changing of master storage
>>> domain.
>>> I have a fresh installation of oVirt 3.6 on 2 hardware nodes, using a
>>> number
>>> of NFS shares as storage domains, and one of them was a master. When I
>>> was
>>> adding the second node into cluster, master storage domain had been
>>> changed
>>> by oVirt. Actually it didn't bring any harmful consequences at that point
>>> but it was unexpected for me. So, could anybody please explain such
>>> behavior
>>> of oVirt and also explain whether I could change master storage domain
>>> back
>>> to the previous?
>>
>>
>> You cannot select the master domain, the master is selected
>> automatically by ovirt.
>>
>> You can move the master from the current domain by putting the domain
>> into maintenance; the system will move the master role to another domain.
>>
>> Nir
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> Nir,
>
> Would there be any benefit, and thus any plan, to add a feature to be able to
> select the master domain, the same way we can do it with the SPM?

I don't know about any benefit.

> There could be cases where we would want to free a storage domain from being
> master, but without putting it into maintenance.

Like?

We are working now on removing the master domain (part of spm removal). It is
unlikely that we will add any feature related to the master domain.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0 hosted-engine deploy exits without messages or logs

2016-05-13 Thread Gianluca Cecchi
On Thu, May 12, 2016 at 3:32 PM, Sandro Bonazzola wrote:

>
>
>
> I strongly suggest to use master. In particular, I strongly suggest to use
> oVirt Node Next ISO for hosts and the engine appliance for the deployment.
> Installing Hosted Engine using Cockpit Web UI has been a nice experience
> yesterday.
> https://twitter.com/SandroBonazzola/status/730426730515673092
>
> Be sure to install also the new dashboard package once your Hosted Engine
> is up, that's another nice thing to see in action.
>
>
It seems that with master from yesterday, attempting a self-hosted engine on
NFSv3 with the hypervisor on CentOS 7.2 gives:

[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 101] Network
is unreachable
[ INFO  ] Stage: Clean up

What I see as relevant in log is

  2016-05-12 16:47:36 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace storage_backends
.create_volume:270 Connecting to VDSM
2016-05-12 16:47:36 DEBUG otopi.context context._executeMethod:142 method
exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in
_executeMethod
method['method']()
  File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/sanlock/lo
ckspace.py", line 143, in _misc
lockspace + '.metadata': md_size,
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 369, in create
service_size=size)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 279, in create_volume
volUUID=volume_uuid
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
line 245, in _get_volume_path
volUUID
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
  File "/usr/lib64/python2.7/httplib.py", line 975, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 835, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 797, in send
self.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/m2cutils.py", line 203, in
connect
sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 101] Network is unreachable
2016-05-12 16:47:36 ERROR otopi.context context._executeMethod:151 Failed
to execute stage 'Misc configuration': [Errno 101] Network is unreachable
2016-05-12 16:47:36 DEBUG otopi.transaction transaction.abort:119 aborting
'File transaction for
'/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2016-05-12 16:47:36 DEBUG otopi.transaction transaction.abort:119 aborting
'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2016-05-12 16:47:36 DEBUG otopi.transaction transaction.abort:119 aborting
'File transaction for '/etc/sysconfig/iptables''
2016-05-12 16:47:36 DEBUG otopi.context context.dumpEnvironment:760
ENVIRONMENT DUMP - BEGIN
2016-05-12 16:47:36 DEBUG otopi.context context.dumpEnvironment:770 ENV
BASE/error=bool:'True'
2016-05-12 16:47:36 DEBUG otopi.context context.dumpEnvironment:770 ENV
BASE/exceptionInfo=list:'[(, error(101, 'Network is
unreachable'), )]'
2016-05-12 16:47:36 DEBUG otopi.context context.dumpEnvironment:774
ENVIRONMENT DUMP - END
2016-05-12 16:47:36 INFO otopi.context context.runSequence:687 Stage: Clean
up
2016-05-12 16:47:36 DEBUG otopi.context context.runSequence:691 STAGE
cleanup
2016-05-12 16:47:36 DEBUG otopi.context context._executeMethod:128 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
2016-05-12 16:47:36 DEBUG otopi.context context._executeMethod:128 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
2016-05-12 16:47:36 DEBUG otopi.context context._executeMethod:128 Stage
cleanup METHOD
otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup
2016-05-12 16:47:36 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storage
storage._destroyStoragePool:931 _destroyStoragePool
2016-05-12 16:47:42 DEBUG

Re: [ovirt-users] vms in paused state

2016-05-13 Thread Milan Zamazal
We've found out that if libvirtd gets restarted, then VMs with a disabled
memory balloon device are wrongly reported as being in the paused state.
It's a bug and we're working on a fix.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing of master storage domain

2016-05-13 Thread Nicolas Ecarnot

On 13/05/2016 09:22, Nir Soffer wrote:

On Thu, May 12, 2016 at 6:24 PM, Roman A. Chukov
 wrote:

Hello there,

I have a question about the algorithm of changing of master storage domain.
I have a fresh installation of oVirt 3.6 on 2 hardware nodes, using a number
of NFS shares as storage domains, and one of them was a master. When I was
adding the second node into cluster, master storage domain had been changed
by oVirt. Actually it didn't bring any harmful consequences at that point
but it was unexpected for me. So, could anybody please explain such behavior
of oVirt and also explain whether I could change master storage domain back
to the previous?


You cannot select the master domain, the master is selected
automatically by ovirt.

You can move the master from the current domain by putting the domain
into maintenance; the system will move the master role to another domain.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Nir,

Would there be any benefit, and thus any plan, to add a feature to be able 
to select the master domain, the same way we can do it with the SPM?


There could be cases where we would want to free a storage domain from 
being master, but without putting it into maintenance.


--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Changing of master storage domain

2016-05-13 Thread Nir Soffer
On Thu, May 12, 2016 at 6:24 PM, Roman A. Chukov
 wrote:
> Hello there,
>
> I have a question about the algorithm of changing of master storage domain.
> I have a fresh installation of oVirt 3.6 on 2 hardware nodes, using a number
> of NFS shares as storage domains, and one of them was a master. When I was
> adding the second node into cluster, master storage domain had been changed
> by oVirt. Actually it didn't bring any harmful consequences at that point
> but it was unexpected for me. So, could anybody please explain such behavior
> of oVirt and also explain whether I could change master storage domain back
> to the previous?

You cannot select the master domain, the master is selected
automatically by ovirt.

You can move the master from the current domain by putting the domain
into maintenance; the system will move the master role to another domain.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.0 alpha web login error

2016-05-13 Thread Martin Perina
Hi,

there is no way to disable SSO in oVirt 4.0. And IMO there's no need to
do that, because the change from the previous login mechanism (separate logins
for webadmin, userportal and restapi) should be fully transparent. The only
thing necessary is to set a proper FQDN during engine-setup and to always use
this FQDN when accessing the engine.

But please bear in mind, that you are using alpha version so there may be
bugs ...

If ovirt-engine-rename doesn't work for you, then you may be hit by
https://bugzilla.redhat.com/show_bug.cgi?id=1330168, which is fixed in the
beta release ...
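
(For reference, the rename tool is usually invoked as below; the exact path
may vary by version:)

/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename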

Thanks

Martin Perina


On Fri, May 13, 2016 at 3:20 AM, like...@cs2c.com.cn 
wrote:

> Hi, Martin
>
> I saw your comment on bug 1325746(
> https://bugzilla.redhat.com/show_bug.cgi?id=1325746).
> Is there any way to disable SSO login for the web portal, making it just like
> the old versions of oVirt (3.4, 3.5, 3.6 ...)?
> Or is there any documentation about how to use the SSO method to access the
> oVirt web portal?
>
> Thanks
>
> --
> like...@cs2c.com.cn
>
>
> *From:* like...@cs2c.com.cn
> *Date:* 2016-05-12 16:31
> *To:* users 
> *Subject:* [ovirt-users] ovirt 4.0 alpha web login error
> Hi!
>
> I installed ovirt 4.0 alpha (4.0.0-0.0.master.20160404161620.git4ffd5a4) on
> RHEL7.2.
> During engine-setup, I used ovirtManager.com as the host name (I added the
> ip/hostname to /etc/hosts).
> After engine-setup succeeded, I tried to access the webadmin portal from
> another computer.
> On the client computer I use the Chrome browser, and the url is
> https://ovirtManager.com (ovirtManager.com can be resolved in /etc/hosts).
> But I got the following error when I accessed this url:
> The client is not authorized to request an authorization. It's required to
> access the system using FQDN.
>
> So, what should I do to access the webadmin portal?
>
> Many thanks for any advise
>
>
> --
> like...@cs2c.com.cn
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users