Re: [ovirt-users] More 4.1 Networking Questions

2017-04-11 Thread Charles Tassell
And bingo!  I totally spaced on the fact that the MAC pool would be the 
same on the two clusters.  I changed the pool range (Configure->MAC 
Address Pools for anyone interested) on the 4.1 cluster and things are 
looking good again. Thank you, this was driving me nuts!
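
If anyone wants to script that check instead of clicking through Configure -> MAC Address Pools, recent 4.x engines also expose the pools and their ranges through the REST API. A rough sketch, with the engine host and credentials as placeholders:

# list the MAC address pools and their ranges
curl -s -k -u 'admin@internal:PASSWORD' \
     -H 'Accept: application/xml' \
     'https://engine.example.com/ovirt-engine/api/macpools'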


On 2017-04-11 05:22 AM, users-requ...@ovirt.org wrote:

Message: 1
Date: Mon, 10 Apr 2017 21:07:44 -0400
From: Jeff Bailey 
To: users@ovirt.org
Subject: Re: [ovirt-users] More 4.1 Networking Questions
Message-ID: <31d1821c-9172-5a62-4f7d-b43fa4608...@cs.kent.edu>
Content-Type: text/plain; charset=windows-1252; format=flowed

On 4/10/2017 6:59 AM, Charles Tassell wrote:


Ah, spoke too soon.  30 seconds later the network went down with IPv6
disabled.  So it does appear to be a host forwarding problem, not a VM
problem.  I have an oVirt 4.0 cluster on the same network that doesn't
have these issues, so it must be a configuration issue somewhere.
Here is a dump of my ip config on the host:


The same L2 network?  Using the same range of MAC addresses?

[snip]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Compiling oVirt for Debian.

2017-04-11 Thread Leni Kadali Mutungi
Hello all.

I am trying to install oVirt on Debian. So far I've managed to install
a good chunk of the dependencies. However, I haven't been able to
install otopi, ovirt-host-deploy, ovirt-js-dependencies, or
ovirt-setup-lib, since Debian has no packages for these. With the
exception of otopi (whose build instructions I was unable to make
sense of on GitHub), everything else has to be obtained from
Fedora/EPEL repos.

I had thought of using alien to convert from rpm to deb, but
apparently the recommended thing is to compile from source, since
using alien can lead to a complex version of dependency hell.

I can download the WildFly source, though again the recommended
procedure is to install the ovirt-engine-wildfly and
ovirt-wildfly-overlay packages.

Any assistance in tracking down the source code of the above packages
so that I can install them is appreciated.
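
For what it's worth, the upstream source for otopi, ovirt-host-deploy and
ovirt-setup-lib lives under https://github.com/oVirt/ (e.g.
https://github.com/oVirt/otopi), and the projects appear to use the usual
autotools flow. A rough, untested sketch of a from-source build on Debian,
assuming autoconf, automake, gettext and python are already installed:

git clone https://github.com/oVirt/otopi.git
cd otopi
./autogen.sh --prefix=/usr/local    # regenerates the configure script (run ./configure --prefix=/usr/local afterwards if autogen does not)
make
sudo make install                   # or use checkinstall to produce a .deb instead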

-- 
- Warm regards
Leni Kadali Mutungi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine setup shooting dirty pool

2017-04-11 Thread Jamie Lawrence
Or at least, refusing to mount a dirty pool.

I have 4.1 set up, configured and functional, currently wired up with two VM 
hosts and three Gluster hosts. It is configured with a (temporary) NFS data 
storage domain, with the end-goal being two data domains on Gluster; one for 
the hosted engine, one for other VMs.

The issue is that `hosted-engine` sees any gluster volumes offered as dirty. (I 
have been creating them via the command line  right before attempting the 
hosted-engine migration; there is nothing in them at that stage.)  I *think* 
what is happening is that ovirt-engine notices a newly created volume and has 
its way with the volume (visible in the GUI; the volume appears in the list), 
and the hosted-engine installer becomes upset about that. What I don’t know is 
what to do about it. Relevant log lines below. The installer almost sounds like 
it is asking me to remove the UUID-directory and whatnot, but I’m pretty sure 
that’s just going to leave me with two problems instead of fixing the first 
one. I’ve considered attempting to wire this together in the DB, which also 
seems like a great way to break things. I’ve even thought of using a Gluster 
installation that Ovirt knows nothing about, mainly as an experiment to see if 
it would even work, but decided it doesn’t matter, because I can’t deploy in 
that state anyway and it doesn’t actually get me any closer to getting this 
working.

I noticed several bugs in the tracker seemingly related, but the bulk of those 
were for past versions and I saw nothing that seemed actionable from my end in 
the others.

So, can anyone spare a clue as to what is going wrong, and what to do about 
that?

-j

- - - - ovirt-hosted-engine-setup.log - - - - 

2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:627 
createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, 
name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, 
masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 
'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, 
leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storagePoolConnection:717 connectStoragePool

[ovirt-users] Hosted engine setup shooting dirty pool

2017-04-11 Thread Jamie Lawrence
Or at least, refusing to mount a dirty pool. I’m having trouble getting the 
hosted engine installed.

I have 4.1 set up, configured and functional, currently wired up with two VM 
hosts and three Gluster hosts. It is configured with a (temporary) NFS data 
storage domain, with the end-goal being two data domains on Gluster; one for 
the hosted engine, one for other VMs.

The issue is that `hosted-engine` sees any gluster volumes offered as dirty. (I 
have been creating them via the command line  right before attempting the 
hosted-engine migration; there is nothing in them at that stage.)  I *think* 
what is happening is that ovirt-engine notices a newly created volume and has 
its way with the volume (visible in the GUI; the volume appears in the list), 
and the hosted-engine installer becomes upset about that. What I don’t know is 
what to do about that. Relevant log lines below. The installer almost sounds 
like it is asking me to remove the UUID-directory and whatnot, but I’m pretty 
sure that’s just going to leave me with two problems instead of fixing the 
first one. I’ve considered attempting to wire this together in the DB, which 
also seems like a great way to break things. I’ve even thought of using a 
Gluster cluster that Ovirt knows nothing about, mainly as an experiment to see 
if it would even work, but decided it doesn’t especially matter, as 
architecturally that would not work for production in our environment and I 
just need to get this up.

So, can anyone spare a clue as to what is going wrong, and what to do about 
that?

-j
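
For anyone hitting the same thing: before re-running the setup it may be worth
checking, on the host, what state the volume is actually in. A rough sketch
(the volume name is a placeholder):

gluster volume info <volume-name>      # should be started, with nothing in it beyond gluster's own metadata
gluster volume status <volume-name>
mount | grep glusterfs                 # is the volume already mounted by something else?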

- - - - ovirt-hosted-engine-setup.log - - - - 

2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 
'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, 
u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': 
u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:627 
createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, 
name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, 
masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 
'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, 
leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage 
storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage 
storage._storagePoolConnection:717 connectStoragePool
2017-04-11 16:15:29 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
  

Re: [ovirt-users] Resize iSCSI lun f storage domain

2017-04-11 Thread Gianluca Cecchi
On Tue, Apr 11, 2017 at 11:31 PM, Fred Rolland  wrote:

>
>
> On Tue, Apr 11, 2017 at 1:51 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> my iSCSI storage domain in 4.1.1 is composed by 1 lun of size 1TB and
>> there are two hosts accessing it.
>> I have to extend this LUN to 4Tb.
>> What is supposed to be done after having completed the resize operation
>> at storage array level?
>>
> It is supposed to be done once the LUN has been extended on the storage
> server.
>
>> I read here the workflow:
>> http://www.ovirt.org/develop/release-management/features/sto
>> rage/lun-resize/
>>
>> Does this imply that I have to do nothing at host side?
>>
>
> Yes, nothing needs to be done on the host side.
> Did you have any issue?
>

No, I haven't resized yet. I asked to get confirmation before proceeding.


>
>> Normally inside a physical server connecting to iSCSI volumes I run the
>> command "iscsiadm  --rescan" and this one below is all my workflow (on
>> CentOS 5.9, quite similar in CentOS 6.x, except the multipath refresh).
>>
>> Is it oVirt itself that takes care of the iscsiadm --rescan command?
>>
>
> The rescan will be done by VDSM.
>

I confirm all went well, as described in the link above, with no action
needed on the host side.
Storage domain resized, multipath OK on both hosts, and the iSCSI
connections stayed the same as before the resize operation.
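
For anyone repeating this, a quick way to double-check from each host that the
domain really picked up the new size, using the same tools as in the manual
workflow quoted below (the WWID/alias is a placeholder):

multipath -ll <wwid-or-alias>      # the multipath map backing the domain should report the new size
pvs -o pv_name,vg_name,pv_size     # the PV backing the domain should show it too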
Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Resize iSCSI lun f storage domain

2017-04-11 Thread Fred Rolland
On Tue, Apr 11, 2017 at 1:51 PM, Gianluca Cecchi 
wrote:

> Hello,
> my iSCSI storage domain in 4.1.1 is composed by 1 lun of size 1TB and
> there are two hosts accessing it.
> I have to extend this LUN to 4Tb.
> What is supposed to be done after having completed the resize operation at
> storage array level?
>
It is supposed to be done once the LUN has been extended on the storage
server.

> I read here the workflow:
> http://www.ovirt.org/develop/release-management/features/
> storage/lun-resize/
>
> Does this imply that I have to do nothing at host side?
>

Yes, nothing needs to be done on the host side.
Did you have any issue?

>
> Normally inside a physical server connecting to iSCSI volumes I run the
> command "iscsiadm  --rescan" and this one below is all my workflow (on
> CentOS 5.9, quite similar in CentOS 6.x, except the multipath refresh).
>
> Is it oVirt itself that takes care of the iscsiadm --rescan command?
>

The rescan will be done by VDSM.

>
>
> BTW: the "edit domain" screenshot should become a "manage domain"
> screenshot now
>
Thanks, I will update the web site

>
> Thanks,
> Gianluca
>
> I want to extend my filesystem from 250Gb to 600Gb
>
> - current layout of FS, PV, multipath device
>
> [g.cecchi@dbatest ~]$ df -h
> FilesystemSize  Used Avail Use% Mounted on
> ...
> /dev/mapper/VG_ORASAVE-LV_ORASAVE
>   247G   66G  178G  28% /orasave
>
>
> [root@dbatest ~]# pvs /dev/mpath/mpsave
>   PVVG Fmt  Attr PSize   PFree
>   /dev/mpath/mpsave VG_ORASAVE lvm2 a--  250.00G 0
>
> [root@dbatest ~]# multipath -l mpsave
> mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
> [size=250G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
>  \_ 12:0:0:0 sdc 8:32  [active][undef]
>  \_ 11:0:0:0 sdd 8:48  [active][undef]
>
>
> - current configured iSCSI ifaces (ieth0 using eth0 and ieth1 using eth1)
> [root@dbatest ~]# iscsiadm -m node -P 1
>
> ...
>
> Target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-
> 66f00106af950fed-dbatest-save
> Portal: 10.10.100.20:3260,1
> Iface Name: ieth0
> Iface Name: ieth1
>
> - rescan of iSCSI
> [root@dbatest ~]# for i in 0 1
> > do
> > iscsiadm -m node --targetname=iqn.2001-05.com.
> equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save -I ieth$i
> --rescan
> > done
> Rescanning session [sid: 3, target: iqn.2001-05.com.equallogic:0-
> 8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
> Rescanning session [sid: 4, target: iqn.2001-05.com.equallogic:0-
> 8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
>
> In messages I get:
> Apr 24 16:07:17 dbatest kernel: SCSI device sdd: 1258291200 512-byte hdwr
> sectors (644245 MB)
> Apr 24 16:07:17 dbatest kernel: sdd: Write Protect is off
> Apr 24 16:07:17 dbatest kernel: SCSI device sdd: drive cache: write through
> Apr 24 16:07:17 dbatest kernel: sdd: detected capacity change from
> 268440698880 to 644245094400
> Apr 24 16:07:17 dbatest kernel: SCSI device sdc: 1258291200 512-byte hdwr
> sectors (644245 MB)
> Apr 24 16:07:17 dbatest kernel: sdc: Write Protect is off
> Apr 24 16:07:17 dbatest kernel: SCSI device sdc: drive cache: write through
> Apr 24 16:07:17 dbatest kernel: sdc: detected capacity change from
> 268440698880 to 644245094400
>
> - Dry run of multipath refresh
> [root@dbatest ~]# multipath -v2 -d
> : mpsave (36090a0585094695aed0f95af0601f066)  EQLOGIC,100E-00
> [size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
> \_ round-robin 0 [prio=1][undef]
>  \_ 12:0:0:0 sdc 8:32  [active][ready]
>  \_ 11:0:0:0 sdd 8:48  [active][ready]
>
> - execute the refresh of multipath
> [root@dbatest ~]# multipath -v2
> : mpsave (36090a0585094695aed0f95af0601f066)  EQLOGIC,100E-00
> [size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
> \_ round-robin 0 [prio=1][undef]
>  \_ 12:0:0:0 sdc 8:32  [active][ready]
>  \_ 11:0:0:0 sdd 8:48  [active][ready]
>
> - verify new size:
> [root@dbatest ~]# multipath -l mpsave
> mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
> [size=600G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
>  \_ 12:0:0:0 sdc 8:32  [active][undef]
>  \_ 11:0:0:0 sdd 8:48  [active][undef]
>
> - pvresize
> [root@dbatest ~]# pvresize /dev/mapper/mpsave
>   Physical volume "/dev/mpath/mpsave" changed
>   1 physical volume(s) resized / 0 physical volume(s) not resized
>
> - verify the newly added 350Gb size in PV and in VG
> [root@dbatest ~]# pvs  /dev/mpath/mpsave
>   PVVG Fmt  Attr PSize   PFree
>   /dev/mpath/mpsave VG_ORASAVE lvm2 a--  600.00G 349.99G
>
> [root@dbatest ~]# vgs VG_ORASAVE
>   VG #PV #LV #SN Attr   VSize   VFree
>   VG_ORASAVE   1   1   0 wz--n- 600.00G 349.99G
>
> - lvextend of the existing LV
> [root@dbatest ~]# lvextend -l+100%FREE /dev/VG_ORASAVE/LV_ORASAVE
>   Extending logical volume LV_ORASAVE to 600.00 GB
>   Logical 

Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-11 Thread knarra

On 04/11/2017 11:28 PM, Precht, Andrew wrote:

Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, 
there is a /var/log/glusterfs/glustershd.log.
Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log 
exists? If yes, can you check whether there are any errors in that file?


What happens if I follow the four steps outlined here to remove the 
volume from the node _BUT_ I do have another volume present in the 
cluster? It too is a test volume. Neither one has any data on it, 
so data loss is not an issue.
Running those four steps will remove the volume from your cluster. If 
the volumes you have are test volumes, you could just follow the 
steps outlined to delete them (since you are not able to delete them 
from the UI) and bring the cluster back into a normal state.



*From:* knarra 
*Sent:* Tuesday, April 11, 2017 10:32:27 AM
*To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:

Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:


Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test
gluster volume. The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove, the dialog
box prompting to confirm the deletion pops up and after I click
OK, the dialog box changes to show a little spinning wheel and
then it disappears. In the end the volume is still there.

With the latest version of glusterfs & oVirt we do not see any issue 
with deleting a volume. Can you please check the 
/var/log/glusterfs/glusterd.log file for any errors?




The test volume was distributed with two host members. One of the
hosts I was able to remove from the volume by removing the host
from the cluster. When I try to remove the remaining host in the
volume, even with the “Force Remove” box ticked, I get this
response: Cannot remove Host. Server having Gluster volume.

What to try next?

Since you have already removed the volume from one host in the cluster 
and you still see it on the other host, you can do the following to 
remove the volume from that host.


1) Log in to the host where the volume is present.
2) cd to /var/lib/glusterd/vols
3) rm -rf <volume-name>
4) Restart glusterd on that host.

And before doing the above make sure that you do not have any other 
volume present in the cluster.


The above steps should not be run on a production system, as you might 
lose the volume and its data.


Now removing the host from the UI should succeed.
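
Put together as commands, the steps above look roughly like this (the volume 
name is a placeholder; again, only for throwaway test volumes with no data):

# on the host that still has the stale test volume
cd /var/lib/glusterd/vols
rm -rf <volume-name>
systemctl restart glusterd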



P.S. I’ve tried to join this user group several times in the
past, with no response.
Is it possible for me to join this group?

Regards,
Andrew



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-11 Thread knarra

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:

Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:


Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test
gluster volume. The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove, the dialog
box prompting to confirm the deletion pops up and after I click
OK, the dialog box changes to show a little spinning wheel and
then it disappears. In the end the volume is still there.

With the latest version of glusterfs & oVirt we do not see any issue 
with deleting a volume. Can you please check the 
/var/log/glusterfs/glusterd.log file for any errors?




The test volume was distributed with two host members. One of the
hosts I was able to remove from the volume by removing the host
from the cluster. When I try to remove the remaining host in the
volume, even with the “Force Remove” box ticked, I get this
response: Cannot remove Host. Server having Gluster volume.

What to try next?

Since you have already removed the volume from one host in the cluster 
and you still see it on the other host, you can do the following to remove 
the volume from that host.


1) Log in to the host where the volume is present.
2) cd to /var/lib/glusterd/vols
3) rm -rf <volume-name>
4) Restart glusterd on that host.

And before doing the above make sure that you do not have any other 
volume present in the cluster.


The above steps should not be run on a production system, as you might lose 
the volume and its data.


Now removing the host from the UI should succeed.



P.S. I’ve tried to join this user group several times in the past,
with no response.
Is it possible for me to join this group?

Regards,
Andrew



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-11 Thread Sandro Bonazzola
Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:

> Hi Ovirt users,
> I’m a newbie to oVirt and I’m having trouble deleting a test gluster
> volume. The nodes are 4.1.1 and the engine is 4.1.0
>
> When I try to remove the test volume, I click Remove, the dialog box
> prompting to confirm the deletion pops up and after I click OK, the dialog
> box changes to show a little spinning wheel and then it disappears. In the
> end the volume is still there.
>
> The test volume was distributed with two host members. One of the hosts I
> was able to remove from the volume by removing the host from the cluster.
> When I try to remove the remaining host in the volume, even with the “Force
> Remove” box ticked, I get this response: Cannot remove Host. Server having
> Gluster volume.
>
> What to try next?
>
> P.S. I’ve tried to join this user group several times in the past, with no
> response.
> Is it possible for me to join this group?
>
> Regards,
> Andrew
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt node 4.1.1.1

2017-04-11 Thread Julius Bekontaktis
Hey guys,

Yesterday I upgraded my oVirt Node from 4.1.0 to 4.1.1.1. Everything
looks fine except the network configuration part is missing. If I enter
the URL manually (/network) like in the older version, I get a Not Found
error. I rolled back to the previous version and the network configuration
part is OK; upgraded again to the new one and there is no network
configuration. How should I configure networks? In the terminal, the old way?

Thanks for any comments. Maybe I am missing something.

Regards
Julius
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VDSM overwrites network config

2017-04-11 Thread Alan Cowles
Hey Dan,

Here is a layout of my environment setup, and the primary issue I have run
into, hopefully it helps to clear up the confusion.

Configure 2 hosts identically, eth0, eth1, eth0.502 (mgmt), eth1.504 (NFS)
and do the self-hosted engine install on one host.
As part of the install you have to identify the network that rhevm needs to
go on, and the network you need to mount your first NFS storage domain
from. In this case, eth0.502, and eth1.504 respectively.
That host, and the rhev-m hosted engine come up, however if you notice,
under Networks for the default datacenter, only rhevm exists as an actual
network.
You need to create a new network for NFS, in order to mount ISO/Data
storage domains, even though the engine setup has already mounted NFS via
eth1.504.
When you go to assign this new network from the engine, you cannot place it
on eth1.504, only directly on eth0 or eth1 itself.
Thus I have to be sure to tag that network in the RHEV-M engine.
When the tagged NFS network is placed on eth1, it looks like it breaks the
already existing NFS mount that supports the hosted engine, and causes
items to be non-responsive.
Rebooting the host at this stage, items still don't come up correctly and
the hosted engine remains down.
If I console to it, and manually setup the eth0 & eth1 interfaces, eth0.502
& eth1.504 VLANS,  and rhevm & NFS bridges, and reboot the host, the host
and the engine come up wonderfully with the defined networks vlan tagged,
and placed on the appropriate tag interfaces.

I then go to deploy a 2nd host as an additional hosted engine, and find
that I can select eth0.502 for rhevm and eth1.504 for nfs during the deploy
stages. But when it gets to the stage where it requires to you define the
networks that exist in the current cluster in order to activate the host
and proceed, I'm stuck in the same spot with applying networks, I can only
place them on the eth0/eth1 interfaces. I select ignore to exit the hosted
engine deployment wizard, and attempt to manually apply them, hoping to
repeat the steps from node 1 but was finding myself in a pickle because
starting VDSM would overwrite the network configs I had defined manually.
Why it does this on one host, and not on the other still perplexes me.

What I ended up doing once my primary host was rebuilt using the
appropriate bridges and vlan tagged interfaces, was reinstalling my 2nd
host completely, and configuring it as a self-hosted engine additional
host. This time it imports the network config from the first host
completely here, and I wind up with all tagged interfaces working correctly
and VDSM running as designed.

I guess the thing that bothered me mainly is the functionality in assigning
the networks in rhev manager, as it shows the vlans as sub-interfaces of
eth0/eth1, but doesn't let you assign networks to them, and the odd
behavior of VDSM overwriting configs on one host, but not the other?

I'll admit the setup I have is convoluted, but it's what I have to work
with for this project.

Thank you very much for the time and advice thus far.

On Sun, Apr 9, 2017 at 4:33 AM, Dan Kenigsberg  wrote:

> On Fri, Apr 7, 2017 at 4:24 PM, Alan Cowles  wrote:
> > Hey guys,
> >
> > I'm in a lab setup currently with 2 hosts, running RHEV-3.5, with a
> > self-hosted engine on RHEL 6.9 servers. I am doing this in order to plot
> out
> > a production upgrade I am planning going forward to 4.0, and I'm a bit
> stuck
> > and I'm hoping it's ok to ask questions here concerning this product and
> > version.
> >
> > In my lab, I have many vlans trunked on my switchports, so I have to
> create
> > individual vlan interfaces on my RHEL install. During the install, I am
> able
> > to pick my ifcfg-eth0.502 interface for rhevm, and ifcfg-eth1.504
> interface
> > for NFS, access the storage and create my self-hosted engine. The issue
> I am
> > running into is that I get into RHEV-M, and I am continuing to set the
> hosts
> > up or add other hosts, when I go to move my NFS network to host2 it only
> > allows me to select the base eth1 adapter, and not the VLAN tagged
> version.
> > I am able to tag the VLAN in the RHEV-M configured network itself, but
> this
> > has the unfortunate side effect of tagging a network on top of the
> already
> > tagged interface on host1, taking down NFS and the self hosted engine.
> >
> > I am able to access the console of host1, and I configure the ifcfg
> files,
> > vlan files, and bridge files to be on the correct interfaces, and I get
> my
> > host back up, and my RHEV-M back up. However when I try to make these
> manual
> > changes to host2 and get it up, the changes to these files are completely
> > overwritten the moment the host reboots connected to vdsmd start-up.
>
> If that was your only issue, I would have recommended you to read
> https://www.ovirt.org/blog/2016/05/modify-ifcfg-files/ and implement a
> hook that would leave the configuration as you wanted it.
>
>
> >

Re: [ovirt-users] EMC Unity 300

2017-04-11 Thread Nathanaël Blanchet
We use it (with the FAST Cache feature), though not as a storage domain in 
oVirt but with direct LUNs attached to VMs, and no issues so far 
after more than one month of use.



On 10/04/2017 at 14:27, Colin Coe wrote:

Hi all

We are contemplating moving away from our current iSCSI SAN to 
Dell/EMC  Unity 300.


I've seen one thread where the 300F is problematic and no resolution 
is posted.


Is anyone else using this with RHEV/oVirt?  Any war stories?

Thanks

CC


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Nathanaël Blanchet

Network supervision
IT Infrastructure Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] EMC Unity 300

2017-04-11 Thread Yaniv Kaul
On Mon, Apr 10, 2017 at 3:27 PM, Colin Coe  wrote:

> Hi all
>
> We are contemplating moving away from our current iSCSI SAN to Dell/EMC
>  Unity 300.
>
> I've seen one thread where the 300F is problematic and no resolution is
> posted.
>

Can you share a link?
Y.


>
> Is anyone else using this with RHEV/oVirt?  Any war stories?
>
> Thanks
>
> CC
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade hypervisor to 4.1.1.1

2017-04-11 Thread eric stam
We are on the right track.
I first put SELinux in permissive mode, and now it is possible to start
virtual machines.


I will test the "restorecon" next.
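
A quick sketch of that follow-up, based on the commands suggested further down
in the quoted thread (restore the context, switch back to enforcing, and check
that virtlogd starts):

restorecon -v /etc/libvirt/virtlogd.conf
setenforce 1                        # back to enforcing mode
systemctl restart virtlogd
systemctl status virtlogd           # should now start cleanly if the context was the problem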

Regards,
Eric

2017-04-10 14:00 GMT+02:00 Yuval Turgeman :

> restorecon on virtlogd.conf would be enough
>
> On Apr 10, 2017 2:51 PM, "Misak Khachatryan"  wrote:
>
>> Is it a node setup? Today I tried to upgrade my one-node cluster; after
>> that, VMs failed to start, and it turns out that SELinux prevents virtlogd
>> from starting.
>>
>> ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
>> semodule -i my-virtlogd.pp
>> /sbin/restorecon -v /etc/libvirt/virtlogd.conf
>>
>> fixed things for me, YMMV.
>>
>>
>> Best regards,
>> Misak Khachatryan
>>
>> On Mon, Apr 10, 2017 at 2:03 PM, Sandro Bonazzola 
>> wrote:
>>
>>> Can you please provide a full sos report from that host?
>>>
>>> On Sun, Apr 9, 2017 at 8:38 PM, Sandro Bonazzola 
>>> wrote:
>>>
 Adding node team.

 Il 09/Apr/2017 15:43, "eric stam"  ha scritto:

 Yesterday I executed an upgrade on my hypervisor to version 4.1.1.1
 After the upgrade, it is impossible to start a virtual machine on it.
 The messages I found: Failed to connect socket to
 '/var/run/libvirt/virtlogd-sock': Connection refused

 [root@vm-1 log]# hosted-engine --vm-status | grep -i engine

 Engine status  : {"reason": "bad vm status",
 "health": "bad", "vm": "down", "detail": "down"}

 state=EngineUnexpectedlyDown

 The Red Hat version: CentOS Linux release 7.3.1611 (Core)

 Is this a known problem?

 Regards, Eric



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA 
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>


-- 
Gr. Eric Stam
*Mob*.: 06-50278119
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python-SDK4: How to list VM user sessions?

2017-04-11 Thread Giulio Casella

On 10/04/2017 17:50, Juan Hernández wrote:

On 04/10/2017 05:38 PM, Giorgio Biacchi wrote:

...




OK, that makes sense. I think this is a bug in the server then. If the
users are not oVirt users we should just not report them, or maybe
report just the user name and domain, if available. But returning 404
and failing in that case isn't correct, in my opinion. I have opened the
following bug to track this issue:

  Don't fail with 404 if session user doesn't exist in the database
  https://bugzilla.redhat.com/1440861



Great, thank you Juan.
My preference is for reporting username and domain, instead of not 
reporting at all.


Thank you again.

Bye,
Giulio



On 04/10/2017 01:18 PM, Juan Hernández wrote:

On 04/10/2017 11:10 AM, Giulio Casella wrote:

On 07/04/2017 16:00, Juan Hernández wrote:

I have been trying to reproduce this and I wasn't able. In theory the
404 error that you get should only happen if the virtual machine
doesn't
exist, but that isn't the case.

Can you check the server.log file and share the complete stack traces
that should appear after the "HTTP 404 Not Found" message?



No problem, find attached a snippet of server.log.

Bye,
Giulio



Thanks, that helps. What the engine isn't finding is the user, not the
virtual machine. Can you provide more information about that user? I
mean, take the virtual machine and find via the GUI which user is using
it. Then go to https://.../ovirt-engine/api/users and find that user.
Share the definition of that user that you get there, if possible.
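
From a shell, that would be something like the following (engine host and
credentials are placeholders):

curl -s -k -u 'admin@internal:PASSWORD' \
     -H 'Accept: application/xml' \
     'https://engine.example.com/ovirt-engine/api/users'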


On 03/31/2017 10:25 AM, Giulio Casella wrote:

On 30/03/2017 20:05, Juan Hernández wrote:

On 03/30/2017 01:01 PM, Giulio Casella wrote:

Hi,
I'm trying to obtain a list of users connected to a VM, using
python SDK
v4.
Here's what I'm doing:

vm = vms_service.list(search="name=vmname")[0]
vm_service = vms_service.vm_service(vm.id)
sessions = vm_service.sessions_service().list()

But "sessions" is None.

Same result using:

s = connection.follow_link(vm.sessions)

"s" is None.

I tried also using curl, and if I connect to:

https://my.ovirt.host/ovirt-engine/api/v4/vms//sessions

I get a beautiful 404.

Also using v3 of python SDK I obtain the same behaviour.

So I suspect that retrieving user sessions via API is not
implemented,
is it? If not, what am I doing wrong?

I'm using RHV 4.0.6.3-0.1.el7ev

Thanks in advance,
Giulio



Giulio, you should never get a 404 error from that URL, unless the
virtual machine doesn't exist or isn't visible to you. What user name are
you using to create the SDK connection? An administrator or a regular user?



I tried with a regular domain user (with superuser role assigned) and
admin@internal, with same result.


Also, please check the /var/log/ovirt-engine/server.log and
/var/log/ovirt-engine/engine.log when you send that request. Do
you see
there something relevant?


server.log reports:

2017-03-31 10:03:11,346 ERROR [org.jboss.resteasy.resteasy_jaxrs.i18n]
(default task-33) RESTEASY002010: Failed to execute:
javax.ws.rs.WebApplicationException: HTTP 404 Not Found

(no surprise here, same message obtained by curl).

engine.log is full of:

ERROR [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter]
(default
task-7) [] Cannot authenticate using authentication Headers:
invalid_grant: The provided authorization grant for the auth code has
expired

(independently of my request)

It's quite strange I can perform almost every other operation (e.g.
getting other VM parameters, running methods, etc.)




Finally, please run your script with the 'debug=True' option in the
connection, and with a log file, like here:


https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/list_vms.py#L20-L37




Then share that log file so that we can check what the server is
returning exactly. Make sure to remove your password from that log
file
before sharing it.


Find attached produced log (passwords purged).

BTW: VM is a Fedora 24, with guest agents correctly installed (I
can see
user sessions in admin portal and in postgresql DB).

Thanks,
Giulio



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users