Re: [ovirt-users] Configuring an ISO domain in RHEV 4.0

2017-01-25 Thread paul.greene.va
Thanks - that did it. I noticed the SELinux contexts were all different 
for the files/folders created by the RHEV manager. Is it safe to assume 
that if I use the RHEV manager to move files around, the SELinux 
contexts will automatically be set correctly?



On 1/25/2017 3:44 PM, Gianluca Cecchi wrote:



On 25 Jan 2017 at 21:32, "paul.greene.va" 
<paul.greene...@verizon.net> wrote:


I'm trying to get an ISO domain configured in a new RHEVM manager.
I did not configure it during the initial ovirt-engine-setup, and
am manually configuring it after the fact.

I created an NFS share in /var/lib/exports/iso and dropped a
couple of files in there from elsewhere, just for testing purposes.

On a client system, I can mount the NFS share, but it appears empty.

On the RHEV manager when I try to add an ISO domain, I get this
error message:

"Error while executing action Add Storage Connection: Permission
settings on the specified path do not allow access to the storage.
Verify permission settings on the specified storage path."


Ownership of the root directory of the NFS mount point should be 36:36 
(vdsm:kvm), and the directory should be empty, with no other 
files/directories in it.

In general follow this
http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/
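For example, the export could be prepared roughly like this (a sketch only; it assumes the path from the original message and the 36:36 uid/gid that oVirt's vdsm:kvm users map to — adjust for your environment):

```shell
# Run on the NFS server as root. 36:36 is the uid:gid of vdsm:kvm
# on oVirt/RHEV hosts; the path matches the one in this thread.
mkdir -p /var/lib/exports/iso
chown -R 36:36 /var/lib/exports/iso
chmod 0755 /var/lib/exports/iso

# Export it; anonuid/anongid keep squashed clients mapped to 36:36.
echo '/var/lib/exports/iso *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' >> /etc/exports
exportfs -ra
```

The engine's "Verify permission settings" error is typically about exactly this ownership check, not the SELinux labels.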
HTH,
Gianluca


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Configuring an ISO domain in RHEV 4.0

2017-01-25 Thread paul.greene.va
I'm trying to get an ISO domain configured in a new RHEV Manager. I did 
not configure it during the initial ovirt-engine-setup, and am manually 
configuring it after the fact.


I created an NFS share in /var/lib/exports/iso and dropped a couple of 
files in there from elsewhere, just for testing purposes.


On a client system, I can mount the NFS share, but it appears empty.

On the RHEV manager when I try to add an ISO domain, I get this error 
message:


"Error while executing action Add Storage Connection: Permission 
settings on the specified path do not allow access to the storage.

Verify permission settings on the specified storage path."

I suspect the SELinux permissions aren't set correctly. Here is how 
they are currently configured:


On the folder itself
[root@hostname iso]# ll -dZ .
drwxr-xr-x. root root system_u:object_r:nfs_t:s0

On the two files in the folder:

[root@hostname iso]# ll -Z
-rwxrw-rw-. root root unconfined_u:object_r:nfs_t:s0 
sosreport-LogCollector-20170117092344.tar.xz

-rwxrw-rw-. root root unconfined_u:object_r:nfs_t:s0 test.txt

Is this correct? If not what should they be set to?
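As a quick cross-check from the client side, the mount itself can be inspected for the 36:36 ownership that the engine expects (server name below is a placeholder, not from the thread):

```shell
# From a client, as root. nfs-server.example.com is a placeholder.
showmount -e nfs-server.example.com          # is the export visible?
mount -t nfs nfs-server.example.com:/var/lib/exports/iso /mnt
stat -c '%u:%g' /mnt                         # expect 36:36 on the root
ls -ln /mnt                                  # files should show uid/gid 36 36
umount /mnt
```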

Thanks

Paul



Re: [ovirt-users] VDSM service won't start

2017-01-13 Thread paul.greene.va

Oh, I stumbled onto something relevant.

I noticed on the host that was working correctly that the ifcfg-enp6s0 
file included a line for "BRIDGE=ovirtmgmt", and the other two didn't 
have that line. When I added that line to the other two hosts, and 
restarted networking, I was able to get those hosts in a status of "UP".


That file is autogenerated by VDSM, so I wondered if it would survive a 
reboot. When I rebooted, the line had been removed again by VDSM.


So I guess the final question is: how do I make this BRIDGE line 
persistent, so it doesn't get removed across reboots?
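For reference, the line in question lives in the interface config file; a sketch of what it looks like (values are illustrative, not taken from the affected hosts — and note VDSM normally regenerates this file itself once the host is successfully added through the engine, so hand edits may not survive):

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp6s0 (sketch)
DEVICE=enp6s0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=ovirtmgmt    # attaches the NIC to the ovirtmgmt bridge
NM_CONTROLLED=no
```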



On 1/13/2017 2:54 PM, Nir Soffer wrote:

On Fri, Jan 13, 2017 at 9:24 PM, paul.greene.va
<paul.greene...@verizon.net> wrote:

Output below ...



On 1/13/2017 1:47 PM, Nir Soffer wrote:

On Fri, Jan 13, 2017 at 5:45 PM, paul.greene.va
<paul.greene...@verizon.net> wrote:

All,

I'm having an issue with the vdsmd service refusing to start on a fresh
install of RHEL 7.2, RHEV version 4.0.

It initially came up correctly, and the command "ip a" showed a
"vdsmdummy" interface and an "ovirtmgmt" interface. However, after a
couple of reboots, those interfaces disappeared, and running "systemctl
status vdsmd" generated the message "Dependency failed for Virtual
Desktop Service Manager/Job vdsmd.service/start failed with result
'dependency'". It didn't say which dependency, though.

I have 3 hosts; this is happening on 2 of the 3. For some odd reason,
the one host isn't having any problems.

In a Google search I found an instance where system clock timing was
out of sync, and that messed it up. I checked all three hosts, as well
as the RHEV manager, and they all had chronyd running and the clocks
appeared to be in sync.

After a reboot the virtual interfaces usually initially come up, but go
down
again within a few minutes.

Running journalctl -xe gives these three messages:

"failed to start Virtual Desktop Server Manager network restoration"

"Dependency failed for Virtual Desktop Server Manager" (but it doesn't
say which dependency failed)

"Dependency failed for MOM instance configured for VDSM purposes"
(again, doesn't say which dependency)

Any suggestions?

Can you share the output of:

systemctl status vdsmd
systemctl status mom
systemctl status libvirtd
journalctl -xe

Nir


Sure, here you go 



[root@rhevh03 vdsm]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
preset: enabled)
Active: inactive (dead)

Jan 13 12:01:53 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
Server Manager.
Jan 13 12:01:53 rhevh03 systemd[1]: Job vdsmd.service/start failed with
result 'dependency'.
Jan 13 13:51:50 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
Server Manager.
Jan 13 13:51:50 rhevh03 systemd[1]: Job vdsmd.service/start failed with
result 'dependency'.
Jan 13 13:55:15 rhevh03 systemd[1]: Dependency failed for Virtual Desktop
Server Manager.
Jan 13 13:55:15 rhevh03 systemd[1]: Job vdsmd.service/start failed with
result 'dependency'.



[root@rhevh03 vdsm]# systemctl status momd
● momd.service - Memory Overcommitment Manager Daemon
Loaded: loaded (/usr/lib/systemd/system/momd.service; static; vendor
preset: disabled)
Active: inactive (dead) since Fri 2017-01-13 13:53:09 EST; 2min 26s ago
   Process: 28294 ExecStart=/usr/sbin/momd -c /etc/momd.conf -d --pid-file
/var/run/momd.pid (code=exited, status=0/SUCCESS)
  Main PID: 28298 (code=exited, status=0/SUCCESS)

Jan 13 13:53:09 rhevh03 systemd[1]: Starting Memory Overcommitment Manager
Daemon...
Jan 13 13:53:09 rhevh03 systemd[1]: momd.service: Supervising process 28298
which is not our child. We'll most likely not notice when it exits.
Jan 13 13:53:09 rhevh03 systemd[1]: Started Memory Overcommitment Manager
Daemon.
Jan 13 13:53:09 rhevh03 python[28298]: No worthy mechs found



[root@rhevh03 vdsm]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor
preset: enabled)
   Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Fri 2017-01-13 13:50:47 EST; 8min ago
  Docs: man:libvirtd(8)
http://libvirt.org
  Main PID: 27964 (libvirtd)
CGroup: /system.slice/libvirtd.service
└─27964 /usr/sbin/libvirtd --listen

Jan 13 13:50:47 rhevh03 systemd[1]: Starting Virtualization daemon...
Jan 13 13:50:47 rhevh03 systemd[1]: Started Virtualization daemon.
Jan 13 13:53:09 rhevh03 libvirtd[27964]: libvirt version: 2.0.0, package:
10.el7_3.2 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>,
2016-11-10-04:43:57, x86-034.build.eng.bos.redhat.com)
Jan 13 13:53:09 rhevh03 libvirtd[27964]: hostname: rhevh03
Jan 13 13:53:09 rhevh03 libvirtd[27964]: End o
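Since systemd reports only "failed with result 'dependency'" without naming the unit, a way to chase down which dependency actually failed is with standard systemd queries (unit names as used in this thread):

```shell
# Show everything vdsmd.service pulls in, then look for failed units.
systemctl list-dependencies vdsmd.service
systemctl --failed

# Boot-scoped journal for vdsmd and the units it depends on:
journalctl -b -u vdsmd -u libvirtd -u momd --no-pager
```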

[ovirt-users] VDSM service won't start

2017-01-13 Thread paul.greene.va

All,

I'm having an issue with the vdsmd service refusing to start on a fresh 
install of RHEL 7.2, RHEV version 4.0.


It initially came up correctly, and the command "ip a" showed a 
"vdsmdummy" interface and an "ovirtmgmt" interface. However, after a 
couple of reboots, those interfaces disappeared, and running "systemctl 
status vdsmd" generated the message "Dependency failed for Virtual 
Desktop Service Manager/Job vdsmd.service/start failed with result 
'dependency'". It didn't say which dependency, though.


I have 3 hosts; this is happening on 2 of the 3. For some odd reason, 
the one host isn't having any problems.


In a Google search I found an instance where system clock timing was out 
of sync, and that messed it up. I checked all three hosts, as well as 
the RHEV manager and they all had chronyd running and the clocks 
appeared to be in sync.


After a reboot the virtual interfaces usually initially come up, but go 
down again within a few minutes.


Running journalctl -xe gives these three messages:

"failed to start Virtual Desktop Server Manager network restoration"

"Dependency failed for Virtual Desktop Server Manager" (but it doesn't 
say which dependency failed)

"Dependency failed for MOM instance configured for VDSM purposes" 
(again, doesn't say which dependency)


Any suggestions?

Paul
