[ovirt-users] Re: Gluster service failure

2019-05-15 Thread knarra

On 10/06/2016 01:39 PM, Koen Vanoppen wrote:

Dear all,

One little issue. I have 1 hypervisor in my datacenter that keeps 
having its gluster status disconnected in the GUI. But if I look on 
the server, the service is running. I added the logs after I clicked 
on "Restart gluster service".


Kind regards,

Koen




Hi,

I see that you are hitting an issue which shows "Restart glusterd 
service" in the host's General tab. If you are using an oVirt version earlier 
than 4.0.5, then before adding the second node and importing the storage 
domains into the cluster you need to move the first host to maintenance 
and immediately activate it. Once you do this you won't see this issue.
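As a quick sanity check on the host itself (standard systemd/gluster 
commands, nothing oVirt-specific assumed):

  systemctl status glusterd   # confirm the service really is active
  gluster peer status         # confirm the other peers see this node as connected

If both look healthy, the disconnect is purely on the engine side and the 
maintenance/activate cycle above should clear it.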


Hope this helps

Thanks

kasturi.




[ovirt-users] Re: Tracebacks in vdsm.log file

2019-05-14 Thread knarra

On 10/03/2016 11:02 PM, Nir Soffer wrote:

On Fri, Sep 30, 2016 at 3:58 PM, knarra  wrote:

Hi,

I see the traceback below in my vdsm.log. Can someone help me understand
why these are logged?


is free, finding out if anyone is waiting for it.
Thread-557::DEBUG::2016-09-30
18:20:25,064::resourceManager::661::Storage.ResourceManager::(releaseResource)
No one is waiting for resource 'Storage.upgrade_57ee3a08-004b-027b-0395-01d6', Clearing records.
Thread-557::ERROR::2016-09-30
18:20:25,064::utils::375::Storage.StoragePool::(wrapper) Unhandled exception
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 372, in
wrapper
 return f(*a, **kw)
   File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 177, in
run
 return func(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
78, in wrapper
 return method(self, *args, **kwargs)
   File "/usr/share/vdsm/storage/sp.py", line 207, in _upgradePoolDomain
 self._finalizePoolUpgradeIfNeeded()
   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
76, in wrapper
 raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state

This means that when a domain upgrade thread finished, the SPM had
already been stopped.

I'm seeing these errors from time to time on my development host using
master. I don't think you should worry about them.

Can you file a bug about this? We should clean this up at some point.

Nir


Thank you for the reply Nir. I have filed a bug, 
https://bugzilla.redhat.com/show_bug.cgi?id=1381418




[ovirt-users] Re: Tracebacks in vdsm.log file

2019-05-14 Thread knarra

Hi All,

The vdsm version I am using is vdsm-4.18.13-1.el7ev.x86_64. I was trying 
to upgrade an RHV-H node from the UI when I saw the following traceback in 
the vdsm log.


Thanks
kasturi

On 09/30/2016 06:28 PM, knarra wrote:

Hi,

I see the traceback below in my vdsm.log. Can someone help me 
understand why these are logged?



is free, finding out if anyone is waiting for it.
Thread-557::DEBUG::2016-09-30 
18:20:25,064::resourceManager::661::Storage.ResourceManager::(releaseResource) 
No one is waiting for resource 'Storage.upgrade_57ee3a08-004b-027b-0395-01d6', Clearing records.
Thread-557::ERROR::2016-09-30 
18:20:25,064::utils::375::Storage.StoragePool::(wrapper) Unhandled 
exception

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 372, in 
wrapper

return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 
177, in run

return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", 
line 78, in wrapper

return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 207, in _upgradePoolDomain
self._finalizePoolUpgradeIfNeeded()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", 
line 76, in wrapper

raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
b38e7a14-f880-4259-a7dd-3994bae2dbc2::DEBUG::2016-09-30 
18:20:25,065::__init__::398::IOProcessClient::(_startCommunication) 
Communication thread for client ioprocess-7 started
ioprocess communication (22325)::INFO::2016-09-30 
18:20:25,067::__init__::447::IOProcess::(_processLogs) Starting ioprocess
ioprocess communication (22325)::INFO::2016-09-30 
18:20:25,067::__init__::447::IOProcess::(_processLogs) Starting ioprocess


Thanks

kasturi



Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-13 Thread knarra

On 07/13/2017 04:30 PM, Simone Marchioni wrote:

Il 12/07/2017 10:59, knarra ha scritto:

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

Il 11/07/2017 11:23, knarra ha scritto:

Hi,

reply here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled, and gluster 3.8 
packages, but glusterfs-server was missing in my "yum install" 
command, so I added glusterfs-server to my installation.


Kasturi: packages ovirt-hosted-engine-setup, gdeploy and 
cockpit-ovirt-dashboard were already installed and updated. vdsm-gluster 
was missing, so I added it to my installation.

okay, cool.


:-)



Rerun deployment and IT WORKED! I can read the message "Successfully 
deployed Gluster" with the blue button "Continue to Hosted Engine 
Deployment". There's a minor glitch in the window: the green "V" in 
the circle is missing, like there's a missing image (or a wrong 
path, as I had to remove "ansible" from the grafton-sanity-check.sh 
path...)
There is a bug for this and it will be fixed soon. Here is the bug id 
for your reference. https://bugzilla.redhat.com/show_bug.cgi?id=1462082


Ok, thank you!



Although the deployment worked, and the firewalld and glusterfs 
errors are gone, a couple of errors remain:



AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script 
"/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh". That 
is why it failed.


You're right: changed the path and now it's ok.



PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Run a command in the shell] 
**
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => 
{"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": 
"0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": 
true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": 
"2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' 
does not exist", "stderr_lines": ["usermod: group 'gluster' does not 
exist"], "stdout": "", "stdout_lines": []}

to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP
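A plausible fix for the usermod failure above, assuming the 'gluster' group 
is simply missing on the hosts (the group name comes straight from the error 
message; the glusterfs packages normally create it):

  groupadd gluster             # create the missing group
  usermod -a -G gluster qemu   # rerun the step that failed
  id qemu                      # verify qemu is now in the gluster group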

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-12 Thread knarra

On 07/12/2017 01:43 PM, Simone Marchioni wrote:

Il 11/07/2017 11:23, knarra ha scritto:

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

Il 11/07/2017 07:59, knarra ha scritto:

Hi,

removed partition signatures with wipefs and ran the deploy again: this 
time the creation of VG and LV worked correctly. The deployment 
proceeded until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: 
host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1
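"Could not find the requested service glusterd" usually means the unit file 
itself is absent, i.e. the glusterfs-server package (which ships 
glusterd.service) was never installed; that matches the missing 
glusterfs-server reported elsewhere in this thread. A hedged check:

  rpm -q glusterfs-server                     # is the package installed?
  systemctl list-unit-files | grep glusterd   # does the unit exist?
  yum install glusterfs-server                # the likely fix if not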


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Add/Delete services to firewalld rules] 
**
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": 
"glusterfs", "msg": "ERROR: Exception caught: 
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' 
not among existing services Services are defined by port/tcp 
relationship and named as they are in /etc/services (on most systems)"}

to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry

PLAY RECAP 
*

ha1.domain.it: ok=1 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=1 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=1 changed=0 unreachable=0 failed=1
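The firewalld 'glusterfs' service definition is also shipped by 
glusterfs-server (as /usr/lib/firewalld/services/glusterfs.xml on typical 
installs), so the same missing package would explain this error too:

  firewall-cmd --get-services | tr ' ' '\n' | grep -x glusterfs   # present?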


PLAY [gluster_servers] 
*


TASK [Start firewalld if not already started] 
**

ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Open/Close firewalld ports] 
**

changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)

TASK [Reloads the firewall] 


changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]

PLAY RECAP 
*

ha1.domain.it: ok=3 changed=2 unreachable=0 failed=0
ha2.domain.it: ok=3 changed=2 unreachable=0 failed=0
ha3.

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread knarra

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

Il 11/07/2017 07:59, knarra ha scritto:

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
updated the path in the gdeploy config file and run Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha2.domain.it: ok=0 changed=0 unreachable=0 failed=1
ha3.domain.it: ok=0 changed=0 unreachable=0 failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device, due to 
which the script is failing and creating the physical volume also fails. 
Can you try filling the first 512MB or 1GB of the disk with zeros and try 
again?


dd if=/dev/zero of=

Before running the script again, try pvcreate and see if 
that works. If it works, just do pvremove and run the script. 
Everything should work fine.
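A minimal sketch of that suggestion, assuming /dev/md128 is the brick device 
from the errors above (destructive, so double-check the device name first):

  dd if=/dev/zero of=/dev/md128 bs=1M count=512   # zero the first 512MB, clearing stale signatures
  pvcreate /dev/md128                             # sanity check: should now succeed
  pvremove /dev/md128                             # undo the test PV, then rerun the deploy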


Thanks
kasturi


Thanks for your time.
Simone


Hi,

removed partition signatures with wipefs and ran the deploy again: this 
time the creation of VG and LV worked correctly. The deployment 
proceeded until some new errors... :-/
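For reference, the wipefs step sketched out (device name assumed from the 
earlier errors; destructive):

  wipefs /dev/md128      # list the signatures still present
  wipefs -a /dev/md128   # erase all of them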



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"g

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread knarra

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Il 10/07/2017 13:49, knarra ha scritto:

On 07/10/2017 04:18 PM, Simone Marchioni wrote:

Il 10/07/2017 09:08, knarra ha scritto:

Hi Simone,

Can you please  let me know what is the version of gdeploy and 
ansible on your system? Can you check if the path 
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh exist ? 
If not, can you edit the generated config file and change the path 
to "/usr/share/gdeploy/scripts/grafton-sanity-check.sh  and see if 
that works ?


You can check the logs in /var/log/messages , or setting 
log_path in /etc/ansbile/ansible.cfg file.


Thanks

kasturi.



Hi Kasturi,

thank you for your reply. Here are my versions:

gdeploy-2.0.2-7.noarch
ansible-2.3.0.0-3.el7.noarch

The file /usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh 
is missing. For the sake of completeness, the entire directory 
ansible is missing under /usr/share.


In /var/log/messages there is no error message, and I have no 
/etc/ansbile/ansible.cfg config file...


I'm starting to think there are some missing pieces in my 
installation. I installed the following packages:


yum install ovirt-engine
yum install ovirt-hosted-engine-setup
yum install ovirt-engine-setup-plugin-live ovirt-live-artwork-gnome 
libgovirt ovirt-live-artwork ovirt-log-collector gdeploy 
cockpit-ovirt-dashboard


and relative dependencies.

Any idea?
Can you check if "/usr/share/gdeploy/scripts/grafton-sanity-check.sh" 
is present ? If yes, can you change the path in your generated 
gdeploy config file and run again ?


Hi Kasturi,

you're right: the file 
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I 
updated the path in the gdeploy config file and run Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.lynx2000.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.lynx2000.it] => (item=/dev/md128)
skipping: [ha1.lynx2000.it] => (item=/dev/md128)
skipping: [ha3.lynx2000.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha1.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}
failed: [ha3.lynx2000.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: xfs 
signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: [n]\n  
Aborted wiping of xfs.\n  1 existing signature left on the device.\n", 
"rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha2.lynx2000.it: ok=0 changed=0 unreachable=0 failed=1
ha3.lynx2000.it: ok=0 changed=0 unreachable=0 

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread knarra

On 07/10/2017 04:18 PM, Simone Marchioni wrote:

Il 10/07/2017 09:08, knarra ha scritto:

On 07/07/2017 10:01 PM, Simone Marchioni wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a 
separate server. I wanted to test the last oVirt 4.1 with Gluster 
Storage and Hosted Engine.


Followed the following tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ 



I have 3 hosts as shown in the tutorial. Installed CentOS 7.3, the 
oVirt 4.1 repo and all required packages. Configured passwordless 
ssh as stated.
Then I log in cockpit web interface, selected "Hosted Engine with 
Gluster" and hit the Start button. Configured the parameters as 
shown in the tutorial.


In the last step (5) the Generated Gdeply configuration (note: 
replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp 


services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal 


value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine 


ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal 


value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data 


ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal 


value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bric

Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-10 Thread knarra

On 07/07/2017 10:01 PM, Simone Marchioni wrote:

Hi to all,

I have an old installation of oVirt 3.3 with the Engine on a separate 
server. I wanted to test the last oVirt 4.1 with Gluster Storage and 
Hosted Engine.


Followed the following tutorial:

http://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ 



I have 3 hosts as shown in the tutorial. Installed CentOS 7.3, the 
oVirt 4.1 repo and all required packages. Configured passwordless ssh 
as stated.
Then I log in cockpit web interface, selected "Hosted Engine with 
Gluster" and hit the Start button. Configured the parameters as shown 
in the tutorial.


In the last step (5) the Generated Gdeply configuration (note: 
replaced the real domain with "domain.it"):


#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ha1.domain.it
ha2.domain.it
ha3.domain.it

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb 
-h ha1.domain.it,ha2.domain.it,ha3.domain.it


[disktype]
raid6

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-multipath.sh

[pv1]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:{ha1.domain.it,ha2.domain.it}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=110GB
poolmetadatasize=1GB

[lv2:ha3.domain.it]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=80GB
poolmetadatasize=1GB

[lv3:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=50GB

[lv4:ha3.domain.it]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv5:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv6:ha3.domain.it]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv7:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv8:ha3.domain.it]
action=create
lvname=gluster_lv_export
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/export
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv9:{ha1.domain.it,ha2.domain.it}]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[lv10:ha3.domain.it]
action=create
lvname=gluster_lv_iso
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/iso
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp 


services=glusterfs

[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal 


value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/engine/engine,ha2.domain.it:/gluster_bricks/engine/engine,ha3.domain.it:/gluster_bricks/engine/engine 


ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal 


value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/data/data,ha2.domain.it:/gluster_bricks/data/data,ha3.domain.it:/gluster_bricks/data/data 


ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=export
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal 


value=virt,36,36,30,on,off,enable
brick_dirs=ha1.domain.it:/gluster_bricks/export/export,ha2.domain.it:/gluster_bricks/export/export,ha3.domain.it:/gluster_bricks/export/export 



Re: [ovirt-users] How to create a new Gluster volume

2017-07-07 Thread knarra

On 07/07/2017 02:03 PM, Gianluca Cecchi wrote:
On Fri, Jul 7, 2017 at 10:15 AM, knarra <kna...@redhat.com> wrote:






It seems I have to de-select the checkbox "Show available bricks
from host", and then I can manually type the directory of the bricks.

I see that the bricks are mounted in /gluster/brick3, and that is the
reason nothing shows up in the "Brick Directory" drop-down
field. If the bricks were mounted under /gluster_bricks they would
have been detected automatically. There is an RFE raised to
detect bricks which are created manually.


I deployed this HCI system with gdeploy back at oVirt 4.0.5 time, so I think 
I used the "default" path that was proposed inside the 
ovirt-gluster.conf file to feed gdeploy with...

I think it was based on this from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
and this conf file
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329d5ebe5/raw/ovirt-gluster.conf

Good that there is an RFE. Thanks




BTW: I see that after creating a volume optimized for oVirt in
the web admin GUI of 4.1.2, I get slightly different options for it
compared with a pre-existing volume created in 4.0.5 during the
initial setup with gdeploy.

NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I
have gluster 3.10 (manually updated from CentOS storage SIG)

Making a "gluster volume info" and then a diff of the output for
the 2 volumes I have:

new volume ==   <
old volume  ==>

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on

Do I have to change anything for the newly created one?

No, you do not need to change anything for the new volume. But if
you plan to enable o-direct on the volume then you will have to
disable/turn off remote-dio.


OK.
Again, in ovirt-gluster.conf file I see there was this kind of setting 
for the Gluster volumes when running gdeploy for them:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,1,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine
I'm going to crosscheck now what are the suggested values for oVirt 
4.1 and Gluster 3.10 combined...
Now the virt group sets the shard block size to the default, which is 
4MB and is the suggested value. With 4MB shards we see that healing is 
much faster when granular entry heal is enabled on the volume.


I am not sure why the conf file sets the shard size again. Maybe this 
can be removed from the file.


Other than this everything looks good for me.


I was in particular worried by the difference 
of features.shard-block-size but after reading this


http://blog.gluster.org/2015/12/introducing-shard-translator/

I'm not sure if 512MB is best in the case of VM storage; I'm going 
to dig more eventually.


Thanks,
Gianluca





Re: [ovirt-users] How to create a new Gluster volume

2017-07-07 Thread knarra

On 07/06/2017 04:38 PM, Gianluca Cecchi wrote:
On Thu, Jul 6, 2017 at 11:51 AM, Gianluca Cecchi wrote:


Hello,
I'm trying to create a new volume. I'm in 4.1.2
I'm following these indications:

http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/



When I click the "add brick" button, I don't see anything in
the "Brick Directory" dropdown field and I cannot manually input a
directory name.

On the 3 nodes I already have formatted and mounted fs

[root@ovirt01 ~]# df -h /gluster/brick3/
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
[root@ovirt01 ~]#

The guide tells

7. Click the Add Bricks button to select bricks to add to the
volume. Bricks must be created externally on the Gluster Storage
nodes.

What does it mean by "created externally"?
The next step from the OS point of view would be volume creation, but it is
indeed what I would like to do from the GUI...

Thanks,
Gianluca


It seems I have to de-select the checkbox "Show available bricks from 
host", and then I can manually type the directory of the bricks.
I see that the bricks are mounted in /gluster/brick3, and that is the reason 
nothing shows up in the "Brick Directory" drop-down field. If the 
bricks were mounted under /gluster_bricks they would have been detected 
automatically. There is an RFE raised to detect bricks which 
are created manually.


BTW: I see that after creating a volume optimized for oVirt in the web 
admin GUI of 4.1.2, I get slightly different options for it compared with a 
pre-existing volume created in 4.0.5 during the initial setup with gdeploy.


NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I have 
gluster 3.10 (manually updated from CentOS storage SIG)


Making a "gluster volume info" and then a diff of the output for the 2 
volumes I have:


new volume ==   <
old volume  ==>

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on

Do I have to change anything for the newly created one?
No, you do not need to change anything for the new volume. But if you 
plan to enable o-direct on the volume then you will have to disable/turn 
off remote-dio.
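In gluster CLI terms that would look roughly like this (volume name assumed):

  gluster volume set data network.remote-dio off          # required before strict o-direct
  gluster volume set data performance.strict-o-direct on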







Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread knarra

On 07/03/2017 06:58 PM, knarra wrote:

On 07/03/2017 06:53 PM, yayo (j) wrote:

Hi,

And sorry for delay

2017-06-30 14:09 GMT+02:00 knarra <kna...@redhat.com>:


To add a fully replicated node you need to reduce the replica
count to 2 and add the new brick to the volume so that it becomes
replica 3. Reducing the replica count by removing a brick from a
replica / arbiter cannot be done from the UI currently, and this has
to be done using the gluster CLI.
 AFAIR, there was an issue where VMs were going to paused state
when reducing the replica count and increasing it to 3. Not sure
if this still holds good with the latest release.

Any specific reason why you want to move to full replication
instead of using an arbiter node ?


We have a new server with the same hard disk size as the other two nodes, 
so, why not? Why join the cluster as an arbiter when we can have the 
same disk capacity to add extra replication?




and remove the arbiter node (Also a way to move the arbiter role
to the new node, If needed)

To move the arbiter role to a new node you can move the node to
maintenance, add the new node and replace the old brick with the new
brick. You can follow the steps below to do that.

  * Move the node to be replaced into Maintenance mode
  * Prepare the replacement node
  * Prepare bricks on that node.
  * Create replacement brick directories
  * Ensure the new directories are owned by the vdsm user and the
kvm group.
  * # mkdir /rhgs/bricks/engine
  * # chown vdsm:kvm /rhgs/bricks/engine
  * # mkdir /rhgs/bricks/data
  * # chown vdsm:kvm /rhgs/bricks/data
  * Run the following command from one of the healthy cluster
members:
  * # gluster peer probe 
  * add the new host to the cluster.
  * Add new host address to gluster network
  * Click Network Interfaces sub-tab.
  * Click Set up Host Networks.
  * Drag and drop the glusternw network onto the IP address of
the new host.
  * Click OK
  * Replace the old brick with the brick on the new host
  * Click the Bricks sub-tab.
  * Verify that brick heal completes successfully.
  * In the Hosts tab, right-click on the old host and click Remove.
  * Clean old host metadata
  * # hosted-engine --clean-metadata --host-id=
--force-clean



Do I need this (read: do I need the arbiter role) if I reduce the replica 
count, then add the new node as a full replica and increase the replica 
count again to 3? (As you explained above)



The above steps hold good if you want to move the arbiter role to a new node.

If you want to move to full replica, reducing the replica count will 
work fine, but increasing it back to 3 might cause VM pause issues.

So please power off your VMs while performing this.






Thank you







Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-07-03 Thread knarra

On 07/03/2017 06:53 PM, yayo (j) wrote:

Hi,

And sorry for delay

2017-06-30 14:09 GMT+02:00 knarra <kna...@redhat.com>:


To add a fully replicated node you need to reduce the replica
count to 2 and add the new brick to the volume so that it becomes
replica 3. Reducing the replica count by removing a brick from a replica
/ arbiter cannot be done from the UI currently, and this has to be done
using the gluster CLI.
 AFAIR, there was an issue where VMs were going to paused state
when reducing the replica count and increasing it to 3. Not sure
if this still holds good with the latest release.

Any specific reason why you want to move to full replication
instead of using an arbiter node ?


We have a new server with the same hard disk size as the other two nodes, 
so, why not? Why join the cluster as an arbiter when we can have the 
same disk capacity to add extra replication?




and remove the arbiter node (Also a way to move the arbiter role
to the new node, If needed)

To move the arbiter role to a new node you can move the node to
maintenance, add the new node and replace the old brick with the new brick.
You can follow the steps below to do that.

  * Move the node to be replaced into Maintenance mode
  * Prepare the replacement node
  * Prepare bricks on that node.
  * Create replacement brick directories
  * Ensure the new directories are owned by the vdsm user and the
kvm group.
  * # mkdir /rhgs/bricks/engine
  * # chown vdsm:kvm /rhgs/bricks/engine
  * # mkdir /rhgs/bricks/data
  * # chown vdsm:kvm /rhgs/bricks/data
  * Run the following command from one of the healthy cluster members:
  * # gluster peer probe 
  * add the new host to the cluster.
  * Add new host address to gluster network
  * Click Network Interfaces sub-tab.
  * Click Set up Host Networks.
  * Drag and drop the glusternw network onto the IP address of the
new host.
  * Click OK
  * Replace the old brick with the brick on the new host
  * Click the Bricks sub-tab.
  * Verify that brick heal completes successfully.
  * In the Hosts tab, right-click on the old host and click Remove.
  * Clean old host metadata
  * # hosted-engine --clean-metadata --host-id=
--force-clean



Do I need this (read: do I need the arbiter role) if I reduce the replica count, 
then add the new node as a full replica and increase the replica count 
again to 3? (As you explained above)



The above steps hold good if you want to move the arbiter role to a new node.

If you want to move to full replica, reducing the replica count will 
work fine, but increasing it back to 3 might cause VM pause issues.






Thank you





Re: [ovirt-users] Best way of doing complete stop of HCI environment?

2017-07-03 Thread knarra

On 07/03/2017 02:35 PM, Gianluca Cecchi wrote:


Any recommendations in case I have to take down a site for maintenance?

Thanks,
Gianluca




Hi,

You can follow the steps below to do that.

1) Stop all the virtual machines.

2) Move all the storage domains other than hosted_storage to maintenance 
which will unmount them from all the nodes.


3) Move HE to global maintenance: 'hosted-engine --set-maintenance 
--mode=global'


4) stop HE vm by running the command 'hosted-engine --vm-shutdown'

5) confirm that engine is down using the command 'hosted-engine --vm-status'

6) stop ha agent and broker services on all the nodes by running the 
command 'systemctl stop ovirt-ha-broker' ; 'systemctl stop ovirt-ha-agent'


7) umount hosted-engine from all the hypervisors 'hosted-engine 
--disconnect-storage'


8) stop all the volumes.
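A hedged one-liner for this step (volume names are site-specific; 
--mode=script skips the per-volume confirmation prompt):

  for v in $(gluster volume list); do gluster --mode=script volume stop "$v"; done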

9) power off all the hypervisors.

Hope this helps !!!

Thanks

kasturi.



Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread knarra

On 06/30/2017 04:53 PM, yayo (j) wrote:


2017-06-30 12:54 GMT+02:00 yayo (j):


The actual arbiter must be removed because it is too obsolete. So I
need to add the new "full replicated" node, but I want to know
the steps for adding a new "full replicated" node and removing
the arbiter node (also a way to move the arbiter role to the new
node, if needed). Extra info: I want to know if I can do this on
an existing oVirt gluster Data Domain (called Data01) because we
have many VMs running on it.


Hi,

I have found this doc from RH about replacing host in a gluster env: 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Replacing_Hosts.html


Can I use the command described at point 7?


# gluster volume replace-brick vol sys0.example.com:/rhs/brick1/b1 
sys5.example.com:/rhs/brick1/b1 commit force

volume replace-brick: success: replace-brick commit successful


The question is: will the replaced node be a data node (a "full 
replicated" node), or will it again be an arbiter?

It will be an arbiter again.


Thank you





Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread knarra

On 06/30/2017 04:24 PM, yayo (j) wrote:


2017-06-30 11:01 GMT+02:00 knarra <kna...@redhat.com>:


You do not need to remove the arbiter node, as you are getting the
advantage of saving on space by having this config.

Since you have a new server you can add it as a fourth node and create
another gluster volume (replica 3) out of this node plus the other
two nodes and run VM images there as well.


Hi,

And thanks for the answer. The actual arbiter must be removed because 
it is too obsolete. So I need to add the new "full replicated" node, but 
I want to know the steps for adding a new "full replicated" node.
To add a fully replicated node you need to reduce the replica count to 
2 and add the new brick to the volume so that it becomes replica 3. Reducing 
the replica count by removing a brick from a replica / arbiter cannot be done 
from the UI currently, and this has to be done using the gluster CLI.
 AFAIR, there was an issue where VMs were going to paused state when 
reducing the replica count and increasing it to 3. Not sure if this 
still holds good with the latest release.


Any specific reason why you want to move to full replication instead of 
using an arbiter node ?




and remove the arbiter node (also a way to move the arbiter role to 
the new node, if needed)
To move the arbiter role to a new node you can move the node to maintenance, 
add the new node and replace the old brick with the new brick. You can follow 
the steps below to do that.


 * Move the node to be replaced into Maintenance mode
 * Prepare the replacement node
 * Prepare bricks on that node.
 * Create replacement brick directories
 * Ensure the new directories are owned by the vdsm user and the kvm group.
 * # mkdir /rhgs/bricks/engine
 * # chown vdsm:kvm /rhgs/bricks/engine
 * # mkdir /rhgs/bricks/data
 * # chown vdsm:kvm /rhgs/bricks/data
 * Run the following command from one of the healthy cluster members:
 * # gluster peer probe 
 *   add the new host to the cluster.
 * Add new host address to gluster network
 * Click Network Interfaces sub-tab.
 * Click Set up Host Networks.
 * Drag and drop the glusternw network onto the IP address of the new host.
 * Click OK
 * Replace the old brick with the brick on the new host
 * Click the Bricks sub-tab.
 * Verify that brick heal completes successfully.
 * In the Hosts tab, right-click on the old host and click Remove.
 * Clean old host metadata
 * # hosted-engine --clean-metadata --host-id= --force-clean
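Pulling those bullets together, a rough CLI-side sketch (hostnames, volume 
name and brick paths are assumptions; the replace-brick syntax is the one 
quoted from the RH guide elsewhere in this thread):

  # on the replacement node: create the brick directories, owned by vdsm:kvm
  mkdir -p /rhgs/bricks/engine /rhgs/bricks/data
  chown vdsm:kvm /rhgs/bricks/engine /rhgs/bricks/data
  # from a healthy cluster member: add the peer, then swap the old brick
  gluster peer probe newhost.example.com
  gluster volume replace-brick data01 oldhost.example.com:/rhgs/bricks/data newhost.example.com:/rhgs/bricks/data commit force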



. Extra info: I want to know if I can do this on an existing oVirt 
gluster Data Domain (called Data01) because we have many VMs running on it.
When you move your node to maintenance, all the VMs running on that node 
will be migrated to another node, and since you have two nodes up and 
running there should not be any problem.


thank you




Re: [ovirt-users] add fourth full gluster node and remove arbiter: ovirt 4.1 with hosted engine

2017-06-30 Thread knarra

On 06/30/2017 02:18 PM, yayo (j) wrote:

Hi at all,

we have a 3-node cluster with this configuration:

oVirt 4.1 with 3 nodes hyperconverged with gluster. 2 nodes are "full 
replicated" and 1 node is the arbiter.


Now we have a new server to add to the cluster, so we want to add this 
new server and remove the arbiter (or make this new server a "full 
replicated" gluster node with the arbiter role? I don't know).
You do not need to remove the arbiter node, as you are getting the 
advantage of saving on space by having this config.


Since you have a new server you can add it as a fourth node and create another 
gluster volume (replica 3) out of this node plus the other two nodes and 
run VM images there as well.


Can you please help me understand the right way to do this? Or 
can you give me any doc or link that explains the steps to do this?


Thank you in advance!





Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread knarra

On 06/27/2017 09:49 PM, Abi Askushi wrote:

Hi all,

Just in case one needs it, in order to remove the secondary network 
interface from the engine, you can go to:
Virtual Machines -> Hostedengine -> Network Interfaces -> edit -> 
unplug it -> confirm -> remove it.
Cool. But in your previous mail you mentioned that it failed for you 
since the engine was running. Instead of removing, you tried unplugging here?


It was simple...


On Tue, Jun 27, 2017 at 4:54 PM, Abi Askushi <rightkickt...@gmail.com> wrote:


Hi Knarra,

Then I had already enabled NFS on the ISO gluster volume.
Maybe I had some networking issue then. I need to remove the
secondary interface in order to test that again.



On Tue, Jun 27, 2017 at 4:25 PM, knarra <kna...@redhat.com> wrote:

On 06/27/2017 06:34 PM, Abi Askushi wrote:

Hi Knarra,

The ISO domain is of type gluster though I had nfs enabled on
that volume.

You need to have NFS enabled on the volume. What I meant is
nfs.disable off, which means NFS is on.

For more info please refer to bug
https://bugzilla.redhat.com/show_bug.cgi?id=1437799
<https://bugzilla.redhat.com/show_bug.cgi?id=1437799>

I will disable the nfs and try. Though in order to try I need
first to remove that second interface from engine.
Is there a way I can remove the secondary storage network
interface from the engine?

I am not sure how to do that, but you may shut down the VM
using the command hosted-engine --vm-shutdown, which will power
off the VM, and try to remove the networks using vdsClient
(not sure if this is right, but suggesting a way).


Thanx




On Tue, Jun 27, 2017 at 3:32 PM, knarra <kna...@redhat.com> wrote:

On 06/27/2017 05:41 PM, Abi Askushi wrote:

Hi all,

When setting up the hosted engine on top of gluster with
3 nodes, I had gluster configured on a separate network
interface, as recommended. When I tried later to upload
ISO from engine to ISO domain, the engine was not able
to upload it since the VM did not have access to the
separate storage network. I then added the storage
network interface to the hosted engine and ISO upload
succeeded.

May I know what volume type was created and added as the
ISO domain?

If you plan to use a glusterfs volume below is the
procedure :

1) Create a glusterfs volume.
2) While adding storage domain select Domain Function as
'ISO' and Storage Type as 'glusterfs' .
3) You can either use 'use managed gluster volume' check
box and select the gluster volume which you have created
for storing ISO's or you can type the full path of the
volume.
4) Once this is added please make sure to set the option
nfs.disable off.
5) Now you can go to HE engine and run the command
engine-iso-uploader upload -i 


Iso gets uploaded successfully.



1st question: do I need to add the network interface to
the engine in order to upload ISOs? Does there exist any
alternate way?

AFAIK, this is not required when glusterfs volume is used.

Attached is the screenshot where i have only one network
attached to my HE which is ovirtmgmt.


Then I proceeded to configure bonding for the storage
domain, bonding 2 NICs at each server. When trying to
set a custom bond of mode=6 (as recommended for
gluster) I received a warning that modes 0, 5 and 6 cannot
be configured since the interface is used by VMs. I
also understood that having the storage network assigned
to VMs makes it a bridge, which decreases networking
performance. When trying to remove the network interface
from the engine, it cannot be done, since the engine is running.

2nd question: Is there a way I can remove the secondary
storage network interface from the engine?

Many thanx




Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread knarra

On 06/27/2017 06:34 PM, Abi Askushi wrote:

Hi Knarra,

The ISO domain is of type gluster, though I had NFS enabled on that 
volume.
You need to have NFS enabled on the volume. What I meant is nfs.disable 
off, which means NFS is on.


For more info please refer to bug 
https://bugzilla.redhat.com/show_bug.cgi?id=1437799
I will disable the NFS and try. Though in order to try I first need to 
remove that second interface from the engine.
Is there a way I can remove the secondary storage network interface 
from the engine?
I am not sure how to do that, but you may shut down the VM using the 
command hosted-engine --vm-shutdown, which will power off the VM, and try 
to remove the networks using vdsClient (not sure if this is right, but 
suggesting a way).


Thanx




On Tue, Jun 27, 2017 at 3:32 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 06/27/2017 05:41 PM, Abi Askushi wrote:

Hi all,

When setting up hosted engine setup on top gluster with 3 nodes,
I had gluster configured on a separate network interface, as
recommended. When I tried later to upload ISO from engine to ISO
domain, the engine was not able to upload it since the VM did not
have access to the separate storage network. I then added the
storage network interface to the hosted engine and ISO upload
succeeded.

May i know what was the volume type created and added as ISO domain ?

If you plan to use a glusterfs volume below is the procedure :

1) Create a glusterfs volume.
2) While adding storage domain select Domain Function as 'ISO' and
Storage Type as 'glusterfs' .
3) You can either use 'use managed gluster volume' check box and
select the gluster volume which you have created for storing ISO's
or you can type the full path of the volume.
4) Once this is added please make sure to set the option
nfs.disable off.
5) Now you can go to HE engine and run the command
engine-iso-uploader upload -i  

Iso gets uploaded successfully.



1st question: do I need to add the network interface to engine in
order to upload ISOs? does there exist any alternate way?

AFAIK, this is not required when glusterfs volume is used.

Attached is the screenshot where i have only one network attached
to my HE which is ovirtmgmt.


Then I proceeded to configure bonding for the storage domain,
bonding 2 NICs at each server. When trying to set a custom bond
of mode=6 (as recommended from gluster) I received a warning that
mode0, 5 and 6 cannot be configured since the interface is used
from VMs. I also understood that having the storage network
assigned to VMs makes it a bridge which decreases performance of
networking. When trying to remove the network interface from
engine it cannot be done, since the engine is running.

2nd question: Is there a way I can remove the secondary storage
network interface from the engine?

Many thanx


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt remove network from hosted engine

2017-06-27 Thread knarra

On 06/27/2017 05:41 PM, Abi Askushi wrote:

Hi all,

When setting up the hosted engine on top of gluster with 3 nodes, I had
gluster configured on a separate network interface, as recommended.
When I tried later to upload an ISO from the engine to the ISO domain, the engine
was not able to upload it since the VM did not have access to the
separate storage network. I then added the storage network interface
to the hosted engine and the ISO upload succeeded.

May I know what volume type was created and added as the ISO domain?

If you plan to use a glusterfs volume, below is the procedure:

1) Create a glusterfs volume.
2) While adding the storage domain, select Domain Function as 'ISO' and
Storage Type as 'glusterfs'.
3) You can either use the 'use managed gluster volume' check box and select
the gluster volume which you have created for storing ISOs, or you can
type the full path of the volume.

4) Once this is added, please make sure to set the option nfs.disable to off.
5) Now you can go to the HE engine and run the command engine-iso-uploader
upload -i 


Iso gets uploaded successfully.



1st question: do I need to add the network interface to the engine in
order to upload ISOs? Does there exist any alternate way?

AFAIK, this is not required when a glusterfs volume is used.

Attached is the screenshot where i have only one network attached to my 
HE which is ovirtmgmt.


Then I proceeded to configure bonding for the storage domain, bonding
2 NICs at each server. When trying to set a custom bond of mode=6 (as
recommended by gluster) I received a warning that modes 0, 5 and 6
cannot be configured since the interface is used by VMs. I also
understood that having the storage network assigned to VMs makes it a
bridge, which decreases networking performance. When trying to
remove the network interface from the engine it cannot be done, since the
engine is running.


2nd question: Is there a way I can remove the secondary storage 
network interface from the engine?


Many thanx


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] is arbiter configuration needed?

2017-06-23 Thread knarra

On 06/23/2017 03:38 PM, Erekle Magradze wrote:

Hello,
I am using glusterfs as the storage backend for the VM images; the volumes
for oVirt consist of three bricks. Is it still necessary to configure
an arbiter to be on the safe side, or since the number of bricks is
odd will it be done out of the box?

Thanks in advance
Cheers
Erekle
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

An arbiter volume is a special class of replica-3 volume. Arbiter is
special because the third brick of the replica set contains only directory
hierarchy information and metadata. Therefore, an arbiter provides
split-brain protection with the equivalent consistency of a replica-3
volume without incurring the additional storage space overhead.


If you already have a replica volume in your config with three
bricks then that config should be good. You do not need to create an arbiter.
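
For reference only (the reply above says an arbiter is not needed here), a sketch of how an arbiter volume would be created; host and brick names are examples:

gluster volume create vmstore replica 3 arbiter 1 \
    host1:/bricks/vmstore/brick host2:/bricks/vmstore/brick host3:/bricks/vmstore/arbiter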


Hope this helps !!

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged oVirt installation gluster problems

2017-06-16 Thread knarra

Hi,

grafton_sanity_check.sh checks if the disk has any labels or
partitions present on it. Since your disk already has a partition and
you are using the same disk to create the gluster brick as well, it fails.
Commenting out this script in the conf file and running again would
resolve your issue.


Thanks
kasturi.
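
A sketch of that workaround: in the gdeploy .conf used for the deployment, comment out the sanity-check section so the disk check is skipped. The section name and layout below follow the referenced blog post and are an assumption:

# [script1]
# action=execute
# file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h host01,host02,host03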

On 06/16/2017 06:56 PM, jesper andersson wrote:

Hi.

I'm trying to set up a 3 node ovirt cluster with gluster as this guide 
describes:

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
I've installed oVirt node 4.1.2 in one partition and left a partition 
to hold the gluster volumes on all three nodes. The problem is that I 
can't get through gdeploy for gluster install. I only get the error:

Error: Unsupported disk type!



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
changed: [host03] => 
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h host01,host02,host03)
changed: [host02] => 
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h host01,host02,host03)
changed: [host01] => 
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d 
sdb -h host01,host02,host03)


TASK [debug] 
***

ok: [host01] => {
"changed": false,
"msg": "All items completed"
}
ok: [host02] => {
"changed": false,
"msg": "All items completed"
}
ok: [host03] => {
"changed": false,
"msg": "All items completed"
}

PLAY RECAP 
*

host01 : ok=2  changed=1  unreachable=0  failed=0
host02 : ok=2  changed=1  unreachable=0  failed=0
host03 : ok=2  changed=1  unreachable=0  failed=0


PLAY [gluster_servers] 
*


TASK [Enable or disable services] 
**

ok: [host01] => (item=chronyd)
ok: [host03] => (item=chronyd)
ok: [host02] => (item=chronyd)

PLAY RECAP 
*

host01 : ok=1  changed=0  unreachable=0  failed=0
host02 : ok=1  changed=0  unreachable=0  failed=0
host03 : ok=1  changed=0  unreachable=0  failed=0


PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**

changed: [host03] => (item=chronyd)
changed: [host01] => (item=chronyd)
changed: [host02] => (item=chronyd)

PLAY RECAP 
*

host01 : ok=1  changed=1  unreachable=0  failed=0
host02 : ok=1  changed=1  unreachable=0  failed=0
host03 : ok=1  changed=1  unreachable=0  failed=0


Error: Unsupported disk type!





[root@host01 scripts]# fdisk -l

Disk /dev/sdb: 898.3 GB, 898319253504 bytes, 1754529792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0629cdcf

   Device Boot  Start End  Blocks   Id System

Disk /dev/sda: 299.4 GB, 299439751168 bytes, 584843264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7c39

   Device Boot  Start End  Blocks   Id System
/dev/sda1   *2048 2099199 1048576   83 Linux
/dev/sda2 2099200   584843263   291372032   8e Linux LVM

Disk /dev/mapper/onn_host01-swap: 16.9 GB, 16911433728 bytes, 33030144 
sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00_tmeta: 1073 MB, 1073741824 bytes, 
2097152 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00_tdata: 264.3 GB, 264266317824 
bytes, 516145152 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn_host01-pool00-tpool: 264.3 GB, 264266317824 
bytes, 516145152 sectors

Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes


Disk /dev/mapper/onn_host01-ovirt--node--ng--4.1.2--0.20170613.0+1: 
248.2 GB, 248160190464 bytes, 484687872 sectors

Units = sectors of 1 * 512 = 512 

Re: [ovirt-users] Remove host from hosted engine configuration

2017-06-15 Thread knarra

On 06/16/2017 08:17 AM, Mike Farnam wrote:
I had 3 hosts running in a hosted engine setup, oVirt Engine Version:
4.1.2.2-1.el7.centos, using FC storage. One of my hosts went
unresponsive in the GUI, and attempts to bring it back were
fruitless. I eventually decided to just remove it and have gotten
it removed from the GUI, but it still shows in "hosted-engine
--vm-status" on the other 2 hosts. The 2 good nodes show it as
the following:


--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date  : False
Hostname   : host3.my.lab
Host ID: 3
Engine status  : unknown stale-data
Score  : 0
stopped: False
Local maintenance  : True
crc32  : bce9a8c5
local_conf_timestamp   : 2605898 
Host timestamp : 2605882 
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2605882  (Thu Jun 15 15:18:13 2017)
host-id=3
score=0
vm_conf_refresh_time=2605898  (Thu Jun 15 15:18:29 2017)
conf_on_shared_storage=True
maintenance=True
state=LocalMaintenance
stopped=False

You can use the command 'hosted-engine --clean-metadata
--host-id= --force-clean' so that this node does not show up
in hosted-engine --vm-status.
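
A sketch of that cleanup, run from one of the healthy HE hosts; host-id 3 matches the stale "Host 3" entry shown above:

hosted-engine --vm-status                                  # confirm which host-id is stale
hosted-engine --clean-metadata --host-id=3 --force-clean   # purge its metadata
hosted-engine --vm-status                                  # Host 3 should no longer be listed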


How can I either remove this host altogether from the configuration,
or repair it so that it is back in a good state? The host is up, but
due to my removal attempts earlier, it reports "unknown stale-data" for
all 3 hosts in the config.


Thanks



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-Engine --Deploy option in Web

2017-06-09 Thread knarra

On 06/10/2017 02:32 AM, Langley, Robert wrote:
There is no Hosted Engine option within the Edit window of my 
additional host. How do I deploy Hosted-Engine to an existing host, if 
I’ve forgotten to do so when adding it as a new host? And, this host 
is one of the three storage servers hosting the engine gluster volume.

Thanks,
Robert Langley


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Robert,

   If hosted_storage and the HostedEngine VM are not imported into the
cluster, you will not be able to find the Hosted Engine tab in the edit / Add
new host dialog. For this you will have to first create a storage
domain, which will automatically import hosted_storage and the HostedEngine VM.


 Once you have the other two entities you can simply move the host
to maintenance and click reinstall. While reinstalling you should see a
tab called Hosted Engine; choose 'Deploy' in that tab.


 Hope this helps !!

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt install error

2017-06-08 Thread knarra

On 06/07/2017 10:33 PM, ov...@fateknollogee.com wrote:

I just used all the default settings installing from the ISO.
ansible v2.3.0.0
gdeploy v2.0.2
Thanks for confirming. I thought I had logged a bug for this, but when I
went back and looked I realized I had not. Would you mind
logging a bug for this?


On 2017-06-07 09:41, knarra wrote:

On 06/07/2017 03:15 AM, ov...@fateknollogee.com wrote:

I finally figured out what the error was all about

The default location for the gdeploy script is:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

The oVirt node installer 
"ovirt-node-ng-installer-ovirt-4.1-2017060504.iso" installed it in a 
different location:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

I copied the "gdeploy" folder to the default location & the error 
went away.
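
A sketch of that copy workaround, assuming the default paths mentioned in this thread:

mkdir -p /usr/share/ansible
cp -r /usr/share/gdeploy /usr/share/ansible/
# grafton-sanity-check.sh is then found at the expected
# /usr/share/ansible/gdeploy/scripts/ location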


**btw, I installed oVirt from scratch twice & both times got the 
same error**


Just wondering how this change happened. I will install with the ISO
you mentioned above and let you know the results. But before that I
have a few questions. Can you please tell me what version of
ansible and gdeploy you have on the node?


On 2017-06-06 13:01, ov...@fateknollogee.com wrote:

Ok, I will re-check a few things based on this:
https://bugzilla.redhat.com/show_bug.cgi?id=1405447

On 2017-06-06 12:58, ov...@fateknollogee.com wrote:

How do I check that?

Today, I'm re-installing but getting this error message:


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ovirt-N1-f25.fatek-dc.lab]: FAILED! => {"failed": true, 
"msg":

"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ovirt-N3-f25.fatek-dc.lab]: FAILED! => {"failed": true, 
"msg":

"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ovirt-N2-f25.fatek-dc.lab]: FAILED! => {"failed": true, 
"msg":

"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
to retry, use: --limit @/tmp/tmpEzKSy6/run-script.retry

PLAY RECAP 
*
ovirt-N1-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1
ovirt-N2-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1
ovirt-N3-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1



On 2017-06-01 00:08, knarra wrote:

On 06/01/2017 01:19 AM, ov...@fateknollogee.com wrote:

Any ideas what this is:

TASK [Run a shell script] 
**
fatal: [ovirt-node1.lab]: FAILED! => {"failed": true, "msg": 
"The conditional check 'result.rc != 0' failed. The error was: 
error while evaluating conditional (result.rc != 0): 'dict 
object' has no attribute 'rc'"}
fatal: [ovirt-node3.lab]: FAILED! => {"failed": true, "msg": 
"The conditional check 'result.rc != 0' failed. The error was: 
error while evaluating conditional (result.rc != 0): 'dict 
object' has no attribute 'rc'"}
fatal: [ovirt-node2.lab]: FAILED! => {"failed": true, "msg": 
"The conditional check 'result.rc != 0' failed. The error was: 
error while evaluating conditional (result.rc != 0): 'dict 
object' has no attribute 'rc'"}

to retry, use: --limit @/tmp/tmpaOHOtY/run-script.retry
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

Can you see if the script which is getting executed is present
in that path?


Thanks

kasturi



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt install error

2017-06-07 Thread knarra

On 06/07/2017 03:15 AM, ov...@fateknollogee.com wrote:

I finally figured out what the error was all about

The default location for the gdeploy script is:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

The oVirt node installer 
"ovirt-node-ng-installer-ovirt-4.1-2017060504.iso" installed it in a 
different location:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

I copied the "gdeploy" folder to the default location & the error went 
away.


**btw, I installed oVirt from scratch twice & both times got the same 
error**


Just wondering how this change happened. I will install with the ISO
you mentioned above and let you know the results. But before that I
have a few questions. Can you please tell me what version of
ansible and gdeploy you have on the node?


On 2017-06-06 13:01, ov...@fateknollogee.com wrote:

Ok, I will re-check a few things based on this:
https://bugzilla.redhat.com/show_bug.cgi?id=1405447

On 2017-06-06 12:58, ov...@fateknollogee.com wrote:

How do I check that?

Today, I'm re-installing but getting this error message:


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**

fatal: [ovirt-N1-f25.fatek-dc.lab]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ovirt-N3-f25.fatek-dc.lab]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ovirt-N2-f25.fatek-dc.lab]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
to retry, use: --limit @/tmp/tmpEzKSy6/run-script.retry

PLAY RECAP 
*
ovirt-N1-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1
ovirt-N2-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1
ovirt-N3-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1



On 2017-06-01 00:08, knarra wrote:

On 06/01/2017 01:19 AM, ov...@fateknollogee.com wrote:

Any ideas what this is:

TASK [Run a shell script] 
**
fatal: [ovirt-node1.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has 
no attribute 'rc'"}
fatal: [ovirt-node3.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has 
no attribute 'rc'"}
fatal: [ovirt-node2.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has 
no attribute 'rc'"}

to retry, use: --limit @/tmp/tmpaOHOtY/run-script.retry
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

Can you see if the script which is getting executed is present
in that path?


Thanks

kasturi



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine unattended install fails

2017-06-07 Thread knarra

On 06/07/2017 12:06 AM, Ramachandra Reddy Ankireddypalle wrote:

Hi,
   hosted engine unattended install fails with the following error:

[ ERROR ] Cannot automatically add the host to cluster Default:
400 Bad Request ("Your browser sent a request that this
server could not understand.")



  Please check Engine VM configuration.

  Make a selection from the options below:
  (1) Continue setup - Engine VM configuration has been fixed
  (2) Abort setup


Thanks and Regards,
Ram


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

Can you please check if your cluster has both virt + gluster
services enabled from the engine VM? You can run the command
'engine-config -g AllowClusterWithVirtGlusterEnabled' to see if this is
enabled. If not, run 'engine-config -s
AllowClusterWithVirtGlusterEnabled=true', then press continue setup and
this should work.
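
A sketch of that check and fix, run on the engine VM (the engine restart is my assumption; engine-config changes generally need one):

engine-config -g AllowClusterWithVirtGlusterEnabled        # show current value
engine-config -s AllowClusterWithVirtGlusterEnabled=true   # allow virt+gluster clusters
systemctl restart ovirt-engine                             # pick up the change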


But these issues are fixed in the latest version. Can you please
let me know which version you are using?


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt install error

2017-06-07 Thread knarra

On 06/07/2017 03:15 AM, ov...@fateknollogee.com wrote:

I finally figured out what the error was all about

The default location for the gdeploy script is:
/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh

The oVirt node installer 
"ovirt-node-ng-installer-ovirt-4.1-2017060504.iso" installed it in a 
different location:

/usr/share/gdeploy/scripts/grafton-sanity-check.sh

I copied the "gdeploy" folder to the default location & the error went 
away.


**btw, I installed oVirt from scratch twice & both times got the same 
error**

Hi,

Good to know that it worked after changing the script path. This
change was brought in with ansible 2.3, and we have a bug to change it to the
correct path from the cockpit UI. Not sure what the state of the bug is now, but
we should have it fixed soon.


Thanks
kasturi.


On 2017-06-06 13:01, ov...@fateknollogee.com wrote:

Ok, I will re-check a few things based on this:
https://bugzilla.redhat.com/show_bug.cgi?id=1405447

On 2017-06-06 12:58, ov...@fateknollogee.com wrote:

How do I check that?

Today, I'm re-installing but getting this error message:


PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**

fatal: [ovirt-N1-f25.fatek-dc.lab]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ovirt-N3-f25.fatek-dc.lab]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
fatal: [ovirt-N2-f25.fatek-dc.lab]: FAILED! => {"failed": true, "msg":
"The conditional check 'result.rc != 0' failed. The error was: error
while evaluating conditional (result.rc != 0): 'dict object' has no
attribute 'rc'"}
to retry, use: --limit @/tmp/tmpEzKSy6/run-script.retry

PLAY RECAP 
*
ovirt-N1-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1
ovirt-N2-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1
ovirt-N3-f25.fatek-dc.lab  : ok=0  changed=0  unreachable=0  failed=1



On 2017-06-01 00:08, knarra wrote:

On 06/01/2017 01:19 AM, ov...@fateknollogee.com wrote:

Any ideas what this is:

TASK [Run a shell script] 
**
fatal: [ovirt-node1.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has 
no attribute 'rc'"}
fatal: [ovirt-node3.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has 
no attribute 'rc'"}
fatal: [ovirt-node2.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has 
no attribute 'rc'"}

to retry, use: --limit @/tmp/tmpaOHOtY/run-script.retry
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

Can you see if the script which is getting executed is present
in that path?


Thanks

kasturi



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt install error

2017-06-01 Thread knarra

On 06/01/2017 01:19 AM, ov...@fateknollogee.com wrote:

Any ideas what this is:

TASK [Run a shell script] 
**
fatal: [ovirt-node1.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ovirt-node3.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ovirt-node2.lab]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpaOHOtY/run-script.retry
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

Can you see if the script which is getting executed is present in
that path?


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [GlusterFS] Regarding gluster backup-volfile-servers option

2017-05-18 Thread knarra

On 05/18/2017 10:35 AM, TranceWorldLogic . wrote:

Thanks,
Would you also explain about my 2nd question ?

Let's say hostB mounted the glusterFS partition by using the backup vol
(e.g. hostB), and after some time Host A comes online; will the GlusterFS
client (mount) automatically switch to Host A?

The mount will not switch automatically to Host A.

In the ideal case, i.e. without using the backup-volfile-servers option, if
host B goes down your mount will not be accessible, since host B went
down and df -TH or mount does not show the volume, or it says "Transport
endpoint not connected".


Take the case where you are using the backup-volfile-servers option. If host B goes
down, your mount will still be accessible since there is a
backup-volfile-server, and df -TH or mount still shows that the volume is
mounted using Host B, but all the internal requests will be served
through Host A.




On Thu, May 18, 2017 at 9:43 AM, knarra <kna...@redhat.com> wrote:


Hi,

backup-volfile-servers is mainly used to avoid a SPOF. For
example, take a scenario where you have Host A and Host B and
you try to mount a glusterfs volume using Host A with
backup-volfile-servers specified; if Host A is not accessible,
the mount will happen with Host B, which is specified in
backup-volfile-servers. backup-volfile-servers is mainly used to
fetch the volfile from gluster and has nothing to do with data sync.

Data syncing comes as part of the replicate feature in glusterfs:
say, for example, you have two hosts Host A and Host B with
a replica volume configured; if Host A goes down for some time, all
the writes happen on Host B, and when Host A comes up the data gets
synced to Host A.

Hope this helps 

Thanks
kasturi


On 05/17/2017 11:31 PM, TranceWorldLogic . wrote:

Hi,

Before trying out, I want to understand how glusterfs will react
for below scenario.
Please help me.

Let consider I have two host hostA and hostB
I have setup replica volume on hostA and hostB. (consider as
storage domain for DATA in ovirt).
I have configure data domain mount command with backup server
option (backup-volfile-server) as hostB (I mean main server as
hostA and backup as hostB)

1> As I understood, VDSM execute mount command on both hostA and
hostB.(for creating data domain)
2> That mean, HostB glusterFS CLIENT will communicate with main
server (hostA).
(Please correct me if I am wrong here.)
3> Let say HostA got down (say shutdown, power off scenario)
4> Due to backup option I will have data domain available on HostB.
(Now glusterFS CLIENT on HostB will start communicating with
HostB GlusterFS SERVER).
5> Now let say HostA comes up.
6> Will it sync all data from HostB to HostA glusterFS server ?
(as per doc, yes, i not tried yet, want to confirm my understanding)
7> Will glusterFS CLIENT on HostB start communicate with main
server (HostA) ?

Please let me know, I am new to glusterFS.

Thanks,
~Rohit


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [GlusterFS] Regarding gluster backup-volfile-servers option

2017-05-17 Thread knarra

Hi,

backup-volfile-servers is mainly used to avoid a SPOF. For example,
take a scenario where you have Host A and Host B and you try to
mount a glusterfs volume using Host A with backup-volfile-servers
specified; if Host A is not accessible, the mount will happen with Host B,
which is specified in backup-volfile-servers. backup-volfile-servers is
mainly used to fetch the volfile from gluster and has nothing to do
with data sync.
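
A sketch of such a mount, matching the hostA/hostB scenario in this thread; the volume name and mount point are examples:

mount -t glusterfs -o backup-volfile-servers=hostB hostA:/datavol /mnt/data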


Data syncing comes as part of the replicate feature in glusterfs:
say, for example, you have two hosts Host A and Host B with a replica
volume configured; if Host A goes down for some time, all the writes
happen on Host B, and when Host A comes up the data gets synced to Host A.


Hope this helps 

Thanks
kasturi

On 05/17/2017 11:31 PM, TranceWorldLogic . wrote:

Hi,

Before trying out, I want to understand how glusterfs will react for 
below scenario.

Please help me.

Let consider I have two host hostA and hostB
I have setup replica volume on hostA and hostB. (consider as storage 
domain for DATA in ovirt).
I have configure data domain mount command with backup server option 
(backup-volfile-server) as hostB (I mean main server as hostA and 
backup as hostB)


1> As I understood, VDSM executes the mount command on both hostA and
hostB (for creating the data domain).
2> That means the HostB glusterFS CLIENT will communicate with the main server
(hostA).

(Please correct me if I am wrong here.)
3> Let say HostA got down (say shutdown, power off scenario)
4> Due to backup option I will have data domain available on HostB.
(Now glusterFS CLIENT on HostB will start communicating with HostB 
GlusterFS SERVER).

5> Now let say HostA comes up.
6> Will it sync all data from HostB to HostA glusterFS server ?
(as per doc, yes, i not tried yet, want to confirm my understanding)
7> Will glusterFS CLIENT on HostB start communicate with main server 
(HostA) ?


Please let me know, I am new to glusterFS.

Thanks,
~Rohit


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Glusterd Service Stopped Working After Delete Volume And Peers ID

2017-05-11 Thread knarra

Hi,

When you say the service is not working, are you no longer able to
start glusterd? If so, can you please look at
/var/log/glusterfs/glusterd.log and see if there are any errors? How did
you delete your peer IDs? By running gluster peer detach?


Thanks
kasturi


On 05/11/2017 06:01 PM, Khalid Jamal wrote:


Dear Team


I have an error with the glusterd service: it stopped working after I deleted a
gluster volume and deleted all the peer IDs. What shall I do? I brought
back all the peer IDs on all gluster nodes but the service is still not
working. Is that a bug? By the way, my gluster version is 3.8.10.


best regards


Eng khalid jamal
System Admin@IT Department
Earthlink Telecom

Email: khalid.ja...@earthlinktele.com
No: 3355
skype: engkhalid21986
NO : 07704268321



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] unable to set group on volume

2017-05-10 Thread knarra

On 05/10/2017 06:37 AM, Joel Diaz wrote:

Hello ovirt users,

First off all, thanks for your work. I've been using the software for 
a few months and the experience has been great.


I'm having a hard time trying to set the group on a glusterfs volume

PLAY [master] 
**


TASK [Sets options for volume] 
*
failed: [192.168.170.141] (item={u'key': u'group', u'value': u'virt'}) 
=> {"failed": true, "item": {"key": "group", "value": "virt"}, "msg": 
"'/var/lib/glusterd/groups/virt' file format not valid.\n"}
From this error it looks like the virt file format is not valid? Can 
you please paste the contents of this file?
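
For reference, a sketch of what a valid groups/virt file looks like: plain key=value pairs, one per line. The exact option set varies by gluster version, so treat these keys as an assumption:

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
cluster.eager-lock=enable
network.remote-dio=enable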
changed: [192.168.170.141] => (item={u'key': u'storage.owner-uid', 
u'value': u'36'})
changed: [192.168.170.141] => (item={u'key': u'storage.owner-gid', 
u'value': u'36'})
changed: [192.168.170.141] => (item={u'key': u'network.ping-timeout', 
u'value': u'30'})
changed: [192.168.170.141] => (item={u'key': 
u'performance.strict-o-direct', u'value': u'on'})
changed: [192.168.170.141] => (item={u'key': u'network.remote-dio', 
u'value': u'off'})
changed: [192.168.170.141] => (item={u'key': 
u'cluster.granular-entry-heal', u'value': u'enable'})

to retry, use: --limit @/tmp/tmpdTWQ8B/gluster-volume-set.retry

PLAY RECAP 
*

192.168.170.141 : ok=0  changed=0  unreachable=0  failed=1

I've tried to remove glusterfs, wiping the glusterfs
configurations and reinstalling the service.


Any help would be appreciated.

Thank you,

Joel


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine Setup with the gluster bricks on the same disk as the OS

2017-05-08 Thread knarra

On 05/07/2017 04:48 PM, Mike DePaulo wrote:

Hi. I am trying to follow this guide. Is it possible to use part of my
OS disk /dev/sda for the bricks?
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-gluster-storage/

I am using oVirt Node 4.1.1.1. I am aware of the manual partitioning
requirements. I am guessing I have to create an LV for the OS that
does not take up the entire disk during install, manually create a pv
like /dev/sda3 afterwards, and then run Hosted Engine Setup and
specify /sda3 rather than sdb?

Thanks,
-Mike
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Mike,

If you create gluster bricks on the same disk as the OS it works, but we
do not recommend setting up gluster bricks on the same disk as the OS.
When a user tries to create a gluster volume by specifying bricks from the
root partition, it displays an error message saying that bricks in the root
partition are not recommended and that 'force' must be appended to create the volume.


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] configuring gluster volumes/bricks from ovirt ??

2017-05-04 Thread knarra

On 05/04/2017 02:57 PM, Matthias Leopold wrote:



Am 2017-05-04 um 10:40 schrieb knarra:

On 05/04/2017 01:55 PM, Matthias Leopold wrote:



Am 2017-05-04 um 10:21 schrieb knarra:

On 05/04/2017 01:16 PM, Matthias Leopold wrote:



Am 2017-05-04 um 09:00 schrieb knarra:

On 05/04/2017 12:28 PM, knarra wrote:

On 05/03/2017 07:44 PM, Matthias Leopold wrote:

hi,


i'm trying to get into this gluster thing with oVirt and added a 2
node gluster cluster to my oVirt 4.1 data center (just for
testing, i
know it can't have HA with 2 nodes). provisioning of the storage
hosts did apparently work and my storage cluster seems to be
operational.

i have very little understanding of glusterfs right know, but 
what i

think i am missing in the interface is a way to configure
volumes/bricks on my gluster cluster/hosts so i can use them for
storage domains (i want to use a "managed gluster volume"), the 
drop

down "Gluster" in "New domain" is empty. all i could find for
storage
specific UI was the "Services" tab for the storage cluster 
which is

empty.

once gluster hosts are added into the UI, users will be able to see
volumes created on those hosts and to use them as storage
domains. For this you will need to create a new storage domain with
the mount path as the gluster volume path.


i'm not using a hyperconverged/self-hosted setup, my engine is
located on a dedicated server and i used iSCSI storage for data
master domain. my hosts (for hypervisors and gluster storage) 
where

installed on top of centos7, not using oVirt Node.

does my setup make sense (it's only for testing)?
do i have to configure gluster hosts manually?

yes, you will have to do this manually.

Installing gluster packages has to be done manually. Once gluster
packages are installed you can create a gluster cluster from the oVirt
UI, add gluster hosts and create volumes on them using the volumes tab.


i'm sorry, but i'm missing all these "Gluster Volumes" UI components
that are mentioned in
http://www.ovirt.org/develop/release-management/features/gluster/gluster-support/. 



the tabs i see for my storage cluster are "General", "Logical
Networks", "Hosts", "Services", "Permissions". as i said 
"Services" is

empty, is that a problem?

what's wrong?

thx
matthias


Does your cluster have both virt+gluster enabled, or only virt? If only
virt, you will not be able to see them.

If the cluster has both virt+gluster services enabled, or only the
gluster service enabled, you should be able to see them.


my storage cluster has only gluster service enabled

matthias



I think you have selected the cluster and you are referring to the sub
tabs for that cluster. There should be a main tab called 'Volumes' which
is present. Are you not seeing that? I have attached a screenshot of the
same.



thanks for the screenshot, now I know what it should look like. I'm
attaching my screenshot. I'm missing a couple of elements, especially
"Cluster Node Type" (I don't have that in my VM cluster either). Is
there an obvious explanation? Next step would be to recreate the
gluster cluster with "clean" oVirt Nodes. Maybe my storage hosts are
botched; I had glusterfs 3.10 packages installed on one of them
previously.


thanks a lot so far
matthias


During engine-setup, when the application mode was asked, I hope you
set "Both".


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] configuring gluster volumes/bricks from ovirt ??

2017-05-04 Thread knarra

On 05/04/2017 12:28 PM, knarra wrote:

On 05/03/2017 07:44 PM, Matthias Leopold wrote:

hi,


i'm trying to get into this gluster thing with oVirt and added a 2 
node gluster cluster to my oVirt 4.1 data center (just for testing, i 
know it can't have HA with 2 nodes). provisioning of the storage 
hosts did apparently work and my storage cluster seems to be 
operational.


i have very little understanding of glusterfs right know, but what i 
think i am missing in the interface is a way to configure 
volumes/bricks on my gluster cluster/hosts so i can use them for 
storage domains (i want to use a "managed gluster volume"), the drop 
down "Gluster" in "New domain" is empty. all i could find for storage 
specific UI was the "Services" tab for the storage cluster which is 
empty.
once gluster hosts are added into the UI, users will be able to see
volumes created on those hosts and to use them as storage domains. For
this you will need to create a new storage domain with the mount path
as the gluster volume path.


i'm not using a hyperconverged/self-hosted setup, my engine is 
located on a dedicated server and i used iSCSI storage for data 
master domain. my hosts (for hypervisors and gluster storage) where 
installed on top of centos7, not using oVirt Node.


does my setup make sense (it's only for testing)?
do i have to configure gluster hosts manually?

yes, you will have to do this manually.
Installing gluster packages has to be done manually. Once gluster packages
are installed you can create a gluster cluster from the oVirt UI, add
gluster hosts and create volumes on them using the volumes tab.
do i need more than 2 storage hosts when i want to configure gluster 
with ovirt (in hyperconverged setup 3 hosts are mandatory)?
It is always recommended to have 3 hosts, since with replica 3
volumes there is little to no chance of split-brain issues. For a
hyperconverged setup 3 hosts are mandatory.
do i need oVirt Node/cockpit on the storage hosts to do further 
configuration?
you can reduce the pain of configuring gluster hosts manually if you
use cockpit on CentOS 7 / oVirt Node.


thanks a lot for reading
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] configuring gluster volumes/bricks from ovirt ??

2017-05-04 Thread knarra

On 05/03/2017 07:44 PM, Matthias Leopold wrote:

hi,


i'm trying to get into this gluster thing with oVirt and added a 2 
node gluster cluster to my oVirt 4.1 data center (just for testing, i 
know it can't have HA with 2 nodes). provisioning of the storage hosts 
did apparently work and my storage cluster seems to be operational.


i have very little understanding of glusterfs right know, but what i 
think i am missing in the interface is a way to configure 
volumes/bricks on my gluster cluster/hosts so i can use them for 
storage domains (i want to use a "managed gluster volume"), the drop 
down "Gluster" in "New domain" is empty. all i could find for storage 
specific UI was the "Services" tab for the storage cluster which is 
empty.
once gluster hosts are added into the UI, users will be able to see
volumes created on those hosts and to use them as storage domains. For
this you will need to create a new storage domain with the mount path as
the gluster volume path.


i'm not using a hyperconverged/self-hosted setup, my engine is located 
on a dedicated server and i used iSCSI storage for data master domain. 
my hosts (for hypervisors and gluster storage) where installed on top 
of centos7, not using oVirt Node.


does my setup make sense (it's only for testing)?
do i have to configure gluster hosts manually?

yes, you will have to do this manually.
do i need more than 2 storage hosts when i want to configure gluster 
with ovirt (in hyperconverged setup 3 hosts are mandatory)?
It is always recommended to have 3 hosts, since with replica 3
volumes there is little to no chance of split-brain issues. For a
hyperconverged setup 3 hosts are mandatory.
do i need oVirt Node/cockpit on the storage hosts to do further 
configuration?
you can reduce the pain of configuring gluster hosts manually if you
use cockpit on CentOS 7 / oVirt Node.


thanks a lot for reading
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Resolving host non_responsive

2017-05-03 Thread knarra

On 05/03/2017 05:35 PM, Alan Griffiths wrote:

Hi,

Following a short network outage a couple of HE hosts are reported as 
non_responsive in the engine. Storage was not affected and the VMs 
continue to run on those hosts. Is it possible to bring the hosts back 
under management without disrupting the running of the VMs? Is it as 
simple as confirming host has rebooted?


Thanks,

Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Alan,

One way is to move the host to maintenance, which will migrate all of
your VMs to another host, and then reinstall it from the UI. Another way is to
confirm the host has been rebooted; please make sure that there are no
VMs on it, or that the VMs have been migrated.


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

2017-05-03 Thread knarra

On 05/03/2017 03:53 PM, Oliver Dietzel wrote:

Actual size as displayed by lsblk is 558,9G , a combined size of 530 GB worked 
(engine 100, data 180, vmstore 250),
but only without thin provisioning. Deployment failed with thin provisioning 
enabled, but worked with fixed sizes.

Now i hang in hosted engine deployment (having set installation with gluster to 
yes when asked) with error:

"Failed to execute stage 'Environment customization': Invalid value provided to 
'ENABLE_HC_GLUSTER_SERVICE'"

Hi,

Can you provide me the exact question and your response to it,
because of which your setup failed?


Thanks
kasturi




-Original Message-
From: knarra [mailto:kna...@redhat.com]
Sent: Wednesday, 3 May 2017 12:16
To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster,
existing disk sdb not found or filtered, deployment fails

On 05/03/2017 03:20 PM, Oliver Dietzel wrote:

Thx a lot, i already got rid of the multipaths.

Now, 5 tries later, I try to understand how the disk space calc works.

I already understand that the combined GByte limit for my drive sdb is around 
530.
   

sdb8:16   0 558,9G  0 
disk

Now the thin pool creation kicks me! :)

(i do a  vgremove gluster_vg_sdb on all hosts and reboot all three
hosts between retries)

TASK [Create LVs with specified size for the VGs]
**
failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb',
u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'})
=> {"failed": true, "item": {"extent": "100%FREE", "lv":
"gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"},
"msg": "  Insufficient suitable allocatable extents for logical volume
gluster_thinpool_sdb: 135680 more required\n", "rc": 5}

I think you should input the size as 500GB if your actual disk size is 530GB.

-Original Message-
From: knarra [mailto:kna...@redhat.com]
Sent: Wednesday, 3 May 2017 11:17
To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org'
<users@ovirt.org>
Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on
gluster, existing disk sdb not found or filtered, deployment fails

On 05/03/2017 02:06 PM, Oliver Dietzel wrote:

Hi,

i try to set up a 3 node gluster based ovirt cluster, following this guide:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-
g
luster-storage/

oVirt nodes were installed with all disks available in the system,
installer limited to use only /dev/sda (both sda and sdb are HPE
logical volumes on a p410 raid controller)


Glusterfs deployment fails in the last step before engine setup:

PLAY RECAP *
hv1.iw  : ok=1  changed=1  unreachable=0  failed=0
hv2.iw  : ok=1  changed=1  unreachable=0  failed=0
hv3.iw  : ok=1  changed=1  unreachable=0  failed=0


PLAY [gluster_servers]
*

TASK [Clean up filesystem signature]
***
skipping: [hv1.iw] => (item=/dev/sdb)
skipping: [hv2.iw] => (item=/dev/sdb)
skipping: [hv3.iw] => (item=/dev/sdb)

TASK [Create Physical Volume]
**
failed: [hv3.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device
/dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv1.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device
/dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv2.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device
/dev/sdb not found (or ignored by filtering).\n", "rc": 5}


But: /dev/sdb exists on all hosts

[root@hv1 ~]# lsblk
NAME MAJ:MIN RM   SIZE RO 
TYPE  MOUNTPOINT
sda8:00 136,7G  0 
disk
...
sdb8:16   0 558,9G  0 
disk
└─3600508b1001c350a2c1748b0a0ff3860  253:50 558,9G  0 
mpath



What can i do to make this work?

___
Oliver Dietzel


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Oliver,


Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster, existing disk sdb not found or filtered, deployment fails

2017-05-03 Thread knarra

On 05/03/2017 03:20 PM, Oliver Dietzel wrote:

Thx a lot, i already got rid of the multipaths.

Now, 5 tries later, I try to understand how the disk space calc works.

I already understand that the combined GByte limit for my drive sdb is around 
530.
  

sdb8:16   0 558,9G  0 
disk

Now the thin pool creation kicks me! :)

(i do a  vgremove gluster_vg_sdb on all hosts and reboot all three hosts 
between retries)

TASK [Create LVs with specified size for the VGs] **
failed: [hv1.iw.rto.de] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'530GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"failed": true, "item": {"extent": 
"100%FREE", "lv": "gluster_thinpool_sdb", "size": "530GB", "vg": "gluster_vg_sdb"}, "msg": "  Insufficient suitable allocatable 
extents for logical volume gluster_thinpool_sdb: 135680 more required\n", "rc": 5}

I think you should input the size as 500GB if your actual disk size is 530GB.
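
A sketch for choosing a safe size: check the volume group's free space first, and size the thin pool a bit below it (thin pools also need room for their metadata):

vgdisplay gluster_vg_sdb | grep -i free   # shows free PE / size
# then set the pool size in the gdeploy config below that value, e.g. 500GB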


-Original Message-
From: knarra [mailto:kna...@redhat.com]
Sent: Wednesday, 3 May 2017 11:17
To: Oliver Dietzel <o.diet...@rto.de>; 'users@ovirt.org' <users@ovirt.org>
Subject: Re: [ovirt-users] Ovirt Node 4.1.1.1 installation, engine on gluster,
existing disk sdb not found or filtered, deployment fails

On 05/03/2017 02:06 PM, Oliver Dietzel wrote:

Hi,

i try to set up a 3 node gluster based ovirt cluster, following this guide:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4-1-and-g
luster-storage/

oVirt nodes were installed with all disks available in the system,
installer limited to use only /dev/sda (both sda and sdb are HPE
logical volumes on a p410 raid controller)


Glusterfs deployment fails in the last step before engine setup:

PLAY RECAP *
hv1.iw  : ok=1  changed=1  unreachable=0  failed=0
hv2.iw  : ok=1  changed=1  unreachable=0  failed=0
hv3.iw  : ok=1  changed=1  unreachable=0  failed=0


PLAY [gluster_servers]
*

TASK [Clean up filesystem signature]
***
skipping: [hv1.iw] => (item=/dev/sdb)
skipping: [hv2.iw] => (item=/dev/sdb)
skipping: [hv3.iw] => (item=/dev/sdb)

TASK [Create Physical Volume]
**
failed: [hv3.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device
/dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv1.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device
/dev/sdb not found (or ignored by filtering).\n", "rc": 5}
failed: [hv2.iw] (item=/dev/sdb) => {"failed": true,
"failed_when_result": true, "item": "/dev/sdb", "msg": "  Device
/dev/sdb not found (or ignored by filtering).\n", "rc": 5}


But: /dev/sdb exists on all hosts

[root@hv1 ~]# lsblk
NAME MAJ:MIN RM   SIZE RO 
TYPE  MOUNTPOINT
sda8:00 136,7G  0 
disk
...
sdb8:16   0 558,9G  0 
disk
└─3600508b1001c350a2c1748b0a0ff3860  253:50 558,9G  0 
mpath



What can i do to make this work?

___
Oliver Dietzel


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Oliver,

  I see that multipath is enabled on your system; for the device sdb it creates an
mpath device, and once this is created the system will identify sdb as
"3600508b1001c350a2c1748b0a0ff3860". To make this work, perform the steps below.

1) multipath -l (to list all multipath devices)

2) blacklist devices in /etc/multipath.conf by adding the lines below; if you
do not see this file, run the command 'vdsm-tool configure --force', which will
create the file for you.

blacklist {
  devnode "*"
}

3) multipath -F, which flushes all the mpath devices.

4) Restart multipathd by running the command 'systemctl restart multipathd'

This should solve the issue.
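
After those steps, a quick way to verify that sdb is no longer claimed by multipath:

multipath -l       # should list no mpath devices any more
lsblk /dev/sdb     # sdb should appear without an mpath child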

Thanks
kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Trouble Adding Gluster Host in oVirt Manager

2017-05-02 Thread knarra

Hi,

Can you please tell me which version of oVirt you are using?
I looked at the engine log and I see that the engine failed to establish an
SSH session with the host. Can you check if your hosts are reachable
from the engine?


2017-04-25 16:46:23,944-07 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-34) 
[0335c88e-96cc-46f4-ab22-cbf10d4645a2] Failed to establish session wi
th host 'gsa-stor1s.stor.local': SSH connection timed out connecting to 
'root@192.168.2.5'
2017-04-25 16:46:23,944-07 WARN 
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-34) 
[0335c88e-96cc-46f4-ab22-cbf10d4645a2] Validation of action 'AddVds'
failed for user admin@internal-authz. Reasons: 
VAR__ACTION__ADD,VAR__TYPE__HOST,$server 
192.168.2.5,VDS_CANNOT_CONNECT_TO_SERVER
2017-04-25 16:47:53,357-07 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-59) 
[3724100f-2593-41d6-b8fc-513c24cb2074] Failed to establish session wi
th host 'gsa-stor1s.stor.local': SSH connection timed out connecting to 
'root@192.168.2.5'


Thanks
kasturi

On 05/02/2017 01:47 AM, Langley, Robert wrote:

Attempt #3 to send engine log file with the compressed file. -Robert
These log files can be large for sending in email. So, I’m guessing it 
is best to send them as compressed. I’m learning here with the mailing 
list.

_
*From:* Langley, Robert
*Sent:* Monday, May 1, 2017 12:58 PM
*To:* 'users' 
*Cc:* 'Fred Rolland' 
*Subject:* Re: Trouble Adding Gluster Host in oVirt Manager
Engine.log attached from 20170427 (only including the one day, in 
order to decrease size)
Please, bear with me, I’m not sure about the best practice for sending 
the log. I hope the attachment goes through okay. << File: 
engine.log-20170427.txt >>



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Ovirt 4.0] HA VM fail to start on another Host.

2017-04-25 Thread knarra

On 04/25/2017 08:45 PM, TranceWorldLogic . wrote:

Hi,

This is regarding HA VM fail to restart on other host.

I have setup, which has 2 host in a cluster let say host1 and host2.
And one HA VM (with High priority), say vm1.
And also not storage domain is configure on host3 and it available all 
time.


1> Initially vm1 was running on host2.
2> Then I power OFF host2 to see whether ovirt start vm1 on host1.

I found two result in this case as below:
1> Sometimes vm1 retries to start, but retries on host2 itself.
2> Sometimes vm1 moves to the down state without retrying.

Can anyone explain about this behaviour ? Or Is this an issue ?

Note : I am using Ovirt 4.0.

Thanks,
~Rohit


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

  If a host is powered off and power management is enabled, the engine
will fence the host and restart it. During this process the VMs residing
on the host will be shut down and restarted on another node. All
the events can be seen in the engine UI.


Hope you have not missed enabling power management on the hosts.
Without power management enabled, even if the VM is marked as highly
available, it will not be.


The second thing to check is whether the VM has the guest agent installed on
it. If the VM does not have the guest agent installed then it won't be
restarted on a different host. More info on this can be found at [1].


 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1341106#c35

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt Engine Can't See The Gluster Storage

2017-04-25 Thread knarra

On 04/25/2017 02:17 PM, khalid wrote:

dear ovirt users team

I have an issue:

Six servers have glusterfs installed, configured with replica 3. When the
first 3 gluster servers are down, the oVirt engine can't see the gluster
storage domain. But I understood that when the first or last 3 servers are
down the storage domain should go read-only. Is that right, and how can
you help me with this issue?


best regards

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

 I just have a few questions to understand your issue.

1) Any reason you are using six servers? Are you trying to create a
distributed-replicate / replica 3 volume?


2) If you are using six servers, are the bricks of the volume present on
all the nodes or just the first three or last three nodes?


3) Once you have the gluster volume, has it been added as a storage
domain in the oVirt UI? Only if the gluster volume is added as a storage
domain in the UI can the oVirt engine see it.


4) With a replica 3 volume configured on three nodes, if two nodes go 
down the volume becomes a read-only file system.


5) What is the error you are facing?
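
To answer 1), 2) and 4) above, the volume layout and quorum settings can 
be read from any gluster node; a minimal sketch, assuming the volume is 
named "data":

gluster volume info data            # volume type (e.g. Distributed-Replicate) and brick list
gluster volume status data          # which brick processes are currently up
gluster volume get data cluster.quorum-type          # client-side quorum
gluster volume get data cluster.server-quorum-type   # server-side quorum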

Thanks

kasturi


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host confirmation screen

2017-04-24 Thread knarra

On 04/24/2017 03:59 PM, Nelson Lameiras wrote:

Hi kasturi,

Thanks for your answer,

Indeed, I tried again and after 1 minute and 17 seconds (!!) the 
confirmation screen disappeared. Is it really necessary to wait this 
long for the screen to disappear? (I can see in the background that 
the "upgrade" starts a few seconds after clicking OK.)


When putting a host into maintenance mode, a circular "waiting" 
animation is used to warn the user that "something" is happening. A 
similar animation would be useful in the "upgrade" screen after clicking 
OK, no?


cordialement, regards,


Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com <mailto:nelson.lamei...@lyra-network.com>
www.lyra-network.com <https://www.lyra-network.com/> | www.payzen.eu 
<https://payzen.eu>





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE


Not sure why it takes so long in your case; in mine it just takes a few 
seconds. But Yaniv mentioned a bug on this, so it would be good to track it down.



*From: *"knarra" <kna...@redhat.com>
*To: *"Nelson Lameiras" <nelson.lamei...@lyra-network.com>, "ovirt 
users" <users@ovirt.org>

*Sent: *Monday, April 24, 2017 7:34:17 AM
*Subject: *Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade 
host confirmation screen


On 04/21/2017 10:20 PM, Nelson Lameiras wrote:

Hello,

Since "upgrade" functionality is available for hosts in oVirt GUI
I have this strange bug :

- Click on "Installation>>Upgrade"
- Click "ok" on confirmation screen
- -> (bug) confirmation screen does not disappear as expected
- Click "ok" again on confirmation screen -> error : "system is
already upgrading"
- Click "cancel" to be able to return to oVirt

This happens using on :
ovirt engine : oVirt Engine Version: 4.1.1.6-1.el7.centos
client : windows 10
client : chrome Version 57.0.2987.133 (64-bit)

This bug was already present on oVirt 4.0 before updating to 4.1.

Has anybody else had this problem?

(will try to reproduce with firefox, IE)

cordialement, regards,


Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com
<mailto:nelson.lamei...@lyra-network.com>
www.lyra-network.com <https://www.lyra-network.com/> |
www.payzen.eu <https://payzen.eu>




Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Hi Nelson,

    Once you click on 'OK' you will need to wait a few seconds 
(before the confirmation disappears); then you can see that the upgrade 
starts.  In previous versions, once the user clicked 'OK' the 
confirmation screen usually disappeared immediately.


Thanks

kasturi




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread knarra

On 04/24/2017 05:36 PM, Sven Achtelik wrote:


Hi Kasturi,

I'll try that. Will this persist across a reboot of a host, or even a 
stop of the complete cluster?


Thank you


Hi Sven,

This is a volume set option (it has nothing to do with reboots) and it 
will remain set on the volume until you reset it manually using the 'gluster 
volume reset' command. You just need to execute 'gluster volume heal 
<volname> granular-entry-heal enable' and this will do the right thing 
for you.
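
A minimal sketch of that, assuming the volume is named "data" and is 
already started:

gluster volume heal data granular-entry-heal enable
# confirm the option took effect
gluster volume get data cluster.granular-entry-heal
# reset it later, if ever needed
gluster volume reset data cluster.granular-entry-heal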


Thanks
kasturi.


*From:* knarra [mailto:kna...@redhat.com]
*Sent:* Monday, 24 April 2017 13:44
*To:* Sven Achtelik <sven.achte...@eps.aero>; users@ovirt.org
*Subject:* Re: [ovirt-users] Hyperconverged Setup and Gluster healing

On 04/24/2017 05:03 PM, Sven Achtelik wrote:

Hi All,

my oVirt setup is 3 hosts with Gluster and replica 3. I always
try to stay on the current version and I'm applying
updates/upgrades if there are any. For this I put a host in
maintenance and also use the "Stop Gluster Service" checkbox.
After it's done updating I'll set it back to active and wait until
the engine sees all bricks again, and then I'll go for the next host.

This worked fine for me over the last months, and now that I have more
and more VMs running, the changes that are written to the gluster
volume while a host is in maintenance become a lot more, and it
takes pretty long for the healing to complete. What I don't
understand is that I don't really see a lot of network usage in
the GUI during that time, and it feels quite slow. The network for
gluster is 10G and I'm quite happy with its performance;
it's just the healing that takes long. I noticed that because
I couldn't update the third host because of unsynced gluster volumes.

Is there any limiting variable that slows down traffic during
healing that needs to be configured? Or should I maybe change my
update process somehow to avoid having so many changes in the queue?

Thank you,

Sven



___

Users mailing list

Users@ovirt.org <mailto:Users@ovirt.org>

http://lists.ovirt.org/mailman/listinfo/users

Hi Sven,

    Do you have granular entry heal enabled on the volume? If not, 
there is a feature called granular entry self-heal which should be 
enabled with sharded volumes to get its benefits: when a brick goes 
down and, say, only 1 in a million entries is created or deleted, 
self-heal is done only for that file instead of crawling the entire 
directory.


You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal 
enable/disable' command only if the volume is in the Created state. If 
the volume is in any other state (for example Started or Stopped), 
execute 'gluster volume heal VOLNAME granular-entry-heal enable/disable' 
to enable or disable the granular-entry-heal option.


Thanks

kasturi



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hyperconverged Setup and Gluster healing

2017-04-24 Thread knarra

On 04/24/2017 05:03 PM, Sven Achtelik wrote:


Hi All,

my oVirt setup is 3 hosts with Gluster and replica 3. I always try to 
stay on the current version and I'm applying updates/upgrades if there 
are any. For this I put a host in maintenance and also use the "Stop 
Gluster Service" checkbox. After it's done updating I'll set it back 
to active and wait until the engine sees all bricks again, and then 
I'll go for the next host.


This worked fine for me over the last months, and now that I have more and 
more VMs running, the changes that are written to the gluster volume 
while a host is in maintenance become a lot more, and it takes pretty 
long for the healing to complete. What I don't understand is that I 
don't really see a lot of network usage in the GUI during that time, 
and it feels quite slow. The network for gluster is 10G and I'm 
quite happy with its performance; it's just the healing that 
takes long. I noticed that because I couldn't update the third host 
because of unsynced gluster volumes.


Is there any limiting variable that slows down traffic during healing 
that needs to be configured? Or should I maybe change my update 
process somehow to avoid having so many changes in the queue?


Thank you,

Sven



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Sven,

    Do you have granular entry heal enabled on the volume? If not, there 
is a feature called granular entry self-heal which should be enabled 
with sharded volumes to get its benefits: when a brick goes down and, 
say, only 1 in a million entries is created or deleted, self-heal is 
done only for that file instead of crawling the entire directory.


You can run the 'gluster volume set VOLNAME cluster.granular-entry-heal 
enable/disable' command only if the volume is in the Created state. If the 
volume is in any other state (for example Started or Stopped), execute 
'gluster volume heal VOLNAME granular-entry-heal enable/disable' to enable 
or disable the granular-entry-heal option.


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt GUI bug? clicking "ok" on upgrade host confirmation screen

2017-04-23 Thread knarra

On 04/21/2017 10:20 PM, Nelson Lameiras wrote:

Hello,

Since "upgrade" functionality is available for hosts in oVirt GUI I 
have this strange bug :


- Click on "Installation>>Upgrade"
- Click "ok" on confirmation screen
- -> (bug) confirmation screen does not disappear as expected
- Click "ok" again on confirmation screen -> error : "system is 
already upgrading"

- Click "cancel" to be able to return to oVirt

This happens using on :
ovirt engine : oVirt Engine Version: 4.1.1.6-1.el7.centos
client : windows 10
client : chrome Version 57.0.2987.133 (64-bit)

This bug was already present on oVirt 4.0 before updating to 4.1.

Has anybody else had this problem?

(will try to reproduce with firefox, IE)

cordialement, regards,



Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com 
www.lyra-network.com  | www.payzen.eu 










Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Nelson,

    Once you click on 'OK' you will need to wait a few seconds 
(before the confirmation disappears); then you can see that the upgrade 
starts.  In previous versions, once the user clicked 'OK' the confirmation 
screen usually disappeared immediately.


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-21 Thread knarra

On 04/21/2017 06:34 PM, Jamie Lawrence wrote:

On Apr 20, 2017, at 10:36 PM, knarra <kna...@redhat.com> wrote:

The installer claimed it did, but I believe it didn't. Below the error from my 
original email, there's the following (apologies for not including it earlier; I 
missed it). Note: 04ff4cf1-135a-4918-9a1f-8023322f89a3 is the HE - I'm pretty 
sure it is complaining about itself. (In any case, I verified with both virsh and 
vdsClient that there are no other VMs running.)

^^^


2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.vm.runvm.Plugin._late_setup
2017-04-19 12:27:02 DEBUG otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:83 {'status': {'message': 'Done', 'code': 0}, 'items': 
[u'04ff4cf1-135a-4918-9a1f-8023322f89a3']}
2017-04-19 12:27:02 ERROR otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:91 The following VMs have been found: 
04ff4cf1-135a-4918-9a1f-8023322f89a3
2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
 method['method']()
   File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
 line 95, in _late_setup
 _('Cannot setup Hosted Engine with other VMs running')
RuntimeError: Cannot setup Hosted Engine with other VMs running
2017-04-19 12:27:02 ERROR otopi.context context._executeMethod:151 Failed to 
execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs 
running
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT 
DUMP - BEGIN
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/error=bool:'True'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, RuntimeError('Cannot 
setup Hosted Engine with other VMs running',), )]'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:774 ENVIRONMENT 
DUMP - END

James, generally this issue happens when the setup failed once and you tried 
re-running it. Can you clean it up and deploy again? HE should come up 
successfully. Below are the steps for cleaning it up.

Knarra,

I realize that. However, that is not the situation in my case. See above, at 
the mark - the UUID it is complaining about is the UUID of the hosted-engine it 
just installed. From the answers file generated from the run (whole thing 
below):


OVEHOSTED_VM/vmUUID=str:04ff4cf1-135a-4918-9a1f-8023322f89a3

Also see the WARNs I mentioned previously, quoted below. Excerpt:


Apr 19 12:29:20 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm root WARN File: 
/var/lib/libvirt/qemu/channels/04ff4cf1-135a-4918-9a1f-8023322f89a3.com.redhat.rhevm.vdsm
 already removed
Apr 19 12:29:20 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm root WARN File: 
/var/lib/libvirt/qemu/channels/04ff4cf1-135a-4918-9a1f-8023322f89a3.org.qemu.guest_agent.0
 already removed
Apr 19 12:29:30 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm 
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to 
broker, the number of errors has exceeded the limit (1)

I’m not clear on what it is attempting to do there, but it seems relevant.
I remember you said the HE VM was not started even though the installation was 
successful. Is local maintenance enabled on that host?


can you please check whether the services 'ovirt-ha-agent' and 
'ovirt-ha-broker' are running fine, and try restarting them once?
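
For example (standard hosted-engine/systemd commands; nothing here is 
specific to your setup):

systemctl status ovirt-ha-broker ovirt-ha-agent
systemctl restart ovirt-ha-broker ovirt-ha-agent
# then see what the HA agent reports for the deployment
hosted-engine --vm-status
# and clear local maintenance in case it is set
hosted-engine --set-maintenance --mode=none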





I know there is no failed install left on the gluster volume, because when I 
attempt an install, part of my scripted prep process is deleting and recreating 
the Gluster volume. The below instructions are more or less what I’m doing 
already in a script[1]. (the gluster portion of the script process is: stop the 
volume, delete the volume, remove the mount point directory to avoid Gluster’s 
xattr problem with recycling directories, recreate the directory, change perms, 
create the volume, start the volume, set Ovirt-recc’ed volume options.)

-j

[1] We have a requirement for automated setup of all production resources, so 
all of this ends up being scripted.


1) vdsClient -s 0 list table | awk '{print $1}' | xargs vdsClient -s 0 destroy

2) stop the volume and delete all the information inside the bricks from all 
the hosts

3) try to umount the storage from /rhev/data-center/mnt/ - 'umount -f 
/rhev/data-center/mnt/<dir>' if it is mounted

4) remove all dirs from /rhev/data-center/mnt/ - rm -rf /rhev/data-center/mnt/*

5) start the volume again and start the deployment.

Thanks
kasturi



If I start it manually, the default DC is down, the default cluster has the 
installation host in the cluster,  there is no storage, and the VM doesn’t show 
up in the GUI. In this install run, I have not yet started the engine manually.

you won't be seeing the HE vm until the HE storage is imported into the UI. HE storage 
will be automatically imported into the UI (which will import the HE vm too) once a 
master domain is present.

Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-20 Thread knarra

On 04/20/2017 10:48 PM, Jamie Lawrence wrote:

On Apr 19, 2017, at 11:35 PM, knarra <kna...@redhat.com> wrote:

On 04/20/2017 03:15 AM, Jamie Lawrence wrote:

I trialed installing the hosted engine, following the instructions at  
http://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
  . This is using Gluster as the backend storage subsystem.

Answer file at the end.

Per the docs,

"When the hosted-engine deployment script completes successfully, the oVirt 
Engine is configured and running on your host. The Engine has already configured the 
data center, cluster, host, the Engine virtual machine, and a shared storage domain 
dedicated to the Engine virtual machine.”

In my case, this is false. The installation claims success, but  the hosted 
engine VM stays stopped, unless I start it manually.

During the install process there is a step where HE vm is stopped and started. 
Can you check if this has happened correctly ?

The installer claimed it did, but I believe it didn't. Below the error from my 
original email, there's the following (apologies for not including it earlier; I 
missed it). Note: 04ff4cf1-135a-4918-9a1f-8023322f89a3 is the HE - I'm pretty 
sure it is complaining about itself. (In any case, I verified with both virsh and 
vdsClient that there are no other VMs running.)

2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.vm.runvm.Plugin._late_setup
2017-04-19 12:27:02 DEBUG otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:83 {'status': {'message': 'Done', 'code': 0}, 'items': 
[u'04ff4cf1-135a-4918-9a1f-8023322f89a3']}
2017-04-19 12:27:02 ERROR otopi.plugins.gr_he_setup.vm.runvm 
runvm._late_setup:91 The following VMs have been found: 
04ff4cf1-135a-4918-9a1f-8023322f89a3
2017-04-19 12:27:02 DEBUG otopi.context context._executeMethod:142 method 
exception
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in 
_executeMethod
 method['method']()
   File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
 line 95, in _late_setup
 _('Cannot setup Hosted Engine with other VMs running')
RuntimeError: Cannot setup Hosted Engine with other VMs running
2017-04-19 12:27:02 ERROR otopi.context context._executeMethod:151 Failed to 
execute stage 'Environment setup': Cannot setup Hosted Engine with other VMs 
running
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT 
DUMP - BEGIN
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/error=bool:'True'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, RuntimeError('Cannot 
setup Hosted Engine with other VMs running',), )]'
2017-04-19 12:27:02 DEBUG otopi.context context.dumpEnvironment:774 ENVIRONMENT 
DUMP - END
James, generally this issue happens when the setup failed once and you 
tried re-running it. Can you clean it up and deploy again? HE 
should come up successfully. Below are the steps for cleaning it up.


1) vdsClient -s 0 list table | awk '{print $1}' | xargs vdsClient -s 0 
destroy


2) stop the volume and delete all the information inside the bricks from 
all the hosts


3) try to umount the storage from /rhev/data-center/mnt/ - 'umount 
-f /rhev/data-center/mnt/<dir>' if it is mounted


4) remove all dirs from /rhev/data-center/mnt/ - rm 
-rf /rhev/data-center/mnt/*


5) start the volume again and start the deployment.
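
Roughly, as one destructive sequence (a sketch only -- the volume name 
"engine" and the brick path are assumptions, and the brick wipe has to 
run on every host):

# 1) kill any leftover VMs known to vdsm
vdsClient -s 0 list table | awk '{print $1}' | xargs -r vdsClient -s 0 destroy
# 2) stop the volume, then wipe the brick contents on each host
gluster volume stop engine
rm -rf /gluster/engine/brick/*
# 3+4) unmount and clear the local mount points
umount -f /rhev/data-center/mnt/glusterSD/* 2>/dev/null
rm -rf /rhev/data-center/mnt/*
# 5) start the volume and re-run the deployment
gluster volume start engine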

Thanks
kasturi




If I start it manually, the default DC is down, the default cluster has the 
installation host in the cluster,  there is no storage, and the VM doesn’t show 
up in the GUI. In this install run, I have not yet started the engine manually.

you won't be seeing the HE vm until the HE storage is imported into the UI. HE storage 
will be automatically imported into the UI (which will import the HE vm too) once a 
master domain is present.

Sure; I’m just attempting to provide context.


I assume this is related to the errors in ovirt-hosted-engine-setup.log, below. 
(The timestamps are confusing; it looks like the Python errors are logged some 
time after they’re captured or something.) The HA broker and agent logs just 
show them looping in the sequence below.

Is there a decent way to pick this up and continue? If not, how do I make this 
work?

Can you please check the following things.

1) is glusterd running on all the nodes ? 'systemctl status glusterd'
2) Are you able to connect to your storage server which is ovirt_engine in your 
case.
3) Can you check if all the brick processes in the volume are up ?


1) Verified that glusterd is running on all three nodes.

2)
[root@sc5-thing-1]# mount -tglusterfs sc5-gluster-1:/ovirt_engine 
/mnt/ovirt_engine
[root@sc5-thing-1]# df -h
Filesystem  Size  Used Avail Use% Mounted on
[…]
sc5-gluster-1:/ovirt_engine 300G  2.6G  

Re: [ovirt-users] Hosted engine install failed; vdsm upset about broker

2017-04-20 Thread knarra

On 04/20/2017 03:15 AM, Jamie Lawrence wrote:

I trialed installing the hosted engine, following the instructions at  
http://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/
  . This is using Gluster as the backend storage subsystem.

Answer file at the end.

Per the docs,

"When the hosted-engine deployment script completes successfully, the oVirt 
Engine is configured and running on your host. The Engine has already configured the 
data center, cluster, host, the Engine virtual machine, and a shared storage domain 
dedicated to the Engine virtual machine.”

In my case, this is false. The installation claims success, but  the hosted 
engine VM stays stopped, unless I start it manually.
During the install process there is a step where HE vm is stopped and 
started. Can you check if this has happened correctly ?

If I start it manually, the default DC is down, the default cluster has the 
installation host in the cluster,  there is no storage, and the VM doesn’t show 
up in the GUI. In this install run, I have not yet started the engine manually.
you won't be seeing the HE vm until the HE storage is imported into the UI. HE 
storage will be automatically imported into the UI (which will import the HE 
vm too) once a master domain is present.


I assume this is related to the errors in ovirt-hosted-engine-setup.log, below. 
(The timestamps are confusing; it looks like the Python errors are logged some 
time after they’re captured or something.) The HA broker and agent logs just 
show them looping in the sequence below.

Is there a decent way to pick this up and continue? If not, how do I make this 
work?

Can you please check the following things.

1) is glusterd running on all the nodes ? 'systemctl status glusterd'
2) Are you able to connect to your storage server which is ovirt_engine 
in your case.

3) Can you check if all the brick processes in the volume are up ?

Thanks
kasturi.



Thanks,

-j

- - - - ovirt-hosted-engine-setup.log snippet: - - - -

2017-04-19 12:29:55 DEBUG otopi.context context._executeMethod:128 Stage 
late_setup METHOD otopi.plugins.gr_he_setup.system.vdsmenv.Plugin._late_setup
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
systemd.status:90 check service vdsmd status
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:813 execute: ('/bin/systemctl', 'status', 'vdsmd.service'), 
executable='None', cwd='None', env=None
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'status', 
'vdsmd.service'), rc=0
2017-04-19 12:29:55 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:921 execute-output: ('/bin/systemctl', 'status', 
'vdsmd.service') stdout:
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: enabled)
Active: active (running) since Wed 2017-04-19 12:26:59 PDT; 2min 55s ago
   Process: 67370 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh 
--post-stop (code=exited, status=0/SUCCESS)
   Process: 69995 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
  Main PID: 70062 (vdsm)
CGroup: /system.slice/vdsmd.service
└─70062 /usr/bin/python2 /usr/share/vdsm/vdsm

Apr 19 12:29:00 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm 
ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to connect to 
broker, the number of errors has exceeded the limit (1)
Apr 19 12:29:00 sc5-ovirt-2.squaretrade.com vdsm[70062]: vdsm root ERROR failed 
to retrieve Hosted Engine HA info
  Traceback (most 
recent call last):
File 
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
  stats = 
instance.get_all_stats()
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", 
line 102, in get_all_stats
  with 
broker.connection(self._retries, self._wait):
File 
"/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
  return 
self.gen.next()
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 99, in connection
  
self.connect(retries, wait)
File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 78, in connect
  raise 
BrokerConnectionError(error_msg)
   

Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread knarra

On 04/12/2017 08:45 PM, Precht, Andrew wrote:


Hi all,

You asked: Any errors in ovirt-engine.log file ?

Yes, In the engine.log this error is repeated about every 3 minutes:


2017-04-12 07:16:12,554-07 ERROR 
[org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob] 
(DefaultQuartzScheduler3) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] Error 
updating tasks from CLI: 
org.ovirt.engine.core.common.errors.EngineException: EngineException: 
Command execution failed
error: Error : Request timed out
return code: 1
(Failed with error GlusterVolumeStatusAllFailedException and code 4161)
error: Error : Request timed out



I am not sure why this says "Request timed out".


1) gluster volume list ->  Still shows the deleted volume (test1)

2) gluster peer status -> Shows one of the peers twice with different 
uuid’s:


Hostname: 192.168.10.109
Uuid: 42fbb7de-8e6f-4159-a601-3f858fa65f6c
State: Peer in Cluster (Connected)

Hostname: 192.168.10.109
Uuid: e058babe-7f9d-49fe-a3ea-ccdc98d7e5b5
State: Peer in Cluster (Connected)



How did this happen? Are the hostnames the same for two hosts?


I tried a gluster volume stop test1, with this result: volume stop: 
test1: failed: Another transaction is in progress for test1. Please 
try again after sometime.



can you restart glusterd and try to stop and delete the volume?
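
That is, something like this on the node holding the volume 
(--mode=script only suppresses the confirmation prompts):

systemctl restart glusterd
gluster --mode=script volume stop test1
gluster --mode=script volume delete test1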


The etc-glusterfs-glusterd.vol.log shows no activity triggered by 
trying to remove the test1 volume from the UI.



The ovirt-engine.log shows this repeating many times, when trying to 
remove the test1 volume from the UI:



2017-04-12 07:57:38,049-07 INFO 
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
(DefaultQuartzScheduler9) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] 
Failed to acquire lock and wait lock 
'EngineLock:{exclusiveLocks='[b0e1b909-9a6a-49dc-8e20-3a027218f7e1=<GLUSTER, 
ACTION_TYPE_FAILED_GLUSTER_OPERATION_INPROGRESS>]', sharedLocks='null'}'


can you restart the ovirt-engine service, because I see "failed to 
acquire lock". Once ovirt-engine is restarted, whoever is holding 
the lock should release it and things should work fine.


Last but not least, if none of the above works:

Login to all your nodes in the cluster.
rm -rf /var/lib/glusterd/vols/*
rm -rf /var/lib/glusterd/peers/*
systemctl restart glusterd on all the nodes.

Login to UI and see if any volumes / hosts are present. If yes, remove them.

This should clear things up for you and you can start from scratch.



Thanks much,

Andrew

----
*From:* knarra <kna...@redhat.com>
*Sent:* Tuesday, April 11, 2017 11:10:04 PM
*To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/12/2017 03:35 AM, Precht, Andrew wrote:


I just noticed this in the Alerts tab: Detected deletion of volume 
test1 on cluster 8000-1, and deleted it from engine DB.


Yet it still shows in the web UI?

Any errors in the ovirt-engine.log file? If the volume is deleted from the db, 
ideally it should be deleted from the UI too. Can you go to the gluster nodes 
and check for the following:


1) gluster volume list -> should not return anything since you have 
deleted the volumes.


2) gluster peer status -> on all the nodes should show that all the 
peers are in connected state.


can you tail -f /var/log/ovirt-engine/ovirt-engine.log and gluster log 
and capture the error messages when you try deleting the volume from UI?


The log you pasted in the previous mail only gives info messages, and I 
could not get any details from it on why the volume delete is failing.




*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 2:39:31 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

The plot thickens…
I put all hosts in the cluster into maintenance mode, with the Stop 
Gluster service checkbox checked. I then deleted the 
/var/lib/glusterd/vols/test1 directory on all hosts. I then took the 
host that the test1 volume was on out of maintenance mode. Then I 
tried to remove the test1 volume from within the web UI. With no 
luck, I got the message: Could not delete Gluster Volume test1 on 
cluster 8000-1.


I went back and checked all host for the test1 directory, it is not 
on any host. Yet I still can’t remove it…


Any suggestions?


*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 1:15:22 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on 
the node t

Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread knarra

On 04/12/2017 03:35 AM, Precht, Andrew wrote:


I just noticed this in the Alerts tab: Detected deletion of volume 
test1 on cluster 8000-1, and deleted it from engine DB.


Yet it still shows in the web UI?

Any errors in the ovirt-engine.log file? If the volume is deleted from the db, 
ideally it should be deleted from the UI too. Can you go to the gluster nodes 
and check for the following:


1) gluster volume list -> should not return anything since you have 
deleted the volumes.


2) gluster peer status -> on all the nodes should show that all the 
peers are in connected state.


can you tail -f /var/log/ovirt-engine/ovirt-engine.log and gluster log 
and capture the error messages when you try deleting the volume from UI?


The log you pasted in the previous mail only gives info messages, and I 
could not get any details from it on why the volume delete is failing.




*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 2:39:31 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

The plot thickens…
I put all hosts in the cluster into maintenance mode, with the Stop 
Gluster service checkbox checked. I then deleted the 
/var/lib/glusterd/vols/test1 directory on all hosts. I then took the 
host that the test1 volume was on out of maintenance mode. Then I 
tried to remove the test1 volume from within the web UI. With no luck, 
I got the message: Could not delete Gluster Volume test1 on cluster 
8000-1.


I went back and checked all host for the test1 directory, it is not on 
any host. Yet I still can’t remove it…


Any suggestions?


*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 1:15:22 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the 
node that had the trouble volume (test1). I didn’t see any errors. So, 
I ran a tail -f on the log as I tried to remove the volume using the 
web UI. here is what was appended:


[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req" repeated 6 times between 
[2017-04-11 19:48:40.756360] and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req" repeated 20 times between 
[2017-04-11 19:48:42.238840] and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req


I’m seeing that the timestamps on these log entries do not match the 
time on the node.


The next steps
I stopped the glusterd service on the node with volume test1
I deleted it with:  rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.

I’m guessing syncing with the other nodes?
Is this because I have the Volume Option: auth allow *
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all 
nodes in the cluster individually?


thanks

--------
*From:* knarra <kna...@redhat.com>
*Sent:* Tuesday, April 11, 2017 11:51:18 AM
*To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:

Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I can not find /var/log/glusterfs/glusterd.log However, 
there is a /var/log/glusterfs/glustershd.log
can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log 
exists? if yes, can you check if there is any error present in that file ?


What happens if I follow the four steps outlined here to remove the 
volume from the node _BUT_, I do have another volume present in the 
cluster. It too is a test volume. Neither one has any data on them. 
So, data loss is not an issue.
Running those four steps will remove the volume from your cluster. If 
the volumes you have are test volumes, you could just follow the 
steps outlined to delete them (since you are not able to delete from the 
UI) and bring the cluster back into a normal state.

Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread knarra

On 04/12/2017 01:45 AM, Precht, Andrew wrote:

Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the 
node that had the trouble volume (test1). I didn’t see any errors. So, 
I ran a tail -f on the log as I tried to remove the volume using the 
web UI. here is what was appended:


[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req" repeated 6 times between 
[2017-04-11 19:48:40.756360] and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req" repeated 20 times between 
[2017-04-11 19:48:42.238840] and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req


I’m seeing that the timestamps on these log entries do not match the 
time on the node.
gluster logs are in UTC format. That is the reason you might be seeing a 
different timestamp on your node and in the gluster logs.


The next steps
I stopped the glusterd service on the node with volume test1
I deleted it with:  rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.

I’m guessing syncing with the other nodes?

yes, since you deleted it on only one node.

Is this because I have the Volume Option: auth allow *
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all 
nodes in the cluster individually?
you need to remove the directory /var/lib/glusterd/vols/test1 on all nodes 
and restart the glusterd service on all the nodes in the cluster.
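
For example, from a machine that can ssh to every node (the host names 
are placeholders):

for h in node1 node2 node3; do
  ssh root@$h 'rm -rf /var/lib/glusterd/vols/test1 && systemctl restart glusterd'
done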


thanks

----
*From:* knarra <kna...@redhat.com>
*Sent:* Tuesday, April 11, 2017 11:51:18 AM
*To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:

Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I can not find /var/log/glusterfs/glusterd.log However, 
there is a /var/log/glusterfs/glustershd.log
can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log 
exists? if yes, can you check if there is any error present in that file ?


What happens if I follow the four steps outlined here to remove the 
volume from the node _BUT_, I do have another volume present in the 
cluster. It too is a test volume. Neither one has any data on them. 
So, data loss is not an issue.
Running those four steps will remove the volume from your cluster. If 
the volumes you have are test volumes, you could just follow the 
steps outlined to delete them (since you are not able to delete from the 
UI) and bring the cluster back into a normal state.


--------
*From:* knarra <kna...@redhat.com>
*Sent:* Tuesday, April 11, 2017 10:32:27 AM
*To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:

Adding some people

On 11 Apr 2017 19:06, "Precht, Andrew" <andrew.pre...@sjlibrary.org 
<mailto:andrew.pre...@sjlibrary.org>> wrote:


Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test
gluster volume. The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove, the dialog
box prompting to confirm the deletion pops up and after I click
OK, the dialog box changes to show a little spinning wheel and
then it disappears. In the end the volume is still there.

with the latest version of glusterfs & ovirt we do not see any issue 
with deleting a volume. Can you please check the 
/var/log/glusterfs/glusterd.log file for any errors?




The test volume was distributed with two host members. One of
the hosts I was able to remove from the volume by removing the
host from the cluster. When I try to remove the remaining host
in the volume, even with the “Force Remove” box ticked, I get
this response: Cannot remove Host. Server having Gluster volume.

What to try next?

since you have already removed the volume from one host in the 
cluster and you still see it on another host, you can do the following 
to remove the volume from the other host.

Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-11 Thread knarra

On 04/11/2017 11:28 PM, Precht, Andrew wrote:

Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I can not find /var/log/glusterfs/glusterd.log However, 
there is a /var/log/glusterfs/glustershd.log
can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log 
exists? if yes, can you check if there is any error present in that file ?


What happens if I follow the four steps outlined here to remove the 
volume from the node _BUT_, I do have another volume present in the 
cluster. It too is a test volume. Neither one has any data on them. 
So, data loss is not an issue.
Running those four steps will remove the volume from your cluster. If 
the volumes you have are test volumes, you could just follow the 
steps outlined to delete them (since you are not able to delete from the UI) 
and bring the cluster back into a normal state.



*From:* knarra <kna...@redhat.com>
*Sent:* Tuesday, April 11, 2017 10:32:27 AM
*To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:

Adding some people

On 11 Apr 2017 19:06, "Precht, Andrew" <andrew.pre...@sjlibrary.org 
<mailto:andrew.pre...@sjlibrary.org>> wrote:


Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test
gluster volume. The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove, the dialog
box prompting to confirm the deletion pops up and after I click
OK, the dialog box changes to show a little spinning wheel and
then it disappears. In the end the volume is still there.

with the latest version of glusterfs & ovirt we do not see any issue 
with deleting a volume. Can you please check the 
/var/log/glusterfs/glusterd.log file for any errors?




The test volume was distributed with two host members. One of the
hosts I was able to remove from the volume by removing the host
from the cluster. When I try to remove the remaining host in the
volume, even with the “Force Remove” box ticked, I get this
response: Cannot remove Host. Server having Gluster volume.

What to try next?

since you have already removed the volume from one host in the cluster 
and you still see it on another host, you can do the following to 
remove the volume from the other host.


1) Login to the host where the volume is present.
2) cd to /var/lib/glusterd/vols
3) rm -rf <volume_name>
4) Restart glusterd on that host.

And before doing the above make sure that you do not have any other 
volume present in the cluster.


The above steps should not be run on a production system as you might 
lose the volume and data.


Now removing the host from the UI should succeed.



P.S. I’ve tried to join this user group several times in the
past, with no response.
Is it possible for me to join this group?

Regards,
Andrew



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-11 Thread knarra

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:

Adding some people

On 11 Apr 2017 19:06, "Precht, Andrew" wrote:


Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test
gluster volume. The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove, the dialog
box prompting to confirm the deletion pops up and after I click
OK, the dialog box changes to show a little spinning wheel and
then it disappears. In the end the volume is still there.

with the latest version of glusterfs & ovirt we do not see any issue 
with deleting a volume. Can you please check the 
/var/log/glusterfs/glusterd.log file for any errors?




The test volume was distributed with two host members. One of the
hosts I was able to remove from the volume by removing the host
from the cluster. When I try to remove the remaining host in the
volume, even with the “Force Remove” box ticked, I get this
response: Cannot remove Host. Server having Gluster volume.

What to try next?

since you have already removed the volume from one host in the cluster 
and you still see it on another host, you can do the following to remove 
the volume from the other host.


1) Login to the host where the volume is present.
2) cd to /var/lib/glusterd/vols
3) rm -rf <volume_name>
4) Restart glusterd on that host.

And before doing the above make sure that you do not have any other 
volume present in the cluster.


The above steps should not be run on a production system as you might lose 
the volume and data.


Now removing the host from the UI should succeed.



P.S. I’ve tried to join this user group several times in the past,
with no response.
Is it possible for me to join this group?

Regards,
Andrew



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine-image/iso-uploader issues

2017-03-31 Thread knarra

On 03/31/2017 12:44 PM, knarra wrote:

On 03/29/2017 07:20 AM, Bryan Sockel wrote:

Guys,
I have recently set up a new oVirt environment, and I have created a 
gluster share for both my Export and ISO domains. Every time I try 
to use either command, I get an access denied error. Here is the 
verbose output from the image import:

DEBUG: API Vendor(ovirt.org)API Version(4.1.0)
DEBUG: id=deceadaa-bfce-4d63-8726-0128be8b3a27 
address=vs-host-colo-2-gluster path=/export

DEBUG: local NFS mount point is /tmp/tmpne0s1q
DEBUG: NFS mount command (/bin/mount -t nfs -o rw,sync,soft 
vs-host-colo-2-gluster:/export /tmp/tmpne0s1q)
DEBUG: /bin/mount -t nfs -o rw,sync,soft 
vs-host-colo-2-gluster:/export /tmp/tmpne0s1q
DEBUG: _cmds(['/bin/mount', '-t', 'nfs', '-o', 'rw,sync,soft', 
'vs-host-colo-2-gluster:/export', '/tmp/tmpne0s1q'])

DEBUG: returncode(32)
DEBUG: STDOUT()
DEBUG: STDERR(mount.nfs: access denied by server while mounting 
vs-host-colo-2-gluster:/export

)
ERROR: mount.nfs: access denied by server while mounting 
vs-host-colo-2-gluster:/export

DEBUG: /bin/umount -t nfs -f  /tmp/tmpne0s1q
DEBUG: /bin/umount -t nfs -f  /tmp/tmpne0s1q
DEBUG: _cmds(['/bin/umount', '-t', 'nfs', '-f', '/tmp/tmpne0s1q'])
DEBUG: returncode(32)
DEBUG: STDOUT()
DEBUG: STDERR(umount: /tmp/tmpne0s1q: not mounted
)
DEBUG: umount: /tmp/tmpne0s1q: not mounted
Volumes where created as follows:
gluster volume create iso 
vs-host-colo-2-gluster.altn.int:/gluster/iso/brick1
gluster volume set iso storage.owner-uid 36 && gluster volume set iso 
storage.owner-gid 36

gluster volume start iso
gluster volume create 
export vs-host-colo-2-gluster.altn.int:/gluster/iso/brick1
gluster volume set export storage.owner-uid 36 && gluster volume set 
export storage.owner-gid 36

gluster volume start export
Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Bryan,

Can you please give the output of gluster volume info. Can you 
check if nfs is enabled on the volume ? If nfs is enabled on the 
volume you should be able to mount it.


Thanks

kasturi.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


If NFS is not enabled on the volume, engine-iso-uploader does not work, 
as it uses NFS, and you will have to copy the image via ssh.
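
A sketch of both options (the volume name "iso" is taken from your 
commands; the ISO-domain path is an assumption for illustration -- ISO 
images live under images/11111111-1111-1111-1111-111111111111 inside the 
domain):

# check / enable gluster's built-in NFS server on the volume
gluster volume get iso nfs.disable
gluster volume set iso nfs.disable off

# or bypass NFS and copy the image over ssh, then fix ownership for vdsm
scp some.iso root@server:/path/to/iso-domain/<sd-uuid>/images/11111111-1111-1111-1111-111111111111/
ssh root@server 'chown 36:36 /path/to/iso-domain/<sd-uuid>/images/11111111-1111-1111-1111-111111111111/some.iso'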


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine-image/iso-uploader issues

2017-03-31 Thread knarra

On 03/29/2017 07:20 AM, Bryan Sockel wrote:

Guys,
I have recently set up a new oVirt environment, and I have created a 
gluster share for both my Export and ISO domains. Every time I try to 
use either command, I get an access denied error. Here is the verbose 
output from the image import:

DEBUG: API Vendor(ovirt.org)API Version(4.1.0)
DEBUG: id=deceadaa-bfce-4d63-8726-0128be8b3a27 
address=vs-host-colo-2-gluster path=/export

DEBUG: local NFS mount point is /tmp/tmpne0s1q
DEBUG: NFS mount command (/bin/mount -t nfs -o rw,sync,soft 
vs-host-colo-2-gluster:/export /tmp/tmpne0s1q)
DEBUG: /bin/mount -t nfs -o rw,sync,soft 
vs-host-colo-2-gluster:/export /tmp/tmpne0s1q
DEBUG: _cmds(['/bin/mount', '-t', 'nfs', '-o', 'rw,sync,soft', 
'vs-host-colo-2-gluster:/export', '/tmp/tmpne0s1q'])

DEBUG: returncode(32)
DEBUG: STDOUT()
DEBUG: STDERR(mount.nfs: access denied by server while mounting 
vs-host-colo-2-gluster:/export

)
ERROR: mount.nfs: access denied by server while mounting 
vs-host-colo-2-gluster:/export

DEBUG: /bin/umount -t nfs -f  /tmp/tmpne0s1q
DEBUG: /bin/umount -t nfs -f  /tmp/tmpne0s1q
DEBUG: _cmds(['/bin/umount', '-t', 'nfs', '-f', '/tmp/tmpne0s1q'])
DEBUG: returncode(32)
DEBUG: STDOUT()
DEBUG: STDERR(umount: /tmp/tmpne0s1q: not mounted
)
DEBUG: umount: /tmp/tmpne0s1q: not mounted
Volumes where created as follows:
gluster volume create iso 
vs-host-colo-2-gluster.altn.int:/gluster/iso/brick1
gluster volume set iso storage.owner-uid 36 && gluster volume set iso 
storage.owner-gid 36

gluster volume start iso
gluster volume create 
export vs-host-colo-2-gluster.altn.int:/gluster/iso/brick1
gluster volume set export storage.owner-uid 36 && gluster volume set 
export storage.owner-gid 36

gluster volume start export
Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi Bryan,

Can you please give the output of gluster volume info. Can you 
check if nfs is enabled on the volume ? If nfs is enabled on the volume 
you should be able to mount it.


Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-21 Thread knarra

On 03/21/2017 10:52 AM, Ian Neilsen wrote:

knara

Looks like your conf is incorrect for mnt option.


Hi Ian,

    mnt_options should be mnt_options=backup-volfile-servers=<server2>:<server3>, 
and this is how we test it.


Thanks
kasturi.

It should be I believe; mnt_options=backupvolfile-server=server name

not

mnt_options=backup-volfile-servers=host2

If your DNS isn't working or your hosts file is incorrect, this will 
prevent it as well.




On 21 March 2017 at 03:30, /dev/null <devn...@linuxitil.org 
<mailto:devn...@linuxitil.org>> wrote:


Hi kasturi,

thank you. I tested it and it seems not to work; even after rebooting,
the current mount does not show the mnt_options, nor does the
switchover work.

[root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.2.1
iqn=
conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
user=
host_id=2
bridge=ovirtmgmt
metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
spUUID=----
mnt_options=backup-volfile-servers=host2
fqdn=ovirt.test.lab
portal=
vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
domainType=glusterfs
port=
console=qxl
ca_subject="C=EN, L=Test, O=Test, CN=Test"
password=
vmid=272942f3-99b9-48b9-aca4-19ec852f6874
lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
vdsm_use_ssl=true
storage=host1:/gvol0
conf=/var/run/ovirt-hosted-engine-ha/vm.conf


[root@host2 ~]# mount | grep gvol0
host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type
fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

Any suggestion?

I will try an answerfile-install as well later, but it was helpful
to know, where to set this.

Thanks & best regards
On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote:
> On 03/20/2017 05:09 AM, /dev/null wrote:

Hi,

how do I make the hosted_storage aware of gluster server failure? In 
--deploy I cannot provide backup-volfile-servers. In 
/etc/ovirt-hosted-engine/hosted-engine.conf there is an mnt_options 
line, but I read 
(https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
that these settings get lost during deployment on secondary servers.

Is there an official way to deal with that? Should this option be set 
manually on all nodes?

Thanks!

/dev/null

Hi,

   I think in the above patch they are just hiding the query for
mount_options, but I think all the code is still present and you
should not lose mount options during additional host deployment.
For more info you can refer to [1].

You can set this option manually on all nodes by editing
/etc/ovirt-hosted-engine/hosted-engine.conf. The following steps will
help you achieve this:

1) Move each host to maintenance, edit the file
'/etc/ovirt-hosted-engine/hosted-engine.conf'.
2) set mnt_options = backup-volfile-servers=<server2>:<server3>
3) restart the services 'systemctl restart ovirt-ha-agent' ;
'systemctl restart ovirt-ha-broker'
4) Activate the node.

Repeat the above steps for all the nodes in the cluster.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2

Hope this helps!

Thanks
kasturi

--
This message has been scanned for viruses and other dangerous content
and is - assuming up-to-date virus scanners - clean.
For all your IT requirements visit: http://www.transtec.co.uk

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
This e-mail was scanned for viruses and dangerous attachments by
MailScanner <http://www.mailscanner.info/> and is probably virus-free.
MailScanner thanks transtec <http://www.transtec.de/> for their kind support.

Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-20 Thread knarra

On 03/20/2017 05:09 AM, /dev/null wrote:

Hi,

how do I make the hosted_storage aware of gluster server failure? In --deploy I 
cannot provide backup-volfile-servers. In /etc/ovirt-hosted-engine/hosted-engine.conf 
there is an mnt_options line, but I read
(https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
that these settings get lost during deployment on secondary servers.

Is there an official way to deal with that? Should this option be set manually 
on all nodes?

Thanks!

/dev/null

Hi,

   I think in the above patch they are just hiding the query for 
mount_options, but I think all the code is still present and you should 
not lose mount options during additional host deployment. For more info 
you can refer to [1].


You can set this option manually on all nodes by editing 
/etc/ovirt-hosted-engine/hosted-engine.conf. The following steps will help 
you achieve this:

1) Move each host to maintenance, edit the file 
'/etc/ovirt-hosted-engine/hosted-engine.conf'.

2) set mnt_options = backup-volfile-servers=<server2>:<server3>
3) restart the services 'systemctl restart ovirt-ha-agent' ; 'systemctl 
restart ovirt-ha-broker'

4) Activate the node.

Repeat the above steps for all the nodes in the cluster.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2
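
Concretely, the per-host change is one line in the config plus a service 
restart (the host names below are examples, not from this thread):

# in /etc/ovirt-hosted-engine/hosted-engine.conf
mnt_options=backup-volfile-servers=host2:host3

# then, after editing:
systemctl restart ovirt-ha-agent ovirt-ha-broker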

Hope this helps !!

Thanks
kasturi



--
This message has been scanned for viruses and other dangerous content
and is - assuming up-to-date virus scanners - clean.
For all your IT requirements visit: http://www.transtec.co.uk



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.1.0 - Gluster - Hosted Engine deploy howto

2017-03-15 Thread knarra

Hi Roberto,

    The Hosted Engine tab is visible only after the hosted_storage domain 
and the hosted_engine VM have been imported into the cluster. Once the 
first node is installed, you will need to create a storage domain on a 
vmstore or data volume. Once one of these storage domains is created, the 
hosted_storage domain and the hosted_engine VM are imported automatically. 
After this you will be able to see the Hosted Engine tab in the New Host 
dialog box.


Thanks
kasturi

On 03/15/2017 11:09 PM, NUNIN Roberto wrote:


Hi

Our test environment:

oVirt Engine : 4.1.0.3-1-el7.centos

Gluster replica 3 with bricks on all nodes.

6 nodes :

OS Version: RHEL - 7 - 3.1611.el7.centos

Kernel Version: 3.10.0 - 514.10.2.el7.x86_64

KVM Version: 2.6.0 - 28.el7_3.3.1

LIBVIRT Version: libvirt-2.0.0-10.el7_3.5

VDSM Version: vdsm-4.19.4-1.el7.centos

GlusterFS Version: glusterfs-3.8.9-1.el7

I have successfully added all the nodes to the default cluster.

Now I need to activate all the remaining 5 hosts for the hosted 
engine, but I don't have the "Hosted Engine" tab in the host properties, 
so I can't deploy HE via the web UI.


Using command line, I have an error :

2017-03-15 16:58:48 ERROR otopi.plugins.gr_he_setup.storage.storage 
storage._abortAdditionalHosts:189 Setup of additional hosts using this 
software is not allowed anymore. Please use the engine web interface 
to deploy any additional hosts.


RuntimeError: *Setup of additional hosts using this software is not 
allowed anymore. Please use the engine web interface to deploy any 
additional hosts*.


2017-03-15 16:58:48 ERROR otopi.context context._executeMethod:151 
Failed to execute stage 'Environment customization': Setup of 
additional hosts using this software is not allowed anymore. Please 
use the engine web interface to deploy any additional hosts.


2017-03-15 16:58:48 DEBUG otopi.context context.dumpEnvironment:770 
ENV BASE/exceptionInfo=list:'[(, 
RuntimeError('Setup of additional hosts using this software is not 
allowed anymore. Please use the engine web interface to deploy any 
additional hosts.',), )]'


2017-03-15 16:58:48 DEBUG otopi.context context.dumpEnvironment:770 
ENV BASE/exceptionInfo=list:'[(, 
RuntimeError('Setup of additional hosts using this software is not 
allowed anymore. Please use the engine web interface to deploy any 
additional hosts.',), )]'


The hosted_storage is still locked.

How to enable the “Hosted Engine” Host property in the web UI or to 
force deploy via CLI ?


If logs are needed, please let me know which one. Attached 
agent-ha.log (from first node) and ovirt-hosted-engine-setup.log (from 
host where attempt to deploy hosted-engine via CLI).


Apart from this, the test environment seems to be working as expected
(Gluster OK, installed VM OK).


TIA

*Roberto *






This message is for the designated recipient only and may contain 
privileged, proprietary, or otherwise private information. If you have 
received it in error, please notify the sender immediately, deleting 
the original and all copies and destroying any hard copies. Any other 
use is strictly prohibited and may be unlawful.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] Gluster storage expansion

2017-01-19 Thread knarra

On 01/19/2017 09:15 PM, Goorkate, B.J. wrote:

Hi all,

I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
storage domain for the virtual
machines.

Is there a way to use storage in the nodes which are no member of the replica-3 
storage domain?
Or do I need another node and make a second replica-3 gluster storage domain?
Since you have 5 nodes in your cluster, you could add a sixth node and
create a replica-3 Gluster storage domain out of the three nodes that
are not members of the already existing replica-3 storage domain (a
command sketch follows below).
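For illustration, carving a second replica-3 volume out of the three
spare nodes might look like this (a sketch only; node names and brick
paths are placeholders, not values from this thread):

  gluster peer probe node6
  gluster volume create vmstore2 replica 3 \
      node4:/gluster/brick/vmstore2 \
      node5:/gluster/brick/vmstore2 \
      node6:/gluster/brick/vmstore2
  gluster volume start vmstore2
  # then add vmstore2 as a new GlusterFS storage domain from the oVirt UI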


In other words: I would like to expand the existing storage domain by adding 
more nodes, rather
than adding disks to the existing gluster nodes. Is that possible?

Thanks!

Regards,

Bertjan



--


This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.

Please consider the environment before printing this e-mail.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





[ovirt-users] Master storage domain in locked state

2017-01-12 Thread knarra

Hi,

    I have three glusterfs storage domains present on my system: data
(master), vmstore and engine. I tried moving the master storage domain
to maintenance state; it was stuck in "preparing for maintenance" for a
long time and then I rebooted my hosts. Now I see that the master domain
moves to maintenance state, but vmstore, which is master now, is stuck
in locked state. Any idea how to get out of this situation?


Any help is much appreciated.

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA Score

2017-01-02 Thread knarra

On 01/02/2017 02:59 PM, Simone Tiraboschi wrote:



On Mon, Jan 2, 2017 at 9:23 AM, knarra <kna...@redhat.com> wrote:


On 01/01/2017 06:23 PM, Doron Fediuck wrote:

Care to share the HE agents log files?

Hi Doron,

I have collected  logs from all the machines and placed it in
the link below.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/ovirt_logs/


From the logs it seems that host 1 and host 3 are stable at 3400
points, while host 2 periodically goes down to 1800 due to:


MainThread::INFO::2017-01-01
17:26:33,324::states::128::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Penalizing score by 1600 due to gateway status

Thanks Simone. So I understand that the problem is due to a bad gateway
on Host2.
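If you want to confirm that from the host itself, something along
these lines should do (a sketch; the agent pings the gateway address
recorded in hosted-engine.conf, and the actual address is whatever
your setup wrote there):

  grep '^gateway' /etc/ovirt-hosted-engine/hosted-engine.conf
  ping -c 3 <gateway-address-from-the-file>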




Thanks
kasturi

On Fri, Dec 30, 2016 at 9:17 AM, knarra <kna...@redhat.com> wrote:

Hi,

     I have latest 4.1 installed and I see that the HA score on hosts
keeps going to 0 and comes back to 3400. This behavior is something
which I am observing with 4.1, and I see that it takes a considerable
amount of time to get back to the normal state. Any reason why it
takes such a long time?

Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] HA Score

2017-01-02 Thread knarra

On 01/01/2017 06:23 PM, Doron Fediuck wrote:

Care to share the HE agents log files?

Hi Doron,

I have collected  logs from all the machines and placed it in the 
link below.


http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/ovirt_logs/

Thanks
kasturi

On Fri, Dec 30, 2016 at 9:17 AM, knarra <kna...@redhat.com> wrote:

Hi,

     I have latest 4.1 installed and I see that the HA score on hosts
keeps going to 0 and comes back to 3400. This behavior is something
which I am observing with 4.1, and I see that it takes a considerable
amount of time to get back to the normal state. Any reason why it
takes such a long time?

Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] Issues with RHV 4.1

2016-12-30 Thread knarra

Please ignore this mail. Sorry for the chaos caused :-(

On 12/30/2016 03:52 PM, knarra wrote:

Hi,

I have latest RHV 4.1 installed on an HC stack. My app VMs run on
Host1 and the HE VM runs on Host2. I have a test case where I bring
down glusternw on Host1, and I expect all app VMs to be migrated to
another node. But when I run the above test case I run into the
following issues.


1) I expect the HE VM not to go down, since everything works fine on
the host where the HE VM is running. But I see that the HE VM goes
down and comes back up. Why does this happen?


2) Sometimes I see that the host where the HE VM runs restarts, and I
am not sure why it reboots. I checked /var/log/messages and I see the
errors below, but I am still unable to figure out why the system
restarts. Due to this the HE VM is unavailable for some time.


https://paste.fedoraproject.org/514976/83092116/

Any idea why the host system reboots here?

3) I see another issue being logged there related to
ovirt-imageio-daemon, though not relevant to the above test case.


https://paste.fedoraproject.org/514978/83092677/

Why does it throw an IO error and why is this Traceback logged?

Thanks

kasturi.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





[ovirt-users] Issues with RHV 4.1

2016-12-30 Thread knarra

Hi,

I have latest RHV 4.1 installed on an HC stack. My app VMs run on
Host1 and the HE VM runs on Host2. I have a test case where I bring
down glusternw on Host1, and I expect all app VMs to be migrated to
another node. But when I run the above test case I run into the
following issues.


1) I expect the HE VM not to go down, since everything works fine on
the host where the HE VM is running. But I see that the HE VM goes
down and comes back up. Why does this happen?


2) Sometimes I see that the host where the HE VM runs restarts, and I
am not sure why it reboots. I checked /var/log/messages and I see the
errors below, but I am still unable to figure out why the system
restarts. Due to this the HE VM is unavailable for some time.


https://paste.fedoraproject.org/514976/83092116/

Any idea why the host system reboots here?

3) I see another issue being logged there related to
ovirt-imageio-daemon, though not relevant to the above test case.


https://paste.fedoraproject.org/514978/83092677/

Why does it throw an IO error and why is this Traceback logged?

Thanks

kasturi.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] HA Score

2016-12-29 Thread knarra

Hi,

I have latest 4.1 installed and i see that HA score on hosts keeps 
going to 0  and comes back to  3400. This behavior is something which i 
am observing with 4.1  and i see that it takes considerable amount of 
time to get back to normal state. Any reason why it takes such a long time?


Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to import Hosted Engine VM

2016-12-20 Thread knarra

On 12/20/2016 01:47 PM, Simone Tiraboschi wrote:



On Tue, Dec 20, 2016 at 7:47 AM, knarra <kna...@redhat.com> wrote:


Hi,

    I have latest master installed and I see that the Hosted Engine VM
fails to import. Below are the logs I see in the engine log. Can
someone help me understand why this happens?


It's a change in VDSM storage APIs; look for:
[ovirt-devel] Change in VDSM API in master (VolumeInfo.lease)

Probably you are using an up-to-date vdsm against a few-days-old
engine appliance (we still have some trouble re-building the engine
appliance on CentOS 7.3), hence the issue.

Could you please run
  yum update "ovirt-*-setup*"
  engine-setup
on your engine VM to get an up to date engine?
Once up to date, the engine should be able to recover by itself.
Thanks Simone. I updated my engine as you suggested and I see that the
Hosted Engine VM now appears in the UI.




2016-12-20 06:46:02,291Z INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] START,
GetImageInfoVDSComman
d( GetImageInfoVDSCommandParameters:{runAsync='true',
storagePoolId='0001-0001-0001-0001-0311',
ignoreFailoverLimit='false', storageDomainId='4830f5b2-5a7d-4a89-
8fc9-8911134035e4',
imageGroupId='0dec26c2-59c8-4d7f-adc0-6e4c878028ee',
imageId='e114-9f08-4e71-9b3a-d6a93273fbd3'}), log id: 78f8a633
2016-12-20 06:46:02,291Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] START,
GetVolumeInfoVDSComm
and(HostName = hosted_engine1,
GetVolumeInfoVDSCommandParameters:{runAsync='true',
hostId='4c4a3633-2c2a-49c9-be06-78a21a4a2584',
storagePoolId='0001-0001-0001-0001-
0311', storageDomainId='4830f5b2-5a7d-4a89-8fc9-8911134035e4',
imageGroupId='0dec26c2-59c8-4d7f-adc0-6e4c878028ee',
imageId='e114-9f08-4e71-9b3a-d6a93273fbd3'}), log
 id: 62a0b308
2016-12-20 06:46:02,434Z ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed building
DiskImage:
No enum const

org.ovirt.engine.core.common.businessentities.LeaseState.{owners=[Ljava.lang.Object;@28beccfa,
version=2}
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Command
'org.ovirt.engine.c
ore.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand' return value '
VolumeInfoReturn:{status='Status [code=0, message=Done]'}
status = OK
domain = 4830f5b2-5a7d-4a89-8fc9-8911134035e4
voltype = LEAF
description = Hosted Engine Image
parent = ----
format = RAW
generation = 0
image = 0dec26c2-59c8-4d7f-adc0-6e4c878028ee
ctime = 1482153085
disktype = 2
legality = LEGAL
mtime = 0
apparentsize = 53687091200
children:
[]
pool =
capacity = 53687091200
uuid = e114-9f08-4e71-9b3a-d6a93273fbd3
truesize = 2761210368
type = SPARSE
lease:
owners:
[1]
version = 2

'
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] HostName =
hosted_engine1
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] FINISH,
GetVolumeInfoVDSCommand, log id: 62a0b308
2016-12-20 06:46:02,434Z ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed to get the
volume information, marking as FAILED
2016-12-20 06:46:02,434Z INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] FINISH,
GetImageInfoVDSCommand, log id: 78f8a633
2016-12-20 06:46:02,434Z WARN
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Validation of
action 'ImportVm' failed for user SYSTEM. Reasons:
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
2016-12-20 06:46:02,435Z INFO
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Lock freed to
object
'EngineLock:{exclusiveLocks='[89681893-94fe-4366-be6d-15141ff2b365=<VM,
ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>,
HostedEngine=<VM_NAME, ACTION_TYPE_FAILED_NAME_ALREADY_USED>]',
sharedLocks='[89681893-94fe-4366-be6d-15141ff2b365=<REMOTE_VM,
ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
2016-12-20 06:46:02,435Z ERROR
[org.ov

[ovirt-users] Failed to import Hosted Engine VM

2016-12-19 Thread knarra

Hi,

    I have latest master installed and I see that the Hosted Engine VM
fails to import. Below are the logs I see in the engine log. Can
someone help me understand why this happens?



2016-12-20 06:46:02,291Z INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] START, GetImageInfoVDSComman
d( GetImageInfoVDSCommandParameters:{runAsync='true', 
storagePoolId='0001-0001-0001-0001-0311', 
ignoreFailoverLimit='false', storageDomainId='4830f5b2-5a7d-4a89-
8fc9-8911134035e4', imageGroupId='0dec26c2-59c8-4d7f-adc0-6e4c878028ee', 
imageId='e114-9f08-4e71-9b3a-d6a93273fbd3'}), log id: 78f8a633
2016-12-20 06:46:02,291Z INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] START, GetVolumeInfoVDSComm
and(HostName = hosted_engine1, 
GetVolumeInfoVDSCommandParameters:{runAsync='true', 
hostId='4c4a3633-2c2a-49c9-be06-78a21a4a2584', 
storagePoolId='0001-0001-0001-0001-
0311', storageDomainId='4830f5b2-5a7d-4a89-8fc9-8911134035e4', 
imageGroupId='0dec26c2-59c8-4d7f-adc0-6e4c878028ee', 
imageId='e114-9f08-4e71-9b3a-d6a93273fbd3'}), log

 id: 62a0b308
2016-12-20 06:46:02,434Z ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed building DiskImage:
No enum const 
org.ovirt.engine.core.common.businessentities.LeaseState.{owners=[Ljava.lang.Object;@28beccfa, 
version=2}
2016-12-20 06:46:02,434Z INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Command 'org.ovirt.engine.c

ore.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand' return value '
VolumeInfoReturn:{status='Status [code=0, message=Done]'}
status = OK
domain = 4830f5b2-5a7d-4a89-8fc9-8911134035e4
voltype = LEAF
description = Hosted Engine Image
parent = ----
format = RAW
generation = 0
image = 0dec26c2-59c8-4d7f-adc0-6e4c878028ee
ctime = 1482153085
disktype = 2
legality = LEGAL
mtime = 0
apparentsize = 53687091200
children:
[]
pool =
capacity = 53687091200
uuid = e114-9f08-4e71-9b3a-d6a93273fbd3
truesize = 2761210368
type = SPARSE
lease:
owners:
[1]
version = 2

'
2016-12-20 06:46:02,434Z INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] HostName = hosted_engine1
2016-12-20 06:46:02,434Z INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] FINISH, 
GetVolumeInfoVDSCommand, log id: 62a0b308
2016-12-20 06:46:02,434Z ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed to get the volume 
information, marking as FAILED
2016-12-20 06:46:02,434Z INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] FINISH, 
GetImageInfoVDSCommand, log id: 78f8a633
2016-12-20 06:46:02,434Z WARN 
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Validation of action 
'ImportVm' failed for user SYSTEM. Reasons: 
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
2016-12-20 06:46:02,435Z INFO 
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Lock freed to object 
'EngineLock:{exclusiveLocks='[89681893-94fe-4366-be6d-15141ff2b365=, 
HostedEngine=]', 
sharedLocks='[89681893-94fe-4366-be6d-15141ff2b365=]'}'
2016-12-20 06:46:02,435Z ERROR 
[org.ovirt.engine.core.bll.HostedEngineImporter] 
(org.ovirt.thread.pool-6-thread-48) [77f83e0f] Failed importing the 
Hosted Engine VM
2016-12-20 06:46:04,436Z INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(DefaultQuartzScheduler4) [2d8b8a56] FINISH, 
GlusterServersListVDSCommand, return: [10.70.36.79/23:CONNECTED, 
10.70.36.80:CONNECTED, 10.70.36.81:CONNECTED], log id: 617781b7


Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine won't deploy

2016-12-14 Thread knarra

On 12/15/2016 03:35 AM, Gervais de Montbrun wrote:

Hi all,

I had to reinstall one of my hosts today and I noticed an issue. The 
error message was:


Ovirt2:

  - Cannot edit Host. You are using an unmanaged hosted engine VM.
    Please upgrade the cluster level to 3.6 and wait for the
    hosted engine storage domain to be properly imported.

I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data 
Center say that they are running in 4.0 compatibility mode, so I don't 
understand this error. I did get the host setup by running 
`hosted-engine --deploy` and walking through the command line options. 
Alarmingly, I was warned that this is deprecated and will not be 
possible in oVirt 4.1.


Any suggestions as to what I should do to sort out my issue?

Cheers,
Gervais

Hi Gervais,

    Have you imported hosted_storage into your environment? I hit this
issue when I did not have the hosted_storage domain and the
hosted_engine VM imported into my setup.


Thanks
kasturi






___
Users mailing list
Users@ovirt.org
http://lists.phx.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm.conf on one of the node is missing

2016-11-24 Thread knarra

On 11/24/2016 07:47 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 3:06 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/24/2016 07:27 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 2:39 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/24/2016 06:56 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 2:08 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/24/2016 06:15 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 1:26 PM, knarra <kna...@redhat.com> wrote:

Hi,

I have three nodes with glusterfs as the storage domain. For some
reason I see that vm.conf from /var/run/ovirt-hosted-engine-ha is
missing, and due to this on one of my hosts I see Hosted Engine
HA: Not Active. Once I copy the file from some other node and
restart the ovirt-ha-broker and ovirt-ha-agent services everything
works fine. But then this happens again. Can someone please help me
identify why this happens? Below is the log I see in
ovirt-ha-agent.logs.


https://paste.fedoraproject.org/489120/79990345/


Once the engine correctly imported the hosted-engine
storage domain, a couple of OVF_STORE volumes will
appear there.
Every modification to the engine VM configuration will
be written by the engine into that OVF_STORE, so all
the ovirt-ha-agent running on the hosted-engine hosts
will be able to re-start the engine VM with a coherent
configuration.

Till the engine imports the hosted-engine storage
domain, ovirt-ha-agent will fall back to the initial
vm.conf.

In your case the OVF_STORE volume is there,
but the agent fails to extract the engine VM configuration:
MainThread::INFO::2016-11-24

17:55:04,914::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-11-24

17:55:04,919::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:

/rhev/data-center/mnt/glusterSD/10.70.36.79:_engine/27f054c3-c245-4039-b42a-c28b37043016/images/fdf49778-9a06-49c6-bf7a-a0f12425911c/8c954add-6bcf-47f8-ac2e-4c85fc3f8699
MainThread::ERROR::2016-11-24

17:55:04,928::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Unable to extract HEVM OVF

So it tries to roll back to the initial vm.conf, but that
one also seems to be missing some values, so the
agent is failing:
MainThread::ERROR::2016-11-24

17:55:04,974::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: ''Configuration value not found:
file=/var/run/ovirt-hosted-engine-ha/vm.conf,
key=memSize'' - trying to restart agent

Both of the issues seem storage related; could you
please share your gluster logs?


Thanks

kasturi



Hi Simone,

Below [1] is the link for the sosreports of the
first two hosts. The third host has some issue; once it
is up I will give the sosreport from there as well.


And the host where you see the initial issue was the third one?

It is on the first host.


It seems that host1 is failing to read from the hosted-engine
storage domain:

[2016-11-24 12:33:43.678467] W [MSGID: 114031]
[client-rpc-fops.c:2938:client3_3_lookup_cbk] 0-engine-client-2:
remote operation failed. Path: /
(----0001) [Transport endpoint is not
connected]
[2016-11-24 12:33:43.678747] E
[rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f077eba1642]
(-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f077e96775e]
(-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f077e96786e]
(-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f077e968fc4]
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x7f077e9698a0]
) 0-engine-client-2: forced unwinding frame type(GlusterFS
3.3) op(LOOKUP(27)) called at 2016-11-24 12:33:07.495178
(xid=0x82a1c)
[2016-11-24 12:33:43.678982] E
[rpc-clnt.c:365:saved_frames_unwind] (-->
/lib64/libglu
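The "Transport endpoint is not connected" errors above usually mean the
client lost its connection to one of the bricks; a quick first check
from any host could be (a sketch using standard gluster commands, with
the volume name taken from the log above):

  gluster peer status
  gluster volume status engine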

Re: [ovirt-users] vm.conf on one of the node is missing

2016-11-24 Thread knarra

On 11/24/2016 07:27 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 2:39 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/24/2016 06:56 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 2:08 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/24/2016 06:15 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 1:26 PM, knarra <kna...@redhat.com> wrote:

Hi,

I have three nodes with glusterfs as the storage domain. For some
reason I see that vm.conf from /var/run/ovirt-hosted-engine-ha is
missing, and due to this on one of my hosts I see Hosted Engine HA:
Not Active. Once I copy the file from some other node and restart
the ovirt-ha-broker and ovirt-ha-agent services everything works
fine. But then this happens again. Can someone please help me
identify why this happens? Below is the log I see in
ovirt-ha-agent.logs.


https://paste.fedoraproject.org/489120/79990345/


Once the engine correctly imported the hosted-engine storage
domain, a couple of OVF_STORE volumes will appear there.
Every modification to the engine VM configuration will be
written by the engine into that OVF_STORE, so all the
ovirt-ha-agent running on the hosted-engine hosts will be
able to re-start the engine VM with a coherent configuration.

Till the engine imports the hosted-engine storage domain,
ovirt-ha-agent will fall back to the initial vm.conf.

In your case the OVF_STORE volume is there,
but the agent fails to extract the engine VM configuration:
MainThread::INFO::2016-11-24

17:55:04,914::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-11-24

17:55:04,919::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:

/rhev/data-center/mnt/glusterSD/10.70.36.79:_engine/27f054c3-c245-4039-b42a-c28b37043016/images/fdf49778-9a06-49c6-bf7a-a0f12425911c/8c954add-6bcf-47f8-ac2e-4c85fc3f8699
MainThread::ERROR::2016-11-24

17:55:04,928::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Unable to extract HEVM OVF

So it tries to roll back to the initial vm.conf, but that one
also seems to be missing some values, so the agent is failing:
MainThread::ERROR::2016-11-24

17:55:04,974::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: ''Configuration value not found:
file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize''
- trying to restart agent

Both of the issues seem storage related; could you please
share your gluster logs?


Thanks

kasturi



Hi Simone,

Below [1] is the link for the sosreports of the first two
hosts. The third host has some issue; once it is up I will give
the sosreport from there as well.


And the host where you see the initial issue was the third one?

It is on the first host.


It seems that host1 is failing to read from the hosted-engine
storage domain:


[2016-11-24 12:33:43.678467] W [MSGID: 114031] 
[client-rpc-fops.c:2938:client3_3_lookup_cbk] 0-engine-client-2: 
remote operation failed. Path: / 
(----0001) [Transport endpoint is not 
connected]
[2016-11-24 12:33:43.678747] E [rpc-clnt.c:365:saved_frames_unwind] 
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f077eba1642] 
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f077e96775e] 
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f077e96786e] 
(--> 
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f077e968fc4] 
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x7f077e9698a0] ) 
0-engine-client-2: forced unwinding frame type(GlusterFS 3.3) 
op(LOOKUP(27)) called at 2016-11-24 12:33:07.495178 (xid=0x82a1c)
[2016-11-24 12:33:43.678982] E [rpc-clnt.c:365:saved_frames_unwind] 
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f077eba1642] 
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f077e96775e] 
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f077e96786e] 
(--> 
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f077e968fc4] 
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x7f077e9698a0] ) 
0-engine-client-2: forced unwinding frame type(GlusterFS 3.3) 
op(LOOKUP(27)) called at 2016-11-24 12:33:08.770637 (xid=0x82a1d)
[2016-11-24 12:33:43.679001] W [MSGID: 114031] 
[client-rp

Re: [ovirt-users] vm.conf on one of the node is missing

2016-11-24 Thread knarra

On 11/24/2016 06:56 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 2:08 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/24/2016 06:15 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 1:26 PM, knarra <kna...@redhat.com> wrote:

Hi,

I have three nodes with glusterfs as the storage domain. For some
reason I see that vm.conf from /var/run/ovirt-hosted-engine-ha is
missing, and due to this on one of my hosts I see Hosted Engine HA:
Not Active. Once I copy the file from some other node and restart
the ovirt-ha-broker and ovirt-ha-agent services everything works
fine. But then this happens again. Can someone please help me
identify why this happens? Below is the log I see in
ovirt-ha-agent.logs.


https://paste.fedoraproject.org/489120/79990345/


Once the engine correctly imported the hosted-engine storage
domain, a couple of OVF_STORE volumes will appear there.
Every modification to the engine VM configuration will be written
by the engine into that OVF_STORE, so all the ovirt-ha-agent
running on the hosted-engine hosts will be able to re-start the
engine VM with a coherent configuration.

Till the engine imports the hosted-engine storage domain,
ovirt-ha-agent will fall back to the initial vm.conf.

In your case the OVF_STORE volume is there,
but the agent fails to extract the engine VM configuration:
MainThread::INFO::2016-11-24

17:55:04,914::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-11-24

17:55:04,919::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:

/rhev/data-center/mnt/glusterSD/10.70.36.79:_engine/27f054c3-c245-4039-b42a-c28b37043016/images/fdf49778-9a06-49c6-bf7a-a0f12425911c/8c954add-6bcf-47f8-ac2e-4c85fc3f8699
MainThread::ERROR::2016-11-24

17:55:04,928::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Unable to extract HEVM OVF

So it tries to roll back to the initial vm.conf, but that one
also seems to be missing some values, so the agent is failing:
MainThread::ERROR::2016-11-24

17:55:04,974::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Error: ''Configuration value not found:
file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' -
trying to restart agent

Both of the issues seem storage related; could you please share
your gluster logs?


Thanks

kasturi



Hi Simone,

Below [1] is the link for the sosreports of the first two
hosts. The third host has some issue; once it is up I will give
the sosreport from there as well.


And the host where you see the initial issue was the third one?

It is on the first host.


[1]
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/vm_conf/
<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/vm_conf/>

Thanks

kasturi




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vm.conf on one of the node is missing

2016-11-24 Thread knarra

On 11/24/2016 06:15 PM, Simone Tiraboschi wrote:



On Thu, Nov 24, 2016 at 1:26 PM, knarra <kna...@redhat.com> wrote:

Hi,

I have three nodes with glusterfs as the storage domain. For some
reason I see that vm.conf from /var/run/ovirt-hosted-engine-ha is
missing, and due to this on one of my hosts I see Hosted Engine HA:
Not Active. Once I copy the file from some other node and restart
the ovirt-ha-broker and ovirt-ha-agent services everything works
fine. But then this happens again. Can someone please help me
identify why this happens? Below is the log I see in
ovirt-ha-agent.logs.


https://paste.fedoraproject.org/489120/79990345/


Once the engine correctly imported the hosted-engine storage domain, a 
couple of OVF_STORE volumes will appear there.
Every modification to the engine VM configuration will be written by 
the engine into that OVF_STORE, so all the ovirt-ha-agent running on 
the hosted-engine hosts will be able to re-start the engine VM with a 
coherent configuration.


Till the engine imports the hosted-engine storage domain, 
ovirt-ha-agent will fall back to the initial vm.conf.


In your case the OVF_STORE volume is there,
but the agent fails to extract the engine VM configuration:
MainThread::INFO::2016-11-24 
17:55:04,914::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) 
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-11-24 
17:55:04,919::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) 
OVF_STORE volume path: 
/rhev/data-center/mnt/glusterSD/10.70.36.79:_engine/27f054c3-c245-4039-b42a-c28b37043016/images/fdf49778-9a06-49c6-bf7a-a0f12425911c/8c954add-6bcf-47f8-ac2e-4c85fc3f8699
MainThread::ERROR::2016-11-24 
17:55:04,928::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) 
Unable to extract HEVM OVF


So it tries to roll back to the initial vm.conf, but that one
also seems to be missing some values, so the agent is failing:
MainThread::ERROR::2016-11-24 
17:55:04,974::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Error: ''Configuration value not found: 
file=/var/run/ovirt-hosted-engine-ha/vm.conf, key=memSize'' - trying 
to restart agent


Both of the issues seem storage related; could you please share your
gluster logs?



Thanks

kasturi



Hi Simone,

Below [1] is the link for the sosreports of the first two hosts.
The third host has some issue; once it is up I will give the sosreport
from there as well.


[1] http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/vm_conf/

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vm.conf on one of the node is missing

2016-11-24 Thread knarra

Hi,

    I have three nodes with glusterfs as the storage domain. For some
reason I see that vm.conf from /var/run/ovirt-hosted-engine-ha is
missing, and due to this on one of my hosts I see Hosted Engine HA:
Not Active. Once I copy the file from some other node and restart the
ovirt-ha-broker and ovirt-ha-agent services everything works fine. But
then this happens again. Can someone please help me identify why this
happens? Below is the log I see in ovirt-ha-agent.logs.



https://paste.fedoraproject.org/489120/79990345/
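For reference, the copy-and-restart workaround described above boils
down to something like this on the affected host (a sketch only;
'othernode' is a placeholder for any healthy host in the cluster):

  scp othernode:/var/run/ovirt-hosted-engine-ha/vm.conf \
      /var/run/ovirt-hosted-engine-ha/vm.conf
  systemctl restart ovirt-ha-broker
  systemctl restart ovirt-ha-agent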


Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to add host

2016-11-20 Thread knarra

On 11/20/2016 09:24 PM, Oscar Segarra wrote:

Hi,

When I try to add the second host from the oVirt interface I get the
following error:

[inline image: error dialog]

Of course, host vdicnode02 does not appear in the GUI, and Gluster
looks perfectly up and in sync:


The UI supports a functionality called "importing a host into oVirt",
which means that if there is already an existing cluster, the user can
import that cluster and manage it from the UI. In your case I see that
you already have a cluster, so what you need to do is just import the
cluster into the UI. To achieve that, go to the 'Clusters' tab, where
you will see a link called 'Import'. Simply click on that link and you
will see a popup for adding the hosts. Provide the root password for
your hosts, and all the hosts which are part of the cluster will be
imported into the UI.


[root@vdicnode02 ~]# gluster volume status
Status of volume: vdic-infr-gv0
Gluster process                          TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------
Brick vdicnode01-priv:/vdic-infr/gv0     49152     0          Y       3039
Brick vdicnode02-priv:/vdic-infr/gv0     49152     0          Y       1999
Brick vdicnode03-priv:/vdic-infr/gv0     49152     0          Y       3456
Self-heal Daemon on localhost            N/A       N/A        Y       3043
Self-heal Daemon on vdicnode03-priv      N/A       N/A        Y       3496
Self-heal Daemon on vdicnode01-priv      N/A       N/A        Y       3267

Task Status of Volume vdic-infr-gv0
--------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vdic-infr2-gv0
Gluster process                          TCP Port  RDMA Port  Online  Pid
--------------------------------------------------------------------------
Brick vdicnode01-priv:/vdic-infr2/gv0    49153     0          Y       3048
Brick vdicnode02-priv:/vdic-infr2/gv0    49153     0          Y       2026
Brick vdicnode03-priv:/vdic-infr2/gv0    49153     0          Y       3450
Self-heal Daemon on localhost            N/A       N/A        Y       3043
Self-heal Daemon on vdicnode01-priv      N/A       N/A        Y       3267
Self-heal Daemon on vdicnode03-priv      N/A       N/A        Y       3496

Task Status of Volume vdic-infr2-gv0
--------------------------------------------------------------------------
There are no active volume tasks

[root@vdicnode02 ~]#

May I activate self-heal?

Activate self-heal? From the above volume status output I see that the
SHD process is started and its PID is listed, which simply means that
self-heal is already active and running (a sketch for checking heal
status follows below).
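For example, pending heal entries can be inspected per volume like
this (a sketch; the volume names are the ones from the status output
above):

  gluster volume heal vdic-infr-gv0 info
  gluster volume heal vdic-infr2-gv0 info
  # lists, per brick, the entries that still need healing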


I'd like to know the difference between None, Deploy and Undeploy in
the Hosted Engine option as well:

[inline image: Hosted Engine options]


Ah! A lot to explain here. I would suggest you go through the link
below for more details on this.


https://devconfcz2016.sched.org/event/5m20/ovirt-and-gluster-hyperconvergence

Hope the above helps


Thanks a lot.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 06:46 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 12:18 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 03:59 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 11:18 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 03:43 PM, knarra wrote:

On 11/16/2016 03:37 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 10:56 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 03:07 PM, Martin Perina wrote:



    On Wed, Nov 16, 2016 at 9:48 AM, knarra
<kna...@redhat.com <mailto:kna...@redhat.com>> wrote:

On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup
execution?

But I fear this is caused by [1] as we haven't
done any changes in aaa-jdbc extension for quite
long time.
Sandro is it possible to remove or fix faulty
slf4j package from repo [2] as suggested in [1]?

Thanks

Martin

[1]
https://bugzilla.redhat.com/show_bug.cgi?id=1394656
<https://bugzilla.redhat.com/show_bug.cgi?id=1394656>
[2]

http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/

<http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/>


Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/

<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/>


​This hosted engine setup log, but we need to get
engine-setup log from engine VM, which is located at

/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115085421-ee7ksg.log

to find out the real issue.
Thanks
Martin

Hi Martin, I see that the hosted engine VM is down, and
the log you are asking for would be present in the
engine VM, right? Is there a way that I can bring this up?


hosted-engine --vm-start

Hi Simone,

  I tried this but it does not seem to be working:
https://paste.fedoraproject.org/482930/29144814/
Thanks
kasturi.

Can you please share the vdsm logs to understand why it didn't start?

Hi, I see a traceback in the vdsm log saying "no space left on
device", but I do see that I have enough space on my host. The link
below has the Traceback and df -Th output from the host:
https://paste.fedoraproject.org/482947/95037147/
Thanks kasturi

OK, can you please shut down sanlock with
   sanlock shutdown -f 1
before trying again?

Hi, I ran the above command on the host, then tried to start the
hosted-engine VM with 'hosted-engine --vm-start'. The Traceback seen
in the vdsm log is listed at [1]; the output of sanlock and of
hosted-engine --vm-start is at [2].
[1] https://paste.fedoraproject.org/483186/30571514/
[2] https://paste.fedoraproject.org/483188/14793057/
Thanks kasturi.
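As an aside, when libvirt reports "Failed to acquire lock: No space
left on device" it is usually sanlock lockspace state rather than real
disk space; one way to look at what sanlock currently holds (a sketch
using the standard sanlock client commands):

  sanlock client status       # lockspaces and resources currently held
  sanlock client host_status  # per-host view of the lockspaces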



On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com> wrote:



Hi,

I was installing latest upstream master and I am hitting the
issue below. Can someone please let me know if this is a bug?
If yes, is this going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description, using default.
[ INFO  ] Detecting host timezone.
          Enter ssh public key for the root user that will be
          used for the engine appliance (leave it empty to
          skip): //root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.
          Enter ssh public key for the root user that will be
          used for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
          Do you want to enable ssh access for the root user
          (yes, no, without-password) [yes]: yes

ERROR SNIPPET:

  |- [ ERROR ] Failed to execute sta

Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 03:59 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 11:18 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 03:43 PM, knarra wrote:

On 11/16/2016 03:37 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 10:56 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 03:07 PM, Martin Perina wrote:



On Wed, Nov 16, 2016 at 9:48 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup execution?

But I fear this is caused by [1] as we haven't done
any changes in aaa-jdbc extension for quite long time.
Sandro is it possible to remove or fix faulty slf4j
package from repo [2] as suggested in [1]?

Thanks

Martin

[1]
https://bugzilla.redhat.com/show_bug.cgi?id=1394656
<https://bugzilla.redhat.com/show_bug.cgi?id=1394656>
[2]

http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/

<http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/>


Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/
<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/>


​This hosted engine setup log, but we need to get
engine-setup log from engine VM, which is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115085421-ee7ksg.log

to find out the real issue.
Thanks
Martin

Hi Martin, I see that the hosted engine VM is down, and the
log you are asking for would be present in the engine VM,
right? Is there a way that I can bring this up?


hosted-engine --vm-start

Hi Simone,

  I tried this but it does not seem to be working:
https://paste.fedoraproject.org/482930/29144814/ Thanks kasturi.

Can you please share the vdsm logs to understand why it didn't start?
Hi, I see a traceback in the vdsm log saying "no space left on
device", but I do see that I have enough space on my host. The link
below has the Traceback and df -Th output from the host:
https://paste.fedoraproject.org/482947/95037147/ Thanks kasturi



On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com> wrote:



Hi,

I was installing latest upstream master and I am hitting the
issue below. Can someone please let me know if this is a bug?
If yes, is this going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description, using default.
[ INFO  ] Detecting host timezone.
          Enter ssh public key for the root user that will be used
          for the engine appliance (leave it empty to skip):
          //root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.
          Enter ssh public key for the root user that will be used
          for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
          Do you want to enable ssh access for the root user
          (yes, no, without-password) [yes]: yes

ERROR SNIPPET:

  |- [ ERROR ] Failed to execute stage 'Misc configuration':
     Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-           Log file is located at
     /var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log
  |- [ INFO  ] Generating answer file
     '/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL [

Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 03:59 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 11:18 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 03:43 PM, knarra wrote:

On 11/16/2016 03:37 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 10:56 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 03:07 PM, Martin Perina wrote:



On Wed, Nov 16, 2016 at 9:48 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup execution?

But I fear this is caused by [1] as we haven't done
any changes in aaa-jdbc extension for quite long time.
Sandro is it possible to remove or fix faulty slf4j
package from repo [2] as suggested in [1]?

Thanks

Martin

[1]
https://bugzilla.redhat.com/show_bug.cgi?id=1394656
<https://bugzilla.redhat.com/show_bug.cgi?id=1394656>
[2]

http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/

<http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/>


Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/
<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/>


​This hosted engine setup log, but we need to get
engine-setup log from engine VM, which is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115085421-ee7ksg.log

to find out the real issue.
Thanks
Martin

Hi Martin, I see that the hosted engine VM is down, and the
log you are asking for would be present in the engine VM,
right? Is there a way that I can bring this up?


hosted-engine --vm-start

Hi Simone,

  I tried this but it does not seem to be working:
https://paste.fedoraproject.org/482930/29144814/ Thanks kasturi.

Can you please share the vdsm logs to understand why it didn't start?

Hi Simone, I see the following traceback in the vdsm logs, but I do
see that my host has enough space.

2016-11-16 15:38:32,111 ERROR (vm/0428ddce) [virt.vm] (vmId='0428ddce-73cd-4f39-93ac-89906b71cffa') The vm start process failed (vm:594)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 535, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1897, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 936, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3777, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Failed to acquire lock: No space left on device
2016-11-16 15:38:32,118 INFO  (vm/0428ddce) [virt.vm] (vmId='0428ddce-73cd-4f39-93ac-89906b71cffa') Changed state to Down: Failed to acquire lock: No space left on device (code=1) (vm:1176)

df -TH on the host:

[root@zod ~]# df -TH
Filesystem                          Type            Size  Used  Avail  Use%  Mounted on
/dev/mapper/rhel_zod-root           xfs              54G  3.5G    51G    7%  /
devtmpfs                            devtmpfs         51G     0    51G    0%  /dev
tmpfs                               tmpfs            51G     0    51G    0%  /dev/shm
tmpfs                               tmpfs            51G   27M    51G    1%  /run
tmpfs                               tmpfs            51G     0    51G    0%  /sys/fs/cgroup
/dev/sda1                           xfs             1.1G  149M   916M   14%  /boot
/dev/mapper/rhel_zod-home           xfs             941G   35M   941G    1%  /home
tmpfs                               tmpfs            11G     0    11G    0%  /run/user/0
/dev/mapper/RHGS_vg1-engine_lv      xfs             108G  9.0G    99G    9%  /rhgs/engine
/dev/mapper/RHGS_vg1-lv_vmrootdisks xfs             1.1T  154M   1.1T    1%  /rhgs/brick1
/dev/mapper/RHGS_vg1-lv_vmaddldisks xfs             2.2T   36M   2.2T    1%  /rhgs/brick2
10.70.36.76:/engine                 fuse.glusterfs  108G  9.0G    99G    9%  /rhev/data-center/mnt/glusterSD/10.70.36.76:_engine
/dev/loop1                          ext3            2.1G  3.3M   2.0G    1%  /rhev/data-center/mnt/_var_lib_ovirt-hosted-engine-setup_tmppnoQeT

Thanks kasturi



On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com> wrote:

Re: [ovirt-users] Info on testing ovirt 4.0.5 and gluster

2016-11-16 Thread knarra

On 11/16/2016 03:51 PM, Gianluca Cecchi wrote:
On Wed, Nov 16, 2016 at 9:55 AM, knarra <kna...@redhat.com> wrote:


On 11/16/2016 01:28 PM, Gianluca Cecchi wrote:

On Wed, Nov 16, 2016 at 7:59 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

Hi Gianluca,

yes, you are right. Now the second and third hosts can be
added directly from the UI. Before adding the second and third
host, please make sure that the following steps are done for a
hyperconverged setup (a command sketch follows after this list).

1) On the hosted engine VM run the command 'engine-config -s
AllowClusterWithVirtGlusterEnabled=true'

2) Restart ovirt-engine by running the command 'service
ovirt-engine restart'

3) Edit Cluster > Default > enable the Gluster service.

4) Create separate storage domains for each gluster volume.
You can see that hosted_storage gets imported into the UI
automatically when one storage domain is created in the UI.

5) Add the second and third host from the UI.

Hope this helps

Thanks
kasturi
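For reference, the engine-side part of the steps above reduces to the
following (a minimal sketch of steps 1 and 2 only; the remaining steps
are done in the web UI):

  engine-config -s AllowClusterWithVirtGlusterEnabled=true
  service ovirt-engine restart    # or: systemctl restart ovirt-engine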


Hello,
thanks for your answer.
Basically I'm following Jason guide here for 4.0:

https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/

<https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/>

I arrived just before the second/third host deploy actions.
I verified that

1) actually seems already in place
[root@ovengine ~]# engine-config -g
AllowClusterWithVirtGlusterEnabled
AllowClusterWithVirtGlusterEnabled: true version: general
[root@ovengine ~]#

3) Already done

2) Already done after step 3)


I see that in Jason's guide steps 4) and 5) are reversed.
Are they interchangeable, or does the web admin deploy method
require that storage domains are already set up before adding the
second host?

Before adding the second host, the hosted_storage domain has to be
imported into the UI. For this to happen we need to add at least
one data domain. If you try adding the second host without creating
a storage domain, you will be given a message asking you to add the
domain first.


Gianluca




Thanks for the clarifications.
It worked smoothly; all three hosts are now up; going to test
further functionality.

Fine!

In the host-deploy logs for the 3 hosts I only see for the third host 
(and for the first file of the two ones generated for first host) 
these kinds of warning:



2016-11-16 11:09:48 DEBUG otopi.plugins.otopi.services.systemd 
plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'start', 
'tuned.service'), rc=0
2016-11-16 11:09:48 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:921 execute-output: ('/bin/systemctl', 'start', 
'tuned.service') stdout:



2016-11-16 11:09:48 DEBUG otopi.plugins.otopi.services.systemd 
plugin.execute:926 execute-output: ('/bin/systemctl', 'start', 
'tuned.service') stderr:



2016-11-16 11:09:48 DEBUG otopi.plugins.ovirt_host_deploy.tune.tuned 
plugin.executeRaw:813 execute: ('/sbin/tuned-adm', 'profile', 
'rhs-virtualization'), executable='None', cwd='None', env=None
2016-11-16 11:09:49 DEBUG otopi.plugins.ovirt_host_deploy.tune.tuned 
plugin.executeRaw:863 execute-result: ('/sbin/tuned-adm', 'profile', 
'rhs-virtualization'), rc=1
2016-11-16 11:09:49 DEBUG otopi.plugins.ovirt_host_deploy.tune.tuned 
plugin.execute:921 execute-output: ('/sbin/tuned-adm', 'profile', 
'rhs-virtualization') stdout:



2016-11-16 11:09:49 DEBUG otopi.plugins.ovirt_host_deploy.tune.tuned 
plugin.execute:926 execute-output: ('/sbin/tuned-adm', 'profile', 
'rhs-virtualization') stderr:

Requested profile 'rhs-virtualization' doesn't exist.

2016-11-16 11:09:49 WARNING otopi.plugins.ovirt_host_deploy.tune.tuned 
tuned._misc:105 Cannot set tuned profile


Do I have to worry about them?

Thanks again
Gianluca


You can safely ignore these messages. There are some tuned profiles
which the engine tries to set on the node; if those profiles do not
exist on the node, the engine throws these kinds of messages (a sketch
for checking the node's profiles follows below).
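If you want to see what is actually available on the node, something
like this should work (a sketch using standard tuned-adm commands; the
virtual-host profile is just a common example, not one mandated by
this thread):

  tuned-adm list                   # profiles present on this node
  tuned-adm active                 # currently applied profile
  tuned-adm profile virtual-host   # apply a profile that does exist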


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 03:43 PM, knarra wrote:

On 11/16/2016 03:37 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 10:56 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 03:07 PM, Martin Perina wrote:



On Wed, Nov 16, 2016 at 9:48 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup execution?

But I fear this is caused by [1] as we haven't done any
changes in aaa-jdbc extension for quite long time.
Sandro is it possible to remove or fix faulty slf4j package
from repo [2] as suggested in [1]?

Thanks

Martin

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1394656
<https://bugzilla.redhat.com/show_bug.cgi?id=1394656>
[2]

http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/

<http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/>


Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/
<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/>


This is the hosted-engine setup log, but we need to get the
engine-setup log from the engine VM, which is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115085421-ee7ksg.log

to find out the real issue.
Thanks
Martin

Hi Martin, I see that the hosted engine VM is down, and the log
you are asking for would be present in the engine VM, right? Is
there a way that I can bring this up?


hosted-engine --vm-start

Hi Simone,
  I tried this but it does not seem to be working:
https://paste.fedoraproject.org/482930/29144814/ Thanks kasturi.
On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com> wrote:



Hi,

I was installing latest upstream master and I am hitting the
issue below. Can someone please let me know if this is a bug?
If yes, is this going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description, using default.
[ INFO  ] Detecting host timezone.
          Enter ssh public key for the root user that will be used
          for the engine appliance (leave it empty to skip):
          //root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.
          Enter ssh public key for the root user that will be used
          for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
          Do you want to enable ssh access for the root user
          (yes, no, without-password) [yes]: yes

ERROR SNIPPET:

  |- [ ERROR ] Failed to execute stage 'Misc configuration':
     Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute
  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-           Log file is located at
     /var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log
  |- [ INFO  ] Generating answer file
     '/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL
[ ERROR ] Engine setup failed on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup failed
          on the appliance. Please check its log on the appliance.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
          '/var/lib/ovirt-hosted-engine-setup/answers/answers-20161115193834.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not
          reliable, please check the issue, fix and redeploy
          Log file is located at
          /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161115191145-hr3nat.log
[root@rhsqa-grafton4 ~]#

Thanks

kasturi.


Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 03:37 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 10:56 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 03:07 PM, Martin Perina wrote:



On Wed, Nov 16, 2016 at 9:48 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup execution?

But I fear this is caused by [1] as we haven't done any
changes in aaa-jdbc extension for quite long time.
Sandro is it possible to remove or fix faulty slf4j package
from repo [2] as suggested in [1]?

Thanks

Martin

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1394656
<https://bugzilla.redhat.com/show_bug.cgi?id=1394656>
[2]

http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/

<http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/>


Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/
<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/>


This is the hosted-engine-setup log, but we need to get the
engine-setup log from the engine VM, which is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115085421-ee7ksg.log,
to find out the real issue.
Thanks
Martin

Hi Martin, I see that the hosted engine VM is down, and the log you
are asking for would be on the engine VM, right? Is there a
way I can bring it up? 


hosted-engine --vm-start
Hi Simone, I tried this, but it does not seem to work:

[root@zod ~]# hosted-engine --vm-start
VM exists and is down, destroying it
Machine destroyed
0428ddce-73cd-4f39-93ac-89906b71cffa
        Status = WaitForLaunch
        nicModel = rtl8139,pv
        statusTime = 4823851490
        emulatedMachine = rhel6.5.0
        pid = 0
        clientIp =
        devices = [{'index': '2', 'iface': 'ide', 'specParams': {},
'readonly': 'true', 'deviceId': 'c6cf7784-10a9-4b47-99ed-fa73e0083a3f',
'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target':
'0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '',
'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw',
'bootOrder': '1', 'poolID': '----', 'volumeID':
'1a2b391a-1e26-47e8-b4c0-7fcdce61fd11', 'imageID':
'7ec3bffe-2549-4832-a6ca-2fbd609b02c2', 'specParams': {}, 'readonly':
'false', 'domainID': 'ef9cafbf-b740-4ac3-aa95-5f5ed24d21d3', 'optional':
'false', 'deviceId': '7ec3bffe-2549-4832-a6ca-2fbd609b02c2', 'address':
{'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type': 'pci',
'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive',
'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model':
'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr':
'00:45:55:21:48:08', 'linkActive': 'true', 'network': 'ovirtmgmt',
'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId':
'bc34f5f4-d9dd-40fb-ab4c-0e47542c1652', 'address': {'slot': '0x03',
'bus': '0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'},
'device': 'bridge', 'type': 'interface'}, {'device': 'console',
'specParams': {}, 'type': 'console', 'deviceId':
'95da1064-ef94-4eae-bb4e-5cd05ae6e783', 'alias': 'console0'}, {'device':
'vga', 'alias': 'video0', 'type': 'video'}, {'device': 'vnc', 'type':
'graphics'}, {'device': 'virtio', 'specParams': {'source': 'random'},
'model': 'virtio', 'type': 'rng'}]
        guestDiskMapping = {}
        vmType = kvm
        memSize = 16384
        cpuType = Haswell-noTSX
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        smp = 4
        vmName = HostedEngine
        display = vnc
        maxVCpus = 12

[root@zod ~]# hosted-engine --vm-status
Failed to connect to broker, the number of errors has exceeded the limit (1)
Cannot connect to the HA daemon, please check the logs.
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 173, in 
    if not status_checker.print_status():
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 104, in print_status
    cluster_stats = self._get_cluster_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 89, in _get_cluster_stats
    cluster_stats = ha_cli.get_all_stats(client.HAClient.
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",

Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 03:07 PM, Martin Perina wrote:



On Wed, Nov 16, 2016 at 9:48 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup execution?

But I fear this is caused by [1] as we haven't done any changes
in aaa-jdbc extension for quite long time.
Sandro is it possible to remove or fix faulty slf4j package from
repo [2] as suggested in [1]?

Thanks

Martin

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1394656
<https://bugzilla.redhat.com/show_bug.cgi?id=1394656>
[2]
http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/

<http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/>


Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/
<http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/>


This is the hosted-engine-setup log, but we need to get the
engine-setup log from the engine VM, which is located at

/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115085421-ee7ksg.log,

to find out the real issue.
Thanks
Martin

Hi Martin,

I see that the hosted engine VM is down, and the log you are asking 
for would be on the engine VM, right? Is there a way I can bring 
it up?


Thanks
kasturi.




Thanks
kasturi


On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

Hi,

I was installing the latest upstream master and I am hitting
the issue below. Can someone please let me know if this is a
bug? If yes, is it going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description,
using default.
[ INFO  ] Detecting host timezone.
  Enter ssh public key for the root user that will be
used for the engine appliance (leave it empty to skip):
//root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.
  Enter ssh public key for the root user that will be
used for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user
(yes, no, without-password) [yes]: yes

ERROR SNIPPET:




  |- [ ERROR ] Failed to execute stage 'Misc
configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed
to execute
  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database
ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-   Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log

  |- [ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL
[ ERROR ] Engine setup failed on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup
failed on the appliance Please check its log on the appliance.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20161115193834.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy
  Log file is located at

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161115191145-hr3nat.log
[root@rhsqa-grafton4 ~]#

Thanks

kasturi.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Info on testing ovirt 4.0.5 and gluster

2016-11-16 Thread knarra

On 11/16/2016 01:28 PM, Gianluca Cecchi wrote:
On Wed, Nov 16, 2016 at 7:59 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


Hi Gianluca,

yes, you are right. The second and third hosts can now be added
directly from the UI. Before adding them, please make sure
that the following steps are done for a hyperconverged setup.

1) On Hosted engine vm run the command 'engine-config -s
AllowClusterWithVirtGlusterEnabled=true'

2) Restart ovirt-engine by running the command 'service
ovirt-engine restart'

3) /Edit Cluster/>/Default/>/Enable the gluster service/.

4) Create separate storage domains for each gluster volume. You
can see that hosted_storage gets imported into the UI
automatically when one storage domain is created in the UI.

5) Add second and third host from UI.

Hope this helps

Thanks
kasturi


Hello,
thanks for your answer.
Basically I'm following Jason's guide here for 4.0:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/

I arrived just before the second/third host deploy actions.
I verified that

1) actually seems already in place
[root@ovengine ~]# engine-config -g AllowClusterWithVirtGlusterEnabled
AllowClusterWithVirtGlusterEnabled: true version: general
[root@ovengine ~]#

3) Already done

2) Already done after step 3)


I see that in Jason's guide steps 4) and 5) are reversed.
Are they interchangeable, or does using the web admin deploy method 
require that the storage domains are already set up before adding the 
second host?
Before adding the second host, the hosted_storage domain needs to be 
imported into the UI, and for that to happen at least one storage 
domain must be added first. If you try adding the second host without 
creating a storage domain, you will get a message asking you to add the 
domain first.


Gianluca



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 02:13 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 9:38 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 11/16/2016 02:03 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

Hi,

I was installing the latest upstream master and I am hitting
the issue below. Can someone please let me know if this is a
bug? If yes, is it going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description,
using default.
[ INFO  ] Detecting host timezone.
  Enter ssh public key for the root user that will be
used for the engine appliance (leave it empty to skip):
//root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.

Here you have to directly enter the SSH public key and not the
name of a file that contains it.

Simone, sas directly entered the key and he also faces the same
issue.


This is not good, could you please send me the relevant log?

My bad. Looks like sas has skipped it.



  Enter ssh public key for the root user that will be
used for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user
(yes, no, without-password) [yes]: yes

ERROR SNIPPET:




  |- [ ERROR ] Failed to execute stage 'Misc
configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed
to execute
  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database
ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-   Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log

  |- [ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL
[ ERROR ] Engine setup failed on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup
failed on the appliance Please check its log on the appliance.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20161115193834.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy
  Log file is located at

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161115191145-hr3nat.log
[root@rhsqa-grafton4 ~]#

Thanks

kasturi.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 01:34 PM, Martin Perina wrote:

Hi,

could you please share log from engine-setup execution?

But I fear this is caused by [1] as we haven't done any changes in 
aaa-jdbc extension for quite long time.
Sandro is it possible to remove or fix faulty slf4j package from repo 
[2] as suggested in [1]?


Thanks

Martin

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1394656
[2] 
http://cbs.centos.org/repos/virt7-ovirt-common-candidate/x86_64/os/Packages/
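
As a quick check on the appliance, one could verify which slf4j build 
was actually pulled in (a sketch; the faulty package is the one 
discussed in [1]):

    # on the engine appliance: which slf4j got installed?
    rpm -q slf4j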



Hi Martin / simone,

Below is the link to the log file.

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/HC/upstream/

Thanks
kasturi


On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


Hi,

I was installing the latest upstream master and I am hitting the
issue below. Can someone please let me know if this is a bug? If
yes, is it going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description, using
default.
[ INFO  ] Detecting host timezone.
  Enter ssh public key for the root user that will be used
for the engine appliance (leave it empty to skip):
//root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.
  Enter ssh public key for the root user that will be used
for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user (yes,
no, without-password) [yes]: yes

ERROR SNIPPET:




  |- [ ERROR ] Failed to execute stage 'Misc
configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to
execute
  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-   Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log

  |- [ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL
[ ERROR ] Engine setup failed on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup
failed on the appliance Please check its log on the appliance.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20161115193834.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy
  Log file is located at

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161115191145-hr3nat.log
[root@rhsqa-grafton4 ~]#

Thanks

kasturi.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot install latest upstream master

2016-11-16 Thread knarra

On 11/16/2016 02:03 PM, Simone Tiraboschi wrote:



On Wed, Nov 16, 2016 at 8:03 AM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


Hi,

I was installing the latest upstream master and I am hitting the
issue below. Can someone please let me know if this is a bug? If
yes, is it going to be fixed in the next nightly?

[WARNING] OVF does not contain a valid image description, using
default.
[ INFO  ] Detecting host timezone.
  Enter ssh public key for the root user that will be used
for the engine appliance (leave it empty to skip):
//root//.ssh/id_rsa.pub
[ ERROR ] The ssh key is not valid.

Here you have to directly enter the SSH public key and not the name of 
a file that contains it.
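
That is, the prompt expects the key text itself rather than a path; a 
sketch of what to paste (the key value shown is a placeholder):

    # print the public key and paste its single-line output at the prompt
    cat /root/.ssh/id_rsa.pub
    # -> ssh-rsa AAAAB3Nza...placeholder... root@host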

Simone, sas directly entered the key and he also faces the same issue.


  Enter ssh public key for the root user that will be used
for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user (yes,
no, without-password) [yes]: yes

ERROR SNIPPET:




  |- [ ERROR ] Failed to execute stage 'Misc
configuration': Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to
execute
  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-   Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log

  |- [ INFO  ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'
  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL
[ ERROR ] Engine setup failed on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup
failed on the appliance Please check its log on the appliance.
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20161115193834.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy
  Log file is located at

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161115191145-hr3nat.log
[root@rhsqa-grafton4 ~]#

Thanks

kasturi.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cannot install latest upstream master

2016-11-15 Thread knarra

Hi,

I was installing the latest upstream master and I am hitting the issue 
below. Can someone please let me know if this is a bug? If yes, is it 
going to be fixed in the next nightly?


[WARNING] OVF does not contain a valid image description, using default.
[ INFO  ] Detecting host timezone.
  Enter ssh public key for the root user that will be used for 
the engine appliance (leave it empty to skip): /root/.ssh/id_rsa.pub

[ ERROR ] The ssh key is not valid.
  Enter ssh public key for the root user that will be used for 
the engine appliance (leave it empty to skip):

[WARNING] Skipping appliance root ssh public key
  Do you want to enable ssh access for the root user (yes, no, 
without-password) [yes]: yes


ERROR SNIPPET:




  |- [ ERROR ] Failed to execute stage 'Misc configuration': 
Command '/usr/bin/ovirt-aaa-jdbc-tool' failed to execute

  |- [ INFO  ] Rolling back database schema
  |- [ INFO  ] Clearing Engine database engine
  |- [ INFO  ] Rolling back DWH database schema
  |- [ INFO  ] Clearing DWH database ovirt_engine_history
  |- [ INFO  ] Stage: Clean up
  |-   Log file is located at 
/var/log/ovirt-engine/setup/ovirt-engine-setup-20161115140627-er36oa.log
  |- [ INFO  ] Generating answer file 
'/var/lib/ovirt-engine/setup/answers/20161115140829-setup.conf'

  |- [ INFO  ] Stage: Pre-termination
  |- [ INFO  ] Stage: Termination
  |- [ ERROR ] Execution of setup failed
  |- HE_APPLIANCE_ENGINE_SETUP_FAIL
[ ERROR ] Engine setup failed on the appliance
[ ERROR ] Failed to execute stage 'Closing up': Engine setup failed on 
the appliance Please check its log on the appliance.

[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20161115193834.conf'

[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, 
please check the issue,fix and redeploy
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20161115191145-hr3nat.log

[root@rhsqa-grafton4 ~]#

Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Info on testing ovirt 4.0.5 and gluster

2016-11-15 Thread knarra

Hi Gianluca,

yes, you are right. The second and third hosts can now be added directly 
from the UI. Before adding them, please make sure that the following 
steps are done for a hyperconverged setup (the commands for steps 1 and 
2 are sketched after the list).


1) On Hosted engine vm run the command 'engine-config -s 
AllowClusterWithVirtGlusterEnabled=true'


2) Restart ovirt-engine by running the command 'service ovirt-engine 
restart'


3) /Edit Cluster/>/Default/>/Enable the gluster service/.

4) Create separate storage domains for each gluster volume. You can see 
that hosted_storage gets imported into the UI automatically when one 
storage domain is created in the UI.


5) Add second and third host from UI.
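
A condensed sketch of steps 1 and 2 as typed on the hosted engine VM; 
the final -g call only verifies the value, mirroring what Gianluca ran:

    # step 1: allow clusters with both virt and gluster services enabled
    engine-config -s AllowClusterWithVirtGlusterEnabled=true
    # step 2: restart the engine so the setting takes effect
    service ovirt-engine restart
    # verify
    engine-config -g AllowClusterWithVirtGlusterEnabled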

Hope this helps

Thanks
kasturi

On 11/16/2016 06:43 AM, Gianluca Cecchi wrote:

Hello,
I'm testing hyperconverged setup with gluster and oVirt 4.0.5 and 
three hosts and self hosted engine.
I'm at the point where first host is ok and engine up and I have to 
deploy second and third host.


In the past the command to give on them was

root@host2 # hosted-engine --deploy
and at the end of it
root@host3 # hosted-engine --deploy

But I also seem to remember that this has perhaps been superseded and 
that it is now possible to deploy host2 and host3 directly from the web 
admin GUI with Hosts --> New


Is this true in general? And in particular in my case?

Thanks in advance,

Gianluca


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] EPEL and package(s) conflicts

2016-11-15 Thread knarra

[+soumya]
On 11/15/2016 06:51 PM, Simone Tiraboschi wrote:



On Tue, Nov 15, 2016 at 1:26 PM, lejeczek wrote:


hi

I apologize if I missed it when reading the release (repo) notes.
What are users supposed to do with the EPEL repo?
I'm asking because I hit this:

--> Package python-perf.x86_64 0:4.8.7-1.el7.elrepo will be an update
--> Finished Dependency Resolution
Error: Package: nfs-ganesha-gluster-2.3.0-1.el7.x86_64
(@ovirt-4.0-centos-gluster37)
   Requires: nfs-ganesha = 2.3.0-1.el7
   Removing: nfs-ganesha-2.3.0-1.el7.x86_64
(@ovirt-4.0-centos-gluster37)
   nfs-ganesha = 2.3.0-1.el7
   Updated By: nfs-ganesha-2.3.2-1.el7.x86_64 (epel)
   nfs-ganesha = 2.3.2-1.el7
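
One common workaround (a sketch only, not an official recommendation 
from this thread) is to stop EPEL from offering the conflicting 
packages so the ovirt-4.0-centos-gluster37 versions win:

    # in /etc/yum.repos.d/epel.repo, under the [epel] section, add:
    exclude=nfs-ganesha* glusterfs*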


Adding Sahina on this.


and I also wonder if there might be more?
regards.
L.
___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] gluster - Volume quota failed

2016-11-13 Thread knarra

[+gluster-users] list

On 11/11/2016 04:09 PM, lejeczek wrote:



On 11/11/16 09:26, knarra wrote:

On 11/11/2016 01:07 PM, lejeczek wrote:



On 11/11/16 05:57, knarra wrote:

On 11/11/2016 03:20 AM, lejeczek wrote:
quota command failed : Volume quota failed. The cluster is 
operating at version 30700. Quota command enable is unavailable in 
this version. 


Hi,

Could you please tell us which version of glusterfs you are 
using? For 3.7.0 / 3.7.1 the 30700 op-version is applicable. If 
you have a greater version, you would need to bump up the op-version 
and check whether quota enable works.


Hope this helps.

Thanks

kasturi.

it is glusterfs-3.7.16-1.el7.x86_64 - so it's higher, correct? And 
if so, why did gluster set it to 30700?

What value should there be?
Could you please set the op-version to 30712 and try enabling quota? 
Ideally, on a fresh install of glusterfs the op-version is set to the 
correct value; when a node is upgraded, the user has to go and bump up 
the op-version manually.


hi, thanks,
like I mentioned earlier - there was no earlier setup; gluster had been 
installed before, yes, older versions, but it was never in use and not 
a single volume was set up.


I still have a question about quotas: I've started googling but have 
failed so far - glusterfs does not translate local FS (xfs) quotas and 
present them to its clients, does it?

Is it only gluster's own quota functions that we have to use to manage 
quotas?




More info can be found here 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.16.md



thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users









Re: [ovirt-users] gluster - Volume quota failed

2016-11-11 Thread knarra

On 11/11/2016 01:07 PM, lejeczek wrote:



On 11/11/16 05:57, knarra wrote:

On 11/11/2016 03:20 AM, lejeczek wrote:
quota command failed : Volume quota failed. The cluster is operating 
at version 30700. Quota command enable is unavailable in this version. 


Hi,

Could you please tell us which version of glusterfs you are 
using? For 3.7.0 / 3.7.1 the 30700 op-version is applicable. If you 
have a greater version, you would need to bump up the op-version and 
check whether quota enable works.


Hope this helps.

Thanks

kasturi.

it is glusterfs-3.7.16-1.el7.x86_64 - so it's higher, correct? And if 
so, why did gluster set it to 30700?

What value should there be?
Could you please set the op-version to 30712 and try enabling quota? 
Ideally, on a fresh install of glusterfs the op-version is set to the 
correct value; when a node is upgraded, the user has to go and bump up 
the op-version manually.


More info can be found here 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.16.md
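
A sketch of the full sequence on one of the gluster nodes (VOLNAME is a 
placeholder; 30712 is the value suggested above):

    # the current op-version is recorded in glusterd's info file
    grep operating-version /var/lib/glusterd/glusterd.info
    # bump the cluster op-version
    gluster volume set all cluster.op-version 30712
    # retry enabling quota
    gluster volume quota VOLNAME enable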



thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





Re: [ovirt-users] gluster - Volume quota failed

2016-11-10 Thread knarra

On 11/11/2016 03:20 AM, lejeczek wrote:
quota command failed : Volume quota failed. The cluster is operating 
at version 30700. Quota command enable is unavailable in this version. 


Hi,

Could you please tell us which version of glusterfs you are 
using? For 3.7.0 / 3.7.1 the 30700 op-version is applicable. If you 
have a greater version, you would need to bump up the op-version and 
check whether quota enable works.


Hope this helps.

Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem moving master storage domain to maintenance

2016-11-10 Thread knarra

On 11/09/2016 06:26 PM, Roy Golan wrote:



On 9 November 2016 at 14:49, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


Can someone please help me understand the queries below?

On 11/03/2016 06:43 PM, Maor Lipchuk wrote:

Hi kasturi,

Which version of oVirt are you using?

Apologies for the late reply. I am using the latest master.

Roy, I assume it is related to 4.0 version where the import of
hosted storage domain was introduced. Care to share your insight
about it?

Regards,
Maor


On Thu, Nov 3, 2016 at 12:23 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

Hi,

I have three storage domains backed by gluster in my
environment (hostedstorage, data and vmstore). I want to
move the storage domains into maintenance, and I have a
couple of questions here.

1) Will moving master storage domain into maintenance have
some impact on hostedstorage?

2) I see that moving master storage domain into maintenance
causes  HostedEngine VM to restart and moves hosted_storage
from active to Unknown state. Is this expected?

3) master storage domain remains in "Preparing for
Maintenance" and i see the following exceptions in the
engine.log.

2016-11-03 06:22:10,988 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler6) [2d534f09]
IrsBroker::Failed::GetStoragePoolInfoVDS:
IRSGenericException: IRSErrorException:
IRSNoMasterDomainException: Wrong Master domain or its
version: u'SD=08aba92e-e685-45d7-b03f-85d9678ecc9b,
pool=581999ef-02aa-0272-0334-0159'
2016-11-03 06:22:11,001 WARN
[org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand]
(org.ovirt.thread.pool-6-thread-24) [210d2f12] Validation of
action 'ReconstructMasterDomain' failed for user SYSTEM.
Reasons:

VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
PreparingForMaintenance

Thanks

kasturi.


The hosted_storage domain will not be picked up as a master domain, so 
the reconstruct must have picked one of your other domains. I don't know 
why the reconstruct failed; it says here it's the wrong master domain version.
Roy, when I tried to move the master storage domain into maintenance I 
did not have any other active domains on my system besides 
hosted_storage, so I am not sure what reconstruct would have picked.



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem moving master storage domain to maintenance

2016-11-09 Thread knarra

Can someone please help me understand the queries below?

On 11/03/2016 06:43 PM, Maor Lipchuk wrote:

Hi kasturi,

Which version of oVirt are you using?

Apologies for the late reply. I am using the latest master.
Roy, I assume it is related to 4.0 version where the import of hosted 
storage domain was introduced. Care to share your insight about it?


Regards,
Maor


On Thu, Nov 3, 2016 at 12:23 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


Hi,

I have three storage domains backed by gluster in my
environment (hostedstorage, data and vmstore). I want to
move the storage domains into maintenance, and I have a
couple of questions here.

1) Will moving master storage domain into maintenance have some
impact on hostedstorage?

2) I see that moving master storage domain into maintenance
causes  HostedEngine VM to restart and moves hosted_storage from
active to Unknown state. Is this expected?

3) master storage domain remains in  "Preparing for Maintenance"
and i see the following exceptions in the engine.log.

2016-11-03 06:22:10,988 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler6) [2d534f09]
IrsBroker::Failed::GetStoragePoolInfoVDS: IRSGenericException:
IRSErrorException: IRSNoMasterDomainException: Wrong Master domain
or its version: u'SD=08aba92e-e685-45d7-b03f-85d9678ecc9b,
pool=581999ef-02aa-0272-0334-0159'
2016-11-03 06:22:11,001 WARN
[org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand]
(org.ovirt.thread.pool-6-thread-24) [210d2f12] Validation of
action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons:

VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
PreparingForMaintenance

Thanks

kasturi.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem moving master storage domain to maintenance

2016-11-03 Thread knarra

Hi,

I have three storage domains backed by gluster in my environment 
(hostedstorage, data and vmstore). I want to move the storage 
domains into maintenance, and I have a couple of questions here.


1) Will moving master storage domain into maintenance have some impact 
on hostedstorage?


2) I see that moving master storage domain into maintenance causes  
HostedEngine VM to restart and moves hosted_storage from active to 
Unknown state. Is this expected?


3) master storage domain remains in  "Preparing for Maintenance" and i 
see the following exceptions in the engine.log.


2016-11-03 06:22:10,988 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(DefaultQuartzScheduler6) [2d534f09] 
IrsBroker::Failed::GetStoragePoolInfoVDS: IRSGenericException: 
IRSErrorException: IRSNoMasterDomainException: Wrong Master domain or 
its version: u'SD=08aba92e-e685-45d7-b03f-85d9678ecc9b, 
pool=581999ef-02aa-0272-0334-0159'
2016-11-03 06:22:11,001 WARN 
[org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] 
(org.ovirt.thread.pool-6-thread-24) [210d2f12] Validation of action 
'ReconstructMasterDomain' failed for user SYSTEM. Reasons: 
VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status 
PreparingForMaintenance


Thanks

kasturi.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine CPU usage always shows 100%

2016-11-01 Thread knarra

On 10/27/2016 07:10 PM, Simone Tiraboschi wrote:



On Thu, Oct 27, 2016 at 3:33 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 10/27/2016 06:42 PM, Simone Tiraboschi wrote:



On Thu, Oct 27, 2016 at 2:56 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

Hi Simone,

I see that this patch is merged upstream and I have
installed the latest master, but I still see the HostedEngine
CPU at 100%. Is there something I am missing here?


So maybe it's an unrelated bug.
Do you see the same behavior if you directly check the CPU on the
VM with top?


Thanks
kasturi


Hi simone,

I checked it directly on my machine and I see that the CPU
utilization is actually very low.

top - 13:33:12 up 32 min,  1 user,  load average: 0.69, 0.66, 0.57
Tasks: 150 total,   1 running, 149 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.3 us,  9.0 sy,  0.0 ni, 82.5 id,  0.0 wa, 0.0 hi,  0.2
si,  0.0 st


OK, can you please file a bug?



Thanks
kasturi


Hi Simone,

I have filed a bug for this; here it is: 
https://bugzilla.redhat.com/show_bug.cgi?id=1390675. I have set the 
oVirt team to Infra and assigned the bug to you. Could you please let me 
know which oVirt team I should be selecting?


Thanks
kasturi.





On 10/19/2016 10:33 PM, Simone Tiraboschi wrote:



On Wed, Oct 19, 2016 at 3:13 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

    On 10/19/2016 06:43 PM, knarra wrote:

Hi,

I have the latest oVirt master and the Hosted
Engine CPU is always shown as 100%, but the actual
usage in the system is very low. Is this a known
issue or a bug?


https://bugzilla.redhat.com/show_bug.cgi?id=1381899
<https://bugzilla.redhat.com/show_bug.cgi?id=1381899> should
strongly reduce it.
Kasturi, can you please try
https://gerrit.ovirt.org/#/c/65230/
<https://gerrit.ovirt.org/#/c/65230/> ?



Thanks

kasturi

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>


Attaching the screenshot for the same.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>












Re: [ovirt-users] Hosted Engine CPU usage always shows 100%

2016-10-27 Thread knarra

On 10/27/2016 06:42 PM, Simone Tiraboschi wrote:



On Thu, Oct 27, 2016 at 2:56 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


Hi Simone,

I see that this patch is merged upstream and I have installed
the latest master, but I still see the HostedEngine CPU at 100%.
Is there something I am missing here?


So maybe it's an unrelated bug.
Do you see the same behavior if you directly check the CPU on the VM 
with top?



Thanks
kasturi


Hi simone,

I checked it directly on my machine and I see that the CPU 
utilization is actually very low.


top - 13:33:12 up 32 min,  1 user,  load average: 0.69, 0.66, 0.57
Tasks: 150 total,   1 running, 149 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.3 us,  9.0 sy,  0.0 ni, 82.5 id,  0.0 wa,  0.0 hi,  0.2 si,  
0.0 st


Thanks
kasturi



On 10/19/2016 10:33 PM, Simone Tiraboschi wrote:



On Wed, Oct 19, 2016 at 3:13 PM, knarra <kna...@redhat.com
<mailto:kna...@redhat.com>> wrote:

On 10/19/2016 06:43 PM, knarra wrote:

Hi,

I have the latest oVirt master and the Hosted
Engine CPU is always shown as 100%, but the actual usage in
the system is very low. Is this a known issue or a bug?


https://bugzilla.redhat.com/show_bug.cgi?id=1381899
<https://bugzilla.redhat.com/show_bug.cgi?id=1381899> should
strongly reduce it.
Kasturi, can you please try https://gerrit.ovirt.org/#/c/65230/
<https://gerrit.ovirt.org/#/c/65230/> ?



Thanks

kasturi

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>


Attaching the screenshot for the same.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>









Re: [ovirt-users] Hosted Engine CPU usage always shows 100%

2016-10-27 Thread knarra

Hi Simone,

I see that this patch is merged upstream and I have installed the 
latest master, but I still see the HostedEngine CPU at 100%. Is there 
something I am missing here?


Thanks
kasturi
On 10/19/2016 10:33 PM, Simone Tiraboschi wrote:



On Wed, Oct 19, 2016 at 3:13 PM, knarra <kna...@redhat.com 
<mailto:kna...@redhat.com>> wrote:


On 10/19/2016 06:43 PM, knarra wrote:

Hi,

I have the latest oVirt master and the Hosted Engine
CPU is always shown as 100%, but the actual usage in the system
is very low. Is this a known issue or a bug?


https://bugzilla.redhat.com/show_bug.cgi?id=1381899 should strongly 
reduce it.

Kasturi, can you please try https://gerrit.ovirt.org/#/c/65230/ ?



Thanks

kasturi

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>


Attaching the screenshot for the same.


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>






[ovirt-users] Cannot Migrate Hosted Engine

2016-10-27 Thread knarra

Hi,

I have installed the latest upstream master on RHEL 7.2. When I try 
to put the host that runs the HE VM into maintenance, the VM does not 
get migrated to another host and the host gets stuck in the "Preparing 
for Maintenance" state. I see the following errors in vdsm.log. Can you 
please help me understand why this error occurs?


2016-10-27 16:40:22,742 ERROR (Thread-3293) [virt.vm]
(vmId='21e0e248-19bf-47b3-b72f-6a3740d9ff43') Hook script execution
failed: internal error: Child process (LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
/etc/libvirt/hooks/qemu HostedEngine migrate begin -) unexpected
exit status 1: Traceback (most recent call last):
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 51, in main
    _process_domxml(tree)
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 80, in _process_domxml
    _set_graphics(devices, target_vm_conf)
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 156, in _set_graphics
    target_display_network, target_display_ip = _vmconf_display(target_vm_conf)
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 177, in _vmconf_display
    raise VmMigrationHookError('VM conf graphics not detected')
VmMigrationHookError: VM conf graphics not detected
Traceback (most recent call last):
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 201, in 
    main(*sys.argv[1:])
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 51, in main
    _process_domxml(tree)
  File "/usr/libexec/vdsm/vm_migrate_hoo (migration:261)

2016-10-27 16:40:22,757 ERROR (Thread-3293) [virt.vm]
(vmId='21e0e248-19bf-47b3-b72f-6a3740d9ff43') Failed to migrate
(migration:390)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/migration.py", line 372, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/share/vdsm/virt/migration.py", line 447, in _startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 498, in _perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/share/vdsm/virt/migration.py", line 485, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 899, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1836, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: Hook script execution failed: internal error: Child
process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
/etc/libvirt/hooks/qemu HostedEngine migrate begin -) unexpected
exit status 1: Traceback (most recent call last):
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 51, in main
    _process_domxml(tree)
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 80, in _process_domxml
    _set_graphics(devices, target_vm_conf)
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 156, in _set_graphics
    target_display_network, target_display_ip = _vmconf_display(target_vm_conf)
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 177, in _vmconf_display
    raise VmMigrationHookError('VM conf graphics not detected')
VmMigrationHookError: VM conf graphics not detected
Traceback (most recent call last):
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 201, in 
    main(*sys.argv[1:])
  File "/usr/libexec/vdsm/vm_migrate_hook.py", line 51, in main
    _process_domxml(tree)
  File "/usr/libexec/vdsm/vm_migrate_hoo

Thanks

kasturi.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

