Re: [ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Richard Neuboeck
Thanks Darryl!

For others having the same problem and needing more information on
how to temporarily fix this:

On the Engine VM the credentials needed to access the DB can be found in
/etc/ovirt-engine/engine.conf.d/10-setup-database.conf

Then connect to the engine database and update the vfs_type field of
the engine storage volume's entry in the storage_server_connections
table:

psql -U engine -W -h localhost
select * from storage_server_connections;
update storage_server_connections set vfs_type = 'glusterfs' where
id = 'THE_ID_YOU_FOUND_IN_THE_OUTPUT_ABOVE_FOR_THE_ENGINE_VOLUME';
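
Put together, here is a minimal sketch of the whole workaround as a
shell snippet (run on the Engine VM; it assumes bash and the usual
ENGINE_DB_* variable names in 10-setup-database.conf, so check your
file if the names differ):

# The setup file contains shell-style KEY="value" lines, so the DB
# credentials can simply be sourced into the current shell.
source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf

# List the storage connections and note the id of the engine volume
# entry (its connection column points at the engine gluster volume).
PGPASSWORD="${ENGINE_DB_PASSWORD}" psql -U "${ENGINE_DB_USER}" \
  -h "${ENGINE_DB_HOST}" -p "${ENGINE_DB_PORT}" "${ENGINE_DB_DATABASE}" \
  -c "select id, connection, vfs_type from storage_server_connections;"

# Set vfs_type for exactly that entry (replace the id placeholder).
PGPASSWORD="${ENGINE_DB_PASSWORD}" psql -U "${ENGINE_DB_USER}" \
  -h "${ENGINE_DB_HOST}" -p "${ENGINE_DB_PORT}" "${ENGINE_DB_DATABASE}" \
  -c "update storage_server_connections set vfs_type = 'glusterfs' where id = 'ENGINE_CONNECTION_ID';"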

After that adding new hosts works as expected.

Cheers
Richard


On 04/08/2016 12:43 AM, Bond, Darryl wrote:
> The workaround for this bug is here 
> https://bugzilla.redhat.com/show_bug.cgi?id=1317699
> 
> 
> 
> From: users-boun...@ovirt.org  on behalf of Simone 
> Tiraboschi 
> Sent: Friday, 8 April 2016 1:30 AM
> To: Richard Neuboeck; Roy Golan
> Cc: users
> Subject: Re: [ovirt-users] Can not access storage domain hosted_storage
> 
> On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck  
> wrote:
>> Hi oVirt Users/Developers,
>>
>> I'm having trouble adding another host to a working hosted engine
>> setup. Through the WebUI I try to add another host. The package
>> installation and configuration processes seemingly run without
>> problems. When the second host tries to mount the engine storage
>> volume it halts with the WebUI showing the following message:
>>
>> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>>
>> The mount fails, which leaves the host in the 'non operational' state.
>>
>> Checking the vdsm.log on the newly added host shows that the mount
>> attempt of the engine volume doesn't use -t glusterfs. On the other
>> hand the VM storage volume (also a glusterfs volume) is mounted the
>> right way.
>>
>> It seems the Engine configuration that is given to the second host
>> lacks the vfs_type property. Without glusterfs given as the filesystem
>> type, the system assumes an NFS mount and obviously fails.
> 
> It seems that the auto-import procedure in the engine didn't recognize
> that the hosted-engine storage domain was on gluster and took it for
> NFS.
> 
> Adding Roy here to take a look.
> 
> 
>> Here are the relevant log lines showing the JSON reply to the
>> configuration request, the working mount of the VM storage (called
>> plexus) and the failing mount of the engine storage.
>>
>> ...
>> jsonrpc.Executor/4::INFO::2016-04-07
>> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
>> connectStorageServer(domType=7,
>> spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
>> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
>> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
>> u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
>> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
>> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
>> u'', u'tpgt': u'1', u'password': '', u'port': u''}],
>> options=None)
>> ...
>> jsonrpc.Executor/4::DEBUG::2016-04-07
>> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
>> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
>> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
>> -t glusterfs -o
>> backup-volfile-servers=borg-sphere-two:borg-sphere-three
>> borg-sphere-one:/plexus
>> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
>> ...
>> jsonrpc.Executor/4::DEBUG::2016-04-07
>> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
>> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
>> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
>> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
>> borg-sphere-one:/engine
>> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
>> ...
>>
>> The problem seems to have been introduced sometime after March 22nd.
>> On this install I had added two additional hosts without problems.
>> Three days ago I reinstalled the whole system for testing and
>> documentation purposes, but now I am not able to add further hosts.
>>
>> All the installs follow the same documented procedure. [...]

Re: [ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Bond, Darryl
The workaround for this bug is here 
https://bugzilla.redhat.com/show_bug.cgi?id=1317699



From: users-boun...@ovirt.org  on behalf of Simone 
Tiraboschi 
Sent: Friday, 8 April 2016 1:30 AM
To: Richard Neuboeck; Roy Golan
Cc: users
Subject: Re: [ovirt-users] Can not access storage domain hosted_storage

On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck  wrote:
> Hi oVirt Users/Developers,
>
> I'm having trouble adding another host to a working hosted engine
> setup. Through the WebUI I try to add another host. The package
> installation and configuration processes seemingly run without
> problems. When the second host tries to mount the engine storage
> volume it halts with the WebUI showing the following message:
>
> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>
> The mount fails, which leaves the host in the 'non operational' state.
>
> Checking the vdsm.log on the newly added host shows that the mount
> attempt of the engine volume doesn't use -t glusterfs. On the other
> hand the VM storage volume (also a glusterfs volume) is mounted the
> right way.
>
> It seems the Engine configuration that is given to the second host
> lacks the vfs_type property. Without glusterfs given as the filesystem
> type, the system assumes an NFS mount and obviously fails.

It seems that the auto-import procedure in the engine didn't recognize
that the hosted-engine storage domain was on gluster and took it for
NFS.

Adding Roy here to take a look.


> Here are the relevant log lines showing the JSON reply to the
> configuration request, the working mount of the VM storage (called
> plexus) and the failing mount of the engine storage.
>
> ...
> jsonrpc.Executor/4::INFO::2016-04-07
> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
> u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'password': '', u'port': u''}],
> options=None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/plexus
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
> ...
>
> The problem seems to have been introduced sometime after March 22nd.
> On this install I had added two additional hosts without problems.
> Three days ago I reinstalled the whole system for testing and
> documentation purposes, but now I am not able to add further hosts.
>
> All the installs follow the same documented procedure. I've verified
> several times that the problem exists with the components in the
> current 3.6 release repo as well as in the 3.6 snapshot repo.
>
> If I check the storage configuration of hosted_engine domain in the
> WebUI it shows glusterfs as VFS type.
>
> The initial mount during the hosted engine setup on the first host
> shows the correct parameters (vfs_type) in vdsm.log:
>
> Thread-42::INFO::2016-04-07
> 14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID='----', conList=[{'id':
> 'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
> 'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
> 'kvm'}], options=None)
> Thread-42::DEBUG::2016-04-07
> 14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
> Creating directory:
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
> [...]

Re: [ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Simone Tiraboschi
On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck  wrote:
> Hi oVirt Users/Developers,
>
> I'm having trouble adding another host to a working hosted engine
> setup. Through the WebUI I try to add another host. The package
> installation and configuration processes seemingly run without
> problems. When the second host tries to mount the engine storage
> volume it halts with the WebUI showing the following message:
>
> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>
> The mount fails, which leaves the host in the 'non operational' state.
>
> Checking the vdsm.log on the newly added host shows that the mount
> attempt of the engine volume doesn't use -t glusterfs. On the other
> hand the VM storage volume (also a glusterfs volume) is mounted the
> right way.
>
> It seems the Engine configuration that is given to the second host
> lacks the vfs_type property. Without glusterfs given as the filesystem
> type, the system assumes an NFS mount and obviously fails.

It seems that the auto-import procedure in the engine didn't recognize
that the hosted-engine storage domain was on gluster and took it for
NFS.

Adding Roy here to take a look.


> Here are the relevant log lines showing the JSON reply to the
> configuration request, the working mount of the VM storage (called
> plexus) and the failing mount of the engine storage.
>
> ...
> jsonrpc.Executor/4::INFO::2016-04-07
> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
> u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'password': '', u'port': u''}],
> options=None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/plexus
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
> ...
>
> The problem seems to have been introduced sometime after March 22nd.
> On this install I had added two additional hosts without problems.
> Three days ago I reinstalled the whole system for testing and
> documentation purposes, but now I am not able to add further hosts.
>
> All the installs follow the same documented procedure. I've verified
> several times that the problem exists with the components in the
> current 3.6 release repo as well as in the 3.6 snapshot repo.
>
> If I check the storage configuration of hosted_engine domain in the
> WebUI it shows glusterfs as VFS type.
>
> The initial mount during the hosted engine setup on the first host
> shows the correct parameters (vfs_type) in vdsm.log:
>
> Thread-42::INFO::2016-04-07
> 14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID='----', conList=[{'id':
> 'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
> 'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
> 'kvm'}], options=None)
> Thread-42::DEBUG::2016-04-07
> 14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
> Creating directory:
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['borg-sphere-one', 'borg-sphere-two',
> 'borg-sphere-three']
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
>
>
> I've already created a bug report, but since I didn't know where to
> put it, I filed it as a VDSM bug, which it doesn't seem to be.
> https://bugzilla.redhat.com/show_bug.cgi?id=1324075
>
>
> I would really like to help resolve this problem. If there is
> anything I can test, please let me know. I appreciate any help in
> this matter.
>
> Currently I'm running an oVirt 3.6 snapshot installation on CentOS 7.2. [...]

[ovirt-users] Can not access storage domain hosted_storage

2016-04-07 Thread Richard Neuboeck
Hi oVirt Users/Developers,

I'm having trouble adding another host to a working hosted engine
setup. Through the WebUI I try to add another host. The package
installation and configuration processes seemingly run without
problems. When the second host tries to mount the engine storage
volume it halts with the WebUI showing the following message:

'Failed to connect Host cube-two to the Storage Domain hosted_engine'

The mount fails, which leaves the host in the 'non operational' state.

Checking the vdsm.log on the newly added host shows that the mount
attempt of the engine volume doesn't use -t glusterfs. On the other
hand the VM storage volume (also a glusterfs volume) is mounted the
right way.
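
For reference, a quick way to pull those lines out on the newly added
host (this assumes the default vdsm log location, /var/log/vdsm/vdsm.log):

# Connection parameters the engine handed to vdsm; the engine volume
# entry is the one missing u'vfs_type': u'glusterfs'.
grep connectStorageServer /var/log/vdsm/vdsm.log | tail -n 5

# Mount commands vdsm actually ran; the engine mount lacks '-t glusterfs'.
grep '/usr/bin/mount' /var/log/vdsm/vdsm.log | tail -n 5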

It seems the Engine configuration that is given to the second host
lacks the vfs_type property. Without glusterfs given as the filesystem
type, the system assumes an NFS mount and obviously fails.

Here are the relevant log lines showing the JSON reply to the
configuration request, the working mount of the VM storage (called
plexus) and the failing mount of the engine storage.

...
jsonrpc.Executor/4::INFO::2016-04-07
15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID=u'0001-0001-0001-0001-03ce', conList=[{u'id':
u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
u'1', u'vfs_type': u'glusterfs', u'password': '', u'port':
u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'password': '', u'port': u''}],
options=None)
...
jsonrpc.Executor/4::DEBUG::2016-04-07
15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
/usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o
backup-volfile-servers=borg-sphere-two:borg-sphere-three
borg-sphere-one:/plexus
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
...
jsonrpc.Executor/4::DEBUG::2016-04-07
15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
/usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-o backup-volfile-servers=borg-sphere-two:borg-sphere-three
borg-sphere-one:/engine
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
...

The problem seems to have been introduced sometime after March 22nd.
On this install I had added two additional hosts without problems.
Three days ago I reinstalled the whole system for testing and
documentation purposes, but now I am not able to add further hosts.

All the installs follow the same documented procedure. I've verified
several times that the problem exists with the components in the
current 3.6 release repo as well as in the 3.6 snapshot repo.

If I check the storage configuration of hosted_engine domain in the
WebUI it shows glusterfs as VFS type.

The initial mount during the hosted engine setup on the first host
shows the correct parameters (vfs_type) in vdsm.log:

Thread-42::INFO::2016-04-07
14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='----', conList=[{'id':
'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
'kvm'}], options=None)
Thread-42::DEBUG::2016-04-07
14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
Creating directory:
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
Thread-42::DEBUG::2016-04-07
14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
Using bricks: ['borg-sphere-one', 'borg-sphere-two',
'borg-sphere-three']
Thread-42::DEBUG::2016-04-07
14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
/usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o
backup-volfile-servers=borg-sphere-two:borg-sphere-three
borg-sphere-one:/engine
/rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)


I've already created a bug report, but since I didn't know where to
put it, I filed it as a VDSM bug, which it doesn't seem to be.
https://bugzilla.redhat.com/show_bug.cgi?id=1324075


I would really like to help resolve this problem. If there is
anything I can test, please let me know. I appreciate any help in
this matter.

Currently I'm running an oVirt 3.6 snapshot installation on CentOS
7.2. The two storage volumes are both replica 3 on separate gluster
storage nodes.
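
For completeness, the volume layout can be checked from any of the
storage nodes with plain gluster commands (the volume names are of
course specific to my setup):

gluster volume info engine
gluster volume info plexus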

Thanks in advance!
Richard

-- 
/dev/null


