[ovirt-users] Re: Node losing management network address?

2019-03-02 Thread Edward Haas
On Thu, Feb 28, 2019 at 6:27 PM Juhani Rautiainen <
juhani.rautiai...@gmail.com> wrote:

> On Thu, Feb 28, 2019 at 3:01 PM Edward Haas  wrote:
> >
> > On boot, VDSM attempts to apply its persisted network desired
> configuration through ifcfg.
> > Even though you may have fixed the ifcfg files, the persisted VDSM
> config may have been set with DHCP, therefore your changes would get
> overwritten on boot or when the synchronization was issued from Engine.
>
> This is what I thought when I saw it coming up with DHCP.
>
> > When you get an unsync warning, fixes on the host should match with the
> persisted config known to VDSM.
>
> How can you see the persistent config? At least the webadmin is maybe
> too user friendly and just offers the resync. It doesn't tell what is
> going to do when it does resync. I would have not started the
> operation if I had known that it was going to switch to DHCP.
>

Engine will pass a command to VDSM per the configuration you see on the
network attachment window.
If there was something else on the host, it will be changed based on this
command.
Even if the parameters are grayed out, it should still show you the current
desired state.
If you spot that something is not correct, uncheck the `resync`
checkbox, edit the values and then apply.
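For anyone who wants to compare the two sides before clicking resync, a minimal
read-only check could look like the sketch below (the persistence path is the
stock VDSM location and vdsm-client ships with recent VDSM releases - both are
assumptions about your particular install):

    # What VDSM will re-apply on boot for ovirtmgmt (JSON, including bootproto)
    cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt

    # What the host is running right now, as VDSM reports it back to Engine
    vdsm-client Host getCapabilities | grep -A8 '"ovirtmgmt"'

If the desired state shown in Engine (or the persisted file) still says dhcp
while the ifcfg file was hand-edited to static, a resync will bring DHCP back,
which matches what Juhani saw.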


> > Usually the persisted state in VDSM and the one in Engine are the same,
> unless something very strange has happened... like moving hosts between
> Engine servers or self editing on one side without it reaching the other
> side.
>
> I had to move hosts to a different cluster (same HE). That was
> recommended here because of the EPYC migration problems when upgrading
> to 4.3. Discussions are in the list archives and in bug 1672859 for
> that one. Maybe this was a side effect of that.
>
> Thanks,
> -Juhani
>
> >
> > On Thu, Feb 28, 2019 at 9:34 AM Juhani Rautiainen <
> juhani.rautiai...@gmail.com> wrote:
> >>
> >> On Wed, Feb 27, 2019 at 6:26 PM Dominik Holler wrote:
> >> >
> >>
> >> >
> >> > This is a valid question.
> >> >
> >> > > > > I noticed that one node had ovirtmgmt network unsynchronized. I
> tried
> >> >
> >> > oVirt detected a difference between the expected configuration and
> applied
> >> > configuration. This might happen if the interface configuration is
> changed
> >> > directly on the host instead of using oVirt Engine.
> >> >
> >> > > > > to resynchronize it.
> >> >
> >> > If you have the vdsm.log, the relevant lines start at the pattern
> >> > Calling 'Host.setupNetworks'
> >> > and ends at the pattern
> >> > FINISH getCapabilities
> >>
> >> This gave some clues. See the log below. IMHO it points to the engine
> >> getting something wrong, because it seems to ask for a DHCP setup in the request.
> >> It probably fails because the address change succeeds and the network
> >> connection is then torn down.
> >>
> >> 2019-02-25 12:41:38,478+0200 WARN  (vdsm.Scheduler) [Executor] Worker
> >> blocked:  >> {u'bondings': {}, u'networks': {u'ovirtmgmt': {u'ipv6autoconf': False,
> >> u'nic': u'eno1', u'mtu': 1500, u'switch': u'legacy', u'dhcpv6': False,
> >> u'STP': u'no', u'bridged': u'true', u'defaultRoute': True,
> >> u'bootproto': u'dhcp'}}, u'options': {u'connectivityCheck': u'true',
> >> u'connectivityTimeout': 120, u'commitOnSuccess': True}}, 'jsonrpc':
> >> '2.0', 'method': u'Host.setupNetworks', 'id':
> >> u'2ca75cf3-6410-43b4-aebf-cdc3f262e5c2'} at 0x7fc9c95ef350>
> >> timeout=60, duration=60.01 at 0x7fc9c961e090> task#=230106 at
> >> 0x7fc9a43a1fd0>, traceback:
> >> File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
> >>   self.__bootstrap_inner()
> >> File: "/usr/lib64/python2.7/threading.py", line 812, in
> __bootstrap_inner
> >>   self.run()
> >> File: "/usr/lib64/python2.7/threading.py", line 765, in run
> >>   self.__target(*self.__args, **self.__kwargs)
> >> File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py",
> >> line 195, in run
> >>   ret = func(*args, **kwargs)
> >> File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in
> _run
> >>   self._execute_task()
> >> File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315,
> >> in _execute_task
> >>   task()
> >> File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in
> __call__
> >>   self._callable()
> >> File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> >> 262, in __call__
> >>   self._handler(self._ctx, self._req)
> >> File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> >> 305, in _serveRequest
> >>   response = self._handle_request(req, ctx)
> >> File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> >> 345, in _handle_request
> >>   res = method(**params)
> >> File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194,
> >> in _dynamicMethod
> >>   result = fn(*methodArgs)
> >> File: "", line 2, in setupNetworks
> >> File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50,
> in method
> >>   ret = func(*args, **kwargs)
> >> Fi

[ovirt-users] Re: Advice around ovirt 4.3 / gluster 5.x

2019-03-02 Thread Strahil
I think that there are updates via 4.3.1.

Have you checked for gluster updates?
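(For reference, a quick way to check from a host itself - assuming a stock
CentOS/oVirt Node install managed with yum - would be something like:

    yum list updates 'glusterfs*'
    gluster --version

and then compare what is offered against the gluster version shipped with 4.3.1.)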

Best Regards,
Strahil Nikolov

On Mar 2, 2019 23:16, Endre Karlson wrote:
>
> Hi, should we downgrade / reinstall our cluster? We have a 4-node cluster
> that's breaking apart daily due to the issues with GlusterFS after upgrading
> from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a
> stable version at all?? **FRUSTRATION**
>
> Endre


[ovirt-users] Advice around ovirt 4.3 / gluster 5.x

2019-03-02 Thread Endre Karlson
Hi, should we downgrade / reinstall our cluster? We have a 4-node cluster
that's breaking apart daily due to the issues with GlusterFS after upgrading
from 4.2.8, which was rock solid. I am wondering why 4.3 was released as a
stable version at all?? **FRUSTRATION**

Endre


[ovirt-users] Re: Cannot assign Network to Mellanox Interface

2019-03-02 Thread Vinícius Ferrão
Interesting.

I have a lot of stability issues with IPoIB on EL7. On Windows it works
flawlessly, but on Linux it is just unstable, so I never had the courage to put a
hypervisor storage network over IPoIB.

Are you using Mellanox OFED, or the upstream one?

Is the iSCSI software-only, with no RDMA involved? Isn't CPU being wasted in
this scenario, since ConnectX-3 does not offload TCP/IP connections?

Finally, is the IPoIB mode connected or datagram? Have you tuned the MTU
settings?
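(For what it's worth, a hedged way to answer that from the host itself - the ib0
interface name is an assumption - is:

    cat /sys/class/net/ib0/mode          # prints "connected" or "datagram"
    ip link show ib0 | grep -o 'mtu [0-9]*'

Connected mode allows a much larger MTU, up to 65520, than datagram mode.)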

Thanks,


Sent from my iPhone

> On 1 Mar 2019, at 21:36, Mike Blanchard  wrote:
> 
> Hi Vinicius, I'm sorry, I thought I had replied already.
> 
> I'm mostly using IPoIB, but Gluster is also using some RDMA.  I'm using it in a
> few different ways:
> 5 IB nodes, all with dual-port ConnectX-3 cards @ 40Gb
> 
> 2 nodes are running Windows; one port on each is used as a sync channel for
> StarWind VSAN to mirror ~20TB of data.  The second ports are used for iSCSI
> sharing and SMB file shares for the other IB nodes.
> 3 nodes are running CentOS 7 and oVirt.  One of the ports on each is used
> for Gluster synchronization; Gluster is set to use both IPoIB & RDMA.  The
> other ports are used for migrations, iSCSI sharing, and a backup admin
> channel.
> 
>> On Fri, Mar 1, 2019 at 1:36 PM Vinícius Ferrão wrote:
>> Darkytoo, have you received my message?
>> 
>> I’m really curious about your setup using Infiniband.
>> 
>> Thanks,
>> 
>> > On 14 Feb 2019, at 07:42, Vinícius Ferrão wrote:
>> > 
>> > Are you guys running IPoIB for InfiniBand storage? Is it true IB mode, or
>> > Ethernet over IB?
>> > 
>> > If you're using NFS on top of it, is RDMA working
>> > with NFSoRDMA?
>> > 
>> > I’m really curious about your setup.
>> > 
>> > Thanks,
>> > 
>> > Sent from my iPhone
>> > 
>> >> On 10 Feb 2019, at 23:55, darky...@gmail.com wrote:
>> >> 
>> >> You were just like me: I also thought it would pick up settings from
>> >> Cockpit, but once I realized you had to configure the network again when
>> >> adding it to the host, everything sort of fell into line.  I would make sure
>> >> to monitor the networks so you can see the traffic going through the
>> >> InfiniBand and not the Ethernet.
>> >> ___
>> >> Users mailing list -- users@ovirt.org
>> >> To unsubscribe send an email to users-le...@ovirt.org
>> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> >> oVirt Code of Conduct: 
>> >> https://www.ovirt.org/community/about/community-guidelines/
>> >> List Archives: 
>> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WC6W7KRXDENXVCTXW6F24742YKDE6ZHK/
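(Following up on the "monitor the networks" advice quoted above: a hedged way to
verify the traffic really rides the IPoIB link rather than the Ethernet one -
the ib0/em1 interface names are assumptions - is to watch the byte counters:

    watch -n1 'ip -s link show ib0; ip -s link show em1'

or, if sysstat is installed, run sar -n DEV 1 for per-interface rates.)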


[ovirt-users] Re: Mounting CephFS

2019-03-02 Thread Strahil
Hi Leo,

Check libvirt's logs on the destination host.
Maybe they can provide more information.
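A couple of standard places to look on the destination host (stock libvirt and
VDSM log locations on EL7; the VM name comes from the engine log quoted below):

    less /var/log/libvirt/qemu/centos7-test.log
    grep -iE 'destroyed|error' /var/log/vdsm/vdsm.log | tail -n 50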

Best Regards,
Strahil Nikolov

On Mar 2, 2019 15:40, Leo David wrote:
>
> Thank you,
> I am trying to migrate a VM that has its disks on CephFS (as a POSIX domain -
> mounted on all hosts), and it does not work. Not sure if this is normal,
> considering the VM disks are on this type of storage.  The error logs in
> engine are:
>
> 2019-03-02 13:35:03,483Z ERROR 
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
> (ForkJoinPool-1-worker-0) [] Migration of VM 'centos7-test' to host 
> 'node1.internal' failed: VM destroyed during the startup.
> 2019-03-02 13:35:03,505Z ERROR 
> [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] 
> (ForkJoinPool-1-worker-14) [] Rerun VM 
> 'bec1cd40-9d62-4f6d-a9df-d97a79584441'. Called from VDS 'node2.internal'
> 2019-03-02 13:35:03,566Z ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engine-Thread-42967) [] EVENT_ID: 
> VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: centos7-test, 
> Source: node2.internal, Destination: node1.internal).
>
> Any thoughts ?
>
> Thanks,
>
> Leo
>
>
> On Sat, Mar 2, 2019 at 11:59 AM Strahil  wrote:
>>
>> If you mean storage migration - could be possible.
>> If it is about live migration between hosts - shouldn't happen.
>> Anything in the logs ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mar 2, 2019 09:23, Leo David  wrote:
>>>
>>> Thank you Strahil, yes thought about that too, I'll give it a try.
> >>> Now (to be a bit off-topic), it seems that I can't live migrate the VM,
> >>> even though the CephFS mountpoint exists on all the hosts.
> >>> Could it be that the storage type is "posix" and live migration
> >>> is therefore not possible?
>>>
>>> Thank you !
>>>
>>> On Sat, Mar 2, 2019, 04:05 Strahil  wrote:

 Can you try to set the credentials in a file (don't recall where that was
 for ceph), so you can mount without specifying user/pass?

 Best Regards,
 Strahil Nikolov

 On Mar 1, 2019 13:46, Leo David  wrote:
>
> Hi Everyone,
> I am trying to mount cephfs as a posix storage domain and getting an 
> error in vdsm.log, although the direct command run on the node " mount -t 
> ceph 10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o 
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. 
> I have configured:
> Storage type: POSIX compliant FS
> Path: 10.10.6.1:/sata/ovirt-data
> VFS Type: ceph
> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>
>
> 2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting 
> 10.10.6.1:/sata/ovirt-data at 
> /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
> 2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] 
> RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not 
> connect to storageServer (hsm:2414)


[ovirt-users] oVirt 4.3 and dual socket hosts : numad?

2019-03-02 Thread Guillaume Pavese
Is it recommended to enable the numad service for automatic NUMA memory
optimization?

I could not find any recommendation on that front for oVirt; all
benchmarks and documentation I could find are from around 2015.

I see that kernel automatic NUMA balancing is on:

cat /proc/sys/kernel/numa_balancing
1


Not sure which is better as of now.
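For completeness, if someone did want to try numad alongside (or instead of) the
kernel's automatic balancing, the stock EL7 package and service names would be
(this is not an oVirt recommendation, just the usual RHEL/CentOS bits):

    yum install numad
    systemctl enable --now numad
    numastat -c qemu-kvm    # per-NUMA-node memory placement of the qemu processes

Whether it makes sense to run both at the same time is exactly the open question
here.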

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-03-02 Thread Douglas Duckworth
Updating to 4.2.8 fixed this problem.  I can now hot-add more RAM to the hosted
engine!

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Fri, Feb 1, 2019 at 10:31 AM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
Ok sounds good.

I will upgrade then hope this goes away.

On Thu, Jan 31, 2019, 12:09 PM Simone Tiraboschi <stira...@redhat.com> wrote:


On Thu, Jan 31, 2019 at 4:20 PM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
Hi Simone

Thanks again for your help!

Do you have some ideas on what I can try to resolve this issue?

Honestly I'm not able to reproduce this issue.
I can only suggest trying to upgrade to 4.2.8 if you are not on it yet, and if it is
still not working, open a bug on Bugzilla and attach engine.log.



Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Fri, Jan 25, 2019 at 3:15 PM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
Yes, I do.  Gold crown indeed.

It's the "HostedEngine" as seen attached!


Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Wed, Jan 23, 2019 at 12:02 PM Simone Tiraboschi <stira...@redhat.com> wrote:


On Wed, Jan 23, 2019 at 5:51 PM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
Hi Simone

Can I get help with this issue?  Still cannot increase memory for Hosted Engine.

From the logs it seems that the engine is trying to hot-plug memory into the
engine VM, which is something that should not happen.
The engine should simply update the engine VM configuration in the OVF_STORE and
require a reboot of the engine VM.
Quick question: in the VM panel, do you see a gold crown symbol on the Engine VM?
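A couple of hedged checks that can be run from the hosted-engine host, using only
the standard hosted-engine CLI plus basic arithmetic (nothing version-specific is
assumed): confirm where the engine VM runs, and pick a size that is a multiple of
256 MiB so the FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE error quoted further down
cannot trigger:

    hosted-engine --vm-status                    # which host currently runs the engine VM
    python -c 'print(4000 % 256, 4096 % 256)'    # 160 vs 0: 4000 MiB is not a multiple of 256 MiB, 4096 MiB is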


Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


On Thu, Jan 17, 2019 at 8:08 AM Douglas Duckworth <dod2...@med.cornell.edu> wrote:
Sure, they're attached.  In "first attempt" the error seems to be:

2019-01-17 07:49:24,795-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-29) [680f82b3-7612-4d91-afdc-43937aa298a2] EVENT_ID: 
FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE(2,048), Failed to hot plug memory to VM 
HostedEngine. Amount of added memory (4000MiB) is not dividable by 256MiB.

Followed by:

2019-01-17 07:49:24,814-05 WARN  
[org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-29) [26f5f3ed] 
Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. 
Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:49:24,815-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] 
(default task-29) [26f5f3ed] Updating RNG device of VM HostedEngine 
(adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = 
VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', 
specParams='[source=urandom]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDevice='null'}. New RNG device = 
VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', 
specParams='[source=urandom]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDevice='null'}.

In "second attempt" I used values that are dividable by 256 MiB so that's no 
longer present.  Though same error:

2019-01-17 07:56:59,795-05 INFO  
[org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) 
[7059a48f] START, SetAmountOfMemoryVDSCommand(HostName = 
ovirt-hv1.med.cornell.edu, 
Params:{hostId='cdd5ffda-95c7-4ffa-ae40-be66f1d15c30', 
vmId='adf14389-1563-4b1a-9af6-4b40370a825b', 
memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='7f7d97cc-c273-4033-af53-bc9033ea3abe',
 vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='memory', type='MEMORY', 
specParams='[node=0, size=2048]', address='', managed='true', plugged='true', 
readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', 
logicalName='null', hostDev

[ovirt-users] Re: Mounting CephFS

2019-03-02 Thread Leo David
Thank you,
I am trying to migrate a VM that has its disks on CephFS (as a POSIX domain
- mounted on all hosts), and it does not work. Not sure if this is
normal, considering the VM disks are on this type of storage.  The error
logs in engine are:

2019-03-02 13:35:03,483Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-0) [] Migration of VM 'centos7-test' to host
'node1.internal' failed: VM destroyed during the startup.
2019-03-02 13:35:03,505Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-14) [] Rerun VM
'bec1cd40-9d62-4f6d-a9df-d97a79584441'. Called from VDS 'node2.internal'
2019-03-02 13:35:03,566Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-42967) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: centos7-test,
Source: node2.internal, Destination: node1.internal).

Any thoughts ?

Thanks,

Leo


On Sat, Mar 2, 2019 at 11:59 AM Strahil  wrote:

> If you mean storage migration - could be possible.
> If it is about live migration between hosts - shouldn't happen.
> Anything in the logs ?
>
> Best Regards,
> Strahil Nikolov
> On Mar 2, 2019 09:23, Leo David  wrote:
>
> Thank you Strahil, yes thought about that too, I'll give it a try.
> Now (to be a bit off-topic), it seems that I can't live migrate the VM,
> even though the CephFS mountpoint exists on all the hosts.
> Could it be that the storage type is "posix" and live migration
> is therefore not possible?
>
> Thank you !
>
> On Sat, Mar 2, 2019, 04:05 Strahil  wrote:
>
> Can you try to set the credentials in a file (don't recall where that was
> for ceph), so you can mount without specifying user/pass?
>
> Best Regards,
> Strahil Nikolov
> On Mar 1, 2019 13:46, Leo David  wrote:
>
> Hi Everyone,
> I am trying to mount cephfs as a posix storage domain and getting an error
> in vdsm.log, although the direct command run on the node " mount -t ceph
> 10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I
> have configured:
> Storage type: POSIX compliant FS
> Path: 10.10.6.1:/sata/ovirt-data
> VFS Type: ceph
> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>
>
> 2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting
> 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
> (mount:204)
> 2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not
> connect to storageServer (hsm:2414)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411,
> in connectStorageServer
> conObj.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 180, in connect
> six.reraise(t, v, tb)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 172, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 207, in mount
> cgroup=cgroup)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 56, in __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 54, in 
> **kwargs)
>   File "", line 2, in mount
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod
> raise convert_to_error(kind, result)
> MountError: (1, ';mount: unsupported option format:
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
> Any thoughts on this - what could be wrong with the options field?
> Using oVirt 4.3.1
> Thank you very much and have a great day!
>
> Leo
>
> --
> Best regards, Leo David
>
>

-- 
Best regards, Leo David
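A minimal sketch of Strahil's credentials-file suggestion from earlier in the
thread, assuming the stock mount.ceph secretfile= option; whether oVirt/VDSM
passes it through unchanged in a POSIX domain's mount options is an assumption
worth testing by hand first:

    # keep the key out of the option string
    install -m 600 /dev/null /etc/ceph/admin.secret
    echo 'AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==' > /etc/ceph/admin.secret

    # manual test on a host, then mirror it in the domain's "Mount Options" field
    mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ \
        -o name=admin,secretfile=/etc/ceph/admin.secret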


[ovirt-users] Re: Mounting CephFS

2019-03-02 Thread Strahil
If you mean storage migration - could be possible.
If it is about live migration between hosts - shouldn't happen.
Anything in the logs ?

Best Regards,
Strahil Nikolov

On Mar 2, 2019 09:23, Leo David wrote:
>
> Thank you Strahil, yes thought about that too, I'll give it a try.
> Now (to be a bit off-topic), it seems that I can't live migrate the VM, even
> though the CephFS mountpoint exists on all the hosts.
> Could it be that the storage type is "posix" and live migration is therefore
> not possible?
>
> Thank you !
>
> On Sat, Mar 2, 2019, 04:05 Strahil  wrote:
>>
>> Can you try to set the credentials in a file (don't recall where that was
>> for ceph), so you can mount without specifying user/pass?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mar 1, 2019 13:46, Leo David  wrote:
>>>
>>> Hi Everyone,
>>> I am trying to mount cephfs as a posix storage domain and getting an error 
>>> in vdsm.log, although the direct command run on the node " mount -t ceph 
>>> 10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o 
>>> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I 
>>> have configured:
>>> Storage type: POSIX compliant FS
>>> Path: 10.10.6.1:/sata/ovirt-data
>>> VFS Type: ceph
>>> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>>>
>>>
>>> 2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting 
>>> 10.10.6.1:/sata/ovirt-data at 
>>> /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data (mount:204)
>>> 2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC 
>>> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
>>> 2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not 
>>> connect to storageServer (hsm:2414)
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, 
>>> in connectStorageServer
>>>     conObj.connect()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", 
>>> line 180, in connect
>>>     six.reraise(t, v, tb)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", 
>>> line 172, in connect
>>>     self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, 
>>> in mount
>>>     cgroup=cgroup)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 
>>> 56, in __call__
>>>     return callMethod()
>>>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 
>>> 54, in 
>>>     **kwargs)
>>>   File "", line 2, in mount
>>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
>>> _callmethod
>>>     raise convert_to_error(kind, result)
>>> MountError: (1, ';mount: unsupported option format:  
>>> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
>>> Any thoughts on this - what could be wrong with the options field?
>>> Using oVirt 4.3.1
>>> Thank you very much and have a great day!
>>>
>>> Leo
>>>
>>> -- 
>>> Best regards, Leo David