[ovirt-users] Re: VDO volume - storage domain size

2019-01-13 Thread Leo David
Thank you very much, Sahina.
I will try the suggested workaround.
Have a nice day,

Leo

On Mon, Jan 14, 2019, 06:47 Sahina Bose wrote:
> On Fri, Jan 11, 2019 at 3:23 PM Leo David wrote:
> >
> > Hello Everyone,
> > I am trying to benefit from VDO capabilities, but it seems that I don't
> have much luck with this.
> > I have created a VDO-based Gluster volume using both of the following
> methods:
> > - during hyperconverged wizard cluster setup, by enabling compression per
> device
> > - after installation, by creating a new Gluster volume and enabling
> compression
> >
> > In both cases, I end up with a Gluster volume/storage domain of the real
> device size, although the VDOs appear in the node's Cockpit UI as being 10
> times the physical device size.
> >
> > i.e.: 3 nodes, each having 1 x 900GB SSD device, turn into a 9TB VDO
> device per host, but the storage domain (Gluster replica 3) ends up as
> being 900GB.
> > Am I missing something, or maybe doing something wrong?
> > Thank you very much !
>
> You're running into this bug -
> https://bugzilla.redhat.com/show_bug.cgi?id=1629543
>
> As a workaround, can you try lvextend on the gluster bricks to make
> use of the available capacity?
>
> >
> > Leo
> >
> >
> >
> > Best regards, Leo David
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5CWQ2AYXCVUG2HTM3ASFDSGPRVX2M2F/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I65VXVDUUTGHHQMHYGJ4XMV4HDVPLCRN/


[ovirt-users] Re: Trunk VLAN Ovirt 4.2

2019-01-13 Thread NUNIN Roberto
After the nodes have joined the oVirt cluster, you must:

1) Create the networks in the Datacenter tab. There you set VLAN tagging, with 
the VLAN ID. You can also specify the network usage (migration, management, VM, 
etc.). You must also assign each network to the cluster containing the nodes.
2) Assign these networks to the nodes from Host > Setup Host Networks, without 
assigning an IP, since they are the "transport" for the VMs that will run on 
the nodes. At the same time, or before, or later, you can create the bond from 
the same dialog. Choose the right bond mode, depending on the switch 
configuration and the modes supported by oVirt.

After these tasks, the networks will go up in the cluster, and you can start to 
deploy VMs and attach them to these "trunked" networks, assigning them IP 
addresses, either fixed or via DHCP.

Hope this helps.
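For reference, the host-side result of the steps above (a bond carrying two tagged VLANs, with no IP on the trunk itself) can be sketched with iproute2. This is only an illustration: the NIC names (em1, em2) and the bond mode are assumptions, the VLAN IDs 56/57 come from the question, and oVirt builds this via VDSM rather than by hand; the script just prints the commands.

```shell
#!/bin/sh
# Dry-run sketch: bond + two tagged VLAN sub-interfaces, no IP on the trunk.
# NIC names (em1, em2) and the bond mode are assumptions; VLAN IDs 56/57
# come from the thread. Commands are printed, not executed.
trunk_sketch() {
    bond=bond0
    echo "ip link add $bond type bond mode 802.3ad"
    for nic in em1 em2; do
        echo "ip link set $nic master $bond"
    done
    for vid in 56 57; do
        # one tagged sub-interface per VLAN; the VMs attached to the
        # logical network, not the host, get IP addresses on these
        echo "ip link add link $bond name $bond.$vid type vlan id $vid"
        echo "ip link set $bond.$vid up"
    done
    echo "ip link set $bond up"
}
trunk_sketch
```

Review the printed commands against your switch configuration before applying anything equivalent by hand.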

Roberto Nunin
Gruppo Comifar



ICT Infrastructure Manager
Nucleo Ind. Sant'Atto - S.Nicolò A Tordino

64100, Teramo (TE) - Italy

Phone: +39 0861 204 415 int: 2415

Fax: +39 02  0804

Mobile: +39 3483019957

Email: roberto.nu...@comifar.it

Web: www.gruppocomifar.it





On Mon, Jan 14, 2019 at 4:08 AM +0100, "Sebastian Antunez N." <antunez.sebast...@gmail.com> wrote:

Hello Guys

I have a lab with 6 oVirt 4.2 nodes. All nodes have 4 x 1GB NICs: two NICs are 
for management, and I will assign two NICs in a bond for VM traffic.

My problem is the following.

The Cisco switch has 2 VLANs (56 and 57), and I need to create a new network 
carrying both VLANs, but I am not clear on how to make the VLANs go through 
this created network. I have read that I must create a network with VLAN 56, 
for example, and assign it an IP, then create the network with VLAN 57 and 
assign it an IP, and later assign them to the bonding.

Is what I describe correct, or should I follow another process?

Thanks for the help.

Sebastian




This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private information. If you have received it in 
error, please notify the sender immediately, deleting the original and all 
copies and destroying any hard copies. Any other use is strictly prohibited and 
may be unlawful.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UHNF2KTCDMMX6IV3ORAQEY7V2NIH425H/


[ovirt-users] Re: VDO volume - storage domain size

2019-01-13 Thread Sahina Bose
On Fri, Jan 11, 2019 at 3:23 PM Leo David  wrote:
>
> Hello Everyone,
> I am trying to benefit from VDO capabilities, but it seems that I don't have 
> much luck with this.
> I have created a VDO-based Gluster volume using both of the following methods:
> - during hyperconverged wizard cluster setup, by enabling compression per device
> - after installation, by creating a new Gluster volume and enabling 
> compression
>
> In both cases, I end up with a Gluster volume/storage domain of the real 
> device size, although the VDOs appear in the node's Cockpit UI as being 10 
> times the physical device size.
>
> i.e.: 3 nodes, each having 1 x 900GB SSD device, turn into a 9TB VDO 
> device per host, but the storage domain (Gluster replica 3) ends up as 
> being 900GB.
> Am I missing something, or maybe doing something wrong?
> Thank you very much !

You're running into this bug -
https://bugzilla.redhat.com/show_bug.cgi?id=1629543

As a workaround, can you try lvextend on the gluster bricks to make
use of the available capacity?
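For anyone hitting the same bug, a rough sketch of that workaround. The VG/LV names and the brick mount point below are assumptions (they vary per deployment), and the script only prints the commands so they can be reviewed before running them for real on each host:

```shell
#!/bin/sh
# Dry-run sketch of the lvextend workaround: grow each Gluster brick LV to
# the logical (VDO) size, then grow the filesystem on top of it.
# VG/LV names and mount point are assumptions; commands are printed only.
extend_brick() {
    lv=/dev/gluster_vg_sdb/gluster_lv_data   # assumed brick LV
    mnt=/gluster_bricks/data                 # assumed brick mount point
    echo "lvextend -l +100%FREE $lv"
    echo "xfs_growfs $mnt"
}
extend_brick
```

Run the equivalent real commands as root on every host, one brick at a time, checking `lvs` and `df` afterwards.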

>
> Leo
>
>
>
> Best regards, Leo David
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5CWQ2AYXCVUG2HTM3ASFDSGPRVX2M2F/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TRIP4T5F6A5O7EMJ5O2FVGUE24D7TVQW/


[ovirt-users] Trunk VLAN Ovirt 4.2

2019-01-13 Thread Sebastian Antunez N.
Hello Guys

I have a lab with 6 oVirt 4.2 nodes. All nodes have 4 x 1GB NICs: two NICs
are for management, and I will assign two NICs in a bond for VM traffic.

My problem is the following.

The Cisco switch has 2 VLANs (56 and 57), and I need to create a new network
carrying both VLANs, but I am not clear on how to make the VLANs go through
this created network. I have read that I must create a network with VLAN 56,
for example, and assign it an IP, then create the network with VLAN 57 and
assign it an IP, and later assign them to the bonding.

Is what I describe correct, or should I follow another process?

Thanks for the help.

Sebastian
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DFV56WEZOIKXMP6K4EBME72BQWMLMUGG/


[ovirt-users] Import existing storage domain into new server

2019-01-13 Thread michael
I had an original ovirt-engine that had stuck tasks and storage issues. I 
decided to load a new engine appliance and import the existing servers and VMs 
into the new engine. This actually worked perfectly until I went to import the 
storage domain into the new engine, which failed. I was unable to move the 
domain to maintenance mode or do anything else from the old engine. Is there 
any way to force the import?

Error messages / logs:
Failed to attach Storage Domains to Data Center Default. (User: 
admin@internal-authz) (in GUI log)

engine.log:
2019-01-13 16:31:27,973-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler8) [176267b0] START, 
GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, 
GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}),
 log id: 2e7162ed
2019-01-13 16:31:28,204-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler8) [176267b0] Could not associate brick 
'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo'
 of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no 
gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,205-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler8) [176267b0] Could not associate brick 
'10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 
'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster 
network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,206-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler8) [176267b0] Could not associate brick 
'10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 
'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster 
network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,208-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler8) [176267b0] Could not associate brick 
'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo'
 of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no 
gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,209-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler8) [176267b0] Could not associate brick 
'10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume 
'7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster 
network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,221-05 WARN  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(DefaultQuartzScheduler8) [176267b0] Could not associate brick 
'10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume 
'7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster 
network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,232-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler8) [176267b0] FINISH, GlusterVolumesListVDSCommand, 
return: 
{7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f,
 
fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb},
 log id: 2e7162ed
2019-01-13 16:31:37,612-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] 
(DefaultQuartzScheduler10) [2876ae97] START, 
GlusterTasksListVDSCommand(HostName = daedalus.redforest.wanderingmad.com, 
VdsIdVDSCommandParametersBase:{hostId='b79fe49c-761f-4710-adf4-8fa6d1143dae'}), 
log id: 30c3bd7c
2019-01-13 16:31:37,863-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] 
(DefaultQuartzScheduler10) [2876ae97] FINISH, GlusterTasksListVDSCommand, 
return: [], log id: 30c3bd7c
2019-01-13 16:31:43,248-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(DefaultQuartzScheduler7) [70a256c0] START, 
GlusterServersListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, 
VdsIdVDSCommandParametersBase:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), 
log id: 7f890a8
2019-01-13 16:31:43,705-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(DefaultQuartzScheduler7) [70a256c0] FINISH, GlusterServersListVDSCommand, 
return: [10.100.50.12/24:CONNECTED, 
icarus.redforest.wanderingmad.com:CONNECTED, 
daedalus.redforest.wanderingmad.com:CONNECTED], log id: 7f890a8
2019-01-13 16:31:43,708-05 INFO  
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
(DefaultQuartzScheduler7) [70a256c0] START, 

[ovirt-users] Re: multiple engines (active passive)

2019-01-13 Thread alex
For what it's worth, we do active/passive with Pacemaker, Corosync, and DRBD. All the configuration files stay synced by DRBD, and Pacemaker ensures the services are only running on one node. It works pretty well.

On Jan 13, 2019 2:33 PM, maoz zadok wrote:
is it good enough to disable the ovirt-engine ("systemctl disable ovirt-engine") on the standby node? If it does, what about the other services? Do I have to disable the following as well?
ovirt-engine-dwhd.service            enabled
ovirt-engine.service                 enabled
ovirt-fence-kdump-listener.service   enabled
ovirt-imageio-proxy.service          enabled
ovirt-provider-ovn.service           enabled
ovirt-vmconsole-proxy-sshd.service   enabled
ovirt-websocket-proxy.service        enabled
Thank you!
maoz

On Sun, Jan 13, 2019 at 3:34 PM Mike wrote:
13.01.2019 10:47, Yedidyah Bar David wrote:

> Most people that need HA engine use ovirt-hosted-engine, 

An HA hosted-engine cannot help if the VM image is broken. HA runs the same 
image on different nodes, and if the currently running VM corrupts its FS, for 
example, it cannot run on the other nodes either.

I wrote about this experience here some time ago.

--
Mike
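The active/passive layout described above (Pacemaker + Corosync + DRBD) might be sketched with pcs roughly as follows. The resource names, the DRBD device, and the mount point are illustrative assumptions, the engine state actually spans several directories, and a real cluster would also need fencing configured; the commands are printed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: pcs commands for an active/passive engine on DRBD.
# Resource names, device, and directory are illustrative assumptions.
pcs_sketch() {
    # filesystem on the DRBD device, mounted only on the active node
    echo "pcs resource create engine_fs ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/var/lib/ovirt-engine fstype=xfs"
    echo "pcs resource create engine_svc systemd:ovirt-engine"
    # keep the service with its filesystem, and start the filesystem first
    echo "pcs constraint colocation add engine_svc with engine_fs INFINITY"
    echo "pcs constraint order engine_fs then engine_svc"
}
pcs_sketch
```

This only shows the shape of the constraints; consult the Pacemaker documentation before building anything like it.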
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WK4IT3NMVOWGIC2OJHO3QCN2TF6P5XW3/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CAXQSP3DG5XZX5LRCZSIQIHGHM3Y2C6Z/


[ovirt-users] Re: multiple engines (active passive)

2019-01-13 Thread maoz zadok
Is it good enough to disable the ovirt-engine ("systemctl disable
ovirt-engine") on the standby node?
If it does, what about the other services? Do I have to disable the
following as well?
ovirt-engine-dwhd.service            enabled
ovirt-engine.service                 enabled
ovirt-fence-kdump-listener.service   enabled
ovirt-imageio-proxy.service          enabled
ovirt-provider-ovn.service           enabled
ovirt-vmconsole-proxy-sshd.service   enabled
ovirt-websocket-proxy.service        enabled

Thank you!
maoz
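The units listed above can be disabled in one loop; here is a sketch that prints the commands instead of running them (whether this set of units is sufficient for a safe standby is exactly the open question of this thread):

```shell
#!/bin/sh
# Dry-run sketch: disable the oVirt services from the thread on the
# standby node. Printed instead of executed; run the real systemctl
# commands as root if this approach turns out to be safe.
disable_standby() {
    for unit in ovirt-engine-dwhd ovirt-engine ovirt-fence-kdump-listener \
                ovirt-imageio-proxy ovirt-provider-ovn \
                ovirt-vmconsole-proxy-sshd ovirt-websocket-proxy; do
        echo "systemctl disable --now $unit.service"
    done
}
disable_standby
```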

On Sun, Jan 13, 2019 at 3:34 PM Mike  wrote:

> 13.01.2019 10:47, Yedidyah Bar David wrote:
>
> > Most people that need HA engine use ovirt-hosted-engine,
>
> An HA hosted-engine cannot help if the VM image is broken. HA runs the same
> image on different nodes, and if the currently running VM corrupts its FS,
> for example, it cannot run on the other nodes either.
>
> I wrote about this experience here some time ago.
>
> --
> Mike
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WK4IT3NMVOWGIC2OJHO3QCN2TF6P5XW3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6DTQQCLV74BRDUQTKE6XFZ5LPZZOXQZV/


[ovirt-users] Re: Ovirt 4.3 / New Install / NFS (broken)

2019-01-13 Thread Nir Soffer
On Sun, Jan 13, 2019 at 4:35 PM Gianluca Cecchi 
wrote:

>
>
> On Sun, Jan 13, 2019 at 12:38 PM Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
>
>> On Sun, Jan 13, 2019 at 10:20 AM Shani Leviim  wrote:
>>
>>> Hi Devin,
>>> This one was solved in the following patch:
>>> https://gerrit.ovirt.org/#/c/96746/
>>>
>>>
>>> *Regards,*
>>>
>>> *Shani Leviim*
>>>
>>>
>>> On Sun, Jan 13, 2019 at 10:13 AM Devin Acosta 
>>> wrote:
>>>
 I installed the latest 4.3 release candidate and tried to add an NFS
 mount to the Data Center, and it errors in the GUI with “Error while
 executing action New NFS Storage Domain: Invalid parameter”, then in the
 vdsm.log I see it is passing “block_size=None”. It does this regardless of
 NFS v3 or v4.

 InvalidParameterException: Invalid parameter: 'block_size=None'


>> Hi,
>> could be the same I have in this thread during single host HCI install:
>> https://www.mail-archive.com/users@ovirt.org/msg52875.html
>>
>
It looks like the same issue.

This was broken for a few days and was fixed last week.


>
>> ?
>>
>> Gianluca
>>
>
> I have not understood where to find the to-be-patched file vdsm-api.yml.
> It seems it is not on the engine or the host...
>

The YAML file is the source; on the host the file is stored in a binary format
that is 100 times faster to load.

Marcin, do we have an easy way to update the yaml on a host without building
vdsm from source?

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XKBMGSI6AUEIWUT2ANRPKUNAXC4BKFTP/


[ovirt-users] Re: [ovirt-devel] Hard disk requirement

2019-01-13 Thread Nir Soffer
On Sun, Jan 13, 2019 at 6:18 PM Hetz Ben Hamo  wrote:

> Hi,
>
> The old oVirt (3.x?) ISO image didn't require a big hard disk in order
> to install the node part on a physical machine, since the HE and other
> parts were running over NFS/iSCSI etc.
>
> oVirt 4.x, if I understand correctly, does require a hard disk on the node.
>

I think you can set up temporary storage on a diskless host using NFS, or
by connecting to a temporary LUN and setting up a file system on it.

Once the bootstrap engine is ready on the "local" storage, we move the
engine disk to shared storage, and you can remove the local storage.

Adding Simone to add more info.


> Can this requirement be avoided by just using an SD card? What's the minimum
> storage for the local node?
>

Another option is to use /dev/shm, if you have a server with a lot of memory.
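A rough sketch of that option, with the mount point and size as assumptions; the commands are printed, not executed:

```shell
#!/bin/sh
# Dry-run sketch: RAM-backed temporary storage for the bootstrap engine.
# Mount point and size are assumptions; printed only.
shm_sketch() {
    echo "mkdir -p /mnt/tmp-engine"
    echo "mount -t tmpfs -o size=60g tmpfs /mnt/tmp-engine"
}
shm_sketch
```

Anything placed there is lost on reboot, which is acceptable only because the engine disk is moved to shared storage afterwards.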

Note that this list is for oVirt developers. This question is more about using
oVirt, so the users mailing list is better. Other users may have already solved
this issue and can help more than developers, who have much less experience
with actual deployment.

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DNQDWN5GR7T2VKFK5A4ISYJLLL4JKXMF/


[ovirt-users] Re: EXTERNAL: Re: multiple engines (active passive)

2019-01-13 Thread Albrecht, Thomas C
In that case, my understanding is that you use the backup feature within 
hosted-engine, and if you need to recover, use that backup, which is an option 
during hosted-engine installation.

I’m all new to this, and I know I haven’t read all the administration 
documentation (though what I have read is very good).  It might be helpful, if 
it doesn’t exist, to have a separate “disaster recovery” practices page to 
cover all the terrible things that could happen to a running system. The first 
part would be activities that administrators should be doing regularly while 
the system is healthy, then a section on testing the validity of those backups. 
Finally, multiple subsections on “if this happens, then use this, this, and 
this that you have for section A to recover your system.”

I’d be willing to help create that, since I need to create the documents for 
our business systems. 

Tom Albrecht III
Cyber Architect
Lockheed Martin RMS

Sent from my iPhone

> On Jan 13, 2019, at 8:34 AM, Mike  wrote:
> 
> 13.01.2019 10:47, Yedidyah Bar David wrote:
> 
>> Most people that need HA engine use ovirt-hosted-engine, 
> 
> An HA hosted-engine cannot help if the VM image is broken. HA runs the same image on 
> different nodes, and if the currently running VM corrupts its FS, for example, it 
> cannot run on the other nodes either.
> 
> I wrote about this experience here some time ago.
> 
> --
> Mike
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WK4IT3NMVOWGIC2OJHO3QCN2TF6P5XW3/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MLXAA7Q4K75X3L4OW55CYH5MLSAUM23Y/


[ovirt-users] Re: Ovirt 4.3 / New Install / NFS (broken)

2019-01-13 Thread Gianluca Cecchi
On Sun, Jan 13, 2019 at 12:38 PM Gianluca Cecchi 
wrote:

> On Sun, Jan 13, 2019 at 10:20 AM Shani Leviim  wrote:
>
>> Hi Devin,
>> This one was solved in the following patch:
>> https://gerrit.ovirt.org/#/c/96746/
>>
>>
>> *Regards,*
>>
>> *Shani Leviim*
>>
>>
>> On Sun, Jan 13, 2019 at 10:13 AM Devin Acosta 
>> wrote:
>>
>>> I installed the latest 4.3 release candidate and tried to add an NFS
>>> mount to the Data Center, and it errors in the GUI with “Error while
>>> executing action New NFS Storage Domain: Invalid parameter”, then in the
>>> vdsm.log I see it is passing “block_size=None”. It does this regardless of
>>> NFS v3 or v4.
>>>
>>> InvalidParameterException: Invalid parameter: 'block_size=None'
>>>
>>>
> Hi,
> could be the same I have in this thread during single host HCI install:
> https://www.mail-archive.com/users@ovirt.org/msg52875.html
>
> ?
>
> Gianluca
>

I have not understood where to find the to-be-patched file vdsm-api.yml.
It seems it is not on the engine or the host...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVBR4LH54D5CSEZUMBFIT2ROTWY5OBCS/


[ovirt-users] Re: multiple engines (active passive)

2019-01-13 Thread Mike

13.01.2019 10:47, Yedidyah Bar David wrote:

Most people that need HA engine use ovirt-hosted-engine, 


An HA hosted-engine cannot help if the VM image is broken. HA runs the same 
image on different nodes, and if the currently running VM corrupts its FS, for 
example, it cannot run on the other nodes either.


I wrote about this experience here some time ago.

--
Mike
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WK4IT3NMVOWGIC2OJHO3QCN2TF6P5XW3/


[ovirt-users] Re: Ovirt 4.3 / New Install / NFS (broken)

2019-01-13 Thread Gianluca Cecchi
On Sun, Jan 13, 2019 at 10:20 AM Shani Leviim  wrote:

> Hi Devin,
> This one was solved in the following patch:
> https://gerrit.ovirt.org/#/c/96746/
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Sun, Jan 13, 2019 at 10:13 AM Devin Acosta 
> wrote:
>
>> I installed the latest 4.3 release candidate and tried to add an NFS
>> mount to the Data Center, and it errors in the GUI with “Error while
>> executing action New NFS Storage Domain: Invalid parameter”, then in the
>> vdsm.log I see it is passing “block_size=None”. It does this regardless of
>> NFS v3 or v4.
>>
>> InvalidParameterException: Invalid parameter: 'block_size=None'
>>
>>
Hi,
could be the same I have in this thread during single host HCI install:
https://www.mail-archive.com/users@ovirt.org/msg52875.html

?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T5C3FMIYSL3RU3XXXVEYMNIBNZGQKMLN/


[ovirt-users] Re: Ovirt 4.3 / New Install / NFS (broken)

2019-01-13 Thread Shani Leviim
Hi Devin,
This one was solved in the following patch:
https://gerrit.ovirt.org/#/c/96746/


*Regards,*

*Shani Leviim*


On Sun, Jan 13, 2019 at 10:13 AM Devin Acosta 
wrote:

> I installed the latest 4.3 release candidate and tried to add an NFS mount
> to the Data Center, and it errors in the GUI with “Error while executing
> action New NFS Storage Domain: Invalid parameter”, then in the vdsm.log I
> see it is passing “block_size=None”. It does this regardless of NFS v3 or v4.
>
> InvalidParameterException: Invalid parameter: 'block_size=None'
>
> 2019-01-12 20:37:58,241-0700 INFO  (jsonrpc/7) [vdsm.api] START
> createStorageDomain(storageType=1,
> sdUUID=u'b30c64c4-4b1f-4ebf-828b-e54c330ae84c', domainName=u'nfsdata',
> typeSpecificArg=u'192.168.19.155:/data/data', domClass=1,
> domVersion=u'4', block_size=None, max_hosts=2000, options=None)
> from=:::192.168.19.178,51042, flow_id=67743df7,
> task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:48)
> 2019-01-12 20:37:58,241-0700 INFO  (jsonrpc/7) [vdsm.api] FINISH
> createStorageDomain error=Invalid parameter: 'block_size=None'
> from=:::192.168.19.178,51042, flow_id=67743df7,
> task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)
> 2019-01-12 20:37:58,241-0700 ERROR (jsonrpc/7) [storage.TaskManager.Task]
> (Task='ad82f581-9638-48f1-bcd9-669b9809b34a') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in createStorageDomain
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2583,
> in createStorageDomain
> alignment = clusterlock.alignment(block_size, max_hosts)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py",
> line 661, in alignment
> raise se.InvalidParameterException('block_size', block_size)
> InvalidParameterException: Invalid parameter: 'block_size=None'
> 2019-01-12 20:37:58,242-0700 INFO  (jsonrpc/7) [storage.TaskManager.Task]
> (Task='ad82f581-9638-48f1-bcd9-669b9809b34a') aborting: Task is aborted:
> u"Invalid parameter: 'block_size=None'" - code 100 (task:1181)
> 2019-01-12 20:37:58,242-0700 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
> createStorageDomain error=Invalid parameter: 'block_size=None'
> (dispatcher:81)
> 2019-01-12 20:37:58,242-0700 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call StorageDomain.create failed (error 1000) in 0.00 seconds (__init__:312)
> 2019-01-12 20:37:58,541-0700 INFO  (jsonrpc/1) [vdsm.api] START
> disconnectStorageServer(domType=1,
> spUUID=u'----', conList=[{u'tpgt': u'1',
> u'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350', u'connection':
> u'192.168.19.155:/data/data', u'iqn': u'', u'user': u'', u'ipv6_enabled':
> u'false', u'protocol_version': u'auto', u'password': '', u'port':
> u''}], options=None) from=:::192.168.19.178,51042,
> flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
> task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:48)
> 2019-01-12 20:37:58,542-0700 INFO  (jsonrpc/1) [storage.Mount] unmounting
> /rhev/data-center/mnt/192.168.19.155:_data_data (mount:212)
> 2019-01-12 20:37:59,087-0700 INFO  (jsonrpc/1) [vdsm.api] FINISH
> disconnectStorageServer return={'statuslist': [{'status': 0, 'id':
> u'db7d16c8-7497-42db-8a75-81cb7f9d3350'}]}
> from=:::192.168.19.178,51042,
> flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
> task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:54)
> 2019-01-12 20:37:59,089-0700 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
> call StoragePool.disconnectStorageServer succeeded in 0.55 seconds
> (__init__:312)
>
> Devin Acosta
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XFEIW6MUHJK5V5IMENBOPGLMSC2JZGGR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4BOUL4U5WI72LYK26X5L6CIPWTFJH5HV/


[ovirt-users] Re: ovirt 4.3 / Adding NFS storage issue - block_size=None

2019-01-13 Thread Eyal Shenitzky
Already fixed by the following patch - https://gerrit.ovirt.org/#/c/96746/

On Sun, Jan 13, 2019 at 10:45 AM Yedidyah Bar David  wrote:

> Adding Nir and Freddy, changing Subject.
>
> On Sun, Jan 13, 2019 at 10:08 AM Devin Acosta 
> wrote:
> >
> >
> > I installed the latest 4.3 release candidate and tried to add an NFS
> mount to the Data Center, and it errors in the GUI with “Error while
> executing action New NFS Storage Domain: Invalid parameter”, then in the
> vdsm.log I see it is passing “block_size=None”. It does this regardless of
> NFS v3 or v4.
> >
> > InvalidParameterException: Invalid parameter: 'block_size=None'
> >
> > [vdsm.log excerpt snipped; identical to the log in Devin Acosta's original post later on this page]
>
>
>
> --
> Didi


-- 
Regards,
Eyal Shenitzky

[ovirt-users] Re: ovirt 4.3 / Adding NFS storage issue - block_size=None

2019-01-13 Thread Yedidyah Bar David
Adding Nir and Freddy, changing Subject.

On Sun, Jan 13, 2019 at 10:08 AM Devin Acosta  wrote:
>
>
> I installed the latest 4.3 release candidate and tried to add an NFS mount to 
> the Data Center. It errors in the GUI with “Error while executing action 
> New NFS Storage Domain: Invalid parameter”, and in vdsm.log I see it is 
> passing “block_size=None”. This happens regardless of whether NFS v3 or v4 is used.
>
> InvalidParameterException: Invalid parameter: 'block_size=None'
>
> [vdsm.log excerpt snipped; identical to the log in Devin Acosta's original post later on this page]



-- 
Didi


[ovirt-users] Ovirt 4.3 / New Install / NFS (broken)

2019-01-13 Thread Devin Acosta
I installed the latest 4.3 release candidate and tried to add an NFS mount
to the Data Center. It errors in the GUI with “Error while executing
action New NFS Storage Domain: Invalid parameter”, and in vdsm.log I see
it is passing “block_size=None”. This happens regardless of whether NFS
v3 or v4 is used.

InvalidParameterException: Invalid parameter: 'block_size=None'

2019-01-12 20:37:58,241-0700 INFO  (jsonrpc/7) [vdsm.api] START
createStorageDomain(storageType=1,
sdUUID=u'b30c64c4-4b1f-4ebf-828b-e54c330ae84c', domainName=u'nfsdata',
typeSpecificArg=u'192.168.19.155:/data/data', domClass=1, domVersion=u'4',
block_size=None, max_hosts=2000, options=None)
from=:::192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:48)
2019-01-12 20:37:58,241-0700 INFO  (jsonrpc/7) [vdsm.api] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
from=:::192.168.19.178,51042, flow_id=67743df7,
task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)
2019-01-12 20:37:58,241-0700 ERROR (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
  File "", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2583,
in createStorageDomain
alignment = clusterlock.alignment(block_size, max_hosts)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
661, in alignment
raise se.InvalidParameterException('block_size', block_size)
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,242-0700 INFO  (jsonrpc/7) [storage.TaskManager.Task]
(Task='ad82f581-9638-48f1-bcd9-669b9809b34a') aborting: Task is aborted:
u"Invalid parameter: 'block_size=None'" - code 100 (task:1181)
2019-01-12 20:37:58,242-0700 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH
createStorageDomain error=Invalid parameter: 'block_size=None'
(dispatcher:81)
2019-01-12 20:37:58,242-0700 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call StorageDomain.create failed (error 1000) in 0.00 seconds (__init__:312)
2019-01-12 20:37:58,541-0700 INFO  (jsonrpc/1) [vdsm.api] START
disconnectStorageServer(domType=1,
spUUID=u'----', conList=[{u'tpgt': u'1',
u'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350', u'connection':
u'192.168.19.155:/data/data', u'iqn': u'', u'user': u'', u'ipv6_enabled':
u'false', u'protocol_version': u'auto', u'password': '', u'port':
u''}], options=None) from=:::192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:48)
2019-01-12 20:37:58,542-0700 INFO  (jsonrpc/1) [storage.Mount] unmounting
/rhev/data-center/mnt/192.168.19.155:_data_data (mount:212)
2019-01-12 20:37:59,087-0700 INFO  (jsonrpc/1) [vdsm.api] FINISH
disconnectStorageServer return={'statuslist': [{'status': 0, 'id':
u'db7d16c8-7497-42db-8a75-81cb7f9d3350'}]}
from=:::192.168.19.178,51042,
flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c,
task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:54)
2019-01-12 20:37:59,089-0700 INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call StoragePool.disconnectStorageServer succeeded in 0.55 seconds
(__init__:312)
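[Editor's note: when working through excerpts like the one above, it helps to correlate the START/FINISH lines of one request by their IDs. A small sketch follows; the regex and the helper name are illustrative, not part of vdsm.]

```python
import re

# Pull flow_id and task_id out of a vdsm.api log line so that START/FINISH
# entries belonging to the same request can be matched up. Hypothetical helper.
ID_RE = re.compile(r"flow_id=(?P<flow>[0-9a-f-]+).*task_id=(?P<task>[0-9a-f-]+)")


def request_ids(line):
    m = ID_RE.search(line)
    return (m.group("flow"), m.group("task")) if m else None


line = ("2019-01-12 20:37:58,241-0700 INFO  (jsonrpc/7) [vdsm.api] FINISH "
        "createStorageDomain error=Invalid parameter: 'block_size=None' "
        "from=:::192.168.19.178,51042, flow_id=67743df7, "
        "task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)")

print(request_ids(line))
# ('67743df7', 'ad82f581-9638-48f1-bcd9-669b9809b34a')
```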

Devin Acosta


[ovirt-users] ovirt 4.3 / Adding NFS storage issue

2019-01-13 Thread Devin Acosta
I installed the latest 4.3 release candidate and tried to add an NFS mount
to the Data Center. It errors in the GUI with “Error while executing
action New NFS Storage Domain: Invalid parameter”, and in vdsm.log I see
it is passing “block_size=None”. This happens regardless of whether NFS
v3 or v4 is used.

InvalidParameterException: Invalid parameter: 'block_size=None'

[vdsm.log excerpt snipped; identical to the log in the previous message]