[ovirt-users] Re: Migrate HE between ISCSI storages

2019-03-13 Thread kiv
I went the other way. I created another HE on a new host, set up an export 
domain on a separate server, connected it to both HE environments, and migrated 
all VMs through export/import between them. Everything went quickly and without 
problems.
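For reference, a minimal sketch of the export-domain side on the separate 
server, assuming a plain NFS export (path and options are illustrative; oVirt 
expects the export to be owned by vdsm:kvm, i.e. UID/GID 36), and keeping in 
mind that an export domain can only be attached to one data center at a time, 
so detach it from the first setup before attaching it to the second:

$ mkdir -p /exports/export-domain
$ chown 36:36 /exports/export-domain
$ echo '/exports/export-domain *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
$ exportfs -ra    # then add it as an Export domain in the Administration Portal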
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QR722D7VHFLPM2ZWJKHT3WM4E4VVYHAH/


[ovirt-users] Re: VM has been paused due to a storage I/O error

2019-03-13 Thread xil...@126.com
Thank you for your reply. I use GlusterFS storage with three replicas. When one 
of my nodes went down, HostedEngine was suspended for a period of time. I/O 
errors were reported when I tried to resume the HostedEngine virtual machine 
with virsh, and it returned to normal when I restarted HostedEngine.



xil...@126.com
 
From: Gianluca Cecchi
Date: 2019-03-13 22:32
To: xilazz
CC: users
Subject: Re: [ovirt-users] VM has been paused due to a storage I/O error
On Wed, Mar 13, 2019 at 3:56 AM  wrote:
Hi, everyone, there is a VM in the HostedEngine virtual machine that has been 
paused due to a storage I/O error, but engine manager services are normal, what 
is the problem?
___


Which kind of storage?
Sometimes when you have block-based storage (FC, iSCSI) and a disk configured 
as thin provisioned, this can happen when heavy and/or rapid I/O outpaces the 
LVM extend operations that grow the volume.
If this is your case and a VM needs sustained I/O, you should configure its 
related disk as preallocated. 
See this thread about 2 years ago for example:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/S3LXEJV3V4CIOTQXNGZYVZFUSDSQZQJS/
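If it helps, a rough way to confirm you are hitting the thin-LV extension path 
(a sketch, not an official procedure; the VG name is a placeholder and the log 
pattern may differ between vdsm versions):

$ grep -i extend /var/log/vdsm/vdsm.log | tail -20   # vdsm logs volume extension requests
$ lvs --units g -o lv_name,lv_size <storage_domain_vg>   # re-run under load and watch the LV grow

If the extensions cannot keep up with the writes, moving the disk to a 
preallocated copy avoids the pauses.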

HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q3LKCVCCMIK6FTDZFWUL3ZXNODEQACMO/


[ovirt-users] Re: Hi question about Version 3 API deprecation.

2019-03-13 Thread Staniforth, Paul
FYI
It looks like virt-viewer 7 is in the RHEL 8 Beta.
The latest version of virt-viewer is 8, which was just released, and version 6 
changed the API version to 4.

Maybe you could install a later version using Flatpak or Snap.

Regards,
   Paul S.
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQU4YEXVAVMAF6INTB2GJWZ7R45NEBJ2/


[ovirt-users] ovn-provider-network

2019-03-13 Thread Staniforth, Paul
Hello,

  we are using oVirt 4.2.8 and I have created a logical network 
using the ovn-network-provider; I haven't configured it to connect to a 
physical network.


I have 2 VMs running on 2 hosts which can connect to each other over this 
logical network. The only connection between the hosts is the ovirtmgmt 
network, so presumably the traffic is going over it?
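If useful, a quick way to confirm that from one of the hosts (a sketch; 
interface names vary, and this assumes the default Geneve tunnelling used by 
ovirt-provider-ovn):

$ ovs-vsctl show    # look for ports of type geneve and their options:remote_ip
$ tcpdump -nni ovirtmgmt udp port 6081 -c 20    # Geneve traffic while the two VMs talk

If the remote_ip values are the peer hosts' ovirtmgmt addresses, the OVN 
traffic is indeed tunnelled over ovirtmgmt.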


Thanks,

   Paul S.

To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B22LIMO6RI4SBYAOVDRWPQX3UUUYTUGL/


[ovirt-users] Re: ConnectStoragePoolVDS failed

2019-03-13 Thread Strahil
There were some issues with the migration.
Check that all files/directories are owned by vdsm:kvm.
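For example, something along these lines (a sketch; the mount path is a 
placeholder for your storage domain):

$ find /rhev/data-center/mnt/<your_domain_mount> \( -not -user vdsm -o -not -group kvm \) -ls
$ chown -R vdsm:kvm /rhev/data-center/mnt/<your_domain_mount>   # only if the find above reports anything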

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZBDHK2A6T54SR6BRM5W7WQ3Y5MTQ5QUM/


[ovirt-users] Re: ConnectStoragePoolVDS failed

2019-03-13 Thread alexeynikolaev
Sandro, thanks for the help!

The problem is with the volume msk-gluster-facility.: /data

Log file here: https://yadi.sk/d/RFHHey-5jQMxYQ

The logs show cyclic errors.

2019-03-13 21:30:21,130+0300 ERROR (jsonrpc/6) [storage.HSM] Could not connect 
to storageServer (hsm:2414)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411, in 
connectStorageServer
conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 
180, in connect
six.reraise(t, v, tb)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 
172, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in 
mount
cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in 
__call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in 

**kwargs)
  File "", line 2, in mount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
_callmethod
raise convert_to_error(kind, result)
MountError: (1, ';Running scope as unit run-158834.scope.\nMount failed. Please 
check the log file for more details.\n')

2019-03-13 21:30:21,524+0300 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='5d377769-86b6-4fea-844d-7e4825101971') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in connectStoragePool
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1034, in 
connectStoragePool
spUUID, hostID, msdUUID, masterVersion, domainsMap)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1096, in 
_connectStoragePool
res = pool.connect(hostID, msdUUID, masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 700, in 
connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1274, in 
__rebuild
self.setMasterDomain(msdUUID, masterVersion)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1494, in 
setMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 
u'spUUID=5a5cca91-01f8-01af-0297-025f, 
msdUUID=7d5de684-58ff-4fbc-905d-3048fc55b2b1'
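Since the MountError comes from supervdsm running the glusterfs mount, a 
hedged next step is to check /var/log/vdsm/supervdsm.log (and the glusterfs 
mount log it points to) around the same timestamp, and to try the same mount 
by hand; server and volume names below are placeholders:

$ grep -i mount /var/log/vdsm/supervdsm.log | tail -40
$ mkdir -p /mnt/gtest && mount -t glusterfs <gluster_server>:/<volume> /mnt/gtest
$ gluster volume status <volume>    # on a gluster node: all bricks should be online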
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OZ53OVYNURM3IXQOCKTXMGOTBRYPC4EQ/


[ovirt-users] Re: qemu-img info showed iscsi/FC lun size 0

2019-03-13 Thread Nir Soffer
On Wed, Mar 13, 2019 at 8:40 PM Jingjie Jiang 
wrote:

> Hi Nir,
>
> I had qcow2 on FC, but qemu-img still showed size is 0.
>
> # qemu-img info
> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1
>
> image:
> /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1
> file format: qcow2
> virtual size: 20G (21474836480 bytes)
> *disk size: 0*
> cluster_size: 65536
> Format specific information:
> compat: 1.1
> lazy refcounts: false
> refcount bits: 16
> corrupt: false
>
> Is the behavior expected?
>
Yes, I explained it here a few weeks ago:
http://lists.nongnu.org/archive/html/qemu-block/2019-02/msg01040.html

>
> Thanks,
>
> Jingjie
>
>
> On 2/22/19 1:53 PM, Nir Soffer wrote:
>
> On Fri, Feb 22, 2019 at 7:14 PM Nir Soffer  wrote:
>
>> On Fri, Feb 22, 2019 at 5:00 PM Jingjie Jiang 
>> wrote:
>>
>>> What about qcow2 format?
>>>
>> qcow2 reports the real size regardless of the underlying storage, since
> qcow2 manages
> the allocations. However the size is reported in qemu-img check in the
> image-end-offset.
>
> $ dd if=/dev/zero bs=1M count=10 | tr "\0" "\1" > test.raw
>
> $ truncate -s 200m test.raw
>
> $ truncate -s 1g backing
>
> $ sudo losetup -f backing --show
> /dev/loop2
>
> $ sudo qemu-img convert -f raw -O qcow2 test.raw /dev/loop2
>
> $ sudo qemu-img info --output json /dev/loop2
> {
> "virtual-size": 209715200,
> "filename": "/dev/loop2",
> "cluster-size": 65536,
> "format": "qcow2",
> "actual-size": 0,
> "format-specific": {
> "type": "qcow2",
> "data": {
> "compat": "1.1",
> "lazy-refcounts": false,
> "refcount-bits": 16,
> "corrupt": false
> }
> },
> "dirty-flag": false
> }
>
> $ sudo qemu-img check --output json /dev/loop2
> {
> "image-end-offset": 10813440,
> "total-clusters": 3200,
> "check-errors": 0,
> "allocated-clusters": 160,
> "filename": "/dev/loop2",
> "format": "qcow2"
> }
>
> We use this for reducing volumes to optimal size after merging snapshots,
> but
> we don't report this value to engine.
>
> Is there a choice  to create vm disk with format qcow2 instead of raw?
>>>
>> Not for LUNs, only for images.
>>
>> The available formats in 4.3 are documented here:
>>
>> https://ovirt.org/develop/release-management/features/storage/incremental-backup.html#disk-format
>>
>> incremental means you checked the checkbox "Enable incremental backup"
>> when creating a disk.
>> But note that the fact that we will create qcow2 image is implementation
>> detail and the behavior
>> may change in the future. For example, qemu is expected to provide a way
>> to do incremental
>> backup with raw volumes, and in this case we may create a raw volume
>> instead of qcow2 volume.
>> (actually raw data volume and qcow2 metadata volume).
>>
>> If you want to control the disk format, the only way is via the REST API
>> or SDK, where you can
>> specify the format instead of allocation policy. However even if you
>> specify the format in the SDK
>> the system may choose to change the format when copying the disk to
>> another storage type. For
>> example if you copy qcow2/sparse image from block storage to file storage
>> the system will create
>> a raw/sparse image.
>>
>> If you desire to control the format both from the UI and REST API/SDK and
>> ensure that the system
>> will never change the selected format please file a bug explaining the
>> use case.
>>
>> On 2/21/19 5:46 PM, Nir Soffer wrote:
>>>
>>>
>>>
>>> On Thu, Feb 21, 2019, 21:48 >>
 Hi,
 Based on oVirt 4.3.0, I have data domain from FC lun, then I create new
 vm on the disk from FC data domain.
 After VM was created. According to qemu-img info, the disk size is 0.
 # qemu-img info
 /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b

 image:
 /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
 file format: raw
 virtual size: 10G (10737418240 bytes)
 disk size: 0

 I tried on iscsi and same result.

 Is the behaviour expected?

>>>
>>> It is expected in a way. Disk size is the amount of storage actually
>>> used, and block devices have no way to tell that.
>>>
>>> oVirt reports the size of the block device in this case, which is more
>>> accurate than zero.
>>>
>>> However the real size allocated on the underlying storage is somewhere
>>> between zero and device size, and depends on the implementation of the
>>> storage. Neither qemu-img nor oVirt can tell the real size.
>>>
>>> Nir
>>>
>>>
 Thanks,
 Jingjie

 

[ovirt-users] Re: qemu-img info showed iscsi/FC lun size 0

2019-03-13 Thread Jingjie Jiang

Hi Nir,

I had qcow2 on FC, but qemu-img still showed size is 0.

# qemu-img info 
/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1 

image: 
/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/38cdceea-45d9-4616-8eef-966acff2f7be/8a32c5af-f01f-48f4-9329-e173ad3483b1

file format: qcow2
virtual size: 20G (21474836480 bytes)
*disk size: 0*
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

Is the behavior expected?


Thanks,

Jingjie


On 2/22/19 1:53 PM, Nir Soffer wrote:
On Fri, Feb 22, 2019 at 7:14 PM Nir Soffer  wrote:


On Fri, Feb 22, 2019 at 5:00 PM Jingjie Jiang
<jingjie.ji...@oracle.com> wrote:

What about qcow2 format?

qcow2 reports the real size regardless of the underlying storage, 
since qcow2 manages
the allocations. However the size is reported in qemu-img check in the 
image-end-offset.


$ dd if=/dev/zero bs=1M count=10 | tr "\0" "\1" > test.raw

$ truncate -s 200m test.raw

$ truncate -s 1g backing

$ sudo losetup -f backing --show
/dev/loop2

$ sudo qemu-img convert -f raw -O qcow2 test.raw /dev/loop2

$ sudo qemu-img info --output json /dev/loop2
{
    "virtual-size": 209715200,
    "filename": "/dev/loop2",
    "cluster-size": 65536,
    "format": "qcow2",
    "actual-size": 0,
    "format-specific": {
        "type": "qcow2",
        "data": {
            "compat": "1.1",
"lazy-refcounts": false,
"refcount-bits": 16,
            "corrupt": false
        }
    },
    "dirty-flag": false
}

$ sudo qemu-img check --output json /dev/loop2
{
    "image-end-offset": 10813440,
    "total-clusters": 3200,
    "check-errors": 0,
    "allocated-clusters": 160,
    "filename": "/dev/loop2",
    "format": "qcow2"
}

We use this for reducing volumes to optimal size after merging 
snapshots, but

we don't report this value to engine.

Is there a choice  to create vm disk with format qcow2 instead
of raw?

Not for LUNs, only for images.

The available formats in 4.3 are documented here:

https://ovirt.org/develop/release-management/features/storage/incremental-backup.html#disk-format

incremental means you checked the checkbox "Enable incremental
backup" when creating a disk.
But note that the fact that we will create qcow2 image is
implementation detail and the behavior
may change in the future. For example, qemu is expected to provide
a way to do incremental
backup with raw volumes, and in this case we may create a raw
volume instead of qcow2 volume.
(actually raw data volume and qcow2 metadata volume).

If you want to control the disk format, the only way is via the
REST API or SDK, where you can
specify the format instead of allocation policy. However even if
you specify the format in the SDK
the system may choose to change the format when copying the disk to
another storage type. For
example if you copy qcow2/sparse image from block storage to file
storage the system will create
a raw/sparse image.

If you desire to control the format both from the UI and REST
API/SDK and ensure that the system
will never change the selected format please file a bug explaining
the use case.

On 2/21/19 5:46 PM, Nir Soffer wrote:



On Thu, Feb 21, 2019, 21:48 <jingjie.ji...@oracle.com> wrote:

Hi,
Based on oVirt 4.3.0, I have data domain from FC lun,
then I create new vm on the disk from FC data domain.
After VM was created. According to qemu-img info, the
disk size is 0.
# qemu-img info

/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b

image:

/rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0

I tried on iscsi and same result.

Is the behaviour expected?


It is expected in a way. Disk size is the amount of storage
actually used, and block devices have no way to tell that.

oVirt reports the size of the block device in this case, which
is more accurate than zero.

However the real size allocated on the underlying storage is
somewhere between zero and device size, and depends on the
implementation of the storage. Neither qemu-img nor oVirt can
tell the real size.

Nir


Thanks,
Jingjie

___
Users mailing list -- users@ovirt.org

[ovirt-users] Undeploy oVirt Metrics Store

2019-03-13 Thread toslavik
I deployed the Metrics Store on the engine and hosts as per the instructions. 
But after using it for some time I realized that the functionality is redundant 
for me; the data collected by DWH is enough.
https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation.html
Is there a guide on how to undeploy the oVirt Metrics Store?
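I am not aware of an official undeploy procedure. Assuming the deployment 
followed that guide (which configures collectd and fluentd on the hosts and the 
engine via ovirt-engine-metrics), a hedged manual approach is to stop and 
disable those services; the unit names are assumptions, so verify they exist on 
your nodes first:

$ systemctl list-unit-files | grep -Ei 'collectd|fluentd'
$ systemctl stop collectd fluentd       # on each host and on the engine, if present
$ systemctl disable collectd fluentd
# the generated configuration would then be left under /etc/collectd.d/ and the
# fluentd config directory (paths are assumptions) for manual review/removal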
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IO5BHQ6O3VYKIWVX35TKD2PN3DY7YSTX/


[ovirt-users] ovirt and blk-mq

2019-03-13 Thread Fabrice Bacchella
While checking the block device configuration on an oVirt setup using a SAN, I 
found this line:

dm/use_blk_mq:0

Has anyone tried enabling it by adding this to the kernel command line:
dm_mod.use_blk_mq=y

I'm not sure, but it might improve performance on multipath, even on spinning 
rust.
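For what it's worth, a sketch of how one might test it on a single host first 
(parameter names as in the question; double-check them against your kernel 
version before touching production hosts):

$ cat /sys/block/dm-*/dm/use_blk_mq     # current per-device state
$ grubby --update-kernel=ALL --args="dm_mod.use_blk_mq=y"
$ reboot
# then compare multipath throughput, e.g. (device name is a placeholder):
$ dd if=/dev/mapper/<mpath_lun> of=/dev/null bs=1M count=4096 iflag=direct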
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y7MIRKLDCOJWLRS2EI5WE4XMZEWK5RBD/


[ovirt-users] Re: ConnectStoragePoolVDS failed

2019-03-13 Thread Sandro Bonazzola
On Wed, Mar 13, 2019 at 3:14 PM alexeynikolaev <
alexeynikolaev.p...@yandex.ru> wrote:

> Hi community!
>
> After updating one of the oVirt Node NG hosts from version 4.2.x to 4.3.1,
> this node lost connection to the glusterfs volume with the error:
>
> ConnectStoragePoolVDS failed: Cannot find master domain:
> u'spUUID=5a5cca91-01f8-01af-0297-025f,
> msdUUID=7d5de684-58ff-4fbc-905d-3048fc55b2b1'.
>
> The other nodes work well with this volume.
>
> How can I debug this issue?
>

Can you please share vdsm logs for this?




> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I64WEH2O3EO75ZAPVFHMLHM2DZAN7E6S/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFF6QGODCNZ25OIXZGLJNPO3LPHGZ44I/


[ovirt-users] Re: VM has been paused due to a storage I/O error

2019-03-13 Thread Gianluca Cecchi
On Wed, Mar 13, 2019 at 3:56 AM  wrote:

> Hi, everyone, there is a VM in the HostedEngine virtual machine that has
> been paused due to a storage I/O error, but engine manager services are
> normal, what is the problem?
> ___
>
>
Which kind of storage?
Sometimes when you have block-based storage (FC, iSCSI) and a disk configured
as thin provisioned, this can happen when heavy and/or rapid I/O outpaces the
LVM extend operations that grow the volume.
If this is your case and a VM needs sustained I/O, you should configure its
related disk as preallocated.
See this thread about 2 years ago for example:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/S3LXEJV3V4CIOTQXNGZYVZFUSDSQZQJS/

HIH,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IDSZXZ4MY3UMYARX3FOAYJWCCI6PUTL/


[ovirt-users] Re: Host affinity hard rule doesn't work

2019-03-13 Thread Staniforth, Paul
Try disabling the VM rule or making it soft; the affinity rules enforcer won't 
migrate more than one VM at a time, so it can't migrate either VM without 
breaking the VM affinity rule.

Regards,
  Paul S.

From: zoda...@gmail.com 
Sent: 13 March 2019 07:49
To: users@ovirt.org
Subject: [ovirt-users] Host affinity hard rule doesn't work

Hi there,
Here is my setup:
oVirt engine: 4.2.8

1. Create an affinity group as below:
VM affinity rule: positive + enforcing
Host affinity rule: disabled.
VMs: 2 VMs added
Hosts: No host selected.
2. Run the 2 VMs, they are running on the same host, say host1.
3. Change the affinity group's host affinity:
Host affinity rule: positive  + enforcing
Hosts: host2 added.

I expect that the 2 VMs can migrate to host2, but that never happens. Is this 
expected?

snippet of engine.log:
2019-03-13 07:47:05,747Z INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Candidate host 
'dub-svrfarm24' ('76b13e75-d01b-4dec-9298-1fad72b46525') was filtered out by 
'VAR__FILTERTYPE__INTERNAL' filter 'VmAffinityGroups' (correlation id: null)
2019-03-13 07:47:05,747Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] VM 
822b37b7-5da3-453c-b775-d4192c2fdcae is NOT a viable candidate for solving the 
affinity group violation situation.
2019-03-13 07:47:05,747Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No vm to hosts 
soft-affinity group violation detected
2019-03-13 07:47:05,749Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group 
collision detected for cluster 8fe88b8c-966c-4c21-839d-e2437cc6b73d. Standing 
by.
2019-03-13 07:47:05,749Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group 
collision detected for cluster 3beac2ea-ed04-4f40-9ce3-5a9a67cebd8c. Standing 
by.
2019-03-13 07:47:05,750Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group 
collision detected for cluster da32d154-4303-11e9-9607-00163eaab080. Standing 
by.

Thank you,
-Zhen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2BVS3U4BBFLJ32EZN5K3TOI64M7DQHSZ/
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TE5L2RN7MUNL3SF7ERHZAEMKL5DZ3JR7/


[ovirt-users] Re: ConnectStoragePoolVDS failed

2019-03-13 Thread alexeynikolaev
Hi community!
 
After updating one of the oVirt Node NG hosts from version 4.2.x to 4.3.1, this 
node lost connection to the glusterfs volume with the error:
 
ConnectStoragePoolVDS failed: Cannot find master domain: 
u'spUUID=5a5cca91-01f8-01af-0297-025f, 
msdUUID=7d5de684-58ff-4fbc-905d-3048fc55b2b1'.
 
The other nodes work well with this volume.
 
How can I debug this issue?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I64WEH2O3EO75ZAPVFHMLHM2DZAN7E6S/


[ovirt-users] ConnectStoragePoolVDS failed

2019-03-13 Thread Николаев Алексей
Hi community!

After updating one of the oVirt Node NG hosts from version 4.2.x to 4.3.1, this 
node lost connection to the glusterfs volume with the error:

ConnectStoragePoolVDS failed: Cannot find master domain: 
u'spUUID=5a5cca91-01f8-01af-0297-025f, 
msdUUID=7d5de684-58ff-4fbc-905d-3048fc55b2b1'.

The other nodes work well with this volume. How can I debug this issue?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RHGZJMECO3IYZSYJG7STONWZ655GEBNG/


[ovirt-users] Re: VM has been paused due to a storage I/O error

2019-03-13 Thread Николаев Алексей
First you must look at the vdsm logs on the hypervisor hosts.

13.03.2019, 05:57, "xil...@126.com":
> Hi, everyone, there is a VM in the HostedEngine virtual machine that has been
> paused due to a storage I/O error, but engine manager services are normal,
> what is the problem?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HT5P24SEFM2XCJQ5E7VF4MB2LBYNRZKQ/


[ovirt-users] oVirt 4.3.2 Second Release Candidate is now available

2019-03-13 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.2 Second Release Candidate, as of March 13th, 2019.

This update is a release candidate of the second in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)

Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.

See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
- oVirt Guest Tools with new oVirt Windows Guest Agent is available
- oVirt Appliance is already available
- oVirt Node is already available [2]

Additional Resources:
* Read more about the oVirt 4.3.2 release highlights:
http://www.ovirt.org/release/4.3.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.2/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/

-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VEWGTKK42A3RYIJUSVA63BYYTSU4OYDB/


[ovirt-users] Re: Ovirt 4.3.1 cannto set host to maintenance

2019-03-13 Thread Strahil Nikolov
It seems to be working properly, but the OVF got updated recently and 
powering up the hosted-engine is not working :)
[root@ovirt2 ~]# sudo -u vdsm tar -tvf  
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-r--r-- 0/0 138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0   21164 2019-03-12 08:06 
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-12 08:06 metadata.json

[root@ovirt2 ~]# sudo -u vdsm tar -tvf 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0 138 2019-03-13 11:06 info.json
-rw-r--r-- 0/0   21164 2019-03-13 11:06 
8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-13 11:06 metadata.json

Best Regards,
Strahil Nikolov

On Wednesday, 13 March 2019 at 11:08:57 GMT+2, Simone Tiraboschi 
 wrote:
 
 

On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov  wrote:

Hi Simone, Nir,

>Adding also Nir on this, the whole sequence is tracked here:
>I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain about 
>the same time.
I have tested again (after first wiping the current transfers) and the same 
thing happens (phase 10).
engine=# \x
Expanded display is on.
engine=# select * from image_transfers;
-[ RECORD 1 ]-+-
command_id    | 11b2c162-29e0-46ef-b0a4-f41ebe3c2910
command_type  | 1024
phase | 10
last_updated  | 2019-03-13 09:38:30.365+02
message   |
vds_id    |
disk_id   | 94ade632-6ecc-4901-8cec-8e39f3d69cb0
imaged_ticket_id  |
proxy_uri |
signed_ticket |
bytes_sent    | 0
bytes_total   | 134217728
type  | 1
active    | f
daemon_uri    |
client_inactivity_timeout | 60

engine=# delete from image_transfers where 
disk_id='94ade632-6ecc-4901-8cec-8e39f3d69cb0';

This is the VDSM log from the last test:

2019-03-13 09:38:23,229+0200 INFO  (jsonrpc/4) [vdsm.api] START 
prepareImage(sdUUID=u'808423f9-8a5c-40cd-bc9f-2568c85b8c74', 
spUUID=u'b803f7e4-2543-11e9-ba9a-00163e6272c8', 
imgUUID=u'94ade632-6ecc-4901-8cec-8e39f3d69cb0', 
leafUUID=u'9460fc4b-54f3-48e3-b7b6-da962321ecf4', allowIllegal=True) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:48)
2019-03-13 09:38:23,253+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
 (fileSD:623)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
domain run directory 
u'/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74' (fileSD:577)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
directory: /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74 mode: 
None (fileUtils:199)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
symlink from 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 to 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 (fileSD:580)
2019-03-13 09:38:23,260+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:52)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3212, in 
prepareImage
    leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in 
produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py", line 
45, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 801, in 
__init__
    self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 71, 
in __init__
    volUUID)
  File 

[ovirt-users] Re: Ovirt 4.3.1 problem with HA agent

2019-03-13 Thread Strahil Nikolov
 Dear Simone,
it seems that there is some kind of problem, as the OVF got updated with the 
wrong configuration:

[root@ovirt2 ~]# ls -l 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/{441abdc8-6cb1-49a4-903f-a1ec0ed88429,94ade632-6ecc-4901-8cec-8e39f3d69cb0}
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 12 08:06 c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
c3309fc0-8707-4de1-903d-8d4bbb024f81.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 12 08:06 
c3309fc0-8707-4de1-903d-8d4bbb024f81.meta

/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0:
total 66591
-rw-rw. 1 vdsm kvm   30720 Mar 13 11:07 9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-rw. 1 vdsm kvm 1048576 Jan 31 13:24 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.lease
-rw-r--r--. 1 vdsm kvm 435 Mar 13 11:07 
9460fc4b-54f3-48e3-b7b6-da962321ecf4.meta

Starting the hosted-engine fails with:
2019-03-13 12:48:21,237+0200 ERROR (vm/8474ae07) [virt.vm] 
(vmId='8474ae07-f172-4a20-b516-375c73903df7') The vm start process failed 
(vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in 
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2852, in _run
    dom = self._connection.defineXML(self._domain.xml)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", 
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in 
wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
libvirtError: XML error: No PCI buses available

Best Regards,
Strahil Nikolov


On Tuesday, 12 March 2019 at 14:14:26 GMT+2, Strahil Nikolov 
 wrote:
 
Dear Simone,
it should be 60 min, but I checked several hours after that and it didn't 
update it.
[root@engine ~]# engine-config -g OvfUpdateIntervalInMinutes
OvfUpdateIntervalInMinutes: 60 version: general

How can I make a backup of the VM config? As you have noticed, the local copy 
in /var/run/ovirt-hosted-engine-ha/vm.conf won't work.
I will keep the HostedEngine's XML so I can redefine it if needed.
Best Regards,
Strahil Nikolov
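In the meantime, a hedged way to keep a restorable copy of the current 
definition, from the host currently running the engine VM (read-only libvirt 
access needs no credentials; the OVF_STORE volume path and OVF name below come 
from the tar listing you posted earlier, so adjust them to your own setup):

$ virsh -r dumpxml HostedEngine > /root/HostedEngine-$(date +%F).xml
$ sudo -u vdsm tar -xOf \
    /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81 \
    8474ae07-f172-4a20-b516-375c73903df7.ovf > /root/HostedEngine.ovf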
  
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XPJXJ4I4LVDDV47BTSXA4FQE3OM5T5J/


[ovirt-users] Re: Host affinity hard rule doesn't work

2019-03-13 Thread zodaoko
Hi Andrej,
Thank you for your quick response, as well as the RFE.
Thanks,
-Zhen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7URE6ZNB2JIWTZUN6O7T7CPQL76F3BM/


[ovirt-users] Re: Ovirt 4.3.1 cannto set host to maintenance

2019-03-13 Thread Simone Tiraboschi
On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov 
wrote:

> Hi Simone, Nir,
>
> >Adding also Nir on this, the whole sequence is tracked here:
> >I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain
> about the same time.
>
> I have tested again (first wiped current transfers) and it is happening
> the same (phase 10).
>
> engine=# \x
> Expanded display is on.
> engine=# select * from image_transfers;
> -[ RECORD 1 ]-+-
> command_id| 11b2c162-29e0-46ef-b0a4-f41ebe3c2910
> command_type  | 1024
> phase | 10
> last_updated  | 2019-03-13 09:38:30.365+02
> message   |
> vds_id|
> disk_id   | 94ade632-6ecc-4901-8cec-8e39f3d69cb0
> imaged_ticket_id  |
> proxy_uri |
> signed_ticket |
> bytes_sent| 0
> bytes_total   | 134217728
> type  | 1
> active| f
> daemon_uri|
> client_inactivity_timeout | 60
>
> engine=# delete from image_transfers where
> disk_id='94ade632-6ecc-4901-8cec-8e39f3d69cb0';
>
> This is the VDSM log from the last test:
>
>
> 2019-03-13 09:38:23,229+0200 INFO  (jsonrpc/4) [vdsm.api] START
> prepareImage(sdUUID=u'808423f9-8a5c-40cd-bc9f-2568c85b8c74',
> spUUID=u'b803f7e4-2543-11e9-ba9a-00163e6272c8',
> imgUUID=u'94ade632-6ecc-4901-8cec-8e39f3d69cb0',
> leafUUID=u'9460fc4b-54f3-48e3-b7b6-da962321ecf4', allowIllegal=True)
> from=:::192.168.1.2,42644,
> flow_id=d48d9272-2e65-438d-a7b2-46979309833b,
> task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:48)
> 2019-03-13 09:38:23,253+0200 INFO  (jsonrpc/4) [storage.StorageDomain]
> Fixing permissions on
> /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
> (fileSD:623)
> 2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain]
> Creating domain run directory
> u'/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74' (fileSD:577)
> 2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.fileUtils]
> Creating directory:
> /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74 mode: None
> (fileUtils:199)
> 2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain]
> Creating symlink from
> /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
> to
> /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0
> (fileSD:580)
> 2019-03-13 09:38:23,260+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH
> prepareImage error=Volume does not exist:
> (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) from=:::192.168.1.2,42644,
> flow_id=d48d9272-2e65-438d-a7b2-46979309833b,
> task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:52)
> 2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task]
> (Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
>   File "", line 2, in prepareImage
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3212,
> in prepareImage
> leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in
> produceVolume
> volUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py",
> line 45, in __init__
> volUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
> 801, in __init__
> self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
> 71, in __init__
> volUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86,
> in __init__
> self.validate()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line
> 112, in validate
> self.validateVolumePath()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
> 136, in validateVolumePath
> self.validateMetaVolumePath()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line
> 118, in validateMetaVolumePath
> raise se.VolumeDoesNotExist(self.volUUID)
> VolumeDoesNotExist: Volume does not exist:
> (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)
> 2019-03-13 09:38:23,261+0200 INFO  (jsonrpc/4) [storage.TaskManager.Task]
> (Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') aborting: Task is aborted:
> "Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)" - code
> 201 (task:1181)
> 2019-03-13 

[ovirt-users] Re: Ovirt 4.3.1 cannto set host to maintenance

2019-03-13 Thread Strahil Nikolov
Hi Simone, Nir,

>Adding also Nir on this, the whole sequence is tracked here:
>I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain about 
>the same time.
I have tested again (after first wiping the current transfers) and the same 
thing happens (phase 10).
engine=# \x
Expanded display is on.
engine=# select * from image_transfers;
-[ RECORD 1 ]-+-
command_id    | 11b2c162-29e0-46ef-b0a4-f41ebe3c2910
command_type  | 1024
phase | 10
last_updated  | 2019-03-13 09:38:30.365+02
message   |
vds_id    |
disk_id   | 94ade632-6ecc-4901-8cec-8e39f3d69cb0
imaged_ticket_id  |
proxy_uri |
signed_ticket |
bytes_sent    | 0
bytes_total   | 134217728
type  | 1
active    | f
daemon_uri    |
client_inactivity_timeout | 60

engine=# delete from image_transfers where 
disk_id='94ade632-6ecc-4901-8cec-8e39f3d69cb0';

This is the VDSM log from the last test:

2019-03-13 09:38:23,229+0200 INFO  (jsonrpc/4) [vdsm.api] START 
prepareImage(sdUUID=u'808423f9-8a5c-40cd-bc9f-2568c85b8c74', 
spUUID=u'b803f7e4-2543-11e9-ba9a-00163e6272c8', 
imgUUID=u'94ade632-6ecc-4901-8cec-8e39f3d69cb0', 
leafUUID=u'9460fc4b-54f3-48e3-b7b6-da962321ecf4', allowIllegal=True) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:48)
2019-03-13 09:38:23,253+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
 (fileSD:623)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
domain run directory 
u'/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74' (fileSD:577)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
directory: /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74 mode: 
None (fileUtils:199)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
symlink from 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 to 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 (fileSD:580)
2019-03-13 09:38:23,260+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:52)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3212, in 
prepareImage
    leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in 
produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py", line 
45, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 801, in 
__init__
    self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 71, 
in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86, in 
__init__
    self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112, in 
validate
    self.validateVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 136, 
in validateVolumePath
    self.validateMetaVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 118, 
in validateMetaVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: 
(u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)
2019-03-13 09:38:23,261+0200 INFO  (jsonrpc/4) [storage.TaskManager.Task] 
(Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') aborting: Task is aborted: 
"Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)" - code 201 
(task:1181)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) (dispatcher:83)

Yet, the volume is there and is accessible:
[root@ovirt1 

[ovirt-users] Re: iSCSI domain creation or HE setups fail

2019-03-13 Thread Guillaume Pavese
Similarly,

I tried to deploy hosted-engine on iSCSI through cockpit (oVirt 4.3.2-rc1)

Retrieving the targets works; I get:

The following targets have been found:
 iqn.2000-01.com.synology:SVC-STO-FR-301.Target-1.2dfed4a32a, TPGT: 1
10.199.9.16:3260
fe80::211:32ff:fe6d:6ddb:3260

 iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, TPGT: 1
10.199.9.16:3260
fe80::211:32ff:fe6d:6ddb:3260


I select the second one and then click "Next". Then I get:

Retrieval of iSCSI LUNs failed.

In the host's logs, I have:

mars 13 09:18:22 vs-inf-int-kvm-fr-304-210.hostics.fr python[15734]:
ansible-ovirt_host_storage_facts Invoked with fcp=None iscsi={'username':
None, 'password': None, 'port': '3260', 'target':
'iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a', 'address':
'10.199.9.16'} fetch_nested=False auth={'timeout': 0, 'url': '
https://vs-inf-int-ovt-fr-302-210.hostics.fr/ovirt-engine/api', 'insecure':
True, 'kerberos': False, 'compress': True, 'headers': None, 'token':
'OhSkJagx0abRj2stqVHqyyHH6amBJTcjHQdipFTMmukXlzV-_7mavFF0XazAoSIR3-6bTa8AmDTG5NNVFiNPNw',
'ca_file': None} host=vs-inf-int-kvm-fr-304-210.hostics.fr
nested_attributes=[]


mars 13 09:18:23 vs-inf-int-kvm-fr-304-210.hostics.fr iscsid[4898]:
Connection2:0 to [target:
iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, portal:
10.199.9.16,3260] through [iface: default] is shutdown.
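To separate the engine/ansible side from the target itself, a manual check 
from the host usually narrows it down quickly (portal and IQN below are the 
ones from your log):

$ iscsiadm -m discovery -t sendtargets -p 10.199.9.16:3260
$ iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a \
      -p 10.199.9.16:3260 --login
$ iscsiadm -m session -P 3 | grep -iE 'target|lun'   # is any LUN actually exposed?
$ iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a \
      -p 10.199.9.16:3260 --logout

If the manual login succeeds but no LUN shows up, double-check the LUN 
mapping/ACL on the Synology side against the host's initiator name in 
/etc/iscsi/initiatorname.iscsi.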



Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Wed, Mar 13, 2019 at 12:15 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> yes :
> "Package iscsi-initiator-utils-6.2.0.874-10.el7.x86_64 already installed
> and latest version"
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Tue, Mar 12, 2019 at 11:54 PM Strahil Nikolov 
> wrote:
>
>> Do you have the iscsi-initiator-utils rpm installed ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tuesday, 12 March 2019 at 15:46:36 GMT+2, Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>
>> My setup : oVirt 4.3.1 HC on Centos 7.6, everything up2date
>> I try to create a new iSCSI Domain. It's a new LUN/Target created on
>> synology bay, no CHAP (I tried with CHAP too but that does not help)
>>
>> I first entered the syno's address and clicked discover
>> I saw the existing Targets ; I clicked on the arrow on the right. I then
>> get the following Error :
>> "Error while executing action: Failed to setup iSCSI subsystem"
>>
>> In hosts logs, I get
>> conn 0 login rejected: initiator error (02/00)
>> Connection1:0 to [target:
>> iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, portal:
>> 10.199.9.16,3260] through [iface: default] is shutdown.
>>
>> In engine logs, I get :
>>
>> 2019-03-12 14:33:35,504+01 INFO
>> [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
>> (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] Running command:
>> ConnectStorageToVdsCommand
>>  internal: false. Entities affected :  ID:
>> aaa0----123456789aaa Type: SystemAction group
>> CREATE_STORAGE_DOMAIN with role type ADMIN
>> 2019-03-12 14:33:35,511+01 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] START,
>> ConnectStorageServerVDSCommand(Host
>> Name = ps-inf-int-kvm-fr-305-210.hostics.fr,
>> StorageServerConnectionManagementVDSParameters:{hostId='6958c4f7-3716-40e4-859a-bfce2f6dbdba',
>> storagePoolId='----', storageType='
>> ISCSI', connectionList='[StorageServerConnections:{id='null',
>> connection='10.199.9.16',
>> iqn='iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a',
>> vfsType='null', mountOptions='null', nfsVersion='nul
>> l', nfsRetrans='null', nfsTimeo='null', iface='null',
>> netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 7f36d8a9
>> 2019-03-12 14:33:36,302+01 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] FINISH,
>> ConnectStorageServerVDSCommand, re
>> turn: {----=465}, log id: 7f36d8a9
>> 2019-03-12 14:33:36,310+01 ERROR
>> [org.ovirt.engine.core.bll.storage.connection.ISCSIStorageHelper] (default
>> task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] The connection with details
>> '----' failed because of error code '465'
>> and error message is: failed to setup iscsi subsystem
>> 2019-03-12 14:33:36,315+01 ERROR
>> [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
>> (default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] Transaction
>> rolled-back for command
>> 'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand'.
>> 2019-03-12 14:33:36,676+01 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (default task-24) [70251a16-0049-4d90-a67c-653b229f7639] START,
>> 

[ovirt-users] Re: Host affinity hard rule doesn't work

2019-03-13 Thread Andrej Krejcir
Hi,

This is the expected behavior. The process that automatically migrates VMs
so that they do not break affinity groups only migrates one VM at a time.
In this case the two VMs are in a positive enforcing group, so none of them
can be migrated away from the other.

Currently, for the same reason, the 2 VMs cannot even be migrated manually.
But that will be fixed as part of RFE:
https://bugzilla.redhat.com/show_bug.cgi?id=1651406


Best regards,
Andrej

On Wed, 13 Mar 2019 at 08:50,  wrote:

> Hi there,
> Here is my setup:
> oVirt engine: 4.2.8
>
> 1. Create an affinity group as below:
> VM affinity rule: positive + enforcing
> Host affinity rule: disabled.
> VMs: 2 VMs added
> Hosts: No host selected.
> 2. Run the 2 VMs, they are running on the same host, say host1.
> 3. Change the affinity group's host affinity:
> Host affinity rule: positive  + enforcing
> Hosts: host2 added.
>
> I expect that the 2 VMs can migrate to host2, but that never happens. Is
> this expected?
>
> snippet of engine.log:
> 2019-03-13 07:47:05,747Z INFO
> [org.ovirt.engine.core.bll.scheduling.SchedulingManager]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Candidate host
> 'dub-svrfarm24' ('76b13e75-d01b-4dec-9298-1fad72b46525') was filtered out
> by 'VAR__FILTERTYPE__INTERNAL' filter 'VmAffinityGroups' (correlation id:
> null)
> 2019-03-13 07:47:05,747Z DEBUG
> [org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] VM
> 822b37b7-5da3-453c-b775-d4192c2fdcae is NOT a viable candidate for solving
> the affinity group violation situation.
> 2019-03-13 07:47:05,747Z DEBUG
> [org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No vm to hosts
> soft-affinity group violation detected
> 2019-03-13 07:47:05,749Z DEBUG
> [org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group
> collision detected for cluster 8fe88b8c-966c-4c21-839d-e2437cc6b73d.
> Standing by.
> 2019-03-13 07:47:05,749Z DEBUG
> [org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group
> collision detected for cluster 3beac2ea-ed04-4f40-9ce3-5a9a67cebd8c.
> Standing by.
> 2019-03-13 07:47:05,750Z DEBUG
> [org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group
> collision detected for cluster da32d154-4303-11e9-9607-00163eaab080.
> Standing by.
>
> Thank you,
> -Zhen
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2BVS3U4BBFLJ32EZN5K3TOI64M7DQHSZ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3W7KG63II7PLJIMEJLL3TPO3HCRIXL2D/


[ovirt-users] Host affinity hard rule doesn't work

2019-03-13 Thread zodaoko
Hi there,
Here is my setup:
oVirt engine: 4.2.8

1. Create an affinity group as below:
VM affinity rule: positive + enforcing
Host affinity rule: disabled. 
VMs: 2 VMs added
Hosts: No host selected.
2. Run the 2 VMs, they are running on the same host, say host1.
3. Change the affinity group's host affinity:
Host affinity rule: positive  + enforcing
Hosts: host2 added.

I expect that the 2 VMs can migrate to host2, but that never happens. Is this 
expected?

snippet of engine.log:
2019-03-13 07:47:05,747Z INFO  
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Candidate host 
'dub-svrfarm24' ('76b13e75-d01b-4dec-9298-1fad72b46525') was filtered out by 
'VAR__FILTERTYPE__INTERNAL' filter 'VmAffinityGroups' (correlation id: null)
2019-03-13 07:47:05,747Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] VM 
822b37b7-5da3-453c-b775-d4192c2fdcae is NOT a viable candidate for solving the 
affinity group violation situation.
2019-03-13 07:47:05,747Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No vm to hosts 
soft-affinity group violation detected
2019-03-13 07:47:05,749Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group 
collision detected for cluster 8fe88b8c-966c-4c21-839d-e2437cc6b73d. Standing 
by.
2019-03-13 07:47:05,749Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group 
collision detected for cluster 3beac2ea-ed04-4f40-9ce3-5a9a67cebd8c. Standing 
by.
2019-03-13 07:47:05,750Z DEBUG 
[org.ovirt.engine.core.bll.scheduling.arem.AffinityRulesEnforcer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-30) [] No affinity group 
collision detected for cluster da32d154-4303-11e9-9607-00163eaab080. Standing 
by.

Thank you,
-Zhen
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2BVS3U4BBFLJ32EZN5K3TOI64M7DQHSZ/


[ovirt-users] Re: oVirt Performance (Horrific)

2019-03-13 Thread Krutika Dhananjay
Hi,

OK, thanks. I'd also asked for the gluster version you're running. Could you
share that information as well?
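For example, from any of the gluster nodes:

$ gluster --version | head -1
$ rpm -q glusterfs-server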

-Krutika

On Thu, Mar 7, 2019 at 9:38 PM Drew Rash  wrote:

> Here is the output for our ssd gluster which exhibits the same issue as
> the hdd glusters.
> However, I can replicate the issue on an 8TB WD Gold disk NFS mounted as
> well ( removed the gluster part )  Which is the reason I'm on the oVirt
> site.  I can start a file copy that writes at max speed, then after a gb or
> 2 it drops down to 3-10 MBps maxing at 13.3 ish overall.
> Testing outside of oVirt using dd doesn't show the same behavior: outside
> oVirt (directly on the oVirt node to the gluster or 8TB NFS mounts) the
> results are max drive speeds consistently for large file copies.
>
> I enabled writeback (as someone suggested) on the virtio-scsi Windows disk
> and one of our Windows 10 installs sped up. It still suffers from the
> sustained write issue, which cripples the whole box. Opening Chrome, for
> example, cripples the box, as does SQL Server Management Studio.
>
> Volume Name: gv1
> Type: Replicate
> Volume ID: 7340a436-d971-4d69-84f9-12a23cd76ec8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.30.30.121:/gluster_bricks/gv1/brick
> Brick2: 10.30.30.122:/gluster_bricks/gv1/brick
> Brick3: 10.30.30.123:/gluster_bricks/gv1/brick (arbiter)
> Options Reconfigured:
> network.ping-timeout: 30
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> user.cifs: off
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> features.shard: off
> cluster.granular-entry-heal: enable
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
>
> On Thu, Mar 7, 2019 at 1:00 AM Krutika Dhananjay 
> wrote:
>
>> So from the profile, it appears the XATTROPs and FINODELKs are way higher
>> than the number of WRITEs:
>>
>> 
>> ...
>> ...
>> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls
>>  Fop
>>  -   ---   ---   ---   
>>   
>>   0.43 384.83 us  51.00 us   65375.00 us  13632
>> FXATTROP
>>   7.54   13535.70 us 225.00 us  210298.00 us   6816
>>  WRITE
>>  45.99   28508.86 us   7.00 us 2591280.00 us  19751
>> FINODELK
>>
>> ...
>> ...
>> 
>>
>> We'd noticed something similar in our internal tests and found
>> inefficiencies in gluster's eager-lock implementation. This was fixed at
>> https://review.gluster.org/c/glusterfs/+/19503.
>> I need the two things I asked for in the prev mail to confirm if you're
>> hitting the same issue.
>>
>> -Krutika
>>
>> On Thu, Mar 7, 2019 at 12:24 PM Krutika Dhananjay 
>> wrote:
>>
>>> Hi,
>>>
>>> Could you share the following pieces of information to begin with -
>>>
>>> 1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
>>> 2. glusterfs version you're running
>>>
>>> -Krutika
>>>
>>>
>>> On Sat, Mar 2, 2019 at 3:38 AM Drew R  wrote:
>>>
 Saw some people asking for profile info.  So I had started a migration
 from a 6TB WDGold 2+1arb replicated gluster to a 1TB samsung ssd 2+1 rep
 gluster and it's been running a while for a 100GB file thin provisioned
 with like 28GB actually used.  Here is the profile info.  I started the
 profiler like 5 minutes ago. The migration had been running for like
 30minutes:

 gluster volume profile gv2 info
 Brick: 10.30.30.122:/gluster_bricks/gv2/brick
 -
 Cumulative Stats:
Block Size:256b+ 512b+
   1024b+
  No. of Reads: 1189 8
   12
 No. of Writes:4  3245
  883

Block Size:   2048b+4096b+
   8192b+
  No. of Reads:   1020
2
 No. of Writes: 1087312228
   124080

Block Size:  16384b+   32768b+
  65536b+
  No. of Reads:0 1
   52
 No. of Writes: 5188  3617
 5532

Block Size: 131072b+
  No. of Reads:70191
 No. of Writes:   634192
  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls
  Fop
  -   ---   ---   ---   
 
   0.00   0.00 us   0.00 us   0.00 us