[ovirt-users] Re: Help recovering cluster

2020-03-08 Thread Strahil Nikolov
On March 9, 2020 12:49:55 AM GMT+02:00, joesherman1...@gmail.com wrote:
>Sorry about the email include. I'll do my best to leave it out from now
>on.
>
>If they are just normal disks then o really should just be able to
>transport them to another kvm machine, import the disks into a VM and
>use them, right?

Yes,
But you need the VM's definition (the XML that is visible in the vdsm.log 
during power up), or you will have to create it manually.
Actually, oVirt is just the management layer, with KVM (and we should not 
forget qemu) as the hypervisor.

You do not need  to move them - just define the VMs via virsh and you are good 
to go.
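
For example, something like this pulls the domain XML mentioned above out of 
the current vdsm log (the log path is the vdsm default, 'myvm.xml' is just an 
illustrative file name, and if more than one VM was started you will have to 
trim the output down to the domain you want):

# print everything between <domain ...> and </domain> in the current vdsm log
sed -n '/<domain /,/<\/domain>/p' /var/log/vdsm/vdsm.log > myvm.xml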

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ST23R7Y2OPCHY7PKUTEOM4SPY45DAREI/


[ovirt-users] Re: Help recovering cluster

2020-03-08 Thread joesherman1979
Sorry about the email include. I'll do my best to leave it out from now on.

If they are just normal disks then I really should just be able to transport 
them to another KVM machine, import the disks into a VM and use them, right?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7PYEZKMROMGBO2QP4CRN4YGQ4BDIIHVB/


[ovirt-users] Re: Help recovering cluster

2020-03-08 Thread Strahil Nikolov
On March 8, 2020 9:47:10 PM GMT+02:00, joesherman1...@gmail.com wrote:
>Thank you for the reply. When it was first setup I messed up and didn't
>create the HostedEngine VM. Instead the engine is installed on the
>host. This is wrong, I know, but it had worked for a while. Now it is
>not. At this point if I could figure out how to load the images in a
>straight KVM setup I would be ok. But I don't understand the drive
>image format. I believe I have found the drive images but the file
>names are just guids. Is this just standard img format with no file
>extension? Can I just load these files as disk images in KVM? I am
>working on backing it all up now so I can begin to try things.
>
>I did check and the ovirt-engine service is started on the host without
>error. 
>
>BTW. This is on centos 7. 
>
>Thank you for any help you have. 

As the HostedEngine has an internal database and oVirt was planned to support 
thousands of VMs, the only way to have unique names is via the UUIDs.

They are just simple KVM disks.
You can always run 'qemu-img info <disk image>' to get information about the disk.
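For example (the path below is only illustrative; on file-based storage 
domains the images usually sit under /rhev/data-center/mnt/... on the host, 
one directory per image UUID):

qemu-img info /rhev/data-center/mnt/<storage>/<domain-uuid>/images/<image-uuid>/<volume-uuid>

The 'file format' line in the output tells you whether the volume is raw or 
qcow2, which is all plain KVM/qemu needs in order to reuse it.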

If you want to control the VMs without the engine (as it currently doesn't 
work properly), you need to:
1. Find the VM's XML in the vdsm.log and save it in a separate file.
2. Define this alias on the host:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'

3. Define the VM via:

virsh define <saved xml file>

4. Try to start the VM (see the example below):
virsh start VM

Keep in mind that you might need to:
A) create symbolic links for the storage domains (this is specific to the 
storage type and your custom installation) - the error will be in the libvirt log
B) define the ovirtmgmt or another network.
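
For example, assuming the XML from step 1 was saved as myvm.xml and the VM is 
named myvm (both names are just placeholders):

virsh define myvm.xml
virsh start myvm
virsh list --all                        # confirm the VM's state
less /var/log/libvirt/qemu/myvm.log     # if the start fails (points A/B), the reason is usually here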

P.S.: It would be nice if you leave some old e-mails in your reply, as it 
is hard to track what was already discussed or referenced before.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MMM7N2R6FCGEMZ22BD6KF6LOO4IHTSS6/


[ovirt-users] Re: Help recovering cluster

2020-03-08 Thread joesherman1979
Thank you for the reply. When it was first set up I messed up and didn't create 
the HostedEngine VM. Instead the engine is installed on the host. This is 
wrong, I know, but it had worked for a while. Now it is not. At this point, if I 
could figure out how to load the images in a straight KVM setup I would be OK. 
But I don't understand the drive image format. I believe I have found the drive 
images, but the file names are just GUIDs. Is this just the standard img format 
with no file extension? Can I just load these files as disk images in KVM? I am 
working on backing it all up now so I can begin to try things.

I did check and the ovirt-engine service is started on the host without error. 

BTW, this is on CentOS 7. 

Thank you for any help you have. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WOXXJMJPA5GCUZE6ZD5TNMSU6KYMYC4E/


[ovirt-users] Re: Help recovering cluster

2020-03-08 Thread Strahil Nikolov
On March 8, 2020 5:34:30 PM GMT+02:00, Joe Sherman  
wrote:
>Hello all,
>
>New to the group, apologies for any poor information. I have a home lab
>with a single host ovirt setup. I actually don't think I ever set it up
>exactly right, the agent vm was never created, it was running on the
>host
>server. But it ran and I had a handful of vm's running on it
>successfully.
>I knew I had to rebuild it but was waiting for funds for extra
>hardware.
>Then the other day I came home and could no longer access the web
>portal.
>Could get to the login page but after entering my auth info it just
>hangs.
>I'm honestly not even sure where to start. Does anyone know any good
>troubleshooting docs? Or any idea what may be happening? I can see the
>VM
>images and such in the proper folders so I don't think I have .lost
>anything. But can't access any of the vm's now. Any help would be
>appreciated.
>
>Joe

Hey Joe,

Welcome to the mailing list.

As you don't use the hosted-engine infrastructure, you will need to sort some 
stuff out for yourself.

First step, check if the HostedEngine VM is alive - use ping and then try with 
ssh.
If you manage to get in -> restart the 'ovirt-engine.service'.

Wait 30-40s and try to access the web UI.

Otherwise, you can reboot the whole VM and check what is going on.
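
For reference, on the machine that runs the engine the restart and a quick 
health check are just the standard systemd commands:

systemctl restart ovirt-engine
systemctl status ovirt-engine
journalctl -u ovirt-engine --since '15 min ago'   # if the web UI still hangs, look here for errors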

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WNTC6AYZUJDBCKLQSXWFUPCVVCPEXSCB/


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Strahil Nikolov
On March 8, 2020 5:30:55 PM GMT+02:00, Jayme  wrote:
>Ok, this is more strange.  The same dd test against my ssd os/boot
>drives
>on oVirt node hosts using the same model drive (only smaller) and same
>h310
>controller (only diff being the os/boot drives are in raid mirror and
>gluster drives are passthrough) test completes in <2 seconds in /tmp of
>host but takes ~45 seconds in /gluster_bricks/brick_whatever
>
>Is there any explanation why there is such a vast difference between
>the
>two tests?
>
>example of one my mounts:
>
>/dev/mapper/onn_orchard1-tmp /tmp ext4 defaults,discard 1 2
>/dev/gluster_vg_sda/gluster_lv_prod_a /gluster_bricks/brick_a xfs
>inode64,noatime,nodiratime 0 0
>
>On Sun, Mar 8, 2020 at 12:23 PM Jayme  wrote:
>
>> Strahil,
>>
>> I'm starting to think that my problem could be related to the use of
>perc
>> H310 mini raid controllers in my oVirt hosts. The os/boot SSDs are
>raid
>> mirror but gluster storage is SSDs in passthrough. I've read that the
>queue
>> depth of h310 card is very low and can cause performance issues
>> especially when used with flash devices.
>>
>> dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of
>my
>> hosts gluster bricks /gluster_bricks/brick_a for example takes 45
>seconds
>> to complete.
>>
>> I can perform the same operation in ~2 seconds on another server with
>a
>> better raid controller, but with the same model ssd.
>>
>> I might look at seeing how I can swap out the h310's, unfortunately I
>> think that may require me to wipe the gluster storage drives as with
>> another controller I believe they'd need to be added as single raid 0
>> arrays and would need to be rebuilt to do so.
>>
>> If I were to take one host down at a time is there a way that I can
>> re-build the entire server including wiping the gluster disks and add
>the
>> host back into the ovirt cluster and rebuild it along with the
>bricks? How
>> would you recommend doing such a task if I needed to wipe gluster
>disks on
>> each host ?
>>
>>
>>
>> On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:
>>
>>> No worries at all about the length of the email, the details are
>highly
>>> appreciated. You've given me lots to look into and consider.
>>>
>>>
>>>
>>> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov
>
>>> wrote:
>>>
 On March 7, 2020 1:12:58 PM GMT+02:00, Jayme 
>wrote:
 >Thanks again for the info. You’re probably right about the testing
 >method.
 >Though the reason I’m down this path in the first place is because
>I’m
 >seeing a problem in real world work loads. Many of my vms are used
>in
 >development environments where working with small files is common
>such
 >as
 >npm installs working with large node_module folders, ci/cd doing
>lots
 >of
 >mixed operations io and compute.
 >
 >I started testing some of these things by comparing side to side
>with a
 >vm
 >using same specs only difference being gluster vs nfs storage. Nfs
 >backed
 >storage is performing about 3x better real world.
 >
 >Gluster version is stock that comes with 4.3.7. I haven’t
>attempted
 >updating it outside of official ovirt updates.
 >
 >I’d like to see if I could improve it to handle my workloads
>better. I
 >also
 >understand that replication adds overhead.
 >
 >I do wonder how much difference in performance there would be with
 >replica
 >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster
>but
 >perhaps not by a considerable difference.
 >
 >I will check into c states as well
 >
 >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov
>
 >wrote:
 >
 >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
 >wrote:
 >> >Strahil,
 >> >
 >> >Thanks for your suggestions. The config is pretty standard HCI
>setup
 >> >with
 >> >cockpit and hosts are oVirt node. XFS was handled by the
>deployment
 >> >automatically. The gluster volumes were optimized for virt
>store.
 >> >
 >> >I tried noop on the SSDs, that made zero difference in the
>tests I
 >was
 >> >running above. I took a look at the random-io-profile and it
>looks
 >like
 >> >it
 >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio
>= 5
 >--
 >> >my
 >> >hosts already appear to have those sysctl values, and by
>default are
 >> >using virtual-host tuned profile.
 >> >
 >> >I'm curious what a test like "dd if=/dev/zero of=test2.img
>bs=512
 >> >count=1000 oflag=dsync" on one of your VMs would show for
>results?
 >> >
 >> >I haven't done much with gluster profiling but will take a look
>and
 >see
 >> >if
 >> >I can make sense of it. Otherwise, the setup is pretty stock
>oVirt
 >HCI
 >> >deployment with SSD backed storage and 10Gbe storage network. 
>I'm
 >not
 >> >coming anywhere close to maxing network throughput.
 >> >
 >> >The NFS export I was testing was an export 

[ovirt-users] Help recovering cluster

2020-03-08 Thread Joe Sherman
Hello all,

  New to the group, apologies for any poor information. I have a home lab
with a single host ovirt setup. I actually don't think I ever set it up
exactly right, the agent vm was never created, it was running on the host
server. But it ran and I had a handful of vm's running on it successfully.
I knew I had to rebuild it but was waiting for funds for extra hardware.
Then the other day I came home and could no longer access the web portal.
Could get to the login page but after entering my auth info it just hangs.
I'm honestly not even sure where to start. Does anyone know any good
troubleshooting docs? Or any idea what may be happening? I can see the VM
images and such in the proper folders so I don't think I have lost
anything. But can't access any of the vm's now. Any help would be
appreciated.

Joe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HWOK53VBDZJBJPPUSGFDRIEYHO45OPMJ/


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Jayme
Ok, this is more strange. The same dd test against my SSD os/boot drives on 
the oVirt node hosts, using the same model drive (only smaller) and the same 
H310 controller (the only difference being that the os/boot drives are in a 
RAID mirror and the gluster drives are passthrough), completes in <2 seconds 
in /tmp on the host but takes ~45 seconds in /gluster_bricks/brick_whatever.

Is there any explanation why there is such a vast difference between the
two tests?

An example of one of my mounts:

/dev/mapper/onn_orchard1-tmp /tmp ext4 defaults,discard 1 2
/dev/gluster_vg_sda/gluster_lv_prod_a /gluster_bricks/brick_a xfs
inode64,noatime,nodiratime 0 0
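
Concretely, the two runs being compared are (the output file name is 
arbitrary):

dd if=/dev/zero of=/tmp/ddtest.img bs=512 count=5000 oflag=dsync                     # <2 seconds
dd if=/dev/zero of=/gluster_bricks/brick_a/ddtest.img bs=512 count=5000 oflag=dsync  # ~45 seconds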

On Sun, Mar 8, 2020 at 12:23 PM Jayme  wrote:

> Strahil,
>
> I'm starting to think that my problem could be related to the use of perc
> H310 mini raid controllers in my oVirt hosts. The os/boot SSDs are raid
> mirror but gluster storage is SSDs in passthrough. I've read that the queue
> depth of h310 card is very low and can cause performance issues
> especially when used with flash devices.
>
> dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of my
> hosts gluster bricks /gluster_bricks/brick_a for example takes 45 seconds
> to complete.
>
> I can perform the same operation in ~2 seconds on another server with a
> better raid controller, but with the same model ssd.
>
> I might look at seeing how I can swap out the h310's, unfortunately I
> think that may require me to wipe the gluster storage drives as with
> another controller I believe they'd need to be added as single raid 0
> arrays and would need to be rebuilt to do so.
>
> If I were to take one host down at a time is there a way that I can
> re-build the entire server including wiping the gluster disks and add the
> host back into the ovirt cluster and rebuild it along with the bricks? How
> would you recommend doing such a task if I needed to wipe gluster disks on
> each host ?
>
>
>
> On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:
>
>> No worries at all about the length of the email, the details are highly
>> appreciated. You've given me lots to look into and consider.
>>
>>
>>
>> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
>> wrote:
>>
>>> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>>> >Thanks again for the info. You’re probably right about the testing
>>> >method.
>>> >Though the reason I’m down this path in the first place is because I’m
>>> >seeing a problem in real world work loads. Many of my vms are used in
>>> >development environments where working with small files is common such
>>> >as
>>> >npm installs working with large node_module folders, ci/cd doing lots
>>> >of
>>> >mixed operations io and compute.
>>> >
>>> >I started testing some of these things by comparing side to side with a
>>> >vm
>>> >using same specs only difference being gluster vs nfs storage. Nfs
>>> >backed
>>> >storage is performing about 3x better real world.
>>> >
>>> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
>>> >updating it outside of official ovirt updates.
>>> >
>>> >I’d like to see if I could improve it to handle my workloads better. I
>>> >also
>>> >understand that replication adds overhead.
>>> >
>>> >I do wonder how much difference in performance there would be with
>>> >replica
>>> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>>> >perhaps not by a considerable difference.
>>> >
>>> >I will check into c states as well
>>> >
>>> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>>> >wrote:
>>> >
>>> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>>> >wrote:
>>> >> >Strahil,
>>> >> >
>>> >> >Thanks for your suggestions. The config is pretty standard HCI setup
>>> >> >with
>>> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>>> >> >automatically. The gluster volumes were optimized for virt store.
>>> >> >
>>> >> >I tried noop on the SSDs, that made zero difference in the tests I
>>> >was
>>> >> >running above. I took a look at the random-io-profile and it looks
>>> >like
>>> >> >it
>>> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>>> >--
>>> >> >my
>>> >> >hosts already appear to have those sysctl values, and by default are
>>> >> >using virtual-host tuned profile.
>>> >> >
>>> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
>>> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
>>> >> >
>>> >> >I haven't done much with gluster profiling but will take a look and
>>> >see
>>> >> >if
>>> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
>>> >HCI
>>> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
>>> >not
>>> >> >coming anywhere close to maxing network throughput.
>>> >> >
>>> >> >The NFS export I was testing was an export from a local server
>>> >> >exporting a
>>> >> >single SSD (same type as in the oVirt hosts).
>>> >> >
>>> >> >I might end up switching storage to NFS and ditching gluster if
>>> >> >performance
>>> >> >is 

[ovirt-users] Re: oVirt 4.4.0 Alpha release refresh is now available for testing

2020-03-08 Thread Mathieu Simon
Hi Martin

On Sun, 8 Mar 2020 at 08:55, Martin Tessun wrote:
>
> RHV 4.3 will have EUS due to the exact reason that RHV 4.4 changes the
> Hosts as well as the Engine to RHEL 8. Also RHV 4.3 does have the SAP
> HANA certification for MultiVM (which RHV 4.4 does not have yet).
>
> So there is no need to update to RHV 4.4 as soon as it is released.
>
> Does that help?
Yes, that does indeed. Last time I checked the lifecycle page of RHV*, I 
remembered that no EUS was planned.
Maybe that's the somewhat confusing part: the page still says that no EUS 
support is given for 4.3, yet support ends in April 2021, which gives some 
time for the transition. That transition window is the important part to me, 
because of the migration to an EL8 base, to which I'm otherwise looking 
forward.

Regards
Mathieu

* https://access.redhat.com/support/policy/updates/rhev
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YRLJTSCCNDJHD4AJBXMZWILUWDRIXBGT/


[ovirt-users] Re: What if anything can be done to improve small file performance with gluster?

2020-03-08 Thread Jayme
Strahil,

I'm starting to think that my problem could be related to the use of perc
H310 mini raid controllers in my oVirt hosts. The os/boot SSDs are raid
mirror but gluster storage is SSDs in passthrough. I've read that the queue 
depth of the H310 card is very low and can cause performance issues, 
especially when used with flash devices.
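
One way to check what the controller actually exposes is to read the queue 
settings of the disk behind a brick, for example (sda is just a placeholder 
for whatever device backs the brick):

cat /sys/block/sda/queue/scheduler        # current I/O scheduler
cat /sys/block/sda/queue/nr_requests      # block-layer queue size
cat /sys/block/sda/device/queue_depth     # queue depth reported for the device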

dd if=/dev/zero of=test4.img bs=512 count=5000 oflag=dsync on one of my
hosts gluster bricks /gluster_bricks/brick_a for example takes 45 seconds
to complete.

I can perform the same operation in ~2 seconds on another server with a
better raid controller, but with the same model ssd.

I might look at seeing how I can swap out the H310s; unfortunately, I think 
that may require me to wipe the gluster storage drives, as with another 
controller I believe they'd need to be added as single RAID 0 arrays and 
would need to be rebuilt to do so.

If I were to take one host down at a time, is there a way that I can rebuild 
the entire server, including wiping the gluster disks, and then add the host 
back into the oVirt cluster and rebuild it along with the bricks? How would 
you recommend doing such a task if I needed to wipe the gluster disks on 
each host?



On Sat, Mar 7, 2020 at 6:24 PM Jayme  wrote:

> No worries at all about the length of the email, the details are highly
> appreciated. You've given me lots to look into and consider.
>
>
>
> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov 
> wrote:
>
>> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme  wrote:
>> >Thanks again for the info. You’re probably right about the testing
>> >method.
>> >Though the reason I’m down this path in the first place is because I’m
>> >seeing a problem in real world work loads. Many of my vms are used in
>> >development environments where working with small files is common such
>> >as
>> >npm installs working with large node_module folders, ci/cd doing lots
>> >of
>> >mixed operations io and compute.
>> >
>> >I started testing some of these things by comparing side to side with a
>> >vm
>> >using same specs only difference being gluster vs nfs storage. Nfs
>> >backed
>> >storage is performing about 3x better real world.
>> >
>> >Gluster version is stock that comes with 4.3.7. I haven’t attempted
>> >updating it outside of official ovirt updates.
>> >
>> >I’d like to see if I could improve it to handle my workloads better. I
>> >also
>> >understand that replication adds overhead.
>> >
>> >I do wonder how much difference in performance there would be with
>> >replica
>> >3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
>> >perhaps not by a considerable difference.
>> >
>> >I will check into c states as well
>> >
>> >On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov 
>> >wrote:
>> >
>> >> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme 
>> >wrote:
>> >> >Strahil,
>> >> >
>> >> >Thanks for your suggestions. The config is pretty standard HCI setup
>> >> >with
>> >> >cockpit and hosts are oVirt node. XFS was handled by the deployment
>> >> >automatically. The gluster volumes were optimized for virt store.
>> >> >
>> >> >I tried noop on the SSDs, that made zero difference in the tests I
>> >was
>> >> >running above. I took a look at the random-io-profile and it looks
>> >like
>> >> >it
>> >> >really only sets vm.dirty_background_ratio = 2 & vm.dirty_ratio = 5
>> >--
>> >> >my
>> >> >hosts already appear to have those sysctl values, and by default are
>> >> >using virtual-host tuned profile.
>> >> >
>> >> >I'm curious what a test like "dd if=/dev/zero of=test2.img bs=512
>> >> >count=1000 oflag=dsync" on one of your VMs would show for results?
>> >> >
>> >> >I haven't done much with gluster profiling but will take a look and
>> >see
>> >> >if
>> >> >I can make sense of it. Otherwise, the setup is pretty stock oVirt
>> >HCI
>> >> >deployment with SSD backed storage and 10Gbe storage network.  I'm
>> >not
>> >> >coming anywhere close to maxing network throughput.
>> >> >
>> >> >The NFS export I was testing was an export from a local server
>> >> >exporting a
>> >> >single SSD (same type as in the oVirt hosts).
>> >> >
>> >> >I might end up switching storage to NFS and ditching gluster if
>> >> >performance
>> >> >is really this much better...
>> >> >
>> >> >
>> >> >On Fri, Mar 6, 2020 at 5:06 PM Strahil Nikolov
>> >
>> >> >wrote:
>> >> >
>> >> >> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme 
>> >> >wrote:
>> >> >> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
>> >> >> >disks).
>> >> >> >Small file performance inner-vm is pretty terrible compared to a
>> >> >> >similar
>> >> >> >spec'ed VM using NFS mount (10GBe network, SSD disk)
>> >> >> >
>> >> >> >VM with gluster storage:
>> >> >> >
>> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
>> >> >> >1000+0 records in
>> >> >> >1000+0 records out
>> >> >> >512000 bytes (512 kB) copied, 53.9616 s, 9.5 kB/s
>> >> >> >
>> >> >> >VM with NFS:
>> >> >> >
>> >> >> ># dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
>> 

[ovirt-users] Re: upgrade from 4.38 to 4.39

2020-03-08 Thread Yedidyah Bar David
On Sat, Mar 7, 2020 at 11:19 PM Strahil Nikolov  wrote:
>
> On March 7, 2020 10:11:13 PM GMT+02:00, eev...@digitaldatatechs.com wrote:
> >The upgrade went successfully, however I have lost the ability to
> >migrate vm's manually.
> >The engine log:
> >2020-03-07 15:05:32,826-05 INFO
> >[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
> >(EE-ManagedThreadFactory-engineScheduled-Thread-58) [] Fetched 0 VMs
> >from VDS '3e4e1779-045d-4cee-be45-99a80471a1e4'
> >2020-03-07 15:05:59,675-05 WARN
> >[org.ovirt.engine.core.bll.SearchQuery] (default task-5)
> >[f2f9faa4-41ac-475b-ad57-cb17792d4c77]
> >ResourceManager::searchBusinessObjects - Invalid search text - ''VMs :
> >id=''
> >2020-03-07 15:07:31,640-05 WARN
> >[org.ovirt.engine.core.bll.SearchQuery] (default task-5)
> >[edce2f10-d670-4e7e-a69a-937d91882146]
> >ResourceManager::searchBusinessObjects - Invalid search text - ''VMs :
> >id=''
> >If I put a host into maintenance the vm's migrate automatically.But
> >manual migration is broken for some reason.
> >Any ideas?

Are you sure that the above is all you get in engine.log when trying to migrate?
Please try again and check/share engine.log over the whole relevant time interval.

>
> Check the libvirt log.
> Sometimes  it gives  more clues:
>
> /var/log/libvirt/qemu/<vm name>.log
>
> Also, check if it can be done via the API.
> If API works,  core functionality is OK and only UI is affected.
>
> Best Regards,
> Strahil Nikolov
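
For the API check, a manual migration can be triggered with something like 
this (the engine host name, the UUIDs and the credentials are placeholders):

curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -d '<action><host id="TARGET-HOST-UUID"/></action>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM-UUID/migrate'

If that migrates the VM while the UI does not, the problem is most likely 
limited to the web UI.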



-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SQN5JANNPL2JKDTPSGN7B3SGEGN67QKH/