Re: [ovirt-users] ovirt-node-ng-update

2017-11-29 Thread Yuval Turgeman
Hi,

Which version are you using ?

Thanks,
Yuval.
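
A quick way to gather the version details being asked for here, assuming an oVirt Node NG host (a sketch, not an official checklist):

# image/layer view as seen by imgbased / nodectl
nodectl info

# the image-update package, if one is installed
rpm -q ovirt-node-ng-image-update

# OS release string the engine reports for the host
cat /etc/os-release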

On Wed, Nov 29, 2017 at 4:17 PM, Nathanaël Blanchet wrote:

> Hi all,
>
> I didn't find any explicit howto about upgrading ovirt-node, but I may be
> mistaken...
>
> However, here is what I guess: after installing a fresh ovirt-node-ng
> iso, the engine's check for upgrades finds an available update,
> "ovirt-node-ng-image-update".
>
> But the available update is the same version as the current one. If I choose to
> install it, the installation succeeds, but after rebooting, ovirt-node-ng-image-update
> is still not among the installed rpms, so the engine tells me an update of
> ovirt-node is still available.
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-29 Thread Yuval Turgeman
Looks like SELinux is broken on your machine for some reason. Can you share
/etc/selinux?

Thanks,
Yuval.
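
If the policy store under /etc/selinux/targeted really is missing or damaged, one hedged way to inspect and try to rebuild it (not an official procedure; commands assume a CentOS/RHEL 7 host):

# check whether the binary policy files actually exist
ls -l /etc/selinux/targeted/policy/

# try rebuilding the policy store from the installed modules
semodule -B

# if that still fails, reinstalling the policy package usually restores policy.30
yum reinstall -y selinux-policy-targeted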

On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:

> @Yuval Turgeman
>
>
> ###
>
>
> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>
> SELinux:  Could not downgrade policy file 
> /etc/selinux/targeted/policy/policy.30,
> searching for an older version.
>
> SELinux:  Could not open policy file <= 
> /etc/selinux/targeted/policy/policy.30:
>  No such file or directory
>
> /sbin/load_policy:  Can't load policy:  No such file or directory
>
> libsemanage.semanage_reload_policy: load_policy returned error code 2.
> (No such file or directory).
>
> SELinux:  Could not downgrade policy file 
> /etc/selinux/targeted/policy/policy.30,
> searching for an older version.
>
> SELinux:  Could not open policy file <= 
> /etc/selinux/targeted/policy/policy.30:
>  No such file or directory
>
> /sbin/load_policy:  Can't load policy:  No such file or directory
>
> libsemanage.semanage_reload_policy: load_policy returned error code 2.
> (No such file or directory).
>
> OSError: No such file or directory
>
>
> ###
>
>
> @Ryan Barry
>
>
> A manual yum upgrade finished without any errors, but imgbased.log still shows
> me the following:
>
>
> ###
>
>
> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as {'attach':
> True, 'size': '1G'}
>
> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
> }
>
> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
> True, 'stderr': }
>
> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>
>   onn/tmp
>
>   onn/var_log
>
>   onn/var_log_audit
>
> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', '/etc'],) {}
>
> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
> '/etc'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.tuHU8'],) {}
>
> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.tuHU8'],) {}
>
> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc
>
> Traceback (most recent call last):
>
>   File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 109, in on_new_layer
>
> check_nist_layout(imgbase, new_lv)
>
>   File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 179, in check_nist_layout
>
> v.create(t, paths[t]["size"], paths[t]["attach"])
>
>   File 
> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py",
> line 48, in create
>
> "Path is already a volume: %s" % where
>
> AssertionError: Path is already a volume: /home
>
> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.bEW2k'],) {}
>
> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.bEW2k'],) {}
>
> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.UB5Yg'],) {}
>
> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:29,625 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:29,625 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.UB5Yg'],) {}
>
> 2017-11-28 17:25:29,626 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.UB5Yg'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:29,631 [DEBUG] (MainThread) Returned:
>
> Traceback (most recent call last):
>
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>
> "__main__", fname, loader, pkg_name)
>
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>
> exec code in run_globals
>
>   File 
> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/__main__.py",
> line 53, in 
>
> CliApplication()
>
>   File 
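
The assertion above fires because /home is already an imgbased volume (onn/home appears in the vgs output earlier in the log), so check_nist_layout's attempt to create it fails. A hedged way to double-check which paths imgbased already tracks, reusing the same query the osupdater plugin runs:

# LVs tagged as imgbased volumes (same call seen in the log above)
vgs --noheadings @imgbased:volume -o lv_full_name

# cross-check the LV tags and what is actually mounted at /home
lvs -o lv_name,lv_tags onn
findmnt /home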

Re: [ovirt-users] Convert local storage domain to shared

2017-11-29 Thread Karli Sjöberg
On 29 Nov 2017 at 18:49, Demeter Tibor wrote:

Hi,

Yes, I understand what you are talking about. It isn't too safe... :(
We have terabytes under that VM.
I could take a downtime of at most eight hours (maybe), but meanwhile I have to
copy 3 TB of vdisks. First I need to export (over a gigabit NIC) to the export
domain, and then back in over a 10GbE NIC.
I don't know whether that will be enough.

Well, just counting the numbers, let's start with the optimistic approach and
say that you can move 100 MB/s for 8 hours:

100*60*60*8 = 2,880,000 MB

And then just divide that by 1024*1024 to get to tera:

2,880,000 / (1024^2) = 2.74658203125 TB

So roughly 2.7 TB in 8 hours, and that's very optimistic! If you're more
pessimistic, adjust the number of MB you think (or, better yet, have tested)
that you'll be able to send per second to get a more accurate answer.

The question is how much you can do without any downtime. I don't know myself,
but the devs should:

@devs
Is it possible to do live exports? I mean to keep exporting and just sync the
delta? If not, that would be an awesome RFE, since it would drastically reduce
the downtime for these kinds of operations.

/K

Thanks

Tibor

- On 29 Nov 2017, at 18:26, Christopher Cox c...@endlessnow.com wrote:

> On 11/29/2017 09:39 AM, Demeter Tibor wrote:
>>
>> Dear Users,
>>
>> We have an old oVirt 3.5 installation with a local and a shared cluster.
>> Meanwhile we created a new data center that is based on 4.1 and uses only
>> shared infrastructure.
>> I would like to migrate a big VM from the old local data center to the new
>> one, but I don't have enough downtime.
>>
>> Is it possible to convert the old local storage to shared (by sharing it via
>> NFS) and attach it as a new storage domain to the new cluster?
>> I just want to import the VM and copy it (while running) with the live
>> storage migration function.
>>
>> I know the official way to move VMs between oVirt clusters is the export
>> domain, but the VM has very big disks.
>>
>> What can I do?
>
> Just my opinion, but if you don't figure out a way to have occasional downtime,
> you'll probably pay the price with unplanned downtime eventually (and it could
> be painful).
>
> Define "large disks"?  Terabytes?
>
> I know for a fact that if you don't have good network segmentation that live
> migrations of large disks can be very problematic.  And I'm not talking about
> what you're wanting to do.  I'm just talking about storage migration.
>
> We successfully migrated hundreds of VMs from a 3.4 to a 3.6 (on new blades and
> storage) last year over time using the NFS export domain method.
>
> If storage is the same across DCs, you might be able to shortcut this with
> minimal downtime, but I'm pretty sure there will be some downtime.
>
> I've seen large storage migrations render entire nodes offline (not nice) due to
> non-isolated paths or QoS.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
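
Plugging Tibor's own numbers into the same formula, a quick shell sketch (the ~3 TB figure and the optimistic 100 MB/s are taken from this thread; everything else is illustrative):

# ~3 TB of vdisks expressed in MB, copied at an assumed sustained 100 MB/s
total_mb=$((3 * 1024 * 1024))
needed_s=$((total_mb / 100))
echo "$((needed_s / 3600)) h $(( (needed_s % 3600) / 60 )) min"
# prints roughly "8 h 44 min", so the export leg alone already overshoots the
# 8-hour window, before the copy back in over the 10GbE link is even counted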


Re: [ovirt-users] Convert local storage domain to shared

2017-11-29 Thread Demeter Tibor
Hi,

Yes, I understand what you are talking about. It isn't too safe... :(
We have terabytes under that VM.
I could take a downtime of at most eight hours (maybe), but meanwhile I have
to copy 3 TB of vdisks. First I need to export (over a gigabit NIC) to the
export domain, and then back in over a 10GbE NIC.
I don't know whether that will be enough.

Thanks

Tibor
- On 29 Nov 2017, at 18:26, Christopher Cox c...@endlessnow.com wrote:

> On 11/29/2017 09:39 AM, Demeter Tibor wrote:
>>
>> Dear Users,
>>
>> We have an old oVirt 3.5 installation with a local and a shared cluster.
>> Meanwhile we created a new data center that is based on 4.1 and uses only
>> shared infrastructure.
>> I would like to migrate a big VM from the old local data center to the new
>> one, but I don't have enough downtime.
>>
>> Is it possible to convert the old local storage to shared (by sharing it via
>> NFS) and attach it as a new storage domain to the new cluster?
>> I just want to import the VM and copy it (while running) with the live
>> storage migration function.
>>
>> I know the official way to move VMs between oVirt clusters is the export
>> domain, but the VM has very big disks.
>>
>> What can I do?
> 
> Just my opinion, but if you don't figure out a way to have occasional 
> downtime,
> you'll probably pay the price with unplanned downtime eventually (and it could
> be painful).
> 
> Define "large disks"?  Terabytes?
> 
> I know for a fact that if you don't have good network segmentation that live
> migrations of large disks can be very problematic.  And I'm not talking about
> what you're wanting to do.  I'm just talking about storage migration.
> 
> We successfully migrated hundreds of VMs from a 3.4 to a 3.6 (on new blades 
> and
> storage) last year over time using the NFS export domain method.
> 
> If storage is the same across DC's, you might be able to shortcut this with
> minimal downtime, but I'm pretty sure there will be some downtime.
> 
> I've seen large storage migrations render entire nodes offline (not nice) due 
> to
> non-isolated paths or QoS.
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.2.0 Second Beta Release is now available for testing

2017-11-29 Thread Blaster

Is Fedora not supported anymore?

I've read the release notes for the 4.2r2 beta and 4.1.7; they mention 
specific versions of RHEL and CentOS, but only mention Fedora by name, 
with no specific version information.


On 11/15/2017 9:17 AM, Sandro Bonazzola wrote:


This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later


This release supports Hypervisor Hosts on x86_64 and ppc64le 
architectures for:


* Red Hat Enterprise Linux 7.4 or later

* CentOS Linux (or similar) 7.4 or later

* oVirt Node 4.2 (available for x86_64 only)

http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Convert local storage domain to shared

2017-11-29 Thread Christopher Cox

On 11/29/2017 09:39 AM, Demeter Tibor wrote:


Dear Users,

We have an old oVirt 3.5 installation with a local and a shared cluster. Meanwhile we
created a new data center that is based on 4.1 and uses only shared
infrastructure.
I would like to migrate a big VM from the old local data center to the new one, but
I don't have enough downtime.

Is it possible to convert the old local storage to shared (by sharing it via NFS) and
attach it as a new storage domain to the new cluster?
I just want to import the VM and copy it (while running) with the live storage
migration function.

I know the official way to move VMs between oVirt clusters is the export
domain, but the VM has very big disks.

What can I do?


Just my opinion, but if you don't figure out a way to have occasional downtime, 
you'll probably pay the price with unplanned downtime eventually (and it could 
be painful).


Define "large disks"?  Terabytes?

I know for a fact that if you don't have good network segmentation that live 
migrations of large disks can be very problematic.  And I'm not talking about 
what you're wanting to do.  I'm just talking about storage migration.


We successfully migrated hundreds of VMs from a 3.4 to a 3.6 (on new blades and 
storage) last year over time using the NFS export domain method.


If storage is the same across DC's, you might be able to shortcut this with 
minimal downtime, but I'm pretty sure there will be some downtime.


I've seen large storage migrations render entire nodes offline (not nice) due to 
non-isolated paths or QoS.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Convert local storage domain to shared

2017-11-29 Thread Demeter Tibor

Dear Users, 

We have an old oVirt 3.5 installation with a local and a shared cluster. Meanwhile 
we created a new data center that is based on 4.1 and uses only shared 
infrastructure. 
I would like to migrate a big VM from the old local data center to the new one, but 
I don't have enough downtime. 

Is it possible to convert the old local storage to shared (by sharing it via NFS) 
and attach it as a new storage domain to the new cluster? 
I just want to import the VM and copy it (while running) with the live storage 
migration function. 

I know the official way to move VMs between oVirt clusters is the export 
domain, but the VM has very big disks. 

What can I do? 

Thanks 

Tibor 





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt-node-ng-update

2017-11-29 Thread Nathanaël Blanchet

Hi all,

I didn't find any explicit howto about upgrading ovirt-node, but I may be
mistaken...


However, here is what I guess: after installing a fresh ovirt-node-ng
iso, the engine's check for upgrades finds an available update,
"ovirt-node-ng-image-update".


But the available update is the same version as the current one. If I choose to
install it, the installation succeeds, but after rebooting, ovirt-node-ng-image-update
is still not among the installed rpms, so the engine tells me an update of
ovirt-node is still available.


--
Nathanaël Blanchet

Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5   
Tél. 33 (0)4 67 54 84 55
Fax  33 (0)4 67 54 84 14
blanc...@abes.fr

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] please test UI plugins in 4.2 beta

2017-11-29 Thread Greg Sheremeta
Hi everyone,

[I should have sent this message sooner -- apologies, and thanks to Martin
Sivak for the reminder!]

If you're trying out oVirt 4.2 Beta, please check that any existing UI
plugins [1] you are using 1. still work (they should!), and 2. look good in
the new UI. If something doesn't look quite right with a UI plugin [for
example, if it doesn't quite match the new theme], please contact Alexander
Wels and me and we can assist with getting it updated.

[1] https://www.ovirt.org/develop/release-management/features/ux/uiplugins/

Best wishes,
Greg

-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.comIRC: gshereme

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] FOSDEM 2018 - CFP is about to close!

2017-11-29 Thread Doron Fediuck
Hi all,
FOSDEM 18's Virt and IaaS CFP ends on Friday!
This is your last chance to submit your session, so do not wait any longer.
You can find more details in [1].

Thanks,
Doron

[1] https://www.ovirt.org/blog/2017/10/come-to-fosdem-event/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt self-hosted Engine [Failed to acquire lock: error -243]

2017-11-29 Thread Terry hey
Dear Shani and Martin,
Thank you for your help. The explanation makes sense, as I was just testing the
HA function in the oVirt self-hosted engine.
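
A quick way to confirm that the hosts have settled down after such a lock race, assuming the hosted-engine CLI is available on the hosts (a sketch, run on any hosted-engine host):

# shows which host currently runs the engine VM and the agent/sanlock state
hosted-engine --vm-status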

Regards,
Terry

2017-11-27 18:54 GMT+08:00 Martin Sivak :

> Hi,
>
> This error just informs you that multiple hosts tried to start the
> engine VM and one of them lost the lock race. Nothing to worry about
> as long as the webadmin is up.
>
> https://ovirt.org/documentation/how-to/hosted-
> engine/#engineunexpectedlydown
>
> Best regards
>
> --
> Martin Sivak
> oVirt / SLA
>
>
> On Mon, Nov 27, 2017 at 10:29 AM, Terry hey  wrote:
> > Hello all,
> > I installed ovirt self-hosted engine. Unfortunately, the engine VM was
> > suddenly shutdown. And later, it automatically powered on. The engine
> admin
> > console showed the following error.
> > VM HostedEngine is down with error. Exit message: resource busy: Failed
> to
> > acquire lock: error -243.
> >
> > I would like to know what this error message is talking about and how to
> > solve it.
> >
> > I would like to thank all of you in advance for helping me to solve this issue.
> >
> > Regards,
> > Terry
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-users] slow performance with export storage on glusterfs

2017-11-29 Thread Dmitri Chebotarov
Hello

If you use Gluster as a FUSE mount, it's always slower than you expect it to
be.
If you want to get better performance out of your oVirt/Gluster storage,
try the following:

- create a Linux VM in your oVirt environment and assign 4/8/12 virtual disks
(the virtual disks are located on your Gluster storage volume).
- Boot/configure the VM, then use LVM to create a VG/LV with 4 stripes
(lvcreate -i 4), using all 4/8/12 virtual disks as PVs.
- Then install an NFS server and export the LV you created in the previous
step; use the NFS export as the export domain in oVirt/RHEV.

You should get wire speed when you use multiple stripes on Gluster storage;
the FUSE mount on the oVirt host will fan out requests to all 4 servers.
Gluster is very good at distributed/parallel workloads, but when you use a
direct Gluster FUSE mount for the export domain you only have one data stream,
which is fragmented even more by the multiple writes/reads that Gluster needs
to do to save your data on all member servers.
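
A rough sketch of that striped-LV plus NFS setup; the device names, VG/LV names, mount point and export options below are made up, so adapt them to your environment (assumes an EL7 guest):

# inside the helper VM, with the 4 Gluster-backed virtual disks attached as /dev/vdb../dev/vde
pvcreate /dev/vd{b,c,d,e}
vgcreate export_vg /dev/vd{b,c,d,e}

# one LV striped across all 4 PVs, as suggested above (lvcreate -i 4)
lvcreate -i 4 -I 256 -l 100%FREE -n export_lv export_vg
mkfs.xfs /dev/export_vg/export_lv

# export it over NFS so it can be attached as an export domain in oVirt/RHEV
mkdir -p /export
mount /dev/export_vg/export_lv /export
chown 36:36 /export
echo '/export *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -ra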



On Mon, Nov 27, 2017 at 8:41 PM, Donny Davis  wrote:

> What about mounting over nfs instead of the fuse client. Or maybe
> libgfapi. Is that available for export domains
>
> On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka  wrote:
>
>> On 11/24/2017 06:41 AM, Sahina Bose wrote:
>> >
>> >
>> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka > > > wrote:
>> >
>> > Hi,
>> >
>> > On 11/22/2017 07:30 PM, Nir Soffer wrote:
>> > > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka > 
>> > > >> wrote:
>> > >
>> > > Hi,
>> > >
>> > > I am trying realize why is exporting of vm to export storage
>> on
>> > > glusterfs such slow.
>> > >
>> > > I am using oVirt and RHV, both instalations on version 4.1.7.
>> > >
>> > > Hosts have dedicated nics for rhevm network - 1gbps, data
>> > storage itself
>> > > is on FC.
>> > >
>> > > GlusterFS cluster lives separate on 4 dedicated hosts. It has
>> > slow disks
>> > > but I can achieve about 200-400mbit throughput in other
>> > applications (we
>> > > are using it for "cold" data, backups mostly).
>> > >
>> > > I am using this glusterfs cluster as backend for export
>> > storage. When I
>> > > am exporting vm I can see only about 60-80mbit throughput.
>> > >
>> > > What could be the bottleneck here?
>> > >
>> > > Could it be qemu-img utility?
>> > >
>> > > vdsm  97739  0.3  0.0 354212 29148 ?S>  0:06
>> > > /usr/bin/qemu-img convert -p -t none -T none -f raw
>> > >
>> >  /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/
>> ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-
>> c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > -O raw
>> > >
>> >  /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__
>> export/81094499-a392-4ea2-b081-7c6288fbb636/images/
>> ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > >
>> > > Any idea how to make it work faster or what throughput should
>> I
>> > > expected?
>> > >
>> > >
>> > > gluster storage operations are using fuse mount - so every write:
>> > > - travel to the kernel
>> > > - travel back to the gluster fuse helper process
>> > > - travel to all 3 replicas - replication is done on client side
>> > > - return to kernel when all writes succeeded
>> > > - return to caller
>> > >
>> > > So gluster will never set any speed record.
>> > >
>> > > Additionally, you are copying from raw lv on FC - qemu-img cannot
>> do
>> > > anything
>> > > smart and avoid copying unused clusters. Instead if copies
>> > gigabytes of
>> > > zeros
>> > > from FC.
>> >
>> > ok, it does make sense
>> >
>> > > However 7.5-10 MiB/s sounds too slow.
>> > >
>> > > I would try to test with dd - how much time it takes to copy
>> > > the same image from FC to your gluster storage?
>> > >
>> > > dd
>> > > if=/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/
>> ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-
>> c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
>> > > of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__
>> export/81094499-a392-4ea2-b081-7c6288fbb636/__test__
>> > > bs=8M oflag=direct status=progress
>> >
>> > unfrotunately dd performs the same
>> >
>> > 1778384896 bytes (1.8 GB) copied, 198.565265 s, 9.0 MB/s
>> >
>> >
>> > > If dd can do this faster, please ask on qemu-discuss mailing list:
>> > > https://lists.nongnu.org/mailman/listinfo/qemu-discuss
>> > 
>> > >
>> > > If both give similar results, I think asking 

Re: [ovirt-users] [Gluster-users] slow performance with export storage on glusterfs

2017-11-29 Thread Jiří Sléžka
Hello,

> 
> If you use Gluster as FUSE mount it's always slower than you expect it
> to be.
> If you want to get better performance out of your oVirt/Gluster storage,
> try the following: 
> 
> - create a Linux VM in your oVirt environment, assign 4/8/12 virtual
> disks (Virtual disks are located on your Gluster storage volume).
> - Boot/configure the VM, then use LVM to create VG/LV with 4 stripes
> (lvcreate -i 4) and use all 4/8/12 virtual disks as PVs.
> - then install NFS server and export LV you created in previous step,
> use the NFS export as export domain in oVirt/RHEV.
> 
> You should get wire speed when you use multiple stripes on Gluster
> storage; the FUSE mount on the oVirt host will fan out requests to all 4 servers.
> Gluster is very good at distributed/parallel workloads, but when you use a
> direct Gluster FUSE mount for the export domain you only have one data
> stream, which is fragmented even more by the multiple writes/reads that
> Gluster needs to do to save your data on all member servers.

Thanks for the explanation, it is an interesting solution.

Cheers,

Jiri

> 
> 
> 
> On Mon, Nov 27, 2017 at 8:41 PM, Donny Davis  > wrote:
> 
> What about mounting over nfs instead of the fuse client. Or maybe
> libgfapi. Is that available for export domains
> 
> On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka  > wrote:
> 
> On 11/24/2017 06:41 AM, Sahina Bose wrote:
> >
> >
> > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka
> 
> > >> wrote:
> >
> >     Hi,
> >
> >     On 11/22/2017 07:30 PM, Nir Soffer wrote:
> >     > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka
> 
> >
> >     > 
>  >     >
> >     >     Hi,
> >     >
> >     >     I am trying realize why is exporting of vm to export
> storage on
> >     >     glusterfs such slow.
> >     >
> >     >     I am using oVirt and RHV, both instalations on
> version 4.1.7.
> >     >
> >     >     Hosts have dedicated nics for rhevm network - 1gbps,
> data
> >     storage itself
> >     >     is on FC.
> >     >
> >     >     GlusterFS cluster lives separate on 4 dedicated
> hosts. It has
> >     slow disks
> >     >     but I can achieve about 200-400mbit throughput in other
> >     applications (we
> >     >     are using it for "cold" data, backups mostly).
> >     >
> >     >     I am using this glusterfs cluster as backend for export
> >     storage. When I
> >     >     am exporting vm I can see only about 60-80mbit
> throughput.
> >     >
> >     >     What could be the bottleneck here?
> >     >
> >     >     Could it be qemu-img utility?
> >     >
> >     >     vdsm      97739  0.3  0.0 354212 29148 ?        S 15:43   0:06
> >     >     /usr/bin/qemu-img convert -p -t none -T none -f raw
> >     >   
> >   
>   
> /rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     >     -O raw
> >     >   
> >   
>   
> /rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
> >     >
> >     >     Any idea how to make it work faster or what
> throughput should I
> >     >     expected?
> >     >
> >     >
> >     > gluster storage operations are using fuse mount - so
> every write:
> >     > - travel to the kernel
> >     > - travel back to the gluster fuse helper process
> >     > - travel to all 3 replicas - replication is done on
> client side
> >     > - return to kernel when all writes succeeded
> >     > - return to caller
> >     >
> >     > So gluster will never set any speed record.
> >     >
> >     > Additionally, you are copying from raw lv on FC -
> qemu-img cannot do
> >     > anything
> >     > smart and avoid copying unused clusters. Instead if copies
> >     gigabytes of
> >     > zeros
> >     > from FC.
> >
> >     ok, it does make sense
>

Re: [ovirt-users] strange problems with engine

2017-11-29 Thread alex
Hi, Didi,

Yes, your advice with
--otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True
has helped, thanks )

23.11.2017, 21:42, "Yedidyah Bar David" :

> On Fri, Nov 17, 2017 at 3:53 PM,  wrote:
>> hi! I have caught a problem. hosted-engine unexpectedly stopped working,
>> with this error in the log:
>>
>> 2017-11-14 01:47:02,628+09 ERROR
>> [org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean]
>> (ServerService Thread Pool -- 54) [7ce64f75-6eee-4d8c-9693-521cf0c6cebb]
>> Failed to initialize backend: org.jboss.weld.exceptions.WeldException:
>> WELD-49: Unable to invoke private void
>> org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.init()
>>
>> I had tried to restore HE from a backup, but that was also unsuccessful:
>>
>> Failed to execute stage 'Setup validation': Hosted Engine setup detected,
>> but Global Maintenance is not set.
>>
>> This error is incorrect, because:
>>
>> hosted-engine --vm-status | grep MAIN
>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>
>> Please tell me how it can be fixed? )
>
> I think I talked with you on OFTC #ovirt, right? Any progress with this?
>
> Regards,
> --
> Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
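
For reference, the override alex mentions would presumably be passed to engine-setup roughly like this (hypothetical invocation; verify it against your oVirt version and restore procedure before relying on it):

# allow engine-setup to continue on a hosted-engine VM, skipping the setup
# validation that failed above ("Global Maintenance is not set")
engine-setup --otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True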