[ovirt-users] Re: Remove obsolete Gluster hyperconverged doc

2022-02-14 Thread Leo David
HCI-architected virtualisation environments provide quite a lot of benefits
for different implementation scenarios. From a rough point of view, they
combine three main components in a horizontal scale-out fashion (compute,
storage, networking).
As a starting point, I think you may want to have a look at this:
https://storpool.com/blog/is-hyper-converged-infrastructure-what-you-need/

Regards,

Leo


On Mon, Feb 14, 2022, 08:00 Pascal DeMilly  wrote:

> What advantages does oVirt in hyperconverged mode offer over using GlusterFS
> on a separate stack, unrelated to oVirt except as a storage domain? I am
> looking into moving our NFS server to a distributed, redundant solution.
> What is the best, most reliable, fastest solution I could build that oVirt
> can use but doesn't manage? Or is it necessary to let oVirt manage its
> domains as well?
>
> TIA
>
> On Sun, Feb 13, 2022 at 3:00 PM Strahil Nikolov via Users 
> wrote:
>
>> I'm not so sure. Usually Gluster is used in hyperconverged scenarios.
>> Ceph is more demanding, and I would calculate my resources several times
>> before considering it in a hyperconverged setup.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mon, Feb 14, 2022 at 0:01, Leo David
>>  wrote:
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6R3NNBDKIEW5BRRQ7HTSQKFTP5MYWMRN/


[ovirt-users] Re: Remove obsolete Gluster hyperconverged doc

2022-02-13 Thread Leo David
Native Ceph usage (no iSCSI or any other abstraction layer) will do the job
and elevate the entire thing to the next level.

On Sun, Feb 13, 2022, 17:51 less foobar via Users  wrote:

> I quote you from the https://bugzilla.redhat.com/show_bug.cgi?id=2016359
> Sandro Bonazzola 2022-02-07 06:42:27 UTC
>
> (In reply to Nir Soffer from comment #4)
> Is this relevant to oVirt? Do we plan to deprecate Gluster usage in oVirt?
>
> The Gluster upstream website doesn't mention any intention to stop
> development, so I think oVirt is not affected by the RHGS EOL.
>
> So the question now is: what changed?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KEA2VRL76ELF74SXHDIIG7VMNC5NXJ2I/


[ovirt-users] Re: RHGS and RHV closing down: could you please put that on the home page?

2022-02-04 Thread Leo David
Maybe it's the perfect time to bring Ceph into the discussion (again).

Leo


On Fri, Feb 4, 2022, 18:21 Thomas Hoberg  wrote:

> With Gluster gone, you could still use SAN and NFS storage, just like
> before they tried to compete with Nutanix and vSphere.
>
> Can you imagine IBM sponsoring oVirt, which doesn't make any money without
> RHV, which evidently isn't profitable enough?
>
> Most likely oVirt will lead RHV, in this case to the scrapyard, by months
> if not years.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GAWGUIQ7A36G6OPZDBCXNTCKAI3LJEXJ/


[ovirt-users] Re: oVirt + Proxmox Backup Server

2021-01-28 Thread Leo David
Hi,
I think that as long as you can get the PBS client installed on your nodes,
you can set up a functional CLI-based backup strategy.
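
Something along these lines should work - just a sketch, not tested against
oVirt nodes specifically; the repository string, datastore name and the disk
path below are made-up examples:

  # point the client at a PBS datastore (user@realm@host:datastore)
  export PBS_REPOSITORY='backup@pbs@pbs.example.com:backups'
  export PBS_PASSWORD='...'

  # back up one VM disk file as an .img archive
  proxmox-backup-client backup \
      vm-disk.img:/rhev/data-center/mnt/<storage>/<sd_uuid>/images/<img_uuid>/<vol_uuid> \
      --backup-type vm --backup-id my-vm-01

  # list what is already stored in the datastore
  proxmox-backup-client list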
Cheers,

Leo

On Thu, Jan 28, 2021, 21:17  wrote:

> So... I would like to know if it works, hehe.
>
> It seems to me to be a very interesting solution, and it is open source.
>
> But they don't say whether it works with KVM or has any other integrations.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AJVZNZVQGJBCMMBYZ3TJBSUYTGCDPAEW/


[ovirt-users] Re: Upgrade hyperconverged self-hosted ovirt from 4.3 to 4.4

2020-11-09 Thread Leo David
Hi,
I am interested in these steps too, for a clean and straightforward
procedure. Although this plan looks pretty good, I am still wondering about
a few things:

Step 4
Backup all gluster config files
- could you please let me know the exact location(s) of the files to be
backed up?
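
My own guess (not an official list - please correct me if something is
missing) is that backing up the glusterd state and daemon config would look
roughly like this on each node, assuming a standard RPM-based install:

  # volume definitions, peer info and daemon config - not the brick data itself
  tar czf /root/gluster-config-$(hostname -s)-$(date +%F).tar.gz \
      /etc/glusterfs \
      /var/lib/glusterd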

Step 6
Install glusterfs, restore the configs from step 4
- would the configs work with the new version?
- would Gluster, in theory, get back to its previous healthy state?
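
For the second point, I suppose the recovery could be verified with
something like the following once glusterd is back up (volume name "data"
is just an example):

  gluster peer status            # all peers should be connected
  gluster volume status data     # all bricks should report Online: Y
  gluster volume heal data info  # pending heal entries should drop to 0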

Step 9

Deploy the new HE on a new Gluster Volume, using the backup/restore
procedure for HE.
- this assumes first creating a new volume based on some additional new
disks or LVs, right?

Sorry if I'm missing something due to my lack of knowledge.
Cheers,

Leo

On Mon, Nov 9, 2020, 17:40 Strahil Nikolov via Users 
wrote:

> Hi ,
>
> I haven't done it yet, but I'm planning to do it.
> As I haven't tested the following, I can't guarantee that it will work:
> 0. Gluster snapshots on all volumes
> 1. Set a node in maintenance
> 2. Create a full backup of the engine
> 3. Set global maintenance and power off the current engine
> 4. Backup all gluster config files
> 5. Reinstall the node that was set to maintenance (step 1)
> 6. Install glusterfs, restore the configs from step 4
> 7. Restart glusterd and check that all bricks are up
> 8. Wait for healing to end
> 9. Deploy the new HE on a new Gluster Volume, using the backup/restore
> procedure for HE
> 10.Add the other nodes from the oVirt cluster
> 11.Set EL7-based hosts to maintenance and power off
> 12.Repeat steps 4-8 for the second host (step 11)
> ...
> In the end, you can bring the Cluster Level up to 4.4 and enjoy...
>
>
> Yet, this is just theory :)
>
> Best Regards,
> Strahil Nikolov
>
> Keep in mind that the Gluster snapshot feature allows you to revert
>
>
>
>
>
>
> On Monday, 9 November 2020, 08:19:23 GMT+2,  wrote:
>
>
>
>
>
> Hi,
> has anyone attempted an upgrade from 4.3 to 4.4 in a hyperconverged
> self-hosted setup?
> The posted guidelines seem a bit contradictory and incomplete.
> Has anyone tried it and could share their experience? I am currently having
> problems when deploying the hosted engine and restoring. The host becomes
> unresponsive and has hung tasks.
>
> Kind regards,
>
> Ralf
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q62ZG7JLWP4XYQXLZGEESLQQCSIRVVR7/


[ovirt-users] Re: Hyperconverged setup questions

2020-02-29 Thread Leo David
Hi,
As a first setup, you can go with a 3-node HCI and have the data volume in a
replica 3 setup.
Afterwards, if you want to expand the HCI (compute and storage together), you
can add sets of 3 nodes, and the data volume will automatically become
distributed-replicated. You can safely add sets of 3 nodes, up to 12 nodes
per HCI cluster.
You can also add "compute only" nodes without extending the storage; this
can be done by adding nodes one by one.
As an example, I have an implementation with 3 hyperconverged nodes forming
a replica 3 volume, and I later added a 4th node to the cluster which only
adds RAM and CPU, while consuming storage from the existing 3-node volume.
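
Just to illustrate what the storage expansion boils down to at the Gluster
level (normally this is driven from the oVirt UI rather than by hand; the
volume name and brick paths below are only examples):

  gluster volume add-brick data replica 3 \
      node4:/gluster_bricks/data/data \
      node5:/gluster_bricks/data/data \
      node6:/gluster_bricks/data/data
  gluster volume rebalance data start   # optionally spread existing files onto the new bricks
  gluster volume info data              # Type should now read Distributed-Replicate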
Hope this helps.
Cheers,

Leo


On Fri, Feb 28, 2020, 15:25 Vrgotic, Marko 
wrote:

> Hi Strahil,
>
>
>
> I circled back on your reply while ago regarding oVirt Hyperconverged and
> more than 3 nodes in cluster:
>
>
>
> “Hi Marko, I guess  you can use distributed-replicated volumes  and
> oVirt  cluster with host triplets.”
>
> Initially I understood that it's limited to 3 nodes max per HC cluster, but
> now, reading the documentation further
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Maintenance_and_Upgrading_Resources.html
> that does not look like it.
>
>
>
> Would you be so kind to give me an example or clarify what you meant by “*you
> can use distributed-replicated volumes  and oVirt  cluster with host
> triplets.*” ?
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> -
>
> kind regards/met vriendelijke groeten
>
>
>
> Marko Vrgotic
> ActiveVideo
>
>
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Friday, 11 October 2019 at 08:49
> *To: *Strahil 
> *Cc: *users 
> *Subject: *Re: [ovirt-users] Hyperconverged setup questions
>
>
>
> Hi Strahil,
>
>
>
> Thank you.
>
> One maybe stupid question, but significant to me:
>
> Considering I haven't played with a hyperconverged setup in oVirt before,
> is this something I can do from the Cockpit UI, or does it require me to
> first set up GlusterFS on the hosts before doing anything via the oVirt API
> or web interface?
>
>
>
> Kindly awaiting your reply.
>
>
>
> Marko
>
>
>
> Sent from my iPhone
>
>
>
> On 11 Oct 2019, at 06:14, Strahil  wrote:
>
> Hi Marko,
>
> I guess  you can use distributed-replicated volumes  and oVirt  cluster
> with host triplets.
>
> Best Regards,
> Strahil Nikolov
>
> On Oct 10, 2019 15:30, "Vrgotic, Marko"  wrote:
>
> Dear oVirt,
>
>
>
> Is it possible to add an oVirt 3-host/Gluster hyperconverged cluster to an
> existing oVirt setup? I need this to achieve local-storage performance, but
> still have a pool of hypervisors available.
>
> Is it possible to have more than 3 hosts in a hyperconverged setup?
>
>
>
> I currently have 1 shared cluster (NFS-based storage, where the SHE is also
> hosted) and 2 local-storage clusters.
>
>
>
> The oVirt version currently running is 4.3.4.
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
> *ActiveVideo*
>
>
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CJ5IAHCMNU3KSYUR3MCD2NNJTDEIHRNX/


[ovirt-users] Re: Delete snapshots task hung

2019-10-16 Thread Leo David
Thank you for the help, Strahil.
Although there were 4 images with status 4 in the database and I ran the
update query on them, I get the same bloody message and the VMs won't start.
Eventually, I've decided to delete the VMs and do a from-scratch
installation. The persistent OpenShift VMs are still OK, so I should be able
to reuse the volumes somehow.
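
For anyone landing on this thread later, the query sequence from Strahil's
message (quoted below) looked roughly like this on my side - a sketch only;
the original leaves the disk-specific condition of the UPDATE out, so a
placeholder is used here:

  su - postgres
  source /opt/rhn/postgresql10/enable   # adjust to the postgresql SCL path on your engine
  psql engine

  -- inside psql:
  SELECT image_group_id, imagestatus FROM images WHERE imagestatus = 4;
  UPDATE images SET imagestatus = 1
      WHERE imagestatus = 4 AND image_group_id = '<disk id>';
  COMMIT;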
This is why a subscription is sometimes good, when there is a lack of
knowledge on my side. Production systems should not rely on upstream
projects unless there is a strong understanding of the product.
Again, thank you so much for trying to help me out!
Cheers,

Leo

On Tue, Oct 15, 2019, 07:00 Leo David  wrote:

> Thank you Strahil,
> I'll proceed with these steps and come back to you.
> Cheers,
>
> Leo
>
> On Tue, Oct 15, 2019, 06:45 Strahil  wrote:
>
>> Have you checked this thread :
>> https://lists.ovirt.org/pipermail/users/2016-April/039277.html
>>
>> You can switch to the postgres user, then 'source
>> /opt/rhn/postgresql10/enable' & then 'psql engine'.
>>
>> As per the thread you can find illegal snapshots via 'select
>> image_group_id,imagestatus from images where imagestatus =4;'
>>
>> And then update them via 'update images set imagestatus =1 where
>> imagestatus = 4 and ;' 'commit'
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Oct 13, 2019 15:45, Leo David  wrote:
>>
>> >
>> > Hi Everyone,
>> > Im still not being able to start the vms... Could anyone give me an
>> advice on sorign this out ?
>> > Still having th "Bad volume specification" error,  although the disk is
>> present on the storage.
>> > This issue would force me to reinstall a 10 nodes Openshift cluster
>> from scratch,  which would not be so funny..
>> > Thanks,
>> >
>> > Leo.
>> >
>> > On Fri, Oct 11, 2019 at 7:12 AM Strahil  wrote:
>>
>> >>
>> >> Nah...
>> >> It's done directly on the DB and I wouldn't recommend such action for
>> Production Cluster.
>> >> I've done it only once and it was based on some old mailing lists.
>> >>
>> >> Maybe someone from the dev can assist?
>> >>
>> >> On Oct 10, 2019 13:31, Leo David  wrote:
>>
>> >>>
>> >>> Thank you Strahil,
>> >>> Could you tell me what do you mean by changing status ? Is this
>> something to be done in the UI ?
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Leo
>> >>>
>> >>> On Thu, Oct 10, 2019, 09:55 Strahil  wrote:
>>
>> >>>>
>> >>>> Maybe you can change the status of the VM in order the engine to
>> know that it has to blockcommit the snapshots.
>> >>>>
>> >>>> Best Regards,
>> >>>> Strahil Nikolov
>> >>>>
>> >>>> On Oct 9, 2019 09:02, Leo David  wrote:
>>
>> >>>>>
>> >>>>> Hi Everyone,
>> >>>>> Please let me know if any thoughts or recommandations that could
>> help me solve this issue..
>> >>>>> The real bad luck in this outage is that these 5 vms are part on an
>> Openshift deployment,  and now we are not able to start it up...
>> >>>>> Before trying to sort this at ocp platform level by replacing the
>> failed nodes with new vms, I would rather prefer to do it at the oVirt
>> level and have the vms starting since the disks are still present on
>> gluster.
>> >>>>> Thank you so much !
>> >>>>>
>> >>>>>
>> >>>>> Leo
>>
>> >
>> >
>> >
>> > --
>> > Best regards, Leo David
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WYE2EO4AOCTWK4EWGMDQ7KSTF3M6JR6Q/


[ovirt-users] Re: Delete snapshots task hung

2019-10-14 Thread Leo David
Thank you Strahil,
I'll proceed with these steps and come back to you.
Cheers,

Leo

On Tue, Oct 15, 2019, 06:45 Strahil  wrote:

> Have you checked this thread :
> https://lists.ovirt.org/pipermail/users/2016-April/039277.html
>
> You can switch to the postgres user, then 'source /opt/rhn/postgresql10/enable'
> & then 'psql engine'.
>
> As per the thread you can find illegal snapshots via 'select
> image_group_id,imagestatus from images where imagestatus =4;'
>
> And then update them via 'update images set imagestatus =1 where
> imagestatus = 4 and ;' 'commit'
>
> Best Regards,
> Strahil Nikolov
>
> On Oct 13, 2019 15:45, Leo David  wrote:
>
> >
> > Hi Everyone,
> > Im still not being able to start the vms... Could anyone give me an
> advice on sorign this out ?
> > Still having th "Bad volume specification" error,  although the disk is
> present on the storage.
> > This issue would force me to reinstall a 10 nodes Openshift cluster from
> scratch,  which would not be so funny..
> > Thanks,
> >
> > Leo.
> >
> > On Fri, Oct 11, 2019 at 7:12 AM Strahil  wrote:
>
> >>
> >> Nah...
> >> It's done directly on the DB and I wouldn't recommend such action for
> Production Cluster.
> >> I've done it only once and it was based on some old mailing lists.
> >>
> >> Maybe someone from the dev can assist?
> >>
> >> On Oct 10, 2019 13:31, Leo David  wrote:
>
> >>>
> >>> Thank you Strahil,
> >>> Could you tell me what do you mean by changing status ? Is this
> something to be done in the UI ?
> >>>
> >>> Thanks,
> >>>
> >>> Leo
> >>>
> >>> On Thu, Oct 10, 2019, 09:55 Strahil  wrote:
>
> >>>>
> >>>> Maybe you can change the status of the VM in order the engine to know
> that it has to blockcommit the snapshots.
> >>>>
> >>>> Best Regards,
> >>>> Strahil Nikolov
> >>>>
> >>>> On Oct 9, 2019 09:02, Leo David  wrote:
>
> >>>>>
> >>>>> Hi Everyone,
> >>>>> Please let me know if any thoughts or recommandations that could
> help me solve this issue..
> >>>>> The real bad luck in this outage is that these 5 vms are part on an
> Openshift deployment,  and now we are not able to start it up...
> >>>>> Before trying to sort this at ocp platform level by replacing the
> failed nodes with new vms, I would rather prefer to do it at the oVirt
> level and have the vms starting since the disks are still present on
> gluster.
> >>>>> Thank you so much !
> >>>>>
> >>>>>
> >>>>> Leo
>
> >
> >
> >
> > --
> > Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AUXZBKKV6UNK4V5FZQFV3LRQRJIZT7EN/


[ovirt-users] Re: Delete snapshots task hung

2019-10-13 Thread Leo David
Hi Everyone,
I'm still not able to start the VMs... Could anyone give me advice on
sorting this out?
I'm still getting the "Bad volume specification" error, although the disk is
present on the storage.
This issue would force me to reinstall a 10-node OpenShift cluster from
scratch, which would not be so funny..
Thanks,

Leo.

On Fri, Oct 11, 2019 at 7:12 AM Strahil  wrote:

> Nah...
> It's done directly on the DB and I wouldn't recommend such action for
> Production Cluster.
> I've done it only once and it was based on some old mailing lists.
>
> Maybe someone from the dev can assist?
> On Oct 10, 2019 13:31, Leo David  wrote:
>
> Thank you Strahil,
> Could you tell me what do you mean by changing status ? Is this something
> to be done in the UI ?
>
> Thanks,
>
> Leo
>
> On Thu, Oct 10, 2019, 09:55 Strahil  wrote:
>
> Maybe you can change the status of the VM in order the engine to know that
> it has to blockcommit the snapshots.
>
> Best Regards,
> Strahil Nikolov
> On Oct 9, 2019 09:02, Leo David  wrote:
>
> Hi Everyone,
> Please let me know if any thoughts or recommandations that could help me
> solve this issue..
> The real bad luck in this outage is that these 5 vms are part on an
> Openshift deployment,  and now we are not able to start it up...
> Before trying to sort this at ocp platform level by replacing the failed
> nodes with new vms, I would rather prefer to do it at the oVirt level and
> have the vms starting since the disks are still present on gluster.
> Thank you so much !
>
>
> Leo
>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NFBFOA6LF3JI4CMUO66D5W2I534B5HBP/


[ovirt-users] Re: Delete snapshots task hung

2019-10-10 Thread Leo David
Thank you Strahil,
Could you tell me what you mean by changing the status? Is this something
to be done in the UI?

Thanks,

Leo

On Thu, Oct 10, 2019, 09:55 Strahil  wrote:

> Maybe you can change the status of the VM in order the engine to know that
> it has to blockcommit the snapshots.
>
> Best Regards,
> Strahil Nikolov
> On Oct 9, 2019 09:02, Leo David  wrote:
>
> Hi Everyone,
> Please let me know if any thoughts or recommandations that could help me
> solve this issue..
> The real bad luck in this outage is that these 5 vms are part on an
> Openshift deployment,  and now we are not able to start it up...
> Before trying to sort this at ocp platform level by replacing the failed
> nodes with new vms, I would rather prefer to do it at the oVirt level and
> have the vms starting since the disks are still present on gluster.
> Thank you so much !
>
>
> Leo
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SCMHKZO5EEOLNTJ7RPYGZBDE3W5C6NCT/


[ovirt-users] Re: Delete snapshots task hung

2019-10-09 Thread Leo David
Hi Everyone,
Please let me know if you have any thoughts or recommendations that could
help me solve this issue.
The real bad luck in this outage is that these 5 VMs are part of an
OpenShift deployment, and now we are not able to start it up...
Before trying to sort this out at the OCP platform level by replacing the
failed nodes with new VMs, I would rather do it at the oVirt level and have
the VMs start, since the disks are still present on Gluster.
Thank you so much !


Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MJZA5ZX4WR7QD5BFO57ATTEOQCWS3MRF/


[ovirt-users] Re: Delete snapshots task hung

2019-10-08 Thread Leo David
Thank you Strahil,
But the VMs are not starting at all...
The error is clear: "Exit message: Bad volume specification", but I just do
not understand how to deal with this.

Cheers,

Leo

On Tue, Oct 8, 2019 at 2:44 PM Strahil  wrote:

> Try to migrate a VM from one host to another.
> I had a similar issue (1000 warnings in the UI) that stopped immediately
> after I migrated that VM.
>
> Best Regards,
> Strahil Nikolov
> On Oct 8, 2019 09:59, Leo David  wrote:
>
> Hi Everyone,
> I have been waiting for 3 days for 5 delete-snapshot tasks to finish, and
> for some reason they seem to be stuck. For other VMs, snapshot removal took
> at most 20 minutes, with the disks being pretty much the same size and
> having similar numbers of snapshots.
> Any thoughts on how I should get this fixed?
> Below are some lines from engine.log; it seems to show some complaints
> regarding locks (Failed to acquire lock and wait lock), although I am not
> sure if that's the root cause:
> Thank you very much !
>
> Leo
>
> 2019-10-08 09:52:48,692+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-47)
> [73016a4a-bb2f-487f-91c5-cd027b278930] Command
> 'RemoveSnapshotSingleDiskLive' (id: '341d9c1b-2915-48d6-a8a9-9146ab19d5f8')
> waiting on child command id: '329da0fd-801b-4e0d-b7c0-fbb5c2a98bb5'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:48,702+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-47)
> [73016a4a-bb2f-487f-91c5-cd027b278930] Command
> 'RemoveSnapshotSingleDiskLive' (id: '580fa033-35fd-44f0-9979-e60e9bbf8a29')
> waiting on child command id: 'c00bdeb6-2e8b-4ef8-a3dc-1aaa088ae052'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:49,713+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-50)
> [539ba19e-0cb5-42cf-9a23-7916ee2de4a9] Command
> 'RemoveSnapshotSingleDiskLive' (id: 'de747f91-ec59-4e70-9345-77e16234bfe0')
> waiting on child command id: '10812160-cf4c-4239-bb92-1d5a847687ee'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:50,725+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-100)
> [baed2fa3-bcad-43b2-8164-480598bc72f3] Command
> 'RemoveSnapshotSingleDiskLive' (id: '4919b287-e980-4d34-a219-c08a169cd8f7')
> waiting on child command id: '5eceb6a8-f08e-42aa-8258-c907f5927e6c'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:51,563+03 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler7) [306a2296] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[c6087b9e-2214-11e9-9288-00163e168814=GLUSTER]',
> sharedLocks=''}'
> 2019-10-08 09:52:51,583+03 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler7) [306a2296] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[c6087b9e-2214-11e9-9288-00163e168814=GLUSTER]',
> sharedLocks=''}'
> 2019-10-08 09:52:51,604+03 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler7) [306a2296] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[c6087b9e-2214-11e9-9288-00163e168814=GLUSTER]',
> sharedLocks=''}'
> 2019-10-08 09:52:51,606+03 INFO
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler7) [306a2296] Failed to acquire lock and wait lock
> 'EngineLock:{exclusiveLocks='[c6087b9e-2214-11e9-9288-00163e168814=GLUSTER]',
> sharedLocks=''}'
> 2019-10-08 09:52:51,735+03 INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-94)
> [73016a4a-bb2f-487f-91c5-cd027b278930] Command 'RemoveSnapshot' (id:
> 'c9ab1344-ae27-4934-9358-d6a7b10a4f0a') waiting on child command id:
> '341d9c1b-2915-48d6-a8a9-9146ab19d5f8' type:'RemoveSnapshotSingleDiskLive'
> to complete
> 2019-10-08 09:52:52,706+03 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalPhysicalVolumeListVDSCommand]
> (DefaultQuartzScheduler10) [8921c9c] FINISH,
> GetGlusterLocalPhysicalVolumeListVDSCommand, return:
> [org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalPhysicalVolume@21830b5f,
> org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalPhysicalVolume@676adc3e,
> org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalPhysicalVolume@385a3510,
> org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalPhysicalVolume@af24d00,
> org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalPhysicalVolume@33

[ovirt-users] Re: Delete snapshots task hung

2019-10-08 Thread Leo David
No, it's 4.2.8...

Thank you,

Leo

On Tue, Oct 8, 2019 at 1:29 PM Gianluca Cecchi 
wrote:

>
>
> On Tue, Oct 8, 2019 at 12:10 PM Leo David  wrote:
>
>> Now I'm in a worst position,  after unlocking "all" entities.  The tasks
>> are not present anymore,  the snapshots are not locked anymore,  but these
>> 5 vms are not able to start:
>>
>>
>> [snip]
>
>> Any ideea how should I proceed to have the vms to start ? I am kind of
>> stuck in this issue...
>> Thank you very much in advance,
>> Leo
>>
>>
> Are you on 4.3.6? If so, could it be related to the async announcement
> made some hours ago?
>
> Gianluca
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/THDKDGDUTEBXWVTKAXFCXPZSFJAXBKTH/


[ovirt-users] Re: Delete snapshots task hung

2019-10-08 Thread Leo David
Now I'm in a worse position, after unlocking "all" entities. The tasks
are not present anymore and the snapshots are not locked anymore, but these
5 VMs are not able to start:

2019-10-08 12:54:25,088+03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-7) [] EVENT_ID: VM_DOWN_ERROR(119), VM
openshift-04-os-infra-1 is down with error. Exit message: Bad volume
specification {'protocol': 'gluster', 'address': {'function': '0x0', 'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'slot': '0x06'}, 'serial':
'eb6331b3-ec4f-4b8e-a1cd-cb763ada9f6a', 'index': 0, 'iface': 'virtio',
'apparentsize': '29933240320', 'specParams': {'pinToIoThread': '1'},
'cache': 'none', 'imageID': 'eb6331b3-ec4f-4b8e-a1cd-cb763ada9f6a',
'truesize': '30025396224', 'type': 'disk', 'domainID':
'97ced32d-bdb9-4913-a272-8a4a83ca3d1b', 'reqsize': '0', 'format': 'cow',
'poolID': 'c604b50e-2214-11e9-b449-00163e168814', 'device': 'disk', 'path':
'ssd-samsung-evo860/97ced32d-bdb9-4913-a272-8a4a83ca3d1b/images/eb6331b3-ec4f-4b8e-a1cd-cb763ada9f6a/f8b86437-e54c-4728-8100-ed05ef312212', 'propagateErrors': 'off', 'name':
'vda', 'bootOrder': '1', 'volumeID':
'f8b86437-e54c-4728-8100-ed05ef312212', 'diskType': 'network', 'alias':
'ua-eb6331b3-ec4f-4b8e-a1cd-cb763ada9f6a', 'hosts': [{'name':
'192.168.80.191', 'port': '0'}], 'discard': False}.

So I have checked the image file; it seems to be present and healthy:

# cd /rhev/data-center/mnt/glusterSD/192.168.80.191:_ssd-samsung-evo860/97ced32d-bdb9-4913-a272-8a4a83ca3d1b/images/eb6331b3-ec4f-4b8e-a1cd-cb763ada9f6a

# qemu-img info f8b86437-e54c-4728-8100-ed05ef312212
image: f8b86437-e54c-4728-8100-ed05ef312212
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 28G
cluster_size: 65536
backing file: 4de5eae5-ab25-4a9e-a41a-2f7c2d8b272f
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

# qemu-img check f8b86437-e54c-4728-8100-ed05ef312212
No errors were found on the image.
456643/819200 = 55.74% allocated, 5.36% fragmented, 0.00% compressed
clusters
Image end offset: 29933240320

# ls -l
total 62430134
-rw-rw. 1 vdsm kvm 33863434240 Oct  7 09:40 4de5eae5-ab25-4a9e-a41a-2f7c2d8b272f
-rw-rw. 1 vdsm kvm 1048576 Jun 27 13:10 4de5eae5-ab25-4a9e-a41a-2f7c2d8b272f.lease
-rw-r--r--. 1 vdsm kvm 338 Oct  6 19:01 4de5eae5-ab25-4a9e-a41a-2f7c2d8b272f.meta
-rw-rw. 1 vdsm kvm 29933240320 Oct  6 19:00 f8b86437-e54c-4728-8100-ed05ef312212
-rw-rw. 1 vdsm kvm 1048576 Jul  1 22:56 f8b86437-e54c-4728-8100-ed05ef312212.lease
-rw-r--r--. 1 vdsm kvm 271 Oct  6 19:01 f8b86437-e54c-4728-8100-ed05ef312212.meta


Any idea how I should proceed to get the VMs to start? I am kind of stuck
on this issue...
Thank you very much in advance,
Leo



On Tue, Oct 8, 2019 at 9:59 AM Leo David  wrote:

> Hi Everyone,
> I'm waiting since 3 days for 5 x delete snapshot tasks to finish, and for
> some reason it seems to be stucked.For other vms snapshot removal took at
> most 20 mins, with havin the disks pretty much same size,  and snapshots
> numbers.
> Any thoughts on how should I get this fixed ?
> Below,  some lines from the engine.log, and it seems to show some
> complains regarding locks ( Failed to acquire lock and wait lock) ,
> although I am not sure if thats the root cause:
> Thank you very much !
>
> Leo
>
> 2019-10-08 09:52:48,692+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-47)
> [73016a4a-bb2f-487f-91c5-cd027b278930] Command
> 'RemoveSnapshotSingleDiskLive' (id: '341d9c1b-2915-48d6-a8a9-9146ab19d5f8')
> waiting on child command id: '329da0fd-801b-4e0d-b7c0-fbb5c2a98bb5'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:48,702+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-47)
> [73016a4a-bb2f-487f-91c5-cd027b278930] Command
> 'RemoveSnapshotSingleDiskLive' (id: '580fa033-35fd-44f0-9979-e60e9bbf8a29')
> waiting on child command id: 'c00bdeb6-2e8b-4ef8-a3dc-1aaa088ae052'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:49,713+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-50)
> [539ba19e-0cb5-42cf-9a23-7916ee2de4a9] Command
> 'RemoveSnapshotSingleDiskLive' (id: 'de747f91-ec59-4e70-9345-77e16234bfe0')
> waiting on child command id: '10812160-cf4c-4239-bb92-1d5a847687ee'
> type:'DestroyImage' to complete
> 2019-10-08 09:52:50,725+03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-100)
> [baed2fa3-bcad-43b2-8164-480598bc72f3] Command

[ovirt-users] Delete snapshots task hung

2019-10-08 Thread Leo David
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-90)
[4f2bedf7-b9b3-481e-936f-e180803ac1b8] Command
'RemoveSnapshotSingleDiskLive' (id: '66094952-88e3-41d9-9191-7ef8873de511')
waiting on child command id: '122be070-d45f-4e6b-bb61-4943a096b88b'
type:'DestroyImage' to complete
2019-10-08 09:52:57,792+03 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-90)
[baed2fa3-bcad-43b2-8164-480598bc72f3] Command 'RemoveSnapshot' (id:
'7aafe2fe-e05b-44bc-a716-c48734a8c2de') waiting on child command id:
'4919b287-e980-4d34-a219-c08a169cd8f7' type:'RemoveSnapshotSingleDiskLive'
to complete
2019-10-08 09:52:57,799+03 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-90)
[825c211d-b40d-45d7-9dc7-e5c6af28a269] Command 'RemoveSnapshot' (id:
'cb6e7a1d-1902-4e61-9c3d-e541cf4c6348') waiting on child command id:
'5e9f9b03-4df9-4e31-b2e1-5efaa4fcf66c' type:'RemoveSnapshotSingleDiskLive'
to complete
2019-10-08 09:52:58,345+03 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
(DefaultQuartzScheduler10) [8921c9c] FINISH,
GetGlusterLocalLogicalVolumeListVDSCommand, return:
[org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@65406163,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@1207b8a9,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@33f93de1,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@6d2ae59d,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@b07ca22,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@246ae91b,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@5e86ce42,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@643ade02,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@51d95e7d,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@2288576c,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@4a9469a1,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@78ec643f,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@63dde7e9,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@30effc9,
org.ovirt.engine.core.common.businessentities.gluster.GlusterLocalLogicalVolume@6a3e4768],
log id: 449353d0
2019-10-08 09:52:58,346+03 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalPhysicalVolumeListVDSCommand]
(DefaultQuartzScheduler10) [8921c9c] START,
GetGlusterLocalPhysicalVolumeListVDSCommand(HostName =
host-dell-2.domain.int,
VdsIdVDSCommandParametersBase:{hostId='f598de28-296a-46ce-8b8d-6f19a2c892e2'}),
log id: 71680b8c
2019-10-08 09:52:58,807+03 INFO
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-81)
[73016a4a-bb2f-487f-91c5-cd027b278930] Command
'RemoveSnapshotSingleDiskLive' (id: '341d9c1b-2915-48d6-a8a9-9146ab19d5f8')
waiting on child command id: '329da0fd-801b-4e0d-b7c0-fbb5c2a98bb5'
type:'DestroyImage' to complete
2019-10-08 09:52:58,817+03 INFO
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-81)
[73016a4a-bb2f-487f-91c5-cd027b278930] Command
'RemoveSnapshotSingleDiskLive' (id: '580fa033-35fd-44f0-9979-e60e9bbf8a29')
waiting on child command id: 'c00bdeb6-2e8b-4ef8-a3dc-1aaa088ae052'
type:'DestroyImage' to complete



-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4VHKWGVBVEGJM2I236MN6G2VJJBKHJ5B/


[ovirt-users] Re: VDI

2019-10-07 Thread Leo David
Thank you very much for the information.
I am sorry for my lack of knowledge, but the vGPU / VM graphical performance
part is a bit confusing for me at the moment.
As far as I've understood, if some really decent graphical performance is
needed for the VMs, the only proper way is to install a GRID-supported GPU
card in the hypervisor and have the "slices" assigned to the VMs.
A rough look shows that this will add at least 4k-5k dollars per node...
Just as a reference, this is only about "office type" VDIs that do web
browsing and sometimes YouTube videos / Skype calls.
Any thoughts on a less expensive way to achieve this?

Again, thank you very much !

On Sun, Oct 6, 2019 at 12:22 PM Alex McWhirter  wrote:

> We use customized versions of spice / kvm. Same versions ovirt ships with
> for compatibility reasons, with audio patches on the kvm side and spice
> patches for vp8 encoding the video streams. We've been meaning to make the
> repo for our custom patched versions public for a while, if you are
> interested i can accelerate that. Note, you also need patched versions of
> spice client if yours wasn't build with vp8 support, we have those too.
>
> On the experimental side we have another in-progress set of patches that
> enable h264 encoding in spice, hardware accelerated with AMD W5100's, but
> this requires a lot of new software to be installed and a new kernel,
> CentOS 8 should fix most of that, so we'll probably re-base and release on
> that when the time is right.
>
> The GPU's are not used for the guests at all, we only use them for the
> h264 encoding. AMD was picked to avoid proprietary drivers and stream
> limits. No RAM / SR-IOV needed, if you want 3d support you will be looking
> more for Nvidia-GRID.
>
>
> Anyways, with just the patched software installed and some custom
> settings, video playback is about 95% the quality of native, takes about
> 40mbit/s per client to stream it. Audio has the occasional stutter, but
> it's not bad.
>
>
> On 2019-10-06 05:09, Leo David wrote:
>
> Thank you for sharing the informations Alex, they are very helpfull. I am
> now able to get sound from the vms,  although performance is pretty poor
> even with "adjust for performance" setting in Win10. Cannot even talk
> about youtube video playing - freezing and crackling.
> Could you please be so kind to share the following infos:
> 1. Have you upgraded the "spice-server" installed on the hosts with a
>  newer version than 1.4.0 ? If so,  could you provide me how could I get
> these packages ?
> 2. What graphic card have you used for getting better graphic performance
> with the vms ? Im trying to understand what "accepted" card could I use
> with my 1U chassis servers...
> 3. Is it only needed to install the card and the platform will alocate
> physical video memory to "desktop" vms ? (  Will the card RAM
> be automatically shared across the desktop tyoe vms running on top  of the
> host ? )
> 4. Is it necesarilly to activate sr-iov in the hosts bios or any other
> platform configurations ?
>
> I am really sorry for asking too many things,  but im just trying to get
> these vdi vms working at least decent...
> Thank you so much !
>
> Leo
>
>
> -- Forwarded message -
> From: 
> Date: Tue, Sep 24, 2019 at 7:50 PM
> Subject: Re: [ovirt-users] Re: VDI
> To: Leo David 
>
>
> Audio should just work as long as the VM is of the desktop type.
>
> On Sep 24, 2019 6:50 AM, Leo David  wrote:
>
> Thank you Alex,
> When you say "gpu backed"  are you referring to sr-iov  to share same gpu
> to multiple vms ?
> Any thoughts regarding passing audio form the vm to the client ?
> Did you do any update of the spice-server on the hosts ?
>
> Thanks,
>
> Leo
>
> On Tue, Sep 24, 2019 at 12:01 PM  wrote:
>
> I believe a lot of the package updates in CentOS 8 will solve some of the
> issues.
>
> But for now we get around them by disabling all visual effects on our VMS.
> If you are gpu backing the VMS with something like Nvidia grid the issues
> are non existent, but for non gpu backed VMS currently disabling all the
> effects is a must.
>
> We deploy the changes via gpo directly to the registry, so they take
> effect on first VM boot.
>
> On Sep 24, 2019 2:03 AM, Leo David  wrote:
>
> Thank you Alex from my side as well, very usefull information. I am the
> middle of vdi implementation as well, and i'm having issues with the spice
> console since 4.2, and it seems that latest 4.3 is still having the
> problem.
> What am i confrunting is:
> - spice console is very slaggy and slow for Win10 vms ( not even talking
> about running videos..)
> - i

[ovirt-users] Re: VDI

2019-10-06 Thread Leo David
Thank you Alex,
If you could share the repos for the server and client patches, that would
be very helpful (also, some install guidance would be very good).
I could try them on the actual 4.2.8 setup that I'm trying to enable VDI on.
Cheers,

Leo

On Sun, Oct 6, 2019 at 12:22 PM Alex McWhirter  wrote:

> We use customized versions of spice / kvm. Same versions ovirt ships with
> for compatibility reasons, with audio patches on the kvm side and spice
> patches for vp8 encoding the video streams. We've been meaning to make the
> repo for our custom patched versions public for a while, if you are
> interested i can accelerate that. Note, you also need patched versions of
> spice client if yours wasn't build with vp8 support, we have those too.
>
> On the experimental side we have another in-progress set of patches that
> enable h264 encoding in spice, hardware accelerated with AMD W5100's, but
> this requires a lot of new software to be installed and a new kernel,
> CentOS 8 should fix most of that, so we'll probably re-base and release on
> that when the time is right.
>
> The GPU's are not used for the guests at all, we only use them for the
> h264 encoding. AMD was picked to avoid proprietary drivers and stream
> limits. No RAM / SR-IOV needed, if you want 3d support you will be looking
> more for Nvidia-GRID.
>
>
> Anyways, with just the patched software installed and some custom
> settings, video playback is about 95% the quality of native, takes about
> 40mbit/s per client to stream it. Audio has the occasional stutter, but
> it's not bad.
>
>
> On 2019-10-06 05:09, Leo David wrote:
>
> Thank you for sharing the informations Alex, they are very helpfull. I am
> now able to get sound from the vms,  although performance is pretty poor
> even with "adjust for performance" setting in Win10. Cannot even talk
> about youtube video playing - freezing and crackling.
> Could you please be so kind to share the following infos:
> 1. Have you upgraded the "spice-server" installed on the hosts with a
>  newer version than 1.4.0 ? If so,  could you provide me how could I get
> these packages ?
> 2. What graphic card have you used for getting better graphic performance
> with the vms ? Im trying to understand what "accepted" card could I use
> with my 1U chassis servers...
> 3. Is it only needed to install the card and the platform will alocate
> physical video memory to "desktop" vms ? (  Will the card RAM
> be automatically shared across the desktop tyoe vms running on top  of the
> host ? )
> 4. Is it necesarilly to activate sr-iov in the hosts bios or any other
> platform configurations ?
>
> I am really sorry for asking too many things,  but im just trying to get
> these vdi vms working at least decent...
> Thank you so much !
>
> Leo
>
>
> -- Forwarded message -
> From: 
> Date: Tue, Sep 24, 2019 at 7:50 PM
> Subject: Re: [ovirt-users] Re: VDI
> To: Leo David 
>
>
> Audio should just work as long as the VM is of the desktop type.
>
> On Sep 24, 2019 6:50 AM, Leo David  wrote:
>
> Thank you Alex,
> When you say "gpu backed"  are you referring to sr-iov  to share same gpu
> to multiple vms ?
> Any thoughts regarding passing audio form the vm to the client ?
> Did you do any update of the spice-server on the hosts ?
>
> Thanks,
>
> Leo
>
> On Tue, Sep 24, 2019 at 12:01 PM  wrote:
>
> I believe a lot of the package updates in CentOS 8 will solve some of the
> issues.
>
> But for now we get around them by disabling all visual effects on our VMS.
> If you are gpu backing the VMS with something like Nvidia grid the issues
> are non existent, but for non gpu backed VMS currently disabling all the
> effects is a must.
>
> We deploy the changes via gpo directly to the registry, so they take
> effect on first VM boot.
>
> On Sep 24, 2019 2:03 AM, Leo David  wrote:
>
> Thank you Alex from my side as well, very usefull information. I am the
> middle of vdi implementation as well, and i'm having issues with the spice
> console since 4.2, and it seems that latest 4.3 is still having the
> problem.
> What am i confrunting is:
> - spice console is very slaggy and slow for Win10 vms ( not even talking
> about running videos..)
> - i can't find a way to get audio from the vm
> At the moment i am running 4.3, latest virt-viewer installed on the
> client, and latest qxl-dod driver installed on the vm.
> Any thoughts on solving video performance and audio redirection ?
> Thank you again,
>
> Leo
>
> On Mon, Sep 23, 2019, 22:53 Alex McWhirter  wrote:
>
> To achieve that all you need to do is create a template of the desktop
> base vm, make sure the vm type

[ovirt-users] Re: VDI

2019-10-06 Thread Leo David
Thank you for sharing the information, Alex; it is very helpful. I am now
able to get sound from the VMs, although performance is pretty poor even
with the "adjust for performance" setting in Win10. I cannot even talk about
YouTube video playback - freezing and crackling.
Could you please be so kind as to share the following info:
1. Have you upgraded the "spice-server" installed on the hosts to a newer
version than 1.4.0? If so, could you tell me how I could get these packages?
2. What graphics card have you used to get better graphics performance in
the VMs? I'm trying to understand what "acceptable" card I could use with my
1U chassis servers...
3. Is it enough to install the card, and the platform will allocate physical
video memory to "desktop" VMs? (Will the card RAM be automatically shared
across the desktop-type VMs running on top of the host?)
4. Is it necessary to activate SR-IOV in the host's BIOS, or are any other
platform configurations needed?

I am really sorry for asking so many things, but I'm just trying to get
these VDI VMs working at least decently...
Thank you so much !

Leo


-- Forwarded message -
From: 
Date: Tue, Sep 24, 2019 at 7:50 PM
Subject: Re: [ovirt-users] Re: VDI
To: Leo David 


Audio should just work as long as the VM is of the desktop type.

On Sep 24, 2019 6:50 AM, Leo David  wrote:

Thank you Alex,
When you say "gpu backed"  are you referring to sr-iov  to share same gpu
to multiple vms ?
Any thoughts regarding passing audio form the vm to the client ?
Did you do any update of the spice-server on the hosts ?

Thanks,

Leo

On Tue, Sep 24, 2019 at 12:01 PM  wrote:

I believe a lot of the package updates in CentOS 8 will solve some of the
issues.

But for now we get around them by disabling all visual effects on our VMS.
If you are gpu backing the VMS with something like Nvidia grid the issues
are non existent, but for non gpu backed VMS currently disabling all the
effects is a must.

We deploy the changes via gpo directly to the registry, so they take effect
on first VM boot.

On Sep 24, 2019 2:03 AM, Leo David  wrote:

Thank you Alex from my side as well, very usefull information. I am the
middle of vdi implementation as well, and i'm having issues with the spice
console since 4.2, and it seems that latest 4.3 is still having the problem.
What am i confrunting is:
- spice console is very slaggy and slow for Win10 vms ( not even talking
about running videos..)
- i can't find a way to get audio from the vm
At the moment i am running 4.3, latest virt-viewer installed on the client,
and latest qxl-dod driver installed on the vm.
Any thoughts on solving video performance and audio redirection ?
Thank you again,

Leo

On Mon, Sep 23, 2019, 22:53 Alex McWhirter  wrote:

To achieve that all you need to do is create a template of the desktop base
vm, make sure the vm type is set to desktop. Afterwards just create new vms
from that template. As long as the VM type is set to desktop each new VM
will use a qcow overlay on top of the base image.

Taking this a step further you can then create VM pools from said template,
allowing users to dynamically be assigned a new VM on login. Granted pools
are usually stateless, so you need to have network file storage. We use
pools for windows 10 VDI instances, where we use sysprep to autojoin the
new pool vm to the domain where redirected folders are already setup.

For VDI only use spice protocol. By default we found spice to be semi
lackluster, so we do apply custom settings and we have recompiled spice on
both servers and clients with h264 support. This is not 100% necessary, but
makes things like youtube much more usable. We have also backported some
audio patches to KVM. CentOS 8 should resolve a lot of these customizations
that we've had to do.


As far as updating, pretty much. We create a VM from the template, update
it, then push it back as a new version of the template. The pools are set
to always use the latest template version. Users have to log out, then back
in to the VDI system in order to get the new image as logging out will
destroy the users current instance and create a new one on log in.


On 2019-09-23 15:16, Fabio Marzocca wrote:

Hi Alex, thanks for answering.

I am approaching and studying oVirt in order to propose the solution to a
customer as a replacement for a commercial solution they have now.
They only need Desktop virtualization.
Sorry for the silly question, but I can't find a way to deploy a VM
(template) to users as a "linked-clone", meaning that the users' image
still refers to the original image but modification are written (and
afterwards read) from a new location. This technique is called
Copy-on-write.
Can this be achieved with oVirt?


Then, what is the Best Practice to update WIndows OS for the all the users?
Currently they simply "check-out"  the Gold Image, update it and check-in,
while all users are ru

[ovirt-users] NFS based - incremental backups

2019-09-26 Thread Leo David
Hello Everyone,
I've been struggling for a while to find a proper solution for a full backup
scenario in a production environment.
In the past we used Proxmox, and the scheduled, incremental, NFS-based full
VM backups are something we really miss.
As far as I know, at this point the only way to have a backup in oVirt /
RHV is by using the Gluster geo-replication feature.
This is nice, but as far as I know it lacks some important features:
- the ability to have incremental backups to restore VMs from
- the ability to back up VMs placed on different storage domains (only one
storage domain can be geo-replicated! some VMs have disks on an SSD volume,
some on HDD, some on both)
- the need to set up an external 3-node Gluster cluster (although a
workaround would be to have single-brick volumes on a single instance)
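
For context, setting up geo-replication for a single volume looks roughly
like this (volume and host names are only examples, and a slave volume has
to exist on the remote side already) - and it has to be repeated per volume,
which is exactly the limitation above:

  gluster system:: execute gsec_create
  gluster volume geo-replication datavol backuphost::backupvol create push-pem
  gluster volume geo-replication datavol backuphost::backupvol start
  gluster volume geo-replication datavol backuphost::backupvol status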
I know we can create snapshots, but they will die with the platform in a
failure scenario, and they cannot be scheduled either.
We found the Bacchus project, which looked promising, although it had a
pretty convoluted way of achieving backups (create a snapshot, create a VM
from the snapshot, export the VM to the export domain, delete the VM, delete
the snapshot - all in a scheduled fashion).
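
The first step of that chain through the REST API would look roughly like
this (engine FQDN, credentials and the VM id are made up for illustration;
the export and delete steps are similar POST actions on the vm and snapshot
resources):

  curl -s -k -u 'admin@internal:PASSWORD' \
       -H 'Content-Type: application/xml' \
       -X POST \
       -d '<snapshot><description>nightly-backup</description><persist_memorystate>false</persist_memorystate></snapshot>' \
       'https://engine.example.com/ovirt-engine/api/vms/<vm-uuid>/snapshots'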
For comparison, Proxmox incrementally creates a tar archive of the VM disk
content and places it on external network storage such as NFS. This allowed
us to back up / restore both Linux and Windows VMs very easily.
Now, I know this has been discussed before, but I would like to know if
there are at least any plans to implement this feature in the next releases.
Personally, I consider this a major, and quite essential, feature to have in
the platform, without the need to pay for 3rd-party solutions that may or
may not achieve the goal while adding extra pieces to the stack.
Geo-replication is a good and nice feature, but in my opinion it is not what
a "backup domain" should be.
Have a nice day,

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QX7H2DNKJWMKNPX5V465V2CRSJS4IXQJ/


[ovirt-users] Re: VDI

2019-09-24 Thread Leo David
Thank you Alex from my side as well - very useful information. I am in the
middle of a VDI implementation as well, and I've been having issues with the
SPICE console since 4.2; it seems the latest 4.3 still has the problem.
What I am confronting is:
- the SPICE console is very laggy and slow for Win10 VMs (not even talking
about playing videos..)
- I can't find a way to get audio from the VM
At the moment I am running 4.3, with the latest virt-viewer installed on the
client and the latest QXL-DOD driver installed in the VM.
Any thoughts on solving the video performance and audio redirection issues?
Thank you again,

Leo

On Mon, Sep 23, 2019, 22:53 Alex McWhirter  wrote:

> To achieve that all you need to do is create a template of the desktop
> base vm, make sure the vm type is set to desktop. Afterwards just create
> new vms from that template. As long as the VM type is set to desktop each
> new VM will use a qcow overlay on top of the base image.
>
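( At the storage level this "linked clone" behaviour is plain qcow2
backing-file chaining; a quick standalone illustration, with made-up paths: )

# create a thin clone that only stores the differences from the golden image
qemu-img create -f qcow2 \
    -o backing_file=/var/lib/images/win10-golden.qcow2 \
    /var/lib/images/clone01.qcow2
# show the backing chain of the new overlay
qemu-img info /var/lib/images/clone01.qcow2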
> Taking this a step further you can then create VM pools from said
> template, allowing users to dynamically be assigned a new VM on login.
> Granted pools are usually stateless, so you need to have network file
> storage. We use pools for windows 10 VDI instances, where we use sysprep to
> autojoin the new pool vm to the domain where redirected folders are already
> setup.
>
> For VDI only use spice protocol. By default we found spice to be semi
> lackluster, so we do apply custom settings and we have recompiled spice on
> both servers and clients with h264 support. This is not 100% necessary, but
> makes things like youtube much more usable. We have also backported some
> audio patches to KVM. CentOS 8 should resolve a lot of these customizations
> that we've had to do.
>
>
> As far as updating, pretty much. We create a VM from the template, update
> it, then push it back as a new version of the template. The pools are set
> to always use the latest template version. Users have to log out, then back
> in to the VDI system in order to get the new image as logging out will
> destroy the users current instance and create a new one on log in.
>
>
> On 2019-09-23 15:16, Fabio Marzocca wrote:
>
> Hi Alex, thanks for answering.
>
> I am approaching and studying oVirt in order to propose the solution to a
> customer as a replacement for a commercial solution they have now.
> They only need Desktop virtualization.
> Sorry for the silly question, but I can't find a way to deploy a VM
> (template) to users as a "linked-clone", meaning that the users' image
> still refers to the original image but modifications are written (and
> afterwards read) from a new location. This technique is called
> Copy-on-write.
> Can this be achieved with oVirt?
>
>
> Then, what is the Best Practice to update Windows OS for all the
> users? Currently they simply "check-out"  the Gold Image, update it and
> check-in, while all users are running...
>
> Fabio
>
> On Mon, Sep 23, 2019 at 8:04 PM Alex McWhirter  wrote:
>
>> yes, we do. All spice, with some customizations done at source level for
>> spice / kvm packages.
>>
>>
>> On 2019-09-23 13:44, Fabio Marzocca wrote:
>>
>> Is there anyone who uses oVirt as a full VDI environment? I would have a
>> bunch of questions...
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/D44YB5VOKNBNCJSOMLKRAZBURFJLAOLM/
>>
>>
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DWY46KJGSYJGRXMRY7WWCINAW2Y5ETDI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VUANDTA34W6XBQZMDZRFMZC7CSCNBYCK/


[ovirt-users] HP NetXen Incorporated NX3031 driver

2019-09-07 Thread Leo David
Hi everyone,

I have this bloody card installed on a node,  and it seems that no driver
can be loaded for it,  "ip link sh"  does not show it.

Any recommendation about installing this network card on oVirt 4.2.8  ?

lspci -v



05:00.0 Ethernet controller: NetXen Incorporated NX3031 Multifunction
1/10-Gigabit Server Adapter (rev 42)
Subsystem: Hewlett-Packard Company NC522SFP Dual Port 10GbE Server
Adapter
Flags: fast devsel, IRQ 31
Memory at d9c0 (64-bit, non-prefetchable) [size=2M]
Memory at da00 (64-bit, non-prefetchable) [size=32M]
Expansion ROM at d800 [disabled] [size=64K]
Capabilities: [40] MSI-X: Enable- Count=64 Masked-
Capabilities: [80] Power Management version 3
Capabilities: [a0] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [c0] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [140] Device Serial Number 59-69-46-61-6e-48-73-75
Kernel modules: netxen_nic

05:00.1 Ethernet controller: NetXen Incorporated NX3031 Multifunction
1/10-Gigabit Server Adapter (rev 42)
Subsystem: Hewlett-Packard Company NC522SFP Dual Port 10GbE Server
Adapter
Flags: fast devsel, IRQ 33
Memory at d9e0 (64-bit, non-prefetchable) [size=2M]
Memory at dc00 (64-bit, non-prefetchable) [size=32M]
Capabilities: [40] MSI-X: Enable- Count=64 Masked-
Capabilities: [80] Power Management version 3
Capabilities: [a0] MSI: Enable- Count=1/32 Maskable- 64bit+
Capabilities: [c0] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [140] Device Serial Number 59-69-46-61-6e-48-73-75
Kernel modules: netxen_nic

dmesg | grep netxen
[2.188078] netxen_nic :05:00.0: 2MB memory map
[2.188371] netxen_nic :05:00.0: Timeout reached  waiting for rom
done
[2.188442] netxen_nic :05:00.0: Error getting board config info.
[2.189488] netxen_nic: probe of :05:00.0 failed with error -5
[2.190438] netxen_nic :05:00.1: 2MB memory map
[2.190741] netxen_nic :05:00.1: Timeout reached  waiting for rom
done
[2.190808] netxen_nic :05:00.1: Error getting board config info.
[2.190964] netxen_nic: probe of :05:00.1 failed with error -5


...
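( Not from the original thread, but two generic checks that sometimes help
narrow down a probe failure like this one: )

# reload the driver and watch the probe messages again
modprobe -r netxen_nic && modprobe netxen_nic
dmesg | grep -i netxen | tail

# check whether any NetXen firmware files are present on the host at all
ls /lib/firmware | grep -i -e netxen -e phanfw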

Thank you very much !
Leo


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B6GXGCV47VLF2JS7EXHGGBGLWAI5ZJ2R/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-07-03 Thread Leo David
Hello everyone,
I am coming back to this issue because I need to spin up a couple of Win10
VDI VMs, and I can see that the problem still persists - at least for me.
I am now using a freshly installed and updated oVirt 4.2.8 and the latest
guest-tools ISO. I am still having a very laggy desktop experience on
Win10. Win2012 seems fine, and Win7 is fine as well.
Is anyone here running Win10 at least decently for VDI usage ? Any thoughts
on this ?
Thank you very much !

Leo


On Sat, Feb 16, 2019, 08:38 Leo David  wrote:

> Thank you,
>
> Not sure i've understood the procedure to create a custom vdsm hook.
> Is this a good example to follow ?
> https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/README
>
> Thanks,
>
> Leo
>
>
> On Fri, Feb 15, 2019, 19:46 Michal Skrivanek  wrote:
>
>>
>>
>> On 15 Feb 2019, at 16:04, Leo David  wrote:
>>
>> Thank you Victor.
>> Yes, I have the latest guest-tools installed, and the problem is that
>> after configuring the vm by using virsh and reboot,  the configuration
>> reverts to defaults:
>> > passwdValidTo='1970-01-01T00:00:01'>
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>> 
>> So my added changes are not loaded at vm boot.
>>  I am sure this is an oVirt-specific behavior, but I just can't find out
>> how to make it persistent.
>>
>>
>> You can’t edit it in virsh in oVirt. Starting VM in oVirt is too complex
>> for libvirt to handle it on its own. You need to write a vdsm hook if you
>> want to modify resulting xml
>>
>> For trying out things I’d recommend to do that with a simple VM in
>> virt-manager and once you find out the right config/parameters then write a
>> hook with those for oVirt
>>
>> Thanks,
>> michal
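( A minimal before_vm_start hook along those lines could look roughly like
the sketch below. It uses vdsm's hooking module; the spice child elements it
injects follow the compression/streaming settings discussed in this thread,
and the file path is just the usual hook location, not something quoted from
the original mails. )

#!/usr/bin/python
# /usr/libexec/vdsm/hooks/before_vm_start/50_spice_nocompress  (sketch)
# Injects <image compression='off'/> and <streaming mode='off'/> into the
# spice <graphics> element of the domain XML that vdsm hands to libvirt.
import hooking

domxml = hooking.read_domxml()
for graphics in domxml.getElementsByTagName('graphics'):
    if graphics.getAttribute('type') != 'spice':
        continue
    image = domxml.createElement('image')
    image.setAttribute('compression', 'off')
    graphics.appendChild(image)
    streaming = domxml.createElement('streaming')
    streaming.setAttribute('mode', 'off')
    graphics.appendChild(streaming)
hooking.write_domxml(domxml)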
>>
>>
>> On Fri, Feb 15, 2019 at 4:32 PM Victor Toso 
>> wrote:
>>
>>> Hi,
>>>
>>> On Fri, Feb 15, 2019 at 04:24:15PM +0200, Leo David wrote:
>>> > Hi Everyone,
>>> > Any thoughts on this ?
>>> > It seems that audio streaming is affected as well, and
>>> > bandwidth is not an issue in this case.
>>>
>>> What audio issues do you see?
>>>
>>> > 'm thinking that maybe if I just just disable compression on
>>> > spice,  things would get a bit better...maybe.
>>> > Thank you !
>>> >
>>> > On Wed, Feb 13, 2019 at 8:05 AM Leo David  wrote:
>>> >
>>> > > Thank you so much Victor !
>>> > > Anyone, any ideea how could I disable video compression for
>>> > > spice console on particular vms ?
>>>
>>> I'm not familiar with oVirt interface but it shouldn't be hard if
>>> you have access to the host.
>>>
>>> # virsh edit $vm-name
>>>
>>> switch what you have in graphics to:
>>>
>>> 
>>> 
>>> 
>>> 
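( For reference, a libvirt spice graphics stanza with image compression and
streaming disabled typically looks like the following; this is an
illustration of the kind of XML meant here, not necessarily the exact
snippet from the original mail: )

<graphics type='spice' autoport='yes'>
  <image compression='off'/>
  <jpeg compression='never'/>
  <zlib compression='never'/>
  <streaming mode='off'/>
</graphics>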
>>>
>>>
>>> > > I am trying to implement an "almost" full desktop experience
>>> > > with an oVirt based vdi environment.
>>> > > And besides the Windows10 spice issues ( which are the main
>>> > > cause of this thread ), it seems that Windows 7 is affected
>>> > > too from the multimedia-playback perspective, which makes it a
>>> > > total blocker for the project implementation
>>>
>>> Do you have spice-guest-tools installed?
>>>
>>> > > Any suggestions/ similar experiences ?
>>> > > Thank you very much and have a nice day !
>>> > >
>>> > > Leo
>>>
>>> Cheers,
>>> Victor
>>> > >
>>> > > On Mon, Feb 11, 2019, 12:01 Victor Toso >> wrote:
>>> > >
>>> > >> Hi,
>>> > >>
>>> > >> On Mon, Feb 11, 2019 at 11:50:49AM +0200, Leo David wrote:
>>> > >> > Hi,
>>> > >> > "This enable host-side streaming, are you sure you want it?"
>>> > >> > Not sure yet, but i would at least disable compression, video
>>> > >> > playing seems to be pretty poor, and crackling ( youtube, etc )
>>> > >>
>>> > >> For playing video use-cases (youtube) it might be okay but not
>>> > >> for playing games as it has some hard coded delay in the
>>> > >> streaming code path.
>>> > >>
&

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Leo David
https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids

On Mon, Jun 10, 2019, 18:13 Adrian Quintero 
wrote:

> Ok I have tried reinstalling the server from scratch with a different name
> and IP address and when trying to add it to cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and in
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the ovirt engine I see that the host deploy is running  the following
> command to identify the system, if this is the case then it will never work
> :( because it identifies each host using the system uuid.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and
>> it did not work.
>> Error while executing action:
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks hopefully that might work, however it would be good to know that you
>> can re-install from scratch an existing cluster server and put it back to
>> the cluster.
>>
>> Still doing research hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>
>>> You will need to remove the storage role from that server first ( not
>>> being part of gluster cluster ).
>>> I cannot test this right now on production,  but maybe putting host
>>> although its already died under "mantainance" while checking to ignore
>>> guster warning will let you remove it.
>>> Maybe I am wrong about the procedure,  can anybody input an advice
>>> helping with this situation ?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>>> wrote:
>>>
>>>> I tried removing the bad host but running into the following issue ,
>>>> any idea?
>>>> Operation Canceled
>>>> Error while executing action:
>>>>
>>>> host1.mydomain.com
>>>>
>>>>- Cannot remove Host. Server having Gluster volume.
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
>>>> adrianquint...@gmail.com> wrote:
>>>>
>>>>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>>>>> wondering how that setup should be achieved?
>>>>>
>>>>> thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
>>>>> adrianquint...@gmail.com> wrote:
>>>>>
>>>>>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>>>>>
>>>>>> Will test tomorrow and post the results.
>>>>>>
>>>>>> Thanks again
>>>>>>
>>>>>> Adrian
>>>>>>
>>>>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>>>>>
>>>>>>> Hi Adrian,
>>>>>>> I think the steps are:
>>>>>>> - reinstall the host
>>>>>>> - join it to virtualisation cluster
>>>>>>> And if was member of gluster cluster as well:
>>>>>>> - go to host - storage devices
>>>>>>> - cr

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-10 Thread Leo David
Hi, I think you can generate and use a new UUID, although I am not sure
about the exact procedure right now..
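( If it is the approach from the blog post linked in the other reply, it
boils down to giving vdsm its own host id instead of the one from SMBIOS -
roughly the following, run on the freshly reinstalled host before adding it
to the engine; please verify against the post before relying on it: )

uuidgen > /etc/vdsm/vdsm.id
cat /etc/vdsm/vdsm.id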

On Mon, Jun 10, 2019, 18:13 Adrian Quintero 
wrote:

> Ok I have tried reinstalling the server from scratch with a different name
> and IP address and when trying to add it to cluster I get the following
> error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here, I don't have a brand new server to do this and in
> need to re-use what I have.
>
>
> *From the oVirt engine log (/var/log/ovirt-engine/engine.log): *
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log
> of the ovirt engine I see that the host deploy is running  the following
> command to identify the system, if this is the case then it will never work
> :( because it identifies each host using the system uuid.
>
> *dmidecode -s system-uuid*
> b64d566e-055d-44d4-83a2-d3b83f25412e
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero 
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking to ignore gluster and
>> it did not work.
>> Error while executing action:
>> -Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled so gluster will never see the
>> volumes or bricks for this server.
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks hopefully that might work, however it would be good to know that you
>> can re-install from scratch an existing cluster server and put it back to
>> the cluster.
>>
>> Still doing research hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David  wrote:
>>
>>> You will need to remove the storage role from that server first ( not
>>> being part of gluster cluster ).
>>> I cannot test this right now on production,  but maybe putting host
>>> although its already died under "mantainance" while checking to ignore
>>> guster warning will let you remove it.
>>> Maybe I am wrong about the procedure,  can anybody input an advice
>>> helping with this situation ?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
>>> wrote:
>>>
>>>> I tried removing the bad host but running into the following issue ,
>>>> any idea?
>>>> Operation Canceled
>>>> Error while executing action:
>>>>
>>>> host1.mydomain.com
>>>>
>>>>- Cannot remove Host. Server having Gluster volume.
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
>>>> adrianquint...@gmail.com> wrote:
>>>>
>>>>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>>>>> wondering how that setup should be achieved?
>>>>>
>>>>> thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
>>>>> adrianquint...@gmail.com> wrote:
>>>>>
>>>>>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>>>>>
>>>>>> Will test tomorrow and post the results.
>>>>>>
>>>>>> Thanks again
>>>>>>
>>>>>> Adrian
>>>>>>
>>>>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>>>>>
>>>>>>> Hi Adrian,
>>>>>>> I think the steps are:
>>>>>>> - reinstall the host
>>>>>>> - join it to virtualisation cluster
>>>>>>> And if was member of gluster cluster as well:
>>>>>>> - go to host - storage devices
>>>>>>> - cr

[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-07 Thread Leo David
You will need to remove the storage role from that server first ( so it is
no longer part of the gluster cluster ).
I cannot test this right now on production, but maybe putting the host -
although it has already died - under "maintenance" while ticking the option
to ignore the gluster warning will let you remove it.
Maybe I am wrong about the procedure; can anybody offer advice to help with
this situation ?
Cheers,

Leo




On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero 
wrote:

> I tried removing the bad host but running into the following issue , any
> idea?
> Operation Canceled
> Error while executing action:
>
> host1.mydomain.com
>
>- Cannot remove Host. Server having Gluster volume.
>
>
>
>
> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero 
> wrote:
>
>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes,
>> wondering how that setup should be achieved?
>>
>> thanks,
>>
>> Adrian
>>
>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero 
>> wrote:
>>
>>> Hi Leo, yes, this helps a lot, this confirms the plan we had in mind.
>>>
>>> Will test tomorrow and post the results.
>>>
>>> Thanks again
>>>
>>> Adrian
>>>
>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David  wrote:
>>>
>>>> Hi Adrian,
>>>> I think the steps are:
>>>> - reinstall the host
>>>> - join it to virtualisation cluster
>>>> And if was member of gluster cluster as well:
>>>> - go to host - storage devices
>>>> - create the bricks on the devices - as they are on the other hosts
>>>> - go to storage - volumes
>>>> - replace each failed brick with the corresponding new one.
>>>> Hope it helps.
>>>> Cheers,
>>>> Leo
>>>>
>>>>
>>>> On Wed, Jun 5, 2019, 23:09  wrote:
>>>>
>>>>> Anybody have had to replace a failed host from a 3, 6, or 9 node
>>>>> hyperconverged setup with gluster storage?
>>>>>
>>>>> One of my hosts is completely dead, I need to do a fresh install using
>>>>> ovirt node iso, can anybody point me to the proper steps?
>>>>>
>>>>> thanks,
>>>>> _______
>>>>> Users mailing list -- users@ovirt.org
>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/
>>>>>
>>>> --
>>> Adrian Quintero
>>>
>>
>>
>> --
>> Adrian Quintero
>>
>
>
> --
> Adrian Quintero
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QNYP3FKNBF6QPV46R5L3LRBWTTIC3OHO/


[ovirt-users] Re: Replace bad Host from a 9 Node hyperconverged setup 4.3.3

2019-06-05 Thread Leo David
Hi Adrian,
I think the steps are:
- reinstall the host
- join it to virtualisation cluster
And if it was a member of the gluster cluster as well:
- go to host - storage devices
- create the bricks on the devices - as they are on the other hosts
- go to storage - volumes
- replace each failed brick with the corresponding new one.
Hope it helps.
Cheers,
Leo
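( On the CLI side, the per-brick replacement in that last step corresponds
to something like the command below - volume name, hosts and brick paths are
placeholders; in the web UI it is the "Replace brick" action on the volume: )

gluster volume replace-brick data \
    dead-host:/gluster_bricks/data/data \
    new-host:/gluster_bricks/data/data \
    commit force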


On Wed, Jun 5, 2019, 23:09  wrote:

> Anybody have had to replace a failed host from a 3, 6, or 9 node
> hyperconverged setup with gluster storage?
>
> One of my hosts is completely dead, I need to do a fresh install using
> ovirt node iso, can anybody point me to the proper steps?
>
> thanks,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RFBYQKWC2KNZVYTYQF5T256UZBCJHK5F/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L2FT4DA5B6MTT5TXIT4N5MTH5VTG25F7/


[ovirt-users] Re: Single instance scaleup.

2019-05-28 Thread Leo David
Hi,
Looks like the only way around would be to create a brand-new replicated
volume on other disks, and start moving the VMs around between the volumes ?
Cheers,

Leo

On Mon, May 27, 2019 at 1:53 PM Leo David  wrote:

> Hi,
> Any suggestions ?
> Thank you very much !
>
> Leo
>
> On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov 
> wrote:
>
>> Yeah,
>> it seems different from the docs.
>> I'm adding the gluster users list ,as they are more experienced into that.
>>
>> @Gluster-users,
>>
>> can you provide some hint how to add aditional replicas to the below
>> volumes , so they become 'replica 2 arbiter 1' or 'replica 3' type volumes ?
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> В неделя, 26 май 2019 г., 15:16:18 ч. Гринуич+3, Leo David <
>> leoa...@gmail.com> написа:
>>
>>
>> Thank you Strahil,
>> The engine and ssd-samsung are distributed...
>> So these are the ones that I need to have replicated accross new nodes.
>> I am not very sure about the procedure to accomplish this.
>> Thanks,
>>
>> Leo
>>
>> On Sun, May 26, 2019, 13:04 Strahil  wrote:
>>
>> Hi Leo,
>> As you do not have a distributed volume , you can easily switch to
>> replica 2 arbiter 1 or replica 3 volumes.
>>
>> You can use the following for adding the bricks:
>>
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
>>
>> Best Regards,
>> Strahil Nikoliv
>> On May 26, 2019 10:54, Leo David  wrote:
>>
>> Hi Stahil,
>> Thank you so much for yout input !
>>
>>  gluster volume info
>>
>>
>> Volume Name: engine
>> Type: Distribute
>> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard: on
>> performance.low-prio-threads: 32
>> performance.strict-o-direct: off
>> network.remote-dio: off
>> network.ping-timeout: 30
>> user.cifs: off
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> cluster.eager-lock: enable
>> Volume Name: ssd-samsung
>> Type: Distribute
>> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
>> Options Reconfigured:
>> cluster.eager-lock: enable
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> user.cifs: off
>> network.ping-timeout: 30
>> network.remote-dio: off
>> performance.strict-o-direct: on
>> performance.low-prio-threads: 32
>> features.shard: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> transport.address-family: inet
>> nfs.disable: on
>>
>> The other two hosts will be 192.168.80.192/193  - this is gluster
>> dedicated network over 10GB sfp+ switch.
>> - host 2 wil have identical harware configuration with host 1 ( each disk
>> is actually a raid0 array )
>> - host 3 has:
>>-  1 ssd for OS
>>-  1 ssd - for adding to engine volume in a full replica 3
>>-  2 ssd's in a raid 1 array to be added as arbiter for the data
>> volume ( ssd-samsung )
>> So the plan is to have "engine"  scaled in a full replica 3,  and
>> "ssd-samsung" scalled in a replica 3 arbitrated.
>>
>>
>>
>>
>> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>>
>> Hi Leo,
>>
>> Gluster is quite smart, but in order to provide any hints , can you
>> provide output of 'gluster volume info '.
>> If you have 2 more systems , keep in mind that it is best to mirror the
>> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
>> machine), while for the arbiter this is not neccessary.
>>
>> What is your network and NICs ? Based on my experience , I can recommend
>> at least 10 gbit/s  interfase(s).
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 26, 2019 07:52, Leo David  wrote:
>>
>> Hello Everyone,
>> Can someone help me to clarify this ?
>> I have a single-node 4.2.8 installation ( only two glust

[ovirt-users] Re: Single instance scaleup.

2019-05-27 Thread Leo David
Hi,
Any suggestions ?
Thank you very much !

Leo

On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov 
wrote:

> Yeah,
> it seems different from the docs.
> I'm adding the gluster users list ,as they are more experienced into that.
>
> @Gluster-users,
>
> can you provide some hint how to add aditional replicas to the below
> volumes , so they become 'replica 2 arbiter 1' or 'replica 3' type volumes ?
>
>
> Best Regards,
> Strahil Nikolov
>
> В неделя, 26 май 2019 г., 15:16:18 ч. Гринуич+3, Leo David <
> leoa...@gmail.com> написа:
>
>
> Thank you Strahil,
> The engine and ssd-samsung are distributed...
> So these are the ones that I need to have replicated accross new nodes.
> I am not very sure about the procedure to accomplish this.
> Thanks,
>
> Leo
>
> On Sun, May 26, 2019, 13:04 Strahil  wrote:
>
> Hi Leo,
> As you do not have a distributed volume , you can easily switch to replica
> 2 arbiter 1 or replica 3 volumes.
>
> You can use the following for adding the bricks:
>
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
>
> Best Regards,
> Strahil Nikoliv
> On May 26, 2019 10:54, Leo David  wrote:
>
> Hi Stahil,
> Thank you so much for yout input !
>
>  gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193  - this is gluster
> dedicated network over 10GB sfp+ switch.
> - host 2 wil have identical harware configuration with host 1 ( each disk
> is actually a raid0 array )
> - host 3 has:
>-  1 ssd for OS
>-  1 ssd - for adding to engine volume in a full replica 3
>-  2 ssd's in a raid 1 array to be added as arbiter for the data volume
> ( ssd-samsung )
> So the plan is to have "engine"  scaled in a full replica 3,  and
> "ssd-samsung" scalled in a replica 3 arbitrated.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>
> Hi Leo,
>
> Gluster is quite smart, but in order to provide any hints , can you
> provide output of 'gluster volume info '.
> If you have 2 more systems , keep in mind that it is best to mirror the
> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
> machine), while for the arbiter this is not neccessary.
>
> What is your network and NICs ? Based on my experience , I can recommend
> at least 10 gbit/s  interfase(s).
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me to clarify this ?
> I have a single-node 4.2.8 installation ( only two gluster storage domains
> - distributed  single drive volumes ). Now I just got two identintical
> servers and I would like to go for a 3 nodes bundle.
> Is it possible ( after joining the new nodes to the cluster ) to expand
> the existing volumes across the new nodes and change them to replica 3
> arbitrated ?
> If so, could you share with me what would it be the procedure ?
> Thank you very much !
>
> Leo
>
>
>
> --
> Best regards, Leo David
>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRECPRVVUVG42N5PXQWUG2MXM7R4WBGO/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Leo David
Thank you Strahil,
The engine and ssd-samsung are distributed...
So these are the ones that I need to have replicated across the new nodes.
I am not very sure about the procedure to accomplish this.
Thanks,

Leo

On Sun, May 26, 2019, 13:04 Strahil  wrote:

> Hi Leo,
> As you do not have a distributed volume , you can easily switch to replica
> 2 arbiter 1 or replica 3 volumes.
>
> You can use the following for adding the bricks:
>
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
>
> Best Regards,
> Strahil Nikoliv
> On May 26, 2019 10:54, Leo David  wrote:
>
> Hi Stahil,
> Thank you so much for yout input !
>
>  gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193  - this is gluster
> dedicated network over 10GB sfp+ switch.
> - host 2 wil have identical harware configuration with host 1 ( each disk
> is actually a raid0 array )
> - host 3 has:
>-  1 ssd for OS
>-  1 ssd - for adding to engine volume in a full replica 3
>-  2 ssd's in a raid 1 array to be added as arbiter for the data volume
> ( ssd-samsung )
> So the plan is to have "engine"  scaled in a full replica 3,  and
> "ssd-samsung" scalled in a replica 3 arbitrated.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>
> Hi Leo,
>
> Gluster is quite smart, but in order to provide any hints , can you
> provide output of 'gluster volume info '.
> If you have 2 more systems , keep in mind that it is best to mirror the
> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
> machine), while for the arbiter this is not neccessary.
>
> What is your network and NICs ? Based on my experience , I can recommend
> at least 10 gbit/s  interfase(s).
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me to clarify this ?
> I have a single-node 4.2.8 installation ( only two gluster storage domains
> - distributed  single drive volumes ). Now I just got two identintical
> servers and I would like to go for a 3 nodes bundle.
> Is it possible ( after joining the new nodes to the cluster ) to expand
> the existing volumes across the new nodes and change them to replica 3
> arbitrated ?
> If so, could you share with me what would it be the procedure ?
> Thank you very much !
>
> Leo
>
>
>
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFDDRUF3FIRXIKGS6M3I757PINVUNFLU/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Leo David
Hi Strahil,
Thank you so much for your input !

 gluster volume info


Volume Name: engine
Type: Distribute
Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: off
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
Volume Name: ssd-samsung
Type: Distribute
Volume ID: 76576cc6-220b-4651-952d-99846178a19e
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/sdc/data
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

The other two hosts will be 192.168.80.192/193 - this is a dedicated
gluster network over a 10Gb SFP+ switch.
- host 2 will have an identical hardware configuration to host 1 ( each
disk is actually a raid0 array )
- host 3 has:
   -  1 ssd for OS
   -  1 ssd - for adding to engine volume in a full replica 3
   -  2 ssd's in a raid 1 array to be added as arbiter for the data volume
( ssd-samsung )
So the plan is to have "engine"  scaled in a full replica 3,  and
"ssd-samsung" scalled in a replica 3 arbitrated.




On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:

> Hi Leo,
>
> Gluster is quite smart, but in order to provide any hints , can you
> provide output of 'gluster volume info '.
> If you have 2 more systems , keep in mind that it is best to mirror the
> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
> machine), while for the arbiter this is not neccessary.
>
> What is your network and NICs ? Based on my experience , I can recommend
> at least 10 gbit/s  interfase(s).
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me to clarify this ?
> I have a single-node 4.2.8 installation ( only two gluster storage domains
> - distributed  single drive volumes ). Now I just got two identintical
> servers and I would like to go for a 3 nodes bundle.
> Is it possible ( after joining the new nodes to the cluster ) to expand
> the existing volumes across the new nodes and change them to replica 3
> arbitrated ?
> If so, could you share with me what would it be the procedure ?
> Thank you very much !
>
> Leo
>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJ2OO6SNVG4VQZDLJEEEJPTGLPZVQMUV/


[ovirt-users] Single instance scaleup.

2019-05-25 Thread Leo David
Hello Everyone,
Can someone help me to clarify this ?
I have a single-node 4.2.8 installation ( only two gluster storage domains
- distributed single-drive volumes ). Now I just got two identical
servers and I would like to go for a 3-node bundle.
Is it possible ( after joining the new nodes to the cluster ) to expand the
existing volumes across the new nodes and change them to replica 3
arbitrated ?
If so, could you share with me what would it be the procedure ?
Thank you very much !

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JN34VSFJ2LBWND3OLVSIPHDP4XWP632K/


[ovirt-users] Re: Arbiter brick disk performance

2019-04-27 Thread Leo David
Thank you so much Strahil ! Yes, this would be the setup. Basically, I will
equip the 3rd node with only some consumer grade ssds to have the arbiter
metadata for all the volumes, while having the 1st and 2nd nodes equipped
with proper dc grade disks for both spinning and ssd volumes. This will
drastically reduce costs...
Thank you !
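( For reference, a data volume created with a small SSD as the arbiter
brick would look something like this - host names and brick paths are
placeholders: )

gluster volume create data replica 3 arbiter 1 \
    node1:/gluster_bricks/data/data \
    node2:/gluster_bricks/data/data \
    node3:/gluster_bricks/data_arbiter/data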

On Thu, Apr 25, 2019, 15:12 Strahil Nikolov  wrote:

> I can't get the idea. Can you give an example.
>
> let me share some of my setup.
> 1. Volume -> data_fast is consisting of:
> ovirt1:/gluster_bricks/data_fast/data_fast -> 500GB NVMe
> ovirt2:/gluster_bricks/data_fast/data_fast -> 500GB NVMe
> ovirt3:/gluster_bricks/data_fast/data_fast -> small LV on a slow
> (QLC-based) SATA SSD
>
> All hosted on thin LVM.
> Of course , for the engine I have:
> ovirt1:/gluster_bricks/engine/engine -> SATA ssd shared between OS and
> brick
> ovirt2:/gluster_bricks/engine/engine -> SATA ssd shared between OS and
> brick
> ovirt3:/gluster_bricks/engine/engine -> SATA ssd shared between OS and 4
> other bricks
>
> Since I have switched from old HDDs to consumer SSD disks - the engine
> volume is not reported by sanlock.service , despite Gluster v52.XX has
> higher latency.
>
> Best Regards,
> Strahil Nikolov
>
>
>
> В сряда, 24 април 2019 г., 21:25:10 ч. Гринуич-4, Leo David <
> leoa...@gmail.com> написа:
>
>
> Thank you very much Strahil, very helpful. As always. So I would equip the
> 3rd server and alocate one small ( 120 - 240gb) consumer grade ssd for each
> of the gluster volume, and at volume creation, to specify the small ssds as
> the 3rd brick.
> Do it make sense ?
> Thank you !
>
> On Wed, Apr 24, 2019, 18:10 Strahil  wrote:
>
> I think 2 small ssds (raid 1 mdadm) can do the job better as ssds have
> lower latencies .You can use them both for OS (minimum needed is 60 GB) and
> the rest will be plenty for an arbiter.
> By the way, if you plan using gluster snapshots - use thin LVM for the
> brick.
>
> Best Regards,
> Strahil Nikolov
> On Apr 24, 2019 16:20, Leo David  wrote:
>
> Hello Everyone,
> I need to look into adding some enterprise grade sas disks ( both ssd
> and spinning  ),  and since the prices are not too low,  I would like to
> benefit of replica 3 arbitrated.
> Therefore,  I intend to buy some smaller disks for use them as arbiter
> brick.
> My question is, what performance ( regarding iops,  througput ) the
> arbiter disks need to be. Should they be at least the same as the real data
> disks ?
> Knowing that they only keep metadata, I am thinking that will not be so
> much pressure on the arbiters.
> Any thoughts?
>
> Thank you !
>
>
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXY6TJ2WC7DOY4RZAPRIZYLJ4V665T6K/


[ovirt-users] Re: Arbiter brick disk performance

2019-04-24 Thread Leo David
Thank you very much Strahil, very helpful, as always. So I would equip the
3rd server and allocate one small ( 120 - 240GB ) consumer-grade SSD for
each of the gluster volumes, and at volume creation specify the small SSDs
as the 3rd brick.
Does it make sense ?
Thank you !

On Wed, Apr 24, 2019, 18:10 Strahil  wrote:

> I think 2 small ssds (raid 1 mdadm) can do the job better as ssds have
> lower latencies .You can use them both for OS (minimum needed is 60 GB) and
> the rest will be plenty for an arbiter.
> By the way, if you plan using gluster snapshots - use thin LVM for the
> brick.
>
> Best Regards,
> Strahil Nikolov
> On Apr 24, 2019 16:20, Leo David  wrote:
>
> Hello Everyone,
> I need to look into adding some enterprise grade sas disks ( both ssd
> and spinning  ),  and since the prices are not too low,  I would like to
> benefit of replica 3 arbitrated.
> Therefore,  I intend to buy some smaller disks for use them as arbiter
> brick.
> My question is, what performance ( regarding iops,  througput ) the
> arbiter disks need to be. Should they be at least the same as the real data
> disks ?
> Knowing that they only keep metadata, I am thinking that will not be so
> much pressure on the arbiters.
> Any thoughts?
>
> Thank you !
>
>
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4MZ4UUCJM7CONEJHSIMASBO54RS2GXTJ/


[ovirt-users] Arbiter brick disk performance

2019-04-24 Thread Leo David
Hello Everyone,
I need to look into adding some enterprise-grade SAS disks ( both SSD and
spinning ), and since the prices are not too low, I would like to benefit
from replica 3 arbitrated.
Therefore, I intend to buy some smaller disks to use as arbiter bricks.
My question is: what performance ( regarding IOPS, throughput ) do the
arbiter disks need to have? Should they be at least the same as the real
data disks ?
Knowing that they only keep metadata, I am thinking there will not be so
much pressure on the arbiters.
Any thoughts?

Thank you !


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/E52TUH2HJY6DRA625643WVDEHAHZ7HOH/


[ovirt-users] Replica 3 distribute-replicated - data placement and fault tolerance

2019-04-19 Thread Leo David
Hello Everyone,
I did some fio performance tests on a particular VM and noticed things I do
not understand about how data is placed across the bricks. I am sure this
is a lack of knowledge on my part, but I would really appreciate any help
in understanding it; I did a bit of research on the internet but just
couldn't find anything relevant.
I have one replica 3 distributed-replicated arbitrated volume, across 18
bricks ( 9 nodes, 2 JBODs per node ).
The volume was created as:
node1-brick1, node2-brick1, ..., node9-brick1, node1-brick2, node2-brick2,
..., node9-brick2
As far as I've understood, under the hood there are sets of 3-way
replicated data ( subvolumes ): the first set is assigned to the first 3
bricks, the next set to the next 3 bricks, and so on..
Now, I have this test VM running on node one.
When I started the fio test, I noticed increased gluster traffic from
node 1 to nodes 2, 4 and 5,
so I assumed that the VM disk data resides on a subvolume allocated to
bricks on those hosts.
Then I migrated the VM to node 2 and ran the same test. Now the increased
traffic is generated from node 2 to nodes 1, 4 and 5..
What I do not understand is:
- why the gluster client ( oVirt host ) is sending data to 3 bricks if the
volume is arbitrated - shouldn't it send full data to only 2 of them ?
- why there are 4 bricks involved in this subvolume
- what the fault tolerance level is in this setup, i.e. how many hosts can
I take down and still have the volume serving IO requests; can they be
random ?
I am sorry for my lack of knowledge, I am just trying to understand what is
happening so I can deploy a decent, proper setup of an HCI environment.
Thank you,
Leo
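( One way to see which bricks - and therefore which replica subvolume -
actually hold a given image is to query the pathinfo xattr on the FUSE
mount; the mount point and image path below are placeholders following the
usual oVirt gluster layout: )

getfattr -n trusted.glusterfs.pathinfo \
    /rhev/data-center/mnt/glusterSD/node1:_data/<sd-uuid>/images/<img-uuid>/<disk-uuid>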


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XVGQSXSBDQXCIO64NZWFLEMYAXIEJX3F/


[ovirt-users] Re: Importing existing GlusterFS

2019-04-17 Thread Leo David
In this case you might want to create dedicated gluster volumes for oVirt
and add them as new storage domains. Not sure about the performance though.
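( A dedicated volume for oVirt typically also needs the virt option group
and the vdsm/kvm ownership set; a sketch with placeholder host and brick
names: )

gluster volume create vmstore replica 3 \
    server1:/bricks/vmstore/brick \
    server2:/bricks/vmstore/brick \
    server3:/bricks/vmstore/brick
gluster volume set vmstore group virt
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36
gluster volume start vmstore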

On Wed, Apr 17, 2019, 14:35 Zryty ADHD  wrote:

> Ok, I will describe my environment in more detail.
> I have 3 servers with GlusterFS, each 500GB, which is used in an OpenShift
> environment as storage, but I don't have any visual dashboard to organize
> them, showing things such as capacity, free space etc.
> All servers are running CentOS 7.6.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TIZ6VIM47ZNPUSP2KNY52EQ2LFSWGE7O/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PK6YJVJAANRPXVBGYJXZZOSU7CZEJXEO/


[ovirt-users] Re: Importing existing GlusterFS

2019-04-17 Thread Leo David
I think you need to import storage domain ?

On Wed, Apr 17, 2019, 13:00 Zryty ADHD  wrote:

> Hi,
> I have a question about that. I installed oVirt 4.3.3 on RHEL 7.6 and want
> to import my existing GlusterFS cluster into it, but I can't find an option
> to do that. Can anyone explain to me how to do that, or is it not possible
> in this version ?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C27K52OXODH6P26YWX4QEUKHPQPMNS76/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OCOSJVTGSNXI3B6BX7ESFXUMJK65Y3GQ/


[ovirt-users] Re: Gluster arbiter volume storage domain - change

2019-04-16 Thread Leo David
Thank you so much Ravi, very helpful !

On Tue, Apr 16, 2019, 12:11 Ravishankar N  wrote:

>
> On 16/04/19 2:20 PM, Sahina Bose wrote:
>
> On Tue, Apr 16, 2019 at 1:39 PM Leo David  
>  wrote:
>
>
> Hi Everyone,
> I have wrongly configured the main gluster volume ( 12 identical 1tb ssd 
> disks, replica 3 distributed-replicated, across 6 nodes - 2 per node ) with 
> arbiter one.
> Oviously I am wasting storage space in this scenario with the arbiter bricks, 
> and I would like to convert the volume to non-arbitrated one, so having all 
> the data evenly spreaded across all the disks.
> Considering the the storage is being used by about 40 vms in production, what 
> would it be the steps, or is there any chance to change the volume type to 
> non-arbitrated on the fly and then rebalance ?
> Thank you very much !
>
>
> Ravi, can you help here - to change from arbiter to replica 3?
>
> The general steps are:
>
> 1. Ensure there are no pending heals.
>
> 2. Use the `remove-brick` command to reduce the volume to a replica 2
>
> 3. Use the `add-brick` command to convert it to a replica 3.
>
> 4. Monitor and check that the heal is eventually completed on the newly
> added bricks.
>
> The steps are best done when the VMs are offline so that self-heal traffic
> does not eat up too much of I/O traffic.
>
> Example:
> [root@tuxpad ravi]# gluster volume info
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: e3fc6ea5-a48c-4918-8a4b-0a7859f3a182
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/home/ravi/bricks/brick1
> Brick2: 127.0.0.2:/home/ravi/bricks/brick2
> Brick3: 127.0.0.2:/home/ravi/bricks/brick3 (arbiter)
> Brick4: 127.0.0.2:/home/ravi/bricks/brick4
> Brick5: 127.0.0.2:/home/ravi/bricks/brick5
> Brick6: 127.0.0.2:/home/ravi/bricks/brick6 (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@tuxpad ravi]#
>
> [root@tuxpad ravi]# gluster volume heal testvol info
> Brick 127.0.0.2:/home/ravi/bricks/brick1
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick2
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick3
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick4
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick5
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick6
> Status: Connected
> Number of entries: 0
>
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume remove-brick testvol replica 2
> 127.0.0.2:/home/ravi/bricks/brick3  127.0.0.2:/home/ravi/bricks/brick6
> force
> Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
> volume remove-brick commit force: success
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume info
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: e3fc6ea5-a48c-4918-8a4b-0a7859f3a182
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/home/ravi/bricks/brick1
> Brick2: 127.0.0.2:/home/ravi/bricks/brick2
> Brick3: 127.0.0.2:/home/ravi/bricks/brick4
> Brick4: 127.0.0.2:/home/ravi/bricks/brick5
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume add-brick testvol replica 3 
> 127.0.0.2:/home/ravi/bricks/brick3_new
> 127.0.0.2:/home/ravi/bricks/brick6_new
> volume add-brick: success
> [root@tuxpad ravi]#
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume info
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: e3fc6ea5-a48c-4918-8a4b-0a7859f3a182
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/home/ravi/bricks/brick1
> Brick2: 127.0.0.2:/home/ravi/bricks/brick2
> Brick3: 127.0.0.2:/home/ravi/bricks/brick3_new
> Brick4: 127.0.0.2:/home/ravi/bricks/brick4
> Brick5: 127.0.0.2:/home/ravi/bricks/brick5
> Brick6: 127.0.0.2:/home/ravi/bricks/brick6_new
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@tuxpad ravi]#
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume heal testvol info
> Brick 127.0.0.2:/home/ravi/bricks/brick1
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/br

[ovirt-users] Re: Gluster arbiter volume storage domain - change

2019-04-16 Thread Leo David
Thank you !
Indeed, the point about the increased traffic is a very good one...


On Tue, Apr 16, 2019, 11:50 Sahina Bose  wrote:

> On Tue, Apr 16, 2019 at 1:39 PM Leo David  wrote:
> >
> > Hi Everyone,
> > I have wrongly configured the main gluster volume ( 12 identical 1tb ssd
> disks, replica 3 distributed-replicated, across 6 nodes - 2 per node ) with
> arbiter one.
> > Oviously I am wasting storage space in this scenario with the arbiter
> bricks, and I would like to convert the volume to non-arbitrated one, so
> having all the data evenly spreaded across all the disks.
> > Considering the the storage is being used by about 40 vms in production,
> what would it be the steps, or is there any chance to change the volume
> type to non-arbitrated on the fly and then rebalance ?
> > Thank you very much !
>
> Ravi, can you help here - to change from arbiter to replica 3?
>
>
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UBEZWN35M365IKCIE3U6TRHDDX7TS75T/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/737HIWC7IVNIKOFLAISKOPZ6LMRFUWZY/


[ovirt-users] Gluster arbiter volume storage domain - change

2019-04-16 Thread Leo David
Hi Everyone,
I have wrongly configured the main gluster volume ( 12 identical 1TB SSD
disks, replica 3 distributed-replicated, across 6 nodes - 2 per node ) as
an arbitrated one.
Obviously I am wasting storage space in this scenario with the arbiter
bricks, and I would like to convert the volume to a non-arbitrated one, so
that all the data is spread evenly across all the disks.
Considering that the storage is being used by about 40 VMs in production,
what would the steps be? Is there any chance to change the volume type to
non-arbitrated on the fly and then rebalance ?
Thank you very much !
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UBEZWN35M365IKCIE3U6TRHDDX7TS75T/


[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-15 Thread Leo David
Thank you again Alex,
It makes a lot of sense now, with this detailed explanation.

On Mon, Apr 15, 2019, 20:25 Alex McWhirter  wrote:

> On 2019-04-15 13:08, Leo David wrote:
>
> Thank you Alex !
> I will try these performance settings.
> If someone from the dev guys could validate and recommend those as a good
> standard configuration, it would be just great.
> If they are OK, wouldn't it be nice to have them applied from within the UI
> with the "Optimize for VirtStore" button ?
> Thank you !
>
> On Mon, Apr 15, 2019 at 7:39 PM Alex McWhirter  wrote:
>
>> On 2019-04-14 23:22, Leo David wrote:
>>
>> Hi,
>> Thank you Alex, I was looking for some optimisation settings as well,
>> since I am pretty much in the same boat, using ssd based
>> replicate-distributed volumes across 12 hosts.
>> Could anyone else (maybe even from the ovirt or rhev team) validate
>> these settings or add some other tweaks as well, so we can use them as
>> standard ?
>> Thank you very much again !
>>
>> On Mon, Apr 15, 2019, 05:56 Alex McWhirter  wrote:
>>
>>> On 2019-04-14 20:27, Jim Kusznir wrote:
>>>
>>> Hi all:
>>> I've had I/O performance problems pretty much since the beginning of
>>> using oVirt.  I've applied several upgrades as time went on, but strangely,
>>> none of them have alleviated the problem.  VM disk I/O is still very slow
>>> to the point that running VMs is often painful; it notably affects nearly
>>> all my VMs, and makes me leery of starting any more.  I'm currently running
>>> 12 VMs and the hosted engine on the stack.
>>> My configuration started out with 1Gbps networking and hyperconverged
>>> gluster running on a single SSD on each node.  It worked, but I/O was
>>> painfully slow.  I also started running out of space, so I added an SSHD on
>>> each node, created another gluster volume, and moved VMs over to it.  I
>>> also ran that on a dedicated 1Gbps network.  I had recurring disk failures
>>> (seems that disks only lasted about 3-6 months; I warrantied all three at
>>> least once, and some twice before giving up).  I suspect the Dell PERC 6/i
>>> was partly to blame; the raid card refused to see/acknowledge the disk, but
>>> plugging it into a normal PC showed no signs of problems.  In any case,
>>> performance on that storage was notably bad, even though the gig-e
>>> interface was rarely taxed.
>>> I put in 10Gbps ethernet and moved all the storage onto that nonetheless,
>>> as several people here said that 1Gbps just wasn't fast enough.  Some
>>> aspects improved a bit, but disk I/O is still slow.  And I was still having
>>> problems with the SSHD data gluster volume eating disks, so I bought a
>>> dedicated NAS server (supermicro 12 disk dedicated FreeNAS NFS storage
>>> system on 10Gbps ethernet).  Set that up.  I found that it was actually
>>> FASTER than the SSD-based gluster volume, but still slow.  Lately its been
>>> getting slower, too...Don't know why.  The FreeNAS server reports network
>>> loads around 4MB/s on its 10Gbe interface, so its not network constrained.
>>> At 4MB/s, I'd sure hope the 12 spindle SAS interface wasn't constrained
>>> either.  (and disk I/O operations on the NAS itself complete much
>>> faster).
>>> So, running a test on my NAS against an ISO file I haven't accessed in
>>> months:
>>>  # dd
>>> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>>> of=/dev/null bs=1024k count=500
>>>
>>> 500+0 records in
>>> 500+0 records out
>>> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec)
>>> Running it on one of my hosts:
>>> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k
>>> count=500
>>> 500+0 records in
>>> 500+0 records out
>>> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s
>>> (I don't know if this is a true apples to apples comparison, as I don't
>>> have a large file inside this VM's image).  Even this is faster than I
>>> often see.
>>> I have a VoIP Phone server running as a VM.  Voicemail and other
>>> recordings usually fail due to IO issues opening and writing the files.
>>> Often, the first 4 or so seconds of the recording is missed; sometimes the
>>> entire thing just fails.  I didn't use to have this problem, but it's
>>> definitely been getting worse.  I finally bit the bullet and ordered a
>>> physical server dedicated for my VoIP System...But I sti

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-15 Thread Leo David
Thank you Alex !
I will try these performance settings.
If someone from the dev guys could validate and recommend those as a good
standard configuration, it would be just great.
If they are OK, wouldn't it be nice to have them applied from within the UI
with the "Optimize for VirtStore" button ?
Thank you !

On Mon, Apr 15, 2019 at 7:39 PM Alex McWhirter  wrote:

> On 2019-04-14 23:22, Leo David wrote:
>
> Hi,
> Thank you Alex, I was looking for some optimisation settings as well,
> since I am pretty much in the same boat, using ssd based
> replicate-distributed volumes across 12 hosts.
> Could anyone else (maybe even from the ovirt or rhev team) validate these
> settings or add some other tweaks as well, so we can use them as standard ?
> Thank you very much again !
>
> On Mon, Apr 15, 2019, 05:56 Alex McWhirter  wrote:
>
>> On 2019-04-14 20:27, Jim Kusznir wrote:
>>
>> Hi all:
>> I've had I/O performance problems pretty much since the beginning of
>> using oVirt.  I've applied several upgrades as time went on, but strangely,
>> none of them have alleviated the problem.  VM disk I/O is still very slow
>> to the point that running VMs is often painful; it notably affects nearly
>> all my VMs, and makes me leery of starting any more.  I'm currently running
>> 12 VMs and the hosted engine on the stack.
>> My configuration started out with 1Gbps networking and hyperconverged
>> gluster running on a single SSD on each node.  It worked, but I/O was
>> painfully slow.  I also started running out of space, so I added an SSHD on
>> each node, created another gluster volume, and moved VMs over to it.  I
>> also ran that on a dedicated 1Gbps network.  I had recurring disk failures
>> (seems that disks only lasted about 3-6 months; I warrantied all three at
>> least once, and some twice before giving up).  I suspect the Dell PERC 6/i
>> was partly to blame; the raid card refused to see/acknowledge the disk, but
>> plugging it into a normal PC showed no signs of problems.  In any case,
>> performance on that storage was notably bad, even though the gig-e
>> interface was rarely taxed.
>> I put in 10Gbps ethernet and moved all the storage onto that nonetheless,
>> as several people here said that 1Gbps just wasn't fast enough.  Some
>> aspects improved a bit, but disk I/O is still slow.  And I was still having
>> problems with the SSHD data gluster volume eating disks, so I bought a
>> dedicated NAS server (supermicro 12 disk dedicated FreeNAS NFS storage
>> system on 10Gbps ethernet).  Set that up.  I found that it was actually
>> FASTER than the SSD-based gluster volume, but still slow.  Lately its been
>> getting slower, too...Don't know why.  The FreeNAS server reports network
>> loads around 4MB/s on its 10Gbe interface, so its not network constrained.
>> At 4MB/s, I'd sure hope the 12 spindle SAS interface wasn't constrained
>> either.  (and disk I/O operations on the NAS itself complete much
>> faster).
>> So, running a test on my NAS against an ISO file I haven't accessed in
>> months:
>>  # dd
>> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
>> of=/dev/null bs=1024k count=500
>>
>> 500+0 records in
>> 500+0 records out
>> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec)
>> Running it on one of my hosts:
>> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k
>> count=500
>> 500+0 records in
>> 500+0 records out
>> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s
>> (I don't know if this is a true apples to apples comparison, as I don't
>> have a large file inside this VM's image).  Even this is faster than I
>> often see.
>> I have a VoIP Phone server running as a VM.  Voicemail and other
>> recordings usually fail due to IO issues opening and writing the files.
>> Often, the first 4 or so seconds of the recording is missed; sometimes the
>> entire thing just fails.  I didn't use to have this problem, but it's
>> definitely been getting worse.  I finally bit the bullet and ordered a
>> physical server dedicated for my VoIP System...But I still want to figure
>> out why I'm having all these IO problems.  I read on the list of people
>> running 30+ VMs...I feel that my IO can't take any more VMs with any
>> semblance of reliability.  We have a Quickbooks server on here too
>> (windows), and the performance is abysmal; my CPA is charging me extra
>> because of all the lost staff time waiting on the system to respond and
>> generate reports.
>> I'm at my wits' end... I started with gluster on SSD with

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-14 Thread Leo David
Hi,
Thank you Alex, I was looking for some optimisation settings as well, since
I am pretty much in the same boat, using ssd based replicate-distributed
volumes across 12 hosts.
Could anyone else (maybe even from the ovirt or rhev team) validate these
settings or add some other tweaks as well, so we can use them as standard ?
Thank you very much again !

On Mon, Apr 15, 2019, 05:56 Alex McWhirter  wrote:

> On 2019-04-14 20:27, Jim Kusznir wrote:
>
> Hi all:
>
> I've had I/O performance problems pretty much since the beginning of using
> oVirt.  I've applied several upgrades as time went on, but strangely, none
> of them have alleviated the problem.  VM disk I/O is still very slow to the
> point that running VMs is often painful; it notably affects nearly all my
> VMs, and makes me leery of starting any more.  I'm currently running 12 VMs
> and the hosted engine on the stack.
>
> My configuration started out with 1Gbps networking and hyperconverged
> gluster running on a single SSD on each node.  It worked, but I/O was
> painfully slow.  I also started running out of space, so I added an SSHD on
> each node, created another gluster volume, and moved VMs over to it.  I
> also ran that on a dedicated 1Gbps network.  I had recurring disk failures
> (seems that disks only lasted about 3-6 months; I warrantied all three at
> least once, and some twice before giving up).  I suspect the Dell PERC 6/i
> was partly to blame; the raid card refused to see/acknowledge the disk, but
> plugging it into a normal PC showed no signs of problems.  In any case,
> performance on that storage was notably bad, even though the gig-e
> interface was rarely taxed.
>
> I put in 10Gbps ethernet and moved all the storage onto that nonetheless,
> as several people here said that 1Gbps just wasn't fast enough.  Some
> aspects improved a bit, but disk I/O is still slow.  And I was still having
> problems with the SSHD data gluster volume eating disks, so I bought a
> dedicated NAS server (supermicro 12 disk dedicated FreeNAS NFS storage
> system on 10Gbps ethernet).  Set that up.  I found that it was actually
> FASTER than the SSD-based gluster volume, but still slow.  Lately its been
> getting slower, too...Don't know why.  The FreeNAS server reports network
> loads around 4MB/s on its 10Gbe interface, so its not network constrained.
> At 4MB/s, I'd sure hope the 12 spindle SAS interface wasn't constrained
> either.  (and disk I/O operations on the NAS itself complete much
> faster).
>
> So, running a test on my NAS against an ISO file I haven't accessed in
> months:
>
>  # dd
> if=en_windows_server_2008_r2_standard_enterprise_datacenter_and_web_x64_dvd_x15-59754.iso
> of=/dev/null bs=1024k count=500
>
> 500+0 records in
> 500+0 records out
> 524288000 bytes transferred in 2.459501 secs (213168465 bytes/sec)
>
> Running it on one of my hosts:
> root@unifi:/home/kusznir# time dd if=/dev/sda of=/dev/null bs=1024k
> count=500
> 500+0 records in
> 500+0 records out
> 524288000 bytes (524 MB, 500 MiB) copied, 7.21337 s, 72.7 MB/s
>
> (I don't know if this is a true apples to apples comparison, as I don't
> have a large file inside this VM's image).  Even this is faster than I
> often see.
>
> I have a VoIP Phone server running as a VM.  Voicemail and other
> recordings usually fail due to IO issues opening and writing the files.
> Often, the first 4 or so seconds of the recording is missed; sometimes the
> entire thing just fails.  I didn't use to have this problem, but it's
> definitely been getting worse.  I finally bit the bullet and ordered a
> physical server dedicated for my VoIP System...But I still want to figure
> out why I'm having all these IO problems.  I read on the list of people
> running 30+ VMs...I feel that my IO can't take any more VMs with any
> semblance of reliability.  We have a Quickbooks server on here too
> (windows), and the performance is abysmal; my CPA is charging me extra
> because of all the lost staff time waiting on the system to respond and
> generate reports.
>
> I'm at my wits' end... I started with gluster on SSD with 1Gbps network,
> migrated to 10Gbps network, and now to dedicated high performance NAS box
> over NFS, and still have performance issues.I don't know how to
> troubleshoot the issue any further, but I've never had these kinds of
> issues when I was playing with other VM technologies.  I'd like to get to
> the point where I can resell virtual servers to customers, but I can't do
> so with my current performance levels.
>
> I'd greatly appreciate help troubleshooting this further.
>
> --Jim
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
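
Alex's exact option list got truncated in this archive, so it cannot be quoted
here. As a rough illustration only, the usual starting point for a VM-store
Gluster volume is the bundled "virt" profile plus the o-direct options that
come up later in this archive; the volume name is a placeholder and these are
not a validated standard:

# gluster volume set data group virt
# gluster volume set data performance.strict-o-direct on
# gluster volume set data network.remote-dio off

The "Optimize for Virt Store" button Leo mentions is intended to apply a
similar set of options from the engine side.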

[ovirt-users] Re: Controller recommendation - LSI2008/9265

2019-04-05 Thread Leo David
Thank you Strahil for that.

On Fri, Apr 5, 2019, 06:45 Strahil  wrote:

> Adding Gluster users' mail list.
> On Apr 5, 2019 06:02, Leo David  wrote:
>
> Hi Everyone,
> Any thoughts on this ?
>
>
> On Wed, Apr 3, 2019, 17:02 Leo David  wrote:
>
> Hi Everyone,
> For a hyperconverged setup started with 3 nodes and going up in time up to
> 12 nodes, I have to choose between LSI2008 ( jbod ) and LSI9265 (raid).
> Perc h710 ( raid ) might be an option too, but on a different chassis.
> There will not be many disks installed on each node, so the replication
> will be replica 3 distributed-replicated volumes across the nodes as:
> node1/disk1  node2/disk1  node3/disk1
> node1/disk2  node2/disk2  node3/disk2
> and so on...
> As I add nodes to the cluster, I intend to expand the volumes using
> the same rule.
> Which would be the better way: to use JBOD cards ( no cache ), or a RAID
> card and create RAID0 arrays ( one for each disk ) and therefore have a bit
> of RAID cache ( 512MB ) ?
> Is RAID caching a benefit underneath ovirt/gluster, as long as I
> go for a "JBOD"-style installation anyway ?
> Thank you very much !
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6STI7U7LTOXSSH6WUNHX63WDIF2LZ46K/


[ovirt-users] Re: Controller recommendation - LSI2008/9265

2019-04-04 Thread Leo David
Hi Everyone,
Any thoughts on this ?


On Wed, Apr 3, 2019, 17:02 Leo David  wrote:

> Hi Everyone,
> For a hyperconverged setup started with 3 nodes and going up in time up to
> 12 nodes, I have to choose between LSI2008 ( jbod ) and LSI9265 (raid).
> Perc h710 ( raid ) might be an option too, but on a different chassis.
> There will not be many disks installed on each node, so the replication
> will be replica 3 distributed-replicated volumes across the nodes as:
> node1/disk1  node2/disk1  node3/disk1
> node1/disk2  node2/disk2  node3/disk2
> and so on...
> As I add nodes to the cluster, I intend to expand the volumes using
> the same rule.
> Which would be the better way: to use JBOD cards ( no cache ), or a RAID
> card and create RAID0 arrays ( one for each disk ) and therefore have a bit
> of RAID cache ( 512MB ) ?
> Is RAID caching a benefit underneath ovirt/gluster, as long as I
> go for a "JBOD"-style installation anyway ?
> Thank you very much !
> --
> Best regards, Leo David
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LVFAIC3GSJ7V5VVTIITXTZHRIVSO7UW3/


[ovirt-users] Controller recommendation - LSI2008/9265

2019-04-03 Thread Leo David
Hi Everyone,
For a hyperconverged setup started with 3 nodes and going up in time up to
12 nodes, I have to choose between LSI2008 ( jbod ) and LSI9265 (raid).
Perc h710 ( raid ) might be an option too, but on a different chassis.
There will not be many disks installed on each node, so the replication will
be replica 3 distributed-replicated volumes across the nodes as:
node1/disk1  node2/disk1  node3/disk1
node1/disk2  node2/disk2  node3/disk2
and so on...
As I add nodes to the cluster, I intend to expand the volumes using the
same rule.
Which would be the better way: to use JBOD cards ( no cache ), or a RAID
card and create RAID0 arrays ( one for each disk ) and therefore have a bit
of RAID cache ( 512MB ) ?
Is RAID caching a benefit underneath ovirt/gluster, as long as I go for a
"JBOD"-style installation anyway ?
Thank you very much !
-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TQ432GPTJEP3J6WOJ6C3MXWXJSRSIXNP/
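
For what it's worth, the brick layout described above maps onto a volume
create command along these lines; host names, brick paths and the volume name
are placeholders, not a tested recommendation:

gluster volume create data replica 3 \
    node1:/gluster_bricks/disk1/brick node2:/gluster_bricks/disk1/brick node3:/gluster_bricks/disk1/brick \
    node1:/gluster_bricks/disk2/brick node2:/gluster_bricks/disk2/brick node3:/gluster_bricks/disk2/brick

Bricks are grouped into replica sets in the order they are listed, so each set
of three stays on three different nodes, and further sets can be appended with
"gluster volume add-brick" as nodes are added.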


[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
689"}, "memory": 34359738368,
"memory_policy": {"guaranteed": 34359738368, "max": 34359738368},
"migration": {"auto_converge": "inherit", "compressed": "inherit"},
"migration_downtime": -1, "multi_queues_enabled": true, "name":
"external-HostedEngineLocal", "next_run_configuration_exists": false,
"nics": [], "numa_nodes": [], "numa_tune_mode": "interleave", "origin":
"external", "original_template": {"href":
"/ovirt-engine/api/templates/----", "id":
"----"}, "os": {"boot": {"devices":
["hd"]}, "type": "other"}, "permissions": [], "placement_policy":
{"affinity": "migratable"}, "quota": {"id":
"d27a97ee-5564-11e9-bba0-00163e41da1e"}, "reported_devices": [],
"run_once": false, "sessions": [], "small_icon": {"href":
"/ovirt-engine/api/icons/a29967f4-53e5-4acc-92d8-4a971b54e655", "id":
"a29967f4-53e5-4acc-92d8-4a971b54e655"}, "snapshots": [], "sso":
{"methods": [{"id": "guest_agent"}]}, "start_paused": false, "stateless":
false, "statistics": [], "status": "up", "storage_error_resume_behaviour":
"auto_resume", "tags": [], "template": {"href":
"/ovirt-engine/api/templates/----", "id":
"----"}, "time_zone": {"name": "Etc/GMT"},
"type": "desktop", "usb": {"enabled": false}, "watchdogs": []}]},
"attempts": 24, "changed": false}

The engine eventually went up, and I could log in to the UI. There I found an
additional stopped VM called "external-HostedEngineLocal" - I assume the
playbook didn't manage to delete it.
I just don't know whether this installation is reliable, considering it is a
fresh installation from the official ISO image...
Do you think it would be better to wait for the next release, when hopefully
gluster 5.5 will be integrated too ?

Thank you very much for your answers !





On Tue, Apr 2, 2019 at 6:31 PM Sahina Bose  wrote:

> On Tue, Apr 2, 2019 at 8:14 PM Leo David  wrote:
> >
> > Just to loop in - I forgot to hit "Reply all"
> >
> > I have deleted everything in the engine gluster mount path, unmounted
> the engine gluster volume ( not deleted the volume ) ,  and started the
> wizard with "Use already configured storage". I have pointed to use this
> gluster volume,  volume gets mounted under the correct path, but
> installation still fails:
> >
> > [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> > [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[]". HTTP response code is 400.
> > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
> response code is 400."}
>
> And I guess we don't have the engine logs to look at this?
> Is there any way you can access the engine console to check?
>
> >
> > On the node's vdsm.log I can continuously see:
> > 2019-04-02 13:02:18,832+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer]
> RPC call Host.getStats succeeded in 0.03 seconds (__init__:312)
> > 2019-04-02 13:02:19,906+0100 INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
> > 2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
> > 2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> > 2019-04-02 13:02:21,737+0100 INFO  (periodic/2) [vdsm.api] START
> repoStats(domains=()) from=internal,
> task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
> > 2019-04-02 13:02:21,738+0100 INFO  (periodic/2) [vdsm.api] FINISH
> repoStats return={} from=internal,
> task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:54)
> >
>
> Any calls to "START connectStorageServer" in vdsm.log?
>
> > Should I perform an "engine-cleanup",  delete lvms
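
To answer Sahina's vdsm.log question directly on the host, something along
these lines can be used; the path is the default VDSM log location and the
pattern list is only an example:

grep -E 'connectStorageServer|createStorageDomain|StorageDomain' /var/log/vdsm/vdsm.log | tail -n 50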

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Just to loop in - I forgot to hit "Reply all"

I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume ( not deleted the volume ) ,  and started the wizard
with "Use already configured storage". I have pointed to use this gluster
volume,  volume gets mounted under the correct path, but installation still
fails:

[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO  (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO  (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09
(api:54)

Should I perform an "engine-cleanup",  delete lvms from Cockpit and start
it all over ?
Did anyone successfully used this particular iso image
"ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node
installation ?
Thank you !
Leo

On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose  wrote:

> Is it possible you have not cleared the gluster volume between installs?
>
> What's the corresponding error in vdsm.log?
>
>
> On Tue, Apr 2, 2019 at 4:07 PM Leo David  wrote:
> >
> > And there it is the last lines on the ansible_create_storage_domain log:
> >
> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> > "changed": false,
> > "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> > "failed": true,
> > "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[]\". HTTP response code is 400."
> > }"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
> "ansible_play_batch" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
> 'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True,
> u\'exception\': u\'Traceback (most recent

[ovirt-users] Fwd: Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
-- Forwarded message -
From: Leo David 
Date: Tue, Apr 2, 2019, 15:10
Subject: Re: [ovirt-users] Re: HE - engine gluster volume - not mounted
To: Sahina Bose 


I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume ( not deleted the volume ) ,  and started the wizard
with "Use already configured storage". I have pointed to use this gluster
volume,  volume gets mounted under the correct path, but installation still
fails:

[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO  (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO  (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO  (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO  (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09
(api:54)

Should I perform an "engine-cleanup",  delete lvms from Cockpit and start
it all over ?
Did anyone successfully used this particular iso image
"ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single node
installation ?
Thank you !
Leo


On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose  wrote:

> Is it possible you have not cleared the gluster volume between installs?
>
> What's the corresponding error in vdsm.log?
>
>
> On Tue, Apr 2, 2019 at 4:07 PM Leo David  wrote:
> >
> > And there it is the last lines on the ansible_create_storage_domain log:
> >
> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> > "changed": false,
> > "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response, body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> > "failed": true,
> > "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[]\". HTTP response code is 400."
> > }"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "play_hosts" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
> "ansible_play_batch" type "" value: "[]"
> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
> 'ansible

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
And there it is the last lines on the ansible_create_storage_domain log:

2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
"otopi_storage_domain_details" type "" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
in main\nstorage_domains_module.post_create_check(sd_id)\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
in post_create_check\nid=storage_domain.id,\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
in _internal_add\nreturn future.wait() if wait else future\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
wait\nreturn self._code(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
callback\nself._check_fault(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
_check_fault\nself._raise_error(response, body)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
_raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
Fault detail is \"[]\". HTTP response code is 400.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\".
HTTP response code is 400."
}"
2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
"ansible_play_hosts" type "" value: "[]"
2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
"play_hosts" type "" value: "[]"
2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
"ansible_play_batch" type "" value: "[]"
2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
'ansible_result': u'type: \nstr: {\'_ansible_parsed\': True,
u\'exception\': u\'Traceback (most recent call last):\\n  File
"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664,
in main\\nstorage_domains_module.post_create_check(sd_id)\\n  File
"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526',
'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook':
u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args
 kwargs
ignore_errors:None
2019-04-02 10:53:49,148+0100 INFO ansible stats {
"ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "01:15 Minutes",
"ansible_result": "type: \nstr: {u'localhost':
{'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}",
"ansible_type": "finish",
"status": "FAILED"
}
2019-04-02 10:53:49,149+0100 INFO SUMMARY:
DurationTask Name

[ < 1 sec ] Execute just a specific set of steps
[  00:02  ] Force facts gathering
[  00:02  ] Check local VM dir stat
[  00:02  ] Obtain SSO token using username/password credentials
[  00:02  ] Fetch host facts
[  00:01  ] Fetch cluster ID
[  00:02  ] Fetch cluster facts
[  00:02  ] Fetch Datacenter facts
[  00:01  ] Fetch Datacenter ID
[  00:01  ] Fetch Datacenter name
[  00:02  ] Add glusterfs storage domain
[  00:02  ] Get storage domain details
[  00:02  ] Find the appliance OVF
[  00:02  ] Parse OVF
[  00:02  ] Get required size
[ FAILED  ] Activate storage domain

Any idea on how to escalate this issue ?
It just does not make sense not to be able to install a fresh node from
scratch...

Have a nice day  !

Leo


On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das  wrote:

> Hi Leo,
>  Can you please paste "df -Th" and "gluster v status" out put ?
> Wanted to make sure engine mounted and volumes and bricks are up.
> What does vdsm log say?
>
> On Tue, Apr 2, 2019 at 2:06 PM Leo David  wrote:
>
>> Thank you very much !
>> I have just installed a new fresh node,   and triggered the single
>> instance hyperconverged setup. It seems it fails at the hosted engine final
>> steps of deployment:
>>  INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Hi,
I have just hit "Redeploy"  and not the volume seems to be mounted:

Filesystem                                              Type            Size  Used Avail Use% Mounted on
/dev/mapper/onn-ovirt--node--ng--4.3.2--0.20190319.0+1 ext4             57G  3.0G   51G   6% /
devtmpfs                                                devtmpfs         48G     0   48G   0% /dev
tmpfs                                                   tmpfs            48G  4.0K   48G   1% /dev/shm
tmpfs                                                   tmpfs            48G   34M   48G   1% /run
tmpfs                                                   tmpfs            48G     0   48G   0% /sys/fs/cgroup
/dev/sda1                                               ext4            976M  183M  726M  21% /boot
/dev/mapper/onn-var                                     ext4             15G  4.4G  9.5G  32% /var
/dev/mapper/onn-tmp                                     ext4            976M  3.2M  906M   1% /tmp
/dev/mapper/onn-var_log                                 ext4             17G   56M   16G   1% /var/log
/dev/mapper/onn-var_log_audit                           ext4            2.0G  8.7M  1.8G   1% /var/log/audit
/dev/mapper/onn-home                                    ext4            976M  2.6M  907M   1% /home
/dev/mapper/onn-var_crash                               ext4            9.8G   37M  9.2G   1% /var/crash
tmpfs                                                   tmpfs           9.5G     0  9.5G   0% /run/user/0
/dev/mapper/gluster_vg_sdb-gluster_lv_engine            xfs             100G   35M  100G   1% /gluster_bricks/engine
c6100-ch3-node1-gluster.internal.lab:/engine            fuse.glusterfs  100G  1.1G   99G   2% /rhev/data-center/mnt/glusterSD/c6100-ch3-node1-gluster.internal.lab:_engine

[root@c6100-ch3-node1 ovirt-hosted-engine-setup]# gluster v status
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick c6100-ch3-node1-gluster.internal.lab:/gluster_bricks/engine/engine   49152     0          Y       25397
Task Status of Volume engine
--
There are no active volume tasks

The problem is that the deployment is still not finishing; now the error is:

INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}

I just do not understand anymore...



On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das  wrote:

> Hi Leo,
>  Can you please paste "df -Th" and "gluster v status" out put ?
> Wanted to make sure engine mounted and volumes and bricks are up.
> What does vdsm log say?
>
> On Tue, Apr 2, 2019 at 2:06 PM Leo David  wrote:
>
>> Thank you very much !
>> I have just installed a new fresh node,   and triggered the single
>> instance hyperconverged setup. It seems it fails at the hosted engine final
>> steps of deployment:
>>  INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
>> [ INFO ] ok: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
>> domain]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free
>> space]
>> [ INFO ] skipping: [localhost]
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[Cannot attach Storage. There is no active Host in the Data Center.]".
>> HTTP response code is 409.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
>> reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage.
>> There is no active Host in the Data Center.]\". HTTP response code is 409."}
>> Also,  the
>> ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
>>  throws
>> the following:
>>
>> 2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
>> "otopi_storage_domain_details" type "" value: "{
>> "changed": false,
>> "exception": "Traceback (most recent call last):\n  File
>> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
>> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
>> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
>> in post_create_check\nid=storage_domain.id,\n  File
>> \"/usr/lib64/python2.7/site-packages/ovi

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-02 Thread Leo David
Thank you very much !
I have just installed a new fresh node,   and triggered the single instance
hyperconverged setup. It seems it fails at the hosted engine final steps of
deployment:
 INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage domain]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free space]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
"[Cannot attach Storage. There is no active Host in the Data Center.]".
HTTP response code is 409.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[Cannot attach Storage.
There is no active Host in the Data Center.]\". HTTP response code is 409."}
Also,  the
ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
throws
the following:

2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
"otopi_storage_domain_details" type "" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
in main\nstorage_domains_module.post_create_check(sd_id)\n  File
\"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
in post_create_check\nid=storage_domain.id,\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
in _internal_add\nreturn future.wait() if wait else future\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
wait\nreturn self._code(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
callback\nself._check_fault(response)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
_check_fault\nself._raise_error(response, body)\n  File
\"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
_raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
Fault detail is \"[Cannot attach Storage. There is no active Host in the
Data Center.]\". HTTP response code is 409.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot
attach Storage. There is no active Host in the Data Center.]\". HTTP
response code is 409."
}"

I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So far,
I am unable to deploy oVirt single node Hyperconverged...
Any thoughts ?



On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi 
wrote:

>
>
> On Mon, Apr 1, 2019 at 6:14 PM Leo David  wrote:
>
>> Thank you Simone.
>> I've decided to go for a fresh install from the ISO, and I'll keep you
>> posted if any troubles arise. But I am still trying to understand which
>> services mount the LVs and volumes after configuration. There is
>> nothing related in fstab, so I assume there are a couple of .mount files
>> somewhere in the filesystem.
>> I'm just trying to understand the node's underlying workflow.
>>
>
> hosted-engine configuration is stored
> in /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount
> the hosted-engine storage domain according to that and so ovirt-ha-agent
> will be able to start the engine VM.
> Everything else is just in the engine DB.
>
>
>>
>> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi  wrote:
>>
>>> Hi,
>>> to understand what's failing I'd suggest to start attaching setup logs.
>>>
>>> On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:
>>>
>>>> Hello Everyone,
>>>> Using 4.3.2 installation, and after running through HyperConverged
>>>> Setup,  at the last stage it fails. It seems that the previously created
>>>> "engine" volume is not mounted under "/rhev" path, therefore the setup
>>>> cannot finish the deployment.
>>>> Any idea which services are responsible for mounting the volumes on
>>>> the oVirt Node distribution ? I'm thinking that maybe this particular one
>>>> failed to start for some reason...
>>>> Thank you very much !
>>>>
>>>> --
>>>> Best regards, Leo Da

[ovirt-users] Re: HE - engine gluster volume - not mounted

2019-04-01 Thread Leo David
Thank you Simone.
I've decided to go for a fresh install from the ISO, and I'll keep you posted
if any troubles arise. But I am still trying to understand which services
mount the LVs and volumes after configuration. There is nothing related in
fstab, so I assume there are a couple of .mount files somewhere in the
filesystem.
I'm just trying to understand the node's underlying workflow.

On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi  wrote:

> Hi,
> to understand what's failing I'd suggest to start attaching setup logs.
>
> On Sun, Mar 31, 2019 at 5:06 PM Leo David  wrote:
>
>> Hello Everyone,
>> Using 4.3.2 installation, and after running through HyperConverged
>> Setup,  at the last stage it fails. It seems that the previously created
>> "engine" volume is not mounted under "/rhev" path, therefore the setup
>> cannot finish the deployment.
>> Any idea which services are responsible for mounting the volumes on
>> the oVirt Node distribution ? I'm thinking that maybe this particular one
>> failed to start for some reason...
>> Thank you very much !
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4TIXZ3GIBZHSJ7IC2VHC/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WND2O6L77H5CMKG45ZKA5GIMFUGGAHZW/


[ovirt-users] HE - engine gluster volume - not mounted

2019-03-31 Thread Leo David
Hello Everyone,
Using 4.3.2 installation, and after running through HyperConverged Setup,
at the last stage it fails. It seems that the previously created "engine"
volume is not mounted under "/rhev" path, therefore the setup cannot finish
the deployment.
Any idea which services are responsible for mounting the volumes on the
oVirt Node distribution ? I'm thinking that maybe this particular one failed
to start for some reason...
Thank you very much !

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZWF4TIXZ3GIBZHSJ7IC2VHC/


[ovirt-users] Re: [Gluster-users] Announcing Gluster release 5.5

2019-03-28 Thread Leo David
Olaf, thank you very much for this feedback. I was just about to upgrade my
12-node 4.2.8 production cluster, and it seems you have spared me a lot of
trouble.
Though, I thought that 4.3.1 comes with gluster 5.5, which has solved the
issues, and that the upgrade procedure works seamlessly.
Not sure now how long, or for which oVirt version, to wait before upgrading my
cluster...

On Thu, Mar 28, 2019, 18:48  wrote:

> Dear All,
>
> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
> previous upgrades from 4.1 to 4.2 etc. went rather smooth, this one was a
> different experience. After first trying a test upgrade on a 3 node setup,
> which went fine. i headed to upgrade the 9 node production platform,
> unaware of the backward compatibility issues between gluster 3.12.15 ->
> 5.3. After upgrading 2 nodes, the HA engine stopped and wouldn't start.
> Vdsm wasn't able to mount the engine storage domain, since /dom_md/metadata
> was missing or couldn't be accessed. Restoring this file by getting a good
> copy of the underlying bricks, removing the file from the underlying bricks
> where the file was 0 bytes and mark with the stickybit, and the
> corresponding gfid's. Removing the file from the mount point, and copying
> back the file on the mount point. Manually mounting the engine domain,  and
> manually creating the corresponding symbolic links in /rhev/data-center and
> /var/run/vdsm/storage and fixing the ownership back to vdsm.kvm (which was
> root.root), i was able to start the HA engine again. Since the engine was
> up again, and things seemed rather unstable i decided to continue the
> upgrade on the other nodes suspecting an incompatibility in gluster
> versions, i thought would be best to have them all on the same version
> rather soonish. However things went from bad to worse, the engine stopped
> again, and all vm’s stopped working as well.  So on a machine outside the
> setup and restored a backup of the engine taken from version 4.2.8 just
> before the upgrade. With this engine I was at least able to start some vm’s
> again, and finalize the upgrade. Once the upgraded, things didn’t stabilize
> and also lose 2 vm’s during the process due to image corruption. After
> figuring out gluster 5.3 had quite some issues I was as lucky to see
> gluster 5.5 was about to be released, on the moment the RPM’s were
> available I’ve installed those. This helped a lot in terms of stability,
> for which I’m very grateful! However the performance is unfortunately
> terrible, it’s about 15% of what the performance was running gluster
> 3.12.15. It’s strange since a simple dd shows ok performance, but our
> actual workload doesn’t. While I would expect the performance to be better,
> due to all improvements made since gluster version 3.12. Does anybody share
> the same experience?
> I really hope gluster 6 will soon be tested with ovirt and released, and
> things start to perform and stabilize again..like the good old days. Of
> course when I can do anything, I’m happy to help.
>
> I think the following short list of issues we have after the migration;
> Gluster 5.5;
> -   Poor performance for our workload (mostly write dependent)
> -   VM’s randomly pause on unknown storage errors, which are “stale
> file’s”. corresponding log; Lookup on shard 797 failed. Base file gfid =
> 8a27b91a-ff02-42dc-bd4c-caa019424de8 [Stale file handle]
> -   Some files are listed twice in a directory (probably related the
> stale file issue?)
> Example;
> ls -la
> /rhev/data-center/59cd53a9-0003-02d7-00eb-01e3/313f5d25-76af-4ecd-9a20-82a2fe815a3c/images/4add6751-3731-4bbd-ae94-aaeed12ea450/
> total 3081
> drwxr-x---.  2 vdsm kvm4096 Mar 18 11:34 .
> drwxr-xr-x. 13 vdsm kvm4096 Mar 19 09:42 ..
> -rw-rw.  1 vdsm kvm 1048576 Mar 28 12:55
> 1a7cf259-6b29-421d-9688-b25dfaafb13c
> -rw-rw.  1 vdsm kvm 1048576 Mar 28 12:55
> 1a7cf259-6b29-421d-9688-b25dfaafb13c
> -rw-rw.  1 vdsm kvm 1048576 Jan 27  2018
> 1a7cf259-6b29-421d-9688-b25dfaafb13c.lease
> -rw-r--r--.  1 vdsm kvm 290 Jan 27  2018
> 1a7cf259-6b29-421d-9688-b25dfaafb13c.meta
> -rw-r--r--.  1 vdsm kvm 290 Jan 27  2018
> 1a7cf259-6b29-421d-9688-b25dfaafb13c.meta
>
> - brick processes sometimes starts multiple times. Sometimes I’ve 5 brick
> processes for a single volume. Killing all glusterfsd’s for the volume on
> the machine and running gluster v start  force usually just starts one
> after the event, from then on things look all right.
>
> Ovirt 4.3.2.1-1.el7
> -   All vms images ownership are changed to root.root after the vm is
> shutdown, probably related to;
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795 but not only scoped
> to the HA engine. I’m still in compatibility mode 4.2 for the cluster and
> for the vm’s, but upgraded to version ovirt 4.3.2
> -   The network provider is set to ovn, which is fine..actually cool,
> only the “ovs-vswitchd” is a CPU hog, and utilizes 100%
> -   It seems on all 
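
One concrete item from olaf's list - duplicated brick processes - can be
checked and recovered roughly as described above; the volume name "data" is a
placeholder:

ps -ef | grep '[g]lusterfsd' | grep data
gluster volume start data force

The first command lists the brick processes for the volume (more than one per
brick path indicates the problem); after stopping the duplicates, the second
brings a single clean brick process back up per brick.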

[ovirt-users] Re: VM disk corruption with LSM on Gluster

2019-03-27 Thread Leo David
Hi,
I can confirm that after setting these two options, I haven't encountered
disk corruptions anymore.
The downside is that, at least for me, it had a pretty big impact on
performance.
The IOPS really went down when running fio tests inside the VMs.

On Wed, Mar 27, 2019, 07:03 Krutika Dhananjay  wrote:

> Could you enable strict-o-direct and disable remote-dio on the src volume
> as well, restart the vms on "old" and retry migration?
>
> # gluster volume set  performance.strict-o-direct on
> # gluster volume set  network.remote-dio off
>
> -Krutika
>
> On Tue, Mar 26, 2019 at 10:32 PM Sander Hoentjen 
> wrote:
>
>> On 26-03-19 14:23, Sahina Bose wrote:
>> > +Krutika Dhananjay and gluster ml
>> >
>> > On Tue, Mar 26, 2019 at 6:16 PM Sander Hoentjen 
>> wrote:
>> >> Hello,
>> >>
>> >> tl;dr We have disk corruption when doing live storage migration on
>> oVirt
>> >> 4.2 with gluster 3.12.15. Any idea why?
>> >>
>> >> We have a 3-node oVirt cluster that is both compute and
>> gluster-storage.
>> >> The manager runs on separate hardware. We are running out of space on
>> >> this volume, so we added another Gluster volume that is bigger, put a
>> >> storage domain on it and then we migrated VM's to it with LSM. After
>> >> some time, we noticed that (some of) the migrated VM's had corrupted
>> >> filesystems. After moving everything back with export-import to the old
>> >> domain where possible, and recovering from backups where needed we set
>> >> off to investigate this issue.
>> >>
>> >> We are now at the point where we can reproduce this issue within a day.
>> >> What we have found so far:
>> >> 1) The corruption occurs at the very end of the replication step, most
>> >> probably between START and FINISH of diskReplicateFinish, before the
>> >> START merge step
>> >> 2) In the corrupted VM, at some place where data should be, this data
>> is
>> >> replaced by zero's. This can be file-contents or a directory-structure
>> >> or whatever.
>> >> 3) The source gluster volume has different settings then the
>> destination
>> >> (Mostly because the defaults were different at creation time):
>> >>
>> >> Setting old(src)  new(dst)
>> >> cluster.op-version  30800 30800 (the same)
>> >> cluster.max-op-version  31202 31202 (the same)
>> >> cluster.metadata-self-heal  off   on
>> >> cluster.data-self-heal  off   on
>> >> cluster.entry-self-heal off   on
>> >> performance.low-prio-threads1632
>> >> performance.strict-o-direct off   on
>> >> network.ping-timeout4230
>> >> network.remote-dio  enableoff
>> >> transport.address-family- inet
>> >> performance.stat-prefetch   off   on
>> >> features.shard-block-size   512MB 64MB
>> >> cluster.shd-max-threads 1 8
>> >> cluster.shd-wait-qlength1024  1
>> >> cluster.locking-scheme  full  granular
>> >> cluster.granular-entry-heal noenable
>> >>
>> >> 4) To test, we migrate some VM's back and forth. The corruption does
>> not
>> >> occur every time. To this point it only occurs from old to new, but we
>> >> don't have enough data-points to be sure about that.
>> >>
>> >> Anybody an idea what is causing the corruption? Is this the best list
>> to
>> >> ask, or should I ask on a Gluster list? I am not sure if this is oVirt
>> >> specific or Gluster specific though.
>> > Do you have logs from old and new gluster volumes? Any errors in the
>> > new volume's fuse mount logs?
>>
>> Around the time of corruption I see the message:
>> The message "I [MSGID: 133017] [shard.c:4941:shard_seek]
>> 0-ZoneA_Gluster1-shard: seek called on
>> 7fabc273-3d8a-4a49-8906-b8ccbea4a49f. [Operation not supported]" repeated
>> 231 times between [2019-03-26 13:14:22.297333] and [2019-03-26
>> 13:15:42.912170]
>>
>> I also see this message at other times, when I don't see the corruption
>> occur, though.
>>
>> --
>> Sander
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/M3T2VGGGV6DE643ZKKJUAF274VSWTJFH/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZUIRM5PT4Y4USOSDGSUEP3YEE23LE4WG/
>
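
For context, the in-guest fio test Leo refers to above can be reproduced with
something along these lines; the file path, size, block size and queue depth
are illustrative only:

fio --name=randwrite --filename=/var/tmp/fio.test --size=1G --bs=4k \
    --rw=randwrite --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based

Comparing the result before and after setting performance.strict-o-direct=on
and network.remote-dio=off on the volume shows how much of the slowdown comes
from those two options.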

[ovirt-users] Re: 4.2.8 to 4.3.2 upgrade

2019-03-25 Thread Leo David
Thank you very much Jayme.
You have just saved me from potential problems with the upgrade.

On Fri, Mar 22, 2019, 12:02 Jayme  wrote:

> Apparently a new version of gluster was just released that addresses the
> issue that is causing the problems, I’d wait and make sure that whatever
> version you are upgrading to has that new package
>
> On Fri, Mar 22, 2019 at 1:53 AM Leo David  wrote:
>
>> Hi everyone,
>> I have seen a lot of threads here regarding the 4.3.x release and
>> problems at different layers, most of them related to the underlying gluster
>> storage.
>> I would do an upgrade though, to benefit from the newly added features.
>> My thoughts would be:
>> 1. did anyone successfully go through this process, and did any problems occur
>> during or after the upgrade ?
>> 2. any sincere recommendation like "if it works don't fix it", considering
>> the platform is running in production ?
>> I would really appreciate your opinion.
>> Thank you very much !
>>
>> Leo
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4JZE7QNC7OZPWL4GZQDJW5KLWCCHXPK/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LMMQGJIVHJDGNTW6WM6HUU3HRDOMY3EQ/
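
To act on the advice above before upgrading, one can first check on a node
which gluster build the update would actually pull in (a sketch; these
commands make no changes by themselves):

    yum clean metadata
    yum list available glusterfs --showduplicates
    yum check-update glusterfs vdsm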


[ovirt-users] 4.2.8 to 4.3.2 upgrade

2019-03-21 Thread Leo David
Hi everyone,
I have seen a lot of threads here regarding the 4.3.x release and
problems at different layers, most of them related to the underlying gluster
storage.
I would do an upgrade though, to benefit from the newly added features.
My thoughts would be:
1. did anyone successfully go through this process, and did any problems occur
during or after the upgrade ?
2. any sincere recommendation like "if it works don't fix it", considering
the platform is running in production ?
I would really appreciate your opinion.
Thank you very much !

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4JZE7QNC7OZPWL4GZQDJW5KLWCCHXPK/


[ovirt-users] Re: Dell OMSA on oVirt node

2019-03-21 Thread Leo David
Thank you guys !
Wow... this complicates things, since reinstalling the hosts with
centos instead of the oVirt node distribution would give me a bit of a headache.
The dell hosts are already running in production.
I think it would be really nice if the standard node supported installing
management software for well known brands like dell, hp, lenovo,
supermicro, etc... I am sure we cannot track every server brand in the
world, but in reality there are just a couple of them, the most widespread across
ovirt node installations.
I will try to see what I can do, although I am a bit afraid of breaking
the os somehow...

On Thu, Mar 21, 2019, 21:31 Jayme  wrote:

> Agree with Chris here, regular CentOS 7 hosts may be easier to manage in
> this case.  Not much persists when updating oVirt node, some select
> folders/files persist on updates such as /etc and /root for example but I'm
> not sure how custom packages/rpms are handled.  I believe there may be ways
> you can have packages persist but I'm not familiar with the process.  I
> know the idea of package persistence was brought up before but I'm not sure
> if/when/how it was implemented.
>
>
>
> On Thu, Mar 21, 2019 at 3:52 PM Chris Adams  wrote:
>
>> Once upon a time, Leo David  said:
>> > Hello everyone,
>> > I would really like to have installed Dell OMSA on my dell nodes so I
>> can
>> > benefit from lots of administration features. Has anyone managed to have
>> it
>> > installed ?
>>
>> oVirt Node has the regular CentOS yum repos disabled, but Dell's OMSA
>> expects them (and possibly some things from EPEL? can't remember).  You
>> can try "yum --enablerepo={base,updates} install srvadmin-all".
>>
>> I'm not sure how oVirt Node might handle the additional packages, what
>> will happen on oVirt upgrades, etc. though.  From what I understand,
>> installing additional software on Node isn't really supported.  You
>> might be better off installing "regular" CentOS and then oVirt, without
>> using the Node method.
>>
>> --
>> Chris Adams 
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/552ZFZQDXXXDGDTXNORTX7T6HMSWDTDZ/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7QAHZOV7SKQHI5IQOOK7VVR43K3VBTLD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZWBNFID2KJPNMB2AQ7F53NCAJASECOQ3/


[ovirt-users] Dell OMSA on oVirt node

2019-03-21 Thread Leo David
Hello everyone,
I would really like to have Dell OMSA installed on my dell nodes so I can
benefit from lots of administration features. Has anyone managed to have it
installed ?
I am running oVirt 4.2.8,  and after adding dell yum repos and running "yum
install srvadmin-all" I get the following errors:

Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64
(dell-system-update_dependent)
   Requires: sblim-sfcc >= 2.2.1
Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64
(dell-system-update_dependent)
   Requires: sblim-sfcb >= 1.3.7
Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64
(dell-system-update_dependent)
   Requires: libcmpiCppImpl0 >= 2.0.0
Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64
(dell-system-update_dependent)
   Requires: libcmpiCppImpl.so.0()(64bit)
Error: Package: srvadmin-itunnelprovider-9.2.0-3142.13664.el7.x86_64
(dell-system-update_dependent)
   Requires: openwsman-server >= 2.2.3
Error: Package: srvadmin-tomcat-9.2.0-3142.13664.el7.x86_64
(dell-system-update_dependent)
   Requires: openwsman-client >= 2.1.5
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Obviously there are conflicts,  but with the server being in production,  I
would not mess around with the packages and try to solve the conflicts myself.

Any thoughts ?
Thank you very much,

Leo


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLSMWSNJSTSBAME2NNX7D35DBPQXDO64/
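
A sketch of the workaround Chris Adams describes in the reply above: pull the
missing dependencies from the stock CentOS repos that oVirt Node ships
disabled. The package list is taken from the dependency errors above, and
whether the extra packages survive an oVirt Node image upgrade is not
guaranteed:

    yum --enablerepo=base,updates install sblim-sfcb sblim-sfcc \
        libcmpiCppImpl0 openwsman-server openwsman-client
    yum --enablerepo=base,updates install srvadmin-all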


[ovirt-users] Bandwidth problem

2019-03-09 Thread Leo David
Hello Everyone,
I have 10Gb connections set up for all the hosts in the cluster, for both
management/vm  and gluster traffic ( separate network cards ).
The problem is that i just cannot pass 1Gb/s of traffic between vms ( even
between vms running on the same host ! - which makes things even more
weird... ). Traffic was measured using the iperf tool.
Is there a way I can check what could be the problem ? Network card type,
vm drivers,  any suggestion ? I just do not know where to look for a
possible cause.
Thank you very much !

Leo

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IVBUE56TB5E3CNSKRGR7TCTTX6IKKHXJ/
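
Two things worth checking for the 1Gb/s ceiling described above (a sketch; the
VM name and the iperf server address are placeholders):

    # 1) confirm the guests use virtio NICs rather than emulated e1000/rtl8139
    virsh -r dumpxml vm01 | grep -A2 '<interface'
    # 2) a single TCP stream often cannot fill a 10Gb link; try parallel streams
    iperf3 -c 10.10.1.2 -P 4 -t 30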


[ovirt-users] Re: Mounting CephFS

2019-03-02 Thread Leo David
Thank you,
I am trying to migrate a vm that has its disks on cephfs ( as a posix domain
- mounted on all hosts ),  and it does not work. Not sure if this is
normal,  considering the vm disks are on this type of storage.  The error
logs in the engine are:

2019-03-02 13:35:03,483Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-0) [] Migration of VM 'centos7-test' to host
'node1.internal' failed: VM destroyed during the startup.
2019-03-02 13:35:03,505Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-14) [] Rerun VM
'bec1cd40-9d62-4f6d-a9df-d97a79584441'. Called from VDS 'node2.internal'
2019-03-02 13:35:03,566Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-42967) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed  (VM: centos7-test,
Source: node2.internal, Destination: node1.internal).

Any thoughts ?

Thanks,

Leo


On Sat, Mar 2, 2019 at 11:59 AM Strahil  wrote:

> If you mean storage migration - could be possible.
> If it is about live migration between hosts - shouldn't happen.
> Anything in the logs ?
>
> Best Regards,
> Strahil Nikolov
> On Mar 2, 2019 09:23, Leo David  wrote:
>
> Thank you Strahil, yes thought about that too, I'll give it a try.
> Now ( to be a bit offtopic ), it seems that I can't live migrate the vm,
> even though the cephfs mountpoint exists on all the hosts.
> Could it be the fact that the storage type is "posix" that makes live migration
> not possible ?
>
> Thank you !
>
> On Sat, Mar 2, 2019, 04:05 Strahil  wrote:
>
> Can you try to set the credentials in a file (don't recall where that was
> for ceph) , so you can mount without specifying user/pass ?
>
> Best Regards,
> Strahil Nikolov
> On Mar 1, 2019 13:46, Leo David  wrote:
>
> Hi Everyone,
> I am trying to mount cephfs as a posix storage domain and getting an error
> in vdsm.log, although the direct command run on the node " mount -t ceph
> 10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I
> have configured:
> Storage type: POSIX compliant FS
> Path: 10.10.6.1:/sata/ovirt-data
> VFS Type: ceph
> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>
>
> 2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting
> 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
> (mount:204)
> 2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not
> connect to storageServer (hsm:2414)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411,
> in connectStorageServer
> conObj.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 180, in connect
> six.reraise(t, v, tb)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 172, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 207, in mount
> cgroup=cgroup)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 56, in __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 54, in 
> **kwargs)
>   File "", line 2, in mount
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod
> raise convert_to_error(kind, result)
> MountError: (1, ';mount: unsupported option format:
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
> Any thoughts on this,  what could be wrong with the options field ?
> Using oVirt 4.3.1
> Thank you very much and  have a great day !
>
> Leo
>
> --
> Best regards, Leo David
>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2S3CM23KME6CHTES7KGY2TNGDGXZHBMO/


[ovirt-users] Re: Mounting CephFS

2019-03-01 Thread Leo David
Thank you Strahil, yes thought about that too, I'll give it a try.
Now ( to be a bit offtopic ), it seems that I can't live migrate the vm,
even though the cephfs mountpoint exists on all the hosts.
Could it be the fact that the storage type is "posix" that makes live migration
not possible ?

Thank you !

On Sat, Mar 2, 2019, 04:05 Strahil  wrote:

> Can you try to set the credentials in a file (don't recall where that was
> for ceph) , so you can mount without specifying user/pass ?
>
> Best Regards,
> Strahil Nikolov
> On Mar 1, 2019 13:46, Leo David  wrote:
>
> Hi Everyone,
> I am trying to mount cephfs as a posix storage domain and getting an error
> in vdsm.log, although the direct command run on the node " mount -t ceph
> 10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I
> have configured:
> Storage type: POSIX compliant FS
> Path: 10.10.6.1:/sata/ovirt-data
> VFS Type: ceph
> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>
>
> 2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting
> 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
> (mount:204)
> 2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not
> connect to storageServer (hsm:2414)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411,
> in connectStorageServer
> conObj.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 180, in connect
> six.reraise(t, v, tb)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 172, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 207, in mount
> cgroup=cgroup)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 56, in __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 54, in 
> **kwargs)
>   File "", line 2, in mount
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod
> raise convert_to_error(kind, result)
> MountError: (1, ';mount: unsupported option format:
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
> Any thoughts on this,  what could be wrong with the options field ?
> Using oVirt 4.3.1
> Thank you very much and  have a great day !
>
> Leo
>
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R43YYSI5WE7GQS7KANSJ52DTE7U7GA5X/


[ovirt-users] Re: Mounting CephFS

2019-03-01 Thread Leo David
Hi,
That was just me not properly reading the log... :(
Seems that I was passing a cr at the end of the options ( \n ),  now I am
able to do the mount.
Thank you !

Leo


On Fri, Mar 1, 2019 at 1:46 PM Leo David  wrote:

> Hi Everyone,
> I am trying to mount cephfs as a posix storage domain and getting an error
> in vdsm.log, although the direct command run on the node " mount -t ceph
> 10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I
> have configured:
> Storage type: POSIX compliant FS
> Path: 10.10.6.1:/sata/ovirt-data
> VFS Type: ceph
> Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==
>
>
> 2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting
> 10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
> (mount:204)
> 2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.ping2 succeeded in 0.00 seconds (__init__:312)
> 2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not
> connect to storageServer (hsm:2414)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411,
> in connectStorageServer
> conObj.connect()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 180, in connect
> six.reraise(t, v, tb)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
> line 172, in connect
> self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207,
> in mount
> cgroup=cgroup)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 56, in __call__
> return callMethod()
>   File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> 54, in 
> **kwargs)
>   File "", line 2, in mount
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod
> raise convert_to_error(kind, result)
> MountError: (1, ';mount: unsupported option format:
> name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
> Any thoughts on this,  what could be wrong with the options field ?
> Using oVirt 4.3.1
> Thank you very much and  have a great day !
>
> Leo
>
> --
> Best regards, Leo David
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WI2LASQYJBU3YWNHNR7SJICDRPC5HUHP/
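
For reference, what ended up working (a sketch): the same options, with no
trailing newline pasted into the Mount Options field. Strahil's earlier
suggestion of a credentials file also keeps the secret off the command line;
the /etc/ceph/admin.secret path is only an example:

    mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ \
        -o name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==

    mount -t ceph 10.10.6.1:/sata/ovirt-data /cephfs-sata/ \
        -o name=admin,secretfile=/etc/ceph/admin.secret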


[ovirt-users] Mounting CephFS

2019-03-01 Thread Leo David
Hi Everyone,
I am trying to mount cephfs as a posix storage domain and getting an error
in vdsm.log, although the direct command run on the node " mount -t ceph
10.10.6.1:/sata/ovirt-data  /cephfs-sata/  -o
name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ== " works fine. I
have configured:
Storage type: POSIX compliant FS
Path: 10.10.6.1:/sata/ovirt-data
VFS Type: ceph
Mount Options: name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==


2019-03-01 11:35:33,457+ INFO  (jsonrpc/4) [storage.Mount] mounting
10.10.6.1:/sata/ovirt-data at /rhev/data-center/mnt/10.10.6.1:_sata_ovirt-data
(mount:204)
2019-03-01 11:35:33,464+ INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:312)
2019-03-01 11:35:33,471+ ERROR (jsonrpc/4) [storage.HSM] Could not
connect to storageServer (hsm:2414)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2411,
in connectStorageServer
conObj.connect()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
line 180, in connect
six.reraise(t, v, tb)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py",
line 172, in connect
self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207,
in mount
cgroup=cgroup)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
56, in __call__
return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
54, in 
**kwargs)
  File "", line 2, in mount
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
_callmethod
raise convert_to_error(kind, result)
MountError: (1, ';mount: unsupported option format:
name=admin,secret=AQAzncZY0VyYHBAAoeeePdv8Rr5kZlW07EVhZZ==\n')
Any thoughts on this,  what could be wrong with the options field ?
Using oVirt 4.3.1
Thank you very much and  have a great day !

Leo

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2WB7DLFB2CZRTKPSRTZDHPASJ3WZEK3/


[ovirt-users] Re: VM poor iops

2019-02-28 Thread Leo David
Yes, in the end that makes perfect sense.
Thank you very much Sahina !


On Fri, Mar 1, 2019, 07:45 Sahina Bose  wrote:

> On Wed, Feb 27, 2019 at 11:21 AM Leo David  wrote:
> >
> > Thank you Sahina, I'm in that conversation too :).
> > On the other hand...
> > In this case, setting this option on, would only make sense in
> multi-node setups, and not in single instance ones, where we only have one
> hypervisor accessing the volume.
> > Please correct me if this is wrong.
> > Have a nice day,
>
> In single instance deployments too, the option ensures all writes
> (with o-direct flag) are flushed to disk and not cached.
> >
> > Leo
> >
> >
> > On Tue, Feb 26, 2019, 08:24 Sahina Bose  wrote:
> >>
> >>
> >>
> >>
> >> On Fri, Sep 14, 2018 at 3:35 PM Paolo Margara 
> wrote:
> >>>
> >>> Hi,
> >>>
> >>> but performance.strict-o-direct is not one of the option enabled by
> gdeploy during installation because it's supposed to give some sort of
> benefit?
> >>
> >>
> >> See
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS764WDBR2PLGGDZVRGBEM6OJCAFEM3R/
> on why the option is set.
> >>
> >>>
> >>> Paolo
> >>>
> >>>
> >>> Il 14/09/2018 11:34, Leo David ha scritto:
> >>>
> >>> performance.strict-o-direct:  on
> >>> This was the bloody option that created the bottleneck ! It was ON.
> >>> So now i get an average of 17k random writes,  which is not bad at
> all. Below,  the volume options that worked for me:
> >>>
> >>> performance.strict-write-ordering: off
> >>> performance.strict-o-direct: off
> >>> server.event-threads: 4
> >>> client.event-threads: 4
> >>> performance.read-ahead: off
> >>> network.ping-timeout: 30
> >>> performance.quick-read: off
> >>> cluster.eager-lock: enable
> >>> performance.stat-prefetch: on
> >>> performance.low-prio-threads: 32
> >>> network.remote-dio: off
> >>> user.cifs: off
> >>> performance.io-cache: off
> >>> server.allow-insecure: on
> >>> features.shard: on
> >>> transport.address-family: inet
> >>> storage.owner-uid: 36
> >>> storage.owner-gid: 36
> >>> nfs.disable: on
> >>>
> >>> If any other tweaks can be done,  please let me know.
> >>> Thank you !
> >>>
> >>> Leo
> >>>
> >>>
> >>> On Fri, Sep 14, 2018 at 12:01 PM, Leo David  wrote:
> >>>>
> >>>> Hi Everyone,
> >>>> So i have decided to take out all of the gluster volume custom
> options,  and add them one by one while activating/deactivating the storage
> domain & rebooting one vm after each  added option :(
> >>>>
> >>>> The default options that giving bad iops ( ~1-2k) performance are :
> >>>>
> >>>> performance.stat-prefetch on
> >>>> cluster.eager-lock enable
> >>>> performance.io-cache off
> >>>> performance.read-ahead off
> >>>> performance.quick-read off
> >>>> user.cifs off
> >>>> network.ping-timeout 30
> >>>> network.remote-dio off
> >>>> performance.strict-o-direct on
> >>>> performance.low-prio-threads 32
> >>>>
> >>>> After adding only:
> >>>>
> >>>>
> >>>> server.allow-insecure on
> >>>> features.shard on
> >>>> storage.owner-gid 36
> >>>> storage.owner-uid 36
> >>>> transport.address-family inet
> >>>> nfs.disable on
> >>>>
> >>>> The performance increased to 7k-10k iops.
> >>>>
> >>>> The problem is that i don't know if that's sufficient ( maybe it can
> be more improved ) , or even worse than this there might be chances to run into
> different volume issues by taking out some volume really needed options...
> >>>>
> >>>> If would have handy the default options that are applied to volumes
> as optimization in a 3way replica, I think that might help..
> >>>>
> >>>> Any thoughts ?
> >>>>
> >>>> Thank you very much !
> >>>>
> >>>>
> >>>> Leo
> >>>>
> >>>>
> >>>>
> >>>>
> >>

[ovirt-users] Re: VM poor iops

2019-02-26 Thread Leo David
Thank you Sahina, I'm in that conversation too :).
On the other hand...
In this case, setting this option on, would only make sense in multi-node
setups, and not in single instance ones, where we only have one hypervisor
accessing the volume.
Please correct me if this is wrong.
Have a nice day,

Leo


On Tue, Feb 26, 2019, 08:24 Sahina Bose  wrote:

>
>
>
> On Fri, Sep 14, 2018 at 3:35 PM Paolo Margara 
> wrote:
>
>> Hi,
>>
>> but performance.strict-o-direct is not one of the option enabled by
>> gdeploy during installation because it's supposed to give some sort of
>> benefit?
>>
>
> See
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS764WDBR2PLGGDZVRGBEM6OJCAFEM3R/
> on why the option is set.
>
>
>> Paolo
>>
>> Il 14/09/2018 11:34, Leo David ha scritto:
>>
>> performance.strict-o-direct:  on
>> This was the bloody option that created the bottleneck ! It was ON.
>> So now i get an average of 17k random writes,  which is not bad at all.
>> Below,  the volume options that worked for me:
>>
>> performance.strict-write-ordering: off
>> performance.strict-o-direct: off
>> server.event-threads: 4
>> client.event-threads: 4
>> performance.read-ahead: off
>> network.ping-timeout: 30
>> performance.quick-read: off
>> cluster.eager-lock: enable
>> performance.stat-prefetch: on
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> user.cifs: off
>> performance.io-cache: off
>> server.allow-insecure: on
>> features.shard: on
>> transport.address-family: inet
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> nfs.disable: on
>>
>> If any other tweaks can be done,  please let me know.
>> Thank you !
>>
>> Leo
>>
>>
>> On Fri, Sep 14, 2018 at 12:01 PM, Leo David  wrote:
>>
>>> Hi Everyone,
>>> So i have decided to take out all of the gluster volume custom options,
>>> and add them one by one while activating/deactivating the storage domain &
>>> rebooting one vm after each  added option :(
>>>
>>> The default options that giving bad iops ( ~1-2k) performance are :
>>>
>>> performance.stat-prefetch on
>>> cluster.eager-lock enable
>>> performance.io-cache off
>>> performance.read-ahead off
>>> performance.quick-read off
>>> user.cifs off
>>> network.ping-timeout 30
>>> network.remote-dio off
>>> performance.strict-o-direct on
>>> performance.low-prio-threads 32
>>>
>>> After adding only:
>>>
>>>
>>> server.allow-insecure on
>>> features.shard on
>>> storage.owner-gid 36
>>> storage.owner-uid 36
>>> transport.address-family inet
>>> nfs.disable on
>>> The performance increased to 7k-10k iops.
>>>
>>> The problem is that i don't know if that's sufficient ( maybe it can be
>>> more improved ) , or even worse than this there might be chances to run into
>>> different volume issues by taking out some volume really needed options...
>>>
>>> If would have handy the default options that are applied to volumes as
>>> optimization in a 3way replica, I think that might help..
>>>
>>> Any thoughts ?
>>>
>>> Thank you very much !
>>>
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Sep 14, 2018 at 8:54 AM, Leo David  wrote:
>>>
> >>>> Any thoughts on these ? Is that UI optimization only a gluster volume
> >>>> custom configuration ? If so, i guess it can be done from cli, but I am not
> >>>> aware of the correct optimized parameters of the volume
>>>>
>>>>
>>>> On Thu, Sep 13, 2018, 18:25 Leo David  wrote:
>>>>
>>>>> Thank you Jayme. I am trying to do this, but I am getting an error,
>>>>> since the volume is replica 1 distribute, and it seems that oVirt expects 
>>>>> a
>>>>> replica 3 volume.
>>>>> Would it be another way to optimize the volume in this situation ?
>>>>>
>>>>>
>>>>> On Thu, Sep 13, 2018, 17:49 Jayme  wrote:
>>>>>
>>>>>> I had similar problems until I clicked "optimize volume for vmstore"
>>>>>> in the admin GUI for each data volume.  I'm not sure if this is what is
>>>>>> causing your problem here but I'd recommend trying that first.  It is
>>>>>> suppose to be op

[ovirt-users] Re: VM poor iops

2019-02-25 Thread Leo David
Hi,
Is the performance.strict-o-direct=on a mandatory option to avoid data
inconsistency, although it has a pretty big impact on volume iops
performance?
Thank you !




On Fri, Sep 14, 2018, 13:03 Paolo Margara  wrote:

> Hi,
>
> but performance.strict-o-direct is not one of the option enabled by
> gdeploy during installation because it's supposed to give some sort of
> benefit?
>
>
> Paolo
>
> Il 14/09/2018 11:34, Leo David ha scritto:
>
> performance.strict-o-direct:  on
> This was the bloody option that created the bottleneck ! It was ON.
> So now i get an average of 17k random writes,  which is not bad at all.
> Below,  the volume options that worked for me:
>
> performance.strict-write-ordering: off
> performance.strict-o-direct: off
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.eager-lock: enable
> performance.stat-prefetch: on
> performance.low-prio-threads: 32
> network.remote-dio: off
> user.cifs: off
> performance.io-cache: off
> server.allow-insecure: on
> features.shard: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> nfs.disable: on
>
> If any other tweaks can be done,  please let me know.
> Thank you !
>
> Leo
>
>
> On Fri, Sep 14, 2018 at 12:01 PM, Leo David  wrote:
>
>> Hi Everyone,
>> So i have decided to take out all of the gluster volume custom options,
>> and add them one by one while activating/deactivating the storage domain &
>> rebooting one vm after each  added option :(
>>
>> The default options that giving bad iops ( ~1-2k) performance are :
>>
>> performance.stat-prefetch on
>> cluster.eager-lock enable
>> performance.io-cache off
>> performance.read-ahead off
>> performance.quick-read off
>> user.cifs off
>> network.ping-timeout 30
>> network.remote-dio off
>> performance.strict-o-direct on
>> performance.low-prio-threads 32
>>
>> After adding only:
>>
>>
>> server.allow-insecure on
>> features.shard on
>> storage.owner-gid 36
>> storage.owner-uid 36
>> transport.address-family inet
>> nfs.disable on
>> The performance increased to 7k-10k iops.
>>
>> The problem is that i don't know if that's sufficient ( maybe it can be
>> more improved ) , or even worse than this there might be chances to run into
>> different volume issues by taking out some volume really needed options...
>>
>> If would have handy the default options that are applied to volumes as
>> optimization in a 3way replica, I think that might help..
>>
>> Any thoughts ?
>>
>> Thank you very much !
>>
>>
>> Leo
>>
>>
>>
>>
>>
>> On Fri, Sep 14, 2018 at 8:54 AM, Leo David  wrote:
>>
>>> Any thoughts on these ? Is that UI optimization only a gluster volume
>>> custom configuration ? If so, i guess it can be done from cli, but I am not
>>> aware of the correct optimized parameters of the volume
>>>
>>>
>>> On Thu, Sep 13, 2018, 18:25 Leo David  wrote:
>>>
>>>> Thank you Jayme. I am trying to do this, but I am getting an error,
>>>> since the volume is replica 1 distribute, and it seems that oVirt expects a
>>>> replica 3 volume.
>>>> Would it be another way to optimize the volume in this situation ?
>>>>
>>>>
>>>> On Thu, Sep 13, 2018, 17:49 Jayme  wrote:
>>>>
>>>>> I had similar problems until I clicked "optimize volume for vmstore"
>>>>> in the admin GUI for each data volume.  I'm not sure if this is what is
>>>>> causing your problem here but I'd recommend trying that first.  It is
>>>>> suppose to be optimized by default but for some reason my ovirt 4.2 
>>>>> cockpit
>>>>> deploy did not apply those settings automatically.
>>>>>
>>>>> On Thu, Sep 13, 2018 at 10:21 AM Leo David  wrote:
>>>>>
>>>>>> Hi Everyone,
>>>>>> I am encountering the following issue on a single instance
>>>>>> hyper-converged 4.2 setup.
>>>>>> The following fio test was done:
>>>>>>
>>>>>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
>>>>>> --name=test --filename=test --bs=4k --iodepth=64 --size=4G
>>>>>> --readwrite=randwrite
>>>>>> The results are very poor doing the test inside of a vm with a
>>>&

[ovirt-users] Re: Gluster - performance.strict-o-direct and other performance tuning in different storage backends

2019-02-25 Thread Leo David
Thank you Krutika,
Does it mean that by turning that setting off, i have a chance of running into data
corruption ?
It seems to have a pretty big impact on vm performance..

On Mon, Feb 25, 2019, 12:40 Krutika Dhananjay  wrote:

> Gluster's write-behind translator by default buffers writes for flushing
> to disk later, *even* when the file is opened with O_DIRECT flag. Not
> honoring O_DIRECT could mean a reader from another client could be READing
> stale data from bricks because some WRITEs may not yet be flushed to disk.
> performance.strict-o-direct=on is one of the options needed to truly honor
> O_DIRECT behavior which is what qemu uses by virtue of cache=none option
> being set (the other being network.remote-dio=off) on the vm(s)
>
> -Krutika
>
>
> On Mon, Feb 25, 2019 at 2:50 PM Leo David  wrote:
>
>> Hello Everyone,
>> As per some previous posts,  this "performance.strict-o-direct=on"
>> setting caused trouble or poor vm iops.  I've noticed that this option is
>> still part of default setup or automatically configured with
>> "Optimize for virt. store" button.
>> In the end... is this setting a good or a bad practice for setting the vm
>> storage volume ?
>> Does it depends ( like maybe other gluster performance options ) on the
>> storage backend:
>> - raid type /  jbod
>> - raid controller cache size
>> I am usually using jbod disks attached to lsi hba card ( no cache ). Any
>> gluster recommendations regarding this setup ?
>> Is there any documentation for best practices on configuring ovirt's
>> gluster for different types of storage backends ?
>> Thank you very much !
>>
>> Have a great week,
>>
>> Leo
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FKL42JSHIKPMKLLMDPKYM4XT4V5GT4W/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSZPLEKPHVMJHVDKLU4FPJR4TPVWJYIN/
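
For reference, the pair of settings Krutika describes above (a sketch;
<volname> is a placeholder for the storage domain's volume):

    gluster volume set <volname> performance.strict-o-direct on
    gluster volume set <volname> network.remote-dio off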


[ovirt-users] Gluster - performance.strict-o-direct and other performance tuning in different storage backends

2019-02-25 Thread Leo David
Hello Everyone,
As per some previous posts,  this "performance.strict-o-direct=on" setting
caused trouble or poor vm iops.  I've noticed that this option is still
part of default setup or automatically configured with
"Optimize for virt. store" button.
In the end... is this setting a good or a bad practice for setting the vm
storage volume ?
Does it depend ( like maybe other gluster performance options ) on the
storage backend:
- raid type /  jbod
- raid controller cache size
I am usually using jbod disks attached to lsi hba card ( no cache ). Any
gluster recommendations regarding this setup ?
Is there any documentation for best practices on configuring ovirt's
gluster for different types of storage backends ?
Thank you very much !

Have a great week,

Leo

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FKL42JSHIKPMKLLMDPKYM4XT4V5GT4W/


[ovirt-users] Re: Preserve vm state

2019-02-21 Thread Leo David
Great !
This is the needed feature !
Thank you very much for your help,  and have a nice day Sirs.

Leo

On Thu, Feb 21, 2019 at 2:17 PM Greg Sheremeta  wrote:

> Yes, stateless pools is the feature you're looking for.
>
> The lab students can use VM Portal to check out a VM from the pool, and
> when they are done, the VM will reset.
>
> Documentation:
> https://www.ovirt.org/documentation/admin-guide/chap-Pools.html
> (just noticed it has a formatting issue. I'll get that fixed asap.)
>
> Greg
>
> On Wed, Feb 20, 2019 at 11:31 PM Leo David  wrote:
>
>> Hi Everyone,
>> I have this challenge where I need to roll back a couple  of vms to a
>> certain base snapshot every time they start, for a school computer lab.
>> Is there a way to configure this feature somehow in the UI ?
>> Thank you very much !
>>
>> Leo
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FFJYUTPEOOQDHVECXKVZSKBHPAU34O3I/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gsher...@redhat.comIRC: gshereme
> <https://red.ht/sig>
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2EQ5MINXBH53KSNUYOZ6FKCTMFOZ6CFZ/


[ovirt-users] Preserve vm state

2019-02-20 Thread Leo David
Hi Everyone,
I have this challenge where I need to roll back a couple  of vms to a
certain base snapshot every time they start, for a school computer lab.
Is there a way to configure this feature somehow in the UI ?
Thank you very much !

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FFJYUTPEOOQDHVECXKVZSKBHPAU34O3I/
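
For a single VM rather than a pool, a similar effect can be had by marking the
VM stateless, so every run starts from the base image and changes are thrown
away at shutdown. A rough sketch with the oVirt Python SDK (ovirtsdk4); the
engine URL, credentials and the VM name "lab-ws-01" are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # connect to the engine API (details are illustrative)
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=lab-ws-01')[0]
    # mark the VM stateless so changes are discarded on shutdown
    vms_service.vm_service(vm.id).update(types.Vm(stateless=True))
    connection.close()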


[ovirt-users] Re: Ovirt Cluster completely unstable

2019-02-16 Thread Leo David
Just thinking
Maybe different options were configured on the volumes during the update, which
could make them unstable ?
ie: sharding or something else

On Sat, Feb 16, 2019, 13:26 Darryl Scott  wrote:

> Sandro
>
>
> I don't have ovirt-log-collector on my ovirt engine.  How can I obtain it?  I
> see a github repo with a makefile, but I do not want to be building files on my
> ovirt-engine, just not yet; I could possibly do it on the weekend.
>
>
> Where can I obtain the ovirt-log-collector?
>
>
>
> --
> From: Sandro Bonazzola
> Sent: Thursday, February 14, 2019 9:16:05 AM
> To: Jayme
> Cc: Darryl Scott; users
> Subject: Re: [ovirt-users] Re: Ovirt Cluster completely unstable
>
>
>
> On Thu, Feb 14, 2019 at 07:54, Jayme wrote:
>
> I have a three node HCI gluster which was previously running 4.2 with zero
> problems.  I just upgraded it yesterday.  I ran into a few bugs right away
> with the upgrade process, but aside from that I also discovered other users
> with severe GlusterFS problems since the upgrade to new GlusterFS version.
> It is less than 24 hours since I upgrade my cluster and I just got a notice
> that one of my GlusterFS bricks is offline.  There does appear to be a very
> real and serious issue here with the latest updates.
>
>
> tracking the issue on Gluster side on this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1677160
> If you can help Gluster community providing requested logs it would be
> great.
>
>
>
>
>
>
> On Wed, Feb 13, 2019 at 7:26 PM  wrote:
>
> I'm abandoning my production ovirt cluster due to instability.   I have a
> 7 host cluster running about 300 vms and have been for over a year.  It has
> become unstable over the past three days.  I have random hosts, both
> compute and storage, disconnecting.  AND many vms disconnecting and becoming
> unusable.
>
> The 7 hosts are 4 compute hosts running Ovirt 4.2.8 and three glusterfs hosts
> running 3.12.5.  I submitted a bugzilla bug and they immediately assigned
> it to the storage people but have not responded with any meaningful
> information.  I have submitted several logs.
>
> I have found some discussion on problems with instability with gluster
> 3.12.5.  I would be willing to upgrade my gluster to a more stable version
> if that's the culprit.  I installed gluster using the ovirt gui and this is
> the version the ovirt gui installed.
>
> Is there an ovirt health monitor available?  Where should I be looking to
> get a resolution to the problems I'm facing?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BL4M3JQA3IEXCQUY4IGQXOAALRUQ7TVB/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QULCBXHTKSCPKH4UV6GLMOLJE6J7M5UW/
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECPLXX5JIG5VCIQZDH5KWTWOCXGJYD6Z/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDMYNYUGH72J2OQAWHHK5DJI2HVCIJH6/
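
ovirt-log-collector is packaged in the oVirt repositories, so there should be
no need to build it from the github repo (a sketch; run on the engine
machine):

    yum install ovirt-log-collector
    ovirt-log-collector --help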


[ovirt-users] Re: oVirt upgrade to specific version

2019-02-16 Thread Leo David
Thank you Jayme !
Yes... you are right, I will give it a try.
Thanks,

Leo

On Sat, Feb 16, 2019, 12:32 Jayme  wrote:

> Leo,
>
> Almost positive that it won’t update to the next major release until you
> install the 4.3 repos manually. It should be easily verifiable with the yum
> update command; it won’t perform any action until you agree (as long as you
> aren’t passing the -y flag).
>
> On Sat, Feb 16, 2019 at 2:50 AM Leo David  wrote:
>
>> Hi,
>> I have a running 4.2.7 hci cluster which I would upgrade to 4.2.8.
>> If I do this in the standard manner (yum update in engine vm and then the
>> nodes from engine UI) I assume it will go to 4.3, which I would rather
>> avoid for now.
>> Is there a way to get it updated strictly to 4.2.8 ?
>>
>> Thanks,
>>
>> Leo
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HCURD5IZJKRASWCWSQC3K3MHMJGJS7CG/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PTJGTV4EYWUY4D5FDG6K3DJKSWFAUFVW/


[ovirt-users] oVirt upgrade to specific version

2019-02-15 Thread Leo David
Hi,
I have a running 4.2.7 hci cluster which I would upgrade to 4.2.8.
If I do this in the standard manner (yum update in engine vm and then the
nodes from engine UI) I assume it will go to 4.3, which I would rather
avoid for now.
Is there a way to get it updated strictly to 4.2.8 ?

Thanks,

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HCURD5IZJKRASWCWSQC3K3MHMJGJS7CG/
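
For the record, a sketch of the minor-update path on the engine VM: as long as
only the 4.2 release repo (ovirt-release42) is installed, these steps stay on
the 4.2.z stream and will not jump to 4.3:

    engine-upgrade-check
    yum update ovirt\*setup\*
    engine-setup
    yum update        # remaining engine VM packages afterwards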


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-15 Thread Leo David
Thank you,

Not sure I've understood the procedure to create a custom vdsm hook.
Is this a good example to follow ?
https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/README

Thanks,

Leo


On Fri, Feb 15, 2019, 19:46 Michal Skrivanek  wrote:
>
> On 15 Feb 2019, at 16:04, Leo David  wrote:
>
> Thank you Victor.
> Yes, I have the latest guest-tools installed, and the problem is that
> after configuring the vm by using virsh and reboot,  the configuration
> reverts to defaults:
>  passwdValidTo='1970-01-01T00:00:01'>
>   
>   
>   
>   
>   
>   
>   
>   
>   
> 
> So my added changes are not loaded at vm boot.
 I am sure this is an oVirt specific behavior, but i just can't find
out how to make this persistent.
>
>
> You can’t edit it in virsh in oVirt. Starting VM in oVirt is too complex
> for libvirt to handle it on its own. You need to write a vdsm hook if you
> want to modify resulting xml
>
> For trying out things I’d recommend to do that with a simple VM in
> virt-manager and once you find out the right config/parameters then write a
> hook with those for oVirt
>
> Thanks,
> michal
>
>
> On Fri, Feb 15, 2019 at 4:32 PM Victor Toso  wrote:
>
>> Hi,
>>
>> On Fri, Feb 15, 2019 at 04:24:15PM +0200, Leo David wrote:
>> > Hi Everyone,
>> > Any thoughts on this ?
>> > It seems that audio streaming is affected as well, and
>> > bandwidth is not an issue in this case.
>>
>> What audio issues do you see?
>>
>> > I'm thinking that maybe if I just disable compression on
>> > spice,  things would get a bit better...maybe.
>> > Thank you !
>> >
>> > On Wed, Feb 13, 2019 at 8:05 AM Leo David  wrote:
>> >
>> > > Thank you so much Victor !
>> > > Anyone, any idea how I could disable video compression for
>> > > spice console on particular vms ?
>>
>> I'm not familiar with oVirt interface but it shouldn't be hard if
>> you have access to the host.
>>
>> # virsh edit $vm-name
>>
>> switch what you have in graphics to:
>>
>> 
>> 
>> 
>> 
>>
>>
>> > > I am trying to implement an "almost" full desktop experience
>> > > with an oVirt based vdi environment.
>> > > And besides the Windows10 spice issues ( which are the main
>> > > cause of this thread ), it seems that Windows 7 is affected
>> > > too by the multimedia playing perspective. Which makes a
>> > > total blocker on project implementation
>>
>> Do you have spice-guest-tools installed?
>>
>> > > Any suggestions/ similar experiences ?
>> > > Thank you very much and have a nice day !
>> > >
>> > > Leo
>>
>> Cheers,
>> Victor
>> > >
>> > > On Mon, Feb 11, 2019, 12:01 Victor Toso > > >
>> > >> Hi,
>> > >>
>> > >> On Mon, Feb 11, 2019 at 11:50:49AM +0200, Leo David wrote:
>> > >> > Hi,
>> > >> > "This enable host-side streaming, are you sure you want it?"
>> > >> > Not sure yet, but i would at least disable compression, video
>> > >> > playing seems to be pretty poor, and crackling ( youtube, etc )
>> > >>
>> > >> For playing video use-cases (youtube) it might be okay but not
>> > >> for playing games as it has some hard coded delay in the
>> > >> streaming code path.
>> > >>
>> > >> The streaming is mjpeg so you don't save much bandwidth either.
>> > >>
>> > >> > "AFAIK, if virsh edit exits without issue, you need to shutdown
>> > >> > the vm and then start it again"
>> > >> > I did that,  and when the vm comes back on,  my changes are not
>> there
>> > >> > anymore 
>> > >>
>> > >> Might be something specific to ovirt, not sure :(
>> > >>
>> > >> I hope someone else can help you.
>> > >>
>> > >> > On Mon, Feb 11, 2019 at 10:45 AM Victor Toso <
>> victort...@redhat.com>
>> > >> wrote:
>> > >> >
>> > >> > > Hi,
>> > >> > >
>> > >> > > On Sun, Feb 10, 2019 at 02:08:48PM +0200, Leo David wrote:
>> > >> > > > Hi,
>> > >> > > >
>> > >> > > > I am 
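
A rough, untested sketch of the kind of before_vm_start hook Michal refers to
above, following the vdsm hooks README linked at the top of this message. The
filename and the exact spice settings are illustrative only:

    #!/usr/bin/python
    # hypothetical /usr/libexec/vdsm/hooks/before_vm_start/50_spice_nocompress
    import hooking

    # read the libvirt domain XML that oVirt generated for this VM
    domxml = hooking.read_domxml()
    for graphics in domxml.getElementsByTagName('graphics'):
        if graphics.getAttribute('type') != 'spice':
            continue
        # force image compression and host-side streaming off
        for tag, attr, value in (('image', 'compression', 'off'),
                                 ('streaming', 'mode', 'off')):
            child = domxml.createElement(tag)
            child.setAttribute(attr, value)
            graphics.appendChild(child)
    hooking.write_domxml(domxml)

The script needs to be executable and present on every host the VM can start
on.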

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-15 Thread Leo David
Thank you Victor.
Yes, I have the latest guest-tools installed, and the problem is that after
configuring the vm by using virsh and reboot,  the configuration reverts to
defaults:

  
  
  
  
  
  
  
  
  

So my added changes are not loaded at vm boot.
 I am sure this is an oVirt specific behavior, but i just can't find out
how to make this persistent.

On Fri, Feb 15, 2019 at 4:32 PM Victor Toso  wrote:

> Hi,
>
> On Fri, Feb 15, 2019 at 04:24:15PM +0200, Leo David wrote:
> > Hi Everyone,
> > Any thoughts on this ?
> > It seems that audio streaming is affected as well, and
> > bandwidth is not an issue in this case.
>
> What audio issues do you see?
>
> > I'm thinking that maybe if I just disable compression on
> > spice,  things would get a bit better...maybe.
> > Thank you !
> >
> > On Wed, Feb 13, 2019 at 8:05 AM Leo David  wrote:
> >
> > > Thank you so much Victor !
> > > Anyone, any idea how I could disable video compression for
> > > spice console on particular vms ?
>
> I'm not familiar with oVirt interface but it shouldn't be hard if
> you have access to the host.
>
> # virsh edit $vm-name
>
> switch what you have in graphics to:
>
> 
> 
> 
> 
>
>
> > > I am trying to implement an "almost" full desktop experience
> > > with an oVirt based vdi environment.
> > > And besides the Windows10 spice issues ( which are the main
> > > cause of this thread ), it seems that Windows 7 is affected
> > > too by the multimedia playing perspective. Which makes a
> > > total blocker on project implementation
>
> Do you have spice-guest-tools installed?
>
> > > Any suggestions/ similar experiences ?
> > > Thank you very much and have a nice day !
> > >
> > > Leo
>
> Cheers,
> Victor
> > >
> > > On Mon, Feb 11, 2019, 12:01 Victor Toso  > >
> > >> Hi,
> > >>
> > >> On Mon, Feb 11, 2019 at 11:50:49AM +0200, Leo David wrote:
> > >> > Hi,
> > >> > "This enable host-side streaming, are you sure you want it?"
> > >> > Not sure yet, but i would at least disable compression, video
> > >> > playing seems to be pretty poor, and crackling ( youtube, etc )
> > >>
> > >> For playing video use-cases (youtube) it might be okay but not
> > >> for playing games as it has some hard coded delay in the
> > >> streaming code path.
> > >>
> > >> The streaming is mjpeg so you don't save much bandwidth either.
> > >>
> > >> > "AFAIK, if virsh edit exits without issue, you need to shutdown
> > >> > the vm and then start it again"
> > >> > I did that,  and when the vm comes back on,  my changes are not
> there
> > >> > anymore 
> > >>
> > >> Might be something specific to ovirt, not sure :(
> > >>
> > >> I hope someone else can help you.
> > >>
> > >> > On Mon, Feb 11, 2019 at 10:45 AM Victor Toso  >
> > >> wrote:
> > >> >
> > >> > > Hi,
> > >> > >
> > >> > > On Sun, Feb 10, 2019 at 02:08:48PM +0200, Leo David wrote:
> > >> > > > Hi,
> > >> > > >
> > >> > > > I am trying to disable video compression as per this thread:
> > >> > > >
> https://lists.ovirt.org/pipermail/users/2017-January/078753.html
> > >> > > >
> > >> > > > The thing is that I just can't figure out where to place the
> > >> following:
> > >> > > >
> > >> > > > 
> > >> > > > 
> > >> > > > 
> > >> > > > 
> > >> > >
> > >> > > This enable host-side streaming, are you sure you want it?
> > >> > >
> > >> > > > 
> > >> > > >
> > >> > > > If I attempt to edit vm properties by using virsh and add these
> > >> > > > custom settings, the configuration file gets overwritten once
> > >> > > > the vm reboots.
> > >> > >
> > >> > > AFAIK, if virsh edit exits without issue, you need to shutdown
> > >> > > the vm and then start it again. Reboot is not enough.
> > >> > >
> > >> > > > Any sug

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-15 Thread Leo David
Hi Everyone,
Any thoughts on this ?
It seems that audio streaming is affected as well, and bandwidth is not an
issue in this case.
I'm thinking that maybe if I just disable compression on spice,  things
would get a bit better...maybe.
Thank you !

On Wed, Feb 13, 2019 at 8:05 AM Leo David  wrote:

> Thank you so much Victor !
> Anyone, any idea how I could disable video compression for spice console
> on particular vms ?
> I am trying to implement an "almost" full desktop experience with an oVirt
> based vdi environment.
> And besides the Windows10 spice issues ( which are the main cause of this
> thread ), it seems that Windows 7 is affected too from the multimedia playing
> perspective, which makes it a total blocker for the project implementation.
> Any suggestions/ similar experiences ?
> Thank you very much and have a nice day !
>
> Leo
>
> On Mon, Feb 11, 2019, 12:01 Victor Toso 
>> Hi,
>>
>> On Mon, Feb 11, 2019 at 11:50:49AM +0200, Leo David wrote:
>> > Hi,
>> > "This enable host-side streaming, are you sure you want it?"
>> > Not sure yet, but i would at least disable compression, video
>> > playing seems to be pretty poor, and crackling ( youtube, etc )
>>
>> For playing video use-cases (youtube) it might be okay but not
>> for playing games as it has some hard coded delay in the
>> streaming code path.
>>
>> The streaming is mjpeg so you don't save much bandwidth either.
>>
>> > "AFAIK, if virsh edit exits without issue, you need to shutdown
>> > the vm and then start it again"
>> > I did that,  and when the vm comes back on,  my changes are not there
>> > anymore 
>>
>> Might be something specific to ovirt, not sure :(
>>
>> I hope someone else can help you.
>>
>> > On Mon, Feb 11, 2019 at 10:45 AM Victor Toso 
>> wrote:
>> >
>> > > Hi,
>> > >
>> > > On Sun, Feb 10, 2019 at 02:08:48PM +0200, Leo David wrote:
>> > > > Hi,
>> > > >
>> > > > I am trying to disable video compression as per this thread:
>> > > > https://lists.ovirt.org/pipermail/users/2017-January/078753.html
>> > > >
>> > > > The thing is that I just can't figure out where to place the
>> following:
>> > > >
>> > > > 
>> > > > 
>> > > > 
>> > > > 
>> > >
>> > > This enable host-side streaming, are you sure you want it?
>> > >
>> > > > 
>> > > >
>> > > > If I attempt to edit vm properties by using virsh and add these
>> > > > custom settings, the configuration file gets overwritten once
>> > > > the vm reboots.
>> > >
>> > > AFAIK, if virsh edit exits without issue, you need to shutdown
>> > > the vm and then start it again. Reboot is not enough.
>> > >
>> > > > Any suggestions?
>> > > >
>> > > > Thank you,
>> > > >
>> > > > Leo
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > On Wed, Feb 6, 2019 at 7:17 PM Leo David  wrote:
>> > > >
>> > > > > Hello everyone,
>> > > > > Any chance that this issue to be already fixed in the new 4.3
>> version ?
>> > > > > Thank you !
>> > > > >
>> > > > > On Tue, Jan 8, 2019, 12:25 Victor Toso > wrote:
>> > > > >
>> > > > >> Hi,
>> > > > >>
>> > > > >> On Tue, Jan 08, 2019 at 12:08:31PM +0200, Leo David wrote:
>> > > > >> > Thank you very mucjh,  and sorry for being so lazy to search
>> > > > >> > for that rpm by myself. Somehow, fedora rpms missed from my
>> > > > >> > mind.  Oh boy, it requires a lot of packages. Do you think
>> > > > >> > would it be a good idea to temporarily install fedora repos, do
>> > > > >> > the yum installation to get the dependencoes too and then
>> > > > >> > disable the repo ? I am thinking to not break the ovirt node
>> > > > >> > installation.
>> > > > >>
>> > > > >> The easiest path is to get the source from your current rpm,
>> > > > >> apply the patch mentioned in previous email, build, install,
>> > > > >> test.
>> > > > >>
>> 

[ovirt-users] Re: Ovirt Cluster completely unstable

2019-02-13 Thread Leo David
Hi,
I would have a look at engine.log, it might provide useful information.
Also, I would test a different storage type (maybe a quick NFS data domain)
and see if the problem persists with that one too.
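
A rough first-pass health check might look like this (a sketch; the log paths
are the usual oVirt 4.2 defaults and the volume name "data" is a placeholder):

# on the engine VM: recent errors around the time of the disconnects
grep ERROR /var/log/ovirt-engine/engine.log | tail -n 50

# on each host: vdsm complaints about storage or networking
grep -iE 'error|warn' /var/log/vdsm/vdsm.log | tail -n 50

# on one of the gluster nodes: peer and heal state ("data" = your volume name)
gluster peer status
gluster volume status data
gluster volume heal data info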


On Thu, Feb 14, 2019, 01:26:
> I'm abandoning my production ovirt cluster due to instability.  I have a
> 7 host cluster running about 300 vms and have been for over a year.  It has
> become unstable over the past three days.  I have random hosts both,
> compute and storage disconnecting.  AND many vms disconnecting and becoming
> unusable.
>
> 7 host are 4 compute hosts running Ovirt 4.2.8 and three glusterfs hosts
> running 3.12.5.  I submitted a bugzilla bug and they immediately assigned
> it to the storage people but have not responded with any meaningful
> information.  I have submitted several logs.
>
> I have found some discussion on problems with instability with gluster
> 3.12.5.  I would be willing to upgrade my gluster to a more stable version
> if that's the culprit.  I installed gluster using the ovirt gui and this is
> the version the ovirt gui installed.
>
> Is there an ovirt health monitor available?  Where should I be looking to
> get a resolution the problems I'm facing.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BL4M3JQA3IEXCQUY4IGQXOAALRUQ7TVB/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OT2IJ7TJXFJ5BA5POEPHCDYI6LRKVGZT/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-12 Thread Leo David
Thank you so much Victor !
Anyone, any idea how I could disable video compression for the spice console
on particular VMs?
I am trying to implement an "almost" full desktop experience with an oVirt
based vdi environment.
And besides the Windows 10 spice issues (which are the main cause of this
thread), it seems that Windows 7 is affected too from the multimedia playing
perspective, which makes it a total blocker for the project implementation.
Any suggestions/ similar experiences ?
Thank you very much and have a nice day !

Leo

On Mon, Feb 11, 2019, 12:01 Victor Toso wrote:
> Hi,
>
> On Mon, Feb 11, 2019 at 11:50:49AM +0200, Leo David wrote:
> > Hi,
> > "This enable host-side streaming, are you sure you want it?"
> > Not sure yet, but i would at least disable compression, video
> > playing seems to be pretty poor, and crackling ( youtube, etc )
>
> For playing video use-cases (youtube) it might be okay but not
> for playing games as it has some hard coded delay in the
> streaming code path.
>
> The streaming is mjpeg so you don't save much bandwidth either.
>
> > "AFAIK, if virsh edit exits without issue, you need to shutdown
> > the vm and then start it again"
> > I did that,  and when the vm comes back on,  my changes are not there
> > anymore 
>
> Might be something specific to ovirt, not sure :(
>
> I hope someone else can help you.
>
> > On Mon, Feb 11, 2019 at 10:45 AM Victor Toso 
> wrote:
> >
> > > Hi,
> > >
> > > On Sun, Feb 10, 2019 at 02:08:48PM +0200, Leo David wrote:
> > > > Hi,
> > > >
> > > > I am trying to disable video compression as per this thread:
> > > > https://lists.ovirt.org/pipermail/users/2017-January/078753.html
> > > >
> > > > The thing is that I just can't figure out where to place the
> following:
> > > >
> > > > 
> > > > 
> > > > 
> > > > 
> > >
> > > This enable host-side streaming, are you sure you want it?
> > >
> > > > 
> > > >
> > > > If I attempt to edit vm properties by using virsh and add these
> > > > custom settings, the configuration file gets overwritten once
> > > > the vm reboots.
> > >
> > > AFAIK, if virsh edit exits without issue, you need to shutdown
> > > the vm and then start it again. Reboot is not enough.
> > >
> > > > Any suggestions?
> > > >
> > > > Thank you,
> > > >
> > > > Leo
> > > >
> > > >
> > > >
> > > >
> > > > On Wed, Feb 6, 2019 at 7:17 PM Leo David  wrote:
> > > >
> > > > > Hello everyone,
> > > > > Any chance that this issue to be already fixed in the new 4.3
> version ?
> > > > > Thank you !
> > > > >
> > > > > On Tue, Jan 8, 2019, 12:25 Victor Toso  wrote:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> On Tue, Jan 08, 2019 at 12:08:31PM +0200, Leo David wrote:
> > > > >> > Thank you very mucjh,  and sorry for being so lazy to search
> > > > >> > for that rpm by myself. Somehow, fedora rpms missed from my
> > > > >> > mind.  Oh boy, it requires a lot of packages. Do you think
> > > > >> > would it be a good idea to temporarily install fedora repos, do
> > > > >> > the yum installation to get the dependencoes too and then
> > > > >> > disable the repo ? I am thinking to not break the ovirt node
> > > > >> > installation.
> > > > >>
> > > > >> The easiest path is to get the source from your current rpm,
> > > > >> apply the patch mentioned in previous email, build, install,
> > > > >> test.
> > > > >>
> > > > >> If that does not work you can rollback. If works, you can rethink
> > > > >> what is best.
> > > > >>
> > > > >> Cheers,
> > > > >>
> > > > >> >  yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
> > > > >> > Loaded plugins: enabled_repos_upload, fastestmirror,
> > > imgbased-persist,
> > > > >> > package_upload, product-id, search-disabled-repos,
> > > subscription-manager,
> > > > >> > vdsmupgrade
> > > > >> > This system is not registered with an entitlement server. You
> can
> >

[ovirt-users] Re: Issues adding iscsi storage domain

2019-02-11 Thread Leo David
2019-02-11 10:06:13,632+ INFO  (jsonrpc/1) [vdsm.api] FINISH
getDeviceList return={'devList': []} from=:::10.10.8.130,40100,
flow_id=29f514f0-0219-4861-a5d5-55c0c46ab222,
task_id=02f3cdea-8ffa-4291-92ee-4eb375c8e6e7 (api:52)
2019-02-11 10:06:13,632+ INFO  (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call Host.getDeviceList succeeded in 1.12 seconds (__init__:573)

The LUNs are still not retrieved from the target. I hit "Login", the button goes
greyed out, but still no LUNs...
Connecting from other ( windows /linux ) clients to the same target works
fine.
I'm still scratching my head to find out how to make this work

Have a nice day,

Leo
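
For anyone debugging the same symptom, a few host-side checks help narrow down
where the empty devList comes from (a hedged sketch; run on the node used as
initiator, and only if the tools are installed there):

# is the initiator actually logged in to the target?
iscsiadm -m session -P 3

# does the kernel see the LUN, and does multipath have a map for it?
lsblk
multipath -ll

# the node's initiator name (must be allowed by the target's ACL / LUN masking)
cat /etc/iscsi/initiatorname.iscsi

# what vdsm itself would report back to the engine (if vdsm-client is available)
vdsm-client Host getDeviceList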

On Mon, Feb 4, 2019 at 11:47 AM Benny Zlotnik  wrote:

> Do you not see something like this[1] in the vdsm log?
>
> [1]
> 2019-02-04 04:45:28,804-0500 INFO  (jsonrpc/6) [vdsm.api] START
> discoverSendTargets(con={'ipv6_enabled': 'false', 'connection':
> '10.35.0.233', 'pas
> sword': '', 'port': '3260', 'user': ''}, options=None)
> from=:::10.35.1.28,58662, flow_id=f2f05e5a-fab5-43e4-8114-ee33a2f1402b,
> task_id=69a6d513
> -6a81-4de5-bd79-0e2f748bbf39 (api:48)
> 2019-02-04 04:45:28,982-0500 INFO  (jsonrpc/6) [vdsm.api] FINISH
> discoverSendTargets return={'fullTargets': ['10.35.0.233:3260,1
> iqn.2015-01.com.benny:444'], 'targets': ['iqn.2015-01.com.benny:444']}
> from=:::10.35.1.28,58662, flow_id=f2f05e5a-fab5-43e4-8114-ee33a2f1402b,
> task_id=69a6d513-6a81-4de5-bd79-0e2f748bbf39 (api:54)
>
>
> On Thu, Jan 31, 2019 at 4:29 PM Leo David  wrote:
>
>> Thank you Benny,
>>
>> from engine:
>>
>> 2019-01-31 14:22:36,884Z INFO
>> [org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
>> (default task-6344) [d8697f85-8822-4588-8d7c-1ed6a5b1d9ca] Running command:
>> ConnectStorageToVdsCommand internal: false. Entities affected :  ID:
>> aaa0----123456789aaa Type: SystemAction group
>> CREATE_STORAGE_DOMAIN with role type ADMIN
>> 2019-01-31 14:22:36,892Z INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (default task-6344) [d8697f85-8822-4588-8d7c-1ed6a5b1d9ca] START,
>> ConnectStorageServerVDSCommand(HostName = hp-1.test.lab,
>> StorageServerConnectionManagementVDSParameters:{hostId='e45920ea-572d-4d52-917d-207d82c1d305',
>> storagePoolId='----', storageType='ISCSI',
>> connectionList='[StorageServerConnections:{id='null',
>> connection='10.10.8.13',
>> iqn='iqn.2004-04.com.qnap:ts-ec1280u-rp:iscsi.test.f35296', vfsType='null',
>> mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
>> iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}),
>> log id: 347096b2
>> 2019-01-31 14:22:38,112Z INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (default task-6344) [d8697f85-8822-4588-8d7c-1ed6a5b1d9ca] FINISH,
>> ConnectStorageServerVDSCommand, return:
>> {----=0}, log id: 347096b2
>> 2019-01-31 14:22:38,437Z INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (default task-6344) [0ef49cb3-8b81-4f65-8a9d-aa5d57ca6575] START,
>> GetDeviceListVDSCommand(HostName = hp-1.test.lab,
>> GetDeviceListVDSCommandParameters:{hostId='e45920ea-572d-4d52-917d-207d82c1d305',
>> storageType='ISCSI', checkStatus='false', lunIds='null'}), log id: 2fdc334a
>> 2019-01-31 14:22:39,694Z INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (default task-6344) [0ef49cb3-8b81-4f65-8a9d-aa5d57ca6575] FINISH,
>> GetDeviceListVDSCommand, return: [], log id: 2fdc334a
>>
>> In vdsm I can't find something related to this operation...
>> It feels like is not getting the luns from the target,  although from a
>> different client ( windows machine) i can successfully connect and map the
>> lun.
>>
>>
>>
>> On Thu, Jan 31, 2019 at 4:10 PM Benny Zlotnik 
>> wrote:
>>
>>> Can you attach engine and vdsm logs?
>>>
>>> On Thu, Jan 31, 2019 at 4:04 PM Leo David  wrote:
>>>
>>>> Hello everyone,
>>>> Trying to setup an iscsi target as a storage domain,  and it seems not
>>>> to be possible.
>>>> Discovered the hosts,  the targets are displayed.
>>>> Selected one target, clicked the "Login" arrow, spinner runs a bit ,
>>>> and the arrow gets grayed out.
>>>> But no LUNs are displayed, to select from.
>>>> From this step,  I can't go further,  if I hit "OK" nothing happends.
>>>> Just as a thoughtssh into the ovirt node used as initiator, and
>>>

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-11 Thread Leo David
Hi,
"This enable host-side streaming, are you sure you want it?"
Not sure yet, but I would at least disable compression; video playing seems
to be pretty poor, and crackling (YouTube, etc.).

"AFAIK, if virsh edit exits without issue, you need to shutdown
the vm and then start it again"
I did that, and when the VM comes back on, my changes are not there
anymore...



On Mon, Feb 11, 2019 at 10:45 AM Victor Toso  wrote:

> Hi,
>
> On Sun, Feb 10, 2019 at 02:08:48PM +0200, Leo David wrote:
> > Hi,
> >
> > I am trying to disable video compression as per this thread:
> > https://lists.ovirt.org/pipermail/users/2017-January/078753.html
> >
> > The thing is that I just can't figure out where to place the following:
> >
> > 
> > 
> > 
> > 
>
> This enable host-side streaming, are you sure you want it?
>
> > 
> >
> > If I attempt to edit vm properties by using virsh and add these
> > custom settings, the configuration file gets overwritten once
> > the vm reboots.
>
> AFAIK, if virsh edit exits without issue, you need to shutdown
> the vm and then start it again. Reboot is not enough.
>
> > Any suggestions?
> >
> > Thank you,
> >
> > Leo
> >
> >
> >
> >
> > On Wed, Feb 6, 2019 at 7:17 PM Leo David  wrote:
> >
> > > Hello everyone,
> > > Any chance that this issue to be already fixed in the new 4.3 version ?
> > > Thank you !
> > >
> > > On Tue, Jan 8, 2019, 12:25 Victor Toso  > >
> > >> Hi,
> > >>
> > >> On Tue, Jan 08, 2019 at 12:08:31PM +0200, Leo David wrote:
> > >> > Thank you very mucjh,  and sorry for being so lazy to search
> > >> > for that rpm by myself. Somehow, fedora rpms missed from my
> > >> > mind.  Oh boy, it requires a lot of packages. Do you think
> > >> > would it be a good idea to temporarily install fedora repos, do
> > >> > the yum installation to get the dependencoes too and then
> > >> > disable the repo ? I am thinking to not break the ovirt node
> > >> > installation.
> > >>
> > >> The easiest path is to get the source from your current rpm,
> > >> apply the patch mentioned in previous email, build, install,
> > >> test.
> > >>
> > >> If that does not work you can rollback. If works, you can rethink
> > >> what is best.
> > >>
> > >> Cheers,
> > >>
> > >> >  yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
> > >> > Loaded plugins: enabled_repos_upload, fastestmirror,
> imgbased-persist,
> > >> > package_upload, product-id, search-disabled-repos,
> subscription-manager,
> > >> > vdsmupgrade
> > >> > This system is not registered with an entitlement server. You can
> use
> > >> > subscription-manager to register.
> > >> > Examining spice-server-0.14.1-1.fc30.x86_64.rpm:
> > >> > spice-server-0.14.1-1.fc30.x86_64
> > >> > Marking spice-server-0.14.1-1.fc30.x86_64.rpm as an update to
> > >> > spice-server-0.14.0-2.el7_5.3.x86_64
> > >> > Resolving Dependencies
> > >> > --> Running transaction check
> > >> > ---> Package spice-server.x86_64 0:0.14.0-2.el7_5.3 will be updated
> > >> > ---> Package spice-server.x86_64 0:0.14.1-1.fc30 will be an update
> > >> > --> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
> for
> > >> > package: spice-server-0.14.1-1.fc30.x86_64
> > >> > Loading mirror speeds from cached hostfile
> > >> >  * epel: ftp.nluug.nl
> > >> >  * ovirt-4.2-epel: ftp.nluug.nl
> > >> > --> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for
> > >> package:
> > >> > spice-server-0.14.1-1.fc30.x86_64
> > >> > --> Processing Dependency: libcrypto.so.1.1()(64bit) for package:
> > >> > spice-server-0.14.1-1.fc30.x86_64
> > >> > --> Processing Dependency: libgstapp-1.0.so.0()(64bit) for package:
> > >> > spice-server-0.14.1-1.fc30.x86_64
> > >> > --> Processing Dependency: libgstbase-1.0.so.0()(64bit) for package:
> > >> > spice-server-0.14.1-1.fc30.x86_64
> > >> > --> Processing Dependency: libgstreamer-1.0.so.0()(64bit) for
> package:
> > >> > spice-server-0.14.1-1.fc30.x86_64
> > >> > --> Processing Dependency: libgstvideo-1.0.

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-10 Thread Leo David
Hi,

I am trying to disable video compression as per this thread:
https://lists.ovirt.org/pipermail/users/2017-January/078753.html

The thing is that I just can't figure out where to place the following:

[the spice <graphics> XML snippet was stripped from the archive; see the reconstructed example at the end of this message]
If I attempt to edit vm properties by using virsh and add these custom
settings, the configuration file gets overwritten once the vm reboots.

Any suggestions?

Thank you,

Leo
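
The stripped snippet above was, judging by the replies, a set of child elements
of <graphics type='spice'> in the libvirt domain XML. A hedged reconstruction of
what such a compression-off snippet typically looks like (element names are
libvirt's; the exact values originally posted are not preserved, and since the
engine regenerates the domain XML at every VM start, a plain virsh edit like
this does not survive -- it would have to be injected via a vdsm hook or
engine-side configuration instead):

<graphics type='spice' autoport='yes'>
  <!-- all values illustrative: turn image/audio compression off -->
  <image compression='off'/>
  <jpeg compression='never'/>
  <zlib compression='never'/>
  <playback compression='off'/>
  <!-- streaming mode='all' is what triggers the "host-side streaming" remark in the replies -->
  <streaming mode='off'/>
</graphics>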




On Wed, Feb 6, 2019 at 7:17 PM Leo David  wrote:

> Hello everyone,
> Any chance that this issue to be already fixed in the new 4.3 version ?
> Thank you !
>
> On Tue, Jan 8, 2019, 12:25 Victor Toso 
>> Hi,
>>
>> On Tue, Jan 08, 2019 at 12:08:31PM +0200, Leo David wrote:
>> > Thank you very mucjh,  and sorry for being so lazy to search
>> > for that rpm by myself. Somehow, fedora rpms missed from my
>> > mind.  Oh boy, it requires a lot of packages. Do you think
>> > would it be a good idea to temporarily install fedora repos, do
>> > the yum installation to get the dependencoes too and then
>> > disable the repo ? I am thinking to not break the ovirt node
>> > installation.
>>
>> The easiest path is to get the source from your current rpm,
>> apply the patch mentioned in previous email, build, install,
>> test.
>>
>> If that does not work you can rollback. If works, you can rethink
>> what is best.
>>
>> Cheers,
>>
>> >  yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
>> > Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>> > package_upload, product-id, search-disabled-repos, subscription-manager,
>> > vdsmupgrade
>> > This system is not registered with an entitlement server. You can use
>> > subscription-manager to register.
>> > Examining spice-server-0.14.1-1.fc30.x86_64.rpm:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > Marking spice-server-0.14.1-1.fc30.x86_64.rpm as an update to
>> > spice-server-0.14.0-2.el7_5.3.x86_64
>> > Resolving Dependencies
>> > --> Running transaction check
>> > ---> Package spice-server.x86_64 0:0.14.0-2.el7_5.3 will be updated
>> > ---> Package spice-server.x86_64 0:0.14.1-1.fc30 will be an update
>> > --> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) for
>> > package: spice-server-0.14.1-1.fc30.x86_64
>> > Loading mirror speeds from cached hostfile
>> >  * epel: ftp.nluug.nl
>> >  * ovirt-4.2-epel: ftp.nluug.nl
>> > --> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for
>> package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: libcrypto.so.1.1()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: libgstapp-1.0.so.0()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: libgstbase-1.0.so.0()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: libgstreamer-1.0.so.0()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: libgstvideo-1.0.so.0()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: liborc-0.4.so.0()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Processing Dependency: libssl.so.1.1()(64bit) for package:
>> > spice-server-0.14.1-1.fc30.x86_64
>> > --> Finished Dependency Resolution
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>> >Requires: libgstvideo-1.0.so.0()(64bit)
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>> >Requires: libgstbase-1.0.so.0()(64bit)
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>> >Requires: libgstreamer-1.0.so.0()(64bit)
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>> >Requires: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>> >Requires: liborc-0.4.so.0()(64bit)
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>> >Requires: libcrypto.so.1.1()(64bit)
>> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
>> > (/spice-server-0.14.1-1.fc30.x86_64)
>

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-02-06 Thread Leo David
Hello everyone,
Any chance that this issue is already fixed in the new 4.3 version?
Thank you !

On Tue, Jan 8, 2019, 12:25 Victor Toso wrote:
> Hi,
>
> On Tue, Jan 08, 2019 at 12:08:31PM +0200, Leo David wrote:
> > Thank you very mucjh,  and sorry for being so lazy to search
> > for that rpm by myself. Somehow, fedora rpms missed from my
> > mind.  Oh boy, it requires a lot of packages. Do you think
> > would it be a good idea to temporarily install fedora repos, do
> > the yum installation to get the dependencoes too and then
> > disable the repo ? I am thinking to not break the ovirt node
> > installation.
>
> The easiest path is to get the source from your current rpm,
> apply the patch mentioned in previous email, build, install,
> test.
>
> If that does not work you can rollback. If works, you can rethink
> what is best.
>
> Cheers,
>
> >  yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
> > Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
> > package_upload, product-id, search-disabled-repos, subscription-manager,
> > vdsmupgrade
> > This system is not registered with an entitlement server. You can use
> > subscription-manager to register.
> > Examining spice-server-0.14.1-1.fc30.x86_64.rpm:
> > spice-server-0.14.1-1.fc30.x86_64
> > Marking spice-server-0.14.1-1.fc30.x86_64.rpm as an update to
> > spice-server-0.14.0-2.el7_5.3.x86_64
> > Resolving Dependencies
> > --> Running transaction check
> > ---> Package spice-server.x86_64 0:0.14.0-2.el7_5.3 will be updated
> > ---> Package spice-server.x86_64 0:0.14.1-1.fc30 will be an update
> > --> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) for
> > package: spice-server-0.14.1-1.fc30.x86_64
> > Loading mirror speeds from cached hostfile
> >  * epel: ftp.nluug.nl
> >  * ovirt-4.2-epel: ftp.nluug.nl
> > --> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for
> package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: libcrypto.so.1.1()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: libgstapp-1.0.so.0()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: libgstbase-1.0.so.0()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: libgstreamer-1.0.so.0()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: libgstvideo-1.0.so.0()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: liborc-0.4.so.0()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Processing Dependency: libssl.so.1.1()(64bit) for package:
> > spice-server-0.14.1-1.fc30.x86_64
> > --> Finished Dependency Resolution
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libgstvideo-1.0.so.0()(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libgstbase-1.0.so.0()(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libgstreamer-1.0.so.0()(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: liborc-0.4.so.0()(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libcrypto.so.1.1()(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libssl.so.1.1(OPENSSL_1_1_0)(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libgstapp-1.0.so.0()(64bit)
> > Error: Package: spice-server-0.14.1-1.fc30.x86_64
> > (/spice-server-0.14.1-1.fc30.x86_64)
> >Requires: libssl.so.1.1()(64bit)
> >  You could try using --skip-broken to work around the problem
> >  You could try running: rpm -Va --nofiles --nodigest
> > Uploading Enabled Repositories Report
> > Loaded plugins: fastestmirror, product-id, subscription-manager
> > This system is not registered with an entitlement server. You can use
> > subscription-manager to register.
> > Cannot upload

[ovirt-users] Re: Issues adding iscsi storage domain

2019-01-31 Thread Leo David
Thank you Benny,

from engine:

2019-01-31 14:22:36,884Z INFO
[org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
(default task-6344) [d8697f85-8822-4588-8d7c-1ed6a5b1d9ca] Running command:
ConnectStorageToVdsCommand internal: false. Entities affected :  ID:
aaa0----123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2019-01-31 14:22:36,892Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-6344) [d8697f85-8822-4588-8d7c-1ed6a5b1d9ca] START,
ConnectStorageServerVDSCommand(HostName = hp-1.test.lab,
StorageServerConnectionManagementVDSParameters:{hostId='e45920ea-572d-4d52-917d-207d82c1d305',
storagePoolId='----', storageType='ISCSI',
connectionList='[StorageServerConnections:{id='null',
connection='10.10.8.13',
iqn='iqn.2004-04.com.qnap:ts-ec1280u-rp:iscsi.test.f35296', vfsType='null',
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]', sendNetworkEventOnFailure='true'}),
log id: 347096b2
2019-01-31 14:22:38,112Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-6344) [d8697f85-8822-4588-8d7c-1ed6a5b1d9ca] FINISH,
ConnectStorageServerVDSCommand, return:
{----=0}, log id: 347096b2
2019-01-31 14:22:38,437Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
(default task-6344) [0ef49cb3-8b81-4f65-8a9d-aa5d57ca6575] START,
GetDeviceListVDSCommand(HostName = hp-1.test.lab,
GetDeviceListVDSCommandParameters:{hostId='e45920ea-572d-4d52-917d-207d82c1d305',
storageType='ISCSI', checkStatus='false', lunIds='null'}), log id: 2fdc334a
2019-01-31 14:22:39,694Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
(default task-6344) [0ef49cb3-8b81-4f65-8a9d-aa5d57ca6575] FINISH,
GetDeviceListVDSCommand, return: [], log id: 2fdc334a

In vdsm I can't find anything related to this operation...
It feels like it is not getting the LUNs from the target, although from a
different client (a Windows machine) I can successfully connect and map the
LUN.
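
A quick way to confirm whether the request even reaches vdsm on that host is to
grep for the call names and the flow_id from the engine log above (a sketch):

# on hp-1.test.lab
grep -E 'discoverSendTargets|getDeviceList|connectStorageServer' /var/log/vdsm/vdsm.log | tail -n 20
grep '0ef49cb3-8b81-4f65-8a9d-aa5d57ca6575' /var/log/vdsm/vdsm.log   # the GetDeviceList flow_id above
journalctl -u iscsid --since today | tail -n 50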



On Thu, Jan 31, 2019 at 4:10 PM Benny Zlotnik  wrote:

> Can you attach engine and vdsm logs?
>
> On Thu, Jan 31, 2019 at 4:04 PM Leo David  wrote:
>
>> Hello everyone,
>> Trying to setup an iscsi target as a storage domain,  and it seems not to
>> be possible.
>> Discovered the hosts,  the targets are displayed.
>> Selected one target, clicked the "Login" arrow, spinner runs a bit ,  and
>> the arrow gets grayed out.
>> But no LUNs are displayed, to select from.
>> From this step,  I can't go further,  if I hit "OK" nothing happends.
>> Just as a thoughtssh into the ovirt node used as initiator, and lsblk
>> command shows the block device as present.
>> So i have to cancel the "New Domain" windows without being able to add
>> the domain,  but that iscsi block device still remains present on the hosts.
>>
>> Using 4.2.8
>>
>> Any thoughts ?
>>
>> Thank you very much !
>>
>> Leo
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RRMR5TMWBKQQ3IKFDD2VQ5YELJGX4TCI/
>>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RSMEE7I7GGENDGNAHEFCDNZNDP5AUKTR/


[ovirt-users] Issues adding iscsi storage domain

2019-01-31 Thread Leo David
Hello everyone,
Trying to setup an iscsi target as a storage domain,  and it seems not to
be possible.
Discovered the hosts,  the targets are displayed.
Selected one target, clicked the "Login" arrow, spinner runs a bit ,  and
the arrow gets grayed out.
But no LUNs are displayed, to select from.
From this step, I can't go further; if I hit "OK" nothing happens.
Just as a thought... ssh into the oVirt node used as initiator, and the lsblk
command shows the block device as present.
So I have to cancel the "New Domain" window without being able to add the
domain, but that iscsi block device still remains present on the hosts.

Using 4.2.8

Any thoughts ?

Thank you very much !

Leo

-- 
Best regards, Leo David
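
Related to the leftover block device mentioned above: oVirt normally manages
the iSCSI sessions itself, but after a cancelled "New Domain" dialog the stale
session can be dropped by hand, roughly like this (target IQN and portal taken
from the engine log elsewhere in this thread; double-check nothing else is
using the session first):

iscsiadm -m session
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-ec1280u-rp:iscsi.test.f35296 -p 10.10.8.13:3260 -u
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-ec1280u-rp:iscsi.test.f35296 -p 10.10.8.13:3260 -o delete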
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RRMR5TMWBKQQ3IKFDD2VQ5YELJGX4TCI/


[ovirt-users] Re: VDO volume - storage domain size

2019-01-30 Thread Leo David
Hello,
Any news on this fix? Using 4.2.8 and the issue still seems to be here...


On Mon, Jan 14, 2019, 09:53 Leo David wrote:
> Thank you very much, Sahina.
> I will try the suggested workaround.
> Have a nice day,
>
> Leo
>
> On Mon, Jan 14, 2019, 06:47 Sahina Bose 
>> On Fri, Jan 11, 2019 at 3:23 PM Leo David  wrote:
>> >
>> > Hello Everyone,
>> > I'm I trying to benefit of vdo capabilities,  but it seems that I don;t
>> have too much luck with this.
>> > I have both created a vdo based gluster volume usign both following
>> methods:
>> > - at hyperconverged wizard cluster setup by enabling compression per
>> device
>> > - after installation, by creating a new gluster volume and enabling
>> compression
>> >
>> > In both cases,  I end up with a real device size gluster volume/storage
>> domain, although the vdo's appear in node's cockpit UI as being 10 times
>> then physical device.
>> >
>> > ie: 3 nodes,  each having 1 x 900GB ssd device, turns into 9t device
>> vdo device per host, but the storage domain (gluster  replica 3 ) ends up
>> as being 900GB   .
>> > Am I missing something ,  or maybe doing something wrong?
>> > Thank you very much !
>>
>> You're running into this bug -
>> https://bugzilla.redhat.com/show_bug.cgi?id=1629543
>>
>> As a workaround, can you try lvextend on the gluster bricks to make
>> use of the available capacity?
>>
>> >
>> > Leo
>> >
>> >
>> >
>> > Best regards, Leo David
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5CWQ2AYXCVUG2HTM3ASFDSGPRVX2M2F/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3V5TEOMN6VYVKE4AQRYRS4ATNX6MZETC/


[ovirt-users] Re: Hyperconverged setup - storage architecture - scaling

2019-01-30 Thread Leo David
Thank you very much, Sahina.
This makes things a bit clearer to me.



On Tue, Jan 29, 2019, 11:20 Sahina Bose wrote:
> On Mon, Jan 28, 2019 at 6:22 PM Leo David  wrote:
> >
> > Hello Everyone,
> > Reading through the document:
> > "Red Hat Hyperconverged Infrastructure for Virtualization 1.5
> >  Automating RHHI for Virtualization deployment"
> >
> > Regarding storage scaling,  i see the following statements:
> >
> > 2.7. SCALING
> > Red Hat Hyperconverged Infrastructure for Virtualization is supported
> for one node, and for clusters of 3, 6, 9, and 12 nodes.
> > The initial deployment is either 1 or 3 nodes.
> > There are two supported methods of horizontally scaling Red Hat
> Hyperconverged Infrastructure for Virtualization:
> >
> > 1 Add new hyperconverged nodes to the cluster, in sets of three, up to
> the maximum of 12 hyperconverged nodes.
> >
> > 2 Create new Gluster volumes using new disks on existing hyperconverged
> nodes.
> > You cannot create a volume that spans more than 3 nodes, or expand an
> existing volume so that it spans across more than 3 nodes at a time
> >
> > 2.9.1. Prerequisites for geo-replication
> > Be aware of the following requirements and limitations when configuring
> geo-replication:
> > One geo-replicated volume only
> > Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
> Virtualization) supports only one geo-replicated volume. Red Hat recommends
> backing up the volume that stores the data of your virtual machines, as
> this is usually contains the most valuable data.
> > --
> >
> > Also  in oVirtEngine UI, when I add a brick to an existing volume i get
> the following warning:
> >
> > "Expanding gluster volume in a hyper-converged setup is not recommended
> as it could lead to degraded performance. To expand storage for cluster, it
> is advised to add additional gluster volumes."
> >
> > Those things are raising a couple of questions that maybe for some for
> you guys are easy to answer, but for me it creates a bit of confusion...
> > I am also referring to RedHat product documentation,  because I  treat
> oVirt as production-ready as RHHI is.
>
> oVirt and RHHI though as close to each other as possible do differ in
> the versions used of the various components and the support
> limitations imposed.
> >
> > 1. Is there any reason for not going to distributed-replicated volumes (
> ie: spread one volume across 6,9, or 12 nodes ) ?
> > - ie: is recomanded that in a 9 nodes scenario I should have 3 separated
> volumes,  but how should I deal with the folowing question
>
> The reason for this limitation was a bug encountered when scaling a
> replica 3 volume to distribute-replica. This has since been fixed in
> the latest release of glusterfs.
>
> >
> > 2. If only one geo-replicated volume can be configured,  how should I
> deal with 2nd and 3rd volume replication for disaster recovery
>
> It is possible to have more than 1 geo-replicated volume as long as
> your network and CPU resources support this.
>
> >
> > 3. If the limit of hosts per datacenter is 250, then (in theory ) the
> recomended way in reaching this treshold would be to create 20 separated
> oVirt logical clusters with 12 nodes per each ( and datacenter managed from
> one ha-engine ) ?
> >
> > 4. In present, I have the folowing one 9 nodes cluster , all hosts
> contributing with 2 disks each  to a single replica 3 distributed
> replicated volume. They where added to the volume in the following order:
>   > node1 - disk1
> > node2 - disk1
> > ..
> > node9 - disk1
> > node1 - disk2
> > node2 - disk2
> > ......
> > node9 - disk2
> > At the moment, the volume is arbitrated, but I intend to go for full
> distributed replica 3.
> >
> > Is this a bad setup ? Why ?
> > It oviously brakes the redhat recommended rules...
> >
> > Is there anyone so kind to discuss on these things ?
> >
> > Thank you very much !
> >
> > Leo
> >
> >
> > --
> > Best regards, Leo David
> >
> >
> >
> >
> > --
> > Best regards, Leo David
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGZZJIT4JSLYSOVLVYZADXJTWVEM42KY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6K3U4AU2RTRYTC3KKCIB6TKXXUJB4O7/


[ovirt-users] Hyperconverged setup - storage architecture - scaling

2019-01-28 Thread Leo David
Hello Everyone,
Reading through the document:
"Red Hat Hyperconverged Infrastructure for Virtualization 1.5
 Automating RHHI for Virtualization deployment"

Regarding storage scaling,  i see the following statements:





2.7. SCALING
Red Hat Hyperconverged Infrastructure for Virtualization is supported for one
node, and for clusters of 3, 6, 9, and 12 nodes.
The initial deployment is either 1 or 3 nodes.
There are two supported methods of horizontally scaling Red Hat Hyperconverged
Infrastructure for Virtualization:


1. Add new hyperconverged nodes to the cluster, in sets of three, up to the
maximum of 12 hyperconverged nodes.


2. Create new Gluster volumes using new disks on existing hyperconverged nodes.
You cannot create a volume that spans more than 3 nodes, or expand an existing
volume so that it spans across more than 3 nodes at a time.




2.9.1. Prerequisites for geo-replication
Be aware of the following requirements and limitations when configuring
geo-replication:
One geo-replicated volume only
Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for
Virtualization) supports only one geo-replicated volume. Red Hat recommends
backing up the volume that stores the data of your virtual machines, as this
usually contains the most valuable data.
--

Also  in oVirtEngine UI, when I add a brick to an existing volume i get the
following warning:

"Expanding gluster volume in a hyper-converged setup is not recommended as
it could lead to degraded performance. To expand storage for cluster, it is
advised to add additional gluster volumes."

Those things raise a couple of questions that are maybe easy for some of you
guys to answer, but for me they create a bit of confusion...
I am also referring to the Red Hat product documentation, because I treat
oVirt as being as production-ready as RHHI is.

1. Is there any reason for not going to distributed-replicated volumes
(i.e. spread one volume across 6, 9, or 12 nodes)?
- i.e. it is recommended that in a 9-node scenario I should have 3 separate
volumes, but then how should I deal with the following question...

2. If only one geo-replicated volume can be configured, how should I deal
with 2nd and 3rd volume replication for disaster recovery?

3. If the limit of hosts per datacenter is 250, then (in theory) the
recommended way of reaching this threshold would be to create 20 separate
oVirt logical clusters with 12 nodes each (and the datacenter managed from
one HA engine)?

4. At present, I have the following 9-node cluster, all hosts contributing
2 disks each to a single replica 3 distributed-replicated volume. They were
added to the volume in the following order:
node1 - disk1
node2 - disk1
..
node9 - disk1
node1 - disk2
node2 - disk2
..
node9 - disk2
At the moment, the volume is arbitrated, but I intend to go for full
distributed replica 3.

Is this a bad setup? Why?
It obviously breaks the Red Hat recommended rules...

Is there anyone so kind to discuss on these things ?

Thank you very much !

Leo


-- 
Best regards, Leo David
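
On question 1 above: expanding an existing replica 3 volume across more nodes
is done by adding bricks in multiples of the replica count, which turns it into
a distribute-replicate volume. A hedged sketch (volume name and brick paths are
made up):

gluster volume add-brick vmstore replica 3 \
    node10:/gluster_bricks/vmstore/brick \
    node11:/gluster_bricks/vmstore/brick \
    node12:/gluster_bricks/vmstore/brick
gluster volume rebalance vmstore start   # optional: spread existing files onto the new replica set
gluster volume info vmstore              # type should now read "Distributed-Replicate", 2 x 3 bricks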




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QGZZJIT4JSLYSOVLVYZADXJTWVEM42KY/


[ovirt-users] Re: Deploying single instance - error

2019-01-28 Thread Leo David
Hi Gobinda,
gdeploy --version
gdeploy 2.0.2

yum list installed | grep gdeploy
gdeploy.noarch    2.0.8-1.el7    installed

Thank you !


On Mon, Jan 28, 2019 at 10:56 AM Gobinda Das  wrote:

> Hi David,
>  Can you please check the  gdeploy version?
> This bug was fixed last year:
> https://bugzilla.redhat.com/show_bug.cgi?id=1626513
> And is part of: gdeploy-2.0.2-29
>
> On Sun, Jan 27, 2019 at 2:38 PM Leo David  wrote:
>
>> Hi,
>> It seems so that I had to manually add the sections, to make the scrip
>> working:
>> [diskcount]
>> 12
>> [stripesize]
>> 256
>>
>> It looks like ansible is still searching for these sections regardless
>> that I have configured "jbod"  in the wizard...
>>
>> Thanks,
>>
>> Leo
>>
>>
>>
>> On Sun, Jan 27, 2019 at 10:49 AM Leo David  wrote:
>>
>>> Hello Everyone,
>>> Using version 4.2.8, ( ovirt-node-ng-installer-4.2.0-2019012606.el7.iso
>>> ) for deploying one node instance by following from within CockpitUI seems
>>> not to be possible.
>>> Here's the generated inventory ( i've specified "jbod"  in the wizard ):
>>>
>>> #gdeploy configuration generated by cockpit-gluster plugin
>>> [hosts]
>>> 192.168.80.191
>>>
>>> [script1:192.168.80.191]
>>> action=execute
>>> ignore_script_errors=no
>>> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>>> 192.168.80.191
>>> [disktype]
>>> jbod
>>> [service1]
>>> action=enable
>>> service=chronyd
>>> [service2]
>>> action=restart
>>> service=chronyd
>>> [shell2]
>>> action=execute
>>> command=vdsm-tool configure --force
>>> [script3]
>>> action=execute
>>> file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
>>> ignore_script_errors=no
>>> [pv1:192.168.80.191]
>>> action=create
>>> devices=sdb
>>> ignore_pv_errors=no
>>> [vg1:192.168.80.191]
>>> action=create
>>> vgname=gluster_vg_sdb
>>> pvname=sdb
>>> ignore_vg_errors=no
>>> [lv1:192.168.80.191]
>>> action=create
>>> lvname=gluster_lv_engine
>>> ignore_lv_errors=no
>>> vgname=gluster_vg_sdb
>>> mount=/gluster_bricks/engine
>>> size=230GB
>>> lvtype=thick
>>> [selinux]
>>> yes
>>> [service3]
>>> action=restart
>>> service=glusterd
>>> slice_setup=yes
>>> [firewalld]
>>> action=add
>>>
>>> ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
>>> services=glusterfs
>>> [script2]
>>> action=execute
>>> file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
>>> [shell3]
>>> action=execute
>>> command=usermod -a -G gluster qemu
>>> [volume1]
>>> action=create
>>> volname=engine
>>> transport=tcp
>>>
>>> key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
>>> value=36,36,on,32,on,off,30,off,on,off,off,off,enable
>>> brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
>>> ignore_volume_errors=no
>>>
>>> It does not get to finish,  throwing the following error:
>>>
>>> PLAY [gluster_servers]
>>> *
>>> TASK [Create volume group on the disks]
>>> 
>>> changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg':
>>> u'gluster_vg_sdb'})
>>> PLAY RECAP
>>> *
>>> 192.168.80.191 : ok=1changed=1unreachable=0
>>> failed=0
>>> *Error: Section diskcount not found in the configuration file*
>>>
>>> Any thoughts ?
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Best regards, Leo David
>>>
>>
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z2X45A6V6WQC3DBH6DGENJGBAVKNPY5T/
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GXIQW5S74B2ICZUXYULCYGDN2S3H4V6Y/


[ovirt-users] Re: Deploying single instance - error

2019-01-27 Thread Leo David
Hi,
It seems that I had to manually add the following sections to make the script
work:
[diskcount]
12
[stripesize]
256

It looks like ansible is still searching for these sections regardless of the
fact that I have configured "jbod" in the wizard...
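
For reference, the workaround ends up looking like this in the generated
gdeploy config (values as used above; with disktype jbod these two sections
appear to be read only to satisfy the parser rather than to compute any RAID
alignment):

[disktype]
jbod
[diskcount]
12
[stripesize]
256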

Thanks,

Leo



On Sun, Jan 27, 2019 at 10:49 AM Leo David  wrote:

> Hello Everyone,
> Using version 4.2.8, ( ovirt-node-ng-installer-4.2.0-2019012606.el7.iso
> ) for deploying one node instance by following from within CockpitUI seems
> not to be possible.
> Here's the generated inventory ( i've specified "jbod"  in the wizard ):
>
> #gdeploy configuration generated by cockpit-gluster plugin
> [hosts]
> 192.168.80.191
>
> [script1:192.168.80.191]
> action=execute
> ignore_script_errors=no
> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
> 192.168.80.191
> [disktype]
> jbod
> [service1]
> action=enable
> service=chronyd
> [service2]
> action=restart
> service=chronyd
> [shell2]
> action=execute
> command=vdsm-tool configure --force
> [script3]
> action=execute
> file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
> ignore_script_errors=no
> [pv1:192.168.80.191]
> action=create
> devices=sdb
> ignore_pv_errors=no
> [vg1:192.168.80.191]
> action=create
> vgname=gluster_vg_sdb
> pvname=sdb
> ignore_vg_errors=no
> [lv1:192.168.80.191]
> action=create
> lvname=gluster_lv_engine
> ignore_lv_errors=no
> vgname=gluster_vg_sdb
> mount=/gluster_bricks/engine
> size=230GB
> lvtype=thick
> [selinux]
> yes
> [service3]
> action=restart
> service=glusterd
> slice_setup=yes
> [firewalld]
> action=add
>
> ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
> services=glusterfs
> [script2]
> action=execute
> file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
> [shell3]
> action=execute
> command=usermod -a -G gluster qemu
> [volume1]
> action=create
> volname=engine
> transport=tcp
>
> key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
> value=36,36,on,32,on,off,30,off,on,off,off,off,enable
> brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
> ignore_volume_errors=no
>
> It does not get to finish,  throwing the following error:
>
> PLAY [gluster_servers]
> *
> TASK [Create volume group on the disks]
> 
> changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg':
> u'gluster_vg_sdb'})
> PLAY RECAP
> *********
> 192.168.80.191 : ok=1changed=1unreachable=0
> failed=0
> *Error: Section diskcount not found in the configuration file*
>
> Any thoughts ?
>
>
>
>
>
>
> --
> Best regards, Leo David
>


-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z2X45A6V6WQC3DBH6DGENJGBAVKNPY5T/


[ovirt-users] Deploying single instance - error

2019-01-27 Thread Leo David
Hello Everyone,
Using version 4.2.8 (ovirt-node-ng-installer-4.2.0-2019012606.el7.iso),
deploying a one-node instance by following the wizard from within the Cockpit
UI seems not to be possible.
Here's the generated inventory ( i've specified "jbod"  in the wizard ):

#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
192.168.80.191

[script1:192.168.80.191]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
192.168.80.191
[disktype]
jbod
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no
[pv1:192.168.80.191]
action=create
devices=sdb
ignore_pv_errors=no
[vg1:192.168.80.191]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1:192.168.80.191]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=230GB
lvtype=thick
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
ignore_volume_errors=no

It does not get to finish,  throwing the following error:

PLAY [gluster_servers]
*
TASK [Create volume group on the disks]

changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg':
u'gluster_vg_sdb'})
PLAY RECAP
*
192.168.80.191 : ok=1changed=1unreachable=0
failed=0
*Error: Section diskcount not found in the configuration file*

Any thoughts ?






-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OOOVWZWHGQGTCXQGJ3JQBJDCDGXDZW5J/


[ovirt-users] Vdo based storage domain

2019-01-21 Thread Leo David
Hi everyone,
I have read and tried to understand how VDO works, and it seems pretty
impressive, but I feel like I am missing something.
At the end of the day, my question is:
Can I entirely rely on the 10 times increased usable space?
As an example, I have created a 4.8TB volume based on 3 x 480GB Samsung
SM863a enterprise SSD devices (full replica 3).
Should I be confident that I can use 4.8TB for my storage domain (VMs data
store) without worrying about how repetitive the data will be in the future?
I am just trying to think as an end-user consumer that cannot predict whether
the data that will be written to the device will be compressible or not, and
only sees 4.8TB as truly usable space.
Any thoughts, any experience to share?
I'm sorry if my question sounds noob, but I'm just trying to get under the end
user's hat.
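
One way to keep an eye on this in practice (a sketch, run on each node hosting
a VDO device): vdostats reports the physical usage and the achieved space
saving, which is the number to trust rather than the 10x logical size:

vdostats --human-readable
# columns: Device / Size / Used / Available / Use% / Space saving%
# (the device name will be whatever the VDO volume was called, e.g. vdo_sdb)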
Thank you very much,

Have a good day !
Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UD3NXFMWRFR6DFBNLJMRBRWPUMATERIM/


[ovirt-users] Re: External private ova / image repository

2019-01-14 Thread Leo David
Hi,
Any thoughts on these, any ideas?
Maybe buying a paid Cinder service from a cloud provider would do the job,
or implementing some sort of image repo in my datacenter and exposing it over
the web with some form of auth?
Any case scenarios ?
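
One low-tech pattern that fits the "expose it over the web with some form of
auth" idea: a plain HTTPS file server with basic auth, from which OVAs are
downloaded to a host and then brought in through the engine's OVA import. A
rough sketch on CentOS 7 (paths, hostname and user name are made up):

yum install -y httpd mod_ssl httpd-tools
mkdir -p /var/www/image-repo/ovas
htpasswd -c /etc/httpd/conf.d/image-repo.htpasswd template-user

cat > /etc/httpd/conf.d/image-repo.conf <<'EOF'
Alias /ovas /var/www/image-repo/ovas
<Directory /var/www/image-repo/ovas>
    Options +Indexes
    AuthType Basic
    AuthName "private image repo"
    AuthUserFile /etc/httpd/conf.d/image-repo.htpasswd
    Require valid-user
</Directory>
EOF

systemctl enable --now httpd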

Thank you very much !

On Fri, Jan 11, 2019, 12:15 Leo David wrote:
> Hello Everyone,
> I am not sure what would it be the pieces needed to have an external repo
> that I can manage and use at the client site for downloading customized
> templates.
> ie: how an external docker repo works
> Any ideeas on this ?
> Thank you !
> Have a nice day,
>
> Leo
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJYBVPU4HXGOH644DIAPGCXCJ5ALIM6R/


[ovirt-users] Re: VDO volume - storage domain size

2019-01-13 Thread Leo David
Thank you very much, Sahina.
I will try the suggested workaround.
Have a nice day,

Leo
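
For the record, the lvextend workaround quoted below boils down to growing the
brick LV (and its XFS filesystem) into the free space that the oversized VDO
device exposes, on every replica node. A hedged sketch -- the VG/LV/mount names
are just typical wizard defaults (assumptions), so check with lvs/df first:

lvs -a gluster_vg_sdb                                        # confirm the actual pool / LV names
lvextend -L +8T /dev/gluster_vg_sdb/gluster_thinpool_sdb     # thin pool first (name is an assumption)
lvextend -L +8T /dev/gluster_vg_sdb/gluster_lv_data          # then the brick LV
xfs_growfs /gluster_bricks/data                              # grow the filesystem the brick lives on
# repeat on all replica nodes; the storage domain size follows the smallest brick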

On Mon, Jan 14, 2019, 06:47 Sahina Bose wrote:
> On Fri, Jan 11, 2019 at 3:23 PM Leo David  wrote:
> >
> > Hello Everyone,
> > I'm I trying to benefit of vdo capabilities,  but it seems that I don;t
> have too much luck with this.
> > I have both created a vdo based gluster volume usign both following
> methods:
> > - at hyperconverged wizard cluster setup by enabling compression per
> device
> > - after installation, by creating a new gluster volume and enabling
> compression
> >
> > In both cases,  I end up with a real device size gluster volume/storage
> domain, although the vdo's appear in node's cockpit UI as being 10 times
> then physical device.
> >
> > ie: 3 nodes,  each having 1 x 900GB ssd device, turns into 9t device vdo
> device per host, but the storage domain (gluster  replica 3 ) ends up as
> being 900GB   .
> > Am I missing something ,  or maybe doing something wrong?
> > Thank you very much !
>
> You're running into this bug -
> https://bugzilla.redhat.com/show_bug.cgi?id=1629543
>
> As a workaround, can you try lvextend on the gluster bricks to make
> use of the available capacity?
>
> >
> > Leo
> >
> >
> >
> > Best regards, Leo David
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5CWQ2AYXCVUG2HTM3ASFDSGPRVX2M2F/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I65VXVDUUTGHHQMHYGJ4XMV4HDVPLCRN/


[ovirt-users] External private ova / image repository

2019-01-11 Thread Leo David
Hello Everyone,
I am not sure what the pieces needed would be to have an external repo
that I can manage and use at the client site for downloading customized
templates.
i.e. something like how an external Docker repo works.
Any ideas on this?
Thank you !
Have a nice day,

Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R2NLXW2LWOGZOHBRNT3RAGYLUWSZRBYZ/


[ovirt-users] VDO volume - storage domain size

2019-01-11 Thread Leo David
Hello Everyone,
I'm trying to benefit from VDO capabilities, but it seems that I don't
have much luck with this.
I have created a VDO-based Gluster volume using both of the following methods:
- at hyperconverged wizard cluster setup, by enabling compression per device
- after installation, by creating a new gluster volume and enabling
compression

In both cases, I end up with a real-device-size gluster volume/storage
domain, although the VDOs appear in the node's Cockpit UI as being 10 times
the physical device.

i.e. 3 nodes, each having 1 x 900GB SSD device, turn into a 9TB VDO device
per host, but the storage domain (Gluster replica 3) ends up as being 900GB.
Am I missing something, or maybe doing something wrong?
Thank you very much !

Leo



Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F5CWQ2AYXCVUG2HTM3ASFDSGPRVX2M2F/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-08 Thread Leo David
Thank you very much, and sorry for being so lazy to search for that rpm
by myself. Somehow, Fedora rpms slipped my mind.
Oh boy, it requires a lot of packages. Do you think it would be a good idea
to temporarily install the Fedora repos, do the yum installation to get the
dependencies too, and then disable the repo? I am trying not to break the
oVirt node installation.

 yum localinstall spice-server-0.14.1-1.fc30.x86_64.rpm
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
package_upload, product-id, search-disabled-repos, subscription-manager,
vdsmupgrade
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Examining spice-server-0.14.1-1.fc30.x86_64.rpm:
spice-server-0.14.1-1.fc30.x86_64
Marking spice-server-0.14.1-1.fc30.x86_64.rpm as an update to
spice-server-0.14.0-2.el7_5.3.x86_64
Resolving Dependencies
--> Running transaction check
---> Package spice-server.x86_64 0:0.14.0-2.el7_5.3 will be updated
---> Package spice-server.x86_64 0:0.14.1-1.fc30 will be an update
--> Processing Dependency: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit) for
package: spice-server-0.14.1-1.fc30.x86_64
Loading mirror speeds from cached hostfile
 * epel: ftp.nluug.nl
 * ovirt-4.2-epel: ftp.nluug.nl
--> Processing Dependency: libssl.so.1.1(OPENSSL_1_1_0)(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libcrypto.so.1.1()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstapp-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstbase-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstreamer-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libgstvideo-1.0.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: liborc-0.4.so.0()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Processing Dependency: libssl.so.1.1()(64bit) for package:
spice-server-0.14.1-1.fc30.x86_64
--> Finished Dependency Resolution
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstvideo-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstbase-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstreamer-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: liborc-0.4.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libcrypto.so.1.1()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libssl.so.1.1(OPENSSL_1_1_0)(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libgstapp-1.0.so.0()(64bit)
Error: Package: spice-server-0.14.1-1.fc30.x86_64
(/spice-server-0.14.1-1.fc30.x86_64)
   Requires: libssl.so.1.1()(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Cannot upload enabled repos report, is this client registered?
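For reference, the mechanics of keeping such an extra repo disabled by default and enabling it for a single transaction only would look roughly like this (a sketch; the repo file name, baseurl and Fedora release below are placeholder assumptions, and pulling Fedora packages onto an oVirt node remains risky):

# Sketch: define the repo but leave it disabled by default
cat > /etc/yum.repos.d/fedora-temp.repo <<'EOF'
[fedora-temp]
name=Fedora (temporary, for spice-server dependencies)
baseurl=https://dl.fedoraproject.org/pub/fedora/linux/releases/30/Everything/x86_64/os/
enabled=0
gpgcheck=0
EOF

# Enable it only for this one transaction, so it stays disabled afterwards
yum --enablerepo=fedora-temp localinstall spice-server-0.14.1-1.fc30.x86_64.rpm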

Thank you  !

Leo

On Tue, Jan 8, 2019 at 10:19 AM Victor Toso  wrote:

> Hi,
>
> On Mon, Jan 07, 2019 at 07:29:13PM +0200, Leo David wrote:
> > Thank you very much Victor,
> > Is there an rpm that I can easily install on all the nodes, or will I need
> > to build that version from source?
>
> You can fetch the release of 0.14.1 version at
> https://gitlab.freedesktop.org/spice/spice/tags
>
> You can fetch an rpm from Fedora here
> https://koji.fedoraproject.org/koji/buildinfo?buildID=1138474
>
> Cheers,
>
> > On Mon, Jan 7, 2019, 19:14 Victor Toso wrote:
> > > Hi,
> > >
> > > On Mon, Jan 07, 2019 at 07:00:04PM +0200, Leo David wrote:
> > > > Thank you very much !
> > > >
> > > > I have modified
> /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties
> > > file
> > > > and added the last line
> > > >
> > > > # Windows10x64
> > > > os.windows_10x64.id.value = 27
> > > > os.windows_10x64.name.value = Windows 10 x64
> > > > os.windows_10x64.derivedFrom.value = windows_8x64
> 

[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-07 Thread Leo David
Thank you very much Victor,
Is there an rpm that I can easily install on all the nodes, or will I need
to build that version from source?


On Mon, Jan 7, 2019, 19:14 Victor Toso wrote:

> Hi,
>
> On Mon, Jan 07, 2019 at 07:00:04PM +0200, Leo David wrote:
> > Thank you very much !
> >
> > I have modified /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties
> file
> > and added the last line
> >
> > # Windows10x64
> > os.windows_10x64.id.value = 27
> > os.windows_10x64.name.value = Windows 10 x64
> > os.windows_10x64.derivedFrom.value = windows_8x64
> > os.windows_10x64.productKey.value =
> > os.windows_10x64.resources.maximum.ram.value = 2097152
> > os.windows_10x64.cpu.unsupported.value = conroe, opteron_g1
> > os.windows_10x64.sysprepPath.value =
> > ${ENGINE_USR}/conf/sysprep/sysprep.w10x64
> > os.windows_10x64.devices.display.vramMultiplier.value = 2
> >
> > The vm has the "Windows10x64" profile configured
> >
> > Restarted the ovirt-engine vm, powered on the Windows 10 vm. No
> > difference... The console is very slow, almost unusable.
> >
> > Also, I tried to upgrade spice-server on the node, but it seems that there
> > are no updates available. At the moment, the installed version is:
> >  spice-server.x86_64   0.14.0-2.el7_5.3
>
> Just for reference, the patch in spice-server that should help is
> from the mail thread
>
>
> https://lists.freedesktop.org/archives/spice-devel/2018-June/044237.html
>
> Merged as
>
>
> https://gitlab.freedesktop.org/spice/spice/commit/ca4984570f425e87e92abe5f62f9687bb55c1e14
>
> Looking at the repo with git tag --contains ca4984570f425e87e92
> it shows v0.14.1.
>
> 0.14.0-2 probably does not contain that. Either update to 0.14.1
> or backport the patch. It does need to shutdown and start the VM
> again.
>
> > Any thoughts ?
> >
> > Thank you !
>
> I hope it helps ;)
>
> Cheers,
>
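For reference, checking whether a given host already carries that fix could look like this (a sketch; the commit hash is the one mentioned above):

# Installed spice-server version on the host
rpm -q spice-server

# From a spice source checkout, list the release tags containing the fix
git tag --contains ca4984570f425e87e92abe5f62f9687bb55c1e14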
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5BXQK3C7XG3VDXJLBCPLD3VOFRQ4VKUQ/


[ovirt-users] Re: Spice console very poor performance for Windows 10 vm

2019-01-07 Thread Leo David
Thank you very much !

I have modified /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties file
and added the last line

# Windows10x64
os.windows_10x64.id.value = 27
os.windows_10x64.name.value = Windows 10 x64
os.windows_10x64.derivedFrom.value = windows_8x64
os.windows_10x64.productKey.value =
os.windows_10x64.resources.maximum.ram.value = 2097152
os.windows_10x64.cpu.unsupported.value = conroe, opteron_g1
os.windows_10x64.sysprepPath.value =
${ENGINE_USR}/conf/sysprep/sysprep.w10x64
os.windows_10x64.devices.display.vramMultiplier.value = 2
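As a side note, as far as I recall the engine reads the files under /etc/ovirt-engine/osinfo.conf.d/ in name order, with keys in later files overriding earlier ones, so the same override could also live in a separate file that engine upgrades do not overwrite, for example (hypothetical file name):

# /etc/ovirt-engine/osinfo.conf.d/10-win10-vram.properties
os.windows_10x64.devices.display.vramMultiplier.value = 2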

The vm has the "Windows10x64" profile configured

Restarted the ovirt-engine vm, powered on the Windows 10 vm. No
difference... The console is very slow, almost unusable.
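One thing worth checking (a sketch, assuming shell access to the host and that the VM is named win10) is whether the multiplier actually reached the libvirt domain definition:

# Read-only libvirt connection on the host currently running the VM
virsh -r list
virsh -r dumpxml win10 | grep -A3 '<video>'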

Also, I tried to upgrade spice-server on the node, but it seems that there
are no updates available. At the moment, the installed version is:
 spice-server.x86_64   0.14.0-2.el7_5.3

Any thoughts ?

Thank you !

Leo



On Mon, Jan 7, 2019 at 5:03 PM Victor Toso  wrote:

> Hi,
>
> See,
>
>
> https://lists.freedesktop.org/archives/spice-devel/2018-October/046023.html
>
> So, update your spice-server too (host).
>
> Cheers,
>
> On Sun, Jan 06, 2019 at 09:57:20PM +0200, Leo David wrote:
> > Hello Everyone,
> > Maybe I am doing something wrong, but the spice console seems to be very
> > laggy and slow for Windows 10 vms. I have tried both qxl and qxl-dod
> > drivers, but no luck so far...
> > As a note, the Win 2012R2 vm console is running fine; the problem seems
> > to only affect Windows 10.
> > Any ideas what I should do to sort this out?
> > Thank you very much !
> >
> > Leo
> >
> >
> > --
> > Best regards, Leo David
>
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZPQQELK74526J3Y6FOX7JG7WHJ2GCMOS/
>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BUALJVMBVLGUTHX72XUQSKYNBMOX4T3V/

