[ovirt-users] Re: Info about procedure to shutdown hosted engine VM

2018-09-28 Thread Giuseppe Ragusa
Hi all,
sorry for being late to such an interesting thread.

I discussed almost this same issue (properly and programmatically
shutting down a complete oVirt environment in a way that also
guarantees a clean and easy power up later) privately with some friends
some time ago.
Please note that the issue has already been discussed on the mailing
list before (we had started from those hints):
http://lists.ovirt.org/pipermail/users/2017-August/083667.html

I will translate here from Italian our description of the scenario,
hoping to add something to the discussion (maybe simply as another
use case):
Setup:

 * We are talking about a hyperconverged oVirt+GlusterFS (HE-HC) setup
   (let's say 1 or 3 nodes, but more should work the same)
 * We are talking about abusing the "hyperconverged" term above (so
   CTDB/Samba/Gluster-NFS/Gluster-block are also running, directly on the
   nodes) ;-)
Business case:

 * Let's say that we are in a small business setup and we do not have
   the luxury of diesel-powered generators guaranteeing no black-outs
 * Let's say that we have (intelligent) UPSs with limited battery, so
   that we must make sure that a clean global power down gets initiated
   as soon as the UPSs signal that a certain low threshold has been
   passed (threshold to be carefully defined in order to give enough
   time for a clean shutdown)
 * Let's say that those UPSs may be:
   * 1 UPS powering everything (smells like a single point of failure,
 but could be)
   * 2 UPSs with all physical equipment having redundant (2) power cords
   * 3 or more UPSs somehow variously connected
 * Let's say that the UPSs may be network-monitored (SNMP on the
   ovirtmgmt network) or directly attached to the nodes (USB/serial)
General strategy leading to shutdown decision:

 * We want to centralize UPS management and use something like NUT [1]
   running on the Engine vm (see the configuration sketch after this list)
 * Network-controlled UPSs will be directly controlled by NUT running on
   the Engine vm, while directly attached UPSs (USB/serial) will be
   controlled by NUT running on the nodes they are attached to, but only
   in a "proxy" mode (relaying actual control/logic to the NUT service
   running on the Engine vm)
 * A proper logic will be devised (knowing the capacity of each UPS, the
   load it sustains and what powering down the connected equipment
   actually means in view of quorum maintenance) in order to decide
   whether a partial power down or a complete global power down is
   needed, in case only a subset of UPSs should experience a low-battery
   event (obviously a low-battery condition on all UPSs means global
   power down)
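
To make the above a bit more concrete, here is a minimal sketch of what the
NUT side on the Engine vm might look like (UPS names, addresses, credentials
and thresholds are made up, the exact driver depends on the actual UPS
models, and the shutdown script name is hypothetical):

# /etc/ups/ups.conf on the Engine vm: one entry per network-monitored UPS
# (directly attached UPSs would instead be defined on the node they hang
# from, with the Engine vm only monitoring them remotely)
cat >> /etc/ups/ups.conf <<'EOF'
[ups1]
    driver = snmp-ups
    port = 192.168.1.201
EOF

# /etc/ups/upsmon.conf on the Engine vm: when the monitored supplies reach
# the on-battery + low-battery state, NUT invokes SHUTDOWNCMD, which we
# point at our own (hypothetical) script implementing the partial/global
# power-down logic described above
cat >> /etc/ups/upsmon.conf <<'EOF'
MONITOR ups1@localhost 1 monuser secret master
MONITOR ups2@node1.example.com 1 monuser secret slave
MINSUPPLIES 1
SHUTDOWNCMD "/usr/local/sbin/ovirt-global-shutdown"
EOF
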
Detailed strategy of shutdown implementation:

 * A partial power down (only some nodes) means:
   * Those nodes will be put in local maintenance (vms get automatically
 migrated to other nodes or cleanly shut down if migration is
 impossible because of constraints or limited resources; shutdown of
 vms should respect a proper order, using tags, dependency rules, HA
 status or other hints) but without stopping GlusterFS services
 (since there are further services depending on those, see below)
   * Services running on those nodes get cleanly stopped:
 * Proper stopping of oVirt HA Agent and Broker services on
   those nodes
 * Proper stopping of CTDB (brings down Samba too) and Gluster-block
   (NFS-Ganesha too, if used instead of Gluster-NFS) services on
   those nodes
 * Clean unmounting of all still-mounted GlusterFS volumes on
   those nodes
   * Clean OS poweroff of those nodes
 * A global power down of everything means (see the sketch after this list):
   * All guest vms (except the Engine) get cleanly shut down (by means
 of oVirt guest agent), possibly in a proper dependency order (using
 tags, dependency rules, HA status or other hints)
   * All storage domains (except the Engine one) are put in maintenance
   * Global oVirt maintenance is activated (no more HA actions to
 guarantee that the Engine is up)
   * Clean OS poweroff of the Engine vm
   * Proper stopping of oVirt HA Agent and Broker services on all nodes
   * Proper stopping of CTDB (brings down Samba too) and Gluster-
 block (NFS-Ganesha too, if used instead of Gluster-NFS) services
 on all nodes
   * Clean unmounting of all still-mounted GlusterFS volumes on all
 nodes
   * Clean stop of all GlusterFS volumes (issued from a single,
 chosen node)
   * Clean OS poweroff of all nodes
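
As a reference, a very rough shell sketch of the global power down path
follows (vm shutdown and storage domain maintenance are assumed to be driven
through the Engine API or Ansible before this runs; volume names and the
optional service names are illustrative):

# On the node currently hosting the Engine vm: stop HA monitoring of the
# Engine, then cleanly shut the Engine vm down
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown

# On every node: stop the hosted-engine HA services
systemctl stop ovirt-ha-agent ovirt-ha-broker

# On every node: stop the extra storage services layered on GlusterFS
systemctl stop ctdb              # brings Samba down with it
systemctl stop gluster-blockd    # if Gluster-block is used
systemctl stop nfs-ganesha       # only if used instead of Gluster-NFS

# On every node: unmount any still-mounted GlusterFS volumes
umount -l /rhev/data-center/mnt/glusterSD/*

# On a single, chosen node: cleanly stop all GlusterFS volumes
for vol in engine data vmstore; do    # volume names are placeholders
    gluster --mode=script volume stop "$vol"
done

# On every node: clean OS poweroff
poweroff
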
Sorry for the lengthy email :-)

Many thanks.

Best regards,
Giuseppe

PS: I will read through the official Ansible role for shutdown asap (I surely
still have a lot to learn about writing proper Ansible playbooks... :-D ). I
just published our Ansible mockup [2] of the above-detailed global strategy,
but it's based on statically collected info and must be run from an external
machine, to say nothing of my awful Ansible style and the complete lack of
the NUT logic and configuration part.
On Wed, Sep 12, 2018, at 16:15, Simone Tiraboschi 

[ovirt-users] R: Re: Failed to deploy ovirt engine with "hosted-engine --deploy"

2018-09-08 Thread Giuseppe Ragusa
Hi all,
I confirm that:

1) the machine type pc-i440fx-rhel7.2.0 is enough to solve the problem

2) the OVEHOSTED_VM/emulatedMachine variable in the answer file is NOT enough
(the engine vm hangs again as soon as it gets rebooted, after the successful
initial setup), but the VDSM hook in
https://gist.github.com/RabidCicada/40655db1582ca5d07c9bbf2c429cdd01 solves
the problem (arguably for further vms too)
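
For reference, the hook boils down to forcing the machine type in the libvirt
domain XML before the vm starts; a minimal sketch of the same idea (assuming
VDSM's usual hook convention of passing the domain XML path in the
_hook_domxml environment variable; the actual gist may differ in the details)
would be:

#!/bin/bash
# Hypothetical /usr/libexec/vdsm/hooks/before_vm_start/50_force_machine_type
# Rewrites the machine type of every starting vm to pc-i440fx-rhel7.2.0
[ -n "$_hook_domxml" ] || exit 0
sed -i "s/machine='[^']*'/machine='pc-i440fx-rhel7.2.0'/" "$_hook_domxml"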

Many thanks.

Best regards,
Giuseppe

From: Giuseppe Ragusa
Sent: Friday, September 7, 2018 16:04
To: Simone Tiraboschi; bon...@gmail.com
Cc: users
Subject: R: [ovirt-users] Re: Failed to deploy ovirt engine with "hosted-engine 
--deploy"

Hi Simone,
sorry for the late comment (I just found this thread while researching on 
nested oVirt-on-VMware issues).

It seems that the problem is generally known and has been solved upstream in 
kernel 4.16:

https://bugs.launchpad.net/qemu/+bug/1636217

Any ETA on backports in RHEL7? ;-)
(Note: should I open a Bugzilla on kernel package for this?)
It seems that Hyper-V would experience the same troubles, and I think that
making nested-inside-a-Windows-host setups work should be deemed fairly
important for demo/trial/lab uses... :-)

In the meanwhile, workarounds have been published for the new Ansible-based 
setup flow:

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/UDVRS5I64WLFJJF7YXKXZNTQOZRJ6DOJ/?sort=date

On the other hand, I can confirm that with the legacy setup flow (with option 
--noansible) adding the following to the answer file avoids the problem (I did 
not test whether newer types like pc-i440fx-rhel7.2.0 solve it too):

OVEHOSTED_VM/emulatedMachine=str:rhel6.0.0

Many thanks.

Best regards,
Giuseppe


From: Simone Tiraboschi
Sent: Monday, July 30, 2018 09:20
To: bon...@gmail.com
Cc: users
Subject: [ovirt-users] Re: Failed to deploy ovirt engine with "hosted-engine 
--deploy"



On Mon, Jul 30, 2018 at 6:04 AM Bong Shau Fui <bon...@gmail.com> wrote:
Hi Simone:
   Yes, it's in a nested environment.  L0 is vmware esxi 5.5.

I know for sure that a nested kvm env over vmware esxi is still problematic;
kvm over kvm works fine instead.



regards,
Bong SF




[ovirt-users] Self Hosted Engine installation - does the OVEHOSTED_NETWORK/gateway parameter have an "overloaded" meaning?

2018-03-12 Thread Giuseppe Ragusa
Hi all,

I have a question about the best interpretation/choice for the installation 
parameter OVEHOSTED_NETWORK/gateway

It is my understanding that the IP specified as OVEHOSTED_NETWORK/gateway will 
be used (by means of ping) to verify the ongoing network-wise status of oVirt 
cluster nodes, with any problems leading to classifications/actions which could 
even lead to fencing of the "faulty" node.

If this is the case, I find it debatable that such a role should be referred to 
as "gateway", since (particularly in small setups) it should be delegated to an 
always reachable IP, not tied to mundane duties such as routing/Internet 
gateways: Internet (or wider network) reachability (think of an old, cheap 
router whose power supply starts to misbehave/fail...) should not determine the 
status of the local oVirt cluster, whose nodes typically could be directly 
connected (especially with respect to the management ovirtmgmt network) on the 
same network segment without any need for routing.
I suggest that in such a small setup, the console IP of something like the 
central (managed and stackable) switch could be used: if the central switch 
(i.e. all its stacked parts) goes down, then there really will be no 
communication between nodes anyway.

It is also my understanding that the above-mentioned OVEHOSTED_NETWORK/gateway 
parameter is automatically passed to cloud-init to configure the actual default 
gateway of the Self Hosted Engine appliance, without any means to override this 
choice with an ad-hoc specialized parameter.

If this is the case, I think that, in light of the above-mentioned scenario, a 
specific override could be provided, without requiring the admin to reconfigure 
the appliance after it is deployed (by the way: the appliance, at least in 
version 4.1.9, does not contain the NetworkManager-glib package, so Ansible 
playbooks trying to configure the default gateway by means of the nmcli module 
always fail, and without a working default gateway it is not so easy to add 
packages... think chicken and egg... :-) ).
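
A possible stopgap on the deployed appliance, sketched below with example
interface names/addresses, is to set the default gateway with plain
iproute2/ifcfg instead of the nmcli module:

# Temporary fix on the running appliance (example address/interface)
ip route replace default via 192.168.10.254 dev eth0

# Persistent fix via the classic network scripts, since NetworkManager-glib
# is missing and the Ansible nmcli module cannot be used
echo 'GATEWAY=192.168.10.254' >> /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network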

Any thoughts/suggestions?

Many thanks in advance.

Best regards,
Giuseppe


Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up

2018-01-29 Thread Giuseppe Ragusa
From: users-boun...@ovirt.org on behalf of Christopher Cox
Sent: Friday, January 26, 2018 01:57
To: dougsl...@redhat.com
Cc: users
Subject: Re: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad 
way and all VMs for one node marked Unknown and Not Reponding while up

>On 01/25/2018 04:57 PM, Douglas Landgraf wrote:
>> On Thu, Jan 25, 2018 at 5:12 PM, Christopher Cox  wrote:
>>> On 01/25/2018 02:25 PM, Douglas Landgraf wrote:

 On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox 
 wrote:
>

 Probably it's time to think to upgrade your environment from 3.6.
>>>
>>>
>>> I know.  But from a production standpoint mid-2016 wasn't that long ago.
>>> And 4 was just coming out of beta at the time.
>>>
>>> We were upgrading from 3.4 to 3.6.  And it took a long time (again, because
>>> it's all "live").  Trust me, the move to 4.0 was discussed, it was just a
>>> timing thing.
>>>
>>> With that said, I do "hear you" and certainly it's being discussed. We
>>> just don't see a "good" migration path... we see a slow path (moving nodes
>>> out, upgrading, etc.) and knowing that as with all things, nobody can
>>> guarantee "success", which would be a very bad thing.  So going from working
>>> 3.6 to totally (potential) broken 4.2, isn't going to impress anyone here,
>>> you know?  If all goes according to our best guesses, then great, but when
>>> things go bad, and the chance is not insignificant, well... I'm just not
>>> quite prepared with my résumé if you know what I mean.
>>>
>>> Don't get me wrong, our move from 3.4 to 3.6 had some similar risks, but we
>>> also migrated to whole new infrastructure, a luxury we will not have this
>>> time.  And somehow 3.4 to 3.6 doesn't sound as risky as 3.6 to 4.2.
>>
>> I see your concern. However,  keep your system updated with recent
>> software is something I would recommend. You could setup a parallel
>> 4.2 env and move the VMS slowly from 3.6.
>
>Understood.  But would people want software that changes so quickly?
>This isn't like moving from RH 7.2 to 7.3 in a matter of months, it's
>more like moving from major release to major release in a matter of
>months and doing again potentially in a matter of months.  Granted we're
>running oVirt and not RHV, so maybe we should be on the Fedora style
>upgrade plan.  Just not conducive to an enterprise environment (oVirt
>people, stop laughing).

The analogy you made is exactly on point: I think that, given the
maturity of the oVirt project, the time has come to complete the picture ;-)

RHEL -> CentOS

RHV -> ???

Note: I should mention RHGS too (or at least a subset) because we have the
oVirt hyperconverged setup to care for (RHHI)

So: is anyone interested in the rebuild of RHV/RHGS upstream packages?

If there is interest, I think that the proper path would be to join the CentOS
Virtualization SIG and perform the proposal/work there.

Best regards,
Giuseppe

>>> Is there a path from oVirt to RHEV?  Every bit of help we get helps us in
>>> making that decision as well, which I think would be a very good thing for
>>> both of us. (I inherited all this oVirt and I was the "guy" doing the 3.4 to
>>> 3.6 with the all new infrastructure).
>>
>> Yes, you can import your setup to RHEV.
>
>Good to know. Because of the fragility (support-wise... I mean our
>oVirt has been rock solid, apart from rare glitches like this), we may
>follow this path.




[ovirt-users] oVirt NGN image customization troubles

2017-12-28 Thread Giuseppe Ragusa
Hi all,

I'm trying to modify the oVirt NGN image (to add RPMs, since imgbased 
rpmpersistence currently seems to have a bug: 
https://bugzilla.redhat.com/show_bug.cgi?id=1528468 ) but I'm unfortunately 
stuck at the very beginning: it seems that I'm unable to recreate even the 
standard 4.1 squashfs image.

I'm following the instructions at 
https://gerrit.ovirt.org/gitweb?p=ovirt-node-ng.git;a=blob;f=README 

I'm working inside a CentOS7 fully-updated vm (hosted inside VMware, with 
nested virtualization enabled).

I'm trying to work on the 4.1 branch, so I issued a:

./autogen.sh 
--with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

And after that I'm stuck at the "make squashfs" step: it never ends (it keeps 
printing dots forever, with no errors/warnings in the log messages nor any 
apparent activity on the virtual disk image).
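
For reference, the overall sequence I'm following is roughly the one below
(the clone URL and branch name reflect my reading of the README, so they may
not be exact):

git clone https://gerrit.ovirt.org/ovirt-node-ng
cd ovirt-node-ng
git checkout ovirt-4.1    # branch name assumed
./autogen.sh \
  --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
make squashfs             # this is the step that never completes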

Invoking it in debug mode and connecting to the VNC console shows the detailed 
Plymouth startup listing stuck (latest messages displayed: "Starting udev Wait 
for Complete Device Initialization..." and "Starting Device-Mapper Multipath 
Device Controller...")

I wonder if it's actually supposed to be run only from a recent Fedora (the 
"dnf" reference seems a good indicator): if so, which version?

I kindly ask for advice: has anyone succeeded in modifying/reproducing NGN 
squash images recently? If so, how? :-)

Many thanks in advance,

Giuseppe


[ovirt-users] How to track RHEV/RHV releases/bugfixes/advisories

2016-12-31 Thread Giuseppe Ragusa
Hi all,

Sorry if the question has already been answered or is prominently explained on 
some publicly available forum/page (I haven't been able to find it anywhere, 
either through Google or on the Red Hat website).

Since some customers could opt to follow the testing phase with oVirt by going 
into production with RHEV/RHV, I would like to know whether there is some 
dedicated mailing list / web page to check for announcements (plus release 
planning / roadmap etc.).

Many thanks in advance.

Best regards (and Happy New Year! ;-) )
Giuseppe


Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-20 Thread Giuseppe Ragusa
On Tue, Dec 20, 2016, at 09:16, Ramesh Nachimuthu wrote:
> - Original Message -
> > From: "Giuseppe Ragusa" <giuseppe.rag...@hotmail.com>
> > To: "Ramesh Nachimuthu" <rnach...@redhat.com>
> > Cc: users@ovirt.org, gluster-us...@gluster.org, "Ravishankar 
> > Narayanankutty" <ranar...@redhat.com>
> > Sent: Tuesday, December 20, 2016 4:15:18 AM
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > GlusterFS 3.7.17
> > 
> > On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> > > - Original Message -
> > > > From: "Giuseppe Ragusa" <giuseppe.rag...@hotmail.com>
> > > > To: "Ramesh Nachimuthu" <rnach...@redhat.com>
> > > > Cc: users@ovirt.org
> > > > Sent: Friday, December 16, 2016 2:42:18 AM
> > > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > > > GlusterFS 3.7.17
> > > > 
> > > > Giuseppe Ragusa has shared a OneDrive file. To view it, click the
> > > > following link:
> > > > 
> > > > vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > > 
> > > > 
> > > > From: Ramesh Nachimuthu <rnach...@redhat.com>
> > > > Sent: Monday, December 12, 2016 09:32
> > > > To: Giuseppe Ragusa
> > > > Cc: users@ovirt.org
> > > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > > > 
> > > > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > > > Hi all,
> > > > >
> > > > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup 
> > > > > all
> > > > > on
> > > > > CentOS 7.2):
> > > > >
> > > > >  From /var/log/messages:
> > > > >
> > > > > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > > Internal
> > > > > server error#012Traceback (most recent call last):#012  File
> > > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > > _serveRequest#012res = method(**params)#012  File
> > > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > > result
> > > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > > line
> > > > > 117, in status#012return self._gluster.volumeStatus(volumeName,
> > > > > brick,
> > > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > > wrapper#012rv = func(*args, **kwargs)#012  File
> > > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > > __call__#012return callMethod()#012  File
> > > > > "/usr/share/vdsm/supervdsm.py", line 48, in #012
> > > > > **kwargs)#012
> > > > > File "", line 2, in glusterVolumeStatus#012  File
> > > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > > >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> > > > >   'device'
> > > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting
> > > > > Engine
> > > > > VM OVF from the OVF_STORE
> > > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE 
> > > > > volume
> > > > > path:
> > > > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-19 Thread Giuseppe Ragusa
On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> - Original Message -
> > From: "Giuseppe Ragusa" <giuseppe.rag...@hotmail.com>
> > To: "Ramesh Nachimuthu" <rnach...@redhat.com>
> > Cc: users@ovirt.org
> > Sent: Friday, December 16, 2016 2:42:18 AM
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > GlusterFS 3.7.17
> > 
> > Giuseppe Ragusa has shared a OneDrive file. To view it, click the
> > following link:
> > 
> > vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > 
> > 
> > From: Ramesh Nachimuthu <rnach...@redhat.com>
> > Sent: Monday, December 12, 2016 09:32
> > To: Giuseppe Ragusa
> > Cc: users@ovirt.org
> > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > 
> > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > Hi all,
> > >
> > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on
> > > CentOS 7.2):
> > >
> > >  From /var/log/messages:
> > >
> > > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR 
> > > Internal
> > > server error#012Traceback (most recent call last):#012  File
> > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > _serveRequest#012res = method(**params)#012  File
> > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > > 117, in status#012return self._gluster.volumeStatus(volumeName, brick,
> > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > wrapper#012rv = func(*args, **kwargs)#012  File
> > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > __call__#012return callMethod()#012  File
> > > "/usr/share/vdsm/supervdsm.py", line 48, in #012**kwargs)#012
> > > File "", line 2, in glusterVolumeStatus#012  File
> > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > >   llmethod#012raise convert_to_error(kind, result)#012KeyError:
> > >   'device'
> > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine
> > > VM OVF from the OVF_STORE
> > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > path:
> > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > an OVF for HE VM, trying to convert
> > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > vm.conf from OVF_STORE
> > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state
> > > EngineUp (score: 3400)
> > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote
> > > host read.mgmt.private (id: 2, score: 3400)
> > > Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR 
> > > Internal
> > > server error#012Traceback (most recent call last):#012  File
> > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > _serveRequest#012res = method(**params)#012  File
> > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result
> > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line
> > > 117, in status#012return self._gluster.volu

Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Giuseppe Ragusa
Giuseppe Ragusa has shared a OneDrive file. To view it, click the following
link:

vols.tar.gz <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>


From: Ramesh Nachimuthu <rnach...@redhat.com>
Sent: Monday, December 12, 2016 09:32
To: Giuseppe Ragusa
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> Hi all,
>
> I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7 
> GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on 
> CentOS 7.2):
>
>  From /var/log/messages:
>
> Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal 
> server error#012Traceback (most recent call last):#012  File 
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in 
> _serveRequest#012res = method(**params)#012  File 
> "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result = 
> fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, 
> in status#012return self._gluster.volumeStatus(volumeName, brick, 
> statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in 
> wrapper#012rv = func(*args, **kwargs)#012  File 
> "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in 
> __call__#012return callMethod()#012  File "/usr/share/vdsm/supervdsm.py", 
> line 48, in #012**kwargs)#012  File "", line 2, in 
> glusterVolumeStatus#012  File 
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
>   llmethod#012raise convert_to_error(kind, result)#012KeyError: 'device'
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine VM 
> OVF from the OVF_STORE
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume path: 
> /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found an 
> OVF for HE VM, trying to convert
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got 
> vm.conf from OVF_STORE
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state 
> EngineUp (score: 3400)
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote host 
> read.mgmt.private (id: 2, score: 3400)
> Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal 
> server error#012Traceback (most recent call last):#012  File 
> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in 
> _serveRequest#012res = method(**params)#012  File 
> "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result = 
> fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, 
> in status#012return self._gluster.volumeStatus(volumeName, brick, 
> statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in 
> wrapper#012rv = func(*args, **kwargs)#012  File 
> "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in 
> __call__#012return callMethod()#012  File "/usr/share/vdsm/supervdsm.py", 
> line 48, in #012**kwargs)#012  File "", line 2, in 
> glusterVolumeStatus#012  File 
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
>   llmethod#012raise convert_to_error(kind, result)#012KeyError: 'device'
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> established
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> closed
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> established
> Dec  9 15:27:48 shockley ov

[ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-09 Thread Giuseppe Ragusa

Hi all,

I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7 
GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on 
CentOS 7.2):

From /var/log/messages:

Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal 
server error#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in 
_serveRequest#012res = method(**params)#012  File 
"/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result = 
fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in 
status#012return self._gluster.volumeStatus(volumeName, brick, 
statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in 
wrapper#012rv = func(*args, **kwargs)#012  File 
"/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in 
__call__#012return callMethod()#012  File "/usr/share/vdsm/supervdsm.py", 
line 48, in #012**kwargs)#012  File "", line 2, in 
glusterVolumeStatus#012  File 
"/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
 llmethod#012raise convert_to_error(kind, result)#012KeyError: 'device'
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine VM OVF 
from the OVF_STORE
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume path: 
/rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found an 
OVF for HE VM, trying to convert
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got vm.conf 
from OVF_STORE
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state 
EngineUp (score: 3400)
Dec  9 15:27:47 shockley ovirt-ha-agent: 
INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote host 
read.mgmt.private (id: 2, score: 3400)
Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal 
server error#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in 
_serveRequest#012res = method(**params)#012  File 
"/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result = 
fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in 
status#012return self._gluster.volumeStatus(volumeName, brick, 
statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in 
wrapper#012rv = func(*args, **kwargs)#012  File 
"/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in 
__call__#012return callMethod()#012  File "/usr/share/vdsm/supervdsm.py", 
line 48, in #012**kwargs)#012  File "", line 2, in 
glusterVolumeStatus#012  File 
"/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
 llmethod#012raise convert_to_error(kind, result)#012KeyError: 'device'
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
established
Dec  9 15:27:48 shockley ovirt-ha-broker: 
INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection closed
Dec  9 15:27:48 shockley ovirt-ha-broker: INFO:mem_free.MemFree:memFree: 7392
Dec  9 15:27:50 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal 
server error#012Traceback (most recent call last):#012  File 
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in 
_serveRequest#012res = method(**params)#012  File 
"/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012result = 
fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in 
status#012return self._gluster.volumeStatus(volumeName, brick, 
statusOption)#012  File 

Re: [ovirt-users] [SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE

2015-11-30 Thread Giuseppe Ragusa
On Wed, Nov 25, 2015, at 12:10, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 23, 2015 at 10:17 PM, Giuseppe Ragusa
> <giuseppe.rag...@hotmail.com> wrote:
>> On Tue, Oct 27, 2015, at 00:10, Giuseppe Ragusa wrote:
>>
> On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
>>
> >
>>
> >
>>
> > On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa
> > <giuseppe.rag...@hotmail.com> wrote:
>>
> >> Hi all,
>>
> >> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
>>
> >>
>>
> >> I'm trying to trick the self-hosted-engine setup to create a custom
> >> engine vm with 3 nics (with fixed MACs/UUIDs).
>>
> >>
>>
> >> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the
> >> engine vm) and the network bridges (ovirtmgmt and other two
> >> bridges, called nfs and lan, for the engine vm) have been
> >> preconfigured on the initial fully-patched CentOS 7.1 host (plus
> >> other two identical hosts which are awaiting to be added).
>>
> >>
>>
> >> I'm stuck at a point with the engine vm successfully starting but
> >> with only one nic present (connected to the ovirtmgmt bridge).
>>
> >>
>>
> >> I'm trying to obtain the modified engine vm by means of a trick
> >> which used to work in a previous (aborted because of lacking GlusterFS-by-
> >> libgfapi support) oVirt 3.5 test setup (about a year ago, maybe
> >> more): I'm substituting the standard /usr/share/ovirt-hosted-engine-
> >> setup/templates/vm.conf.in with the following:
> >>
> >> vmId=@VM_UUID@
> >> memSize=@MEM_SIZE@
> >> display=@CONSOLE_TYPE@
> >> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, 
> >> type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
> >> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
> >>  slot:0x06, domain:0x, type:pci, 
> >> function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
> >> devices={device:scsi,model:virtio-scsi,type:controller}
> >> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
> >>  slot:0x03, domain:0x, type:pci, 
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00,
> >>  slot:0x09, domain:0x, type:pci, 
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00,
> >>  slot:0x0c, domain:0x, type:pci, 
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
> >> vmName=@NAME@
> >> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> >> smp=@VCPUS@
> >> cpuType=@CPU_TYPE@
> >> emulatedMachine=@EMULATED_MACHINE@
> >>
> >> but unfortunately the vm gets created like this (output from "ps";
> >> note that I'm attaching a CentOS7.1 Netinstall ISO with an embedded
> >> kickstart: the installation should proceed by HTTP on the lan
> >> network but obviously fails):
>>
> >>
>>
> >> /usr/libexec/qemu-kvm -name HostedEngine -S -machine
>>
> >> pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -
> >> realtime mlock=off
>>
> >> -smp 2,sockets=2,cores=1,threads=1 -uuid f49da721-8aa6-4422-8b91-
> >> e91a0e38aa4a -s
>>
> >> mbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-
> >> 1.1503.el7.centos.2
>>
> >> .8,serial=2a1855a9-18fb-

Re: [ovirt-users] oVirt 4.0 wishlist: oVirt Engine

2015-11-30 Thread Giuseppe Ragusa
On Fri, Nov 20, 2015, at 13:54, Giuseppe Ragusa wrote:
> Hi all,
> I go on with my wishlist, derived from both solitary mumblings and community 
> talks at the first Italian oVirt Meetup.
> 
> I offer to help in coding (work/family schedules permitting) but keep in mind 
> that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to 
> improve my less-than-newbie Python too...)
> 
> I've sent separate wishlist messages for oVirt Node and VDSM.
> 
> oVirt Engine:
> 
> *) add Samba/CTDB/Ganesha capabilities (maybe in the GlusterFS management 
> UI); there are related wishlist items on configuring/managing 
> Samba/CTDB/Ganesha on oVirt Node and on VDSM
> 
> *) add the ability to manage containers (maybe initially as an exclusive 
> cluster type but allowing it to coexist with GlusterFS); there are related 
> wishlist items on supporting containers on the oVirt Node and on VDSM
> 
> *) add Open vSwitch direct support (not Neutron-mediated); there are related 
> wishlist items on configuring/managing Open vSwitch on oVirt Node and on VDSM
> 
> *) add DRBD9 as a supported Storage Domain type, HC/HE too, managed from the 
> Engine UI similarly to GlusterFS; there are related wishlist items on 
> configuring/managing DRBD9 on oVirt Node and on VDSM
> 
> *) add support for managing/limiting GlusterFS heal/rebalance bandwidth usage 
> in HC setup [1]; this is actually a GlusterFS wishlist item first and 
> foremost, but I hope our use case could be considered compelling enough to 
> "force their hand" a bit ;)

I've just posted a corresponding RFE for GlusterFS on:

http://www.gluster.org/pipermail/gluster-devel/2015-November/047238.html

Upvote that, if you think it's needed ;-)

> Regards,
> Giuseppe
> 
> [1] bandwidth limiting seems to be supported only for geo-replication on 
> GlusterFS side; it is my understanding that on non-HC setups the 
> heal/rebalance traffic could be kept separate from hypervisor/client traffic 
> (if a separate, Gluster-only, network is physically available and Gluster 
> cluster nodes have been peer-probed on those network addresses)


[ovirt-users] Strange permissions on Hosted Engine HA Agent log files

2015-11-25 Thread Giuseppe Ragusa
Hi all,
I'm installing oVirt (3.6) in self-hosted mode, hyperconverged with GlusterFS 
(3.7.6).

I'm using the oVirt snapshot generated the night between the 18th and 19th of 
November, 2015.

The (single, at the moment) host and the Engine are both CentOS 7.1 fully 
up-to-date.

After ovirt-hosted-engine-setup successful completion, I found the following 
(about 3 days after setup completed) "anomalies":

666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/agent.log
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/agent.log.2015-11-23
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/broker.log
666 1 vdsm kvm - /var/log/ovirt-hosted-engine-ha/broker.log.2015-11-23

The listing above comes from a custom security checking script that gives:

"octal permissions" "number of links" "owner" "group" - "absolute pathname"

Is the ominous "666" mark actually intended/necessary? ;-)

Do I need to open a bugzilla notification for this?

Many thanks in advance for your attention.

Regards,
Giuseppe


Re: [ovirt-users] oVirt 4.0 wishlist: oVirt Self Hosted Engine Setup

2015-11-25 Thread Giuseppe Ragusa
On Wed, Nov 25, 2015, at 12:13, Simone Tiraboschi wrote:
> 
> 
> On Mon, Nov 23, 2015 at 10:10 PM, Giuseppe Ragusa 
> <giuseppe.rag...@hotmail.com> wrote:
>> Hi all,
>> I go on with my wishlist, derived from both solitary mumblings and community 
>> talks at the first Italian oVirt Meetup.
>>  
>> I offer to help in coding (work/family schedules permitting) but keep in 
>> mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping 
>> to improve my less-than-newbie Python too...)
>>  
>> I've sent separate wishlist messages for oVirt Node, oVirt Engine and VDSM.
>>  
>> oVirt Self Hosted Engine Setup:
>>  
>> *) allow virtual hardware customizations for locally-created Engine vm, 
>> specifically: allow to add an arbitrary number of NICs (asking for MAC 
>> address and local bridge to connect to) and maybe also an arbitrary number 
>> of disks (asking for size) as these seem to be the only/most_useful items 
>> missing; maybe the prebuilt appliance image too may be inspected by setup to 
>> detect a customized one and connect any further NICs to custom local bridges 
>> (which the user should be asked for)
> 
> For 3.6.1 (it should be in 3.6.0 but it's bugged) you will be able to edit 
> some parameters of the engine VM from the engine (then of course you need to 
> reboot to make them effective).
> I'm not sure if it's worth making the setup more complex or if it's better 
> to keep it simple (single nic, single disk) and then let you edit the VM only 
> from the engine as for other VMs.

Thanks Simone for your reply!

You are right: I was bothering you with this setup wishlist item *mainly* 
because further Engine vm modification was impossible/awkward/difficult before 
3.6.1

Nonetheless I have seen many cases in which at least a second NIC would be 
absolutely needed to complete the Engine installation: it is a well known best 
practice to keep the management network (maybe conflated with the IPMI network 
in smaller cases) completely isolated from other services and to allow only 
limited access to/from it, and that network would be the 
ovirtmgmt-bridge-connected network (the only one available to the Engine, as of 
now); now think of a kickstart-based Engine OS installation/update from a local 
repository/mirror which would be reachable on a different network only (further 
access to the User/Administration Web portal could have similar needs but could 
be more easily covered by successive Engine vm modifications)

The "additional disks" part was (maybe "artificially") added by me out of 
fantasy, but I know of at least one enterprise customer that by policy mandates 
separate disks for OS and data (mainly on FC LUNs, to be honest, but FC is 
supported by hosted Engine now, isn't it?)

I absolutely don't know how the setup code is structured (and the recent 
logical "duplication" between mixins.py and vm.conf.in scares me a bit, 
actually ;), but I naively hope that changing the two single hardcoded nic/hdd 
questions into two loops of minimum 1 iteration (with a corresponding 
generalization of related otopi parameters) should not increase the complexity 
too much (and could be an excuse to rationalize/unify it further).

Obviously I could stand instantly corrected by anyone who really knows the 
code, but in exchange I would gain for free some interesting pointers/insights 
into the setup code/structure ;)


>> Regards,
>> Giuseppe


Re: [ovirt-users] oVirt 4.0 wishlist: VDSM

2015-11-23 Thread Giuseppe Ragusa
On Sat, Nov 21, 2015, at 13:59, Dan Kenigsberg wrote:
> On Fri, Nov 20, 2015 at 01:54:35PM +0100, Giuseppe Ragusa wrote:
> > Hi all,
> > I go on with my wishlist, derived from both solitary mumblings and 
> > community talks at the first Italian oVirt Meetup.
> > 
> > I offer to help in coding (work/family schedules permitting) but keep in 
> > mind that I'm a sysadmin with mainly C and bash-scripting skills (but 
> > hoping to improve my less-than-newbie Python too...)
> > 
> > I've sent separate wishlist messages for oVirt Node and Engine.
> > 
> > VDSM:
> > 
> > *) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, 
> > I'm thinking of the GlusterFS integration); there are related wishlist 
> > items on configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt 
> > Node
> 
> I'd appreciate a more detailed feature definition. Vdsm (and ovirt) try
> to configure only things that are needed for their own usage. What do you
> want to control? When? You're welcome to draft a feature page prior to
> coding the fix ;-)

I was thinking of adding CIFS/NFSv4 functionality to a hyperconverged cluster 
(GlusterFS/oVirt) which would have separate volumes for virtual machine 
storage (one volume for the Engine and one for other vms, with no CIFS/NFSv4 
capabilities offered) and for data shares (directly accessible by clients on 
the LAN and obviously from local vms too).

Think of it as a 3-node HA NetApp+VMware killer ;-)

The UI idea (but that would be the Engine part, I understand) was along the 
lines of single-check enabling CIFS and/or NFSv4 sharing for a GlusterFS data 
volume, then optionally adding any further specific options (hosts allowed, 
users/groups for read/write access, network recycle_bin etc.); global Samba 
(domain/workgroup membership etc.) and CTDB (IPs/interfaces) configuration 
parameters would be needed too.

I have no experience with a clustered/HA NFS-Ganesha configuration on 
GlusterFS, but (from superficially skimming through the docs) it seems that it 
was not possible at all before 2.2 and now it needs a full Pacemaker/Corosync 
setup too (contrary to the IBM-GPFS-backed case), so that could be a problem.

This VDSM wishlist item was driven by the idea that all actions (and so the 
future GlusterFS/Samba/CTDB ones too) performed by the Engine through the 
hosts/nodes were somehow "mediated" by VDSM and its API, but if this is not 
the case, then I withdraw my suggestion here and will try to pursue it only on 
the Engine/Node side ;)

Many thanks for your attention.

Regards,
Giuseppe

> > *) add Open vSwitch direct support (not Neutron-mediated); there are 
> > related wishlist items on configuring/managing Open vSwitch on oVirt Node 
> > and on the Engine
> 
> That's on our immediate roadmap. Soon, vdsm-hook-ovs would be ready for
> testing.
> 
> > 
> > *) add DRBD9 as a supported Storage Domain type; there are related wishlist 
> > items on configuring/managing DRBD9 on the Engine and on oVirt Node
> > 
> > *) allow VDSM to configure/manage containers (maybe extend it by use of the 
> > LXC libvirt driver, similarly to the experimental work that has been put up 
> > to allow Xen vm management); there are related wishlist items on 
> > configuring/managing containers on the Engine and on oVirt Node
> > 
> > *) add a VDSM_remote mode (for lack of a better name, but mainly inspired 
> > by pacemaker_remote) to be used inside a guest by the above mentioned 
> > container support (giving to the Engine the required visibility on the 
> > managed containers, but excluding the "virtual node" from power management 
> > and other unsuitable actions)
> > 
> > Regards,
> > Giuseppe
> > 


[ovirt-users] oVirt 4.0 wishlist: oVirt Self Hosted Engine Setup

2015-11-23 Thread Giuseppe Ragusa
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community 
talks at the first Italian oVirt Meetup.

I offer to help in coding (work/family schedules permitting) but keep in mind 
that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to 
improve my less-than-newbie Python too...)

I've sent separate wishlist messages for oVirt Node, oVirt Engine and VDSM.

oVirt Self Hosted Engine Setup:

*) allow virtual hardware customizations for locally-created Engine vm, 
specifically: allow to add an arbitrary number of NICs (asking for MAC address 
and local bridge to connect to) and maybe also an arbitrary number of disks 
(asking for size) as these seem to be the only/most_useful items missing; maybe 
the prebuilt appliance image too may be inspected by setup to detect a 
customized one and connect any further NICs to custom local bridges (which the 
user should be asked for)

Regards,
Giuseppe


Re: [ovirt-users] [SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE

2015-11-23 Thread Giuseppe Ragusa
On Tue, Oct 27, 2015, at 00:10, Giuseppe Ragusa wrote:
> On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
> > 
> > 
> > On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa 
> > <giuseppe.rag...@hotmail.com> wrote:
> >> Hi all,
> >> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
> >> 
> >> I'm trying to trick the self-hosted-engine setup to create a custom engine 
> >> vm with 3 nics (with fixed MACs/UUIDs).
> >> 
> >> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) 
> >> and the network bridges (ovirtmgmt and other two bridges, called nfs and 
> >> lan, for the engine vm) have been preconfigured on the initial 
> >> fully-patched CentOS 7.1 host (plus other two identical hosts which are 
> >> awaiting to be added).
> >> 
> >> I'm stuck at a point with the engine vm successfully starting but with 
> >> only one nic present (connected to the ovirtmgmt bridge).
> >> 
> >> I'm trying to obtain the modified engine vm by means of a trick which used 
> >> to work in a previous (aborted because of lacking GlusterFS-by-libgfapi 
> >> support) oVirt 3.5 test setup (about a year ago, maybe more): I'm 
> >> substituting the standard 
> >> /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the 
> >> following:
> >> 
> >> vmId=@VM_UUID@
> >> memSize=@MEM_SIZE@
> >> display=@CONSOLE_TYPE@
> >> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, 
> >> type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
> >> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
> >>  slot:0x06, domain:0x, type:pci, 
> >> function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
> >> devices={device:scsi,model:virtio-scsi,type:controller}
> >> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
> >>  slot:0x03, domain:0x, type:pci, 
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00,
> >>  slot:0x09, domain:0x, type:pci, 
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00,
> >>  slot:0x0c, domain:0x, type:pci, 
> >> function:0x0},device:bridge,type:interface@BOOT_PXE@}
> >> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
> >> vmName=@NAME@
> >> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
> >> smp=@VCPUS@
> >> cpuType=@CPU_TYPE@
> >> emulatedMachine=@EMULATED_MACHINE@
> >> 
> >> but unfortunately the vm gets created like this (output from "ps"; note 
> >> that I'm attaching a CentOS7.1 Netinstall ISO with an embedded kickstart: 
> >> the installation should proceed by HTTP on the lan network but obviously 
> >> fails):
> >> 
> >> /usr/libexec/qemu-kvm -name HostedEngine -S -machine 
> >> pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -realtime 
> >> mlock=off 
> >> -smp 2,sockets=2,cores=1,threads=1 -uuid 
> >> f49da721-8aa6-4422-8b91-e91a0e38aa4a -s
> >> mbios type=1,manufacturer=oVirt,product=oVirt 
> >> Node,version=7-1.1503.el7.centos.2
> >> .8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a
> >> 0e38aa4a -no-user-config -nodefaults -chardev 
> >> socket,id=charmonitor,path=/var/li
> >> b/libvirt/qemu/HostedEngine.monitor,server,nowait -mon 
> >> chardev=charmonitor,id=mo
> >> nitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -global 
> >> kvm-pit.l
> >> ost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device 
> >> piix3-usb-uh
> >> ci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
> >> virtio-scsi-pci,id=scsi0,bus=pci.0,addr
> >> =0x4 -device virtio-serial-pci,i

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-23 Thread Giuseppe Ragusa
On Mon, Nov 9, 2015, at 08:16, Sandro Bonazzola wrote:
> On Sun, Nov 8, 2015 at 9:57 PM, Giuseppe Ragusa <giuseppe.rag...@hotmail.com> 
> wrote:
>> On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa 
> > <giuseppe.rag...@hotmail.com> wrote:
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa 
> >>> <giuseppe.rag...@hotmail.com> wrote:
> >>>> Hi all,
> >>>> I'm stuck with the following error during the final phase of 
> >>>> ovirt-hosted-engine-setup:
> >>>>
> >>>>           The host hosted_engine_1 is in non-operational state.
> >>>>           Please try to activate it via the engine webadmin UI.
> >>>>
> >>>> If I login on the engine administration web UI I find the corresponding 
> >>>> message (inside NonOperational first host hosted_engine_1 Events tab):
> >>>>
> >>>> Host hosted_engine_1 does not comply with the cluster Default networks, 
> >>>> the following networks are missing on host: 'ovirtmgmt'
> >>>>
> >>>> I'm installing with an oVirt snapshot from October the 27th on a 
> >>>> fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 
> >>>> hyperconverged, replica 3, for the engine-vm) pre-created and network 
> >>>> interfaces/bridges (ovirtmgmt and other two bridges, called nfs and lan, 
> >>>> on underlying 802.3ad bonds or plain interfaces) manually pre-configured 
> >>>> in /etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network 
> >>>> service; NetworkManager disabled).
> >>>>
> >>>
> >>> If you manually created the network bridges, the match between them and 
> >>> the logical network should happen on name bases.
> >>
> >> Hi Simone,
> >> many thanks for your help (again) :)
> >>
> >> As you may note from the above comment, the name should actually match 
> >> (it's exactly ovirtmgmt) but it doesn't get recognized.
> >>
> >>
> >>> If it doesn't for any reasons (please report if you find any evidence), 
> >>> you can manually bind logical network and network interfaces editing the 
> >>> host properties from the web-ui. At that point the host should become 
> >>> active in a few seconds.
> >>
> >>
> >> Well, the most immediate evidence are the error messages already reported 
> >> (given that the bridge is actually present, with the right name and 
> >> actually working).
> >> Apart from that, I find the following past logs (I don't know whether they 
> >> are relevant or not):
> >>
> >> From /var/log/vdsm/connectivity.log:
> >
> >
> > Can you please add also host-deploy logs?
>
> Please find a gzipped tar archive of the whole directory 
> /var/log/ovirt-engine/host-deploy/ at:
>
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz
>>  
>> Since I suppose that there's nothing relevant on those logs, I'm planning to 
>> specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on 
>> the host, then making the (still blocked) setup re-check.
>>  
>> 
Is there anything I should pay attention to before proceeding? (in particular 
while restarting VDSM)
> 
> 
> ^^ Dan?

I went on and unfortunately "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf 
and restarting VDSM on the host did not solve it (same error as before).

While trying (always without success) all other steps suggested by Simone 
(binding logical network and synchronizing networks from host) I found an 
interesting-looking libvirt network definition (autostart too) for 
vdsm-ovirtmgmt and this recalled some memories from past mailing list messages 
(that I still cannot find...) ;)

Long story short: aborting setup, cleaning everything up and creating a libvirt 
network for each pre-provisioned bridge worked! ("net_persistence = ifcfg" has 
been kept for other, client-specific, reasons, so I don't know whether it's 
needed too.)
Here it is, in BASH form:

for my_bridge in ovirtmgmt bridge1 bridge2; do
# define a libvirt network named vdsm-<bridge> on top of the pre-provisioned bridge
cat <<- EOM > /root/my-${my_bridge}.xml
<network>
  <name>vdsm-${my_bridge}</name>
  <forward mode='bridge'/>
  <bridge name='${my_bridge}'/>
</network>
EOM
virsh -c qemu:///system net-define /root/my-${my_bridge}.xml
virsh -c qemu:///system net-autostart vdsm-${my_bridge}
done
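
For completeness, a quick way to verify the result afterwards (nothing beyond 
standard virsh commands; names follow the bridges used above):

virsh -c qemu:///system net-list --all
virsh -c qemu:///system net-dumpxml vdsm-ovirtmgmt

The vdsm-* networks should show up as active and autostarted, and the dumped 
XML should point at the corresponding pre-provisioned bridge.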

[ovirt-users] oVirt 4.0 wishlist: oVirt Engine

2015-11-20 Thread Giuseppe Ragusa
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community 
talks at the first Italian oVirt Meetup.

I offer to help in coding (work/family schedules permitting) but keep in mind 
that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to 
improve my less-than-newbie Python too...)

I've sent separate wishlist messages for oVirt Node and VDSM.

oVirt Engine:

*) add Samba/CTDB/Ganesha capabilities (maybe in the GlusterFS management UI); 
there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on 
oVirt Node and on VDSM

*) add the ability to manage containers (maybe initially as an exclusive 
cluster type but allowing it to coexist with GlusterFS); there are related 
wishlist items on supporting containers on the oVirt Node and on VDSM

*) add Open vSwitch direct support (not Neutron-mediated); there are related 
wishlist items on configuring/managing Open vSwitch on oVirt Node and on VDSM

*) add DRBD9 as a supported Storage Domain type, HC/HE too, managed from the 
Engine UI similarly to GlusterFS; there are related wishlist items on 
configuring/managing DRBD9 on oVirt Node and on VDSM

*) add support for managing/limiting GlusterFS heal/rebalance bandwidth usage 
in HC setup [1]; this is actually a GlusterFS wishlist item first and foremost, 
but I hope our use case could be considered compelling enough to "force their 
hand" a bit ;)

Regards,
Giuseppe

[1] bandwidth limiting seems to be supported only for geo-replication on 
GlusterFS side; it is my understanding that on non-HC setups the heal/rebalance 
traffic could be kept separate from hypervisor/client traffic (if a separate, 
Gluster-only, network is physically available and Gluster cluster nodes have 
been peer-probed on those network addresses)
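
As a purely illustrative sketch of that last point (hostnames are hypothetical), 
keeping Gluster traffic on its own network basically means probing the peers, 
and later defining the bricks, by their storage-network names:

gluster peer probe node1.gluster.example.local
gluster peer probe node2.gluster.example.local

so that brick and replication traffic stays bound to those addresses while 
hypervisor/client traffic flows elsewhere.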
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.0 wishlist: VDSM

2015-11-20 Thread Giuseppe Ragusa
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community 
talks at the first Italian oVirt Meetup.

I offer to help in coding (work/family schedules permitting) but keep in mind 
that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to 
improve my less-than-newbie Python too...)

I've sent separate wishlist messages for oVirt Node and Engine.

VDSM:

*) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, I'm 
thinking of the GlusterFS integration); there are related wishlist items on 
configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt Node

*) add Open vSwitch direct support (not Neutron-mediated); there are related 
wishlist items on configuring/managing Open vSwitch on oVirt Node and on the 
Engine

*) add DRBD9 as a supported Storage Domain type; there are related wishlist 
items on configuring/managing DRBD9 on the Engine and on oVirt Node

*) allow VDSM to configure/manage containers (maybe extend it by use of the LXC 
libvirt driver, similarly to the experimental work that has been put up to 
allow Xen vm management); there are related wishlist items on 
configuring/managing containers on the Engine and on oVirt Node

*) add a VDSM_remote mode (for lack of a better name, but mainly inspired by 
pacemaker_remote) to be used inside a guest by the above mentioned container 
support (giving to the Engine the required visibility on the managed 
containers, but excluding the "virtual node" from power management and other 
unsuitable actions)

Regards,
Giuseppe

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-08 Thread Giuseppe Ragusa
On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > 
> > 
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa 
> > <giuseppe.rag...@hotmail.com> wrote:
> >> __
> >> 
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> 
> >>> 
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa 
> >>> <giuseppe.rag...@hotmail.com> wrote:
> >>>> Hi all,
> >>>> I'm stuck with the following error during the final phase of 
> >>>> ovirt-hosted-engine-setup:
> >>>> 
> >>>>           The host hosted_engine_1 is in non-operational state.
> >>>>           Please try to activate it via the engine webadmin UI.
> >>>> 
> >>>> If I login on the engine administration web UI I find the corresponding 
> >>>> message (inside NonOperational first host hosted_engine_1 Events tab):
> >>>> 
> >>>> Host hosted_engine_1 does not comply with the cluster Default networks, 
> >>>> the following networks are missing on host: 'ovirtmgmt'
> >>>> 
> >>>> I'm installing with an oVirt snapshot from October the 27th on a 
> >>>> fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 
> >>>> hyperconverged, replica 3, for the engine-vm) pre-created and network 
> >>>> interfaces/bridges (ovirtmgmt and other two bridges, called nfs and lan, 
> >>>> on underlying 802.3ad bonds or plain interfaces) manually pre-configured 
> >>>> in /etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network 
> >>>> service; NetworkManager disabled).
> >>>> 
> >>> 
> >>> If you manually created the network bridges, the match between them and 
> >>> the logical network should happen on name bases.
> >> 
> >> 
> >> Hi Simone,
> >> many thanks for your help (again) :)
> >> 
> >> As you may note from the above comment, the name should actually match 
> >> (it's exactly ovirtmgmt) but it doesn't get recognized.
> >> 
> >> 
> >>> If it doesn't for any reasons (please report if you find any evidence), 
> >>> you can manually bind logical network and network interfaces editing the 
> >>> host properties from the web-ui. At that point the host should become 
> >>> active in a few seconds.
> >> 
> >> 
> >> Well, the most immediate evidence are the error messages already reported 
> >> (given that the bridge is actually present, with the right name and 
> >> actually working).
> >> Apart from that, I find the following past logs (I don't know whether they 
> >> are relevant or not):
> >> 
> >> From /var/log/vdsm/connectivity.log:
> > 
> > 
> > Can you please add also host-deploy logs?
> 
> Please find a gzipped tar archive of the whole directory 
> /var/log/ovirt-engine/host-deploy/ at:
> 
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz

Since I suppose that there's nothing relevant on those logs, I'm planning to 
specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on 
the host, then making the (still blocked) setup re-check.
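
For reference, this is the fragment I mean to add (a minimal sketch; as far as 
I understand the option belongs in the [vars] section of /etc/vdsm/vdsm.conf):

[vars]
net_persistence = ifcfg

followed by a restart of the vdsmd service.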

Is there anything I should pay attention to before proceeding? (in particular 
while restarting VDSM)

I will report back here on the results.

Regards,
Giuseppe

> Many thanks again for your kind assistance.
> 
> Regards,
> Giuseppe
> 
> >> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
> >> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
> >> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 
> >> duplex:full) d
> >> ropped vnet2:(operstate:up speed:0 duplex:full) dropped 
> >> vnet1:(operstate:up spee
> >> d:0 duplex:full) 
> >> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
> >> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
> >> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up 
> >> speed:0 dupl
> >> ex:unknown), bond0:(operstate:up speed:2000 duplex:full), 
> >> bond1:(operstate:up sp
> >> eed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), 
> >> ;vdsmdum
> >> my;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up 
> >> speed:0 dup
> >> lex:unknown), lo:(operstate:up speed:0 duplex:unknown), 
> >> enp7s0f0:(operstate:up s
>

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-03 Thread Giuseppe Ragusa
On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> 
> 
> On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa 
> <giuseppe.rag...@hotmail.com> wrote:
>> __
>> 
>> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
>>> 
>>> 
>>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa 
>>> <giuseppe.rag...@hotmail.com> wrote:
>>>> Hi all,
>>>> I'm stuck with the following error during the final phase of 
>>>> ovirt-hosted-engine-setup:
>>>> 
>>>>           The host hosted_engine_1 is in non-operational state.
>>>>           Please try to activate it via the engine webadmin UI.
>>>> 
>>>> If I login on the engine administration web UI I find the corresponding 
>>>> message (inside NonOperational first host hosted_engine_1 Events tab):
>>>> 
>>>> Host hosted_engine_1 does not comply with the cluster Default networks, 
>>>> the following networks are missing on host: 'ovirtmgmt'
>>>> 
>>>> I'm installing with an oVirt snapshot from October the 27th on a 
>>>> fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 
>>>> hyperconverged, replica 3, for the engine-vm) pre-created and network 
>>>> interfaces/bridges (ovirtmgmt and other two bridges, called nfs and lan, 
>>>> on underlying 802.3ad bonds or plain interfaces) manually pre-configured 
>>>> in /etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network 
>>>> service; NetworkManager disabled).
>>>> 
>>> 
>>> If you manually created the network bridges, the match between them and the 
>>> logical network should happen on name bases.
>> 
>> 
>> Hi Simone,
>> many thanks for your help (again) :)
>> 
>> As you may note from the above comment, the name should actually match (it's 
>> exactly ovirtmgmt) but it doesn't get recognized.
>> 
>> 
>>> If it doesn't for any reasons (please report if you find any evidence), you 
>>> can manually bind logical network and network interfaces editing the host 
>>> properties from the web-ui. At that point the host should become active in 
>>> a few seconds.
>> 
>> 
>> Well, the most immediate evidence are the error messages already reported 
>> (given that the bridge is actually present, with the right name and actually 
>> working).
>> Apart from that, I find the following past logs (I don't know whether they 
>> are relevant or not):
>> 
>> From /var/log/vdsm/connectivity.log:
> 
> 
> Can you please add also host-deploy logs?

Please find a gzipped tar archive of the whole directory 
/var/log/ovirt-engine/host-deploy/ at:

https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110=!AIQUc6i-n5blQO0=file%2cgz

Many thanks again for your kind assistance.

Regards,
Giuseppe

>> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
>> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
>> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 
>> duplex:full) d
>> ropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up 
>> spee
>> d:0 duplex:full) 
>> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
>> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
>> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 
>> dupl
>> ex:unknown), bond0:(operstate:up speed:2000 duplex:full), 
>> bond1:(operstate:up sp
>> eed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), 
>> ;vdsmdum
>> my;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 
>> dup
>> lex:unknown), lo:(operstate:up speed:0 duplex:unknown), 
>> enp7s0f0:(operstate:up s
>> peed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), 
>> enp6s0f1:
>> (operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 
>> duplex:unknown)
>> , bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up 
>> speed:1000
>>  duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), 
>> enp0s20f3:(opers
>> tate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 
>> duplex:full)
>> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
>> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 
>> dupl
>> ex:unknown), bond0:(operstate:up speed:2000 duplex:full), 
>> bond1:(operstate:up sp
>> eed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), 
>> ;vdsmdum
>> my;:(operstate:down speed:0 

Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-02 Thread Giuseppe Ragusa
On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa
> <giuseppe.rag...@hotmail.com> wrote:
>> Hi all,
>>
I'm stuck with the following error during the final phase of 
ovirt-hosted-engine-
setup:
>>
>>
The host hosted_engine_1 is in non-operational state.
>>
Please try to activate it via the engine webadmin UI.
>>
>>
If I login on the engine administration web UI I find the corresponding
message (inside NonOperational first host hosted_engine_1 Events tab):
>>
>>
Host hosted_engine_1 does not comply with the cluster Default networks,
the following networks are missing on host: 'ovirtmgmt'
>>
>>
I'm installing with an oVirt snapshot from October the 27th on a fully-
patched CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged,
replica 3, for the engine-vm) pre-created and network interfaces/bridges
(ovirtmgmt and other two bridges, called nfs and lan, on underlying
802.3ad bonds or plain interfaces) manually pre-configured in 
/etc/sysconfig/network-interfaces/ifcfg-
* (using "classic" network service; NetworkManager disabled).
>>
>
> If you manually created the network bridges, the match between them
> and the logical network should happen on name bases.

Hi Simone, many thanks for your help (again) :)

As you may note from the above comment, the name should actually match
(it's exactly ovirtmgmt) but it doesn't get recognized.

> If it doesn't for any reasons (please report if you find any
> evidence), you can manually bind logical network and network
> interfaces editing the host properties from the web-ui. At that point
> the host should become active in a few seconds.

Well, the most immediate evidence are the error messages already
reported (given that the bridge is actually present, with the right name
and actually working). Apart from that, I find the following past logs
(I don't know whether they are relevant or not):

From /var/log/vdsm/connectivity.log:

2015-11-01 21:37:21,029:DEBUG:recent_client:True
2015-11-01 21:37:51,088:DEBUG:recent_client:False
2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
2015-11-01 21:38:36,174:DEBUG:recent_client:True
2015-11-01 21:39:06,233:DEBUG:recent_client:False
2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
2015-11-01 21:48:52,450:DEBUG:recent_client:False
2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duple

[ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE

2015-11-01 Thread Giuseppe Ragusa
Hi all,
I'm stuck with the following error during the final phase of 
ovirt-hosted-engine-setup:

  The host hosted_engine_1 is in non-operational state.
  Please try to activate it via the engine webadmin UI.

If I login on the engine administration web UI I find the corresponding message 
(inside NonOperational first host hosted_engine_1 Events tab):

Host hosted_engine_1 does not comply with the cluster Default networks, the 
following networks are missing on host: 'ovirtmgmt'

I'm installing with an oVirt snapshot from October the 27th on a fully-patched 
CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged, replica 3, for 
the engine-vm) pre-created and network interfaces/bridges (ovirtmgmt and two 
other bridges, called nfs and lan, on underlying 802.3ad bonds or plain 
interfaces) manually pre-configured in 
/etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network service; 
NetworkManager disabled).
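
Just to give an idea of the style of pre-configuration, here is a minimal and 
purely illustrative sketch (device names, bonding options and addresses are not 
the real ones):

# ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.11
NETMASK=255.255.255.0
DELAY=0
NM_CONTROLLED=no

# ifcfg-bond0 (one of the bridge ports)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"
BRIDGE=ovirtmgmt
NM_CONTROLLED=no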

I seem to recall that a preconfigured network setup on oVirt 3.6 would need 
something predefined on the libvirt side too (apart from usual ifcfg-* files), 
but I cannot find the relevant mailing list message anymore nor any other 
specific documentation.

Does anyone have any further suggestion or clue (code/docs to read)?

Many thanks in advance.

Kind regards,
Giuseppe

PS: please keep also my address in replying because I'm experiencing some 
problems between Hotmail and oVirt-mailing-list
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE

2015-10-26 Thread Giuseppe Ragusa
On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
> 
> 
> On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa 
> <giuseppe.rag...@hotmail.com> wrote:
>> Hi all,
>> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
>> 
>> I'm trying to trick the self-hosted-engine setup to create a custom engine 
>> vm with 3 nics (with fixed MACs/UUIDs).
>> 
>> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) 
>> and the network bridges (ovirtmgmt and other two bridges, called nfs and 
>> lan, for the engine vm) have been preconfigured on the initial fully-patched 
>> CentOS 7.1 host (plus other two identical hosts which are awaiting to be 
>> added).
>> 
>> I'm stuck at a point with the engine vm successfully starting but with only 
>> one nic present (connected to the ovirtmgmt bridge).
>> 
>> I'm trying to obtain the modified engine vm by means of a trick which used 
>> to work in a previous (aborted because of lacking GlusterFS-by-libgfapi 
>> support) oVirt 3.5 test setup (about a year ago, maybe more): I'm 
>> substituting the standard 
>> /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the following:
>> 
>> vmId=@VM_UUID@
>> memSize=@MEM_SIZE@
>> display=@CONSOLE_TYPE@
>> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, 
>> type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
>> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
>>  slot:0x06, domain:0x, type:pci, 
>> function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
>> devices={device:scsi,model:virtio-scsi,type:controller}
>> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
>>  slot:0x03, domain:0x, type:pci, 
>> function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00,
>>  slot:0x09, domain:0x, type:pci, 
>> function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00,
>>  slot:0x0c, domain:0x, type:pci, 
>> function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
>> vmName=@NAME@
>> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>> smp=@VCPUS@
>> cpuType=@CPU_TYPE@
>> emulatedMachine=@EMULATED_MACHINE@
>> 
>> but unfortunately the vm gets created like this (output from "ps"; note that 
>> I'm attaching a CentOS7.1 Netinstall ISO with an embedded kickstart: the 
>> installation should proceed by HTTP on the lan network but obviously fails):
>> 
>> /usr/libexec/qemu-kvm -name HostedEngine -S -machine 
>> pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -realtime 
>> mlock=off 
>> -smp 2,sockets=2,cores=1,threads=1 -uuid 
>> f49da721-8aa6-4422-8b91-e91a0e38aa4a -s
>> mbios type=1,manufacturer=oVirt,product=oVirt 
>> Node,version=7-1.1503.el7.centos.2
>> .8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a
>> 0e38aa4a -no-user-config -nodefaults -chardev 
>> socket,id=charmonitor,path=/var/li
>> b/libvirt/qemu/HostedEngine.monitor,server,nowait -mon 
>> chardev=charmonitor,id=mo
>> nitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -global 
>> kvm-pit.l
>> ost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device 
>> piix3-usb-uh
>> ci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
>> virtio-scsi-pci,id=scsi0,bus=pci.0,addr
>> =0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive 
>> file=
>> /var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= 
>> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 
>> -drive 
>> file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1

[ovirt-users] Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE

2015-10-25 Thread Giuseppe Ragusa
Hi all,
I'm experiencing some difficulties using oVirt 3.6 latest snapshot.

I'm trying to trick the self-hosted-engine setup to create a custom engine vm 
with 3 nics (with fixed MACs/UUIDs).

The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) and 
the network bridges (ovirtmgmt and two other bridges, called nfs and lan, for 
the engine vm) have been preconfigured on the initial fully-patched CentOS 7.1 
host (plus two other identical hosts which are waiting to be added).

I'm stuck at a point with the engine vm successfully starting but with only one 
nic present (connected to the ovirtmgmt bridge).

I'm trying to obtain the modified engine vm by means of a trick which used to 
work in a previous (aborted because of lacking GlusterFS-by-libgfapi support) 
oVirt 3.5 test setup (about a year ago, maybe more): I'm substituting the 
standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the 
following:

vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, 
type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
 slot:0x06, domain:0x, type:pci, 
function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
 slot:0x03, domain:0x, type:pci, 
function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00,
 slot:0x09, domain:0x, type:pci, 
function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00,
 slot:0x0c, domain:0x, type:pci, 
function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@

but unfortunately the vm gets created like this (output from "ps"; note that 
I'm attaching a CentOS7.1 Netinstall ISO with an embedded kickstart: the 
installation should proceed by HTTP on the lan network but obviously fails):

/usr/libexec/qemu-kvm -name HostedEngine -S -machine 
pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -realtime mlock=off 
-smp 2,sockets=2,cores=1,threads=1 -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a -s
mbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2
.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a
0e38aa4a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/li
b/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=mo
nitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -global kvm-pit.l
ost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uh
ci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr
=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=
/var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= 
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 
-drive 
file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3 
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -chardev 

[ovirt-users] about LXC and ovirt

2015-10-20 Thread Giuseppe Ragusa
> On Tue, Oct 20, 2015 at 4:51 AM, Dan Kenigsberg  wrote:
> 
> > On Mon, Oct 19, 2015 at 09:16:06PM +0200, Johan Kooijman wrote:
> > > Never seen an  update to this ticket. Are there any plans?
> > >
> > > On Tue, Jun 24, 2014 at 3:35 PM, Sven Kieske 
> > wrote:
> > >
> > > >
> > > >
> > > > Am 24.06.2014 15:13, schrieb Nathanaël Blanchet:
> > > > > Hi all,
> > > > >
> > > > > now rhel7 is out, it will become a part of the ovirt project in a
> > near
> > > > > future. Given taht official LXC support aims to complete the KVM
> > > > > virtualization part, is LXC planned to  be supported for linux VM by
> > > > > ovirt, like openvz is with proxmox?
> > > >
> > > > very good question, can't wait to read an answer!
> > > > +1 from here.
> >
> > I'm not aware of current plans. We can consider this when ovirt-4.0
> > feature request season opens.
> >
> > Until then, can you share your own use case for running LXC?
> >
> 
> It seems like Proxmox have quite the install base especially due to the
> ability to mix containers and "fat" VMs. AFAIK that's the only feature they
> have that is ahead of oVirt. And that install base should tell us this is
> indeed a feature needed and widely used.
> 
> 
> >
> > I'd love to see a vdsm hook that translates the qemu-kvm domxml into an
> > lxc one, as a first step. Anyone?
> >
> 
> That can be a fun project to do, but I'm not volunteering just yet ;)
> 
> 
> >
> > Dan.
> > ___
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
+1
A couple of related questions:
*) what can we expect from the deprecation of libvirt LXC driver in RHEL? 
(CentOS would follow suit, barring an extraordinary effort from the 
Virtualization SIG, akin to the Xen-on-CentOS one)
*) dreaming of a future convergence of oVirt-node and Atomic would be... well, 
just dreaming? ;)
Giuseppe
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] sanlock + gluster recovery -- RFE

2014-05-21 Thread Giuseppe Ragusa
Hi,

 - Original Message -
  From: Ted Miller tmiller at hcjb.org
  To: users users at ovirt.org
  Sent: Tuesday, May 20, 2014 11:31:42 PM
  Subject: [ovirt-users] sanlock + gluster recovery -- RFE
  
  As you are aware, there is an ongoing split-brain problem with running
  sanlock on replicated gluster storage. Personally, I believe that this is
  the 5th time that I have been bitten by this sanlock+gluster problem.
  
  I believe that the following are true (if not, my entire request is probably
  off base).
  
  
  * ovirt uses sanlock in such a way that when the sanlock storage is on a
  replicated gluster file system, very small storage disruptions can
  result in a gluster split-brain on the sanlock space
 
 Although this is possible (at the moment) we are working hard to avoid it.
 The hardest part here is to ensure that the gluster volume is properly
 configured.
 
 The suggested configuration for a volume to be used with ovirt is:
 
 Volume Name: (...)
 Type: Replicate
 Volume ID: (...)
 Status: Started
 Number of Bricks: 1 x 3 = 3
 Transport-type: tcp
 Bricks:
 (...three bricks...)
 Options Reconfigured:
 network.ping-timeout: 10
 cluster.quorum-type: auto
 
 The two options ping-timeout and quorum-type are really important.
 
 You would also need a build where this bug is fixed in order to avoid any
 chance of a split-brain:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=1066996

It seems that the aforementioned bug is peculiar to 3-bricks setups.

I understand that a 3-bricks setup can allow proper quorum formation without 
resorting to first-configured-brick-has-more-weight convention used with only 
2 bricks and quorum auto (which makes one node special, so not properly 
any-single-fault tolerant).

But, since we are on ovirt-users, is there a similar suggested configuration 
for a 2-hosts setup oVirt+GlusterFS with oVirt-side power management properly 
configured and tested-working?
I mean a configuration where any host can go south and oVirt (through the 
other one) fences it (forcibly powering it off with confirmation from IPMI or 
similar) then restarts HA-marked vms that were running there, all the while 
keeping the underlying GlusterFS-based storage domains responsive and 
readable/writeable (maybe apart from a lapse between detected other-node 
unresponsiveness and confirmed fencing)?

Furthermore: is such a suggested configuration possible in a self-hosted-engine 
scenario?
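
Just to fix ideas on the options mentioned above, a minimal sketch assuming a 
volume simply named "engine":

gluster volume set engine network.ping-timeout 10
gluster volume set engine cluster.quorum-type auto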

Regards,
Giuseppe

  How did I get into this mess?
  
  ...
  
  What I would like to see in ovirt to help me (and others like me). 
  Alternates
  listed in order from most desirable (automatic) to least desirable (set of
  commands to type, with lots of variables to figure out).
 
 The real solution is to avoid the split-brain altogether. At the moment it
 seems that using the suggested configurations and the bug fix we shouldn't
 hit a split-brain.
 
  1. automagic recovery
  
  2. recovery subcommand
  
  3. script
  
  4. commands
 
 I think that the commands to resolve a split-brain should be documented.
 I just started a page here:
 
 http://www.ovirt.org/Gluster_Storage_Domain_Reference
 
 Could you add your documentation there? Thanks!
 
 -- 
 Federico

  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Users] Post-Install Engine VM Changes Feasible?

2014-05-14 Thread Giuseppe Ragusa
Hi all,
sorry for the late reply.

I noticed that I missed the deviceId property on my additional-nic line below, 
but I can confirm that the engine vm (installed with my previously modified 
template in /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in as 
outlined below) is still up and running (apparently) ok without it (I verified 
that the deviceId property has not been added automatically in 
/etc/ovirt-hosted-engine/vm.conf).

I admit that modifying a package file not marked as configuration (under 
/usr/share... may the FHS forgive me... :) is not best practice, but modifying 
the configuration one (under /etc...) afterwards seemed more error prone (needs 
propagation to further nodes).

In order to have a clear picture of the matter (and write/add-to a wiki page on 
engine vm customization) I'd like to read more on the syntax of these vm.conf 
files (they are neither libvirt XML files nor OTOPI files) and which properties 
are default/needed/etc.

By simple analogy, as an example, I thought that a unique index property would 
be needed (as in ide/virtio disk devices) for adding a nic, but Andrew's 
example does not add it...
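
For comparison, this is the shape of the additional-nic line I have in mind 
(purely illustrative values, modelled on the lines already posted in this 
thread):

devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}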

Any pointers to doc/code for further enlightenment?

Many thanks in advance,
Giuseppe

 Date: Thu, 10 Apr 2014 08:40:25 +0200
 From: sbona...@redhat.com
 To: and...@andrewklau.com
 CC: giuseppe.rag...@hotmail.com; j...@wrale.com; users@ovirt.org
 Subject: Re: [Users] Post-Install Engine VM Changes Feasible?
 
 
 
 Hi,
 
 
 Il 10/04/2014 02:40, Andrew Lau ha scritto:
  On Tue, Apr 8, 2014 at 8:52 PM, Andrew Lau and...@andrewklau.com wrote:
  On Mon, Mar 17, 2014 at 8:01 PM, Sandro Bonazzola sbona...@redhat.com 
  wrote:
  Il 15/03/2014 12:44, Giuseppe Ragusa ha scritto:
  Hi Joshua,
 
  --
  Date: Sat, 15 Mar 2014 02:32:59 -0400
  From: j...@wrale.com
  To: users@ovirt.org
  Subject: [Users] Post-Install Engine VM Changes Feasible?
 
  Hi,
 
  I'm in the process of installing 3.4 RC(2?) on Fedora 19.  I'm using 
  hosted engine, introspective GlusterFS+keepalived+NFS ala [1], across 
  six nodes.
 
  I have a layered networking topology ((V)LANs for public, internal, 
  storage, compute and ipmi).  I am comfortable doing the bridging for each
  interface myself via /etc/sysconfig/network-scripts/ifcfg-*.
 
  Here's my desired topology: 
  http://www.asciiflow.com/#Draw6325992559863447154
 
  Here's my keepalived setup: 
  https://gist.github.com/josh-at-knoesis/98618a16418101225726
 
  I'm writing a lot of documentation of the many steps I'm taking.  I hope 
  to eventually release a distributed introspective all-in-one (including
  distributed storage) guide.
 
 I hope you'll publish it also on ovirt.org wiki :-)
 
 
  Looking at vm.conf.in, it looks like I'd by default 
  end up with one interface on my engine, probably on my internal VLAN, as
  that's where I'd like the control traffic to flow.  I definitely could 
  do NAT, but I'd be most happy to see the engine have a presence on all 
  of the
  LANs, if for no other reason than because I want to send backups 
  directly over the storage VLAN.
 
  I'll cut to it:  I believe I could successfully alter the vdsm template 
  (vm.conf.in) to give me the extra interfaces I 
  require.
  It hit me, however, that I could just take the defaults for the initial 
  install.  Later, I think I'll be able to come back with virsh and make my
  changes to the gracefully disabled VM.  Is this true?
 
  [1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
 
  Thanks,
  Joshua
 
 
  I started from the same reference[1] and ended up statically modifying 
  vm.conf.in before launching setup, like this:
 
  cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in 
  /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
  cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
  vmId=@VM_UUID@
  memSize=@MEM_SIZE@
  display=@CONSOLE_TYPE@
  devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, 
  bus:1,
  type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
  devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
  slot:0x06, domain:0x, type:pci, 
  function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
  devices={device:scsi,model:virtio-scsi,type:controller}
  devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
  slot:0x03, domain:0x, type:pci, 
  function:0x0},device:bridge,type:interface@BOOT_PXE@}
  devices={index:8,nicModel:pv

Re: [Users] Error adding second host to self-hosted-engine

2014-04-06 Thread Giuseppe Ragusa
Hi all,
while going through the logs I found the following in engine.log:

2014-04-06 07:54:48,788 INFO  [org.ovirt.engine.core.bll.InstallerMessages] 
(VdsDeploy) Installation 172.16.100.2: Retrieving installation logs to: 
'/var/log/ovirt-engi
ne/host-deploy/ovirt-20140406075448-172.16.100.2-2325b258.log'
2014-04-06 07:54:48,793 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) Correlation ID: 2325b258, Call Stack: null, Custom Even
t ID: -1, Message: Installing Host hosted_engine_2. Retrieving installation 
logs to: 
'/var/log/ovirt-engine/host-deploy/ovirt-20140406075448-172.16.100.2-2325b258.log'.
2014-04-06 07:54:49,140 INFO  [org.ovirt.engine.core.bll.InstallerMessages] 
(VdsDeploy) Installation 172.16.100.2: Stage: Termination
2014-04-06 07:54:49,238 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(VdsDeploy) Correlation ID: 2325b258, Call Stack: null, Custom Even
t ID: -1, Message: Installing Host hosted_engine_2. Stage: Termination.
2014-04-06 07:54:49,239 ERROR [org.ovirt.engine.core.bll.VdsDeploy] (VdsDeploy) 
Error during deploy dialog: java.io.IOException: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308) 
[rt.jar:1.7.0_51]
at java.io.PipedInputStream.read(PipedInputStream.java:378) 
[rt.jar:1.7.0_51]
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
[rt.jar:1.7.0_51]
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
[rt.jar:1.7.0_51]
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
[rt.jar:1.7.0_51]
at java.io.InputStreamReader.read(InputStreamReader.java:184) 
[rt.jar:1.7.0_51]
at java.io.BufferedReader.fill(BufferedReader.java:154) 
[rt.jar:1.7.0_51]
at java.io.BufferedReader.readLine(BufferedReader.java:317) 
[rt.jar:1.7.0_51]
at java.io.BufferedReader.readLine(BufferedReader.java:382) 
[rt.jar:1.7.0_51]
at 
org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:355)
 [otopi.jar:]
at 
org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:405)
 [otopi.jar:]
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:749) 
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$1800(VdsDeploy.java:80) 
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$45.run(VdsDeploy.java:897) 
[bll.jar:]
at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

2014-04-06 07:54:49,245 ERROR [org.ovirt.engine.core.bll.VdsDeploy] 
(org.ovirt.thread.pool-6-thread-39) [2325b258] Error during host 172.16.100.2 
install: java.io.IOExc
eption: Pipe closed
at java.io.PipedInputStream.read(PipedInputStream.java:308) 
[rt.jar:1.7.0_51]
at java.io.PipedInputStream.read(PipedInputStream.java:378) 
[rt.jar:1.7.0_51]
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
[rt.jar:1.7.0_51]
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
[rt.jar:1.7.0_51]
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
[rt.jar:1.7.0_51]
at java.io.InputStreamReader.read(InputStreamReader.java:184) 
[rt.jar:1.7.0_51]
at java.io.BufferedReader.fill(BufferedReader.java:154) 
[rt.jar:1.7.0_51]
at java.io.BufferedReader.readLine(BufferedReader.java:317) 
[rt.jar:1.7.0_51]
at java.io.BufferedReader.readLine(BufferedReader.java:382) 
[rt.jar:1.7.0_51]
at 
org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:355)
 [otopi.jar:]
at 
org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:405)
 [otopi.jar:]
at org.ovirt.engine.core.bll.VdsDeploy._threadMain(VdsDeploy.java:749) 
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy.access$1800(VdsDeploy.java:80) 
[bll.jar:]
at org.ovirt.engine.core.bll.VdsDeploy$45.run(VdsDeploy.java:897) 
[bll.jar:]
at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

2014-04-06 07:54:49,249 ERROR [org.ovirt.engine.core.bll.InstallerMessages] 
(org.ovirt.thread.pool-6-thread-39) [2325b258] Installation 172.16.100.2: Pipe 
closed

and then a little search turned up an abandoned effort at 
http://gerrit.ovirt.org/#/c/21460/

Since those unexpectedly closed pipes seem to be not so easily reproducible, 
should I simply retry? ;

Many thanks,
Giuseppe

From: giuseppe.rag...@hotmail.com
To: users@ovirt.org
Subject: Error adding second host to self-hosted-engine
Date: Sun, 6 Apr 2014 08:18:36 +0200




Hi all,
while reinstalling from scratch on CentOS 6.5 (using oVirt 3.4.0 GA plus latest 
snapshot using a GlusterFS-based NFS storage domain for the self-hosted Engine 
and a pure GlusterFS domain for the datacenter) the adding of the second node 
(already part of a 3.5.0beta5 GlusterFS cluster) failed at the end with:

[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
[ INFO  ] Waiting for the host to become 

Re: [Users] Otopi pre-seeded Apache redirect directive ignored by engine-setup 3.4.0 GA

2014-04-05 Thread Giuseppe Ragusa
Hi Didi,
I opened BZ#1084717 and added Gerrit change as external reference.

I will report back if I manage to try engine-setup again with your patch 
applied.
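
For reference, the combined answer-file fragment being discussed would look 
roughly like this (a minimal sketch; the file name is just an example to be 
passed via --config-append, and I'm assuming the usual [environment:default] 
section header of otopi answer files):

# engine-answers.conf (example name)
[environment:default]
OVESETUP_APACHE/configureRootRedirection=bool:False
OVESETUP_APACHE/configured=bool:True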

Many thanks,
Giuseppe

Date: Sun, 30 Mar 2014 02:46:58 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org; alo...@redhat.com
Subject: Re: [Users] Otopi pre-seeded Apache redirect directive ignored by 
engine-setup 3.4.0 GA

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: users@ovirt.org
Sent: Saturday, March 29, 2014 4:49:32 AM
Subject: [Users] Otopi pre-seeded Apache redirect directive ignored by 
engine-setup 3.4.0 GA

Hi all,
I tried pre-seeding engine-setup (part of a from scratch self-hosted-engine 
full reinstallation) as per subject with:

OVESETUP_APACHE/configureRootRedirection=bool:False

but the /etc/httpd/conf.d/ovirt-engine-root-redirect.conf file gets created 
anyway with usual content.

If needed, I can provide logs as soon as I manage to start my Engine VM again 
(just reported a separate bug on ovirt-hosted-engine-setup).

Obviously that's just a nuisance (comment-out/remove the 
ovirt-engine-root-redirect.conf file, restart Apache and it works as desired): 
I just wanted to notify it (can open BZ# if it helps).
Seems like a bug, you can try this [1] fix.
A potential workaround might be to add this too: OVESETUP_APACHE/configured=bool:True
[1] http://gerrit.ovirt.org/26211
If you try any of these, please report back. You might also want to open a bug 
for this.
Thanks for the report!
-- Didi

  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] 3.4.0 GA Web Admin and Firefox

2014-03-29 Thread Giuseppe Ragusa
Hi all,
each time I get to the administrator portal login with Firefox 24.4.0 ESR I get 
told that it isn't optimal.

Is it a false alarm (maybe something must be updated in browser detection) or 
should I change browser?

Many thanks.

Regards,
Giuseppe
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Otopi pre-seeded Apache redirect directive ignored by engine-setup 3.4.0 GA

2014-03-28 Thread Giuseppe Ragusa
Hi all,
I tried pre-seeding engine-setup (part of a from scratch self-hosted-engine 
full reinstallation) as per subject with:

OVESETUP_APACHE/configureRootRedirection=bool:False

but the /etc/httpd/conf.d/ovirt-engine-root-redirect.conf file gets created 
anyway with usual content.

If needed, I can provide logs as soon as I manage to start my Engine VM again 
(just reported a separate bug on ovirt-hosted-engine-setup).

Obviously that's just a nuisance (comment-out/remove the 
ovirt-engine-root-redirect.conf file, restart Apache and it works as desired): 
I just wanted to notify it (can open BZ# if it helps).

Regards,
Giuseppe
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Otopi pre-seeded answers and firewall settings

2014-03-26 Thread Giuseppe Ragusa
 From: s.kie...@mittwald.de
 To: users@ovirt.org
 Date: Wed, 26 Mar 2014 08:23:05 +
 Subject: Re: [Users] Otopi pre-seeded answers and firewall settings
 
 I really don't get why many people always ask others to open
 BZ, if you could just do it yourself.
 
 It doesn't take much time, less than writing a
 Mail to explain to someone to report a BZ.

I suppose that proper bug tracking management should in general keep involved 
the real stakeholders, ie the users that have experienced the problems first 
hand, so that any further action can keep them in the feedback loop with real 
use cases, reminders to developers etc. and this seems particularly appropriate 
for open source projects with developers that devote voluntary work (Sven: try 
not to think of all those nice @redhat.com addresses and RHEV references ; )

On the other hand I also understand that this is a particularly straightforward 
RFE with no (apparent) need for logs, traces etc. but the established workflow 
always wins :) and proves real user request to managers :))

 So here it is:
 
 https://bugzilla.redhat.com/show_bug.cgi?id=1080823

Many thanks Sven (I actually have a bad track record as a Bugzilla reporter... 
almost always dismissed ; )

Regards,
Giuseppe

 On 26.03.2014 08:51, Yedidyah Bar David wrote:
  You can open a bug if you want, to make this configurable. 
 
 -- 
 Mit freundlichen Grüßen / Regards
 
 Sven Kieske
 
 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Otopi pre-seeded answers and firewall settings

2014-03-25 Thread Giuseppe Ragusa
Hi Didi,
many thanks for your invaluable help!

I'll try your suggestion 
(/etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf) asap and then I will 
report back.
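
If I read the suggested commands right, the file would simply end up containing:

[environment:enforce]
NETWORK/iptablesEnable=bool:False

in /etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf.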

By the way: I have a really custom iptables setup (multiple separated networks 
on hypervisor hosts), so I suppose it's best to hand tune firewall rules and 
then leave them alone (I pre-configure them, so the setup procedure won't be 
impeded in its communication needs anyway AND I will always guarantee the most 
stringent filtering possible with default deny, etc.).

Many thanks again,
Giuseppe

Date: Tue, 25 Mar 2014 04:05:33 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Otopi pre-seeded answers and firewall settings

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: Yedidyah Bar David d...@redhat.com
Cc: Users@ovirt.org users@ovirt.org
Sent: Tuesday, March 25, 2014 1:53:20 AM
Subject: RE: [Users] Otopi pre-seeded answers and firewall settings

Hi Didi,
I found the references to NETWORK/iptablesEnable in my engine logs 
(/var/log/ovirt-engine/host-deploy/ovirt-*.log), but it didn't seem to work 
after all.

Full logs attached.

I resurrected my Engine by rebooting the (still only) host, then restarting 
ovirt-ha-agent (at startup the agent failed while trying to launch vdsm, but I 
found vdsm running and so tried manually...).
OK, so it's host-deploy that's doing that. But it's not host-deploy itself - 
it's the engine that is talking to it, asking it to configure iptables. I don't 
know how to make the agent not do that. I searched a bit the sources (which I 
don't know) and didn't find a simple way.
You can, however, try to override this by:
# mkdir -p /etc/ovirt-host-deploy.conf.d
# echo '[environment:enforce]' > /etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf
# echo 'NETWORK/iptablesEnable=bool:False' >> /etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf
Never tried that, and not sure it's recommended - if it does work, it means 
that host-deploy will not update iptables, but the engine will think it did. So 
it's better to find a way to make the engine not do that. Or, better yet, that 
you'll explain why you need this and somehow make the engine do what you 
want...
-- Didi
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Otopi pre-seeded answers and firewall settings

2014-03-25 Thread Giuseppe Ragusa
Hi Joshua,
many thanks for your suggestion which I suppose would work perfectly, but I 
actually want iptables (CentOS 6.5 here, so no firewalld) rules in place all 
the time, but only MY OWN iptables rules ;

Regards,
Giuseppe

Date: Tue, 25 Mar 2014 18:04:04 -0400
Subject: Re: [Users] Otopi pre-seeded answers and firewall settings
From: j...@wrale.com
To: giuseppe.rag...@hotmail.com

Perhaps you could add the iptables and firewalld packages to yum.conf as 
excludes.  I don't know if this would fail silently, but if so, the engine 
installer would never know.

Thanks,

Joshua
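(A minimal sketch of the kind of yum.conf exclusion Joshua describes; as he
notes, how the installer reacts to the excluded packages is untested:)

# hypothetical addition to /etc/yum.conf on the host
exclude=iptables* firewalld*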


On Tue, Mar 25, 2014 at 5:49 PM, Giuseppe Ragusa giuseppe.rag...@hotmail.com 
wrote:




Hi Didi,
many thanks for your invaluable help!

I'll try your suggestion 
(/etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf) asap and then I will 
report back.

By the way: I have a really custom iptables setup (multiple separated networks
on hypervisor hosts), so I suppose it's best to hand-tune the firewall rules and
then leave them alone (I pre-configure them, so the setup procedure won't be
impeded in its communication needs anyway AND I will always guarantee the most
stringent filtering possible, with default deny etc.).


Many thanks again,
Giuseppe

Date: Tue, 25 Mar 2014 04:05:33 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com

CC: users@ovirt.org
Subject: Re: [Users] Otopi pre-seeded answers and firewall settings


From: Giuseppe Ragusa giuseppe.rag...@hotmail.com

To: Yedidyah Bar David d...@redhat.com
Cc: Users@ovirt.org users@ovirt.org

Sent: Tuesday, March 25, 2014 1:53:20 AM
Subject: RE: [Users] Otopi pre-seeded answers and firewall settings

Hi Didi,
I found the references to NETWORK/iptablesEnable in my engine logs 
(/var/log/ovirt-engine/host-deploy/ovirt-*.log), but it didn't seem to work 
after all.


Full logs attached.

I resurrected my Engine by rebooting the (still only) host, then restarting 
ovirt-ha-agent (at startup the agent failed while trying to launch vdsm, but I 
found vdsm running and so tried manually...).

OK, so it's host-deploy that's doing that. But it's not host-deploy itself -
it's the engine that is talking to it, asking it to configure iptables. I don't
know how to make the agent not do that. I searched the sources a bit (which I
don't know well) and didn't find a simple way.
You can, however, try to override this by:
# mkdir -p /etc/ovirt-host-deploy.conf.d
# echo '[environment:enforce]' > /etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf
# echo 'NETWORK/iptablesEnable=bool:False' >> /etc/ovirt-host-deploy.conf.d/99-prevent-iptables.conf
Never tried that, and not sure it's recommended - if it does work, it means
that host-deploy will not update iptables, but the engine will think it did. So
it's better to find a way to make the engine not do that. Or, better yet, that
you'll explain why you need this and somehow make the engine do what you
want...
-- Didi
  




  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Otopi pre-seeded answers and firewall settings

2014-03-24 Thread Giuseppe Ragusa
Hi Didi,

Date: Mon, 24 Mar 2014 03:36:32 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Otopi pre-seeded answers and firewall settings

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: Users@ovirt.org users@ovirt.org
Sent: Sunday, March 23, 2014 10:44:02 PM
Subject: [Users] Otopi pre-seeded answers and firewall settings

Hi all,
I'm trying to automate as much as possible of ovirt-hosted-engine-setup and 
engine-setup by means of otopi answer files passed in using 
--config-append=filename.conf.

I succeeded in forcing engine-setup to leave my iptables settings alone with:

OVESETUP_CONFIG/firewallManager=str:iptables
OVESETUP_CONFIG/updateFirewall=bool:False
 Right.


but ovirt-hosted-engine-setup still modified my iptables settings even with the 
following options:

OVEHOSTED_NETWORK/firewallManager=str:iptables
 Actually I do not think hosted-engine deploy provides a means to disable this
 as engine-setup does. If you read the code carefully, you will see that you
 can make it do nothing by setting this to a non-existent manager, e.g.:
 OVEHOSTED_NETWORK/firewallManager=str:nonexistent
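(A hypothetical answer-file fragment implementing this suggestion; the
[environment:default] section header and the file path are assumptions:)

# fragment of /root/ovhe-setup-answers.conf (hypothetical)
[environment:default]
OVEHOSTED_NETWORK/firewallManager=str:nonexistent
# then run:
#   ovirt-hosted-engine-setup --config-append=/root/ovhe-setup-answers.conf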

I will try this asap (reinstalling from scratch using latest 3.4 snapshot 
packages + latest GlusterFS 3.5 nightly) and will report back.


OVEHOSTED_NETWORK/iptablesEnable=bool:False
 Where did you get this from? Can't find it in the code.

Nor do I anymore... it must have been my fault, sorry for the confusion



Maybe I used the wrong option (deduced by looking inside source code).

Does anybody have any hint/suggestion?
 The above should prevent 'hosted-engine --deploy' from configuring iptables
 on the host, and 'engine-setup' from configuring iptables on the VM. Later,
 the engine runs 'ovirt-host-deploy', which connects to the host and
 configures things there - some by itself, some using vdsm, and some sent
 through them directly from the engine. This is a process I know less well...

The timestamps on the saved/modified iptables files suggest something happening
right at the end of setup (when the Self-Hosted-Engine setup adds/registers the host).

 You can look at and/or post more relevant logs - 
 /var/log/ovirt-engine/host-deploy/* , /var/log/ovirt-engine/*.log from the 
 engine VM and /var/log/vdsm/* from the host, and also check iptables 
 configuration at various stages - during hosted-engine deploy but before 
 connecting to the engine, after, etc. -- 
 Didi
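(A possible way to gather the logs and iptables snapshots Didi asks for - a
sketch using only the paths quoted above; the archive file names are invented:)

# on the Engine VM
tar czf engine-logs.tar.gz /var/log/ovirt-engine/host-deploy/ /var/log/ovirt-engine/*.log
# on the host
tar czf host-logs.tar.gz /var/log/vdsm/
iptables-save > iptables-stage-N.txt   # repeat at each stage of the deploy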

/var/log/vdsm/* on the host contains no references to iptables.
I will check the Engine logs as soon as I can start it up again (the
GlusterFS-based NFS keeps crashing, maybe due to OOM or a memory leak).

Many thanks for your help,
Giuseppe

  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Self-hosted engine setup ok but no engine vm running

2014-03-23 Thread Giuseppe Ragusa
 Date: Sun, 23 Mar 2014 09:44:37 +0100
 From: jmosk...@redhat.com
 To: giuseppe.rag...@hotmail.com; users@ovirt.org
 CC: dfedi...@redhat.com
 Subject: Re: [Users] Self-hosted engine setup ok but no engine vm running
 
 On 03/15/2014 03:03 AM, Giuseppe Ragusa wrote:
  Hi all,
  while testing further a from-scratch self-hosted-engine installation on
  CentOS 6.5 (after two setup restarts: applying a workaround for a
  missing pki directory and tweaking my own iptables rules to allow ping
  towards default gateway) on a physical node (oVirt 3.4.0_pre + GlusterFS
  3.5.0beta4; NFS storage for engine VM), the process ends successfully
  but the Engine VM is not found running afterwards.
 
  I archived the whole /var/log directory and attached here for completeness.
 
  I'll wait a bit for questions or other hints/requests before trying any
  further action.
 
  Many thanks in advance for your assistance,
  Giuseppe
 
 
 According to the logs you ran into: 
 https://bugzilla.redhat.com/show_bug.cgi?id=1075126 It's already fixed 
 in ovirt-hosted-engine-ha-1.1.2.1
 
 --Jirka

I had already applied manually the first workaround 
(http://gerrit.ovirt.org/25799) but then I noticed it had been updated 
(http://gerrit.ovirt.org/25825) and I can confirm that it works now (as I 
separately reported to Didi, who helped me with the first steps of the setup): 
simply restarting ovirt-ha-broker and ovirt-ha-agent makes the Engine VM come 
up automatically.
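(For reference, the restart described would look roughly like this on CentOS 6;
the --vm-status check at the end is an assumed verification step:)

service ovirt-ha-broker restart
service ovirt-ha-agent restart
hosted-engine --vm-status    # confirm the Engine VM is reported up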

I suppose that the updated packages will all be published together at the
3.4.0 GA, won't they?

Thank you very much for your assistance.

Regards,
Giuseppe
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Otopi pre-seeded answers and firewall settings

2014-03-23 Thread Giuseppe Ragusa
Hi all,
I'm trying to automate as much as possible of ovirt-hosted-engine-setup and 
engine-setup by means of otopi answer files passed in using 
--config-append=filename.conf.

I succeeded in forcing engine-setup to leave my iptables settings alone with:

OVESETUP_CONFIG/firewallManager=str:iptables
OVESETUP_CONFIG/updateFirewall=bool:False

but ovirt-hosted-engine-setup still modified my iptables settings even with the 
following options:

OVEHOSTED_NETWORK/firewallManager=str:iptables
OVEHOSTED_NETWORK/iptablesEnable=bool:False

Maybe I used the wrong option (deduced by looking inside source code).

Does anybody have any hint/suggestion?
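(For reference, a minimal sketch of how such an answer file might be assembled
and passed in; the [environment:default] header and the file name are
assumptions:)

# hypothetical answers.conf
[environment:default]
OVESETUP_CONFIG/firewallManager=str:iptables
OVESETUP_CONFIG/updateFirewall=bool:False
# then:
#   engine-setup --config-append=answers.conf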

Many thanks in advance,
Giuseppe
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Self-hosted-engine setup error

2014-03-22 Thread Giuseppe Ragusa
Hi Didi,
sorry for the delay in reporting final results.

I can confirm that simply creating /etc/pki/libvirt suffices: the private
subdir gets automatically and correctly created (along with all certificates
etc.) and setup completes fine.

It did not start up because of BZ #1075126 (the HA agent died), but I noticed
(by chance) that there is an updated (18/03/2014) workaround, and after manually
applying it (no new oVirt packages have been published yet) the Engine VM
started fine.

Now I'm battling with an (apparently) GlusterFS (3.5.0beta4) bug that makes the 
(NFS based, but Gluster-provided) Engine storage domain shutdown by itself 
after a while (causing Engine VM to die).

Many thanks again for your support,
Giuseppe

PS: would you suggest a complete reinstall with GlusterFS 3.4.x stable instead? ;-)

PS2: sorry for top-posting (but Hotmail keeps failing on proper quoting...)

From: giuseppe.rag...@hotmail.com
To: d...@redhat.com
Date: Sun, 16 Mar 2014 15:14:07 +0100
CC: users@ovirt.org
Subject: Re: [Users] Self-hosted-engine setup error





Date: Sun, 16 Mar 2014 05:14:39 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Self-hosted-engine setup error

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: Yedidyah Bar David d...@redhat.com
Cc: users@ovirt.org
Sent: Saturday, March 15, 2014 2:15:18 AM
Subject: RE: [Users] Self-hosted-engine setup error




Hi Didi,

Date: Thu, 13 Mar 2014 02:46:50 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Self-hosted-engine setup error

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: users@ovirt.org
Sent: Thursday, March 13, 2014 2:38:42 AM
Subject: [Users] Self-hosted-engine setup error

Hi all,
while attempting a from-scratch self-hosted-engine installation on CentOS 6.5 
(also freshly reinstalled from scratch) on a physical node (oVirt 3.4.0_pre + 
GlusterFS 3.5.0beta4; NFS storage for engine VM), the process fails almost 
immediately with:

[root@cluster1 ~]# ovirt-hosted-engine-setup 
--config-append=/root/ovhe-setup-answers.conf
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as hypervisor and 
create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: ['/root/ovhe-setup-answers.conf']
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log
  Version: otopi-1.2.0_rc3 (otopi-1.2.0-0.9.rc3.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Generating VDSM certificates
[ ERROR ] Failed to execute stage 'Environment setup': [Errno 2] No such file 
or directory: '/etc/pki/libvirt/clientcert.pem'
I already got another such report yesterday - seems like a bug in the fix for
https://bugzilla.redhat.com/show_bug.cgi?id=1034634 . I hope to push a fix later today.

I look forward to having the fix pushed/merged into actual packages.


[ INFO  ] Stage: Clean up
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

The /root/ovhe-setup-answers.conf has been saved from a previous installation 
(before reinstalling) and only minimally edited (removed some lines with UUIDs 
etc.).

The /etc/pki/libvirt dir is completely missing on both nodes; last time I tried 
the whole setup I do not recall of having such problems, but maybe something 
was different then.

The generated 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log 
has been saved as:

http://pastebin.com/ezAJETBN

I hope to be able to progress further to test the whole 2-nodes setup (second 
node freshly reinstalled too and already up with GlusterFS and waiting to be 
added to oVirt cluster) and datacenter configuration.

Many thanks in advance for any suggestions/help,
For now, you can simply:
mkdir /etc/pki/libvirt
This should be enough.

The workaround works: the self-hosted-engine installation proceeds now.

Thanks for the report!
-- Didi

Many thanks for your kind and prompt assistance,
Giuseppe

  

Thanks for the report. The workaround is probably not enough, depending on what
you are trying to do. 'mkdir /etc/pki/libvirt/private' is needed too. Without
it, the code that copies a key there will create a file named 'private' instead
of copying it into a directory named 'private'.
Fix [1] was merged to all branches.
[1] http://gerrit.ovirt.org/25747
Best regards,
-- Didi
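(The two workaround steps quoted above can be combined into a single command -
a sketch; mkdir -p creates the parent directory as well:)

mkdir -p /etc/pki/libvirt/private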

Hi Didi,
the workaround seemed actually to be enough to make the self-hosted-engine 
setup go through up to the end without any user-visible error, but it left me 
with a non-running Engine VM afterwards (basically it did not restart up

Re: [Users] Self-hosted-engine setup error

2014-03-16 Thread Giuseppe Ragusa

Date: Sun, 16 Mar 2014 05:14:39 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Self-hosted-engine setup error

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: Yedidyah Bar David d...@redhat.com
Cc: users@ovirt.org
Sent: Saturday, March 15, 2014 2:15:18 AM
Subject: RE: [Users] Self-hosted-engine setup error




Hi Didi,

Date: Thu, 13 Mar 2014 02:46:50 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Self-hosted-engine setup error

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: users@ovirt.org
Sent: Thursday, March 13, 2014 2:38:42 AM
Subject: [Users] Self-hosted-engine setup error

Hi all,
while attempting a from-scratch self-hosted-engine installation on CentOS 6.5 
(also freshly reinstalled from scratch) on a physical node (oVirt 3.4.0_pre + 
GlusterFS 3.5.0beta4; NFS storage for engine VM), the process fails almost 
immediately with:

[root@cluster1 ~]# ovirt-hosted-engine-setup 
--config-append=/root/ovhe-setup-answers.conf
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as hypervisor and 
create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: ['/root/ovhe-setup-answers.conf']
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log
  Version: otopi-1.2.0_rc3 (otopi-1.2.0-0.9.rc3.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Generating VDSM certificates
[ ERROR ] Failed to execute stage 'Environment setup': [Errno 2] No such file 
or directory: '/etc/pki/libvirt/clientcert.pem'
I already got another such report yesterday - seems like a bug in the fix for
https://bugzilla.redhat.com/show_bug.cgi?id=1034634 . I hope to push a fix later today.

I look forward to having the fix pushed/merged into actual packages.


[ INFO  ] Stage: Clean up
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

The /root/ovhe-setup-answers.conf has been saved from a previous installation 
(before reinstalling) and only minimally edited (removed some lines with UUIDs 
etc.).

The /etc/pki/libvirt dir is completely missing on both nodes; last time I tried 
the whole setup I do not recall of having such problems, but maybe something 
was different then.

The generated 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log 
has been saved as:

http://pastebin.com/ezAJETBN

I hope to be able to progress further to test the whole 2-nodes setup (second 
node freshly reinstalled too and already up with GlusterFS and waiting to be 
added to oVirt cluster) and datacenter configuration.

Many thanks in advance for any suggestions/help,
For now, you can simply:
mkdir /etc/pki/libvirt
This should be enough.

The workaround works: the self-hosted-engine installation proceeds now.

Thanks for the report!
-- Didi

Many thanks for your kind and prompt assistance,
Giuseppe

  

Thanks for the report. The workaround is probably not enough, depending on what
you are trying to do. 'mkdir /etc/pki/libvirt/private' is needed too. Without
it, the code that copies a key there will create a file named 'private' instead
of copying it into a directory named 'private'.
Fix [1] was merged to all branches.
[1] http://gerrit.ovirt.org/25747
Best regards,
-- Didi

Hi Didi,
the workaround actually seemed to be enough to make the self-hosted-engine
setup go through to the end without any user-visible error, but it left me
with a non-running Engine VM afterwards (basically it did not start up again
automatically under HA protection).

I collected all the logs and reported it in a separate message with subject
"Self-hosted engine setup ok but no engine vm running", but got no comments
yet.

If I get no suggestions I will try to perform some corrective actions based
on my understanding of the problems at hand, but I did not want to corrupt
the exact state that could help in debugging (starting again from scratch with
a full first-node reinstallation is a somewhat lengthy process since I have no
local repo mirrors).

Many thanks again,
Giuseppe


  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Post-Install Engine VM Changes Feasible?

2014-03-15 Thread Giuseppe Ragusa
Hi Joshua,

Date: Sat, 15 Mar 2014 02:32:59 -0400
From: j...@wrale.com
To: users@ovirt.org
Subject: [Users] Post-Install Engine VM Changes Feasible?

Hi,

I'm in the process of installing 3.4 RC(2?) on Fedora 19.  I'm using hosted 
engine, introspective GlusterFS+keepalived+NFS ala [1], across six nodes.

I have a layered networking topology ((V)LANs for public, internal, storage, 
compute and ipmi).  I am comfortable doing the bridging for each interface 
myself via /etc/sysconfig/network-scripts/ifcfg-*.  


Here's my desired topology: http://www.asciiflow.com/#Draw6325992559863447154

Here's my keepalived setup: 
https://gist.github.com/josh-at-knoesis/98618a16418101225726


I'm writing a lot of documentation of the many steps I'm taking.  I hope to 
eventually release a distributed introspective all-in-one (including 
distributed storage) guide.  


Looking at vm.conf.in, it looks like I'd by default end up with one interface 
on my engine, probably on my internal VLAN, as that's where I'd like the 
control traffic to flow.  I definitely could do NAT, but I'd be most happy to 
see the engine have a presence on all of the LANs, if for no other reason than 
because I want to send backups directly over the storage VLAN.  


I'll cut to it:  I believe I could successfully alter the vdsm template 
(vm.conf.in) to give me the extra interfaces I require.  It hit me, however, 
that I could just take the defaults for the initial install.  Later, I think 
I'll be able to come back with virsh and make my changes to the gracefully 
disabled VM.  Is this true? 


[1] http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/

Thanks,
Joshua



I started from the same reference[1] and ended up statically modifying 
vm.conf.in before launching setup, like this:

cp -a /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in 
/usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in.orig
cat << EOM > /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in
vmId=@VM_UUID@
memSize=@MEM_SIZE@
display=@CONSOLE_TYPE@
devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, 
type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00,
 slot:0x06, domain:0x, type:pci, 
function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
devices={device:scsi,model:virtio-scsi,type:controller}
devices={index:4,nicModel:pv,macAddr:@MAC_ADDR@,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00,
 slot:0x03, domain:0x, type:pci, 
function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={index:8,nicModel:pv,macAddr:02:16:3e:4f:c4:b0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},address:{bus:0x00,
 slot:0x09, domain:0x, type:pci, 
function:0x0},device:bridge,type:interface@BOOT_PXE@}
devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
vmName=@NAME@
spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
smp=@VCPUS@
cpuType=@CPU_TYPE@
emulatedMachine=@EMULATED_MACHINE@
EOM

I simply added a second nic (with a fixed MAC address from the
locally-administered pool, since I didn't know how to auto-generate one) and
added an index for nics too (mimicking the storage devices setup already
present).
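(For what it's worth, a locally-administered unicast MAC address of the kind
mentioned above could be auto-generated with a bash one-liner along these
lines - a sketch:)

# 02: prefix = locally administered, unicast
printf '02:%02x:%02x:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))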

My network setup is much simpler than yours: the ovirtmgmt bridge is on an
isolated oVirt-management-only network without a gateway; my actual LAN, with
gateway and Internet access (for package updates/installation), is connected to
the lan bridge; and the SAN/migration LAN is a further (not bridged) 10 Gb/s
isolated network for which I do not expect to need Engine/VM reachability (so
no third interface for the Engine), since all actions should be performed from
the Engine but only through the vdsm hosts (I use a split-DNS setup by means of
carefully crafted hosts files on the Engine and the vdsm hosts).
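(A sketch of the kind of split-DNS hosts entries described; all names and
addresses here are invented for illustration:)

# /etc/hosts on the Engine VM and on each vdsm host (hypothetical values)
10.10.10.11  cluster1.mgmt.example.local  cluster1
10.10.10.12  cluster2.mgmt.example.local  cluster2
10.10.10.10  engine.mgmt.example.local    engine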

I can confirm that the engine vm gets created as expected and that network 
connectivity works.

Unfortunately I cannot validate the whole design yet, since I'm still debugging 
HA-agent problems that prevent a reliable Engine/SD startup.

Hope it helps.

Greetings,
Giuseppe

  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Self-hosted-engine setup error

2014-03-14 Thread Giuseppe Ragusa
Hi Didi,

Date: Thu, 13 Mar 2014 02:46:50 -0400
From: d...@redhat.com
To: giuseppe.rag...@hotmail.com
CC: users@ovirt.org
Subject: Re: [Users] Self-hosted-engine setup error

From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: users@ovirt.org
Sent: Thursday, March 13, 2014 2:38:42 AM
Subject: [Users] Self-hosted-engine setup error

Hi all,
while attempting a from-scratch self-hosted-engine installation on CentOS 6.5 
(also freshly reinstalled from scratch) on a physical node (oVirt 3.4.0_pre + 
GlusterFS 3.5.0beta4; NFS storage for engine VM), the process fails almost 
immediately with:

[root@cluster1 ~]# ovirt-hosted-engine-setup 
--config-append=/root/ovhe-setup-answers.conf
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as hypervisor and 
create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: ['/root/ovhe-setup-answers.conf']
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log
  Version: otopi-1.2.0_rc3 (otopi-1.2.0-0.9.rc3.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Generating VDSM certificates
[ ERROR ] Failed to execute stage 'Environment setup': [Errno 2] No such file 
or directory: '/etc/pki/libvirt/clientcert.pem'
I already got another such report yesterday - seems like a bug in the fix for
https://bugzilla.redhat.com/show_bug.cgi?id=1034634 . I hope to push a fix later today.

I look forward to having the fix pushed/merged into actual packages.


[ INFO  ] Stage: Clean up
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

The /root/ovhe-setup-answers.conf has been saved from a previous installation 
(before reinstalling) and only minimally edited (removed some lines with UUIDs 
etc.).

The /etc/pki/libvirt dir is completely missing on both nodes; last time I tried 
the whole setup I do not recall of having such problems, but maybe something 
was different then.

The generated 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log 
has been saved as:

http://pastebin.com/ezAJETBN

I hope to be able to progress further to test the whole 2-nodes setup (second 
node freshly reinstalled too and already up with GlusterFS and waiting to be 
added to oVirt cluster) and datacenter configuration.

Many thanks in advance for any suggestions/help,
For now, you can simply:
mkdir /etc/pki/libvirt
This should be enough.

The workaround works: the self-hosted-engine installation proceeds now.

Thanks for the report!
-- Didi

Many thanks for your kind and prompt assistance,
Giuseppe

  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Self-hosted-engine setup error

2014-03-12 Thread Giuseppe Ragusa
Hi all,
while attempting a from-scratch self-hosted-engine installation on CentOS 6.5 
(also freshly reinstalled from scratch) on a physical node (oVirt 3.4.0_pre + 
GlusterFS 3.5.0beta4; NFS storage for engine VM), the process fails almost 
immediately with:

[root@cluster1 ~]# ovirt-hosted-engine-setup 
--config-append=/root/ovhe-setup-answers.conf
[ INFO  ] Stage: Initializing
  Continuing will configure this host for serving as hypervisor and 
create a VM where you have to install oVirt Engine afterwards.
  Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO  ] Generating a temporary VNC password.
[ INFO  ] Stage: Environment setup
  Configuration files: ['/root/ovhe-setup-answers.conf']
  Log file: 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log
  Version: otopi-1.2.0_rc3 (otopi-1.2.0-0.9.rc3.el6)
[ INFO  ] Hardware supports virtualization
[ INFO  ] Bridge ovirtmgmt already created
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Generating VDSM certificates
[ ERROR ] Failed to execute stage 'Environment setup': [Errno 2] No such file 
or directory: '/etc/pki/libvirt/clientcert.pem'
[ INFO  ] Stage: Clean up
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination

The /root/ovhe-setup-answers.conf has been saved from a previous installation 
(before reinstalling) and only minimally edited (removed some lines with UUIDs 
etc.).

The /etc/pki/libvirt dir is completely missing on both nodes; last time I tried 
the whole setup I do not recall of having such problems, but maybe something 
was different then.

The generated 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20140313010526.log 
has been saved as:

http://pastebin.com/ezAJETBN

I hope to be able to progress further to test the whole 2-nodes setup (second 
node freshly reinstalled too and already up with GlusterFS and waiting to be 
added to oVirt cluster) and datacenter configuration.

Many thanks in advance for any suggestions/help,
Giuseppe

  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] hosted engine help

2014-03-11 Thread Giuseppe Ragusa
 Date: Tue, 11 Mar 2014 15:16:36 +0100
 From: sbona...@redhat.com
 To: giuseppe.rag...@hotmail.com; jbro...@redhat.com; msi...@redhat.com
 CC: users@ovirt.org; fsimo...@redhat.com; gpadg...@redhat.com
 Subject: Re: [Users] hosted engine help
 
 Il 10/03/2014 22:32, Giuseppe Ragusa ha scritto:
  Hi all,
  
  Date: Mon, 10 Mar 2014 12:56:19 -0400
  From: jbro...@redhat.com
  To: msi...@redhat.com
  CC: users@ovirt.org
  Subject: Re: [Users] hosted engine help
 
 
 
  - Original Message -
   From: Martin Sivak msi...@redhat.com
   To: Dan Kenigsberg dan...@redhat.com
   Cc: users@ovirt.org
   Sent: Saturday, March 8, 2014 11:52:59 PM
   Subject: Re: [Users] hosted engine help
  
   Hi Jason,
  
    can you please attach the full logs? We had a very similar issue before;
    we need to see if it is the same or not.
 
  I may have to recreate it -- I switched back to an all in one engine after 
  my
  setup started refusing to run the engine at all. It's no fun losing your 
  engine!
 
  This was a migrated-from-standalone setup, maybe that caused additional 
  wrinkles...
 
  Jason
 
  
   Thanks
  
  I experienced the exact same symptoms as Jason on a from-scratch 
  installation on two physical nodes with CentOS 6.5 (fully up-to-date) using 
  oVirt
  3.4.0_pre (latest test-day release) and GlusterFS 3.5.0beta3 (with 
  Gluster-provided NFS as storage for the self-hosted engine VM only).
 
 Using GlusterFS with hosted-engine storage is not supported and not 
 recommended.
 HA daemon may not work properly there.

If it is unsupported (and particularly not recommended) even with the
interposed NFS (the native Gluster-provided NFSv3 export of a volume), then
what is the recommended way to set up a fault-tolerant, load-balanced 2-node
oVirt cluster (without an external dedicated SAN/NAS)?

  I roughly followed the guide from Andrew Lau:
  
  http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
  
  with some variations due to newer packages (resolved bugs) and different 
  hardware setup (no VLANs in my setup: physically separated networks; custom
  second nic added to Engine VM template before deploying etc.)
  
  The self-hosted installation on first node + Engine VM (configured for 
  managing both oVirt and the storage; Datacenter default set to NFS because 
  no
  GlusterFS offered) went apparently smooth, but the HA-agent failed to start 
  at the very end (same errors in logs as Jason: the storage domain seems
  missing) and I was only able to start it all manually with:
  
  hosted-engine --connect-storage
  hosted-engine --start-pool
 
 The above commands are used for development and shouldn't be used for 
 starting the engine.

Directly starting the engine (with the command below) failed because of storage
unavailability, so I used the above trick as a last resort, to at least
prove that the engine was able to start and had not been somehow destroyed
or lost in the process (but I do understand that it is an extreme, debug-only
action).

  hosted-engine --vm-start
  
  then the Engine came up and I could use it, I even registered the second 
  node (same final error in HA-agent) and tried to add GlusterFS storage
  domains for further VMs and ISOs (by the way: the original NFS-GlusterFS 
  domain for Engine VM only is not present inside the Engine web UI) but it
  always failed activating the domains (they remain Inactive).
  
  Furthermore the engine gets killed some time after starting (from 3 up to 
  11 hours later) and the only way to get it back is repeating the above 
  commands.
 
 Need logs for this.

I will try to reproduce it all, but I recall that in the libvirt logs
(HostedEngine.log) there was always a clear indication of the PID that killed
the Engine VM, and each time it belonged to an instance of sanlock.
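(A possible way to confirm that kind of sanlock involvement - a sketch; the log
paths are the usual defaults and may differ:)

sanlock client status                 # current lockspaces and resources
grep -i sanlock /var/log/messages     # look for lease timeouts / kill requests
less /var/log/sanlock.log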

  I always managed GlusterFS natively (not through oVirt) from the 
  commandline and verified that the NFS-exported Engine-VM-only volume gets
  replicated, but I obviously failed to try migration because the HA part 
  results inactive and oVirt refuse to migrate the Engine.
  
  Since I tried many times, with variations and further manual actions 
  between (like trying to manually mount the NFS Engine domain, restarting the
  HA-agent only etc.), my logs are cluttered, so I should start from 
  scratch again and pack up all logs in one swipe.
 
 +1

;-)

  Tell me what I should capture and at which points in the whole process and 
  I will try to follow up as soon as possible.
 
 What:
 hosted-engine-setup, hosted-engine-ha, vdsm, libvirt, sanlock from the 
 physical hosts and engine and server logs from the hosted engine VM.
 
 When:
 As soon as you see an error.

If the setup design (wholly GlusterFS based) is somewhat flawed, please point 
me to some hint/docs/guide for the right way of setting it up on 2 standalone 
physical nodes, so as not to waste your time in chasing defects in something 
that is not supposed to be working anyway.

I will follow your

Re: [Users] hosted engine help

2014-03-10 Thread Giuseppe Ragusa
Hi all,
 Date: Mon, 10 Mar 2014 12:56:19 -0400
 From: jbro...@redhat.com
 To: msi...@redhat.com
 CC: users@ovirt.org
 Subject: Re: [Users] hosted engine help
 
 
 
 - Original Message -
  From: Martin Sivak msi...@redhat.com
  To: Dan Kenigsberg dan...@redhat.com
  Cc: users@ovirt.org
  Sent: Saturday, March 8, 2014 11:52:59 PM
  Subject: Re: [Users] hosted engine help
  
  Hi Jason,
  
   can you please attach the full logs? We had a very similar issue before;
   we need to see if it is the same or not.
 
 I may have to recreate it -- I switched back to an all in one engine after my
 setup started refusing to run the engine at all. It's no fun losing your 
 engine!
 
 This was a migrated-from-standalone setup, maybe that caused additional 
 wrinkles...
 
 Jason
 
  
  Thanks

I experienced the exact same symptoms as Jason on a from-scratch installation 
on two physical nodes with CentOS 6.5 (fully up-to-date) using oVirt 3.4.0_pre 
(latest test-day release) and GlusterFS 3.5.0beta3 (with Gluster-provided NFS 
as storage for the self-hosted engine VM only).
I roughly followed the guide from Andrew Lau:
http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/
with some variations due to newer packages (resolved bugs) and different 
hardware setup (no VLANs in my setup: physically separated networks; custom 
second nic added to Engine VM template before deploying etc.)
The self-hosted installation on the first node + Engine VM (configured for
managing both oVirt and the storage; Datacenter default set to NFS because no
GlusterFS was offered) apparently went smoothly, but the HA-agent failed to
start at the very end (same errors in the logs as Jason: the storage domain
seems missing) and I was only able to start it all manually with:
hosted-engine --connect-storage
hosted-engine --start-pool
hosted-engine --vm-start
then the Engine came up and I could use it. I even registered the second node
(same final error in the HA-agent) and tried to add GlusterFS storage domains
for further VMs and ISOs (by the way: the original NFS-on-GlusterFS domain for
the Engine VM only is not present inside the Engine web UI), but it always
failed to activate the domains (they remain Inactive).
Furthermore the engine gets killed some time after starting (from 3 up to 11 
hours later) and the only way to get it back is repeating the above commands.
I always managed GlusterFS natively (not through oVirt) from the command line
and verified that the NFS-exported Engine-VM-only volume gets replicated, but I
obviously could not try migration because the HA part remains inactive and
oVirt refuses to migrate the Engine.
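(Hypothetical checks of the kind described, assuming the Engine volume is
named 'engine':)

gluster volume info engine          # replica count and brick status
gluster volume heal engine info     # files pending self-heal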
Since I tried many times, with variations and further manual actions between 
(like trying to manually mount the NFS Engine domain, restarting the HA-agent 
only etc.), my logs are cluttered, so I should start from scratch again and 
pack up all logs in one swipe.
Tell me what I should capture and at which points in the whole process and I 
will try to follow up as soon as possible.
Many thanks,
Giuseppe
  --
  Martin Sivák
  msi...@redhat.com
  Red Hat Czech
  RHEV-M SLA / Brno, CZ
  
  - Original Message -
   On Fri, Mar 07, 2014 at 10:17:43AM +0100, Sandro Bonazzola wrote:
Il 07/03/2014 01:10, Jason Brooks ha scritto:
 Hey everyone, I've been testing out oVirt 3.4 w/ hosted engine, and
 while I've managed to bring the engine up, I've only been able to do 
 it
 manually, using hosted-engine --vm-start.
 
 The ovirt-ha-agent service fails reliably for me, erroring out with
 RequestError: Request failed: success.
 
 I've pasted error passages from the ha agent and vdsm logs below.
 
 Any pointers?

looks like a VDSM bug, Dan?
   
   Why? The exception is raised from deep inside the ovirt_hosted_engine_ha
   code.
  ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users