[ovirt-users] Re: upgrade issue

2020-01-16 Thread Yuval Turgeman
Hi Jingjie,

You're mixing a normal CentOS host with an ovirt-node installation.  If
you'd like to use oVirt with CentOS nodes, you simply need to run yum
update.  The ovirt-node-ng-image-update package is an update rpm only for a
host that was installed with ovirt-node-ng-installer.iso.
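
For example, the two flows look roughly like this (package versions and
exact steps are illustrative, not exact):

# plain CentOS host:
yum update

# oVirt Node (ovirt-node-ng) host:
yum install ovirt-node-ng-image-update
reboot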

Thanks,
Yuval

On Wed, Jan 15, 2020 at 11:43 PM Jingjie Jiang 
wrote:

> Hi Paul,
>
> Thanks for your reply.
>
> I did install ovirt-release43 before I installed the
> ovirt-node-ng-image-update package.
>
>
> Here is the brief procedure:
>
> 1. Install the 4.2.8 ovirt host on CentOS7.6 by adding the node from
> AdminPortal.
>
> 2. Upgrade ovirt 4.2.8 engine to 4.3.6.
>
> 3. Set the host to maintenance from AdminPortal.
>
> 4. Yum install ovirt-release43 rpm
>
> # yum install
> http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
>
> 5. Yum install the node update package
>
># yum install
> https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.6-1.el7.noarch.rpm
>
> 6. reboot
>
>
> vdsm did not get updated.
>
>
> -Jingjie
>
>
> On 1/15/20 6:14 AM, Staniforth, Paul wrote:
>
> Hello Jingjie,
> I think you need to install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
> 
> so yum updates to the 4.3 release; otherwise it will stay on the 4.2 release
> (4.2.8 being the last 4.2 version).
>
>
> Regards,
> Paul S.
> --
> *From:* Jingjie Jiang 
> 
> *Sent:* 14 January 2020 15:30
> *To:* Parth Dhanjal  
> *Cc:* users  
> *Subject:* [ovirt-users] Re: upgrade issue
>
>
>
> Hi Parth,
>
> Thanks for your reply.
>
> Please check my inline comments
>
> -Jingjie
> On 1/14/20 2:28 AM, Parth Dhanjal wrote:
>
> Hello Jingjie!
>
> You can follow
> https://www.ovirt.org/documentation/upgrade-guide/chap-Upgrading_from_4.1_to_oVirt_4.2.html
> 
>
> I tried the above link before and it worked.
>
> But I'd like to know how the upgrade with the ovirt-node-ng-image-update
> package works.
>
> Any suggestions about the issue I reported in my previous email?
>
> Also, about the link you pointed to: the "imgbase check" command crashed when I
> ran it from the ovirt host based on CentOS:
>
> # imgbase check
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/imgbased/__main__.py", line 53,
> in <module>
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/imgbased/__init__.py", line 82,
> in CliApplication
> app.hooks.emit("post-arg-parse", args)
>   File "/usr/lib/python2.7/site-packages/imgbased/hooks.py", line 120, in
> emit
> cb(self.context, *args)
>   File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line
> 185, in post_argparse
> run_check(app)
>   File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line
> 225, in run_check
> status = Health(app).status()
>   File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line
> 358, in status
> status.results.append(group().run())
>   File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line
> 385, in check_thin
> pool = self.app.imgbase._thinpool()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 120,
> in _thinpool
> return LVM.Thinpool.from_tag(self.thinpool_tag)
>   File "/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 227, in
> from_tag
> assert len(lvs) == 1
> AssertionError
>
>
>
> Under this link you can find a tab to upgrade the hosts. Instead of
> https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
> 

[ovirt-users] Re: Upgrading from oVirt-Node 4.2.8 to latest

2019-10-10 Thread Yuval Turgeman
Sven, assuming you yum installed [1], can you share your imgbased.log file
and the output of `imgbase layout`?

[1]
https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.6-1.el7.noarch.rpm
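
For reference, the layout output should look roughly like this (bases with
their layers indented under them; the exact names depend on what is installed):

ovirt-node-ng-4.2.8-0.20190121.0
 +- ovirt-node-ng-4.2.8-0.20190121.0+1
ovirt-node-ng-4.3.6-0.20191010.0
 +- ovirt-node-ng-4.3.6-0.20191010.0+1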

On Thu, Oct 10, 2019 at 11:17 AM Sandro Bonazzola 
wrote:

>
>
> On Thu, Oct 10, 2019 at 09:35 Sven Achtelik <
> sven.achte...@eps.aero> wrote:
>
>> Running the command from the docs works without any error messages in the
>> output, but in the GUI it's still 4.2.8 after rebooting. Is there more
>> documentation on how to troubleshoot the cause of this?
>>
>
> It's probably a bug in the update flow that misses updating the line with the
> version to be displayed, but the node should be correctly updated to 4.3.6
> after completion and reboot.
> Adding +Nir Levy  and +Yuval Turgeman
>  for investigating on this.
>
>
>
>
>>
>>
>> *From:* Sandro Bonazzola [mailto:sbona...@redhat.com]
>> *Sent:* Thursday, October 10, 2019 08:36
>> *To:* Sven Achtelik 
>> *Cc:* users 
>> *Subject:* Re: [ovirt-users] Upgrading from oVirt-Node 4.2.8 to latest
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Oct 9, 2019 at 10:51 Sven Achtelik <
>> sven.achte...@eps.aero> wrote:
>>
>> Hi All,
>>
>>
>>
>> is there a way to go from 4.2.8 on oVirt Node directly to the
>> latest version, without reinstalling the node from the ISO file? I wasn't
>> able to find anything in the documentation on how to get this done.
>>
>>
>>
>> Please see
>> https://ovirt.org/documentation/upgrade-guide/appe-Manually_Updating_Hosts.html
>>
>>
>>
>>
>>
>> Thanks, Sven
>>
>>
>>
>>
>>
>> --
>>
>> *Sandro Bonazzola*
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA <https://www.redhat.com/>
>>
>> sbona...@redhat.com
>>
>>
>> *Red Hat respects your work life balance. Therefore there is no need to
>> answer this email out of your office hours.
>> <https://mojo.redhat.com/docs/DOC-1199578>*
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://www.redhat.com/>*Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.
> <https://mojo.redhat.com/docs/DOC-1199578>*
>


[ovirt-users] Re: ovirt node install

2019-08-19 Thread Yuval Turgeman
Looks like the fix missed the release; it will be included in the next
version.  For now, you can either install the latest nightly ISO,
or manually install the latest ovirt-node-ng-nodectl rpm on your existing
installation.
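
A rough sketch of the manual route (the exact package version depends on
what is published at the time):

# yum update ovirt-node-ng-nodectl
# nodectl info    # should no longer traceback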

On Monday, August 19, 2019,  wrote:

> I concur with Paul, I got the same from fresh installs of the oVirt node
> image created August 5th on all instances.
>


[ovirt-users] Re: ovirt node install

2019-08-19 Thread Yuval Turgeman
You just hit [1] - can you try this with the latest 4.3.5?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1728998

On Tue, Aug 13, 2019 at 6:57 PM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello,
>
> on the latest version of the oVirt-node install, running
> nodectl info gives the error:
>
>
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in
> <module>
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in
> info
> Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 46, in
> __init__
> self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> self._get_bootloader_info()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 62, in
> _get_bootloader_info
> bootinfo["entries"][k] = v.__dict__
> AttributeError: 'list' object has no attribute '__dict__'
>
>
> Also, what is the correct way to update from oVirt Node 4.2 to 4.3?
>
> I used
>
> yum install
> https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-image-update-4.3.4-1.el7.noarch.rpm
>
> I then did yum erase ovirt-release42 and rm  /etc/yum.repos.d/ovirt-4.2*
>
> Regards,
>Paul S.


[ovirt-users] Re: ovirt-engine-appliance ova

2019-07-24 Thread Yuval Turgeman
What system are you running `make` on?  There's some logic before that
iirc (like which repos to install from, etc.).  Basically,
ovirt-appliance/automation/build-artifacts.sh is the place to go.
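
Something like this should reproduce what the CI does, assuming a CentOS 7
build machine with the required build tools in place:

$ git clone https://github.com/ovirt/ovirt-appliance.git
$ cd ovirt-appliance
$ ./automation/build-artifacts.sh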


On Tuesday, July 23, 2019, Yedidyah Bar David  wrote:

> On Mon, Jul 22, 2019 at 11:53 PM Jingjie Jiang 
> wrote:
>
>> Hi David,
>>
>
> (Actually it's "Yedidyah" or "Didi")
>
>
>> Thanks for your info.
>>
>> Please check my reply inline.
>>
>>
>> -Jingjie
>> On 7/16/19 3:55 AM, Yedidyah Bar David wrote:
>>
>> On Thu, Jul 11, 2019 at 10:46 PM  
>>  wrote:
>>
>> Hi,
>> Can someone tell me how to generate the ovirt-engine-appliance OVA file in
>> ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
>>
>> You might want to check the project's source code:
>> https://github.com/ovirt/ovirt-appliance/
>>
>> Or study the logs of a CI build of it:
>> https://jenkins.ovirt.org/job/ovirt-appliance_master_build-artifacts-el7-x86_64/
>>
>> I never tried building it myself locally, though.
>>
>> I tried to build after checking out the source code from
>> https://github.com/ovirt/ovirt-appliance/,
>>
>> but the build failed.
>>
>> # make
>> livemedia-creator --make-disk --ram=2048 --vcpus=4 --iso=boot.iso
>> --ks=ovirt-engine-appliance.ks --qcow2
>> --image-name=ovirt-engine-appliance.qcow2
>> 2019-07-22 12:34:00,095: livemedia-creator 19.7.19-1
>> 2019-07-22 12:34:00,154: disk_size = 51GiB
>> 2019-07-22 12:34:00,154: disk_img =
>> /var/tmp/ovirt-engine-appliance.qcow2
>> 2019-07-22 12:34:00,154: install_log =
>> /root/ovirt/ovirt-appliance/engine-appliance/virt-install.log
>> mount: /dev/loop0 is write-protected, mounting read-only
>> Formatting '/var/tmp/ovirt-engine-appliance.qcow2', fmt=qcow2
>> size=54760833024 encryption=off cluster_size=65536 lazy_refcounts=off
>> 2019-07-22 12:34:10,195: Running virt-install.
>>
>> Starting install...
>> Retrieving file vmlinuz...  | 6.3 MB
>> 00:00
>> Retrieving file initrd.img...   |  50 MB
>> 00:00
>> Domain installation still in progress. You can reconnect to
>> the console to complete the installation process.
>> ..
>> 2019-07-22 12:35:15,281: Installation error detected. See logfile.
>> 2019-07-22 12:35:15,283: Shutting down
>> LiveOS-27f2dc2b-4b30-4eb1-adcd-b5ab50fdbf55
>> Domain LiveOS-27f2dc2b-4b30-4eb1-adcd-b5ab50fdbf55 destroyed
>>
>> Domain LiveOS-27f2dc2b-4b30-4eb1-adcd-b5ab50fdbf55 has been undefined
>>
>> 2019-07-22 12:35:15,599: unmounting the iso
>> 2019-07-22 12:35:20,612: Install failed: virt_install failed
>> 2019-07-22 12:35:20,613: Removing bad disk image
>> 2019-07-22 12:35:20,613: virt_install failed
>> make: *** [ovirt-engine-appliance.qcow2] Error 1
>>
>> In virt-install.log I found the following error:
>>
>> 16:35:07,472 ERR anaconda:CmdlineError: The following mandatory spokes
>> are not completed:#012Installation source#012Software selection
>> 16:35:07,472 DEBUG anaconda:running handleException
>> 16:35:07,473 CRIT anaconda:Traceback (most recent call last):#012#012
>> File
>> "/usr/lib64/python2.7/site-packages/pyanaconda/ui/tui/simpleline/base.py",
>> line 352, in _mainloop#012prompt =
>> last_screen.prompt(self._screens[-1][1])#012#012  File
>> "/usr/lib64/python2.7/site-packages/pyanaconda/ui/tui/hubs/summary.py",
>> line 107, in prompt#012raise CmdlineError(errtxt)#012#012CmdlineError:
>> The following mandatory spokes are not completed:#012Installation
>> source#012Software selection
>> 16:35:08,020 DEBUG anaconda:Gtk cannot be initialized
>> 16:35:08,020 DEBUG anaconda:In the main thread, running exception
>> handler
>> 16:35:08,386 NOTICE multipathd:zram0: add path (uevent)
>> 16:35:08,386 NOTICE multipathd:zram0: spurious uevent, path already in
>> pathvec
>> 16:35:08,386 NOTICE multipathd:zram0: HDIO_GETGEO failed with 25
>> 16:35:08,386 ERR multipathd:zram0: failed to get path uid
>> 16:35:08,388 ERR multipathd:uevent trigger error
>>
>> Can you help me to fix the issue?
>>
>
> Sorry, I never tried to build it myself, nor do I have experience with
> livemedia-creator. As I wrote above, I suggest comparing your
> output/result with that of the oVirt CI. Otherwise, I'd probably start
> debugging by searching the net for the error messages you received.
>
> Good luck and best regards,
>
>
>>
>> I tried to import the ovirt-engine-appliance
>> OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I
>> got the following error:
>> Failed to load VM configuration from OVA file: 
>> /var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova
>>
>> No idea why this failed.
>>
>>
>> I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova has more than
>> CentOS 7.6.
>>
>> It has CentOS + oVirt engine.
>>
>> The only major use for it is by hosted-engine --deploy. In theory you
>> can try importing it elsewhere, but I do not recall reports about
>> people that tried this and whether it works.

[ovirt-users] Re: Hosted engine setup: "Failed to configure management network on host Local due to setup networks failure"

2019-06-16 Thread Yuval Turgeman
Hi Edward, you're hitting [1] - it will be included in the next appliance

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1718399


On Monday, June 17, 2019, Edward Berger  wrote:

> The hosted engine is created in two steps: first as a local VM on the host
> with a 192.168.x.x address, then it gets copied over to shared storage and
> gets the real IP address you assigned in the setup wizard.  So that part
> is normal behavior.
>
> I had a recent hosted-engine installation failure with oVirt Node 4.3.4,
> where the local VM was stuck trying to yum install yum-utils but couldn't,
> because it is behind a firewall.  So I ssh'd into the local VM, added a
> proxy line to /etc/yum.conf, kill -HUP'd the stuck process, and manually
> re-ran the yum install command; it was then able to complete the hosted
> engine installation.
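>
> A sketch of that workaround (the proxy address is a placeholder for
> whatever your site uses):
>
> $ ssh root@192.168.122.123
> # echo "proxy=http://proxy.example.com:3128" >> /etc/yum.conf
> # yum install yum-utils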
>
> If that's not the issue, maybe your node's network config is not something
> the installer expects, like a preconfigured bridge when it wants to do the
> bridge configuration itself, or an unsupported bond type...
>
>
> On Sun, Jun 16, 2019 at 12:12 PM  wrote:
>
> Hi,
>
> I've been failing to install hosted-engine on oVirt Node for a long time.
> I'm now trying on a Coffee Lake Xeon-based system, having previously tried
> on Broadwell-E.
>
> Trying the webui or hosted-engine --deploy has a similar result.
> The error in the title occurs when using the webui.  Using hosted-engine
> --deploy shows:
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Check host status]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> host has been set in non_operational status, please check engine logs, fix
> accordingly and re-deploy.\n"}
>
> Despite the failure, the oVirt webui can be browsed on https://:6900,
> but the host has status "unassigned".  The Node webui (https://:9090)
> has the engine VM running, but when I log in to its console, I see its IP is
> 192.168.122.123, not the DHCP-reserved IP address (on our 10.0.8.x
> network), which doesn't seem right.  I suspect some problem with DHCP, but
> I don't know how to fix it.  Any ideas?
>
> vdsm.log shows:
> 2019-06-16 15:06:39,117+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:44,122+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=7f984b0d-9765-457e-ac8e-c5cd0bdf73d2 (api:48)
> 2019-06-16 15:06:44,122+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=7f984b0d-9765-457e-ac8e-c5cd0bdf73d2 (api:54)
> 2019-06-16 15:06:44,122+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:48,258+ INFO  (periodic/1) [vdsm.api] START
> repoStats(domains=()) from=internal, 
> task_id=0526307b-bb37-4eff-94d6-910ac0d64933
> (api:48)
> 2019-06-16 15:06:48,258+ INFO  (periodic/1) [vdsm.api] FINISH
> repoStats return={} from=internal, 
> task_id=0526307b-bb37-4eff-94d6-910ac0d64933
> (api:54)
> 2019-06-16 15:06:49,126+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=0d5b359e-1a4c-4cc0-87a1-4a41e91ba356 (api:48)
> 2019-06-16 15:06:49,126+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=0d5b359e-1a4c-4cc0-87a1-4a41e91ba356 (api:54)
> 2019-06-16 15:06:49,126+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:53,040+ INFO  (jsonrpc/5) [api.host] START
> getAllVmStats() from=::1,50104 (api:48)
> 2019-06-16 15:06:53,041+ INFO  (jsonrpc/5) [api.host] FINISH
> getAllVmStats return={'status': {'message': 'Done', 'code': 0},
> 'statsList': (suppressed)} from=::1,50104 (api:54)
> 2019-06-16 15:06:53,041+ INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
> call Host.getAllVmStats succeeded in 0.01 seconds (__init__:312)
> 2019-06-16 15:06:54,132+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=99c33317-7753-4d24-a10b-b716adcdaf76 (api:48)
> 2019-06-16 15:06:54,132+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=99c33317-7753-4d24-a10b-b716adcdaf76 (api:54)
> 2019-06-16 15:06:54,132+ INFO  (vmrecovery) [vds] recovery: waiting
> for storage pool to go up (clientIF:709)
> 2019-06-16 15:06:59,134+ INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=8f5679a1-8734-491d-b925-7387effe4726 (api:48)
> 2019-06-16 15:06:59,134+ INFO  (vmrecovery) [vdsm.api] FINISH
> getConnectedStoragePoolsList return={'poollist': []} from=internal,
> task_id=8f5679a1-8734-491d-b925-7387effe4726 (api:54)
> 2019-06-16 15:06:59,134+ I

[ovirt-users] Re: Installing oVirt on Physical machine V4.3

2019-02-22 Thread Yuval Turgeman
How are you installing it (ISO, PXE)?

On Fri, Feb 22, 2019, 17:15  wrote:

> Motherboard :   ASUS M5A78L-M/USB3
> UEFI Support:   No
> Processor   :   AMD FX-4300 (3800 MHz)
> :   Virtualization Enabled in BIOS
> NIC :   1. Ethernet
> 2. Wifi Adapter
> HDD :   1TB WD
> RAM :   8GB
>
> I am working on a VDI project & testing on this machine.
>
> I had CentOS 7 earlier & installed oVirt. It was working fine & I was able
> to access the service URL.
>
> Now, I am trying to remove the base OS from the infra & go ahead with
> oVirt-node.


[ovirt-users] Re: update to 4.2.8 fails

2019-02-17 Thread Yuval Turgeman
It's mentioned in the release notes [1]; it's probably worth mentioning in
the general upgrade guide as well.

[1] https://ovirt.org/release/4.3.0/#install--upgrade-from-previous-versions
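
For an oVirt Node host, the flow discussed later in this thread boils down
to something like this (adjust the release rpm for your target version):

# yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
# yum update ovirt-node-ng-image-update
# reboot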


On Fri, Feb 15, 2019 at 1:05 PM Greg Sheremeta  wrote:

>
> On Thu, Feb 14, 2019 at 11:06 PM Vincent Royer 
> wrote:
>
>> Greg,
>>
>> Can I contribute?
>>
>
> I'm so glad you asked -- YES, please do! :)
>
> The docs, along with the rest of the ovirt.org site, are in GitHub, and
> very easily editable. To edit a single page -- in the page footer, click
> "Edit this page on GitHub". That will create a simple Pull Request for
> review.
>
> You can also clone and fork like any other project, if you are more
> comfortable with that. If you want to run the site locally,
> https://github.com/oVirt/ovirt-site/blob/master/CONTRIBUTING.md
>
> Let me know if you have any questions!
>
>
>>
>>
>>
>> On Thu, Feb 14, 2019 at 2:05 PM Greg Sheremeta 
>> wrote:
>>
>>>
>>> On Thu, Feb 14, 2019 at 2:16 PM Vincent Royer 
>>> wrote:
>>>
 Greg,

 The first thing on the list in your link is to check which repos are
 enabled with yum repolist, but it makes no mention of which repos *should* be
 enabled on node-ng, nor what to do about it if you have the wrong ones.
 I've never had an oVirt update go the way it was "supposed" to go, despite
 having a bunch of documentation at hand.  Usually I end up blowing away the
 hosts and starting with a fresh ISO.

 The page you linked makes no mention of the command Edward mentioned
 that got things working for me:

 yum update ovirt-node-ng-image-update

 All the instructions I can find just say to run yum update, but that
 resulted in a bunch of dependency errors for me, on a normal 4.2.6 node
 install that I haven't touched since installation.  Why?  If I'm following
 the instructions, shouldn't it work?


 running this command in the upgrade guide:

 [image: image.png]

 Gives me "This system is not registered with an entitlement server".
  Is that an outdated instruction?  Does it apply to the particular update
 I'm trying to apply?   No way to tell...

>>>
>>> It only applies to RHEL. If you are not on RHEL, you wouldn't run that.
>>> So it definitely needs improvement.
>>>
>>>

 What would really help is a clear separation between commands intended
 for centos/RHEL and commands intended for Node.  As an outsider, it's very
 difficult to know.   Every chapter, where there is any difference in
 procedure, the documents should be split with RHEL on one side and NODE on
 the other.

>>>
>>> +1.
>>>
>>>

 The documentation would also benefit from tags like dates and versions
 that they apply to.  "Valid for Ovirt 4.2.6 to 4.2.8 as of Feb 2, 2019".
 Then the documents should be tested and the dates/versions adjusted, or the
 docs adjusted, as needed.

>>>
>>> Agree.
>>>
>>>

 Ovirt is awesome.

>>>
>>> Agree :)
>>>
>>>
 But the docs are the project's worst enemy.

>>>
>>> I understand your frustrations. We've been trying to improve the
>>> documentation lately, and feedback like yours is crucial. So thank you.
>>> I opened https://github.com/oVirt/ovirt-site/issues/1906 to track this.
>>>
>>> Best wishes,
>>> Greg
>>>
>>>
>>>



 On Thu, Feb 14, 2019 at 3:34 AM Greg Sheremeta 
 wrote:

> Hi,
>
> On Wed, Feb 13, 2019 at 11:18 PM Vincent Royer 
> wrote:
>
>> wow am I crazy or is that not mentioned anywhere that I can find in
>> the docs?
>>
>
>
> https://ovirt.org/documentation/upgrade-guide/appe-Manually_Updating_Hosts.html
>
> Does that make sense, or do you think it needs enhancement? If it
> needs enhancement, please open a documentation bug:
> https://github.com/oVirt/ovirt-site/issues/new
>
>
>
>>
>> some combinations of commands and reboots finally got the update to
>> take.
>>
>> Any idea about the messages about not being registered to an
>> entitlement server?  What's that all about??
>>
>
> If you're on CentOS, it's a harmless side effect of cockpit being
> installed.
> # cat  /etc/yum/pluginconf.d/subscription-manager.conf
> [main]
> enabled=1
> # change to 0 if you prefer
>
>
>
>>
>>
>>
>> On Wed, Feb 13, 2019 at 7:30 PM Edward Berger 
>> wrote:
>>
>>> If it's a node-ng install, you should just update the whole image with
>>> yum update ovirt-node-ng-image-update
>>>
>>> On Wed, Feb 13, 2019 at 8:12 PM Vincent Royer 
>>> wrote:
>>>
 Sorry, this is a node install w/ he.

 On Wed, Feb 13, 2019, 4:44 PM Vincent Royer >>> wrote:

> trying to update from 4.2.6 to 4.2.8
>
> yum update fails with:
>
> --> Finished Dependency Resolution
>
>

[ovirt-users] Re: oVirt Node 4.2.7 upgrade fails with broken dependencies ?

2018-11-14 Thread Yuval Turgeman
For ovirt-node, the only package that should be downloaded as an update is
the ovirt-node-ng-image-update rpm.  It should be set in the yum repo
files using includepkgs.
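
For illustration, such a repo stanza looks something like this (baseurl
shortened; the exact package list can differ per release):

[ovirt-4.2]
name=Latest oVirt 4.2 Release
baseurl=http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/
includepkgs=ovirt-node-ng-image-update ovirt-node-ng-image ovirt-engine-appliance
enabled=1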

On Wed, Nov 14, 2018 at 5:30 PM Sandro Bonazzola 
wrote:

>
>
> On Wed, Nov 14, 2018 at 16:27 Jayme
> wrote:
>
>> I am having the same issue as well attempting to update oVirt node to
>> latest.
>>
>
> Please manually disable EPEL repository for now. We are checking with
> CentOS OpsTools SIG if we can get an updated collectd there.
>
>
>>
>>
>>
>> On Wed, Nov 14, 2018 at 11:07 AM Giulio Casella 
>> wrote:
>>
>>> It's due to an update of collectd in EPEL, but the ovirt repos also contain
>>> collectd-write_http and collectd-disk (still not updated). We have to
>>> wait for the ovirt guys to release updated versions in the
>>> ovirt-4.2-centos-opstools repo.
>>>
>>> I think it'll be a matter of a few days.
>>>
>>> Ciao,
>>> Giulio
>>>
>>> Il 14/11/2018 13:51, Rogério Ceni Coelho ha scritto:
>>> > Ovirt Engine with same problem.
>>> >
>>> > [root@nscovirt42prdpoa ~]# yum update
>>> > Loaded plugins: fastestmirror, versionlock
>>> > Loading mirror speeds from cached hostfile
>>> >  * base: centos.brnet.net.br 
>>> >  * epel: mirror.ci.ifes.edu.br 
>>> >  * extras: centos.brnet.net.br 
>>> >  * ovirt-4.2: mirror.linux.duke.edu 
>>> >  * ovirt-4.2-epel: mirror.ci.ifes.edu.br
>>> >  * updates: centos.brnet.net.br 
>>> > Resolving Dependencies
>>> > --> Running transaction check
>>> > ---> Package collectd.x86_64 0:5.8.0-6.1.el7 will be updated
>>> > --> Processing Dependency: collectd(x86-64) = 5.8.0-6.1.el7 for
>>> package:
>>> > collectd-disk-5.8.0-6.1.el7.x86_64
>>> > --> Processing Dependency: collectd(x86-64) = 5.8.0-6.1.el7 for
>>> package:
>>> > collectd-write_http-5.8.0-6.1.el7.x86_64
>>> > ---> Package collectd.x86_64 0:5.8.1-1.el7 will be an update
>>> > ---> Package collectd-postgresql.x86_64 0:5.8.0-6.1.el7 will be updated
>>> > ---> Package collectd-postgresql.x86_64 0:5.8.1-1.el7 will be an update
>>> > ---> Package ovirt-engine-extensions-api-impl.noarch 0:4.2.7.4-1.el7
>>> > will be updated
>>> > ---> Package ovirt-engine-extensions-api-impl.noarch 0:4.2.7.5-1.el7
>>> > will be an update
>>> > ---> Package ovirt-engine-lib.noarch 0:4.2.7.4-1.el7 will be updated
>>> > ---> Package ovirt-engine-lib.noarch 0:4.2.7.5-1.el7 will be an update
>>> > ---> Package ovirt-engine-setup.noarch 0:4.2.7.4-1.el7 will be updated
>>> > ---> Package ovirt-engine-setup.noarch 0:4.2.7.5-1.el7 will be an
>>> update
>>> > ---> Package ovirt-engine-setup-base.noarch 0:4.2.7.4-1.el7 will be
>>> updated
>>> > ---> Package ovirt-engine-setup-base.noarch 0:4.2.7.5-1.el7 will be an
>>> > update
>>> > ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
>>> > 0:4.2.7.4-1.el7 will be updated
>>> > ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
>>> > 0:4.2.7.5-1.el7 will be an update
>>> > ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
>>> > 0:4.2.7.4-1.el7 will be updated
>>> > ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
>>> > 0:4.2.7.5-1.el7 will be an update
>>> > ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
>>> > 0:4.2.7.4-1.el7 will be updated
>>> > ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
>>> > 0:4.2.7.5-1.el7 will be an update
>>> > ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
>>> > 0:4.2.7.4-1.el7 will be updated
>>> > ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
>>> > 0:4.2.7.5-1.el7 will be an update
>>> > ---> Package ovirt-engine-vmconsole-proxy-helper.noarch 0:4.2.7.4-1.el7
>>> > will be updated
>>> > ---> Package ovirt-engine-vmconsole-proxy-helper.noarch 0:4.2.7.5-1.el7
>>> > will be an update
>>> > ---> Package ovirt-engine-websocket-proxy.noarch 0:4.2.7.4-1.el7 will
>>> be
>>> > updated
>>> > ---> Package ovirt-engine-websocket-proxy.noarch 0:4.2.7.5-1.el7 will
>>> be
>>> > an update
>>> > ---> Package ovirt-release42.noarch 0:4.2.7-1.el7 will be updated
>>> > ---> Package ovirt-release42.noarch 0:4.2.7.1-1.el7 will be an update
>>> > --> Finished Dependency Resolution
>>> > Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64
>>> > (@ovirt-4.2-centos-opstools)
>>> >Requires: collectd(x86-64) = 5.8.0-6.1.el7
>>> >Removing: collectd-5.8.0-6.1.el7.x86_64
>>> > (@ovirt-4.2-centos-opstools)
>>> >collectd(x86-64) = 5.8.0-6.1.el7
>>> >Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
>>> >collectd(x86-64) = 5.8.1-1.el7
>>> >Available: collectd-5.7.2-1.el7.x86_64
>>> > (ovirt-4.2-centos-opstools)
>>> >collectd(x86-64) = 5.7.2-1.el7
>>> >Available: collectd-5.7.2-3.el7.x86_64
>>> > (ovirt-4.2-centos-opstools)
>>> >   

[ovirt-users] Re: Diary of hosted engine install woes

2018-10-08 Thread Yuval Turgeman
For debugging purposes, try to download directly with curl/wget; if that
works, I would try to increase the timeout in yum.conf.
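
For example, in /etc/yum.conf (the values are only suggestions):

[main]
timeout=300
minrate=100

And a quick direct-download test:

# curl -O http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm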

On Mon, Oct 8, 2018 at 6:05 PM Simone Tiraboschi 
wrote:

>
>
> On Mon, Oct 8, 2018 at 5:00 PM  wrote:
>
>> Thanks for the suggestion Simone.  Tried sudo yum install
>> ovirt-engine-appliance, no joy:
>>
>
> Can you please run a "yum clean all" and retry?
>
>
>> "
>>
>> =
>>  Package   Arch  Version
>> RepositorySize
>>
>> =
>> Installing:
>>  ovirt-engine-appliancenoarch
>> 4.2-20180903.1.el7   ovirt-4.2992 M
>>
>> Transaction Summary
>>
>> =
>> Install  1 Package
>>
>> Total download size: 992 M
>> Installed size: 992 M
>> Is this ok [y/d/N]: y
>> Downloading packages:
>> Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30
>> seconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Connection timed out after 30001 milliseconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Connection timed out after 30001 milliseconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Connection timed out after 30001 milliseconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>   ] 1.8 kB/s | 124 MB 134:10:07 ETA
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30
>> seconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Connection timed out after 30001 milliseconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Connection timed out after 3 milliseconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Connection timed out after 3 milliseconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> [Errno 12] Timeout on
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30
>> seconds')
>> Trying other mirror.
>> ovirt-engine-appliance-4.2-201 FAILED
>>
>> http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/noarch/ovirt-engine-appliance-4.2-20180903.1.el7.noarch.rpm:
>> 

[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Yuval Turgeman
Wait, the root disk's UUID is different??

On Mon, Sep 24, 2018, 15:39 Yuval Turgeman  wrote:

> Bootid is there, so that's not the issue... can you run `imgbase --debug
> check`?
>
> On Mon, Sep 24, 2018, 15:22 KRUECKEL OLIVER 
> wrote:
>
>>
>> ------
>> *From:* Yuval Turgeman 
>> *Sent:* Monday, September 24, 2018 11:29:31
>> *To:* Sandro Bonazzola
>> *Cc:* KRUECKEL OLIVER; Ryan Barry; Chen Shao; Ying Cui; users
>> *Subject:* Re: [ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status
>> degraded
>>
>> Can you share the output from `cat /proc/cmdline` and perhaps the
>> grub.conf?
>> Imgbased adds a bootid and perhaps it's missing for some reason.
>>
>> On Mon, Sep 24, 2018, 11:59 Sandro Bonazzola  wrote:
>>
>>> Adding some people who may help understanding what happened and work on
>>> a solution for this.
>>>
>>> On Mon, Sep 24, 2018 at 10:30
>>> wrote:
>>>
>>>> I've been seeing this problem for some time (after about the 3rd or 4th
>>>> update I always run into it), and have always recovered with a fresh
>>>> installation. Now I've looked at it more closely (maybe this information
>>>> will help someone who knows the internals).
>>>>
>>>> Installation runs without a problem, reboot, the system runs as expected,
>>>> repeated reboot => node status: DEGRADED
>>>>
>>>> What I found is: /dev/sda1 and /dev/sda2 are missing, so it cannot
>>>> mount /boot and /boot/efi!
>>>>
>>>> In dmesg all 3 partitions are displayed, with parted as well; after
>>>> partprobe, /dev/sda1 and /dev/sda2 are available under /dev/, and mount /boot or
>>>> mount /boot/efi does not issue an error, but the partitions are not actually
>>>> mounted (df -h does not show them, and umount /boot or /boot/efi says so too).
>>>>
>>>> I have the same problem with
>>>> ovirt-node-ng-image-update-4.2.7-0.1.rc1.el7.noarch.rpm
>>>>
>>>> If I undo the installation (imgbase base
>>>> --remove=ovirt-node-ng-image-update-4.2 . and yum remove
>>>> ovirt-node-ng-image-update-4.2 .) and repeat the installation, I can
>>>> reproduce the behavior (install, reboot, everything works with the new version,
>>>> reboot, node status: DEGRADED)
>>>>
>>>> Have this behavior on four test servers.
>>>>
>>>>
>>>> here df -h, ll /boot after the 1st reboot and the output of imgbase
>>>> layout and imgbase w
>>>>
>>>> [root@ovirt-n1 ~]# df -h
>>>> Dateisystem
>>>> Größe Benutzt Verf. Verw% Eingehängt auf
>>>> /dev/mapper/onn_ovirt--n1-ovirt--node--ng--4.2.6.1--0.20180913.0+1
>>>> 183G3,3G  170G2% /
>>>> devtmpfs
>>>>  95G   0   95G0% /dev
>>>> tmpfs
>>>> 95G 16K   95G1% /dev/shm
>>>> tmpfs
>>>> 95G 42M   95G1% /run
>>>> tmpfs
>>>> 95G   0   95G0% /sys/fs/cgroup
>>>> /dev/mapper/onn_ovirt--n1-var
>>>> 15G187M   14G2% /var
>>>> /dev/sda2
>>>>  976M417M  492M   46% /boot
>>>> /dev/mapper/onn_ovirt--n1-tmp
>>>>  976M3,4M  906M1% /tmp
>>>> /dev/mapper/onn_ovirt--n1-home
>>>> 976M2,6M  907M1% /home
>>>> /dev/mapper/onn_ovirt--n1-var_log
>>>>  7,8G414M  7,0G6% /var/log
>>>> /dev/mapper/onn_ovirt--n1-var_log_audit
>>>>  2,0G 39M  1,8G3% /var/log/audit
>>>> /dev/mapper/onn_ovirt--n1-var_crash
>>>>  9,8G 37M  9,2G1% /var/crash
>>>> /dev/sda1
>>>>  200M9,8M  191M5% /boot/efi
>>>> gluster01.test.visa-ad.at:/st1
>>>> 805G 71G  734G9%
>>>> /rhev/data-center/mnt/glusterSD/gluster01.test.visa-ad.at:_st1
>>>> glustermount:iso
>>>>  50G 20G   30G   40% /rhev/data-center/mnt/glusterSD/glustermount:iso
>>>> glustermount:export
>>>>  100G4,8G   96G5%
>>>> /rhev/data-center/mnt/glusterSD/glustermount:export
>>>> tmpfs
>>>> 19G   0   19G0% /run/user/0
>>>> [root@ovirt-n1 ~]# ll /boot
>>>> insgesamt 187016
>>>> -rw-r--r--. 1 root root   140971  8. Mai 10:37
>>>> config-3.10.0-693.21.1.el7.x86_64
>>>> -rw-r--r--. 1 root root   147859 24. Sep 09:04
>>>> config-3.10.0-862.11.6.el7.x86_64
&

[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Yuval Turgeman
Bootid is there, so that's not the issue... can you run `imgbase --debug
check`?

On Mon, Sep 24, 2018, 15:22 KRUECKEL OLIVER 
wrote:

>
> --
> *From:* Yuval Turgeman 
> *Sent:* Monday, September 24, 2018 11:29:31
> *To:* Sandro Bonazzola
> *Cc:* KRUECKEL OLIVER; Ryan Barry; Chen Shao; Ying Cui; users
> *Subject:* Re: [ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status
> degraded
>
> Can you share the output from `cat /proc/cmdline` and perhaps the
> grub.conf?
> Imgbased adds a bootid and perhaps it's missing for some reason.
>
> On Mon, Sep 24, 2018, 11:59 Sandro Bonazzola  wrote:
>
>> Adding some people who may help understanding what happened and work on a
>> solution for this.
>>
>> Il giorno lun 24 set 2018 alle ore 10:30 
>> ha scritto:
>>
>>> I've been seeing this problem for some time (after about the 3rd or 4th
>>> update I always run into it), and have always recovered with a fresh
>>> installation. Now I've looked at it more closely (maybe this information
>>> will help someone who knows the internals).
>>>
>>> Installation runs without a problem, reboot, the system runs as expected,
>>> repeated reboot => node status: DEGRADED
>>>
>>> What I found is: /dev/sda1 and /dev/sda2 are missing, so it cannot
>>> mount /boot and /boot/efi!
>>>
>>> In dmesg all 3 partitions are displayed, with parted as well; after
>>> partprobe, /dev/sda1 and /dev/sda2 are available under /dev/, and mount /boot or
>>> mount /boot/efi does not issue an error, but the partitions are not actually
>>> mounted (df -h does not show them, and umount /boot or /boot/efi says so too).
>>>
>>> I have the same problem with
>>> ovirt-node-ng-image-update-4.2.7-0.1.rc1.el7.noarch.rpm
>>>
>>> If I undo the installation (imgbase base
>>> --remove=ovirt-node-ng-image-update-4.2 . and yum remove
>>> ovirt-node-ng-image-update-4.2 .) and repeat the installation, I can
>>> reproduce the behavior (install, reboot, everything works with the new version,
>>> reboot, node status: DEGRADED)
>>>
>>> Have this behavior on four test servers.
>>>
>>>
>>> here df -h, ll /boot after the 1st reboot and the output of imgbase
>>> layout and imgbase w
>>>
>>> [root@ovirt-n1 ~]# df -h
>>> DateisystemGröße
>>> Benutzt Verf. Verw% Eingehängt auf
>>> /dev/mapper/onn_ovirt--n1-ovirt--node--ng--4.2.6.1--0.20180913.0+1
>>> 183G3,3G  170G2% /
>>> devtmpfs
>>>  95G   0   95G0% /dev
>>> tmpfs
>>> 95G 16K   95G1% /dev/shm
>>> tmpfs
>>> 95G 42M   95G1% /run
>>> tmpfs
>>> 95G   0   95G0% /sys/fs/cgroup
>>> /dev/mapper/onn_ovirt--n1-var
>>> 15G187M   14G2% /var
>>> /dev/sda2
>>>  976M417M  492M   46% /boot
>>> /dev/mapper/onn_ovirt--n1-tmp
>>>  976M3,4M  906M1% /tmp
>>> /dev/mapper/onn_ovirt--n1-home
>>> 976M2,6M  907M1% /home
>>> /dev/mapper/onn_ovirt--n1-var_log
>>>  7,8G414M  7,0G6% /var/log
>>> /dev/mapper/onn_ovirt--n1-var_log_audit
>>>  2,0G 39M  1,8G3% /var/log/audit
>>> /dev/mapper/onn_ovirt--n1-var_crash
>>>  9,8G 37M  9,2G1% /var/crash
>>> /dev/sda1
>>>  200M9,8M  191M5% /boot/efi
>>> gluster01.test.visa-ad.at:/st1
>>> 805G 71G  734G9%
>>> /rhev/data-center/mnt/glusterSD/gluster01.test.visa-ad.at:_st1
>>> glustermount:iso
>>>  50G 20G   30G   40% /rhev/data-center/mnt/glusterSD/glustermount:iso
>>> glustermount:export
>>>  100G4,8G   96G5%
>>> /rhev/data-center/mnt/glusterSD/glustermount:export
>>> tmpfs
>>> 19G   0   19G0% /run/user/0
>>> [root@ovirt-n1 ~]# ll /boot
>>> insgesamt 187016
>>> -rw-r--r--. 1 root root   140971  8. Mai 10:37
>>> config-3.10.0-693.21.1.el7.x86_64
>>> -rw-r--r--. 1 root root   147859 24. Sep 09:04
>>> config-3.10.0-862.11.6.el7.x86_64
>>> drwx--. 3 root root16384  1. Jan 1970  efi
>>> -rw-r--r--. 1 root root   192572  5. Nov 2016  elf-memtest86+-5.01
>>> drwxr-xr-x. 2 root root 4096  4. Mai 18:34 extlinux
>>> drwxr-xr-x. 2 root root 4096  4. Mai 18:16 grub
>>> drwx--. 5 root root 4096  8. Mai 08:45 grub2
>>> -rw---. 1 root root 59917312  8. Mai 10:39
>>> initramfs-3.10.0-693.21.1.el7.x86_64.

[ovirt-users] Re: upgrade 4.2.6 to 4.2.6.1: node status degraded

2018-09-24 Thread Yuval Turgeman
Can you share the output from `cat /proc/cmdline` and perhaps the
grub.conf?
Imgbased adds a bootid and perhaps it's missing for some reason.
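
Assuming the kernel argument is img.bootid (going from memory here, so
treat the name as an assumption), something like this should show it:

# grep -o 'img.bootid=[^ ]*' /proc/cmdline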

On Mon, Sep 24, 2018, 11:59 Sandro Bonazzola  wrote:

> Adding some people who may help understanding what happened and work on a
> solution for this.
>
> On Mon, Sep 24, 2018 at 10:30
> wrote:
>
>> I've been seeing this problem for some time (after about the 3rd or 4th
>> update I always run into it), and have always recovered with a fresh
>> installation. Now I've looked at it more closely (maybe this information
>> will help someone who knows the internals).
>>
>> Installation runs without a problem, reboot, the system runs as expected,
>> repeated reboot => node status: DEGRADED
>>
>> What I found is: /dev/sda1 and /dev/sda2 are missing, so it cannot
>> mount /boot and /boot/efi!
>>
>> In dmesg all 3 partitions are displayed, with parted as well; after
>> partprobe, /dev/sda1 and /dev/sda2 are available under /dev/, and mount /boot or
>> mount /boot/efi does not issue an error, but the partitions are not actually
>> mounted (df -h does not show them, and umount /boot or /boot/efi says so too).
>>
>> I have the same problem with
>> ovirt-node-ng-image-update-4.2.7-0.1.rc1.el7.noarch.rpm
>>
>> If I undo the installation (imgbase base
>> --remove=ovirt-node-ng-image-update-4.2 . and yum remove
>> ovirt-node-ng-image-update-4.2 .) and repeat the installation, I can
>> reproduce the behavior (install, reboot, everything works with the new version,
>> reboot, node status: DEGRADED)
>>
>> Have this behavior on four test servers.
>>
>>
>> here df -h, ll /boot after the 1st reboot and the output of imgbase
>> layout and imgbase w
>>
>> [root@ovirt-n1 ~]# df -h
>> DateisystemGröße
>> Benutzt Verf. Verw% Eingehängt auf
>> /dev/mapper/onn_ovirt--n1-ovirt--node--ng--4.2.6.1--0.20180913.0+1  183G
>>   3,3G  170G2% /
>> devtmpfs 95G
>>  0   95G0% /dev
>> tmpfs95G
>>16K   95G1% /dev/shm
>> tmpfs95G
>>42M   95G1% /run
>> tmpfs95G
>>  0   95G0% /sys/fs/cgroup
>> /dev/mapper/onn_ovirt--n1-var15G
>>   187M   14G2% /var
>> /dev/sda2   976M
>>   417M  492M   46% /boot
>> /dev/mapper/onn_ovirt--n1-tmp   976M
>>   3,4M  906M1% /tmp
>> /dev/mapper/onn_ovirt--n1-home  976M
>>   2,6M  907M1% /home
>> /dev/mapper/onn_ovirt--n1-var_log   7,8G
>>   414M  7,0G6% /var/log
>> /dev/mapper/onn_ovirt--n1-var_log_audit 2,0G
>>39M  1,8G3% /var/log/audit
>> /dev/mapper/onn_ovirt--n1-var_crash 9,8G
>>37M  9,2G1% /var/crash
>> /dev/sda1   200M
>>   9,8M  191M5% /boot/efi
>> gluster01.test.visa-ad.at:/st1
>> 805G 71G  734G9%
>> /rhev/data-center/mnt/glusterSD/gluster01.test.visa-ad.at:_st1
>> glustermount:iso 50G
>>20G   30G   40% /rhev/data-center/mnt/glusterSD/glustermount:iso
>> glustermount:export 100G
>>   4,8G   96G5% /rhev/data-center/mnt/glusterSD/glustermount:export
>> tmpfs19G
>>  0   19G0% /run/user/0
>> [root@ovirt-n1 ~]# ll /boot
>> insgesamt 187016
>> -rw-r--r--. 1 root root   140971  8. Mai 10:37
>> config-3.10.0-693.21.1.el7.x86_64
>> -rw-r--r--. 1 root root   147859 24. Sep 09:04
>> config-3.10.0-862.11.6.el7.x86_64
>> drwx--. 3 root root16384  1. Jan 1970  efi
>> -rw-r--r--. 1 root root   192572  5. Nov 2016  elf-memtest86+-5.01
>> drwxr-xr-x. 2 root root 4096  4. Mai 18:34 extlinux
>> drwxr-xr-x. 2 root root 4096  4. Mai 18:16 grub
>> drwx--. 5 root root 4096  8. Mai 08:45 grub2
>> -rw---. 1 root root 59917312  8. Mai 10:39
>> initramfs-3.10.0-693.21.1.el7.x86_64.img
>> -rw---. 1 root root 21026491 11. Jul 12:10
>> initramfs-3.10.0-693.21.1.el7.x86_64kdump.img
>> -rw---. 1 root root 26672143  4. Mai 18:24
>> initramfs-3.10.0-693.el7.x86_64.img
>> -rw---. 1 root root 62740408 24. Sep 09:05
>> initramfs-3.10.0-862.11.6.el7.x86_64.img
>> -rw-r--r--. 1 root root   611296  4. Mai 18:23 initrd-plymouth.img
>> drwx--. 2 root root16384  8. Mai 10:32 lost+found
>> -rw-r--r--. 1 root root   190896  5. Nov 2016  memtest86+-5.01
>> drwxr-xr-x. 2 root root 4096  8. Mai 10:39
>> ovirt-node-ng-4.2.3-0.20180504.0+1
>> drwxr-xr-x. 2 root root 4096  4. Sep 16:31
>> ovirt-node-ng-4.2.6-0.20180903.0+1
>> 

[ovirt-users] Re: Unable to upgrade ovirt-node from 4.2.3

2018-09-17 Thread Yuval Turgeman
Glad to hear it, thanks for the update!

On Mon, Sep 17, 2018, 18:13 Benedetto Vassallo 
wrote:

> Thank you guys, it worked!
>
>
Quoting Yuval Turgeman:
>
> Try to first remove the LV for /var/crash or mount it, then remove any
> base that is not used (imgbase base --remove ), then yum update
> again.
>
> Documented here (for other LVs):
>
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
>
> On Mon, Sep 17, 2018 at 5:13 PM, Sandro Bonazzola 
> wrote:
>
>> Adding Yuval
>>
>>
>> On Mon, Sep 17, 2018 at 15:16
>> wrote:
>>
>>> Hi,
>>> I have installed some 4.2.2 ovirt nodes (fresh installation) and
>>> upgraded them to 4.2.3 from the ovirt-engine UI.
>>> All was fine, but when I try to upgrade from 4.2.3 to any newer version I
>>> get a failure in the post script.
>>> In the /tmp/imgbased.log I have the following error:
>>>
>>> 2018-09-17 14:28:09,073 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.POEYF'],) {'close_fds': True, 'stderr': -2}
>>> 2018-09-17 14:28:09,080 [DEBUG] (MainThread) Returned:
>>> 2018-09-17 14:28:09,081 [ERROR] (MainThread) Failed to migrate etc
>>> Traceback (most recent call last):
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 118, in on_new_layer
>>> check_nist_layout(imgbase, new_lv)
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>>> line 209, in check_nist_layout
>>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/volume.py",
>>> line 48, in create
>>> "Path is already a volume: %s" % where
>>> AssertionError: Path is already a volume: /var/crash
>>> 2018-09-17 14:28:09,092 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.gt3IE'],) {}
>>> 2018-09-17 14:28:09,092 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.gt3IE'],) {'close_fds': True, 'stderr': -2}
>>> 2018-09-17 14:28:09,181 [DEBUG] (MainThread) Returned:
>>> 2018-09-17 14:28:09,182 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.gt3IE'],) {}
>>> 2018-09-17 14:28:09,182 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.gt3IE'],) {'close_fds': True, 'stderr': -2}
>>> 2018-09-17 14:28:09,190 [DEBUG] (MainThread) Returned:
>>> 2018-09-17 14:28:09,190 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.2weYe'],) {}
>>> 2018-09-17 14:28:09,190 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.2weYe'],) {'close_fds': True, 'stderr': -2}
>>> 2018-09-17 14:28:09,389 [DEBUG] (MainThread) Returned:
>>> 2018-09-17 14:28:09,389 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.2weYe'],) {}
>>> 2018-09-17 14:28:09,389 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.2weYe'],) {'close_fds': True, 'stderr': -2}
>>> 2018-09-17 14:28:09,397 [DEBUG] (MainThread) Returned:
>>> Traceback (most recent call last):
>>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>>> "__main__", fname, loader, pkg_name)
>>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>>> exec code in run_globals
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/__main__.py",
>>> line 53, in <module>
>>> CliApplication()
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/__init__.py",
>>> line 82, in CliApplication
>>> app.hooks.emit("post-arg-parse", args)
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/hooks.py",
>>> line 120, in emit
>>> cb(self.context, *args)
>>>   File
>>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>>> line 56, in post_argparse
>>> base_lv, _ = L

[ovirt-users] Re: Unable to upgrade ovirt-node from 4.2.3

2018-09-17 Thread Yuval Turgeman
Try to first remove the LV for /var/crash or mount it, then remove any base
that is not used (imgbase base --remove ), then yum update again.

Documented here (for other LVs):
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
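
Roughly, taking the base NVR from `imgbase layout` (the one below is only an
example):

# lvremove onn/var_crash   # or mount it instead, if you want to keep the data
# imgbase base --remove ovirt-node-ng-4.2.3-0.20180504.0
# yum update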

On Mon, Sep 17, 2018 at 5:13 PM, Sandro Bonazzola 
wrote:

> Adding Yuval
>
>
> On Mon, Sep 17, 2018 at 15:16
> wrote:
>
>> Hi,
>> I have installed some 4.2.2 ovirt nodes (fresh installation) and upgraded
>> them to 4.2.3 from the ovirt-engine UI.
>> All was fine, but when I try to upgrade from 4.2.3 to any newer version I
>> get a failure in the post script.
>> In the /tmp/imgbased.log I have the following error:
>>
>> 2018-09-17 14:28:09,073 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.POEYF'],) {'close_fds': True, 'stderr': -2}
>> 2018-09-17 14:28:09,080 [DEBUG] (MainThread) Returned:
>> 2018-09-17 14:28:09,081 [ERROR] (MainThread) Failed to migrate etc
>> Traceback (most recent call last):
>>   File "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/
>> imgbased/plugins/osupdater.py", line 118, in on_new_layer
>> check_nist_layout(imgbase, new_lv)
>>   File "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/
>> imgbased/plugins/osupdater.py", line 209, in check_nist_layout
>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/volume.py",
>> line 48, in create
>> "Path is already a volume: %s" % where
>> AssertionError: Path is already a volume: /var/crash
>> 2018-09-17 14:28:09,092 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/mnt.gt3IE'],) {}
>> 2018-09-17 14:28:09,092 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> u'/tmp/mnt.gt3IE'],) {'close_fds': True, 'stderr': -2}
>> 2018-09-17 14:28:09,181 [DEBUG] (MainThread) Returned:
>> 2018-09-17 14:28:09,182 [DEBUG] (MainThread) Calling binary: (['rmdir',
>> u'/tmp/mnt.gt3IE'],) {}
>> 2018-09-17 14:28:09,182 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.gt3IE'],) {'close_fds': True, 'stderr': -2}
>> 2018-09-17 14:28:09,190 [DEBUG] (MainThread) Returned:
>> 2018-09-17 14:28:09,190 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/mnt.2weYe'],) {}
>> 2018-09-17 14:28:09,190 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> u'/tmp/mnt.2weYe'],) {'close_fds': True, 'stderr': -2}
>> 2018-09-17 14:28:09,389 [DEBUG] (MainThread) Returned:
>> 2018-09-17 14:28:09,389 [DEBUG] (MainThread) Calling binary: (['rmdir',
>> u'/tmp/mnt.2weYe'],) {}
>> 2018-09-17 14:28:09,389 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.2weYe'],) {'close_fds': True, 'stderr': -2}
>> 2018-09-17 14:28:09,397 [DEBUG] (MainThread) Returned:
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>> "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>> exec code in run_globals
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/__main__.py",
>> line 53, in <module>
>> CliApplication()
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/__init__.py",
>> line 82, in CliApplication
>> app.hooks.emit("post-arg-parse", args)
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/hooks.py",
>> line 120, in emit
>> cb(self.context, *args)
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>> line 56, in post_argparse
>> base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>> line 118, in extract
>> "%s" % size, nvr)
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
>> line 99, in add_base_with_tree
>> new_layer_lv = self.imgbase.add_layer(new_base)
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/imgbase.py",
>> line 192, in add_layer
>> self.hooks.emit("new-layer-added", prev_lv, new_lv)
>>   File 
>> "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/imgbased/hooks.py",
>> line 120, in emit
>> cb(self.context, *args)
>>   File "/tmp/tmp.s2GHHNumuT/usr/lib/python2.7/site-packages/
>> imgbased/plugins/osupdater.py", line 132, in on_new_layer
>> raise ConfigMigrationError()
>> imgbased.plugins.osupdater.ConfigMigrationError
>>
>> I have this on all nodes.
>> Today I tried to make a fresh 4.2.2 install on a test VM, executed "yum
>> update" to bring it to 4.2.3, and then I got the same error again.
>> I did a default installation from
>> ovirt-node-ng-installer-ovirt-4.2-2018040514.iso, with
>> automatic partitioning and nothing customized.
>> Is anyone else having the same issue, or does anyone know how to solve it?
>> Thanks in advance.
>> ___
>> Users mailing list -- users@ovirt.org

[ovirt-users] Re: [ANN] oVirt Engine 4.2.6 async update is now available

2018-09-16 Thread Yuval Turgeman
The failure was caused by a /var/log LV that was not mounted for some
reason (was that manual by any chance?).  onn/var_crash was created during
the update and removed successfully because of the var_log issue.  The
only question is why it failed to clean up onn/ovirt-node-ng-4.2.6.  Was
the abrt process holding on to this LV?
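
Something like this should show the state (the LV names and mount points
are just the usual node layout, adjust as needed):

# lvs -o lv_name,lv_attr onn       (list the LVs in the onn VG)
# findmnt /var/log                 (no output means nothing is mounted there)
# fuser -vm /var/crash             (shows processes, e.g. abrt, holding the mount)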

Thanks,
Yuval

On Fri, Sep 14, 2018 at 11:53 AM,  wrote:

> I've managed to upgrade them now by removing logical volumes. Usually it's
> just /dev/onn/home, but on one host I had to keep reinstalling to see where
> it failed, so I had to
>
> lvremove /dev/onn/ovirt-node-ng-4.2.6.1-0.20180913.0+1
> lvremove /dev/onn/var_crash
> lvremove /dev/onn/var_log
> lvremove /dev/onn/var_log_audit
>
> I had trouble removing it because it was in use: there was an abrt process
> holding onto the mount.
>
> Thanks,
>  Paul S.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/LAS4IEPE2IF53QJVPMJEFLU2Q76AWQRD/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IHIT2XQWUSA5F2R6B7AGGCBIDVZCC5QY/


[ovirt-users] Re: [ANN] oVirt Engine 4.2.6 async update is now available

2018-09-13 Thread Yuval Turgeman
Hi Paul,

Can you share the entire imgbased.log ?

Thanks,
Yuval


On Thu, Sep 13, 2018 at 4:51 PM, Sandro Bonazzola 
wrote:

>
> Il giorno gio 13 set 2018 alle ore 15:33 
> ha scritto:
>
>> Hello,
>>   I can't upgrade to 4.2.6; it doesn't have the base layer, and I
>> have to "lvremove /dev/onn/ovirt-node-ng-4.2.6.1-0.20180913.0+1".
>> In imgbased.log I get:
>>
>> 2018-09-13 13:55:45,106 [ERROR] (MainThread) Failed to migrate etc
>> Traceback (most recent call last):
>>   File "/tmp/tmp.NqH2wp0Zty/usr/lib/python2.7/site-packages/
>> imgbased/plugins/osupdater.py", line 118, in on_new_layer
>> check_nist_layout(imgbase, new_lv)
>>   File "/tmp/tmp.NqH2wp0Zty/usr/lib/python2.7/site-packages/
>> imgbased/plugins/osupdater.py", line 209, in check_nist_layout
>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>   File 
>> "/tmp/tmp.NqH2wp0Zty/usr/lib/python2.7/site-packages/imgbased/volume.py",
>> line 48, in create
>> "Path is already a volume: %s" % where
>> AssertionError: Path is already a volume: /var/log
>>
>>
> Yuval, can you please help here?
>
>
>
>> Regards,
>> Paul S.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
>> guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
>> message/YDWA57HNUPYWS7CX55NAUWKY4JE5DORH/
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
> 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LSZ4HABABFTYZN2OGBYO6VB7NFTUBPY5/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-30 Thread Yuval Turgeman
Well, that would clean up the old stuff, indeed, but you shouldn't have to
do all of this yourself; too bad you can't find the logs.  Btw, removing
the LVs and grub entries can be done using `imgbase base --remove
.0`
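
For example (take the real base NVR from `imgbase layout` or `nodectl
info`; the one below is just the version mentioned earlier in this thread):

# imgbase layout
# imgbase base --remove ovirt-node-ng-4.2.6.1-0.20180913.0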

On Thu, Aug 30, 2018 at 7:33 PM, Matt Simonsen  wrote:

> I'm not sure I have logs from any instances that failed.
>
> However having upgraded about 10 nodes, the trick for success seems to be:
>
> - Manually cleaning grub.conf of any past node kernels (i.e. when on
> 4.2.3, I remove 4.2.2)
>
> - Manually removing any past kernel directories from /boot
>
> - Removing any old LVs (the .0 and .0+1)
>
> - yum update & reboot
>
> I'm not sure how our systems got to require this; we've done 5-6 upgrades
> starting with 4.1 and never had to do this before.
>
> If I continue to have problems from 4.2.5 to 4.2.6, I will send as clear
> of a bug report with logs as possible.
>
> Thank you for your help,
>
> Matt
>
>
>
>
> On 08/27/2018 12:37 AM, Yuval Turgeman wrote:
>
> Hi Matt,
>
> I just went over the log you sent and couldn't find anything other than
> the semanage failure, which you say seems to be ok.  Do you have any other
> logs (perhaps from other machines) that we can look at?
>
> Thanks,
> Yuval.
>
> On Tue, Aug 21, 2018 at 10:11 PM, Matt Simonsen  wrote:
>
>> I ran this on a host that has the same exact failing upgrade. It returned
>> with no output.
>>
>> I expect that if I manually remove the /boot kernel, the grub lines from
>> any other installs, and the other LV layers, the upgrade will work; but
>> with myself and others experiencing this, I'm happy to assist in finding
>> the cause.
>>
>> Is there anything else I can do to assist?
>>
>> Thanks,
>>
>> Matt
>>
>>
>>
>>
>>
>> On 08/21/2018 12:38 AM, Yuval Turgeman wrote:
>>
>> Hi again Matt,
>>
>> I was wondering what `semanage permissive -a setfiles_t` looks like on
>> the host that failed to upgrade because I don't see the exact error in the
>> log.
>>
>> Thanks,
>> Yuval.
>>
>>
>>
>> On Tue, Aug 21, 2018 at 12:04 AM, Matt Simonsen  wrote:
>>
>>> Hello,
>>>
>>> I replied to a different email in this thread, noting I believe I may
>>> have a workaround to this issue.
>>>
>>> I did run this on a server that has not yet been upgraded, which
>>> previously has failed at being updated, and the command returned "0" with
>>> no output.
>>>
>>> [ ~]# semanage permissive -a setfiles_t
>>> [ ~]# echo $?
>>> 0
>>>
>>> Please let me know if there is anything else I can do to assist,
>>>
>>> Matt
>>>
>>>
>>>
>>>
>>>
>>> On 08/20/2018 08:19 AM, Yuval Turgeman wrote:
>>>
>>> Hi Matt,
>>>
>>> Can you attach the output from the following line
>>>
>>> # semanage permissive -a setfiles_t
>>>
>>> Thanks,
>>> Yuval.
>>>
>>>
>>> On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen  wrote:
>>>
>>>> Hello all,
>>>>
>>>> I've emailed about similar trouble with an oVirt Node upgrade using the
>>>> ISO install. I've attached the /tmp/imgbased.log file in hopes it will
>>>> help give a clue as to what caused the trouble.
>>>>
>>>> Since these use NFS storage I can rebuild, but would like to know,
>>>> ideally, what caused the upgrade to break.
>>>>
>>>> Truthfully following the install, I don't think I have done *that* much
>>>> to these systems, so I'm not sure what would have caused the problem.
>>>>
>>>> I have done several successful upgrades in the past and most of my
>>>> standalone systems have been working great.
>>>>
>>>> I've been really happy with oVirt, so kudos to the team.
>>>>
>>>> Thanks for any help,
>>>>
>>>> Matt
>>>>
>>>>
>>>>
>>>> ___
>>>> Users mailing list -- users@ovirt.org
>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>>> y/about/community-guidelines/
>>>> List Archives: https://lists.ovirt.org/archiv
>>>> es/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/
>>>>
>>>>
>>>
>>>
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DSJLBTGD3JFMN7333B4Z7ELFNGH5F5GS/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-27 Thread Yuval Turgeman
Hi Matt,

I just went over the log you sent and couldn't find anything other than
the semanage failure, which you say seems to be ok.  Do you have any other
logs (perhaps from other machines) that we can look at?

Thanks,
Yuval.

On Tue, Aug 21, 2018 at 10:11 PM, Matt Simonsen  wrote:

> I ran this on a host that has the same exact failing upgrade. It returned
> with no output.
>
> I expect that if I manually remove the /boot kernel, the grub lines from
> any other installs, and the other LV layers, the upgrade will work; but
> with myself and others experiencing this, I'm happy to assist in finding
> the cause.
>
> Is there anything else I can do to assist?
>
> Thanks,
>
> Matt
>
>
>
>
>
> On 08/21/2018 12:38 AM, Yuval Turgeman wrote:
>
> Hi again Matt,
>
> I was wondering what `semanage permissive -a setfiles_t` looks like on the
> host that failed to upgrade because I don't see the exact error in the log.
>
> Thanks,
> Yuval.
>
>
>
> On Tue, Aug 21, 2018 at 12:04 AM, Matt Simonsen  wrote:
>
>> Hello,
>>
>> I replied to a different email in this thread, noting I believe I may
>> have a workaround to this issue.
>>
>> I did run this on a server that has not yet been upgraded, which
>> previously has failed at being updated, and the command returned "0" with
>> no output.
>>
>> [ ~]# semanage permissive -a setfiles_t
>> [ ~]# echo $?
>> 0
>>
>> Please let me know if there is anything else I can do to assist,
>>
>> Matt
>>
>>
>>
>>
>>
>> On 08/20/2018 08:19 AM, Yuval Turgeman wrote:
>>
>> Hi Matt,
>>
>> Can you attach the output from the following line
>>
>> # semanage permissive -a setfiles_t
>>
>> Thanks,
>> Yuval.
>>
>>
>> On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen  wrote:
>>
>>> Hello all,
>>>
>>> I've emailed about similar trouble with an oVirt Node upgrade using the
>>> ISO install. I've attached the /tmp/imgbased.log file in hopes it will
>>> help give a clue as to what caused the trouble.
>>>
>>> Since these use NFS storage I can rebuild, but would like to know,
>>> ideally, what caused the upgrade to break.
>>>
>>> Truthfully following the install, I don't think I have done *that* much
>>> to these systems, so I'm not sure what would have caused the problem.
>>>
>>> I have done several successful upgrades in the past and most of my
>>> standalone systems have been working great.
>>>
>>> I've been really happy with oVirt, so kudos to the team.
>>>
>>> Thanks for any help,
>>>
>>> Matt
>>>
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>> y/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archiv
>>> es/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/
>>>
>>>
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H6WSQJ3H7TBYDVJKBMKSW7VGS2B2A33Q/


[ovirt-users] Re: ovirt-node-ng-image-update 4.2.4 to 4.2.5.1 fails

2018-08-27 Thread Yuval Turgeman
Thanks for the update, Glenn, I'm glad it works ! :)

On Wed, Aug 22, 2018 at 10:54 PM, Glenn Farmer 
wrote:

> Yuval, thanks for your assistance & guidance.
>
> I just wanted to confirm that with /var/crash mounted (and the leftover
> v4.2.5.1 LV from the previous failed installation removed), I was able to
> successfully upgrade from v4.2.4 to v4.2.5.1.
>
> Thanks again - Glenn
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/FIJJIAGNWT42U6PUPOI45VQBIQYSMJ5E/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LOVANURIZGL6TCY525RJTV3ZNCZCXMVL/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-21 Thread Yuval Turgeman
Hi again Matt,

I was wondering what `semanage permissive -a setfiles_t` looks like on the
host that failed to upgrade because I don't see the exact error in the log.

Thanks,
Yuval.



On Tue, Aug 21, 2018 at 12:04 AM, Matt Simonsen  wrote:

> Hello,
>
> I replied to a different email in this thread, noting I believe I may have
> a workaround to this issue.
>
> I did run this on a server that has not yet been upgraded, which
> previously has failed at being updated, and the command returned "0" with
> no output.
>
> [ ~]# semanage permissive -a setfiles_t
> [ ~]# echo $?
> 0
>
> Please let me know if there is anything else I can do to assist,
>
> Matt
>
>
>
>
>
> On 08/20/2018 08:19 AM, Yuval Turgeman wrote:
>
> Hi Matt,
>
> Can you attach the output from the following line
>
> # semanage permissive -a setfiles_t
>
> Thanks,
> Yuval.
>
>
> On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen  wrote:
>
>> Hello all,
>>
>> I've emailed about similar trouble with an oVirt Node upgrade using the
>> ISO install. I've attached the /tmp/imgbased.log file in hopes it will
>> help give a clue as to what caused the trouble.
>>
>> Since these use NFS storage I can rebuild, but would like to know,
>> ideally, what caused the upgrade to break.
>>
>> Truthfully following the install, I don't think I have done *that* much
>> to these systems, so I'm not sure what would have caused the problem.
>>
>> I have done several successful upgrades in the past and most of my
>> standalone systems have been working great.
>>
>> I've been really happy with oVirt, so kudos to the team.
>>
>> Thanks for any help,
>>
>> Matt
>>
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/communit
>> y/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archiv
>> es/list/users@ovirt.org/message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HOORJMQ7OJYJDFULSJR4LCIFTJNVN2W/


[ovirt-users] Re: oVirt Node 4.2.3.1. to 4.2.5 upgrade trouble, log attached

2018-08-20 Thread Yuval Turgeman
Hi Matt,

Can you attach the output from the following line

# semanage permissive -a setfiles_t

Thanks,
Yuval.


On Fri, Aug 17, 2018 at 2:26 AM, Matt Simonsen  wrote:

> Hello all,
>
> I've emailed about similar trouble with an oVirt Node upgrade using the
> ISO install. I've attached the /tmp/imgbased.log file in hopes it will
> help give a clue as to what caused the trouble.
>
> Since these use NFS storage I can rebuild, but would like to know,
> ideally, what caused the upgrade to break.
>
> Truthfully following the install, I don't think I have done *that* much to
> these systems, so I'm not sure what would have caused the problem.
>
> I have done several successful upgrades in the past and most of my
> standalone systems have been working great.
>
> I've been really happy with oVirt, so kudos to the team.
>
> Thanks for any help,
>
> Matt
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-
> guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/G6P7CHKTBD7ESE33MIXEDKV44QXITDJP/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TKQMSWAEFNQJHM5WUGPOIXS54IPNFKAQ/


[ovirt-users] Re: ovirt-node-ng-image-update 4.2.4 to 4.2.5.1 fails

2018-08-20 Thread Yuval Turgeman
Hi Glenn,

Can you please attach /var/log/imgbased.log ?

Thanks,
Yuval.

On Mon, Aug 20, 2018 at 5:40 PM, Douglas Schilling Landgraf <
dland...@redhat.com> wrote:

> Adding Yuval and Ryan.
>
> On Sun, 2018-08-19 at 07:34 +, Glenn Farmer wrote:
> > yum update ends with:
> >
> > warning: %post(ovirt-node-ng-image-update-4.2.5.1-1.el7.noarch)
> > scriptlet failed, exit status 1
> > Non-fatal POSTIN scriptlet failure in rpm package ovirt-node-ng-
> > image-update-4.2.5.1-1.el7.noarch
> >
> > It creates the layers:
> >
> > ovirt-node-ng-4.2.5.1-0.20180731.0   onn Vri---tz-k 6.00g pool00
> > ovirt-node-ng-4.2.5.1-0.20180731.0+1 onn Vwi-a-tz-- 6.00g pool00 ovirt-node-ng-4.2.5.1-0.20180731.0
> >
> > But no grub2 boot entry.
> >
> > nodectl info:
> >
> > layers:
> >   ovirt-node-ng-4.2.4-0.20180626.0:
> > ovirt-node-ng-4.2.4-0.20180626.0+1
> >   ovirt-node-ng-4.2.5.1-0.20180731.0:
> > ovirt-node-ng-4.2.5.1-0.20180731.0+1
> >   ovirt-node-ng-4.2.2-0.20180405.0:
> > ovirt-node-ng-4.2.2-0.20180405.0+1
> > bootloader:
> >   default: ovirt-node-ng-4.2.4-0.20180626.0+1
> >   entries:
> > ovirt-node-ng-4.2.2-0.20180405.0+1:
> >   index: 1
> >   title: ovirt-node-ng-4.2.2-0.20180405.0+1
> >   kernel: /boot/ovirt-node-ng-4.2.2-0.20180405.0+1/vmlinuz-
> > 3.10.0-693.21.1.el7.x86_64
> >   args: "ro crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.2.2-
> > 0.20180405.0+1 img.bootid=ovirt-node-ng-4.2.2-0.20180405.0+1"
> >   initrd: /boot/ovirt-node-ng-4.2.2-0.20180405.0+1/initramfs-
> > 3.10.0-693.21.1.el7.x86_64.img
> >   root: /dev/onn/ovirt-node-ng-4.2.2-0.20180405.0+1
> > ovirt-node-ng-4.2.4-0.20180626.0+1:
> >   index: 0
> >   title: ovirt-node-ng-4.2.4-0.20180626.0+1
> >   kernel: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/vmlinuz-
> > 3.10.0-862.3.3.el7.x86_64
> >   args: "ro crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.2.4-
> > 0.20180626.0+1 img.bootid=ovirt-node-ng-4.2.4-0.20180626.0+1"
> >   initrd: /boot/ovirt-node-ng-4.2.4-0.20180626.0+1/initramfs-
> > 3.10.0-862.3.3.el7.x86_64.img
> >   root: /dev/onn/ovirt-node-ng-4.2.4-0.20180626.0+1
> > current_layer: ovirt-node-ng-4.2.4-0.20180626.0+1
> >
> > Just posting for others that might have the same issue.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/
> VOTNR3NH3EAEW6YDINIILFVGQB2BX544/
>
>
> --
> Cheers
> Douglas
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7A4WOK3Y2FKRWOINRHQDYMBWDKJBTB5U/


[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-05 Thread Yuval Turgeman
Hi Oliver,

Sorry we couldn't get this to upgrade, but removing the base layers kinda
killed us - however, we already have some ideas on how to improve imgbased
to make it more friendly :)

Thanks for the update !
Yuval.


On Thu, Jul 5, 2018 at 3:52 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi Yuval,
>
> as you can see in my last attachment, after the LV metadata restore I was
> unable to modify LVs in pool00.
> The thin pool has queued transactions: got 23, expected 16 or so.
>
> I rebooted and tried repairing from a CentOS 7 USB stick, but I couldn't
> access/remove the LVs because they had a read lock, and a write lock was
> prohibited.
>
> The system boots only into the dracut emergency console, so for
> reliability I decided to reinstall it with a fresh 4.2.4 node after
> cleaning the disk. :-)
>
> Now it is running ovirt-node-ng-4.2.4.
> -
> Noticeable on this issue:
> - ovirt-node should not be installed on previously used CentOS disks
> without cleaning (var_crash LV).
> - Upgrades, e.g. to 4.2.4, should be easily reinstallable.
> - What about old versions in the LV thin pool, how can they be removed
> safely?
> - fstrim -av also trims LV thin-pool volumes, nice :-)
>
> Many thanks to you, I have learned a lot about LVM.
>
> Oliver
>
> > On 03.07.2018 at 22:58, Yuval Turgeman wrote:
> >
> > OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1
> still exists without its base - try this:
> >
> > 1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> > 2. nodectl info
> >
> > On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
> > I did it, with issues, see attachment.
> >
> >
> >
> >
> >> On 03.07.2018 at 22:25, Yuval Turgeman wrote:
> >>
> >> Hi Oliver,
> >>
> >> I would try the following, but please notice it is *very* dangerous, so
> a backup is probably a good idea (man vgcfgrestore)...
> >>
> >> 1. vgcfgrestore --list onn_ovn-monster
> >> 2. search for a .vg file that was created before deleting those 2 lvs
> (ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
> >> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster
> --force
> >> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
> >> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> >> 6. lvremove the lvs from the thinpool that are not mounted/used
> (var_crash?)
> >> 7. nodectl info to make sure everything is ok
> >> 8. reinstall the image-update rpm
> >>
> >> Thanks,
> >> Yuval.
> >>
> >>
> >>
> >> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman 
> wrote:
> >> Hi Oliver,
> >>
> >> The KeyError happens because there are no bases for the layers.  For
> each LV that ends with a +1, there should be a base read-only LV without
> +1.  So for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This
> is the reason nodectl info fails, and the upgrade will fail also.  In your
> original email it looks OK - I have never seen this happen, was this a
> manual lvremove ? I need to reproduce this and check what can be done.
> >>
> >> You can find me on #ovirt (irc.oftc.net) also :)
> >>
> >>
> >> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
> >> Yuval, here comes the lvs output.
> >>
> >> The IO Errors are because Node is in maintenance.
> >> The LV root is from a previously installed CentOS 7.5.
> >> Then I installed node-ng 4.2.1 and got this mix.
> >> The LV turbo is an SSD in its own VG named ovirt.
> >>
> >> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 (and +1) because of the
> >> nodectl info error:
> >>
> >> KeyError:  >>
> >> Now I get the error @4.2.3:
> >> [root@ovn-monster ~]# nodectl info
> >> Traceback (most recent call last):
> >>   File "/usr/lib64/python2.7/runpy.py", line 162, in
> _run_module_as_main
> >> "__main__", fname, loader, pkg_name)
> >>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> >> exec code in run_globals
> >>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line
> 42, in <module>
> >> CliApplication()
> >>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
> 200, in CliApplication
> >> return cmdmap.command(args)
> >>   File "/usr/lib/python2.7/site

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
OK Good, this is much better now, but ovirt-node-ng-4.2.4-0.20180626.0+1
still exists without its base - try this:

1. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
2. nodectl info

On Tue, Jul 3, 2018 at 11:52 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> I did it, with issues, see attachment.
>
>
>
>
> On 03.07.2018 at 22:25, Yuval Turgeman wrote:
>
> Hi Oliver,
>
> I would try the following, but please notice it is *very* dangerous, so a
> backup is probably a good idea (man vgcfgrestore)...
>
> 1. vgcfgrestore --list onn_ovn-monster
> 2. search for a .vg file that was created before deleting those 2 lvs (
> ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
> 3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
> 4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
> 5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
> 6. lvremove the lvs from the thinpool that are not mounted/used
> (var_crash?)
> 7. nodectl info to make sure everything is ok
> 8. reinstall the image-update rpm
>
> Thanks,
> Yuval.
>
>
>
> On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman 
> wrote:
>
>> Hi Oliver,
>>
>> The KeyError happens because there are no bases for the layers.  For each
>> LV that ends with a +1, there should be a base read-only LV without +1.  So
>> for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
>> reason nodectl info fails, and the upgrade will fail also.  In your
>> original email it looks OK - I have never seen this happen, was this a
>> manual lvremove ? I need to reproduce this and check what can be done.
>>
>> You can find me on #ovirt (irc.oftc.net) also :)
>>
>>
>> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
>> oliver.riese...@hs-bremen.de> wrote:
>>
>>> Yuval, here comes the lvs output.
>>>
>>> The IO Errors are because Node is in maintenance.
>>> The LV root is from a previously installed CentOS 7.5.
>>> Then I installed node-ng 4.2.1 and got this mix.
>>> The LV turbo is an SSD in its own VG named ovirt.
>>>
>>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 (and +1) because of the
>>> nodectl info error:
>>>
>>> KeyError: >>
>>> Now I get the error @4.2.3:
>>> [root@ovn-monster ~]# nodectl info
>>> Traceback (most recent call last):
>>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>>> "__main__", fname, loader, pkg_name)
>>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>>> exec code in run_globals
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>>> in <module>
>>> CliApplication()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
>>> 200, in CliApplication
>>> return cmdmap.command(args)
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line
>>> 118, in command
>>> return self.commands[command](**kwargs)
>>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>>> in info
>>> Info(self.imgbased, self.machine).write()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>>> __init__
>>> self._fetch_information()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>>> _fetch_information
>>> self._get_layout()
>>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>>> _get_layout
>>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line
>>> 155, in layout
>>> return self.naming.layout()
>>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>>> in layout
>>> tree = self.tree(lvs)
>>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>>> in tree
>>> bases[img.base.nvr].layers.append(img)
>>> KeyError: 
>>>
>>> lvs -a
>>>
>>> [root@ovn-monster ~]# lvs -a
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 0: Eingabe-/Ausgabefehler
>>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>>> 4096 at 5497568559104: Eingabe-/Ausgabefehler

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Oliver,

I would try the following, but please notice it is *very* dangerous, so a
backup is probably a good idea (man vgcfgrestore)...

1. vgcfgrestore --list onn_ovn-monster
2. search for a .vg file that was created before deleting those 2 lvs (
ovirt-node-ng-4.2.3-0.20180524.0 and ovirt-node-ng-4.2.3.1-0.20180530.0)
3. vgcfgrestore -f path-to-the-file-from-step2.vg onn_ovn-monster --force
4. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0
5. lvremove onn_ovn-monster/ovirt-node-ng-4.2.4-0.20180626.0+1
6. lvremove the lvs from the thinpool that are not mounted/used (var_crash?)
7. nodectl info to make sure everything is ok
8. reinstall the image-update rpm
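
If it helps, steps 1-3 might look like this (the archive file name is only
an example; pick the real one from the --list output):

# vgcfgrestore --list onn_ovn-monster
# vgcfgrestore -f /etc/lvm/archive/onn_ovn-monster_00042-1234567890.vg onn_ovn-monster --force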

Thanks,
Yuval.



On Tue, Jul 3, 2018 at 10:57 PM, Yuval Turgeman  wrote:

> Hi Oliver,
>
> The KeyError happens because there are no bases for the layers.  For each
> LV that ends with a +1, there should be a base read-only LV without +1.  So
> for 3 ovirt-node-ng images, you're supposed to have 6 layers.  This is the
> reason nodectl info fails, and the upgrade will fail also.  In your
> original email it looks OK - I have never seen this happen, was this a
> manual lvremove ? I need to reproduce this and check what can be done.
>
> You can find me on #ovirt (irc.oftc.net) also :)
>
>
> On Tue, Jul 3, 2018 at 10:41 PM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
>
>> Yuval, here comes the lvs output.
>>
>> The IO Errors are because Node is in maintenance.
>> The LV root is from a previously installed CentOS 7.5.
>> Then I installed node-ng 4.2.1 and got this mix.
>> The LV turbo is an SSD in its own VG named ovirt.
>>
>> I removed LV ovirt-node-ng-4.2.1-0.20180223.0 (and +1) because of the
>> nodectl info error:
>>
>> KeyError: >
>> Now I get the error @4.2.3:
>> [root@ovn-monster ~]# nodectl info
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>> "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>> exec code in run_globals
>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>> in <module>
>> CliApplication()
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
>> in CliApplication
>> return cmdmap.command(args)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
>> in command
>> return self.commands[command](**kwargs)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>> in info
>> Info(self.imgbased, self.machine).write()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>> __init__
>> self._fetch_information()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>> _fetch_information
>> self._get_layout()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>> _get_layout
>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
>> in layout
>> return self.naming.layout()
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>> in layout
>> tree = self.tree(lvs)
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>> in tree
>> bases[img.base.nvr].layers.append(img)
>> KeyError: 
>>
>> lvs -a
>>
>> [root@ovn-monster ~]# lvs -a
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 5497568559104: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 5497568616448: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee716bee5e05b11dc52616: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 0: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 1099526242304: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 1099526299648: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860ee9137c5ae35cd4bc5f6b8: read failed after 0 of
>> 4096 at 4096: Eingabe-/Ausgabefehler
>>   /dev/mapper/36090a02860e

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
>   home                                 onn_ovn-monster Vwi-aotz--    1,00g pool00                                   4,79
>   [lvol0_pmspare]                      onn_ovn-monster ewi---      144,00m
>   ovirt-node-ng-4.2.3-0.20180524.0+1   onn_ovn-monster Vwi-aotz-- <252,38g pool00                                   2,88
>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   0,86
>   ovirt-node-ng-4.2.4-0.20180626.0     onn_ovn-monster Vri-a-tz-k <252,38g pool00                                   0,85
>   ovirt-node-ng-4.2.4-0.20180626.0+1   onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0  0,85
>   pool00                               onn_ovn-monster twi-aotz-- <279,38g                                          6,76   1,01
>   [pool00_tdata]                       onn_ovn-monster Twi-ao     <279,38g
>   [pool00_tmeta]                       onn_ovn-monster ewi-ao        1,00g
>   root                                 onn_ovn-monster Vwi-a-tz-- <252,38g pool00                                   1,24
>   swap                                 onn_ovn-monster -wi-ao        4,00g
>   tmp                                  onn_ovn-monster Vwi-aotz--    1,00g pool00                                   5,01
>   var                                  onn_ovn-monster Vwi-aotz--   15,00g pool00                                   3,56
>   var_crash                            onn_ovn-monster Vwi-aotz--   10,00g pool00                                   2,86
>   var_log                              onn_ovn-monster Vwi-aotz--    8,00g pool00                                  38,48
>   var_log_audit                        onn_ovn-monster Vwi-aotz--    2,00g pool00                                   6,77
>   turbo                                ovirt           -wi-ao      894,25g
>
>
>
> On 03.07.2018 at 12:58, Yuval Turgeman wrote:
>
> Oliver, can you share the output from lvs ?
>
> On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
>
>> Hi Yuval,
>>
>> * Reinstallation failed because the LVs already exist:
>>   ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
>>   ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
>> See attachment imgbased.reinstall.log
>>
>> * I removed them and reinstalled again, without luck.
>>
>> I got KeyError: 
>>
>> See attachment imgbased.rereinstall.log
>>
>> Also a new problem with nodectl info
>> [root@ovn-monster tmp]# nodectl info
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
>> "__main__", fname, loader, pkg_name)
>>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
>> exec code in run_globals
>>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
>> in <module>
>> CliApplication()
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
>> in CliApplication
>> return cmdmap.command(args)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
>> in command
>> return self.commands[command](**kwargs)
>>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
>> in info
>> Info(self.imgbased, self.machine).write()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
>> __init__
>> self._fetch_information()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
>> _fetch_information
>> self._get_layout()
>>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
>> _get_layout
>> layout = LayoutParser(self.app.imgbase.layout()).parse()
>>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
>> in layout
>> return self.naming.layout()
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
>> in layout
>> tree = self.tree(lvs)
>>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
>> in tree
>> bases[img.base.nvr].layers.append(img)
>> KeyError: 
>>
>>
>>
>>
>>
>>
>> On 02.07.2018 at 22:22, Oliver Riesener <
>> oliver.riese...@hs-bremen.de> wrote:
>>
>> Hi

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Hi Matt,

I would try to run `fstrim -a` (man fstrim) and see if it frees anything
from the thinpool.  If you do decide to run this, please send the output
for lvs again.
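
Something like this, for example (the VG name is taken from your earlier
lvs output), to compare the pool usage before and after:

# lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4
# fstrim -av
# lvs -o lv_name,data_percent,metadata_percent onn_node1-g8-h4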

Also, are you on #ovirt ?

Thanks,
Yuval.


On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen  wrote:

> Thank you again for the assistance with this issue.
>
> Below is the result of the command below.
>
> In the future I am considering using different logical RAID volumes to get
> different devices (sda, sdb, etc.) for the oVirt Node image and the storage
> filesystem, to simplify things.  However, I'd like to understand why this
> upgrade failed and how to correct it, if at all possible.
>
> I believe I need to recreate the /var/crash partition? I incorrectly
> removed it; is it simply a matter of using LVM to add a new partition and
> format it?
>
> Secondly, do you have any suggestions on how to move forward with the
> error regarding the pool capacity? I'm not sure if this is a legitimate
> error or problem in the upgrade process.
>
> Thanks,
>
> Matt
>
>
>
>
> On 07/03/2018 03:58 AM, Yuval Turgeman wrote:
>
> Not sure this is the problem, autoextend should be enabled for the
> thinpool, `lvs -o +profile` should show imgbased-pool (defined at
> /etc/lvm/profile/imgbased-pool.profile)
>
> On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David 
> wrote:
>
>> On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen  wrote:
>> >
>> > This error adds some clarity.
>> >
>> > That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>> >
>> > How do you suggest I proceed?
>> >
>> > Thank you for your help,
>> >
>> > Matt
>> >
>> >
>> >
>> > [root@node6-g8-h4 ~]# lvs
>> >
>> >   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>> >   home                                 onn_node1-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>> >   ovirt-node-ng-4.2.2-0.20180423.0     onn_node1-g8-h4 Vwi---tz-k <50.06g pool00 root
>> >   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz-- <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>> >   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k <50.06g pool00
>> >   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz-- <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0 6.95
>> >   pool00                               onn_node1-g8-h4 twi-aotz--  <1.30t                                           76.63  50.34
>>
>> I think your thinpool meta volume is close to full and needs to be
>> enlarged.
>> This quite likely happened because you extended the thinpool without
>> extending the meta vol.
>>
>> Check also 'lvs -a'.
>>
>> This might be enough, but check the names first:
>>
>> lvextend -L+200m onn_node1-g8-h4/pool00_tmeta
>>
>> Best regards,
>>
>> >   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>> >   tmp  onn_node1-g8-h4 Vwi-aotz--
>>  1.00g pool005.04
>> >   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool005.86
>> >   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>> >   var_local_images onn_node1-g8-h4 Vwi-aotz--
>>  1.10t pool0089.72
>> >   var_log  onn_node1-g8-h4 Vwi-aotz--
>>  8.00g pool006.84
>> >   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>>  2.00g pool006.16
>> > [root@node6-g8-h4 ~]# vgs
>> >   VG  #PV #LV #SN Attr   VSize  VFree
>> >   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>> >
>> >
>> > 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> > 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//ovirt-
>> node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> > 2018-06-29 14:19:31,147 [INFO] (Main

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
Oliver, can you share the output from lvs ?
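
Plain `lvs` is fine; if you want the hidden pool volumes as well, something
like this shows the interesting columns:

# lvs -a -o lv_name,lv_attr,lv_size,pool_lv,origin,data_percent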

On Tue, Jul 3, 2018 at 12:06 AM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi Yuval,
>
> * Reinstallation failed because the LVs already exist:
>   ovirt-node-ng-4.2.4-0.20180626.0   onn_ovn-monster Vri-a-tz-k <252,38g pool00                                  0,85
>   ovirt-node-ng-4.2.4-0.20180626.0+1 onn_ovn-monster Vwi-a-tz-- <252,38g pool00 ovirt-node-ng-4.2.4-0.20180626.0 0,85
> See attachment imgbased.reinstall.log
>
> * I removed them and reinstalled again, without luck.
>
> I got KeyError: 
>
> See attachment imgbased.rereinstall.log
>
> Also a new problem with nodectl info
> [root@ovn-monster tmp]# nodectl info
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in <module>
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76,
> in info
> Info(self.imgbased, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 45, in
> __init__
> self._fetch_information()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in
> _fetch_information
> self._get_layout()
>   File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 66, in
> _get_layout
> layout = LayoutParser(self.app.imgbase.layout()).parse()
>   File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 155,
> in layout
> return self.naming.layout()
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 109,
> in layout
> tree = self.tree(lvs)
>   File "/usr/lib/python2.7/site-packages/imgbased/naming.py", line 224,
> in tree
> bases[img.base.nvr].layers.append(img)
> KeyError: 
>
>
>
>
>
>
> On 02.07.2018 at 22:22, Oliver Riesener <
> oliver.riese...@hs-bremen.de> wrote:
>
> Hi Yuval,
>
> Yes, you are right, there was an unused and deactivated var_crash LV.
>
> * I activated it and mounted it at /var/crash via /etc/fstab.
> * /var/crash was empty, and the LV already had an ext4 fs:
>   var_crash  onn_ovn-monster  Vwi-aotz--  10,00g  pool00  2,86
>
>
> * Now I will try to upgrade again:
>   * yum reinstall ovirt-node-ng-image-update.noarch
>
> BTW, no more imgbased.log files found.
>
> On 02.07.2018 at 20:57, Yuval Turgeman wrote:
>
> From your log:
>
> AssertionError: Path is already a volume: /var/crash
>
> Basically, it means that you already have an LV for /var/crash but it's
> not mounted for some reason, so either mount it (if the data is good) or
> remove it, and then reinstall the image-update rpm.  Before that, check
> that you don't have any other LVs in that same state - or you can post the
> output for lvs... btw, do you have any more imgbased.log files lying
> around?
>
> You can find more details about this here:
>
> https://access.redhat.com/documentation/en-us/red_hat_
> virtualization/4.1/html/upgrade_guide/recovering_from_
> failed_nist-800_upgrade
>
> On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener  bremen.de> wrote:
>
>> Hi,
>>
>> i attached my /tmp/imgbased.log
>>
>> Sheers
>>
>> Oliver
>>
>>
>>
>> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>>
>> Looks like the upgrade script failed - can you please attach
>> /var/log/imgbased.log or /tmp/imgbased.log ?
>>
>> Thanks,
>> Yuval.
>>
>> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
>> wrote:
>>
>>> Yuval, can you please have a look?
>>>
>>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener 
>>> :
>>>
>>>> Yes, it's the same here.
>>>>
>>>> It seems the bootloader isn't configured right?
>>>>
>>>> I did the Upgrade and reboot to 4.2.4 from UI and got:
>>>>
>>>> [root@ovn-monster ~]# nodectl info
>>>> layers:
>>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>>> ovirt-node-ng-4.2.4-0.20180626.0+1

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-03 Thread Yuval Turgeman
7;,
> '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o',
> 'lv_full_name'],) {'close_fds': True, 'stderr':  mode 'w' at 0x7f56b787eed0>}
> > 2018-06-29 14:19:31,381 [DEBUG] (MainThread) Returned:
> onn_node1-g8-h4/pool00
> > 2018-06-29 14:19:31,381 [DEBUG] (MainThread) Pool: <Pool 'onn_node1-g8-h4/pool00' />
> > 2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling binary:
> (['lvcreate', '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],) {}
> > 2018-06-29 14:19:31,382 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
> >
> > 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.153do'],) {}
> > 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.153do'],) {}
> > 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.1OhaU'],) {}
> > 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
> > 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.1OhaU'],) {}
> > 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> > 2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
> > Traceback (most recent call last):
> >   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> > "__main__", fname, loader, pkg_name)
> >   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> > exec code in run_globals
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py",
> line 53, in <module>
> > CliApplication()
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py",
> line 82, in CliApplication
> > app.hooks.emit("post-arg-parse", args)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py",
> line 120, in emit
> > cb(self.context, *args)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
> line 56, in post_argparse
> > base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
> line 118, in extract
> > "%s" % size, nvr)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
> line 84, in add_base_with_tree
> > lvs)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py",
> line 310, in add_base
> > new_base_lv = pool.create_thinvol(new_base.lv_name, size)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py",
> line 324, in create_thinvol
> > self.lvm_name])
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py",
> line 390, in lvcreate
> > return self.call(["lvcreate"] + args, **kwargs)
> >   File 
> > "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py",
> line 378, in call

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Btw, removing /var/crash was directed to Oliver - you have different
problems


On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen  wrote:

> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.ZYOjC'],) {}
>
>
> Thanks
>
> Matt
>
>
>
>
>
> On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
>
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Are you mounted with discard ? perhaps fstrim ?
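
For example (the usual checks, adjust the mount point as needed):

# findmnt -no OPTIONS /      (look for "discard" in the mount options)
# fstrim -av                 (trims all mounted filesystems that support it)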

On Mon, Jul 2, 2018 at 10:23 PM, Matt Simonsen  wrote:

> Yes, it shows 8g on the VG
>
> I removed the LV for /var/crash, then installed again, and it is still
> failing on the step:
>
>
> 2018-07-02 12:21:10,015 [DEBUG] (MainThread) Calling: (['lvcreate',
> '--thin', '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00'],)
> {'close_fds': True, 'stderr': -2}
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.ZYOjC'],) {}
>
>
> Thanks
>
> Matt
>
>
>
>
>
> On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
>
> Not in front of my laptop so it's a little hard to read but does it say 8g
> free on the vg ?
>
> On Mon, Jul 2, 2018, 20:00 Matt Simonsen  wrote:
>
>> This error adds some clarity.
>>
>> That said, I'm a bit unsure how the space can be the issue given I have
>> several hundred GB of storage in the thin pool that's unused...
>>
>> How do you suggest I proceed?
>>
>> Thank you for your help,
>>
>> Matt
>>
>>
>> [root@node6-g8-h4 ~]# lvs
>>
>>   LV   VG  Attr
>> LSize   Pool   Origin Data%  Meta%  Move Log
>> Cpy%Sync Convert
>>   home onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 4.79
>>   ovirt-node-ng-4.2.2-0.20180423.0 onn_node1-g8-h4 Vwi---tz-k
>> <50.06g pool00 root
>>
>>   ovirt-node-ng-4.2.2-0.20180423.0+1   onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00 ovirt-node-ng-4.2.2-0.20180423.0
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0   onn_node1-g8-h4 Vri---tz-k
>> <50.06g pool00
>>
>>   ovirt-node-ng-4.2.3.1-0.20180530.0+1 onn_node1-g8-h4 Vwi-aotz--
>> <50.06g pool00 ovirt-node-ng-4.2.3.1-0.20180530.0
>> 6.95
>>   pool00   onn_node1-g8-h4 twi-aotz--
>> <1.30t   76.63
>> 50.34
>>   root onn_node1-g8-h4 Vwi---tz--
>> <50.06g pool00
>>
>>   tmp  onn_node1-g8-h4 Vwi-aotz--
>> 1.00g pool00
>> 5.04
>>   var  onn_node1-g8-h4 Vwi-aotz--
>> 15.00g pool00
>> 5.86
>>   var_crashonn_node1-g8-h4 Vwi---tz--
>> 10.00g pool00
>>
>>   var_local_images onn_node1-g8-h4 Vwi-aotz--
>> 1.10t pool00
>> 89.72
>>   var_log  onn_node1-g8-h4 Vwi-aotz--
>> 8.00g pool00
>> 6.84
>>   var_log_auditonn_node1-g8-h4 Vwi-aotz--
>> 2.00g pool00
>> 6.16
>> [root@node6-g8-h4 ~]# vgs
>>   VG  #PV #LV #SN Attr   VSize  VFree
>>   onn_node1-g8-h4   1  13   0 wz--n- <1.31t 8.00g
>>
>>
>> 2018-06-29 14:19:31,142 [DEBUG] (MainThread) Version: imgbased-1.0.20
>> 2018-06-29 14:19:31,147 [DEBUG] (MainThread) Arguments:
>> Namespace(FILENAME='/usr/share/ovirt-node-ng/image//
>> ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img', command='update',
>> debug=True, experimental=False, format='liveimg', stream='Image')
>> 2018-06-29 14:19:31,147 [INFO] (MainThread) Extracting image
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.
>> 20180626.0.el7.squashfs.img'
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling binary: (['mktemp',
>> '-d', '--tmpdir', 'mnt.X'],) {}
>> 2018-06-29 14:19:31,148 [DEBUG] (MainThread) Calling: (['mktemp', '-d',
>> '--tmpdir', 'mnt.X'],) {'close_fds': True, 'stderr': -2}
>> 2018-06-29 14:19:31,150 [DEBUG] (MainThread) Returned: /tmp/mnt.1OhaU
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling binary: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}
>> 2018-06-29 14:19:31,151 [DEBUG] (MainThread) Calling: (['mount',
>> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.2.0-0.20180626.0.el7.squashfs.img',
>> u'/tmp/mnt.1OhaU'],) {}

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
From your log:

AssertionError: Path is already a volume: /var/crash

Basically, it means that you already have an LV for /var/crash but it's not
mounted for some reason, so either mount it (if the data is good) or remove
it, and then reinstall the image-update rpm.  Before that, check that you
don't have any other LVs in that same state - or you can post the output for
lvs... btw, do you have any more imgbased.log files lying around?

You can find more details about this here:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/upgrade_guide/recovering_from_failed_nist-800_upgrade
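
A minimal sketch of the two options (the VG name varies per host, e.g. onn
or onn_ovn-monster, so adjust accordingly):

# mount /dev/onn/var_crash /var/crash    (if the data is good; add it to /etc/fstab too)
# lvremove /dev/onn/var_crash            (alternatively, if the data can go)
# yum reinstall ovirt-node-ng-image-update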

On Mon, Jul 2, 2018 at 8:12 PM, Oliver Riesener <
oliver.riese...@hs-bremen.de> wrote:

> Hi,
>
> I attached my /tmp/imgbased.log
>
> Cheers
>
> Oliver
>
>
>
> On 02.07.2018 at 13:58, Yuval Turgeman wrote:
>
> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log?
>
> Thanks,
> Yuval.
>
> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
> wrote:
>
>> Yuval, can you please have a look?
>>
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>>
>>> Yes, it's the same here.
>>>
>>> It seems the bootloader isn't configured right?
>>>
>>> I did the upgrade and reboot to 4.2.4 from the UI and got:
>>>
>>> [root@ovn-monster ~]# nodectl info
>>> layers:
>>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>>     ovirt-node-ng-4.2.4-0.20180626.0+1
>>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>>     ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>>     ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> bootloader:
>>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>>   entries:
>>>     ovirt-node-ng-4.2.3-0.20180524.0+1:
>>>       index: 0
>>>       title: ovirt-node-ng-4.2.3-0.20180524.0
>>>       kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>>>       initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>>       index: 1
>>>       title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>>       kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>>       initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>>> [root@ovn-monster ~]# uptime
>>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>>
>>> On 29.06.2018 at 23:53, Matt Simonsen wrote:
>>>
>>> Hello,
>>>
>>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>>> platform and it doesn't appear the updates worked.
>>>
>>>
>>> [root@node6-g8-h4 ~]# yum update
>>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>>   : package_upload, product-id, search-disabled-repos,
>>> subscription-
>>>   : manager
>>> This system is not registered with an entitlement server. You can use
>>> subscription-manager to register.
>>> Loading mirror speeds from cached hostfile
>>>  * ovirt-4.2-epel: linux.mirrors.es.net
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>>> updated
>>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>>> obsoleting
>>

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
> {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Exception!   Cannot create
> new thin volume, free space in thin pool onn_node1-g8-h4/pool00 reached
> threshold.
>
> 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.153do'],) {}
> 2018-06-29 14:19:31,406 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.153do'],) {}
> 2018-06-29 14:19:31,422 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.153do'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,425 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Returned:
> 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.1OhaU'],) {}
> 2018-06-29 14:19:31,437 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.1OhaU'],) {'close_fds': True, 'stderr': -2}
> 2018-06-29 14:19:31,440 [DEBUG] (MainThread) Returned:
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__main__.py",
> line 53, in <module>
> CliApplication()
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/__init__.py",
> line 82, in CliApplication
> app.hooks.emit("post-arg-parse", args)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/hooks.py",
> line 120, in emit
> cb(self.context, *args)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
> line 56, in post_argparse
> base_lv, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
> line 118, in extract
> "%s" % size, nvr)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/plugins/update.py",
> line 84, in add_base_with_tree
> lvs)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/imgbase.py",
> line 310, in add_base
> new_base_lv = pool.create_thinvol(new_base.lv_name, size)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/lvm.py", line
> 324, in create_thinvol
> self.lvm_name])
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py",
> line 390, in lvcreate
> return self.call(["lvcreate"] + args, **kwargs)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py",
> line 378, in call
> stdout = call(*args, **kwargs)
>   File
> "/tmp/tmp.mzQBYouvWT/usr/lib/python2.7/site-packages/imgbased/utils.py",
> line 153, in call
> return subprocess.check_output(*args, **kwargs).strip()
>   File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
> raise CalledProcessError(retcode, cmd, output=output)
> subprocess.CalledProcessError: Command '['lvcreate', '--thin',
> '--virtualsize', u'53750005760B', '--name',
> 'ovirt-node-ng-4.2.4-0.20180626.0', u'onn_node1-g8-h4/pool00']' returned
> non-zero exit status 5
>
>
>
>
>
> On 07/02/2018 04:58 AM, Yuval Turgeman wrote:
>
> Looks like the upgrade script failed - can you please attach
> /var/log/imgbased.log or /tmp/imgbased.log?
>
> Thanks,
> Yuval.
>
> On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
> wrote:
>
>> Yuval, can you please have a look?
>>
>> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>>
>>> Yes, it's the same here.
>>>
>>> It seems the bootloader isn't configured right?
>>>
>>> I did the upgrade and reboo

[ovirt-users] Re: oVirt 4.2.3 to 4.2.4 failed

2018-07-02 Thread Yuval Turgeman
Looks like the upgrade script failed - can you please attach
/var/log/imgbased.log or /tmp/imgbased.log?

Thanks,
Yuval.

On Mon, Jul 2, 2018 at 2:54 PM, Sandro Bonazzola 
wrote:

> Yuval, can you please have a look?
>
> 2018-06-30 7:48 GMT+02:00 Oliver Riesener :
>
>> Yes, it's the same here.
>>
>> It seems the bootloader isn't configured right?
>>
>> I did the upgrade and reboot to 4.2.4 from the UI and got:
>>
>> [root@ovn-monster ~]# nodectl info
>> layers:
>>   ovirt-node-ng-4.2.4-0.20180626.0:
>>     ovirt-node-ng-4.2.4-0.20180626.0+1
>>   ovirt-node-ng-4.2.3.1-0.20180530.0:
>>     ovirt-node-ng-4.2.3.1-0.20180530.0+1
>>   ovirt-node-ng-4.2.3-0.20180524.0:
>>     ovirt-node-ng-4.2.3-0.20180524.0+1
>>   ovirt-node-ng-4.2.1.1-0.20180223.0:
>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> bootloader:
>>   default: ovirt-node-ng-4.2.3-0.20180524.0+1
>>   entries:
>>     ovirt-node-ng-4.2.3-0.20180524.0+1:
>>       index: 0
>>       title: ovirt-node-ng-4.2.3-0.20180524.0
>>       kernel: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/vmlinuz-3.10.0-862.3.2.el7.x86_64
>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.3-0.20180524.0+1"
>>       initrd: /boot/ovirt-node-ng-4.2.3-0.20180524.0+1/initramfs-3.10.0-862.3.2.el7.x86_64.img
>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.3-0.20180524.0+1
>>     ovirt-node-ng-4.2.1.1-0.20180223.0+1:
>>       index: 1
>>       title: ovirt-node-ng-4.2.1.1-0.20180223.0
>>       kernel: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/vmlinuz-3.10.0-693.17.1.el7.x86_64
>>       args: "ro crashkernel=auto rd.lvm.lv=onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1 rd.lvm.lv=onn_ovn-monster/swap rd.md.uuid=c6c3013b:027a9346:67dfd181:89635587 rhgb quiet LANG=de_DE.UTF-8 img.bootid=ovirt-node-ng-4.2.1.1-0.20180223.0+1"
>>       initrd: /boot/ovirt-node-ng-4.2.1.1-0.20180223.0+1/initramfs-3.10.0-693.17.1.el7.x86_64.img
>>       root: /dev/onn_ovn-monster/ovirt-node-ng-4.2.1.1-0.20180223.0+1
>> current_layer: ovirt-node-ng-4.2.3-0.20180524.0+1
>> [root@ovn-monster ~]# uptime
>>  07:35:27 up 2 days, 15:42,  1 user,  load average: 1,07, 1,00, 0,95
>>
>> On 29.06.2018 at 23:53, Matt Simonsen wrote:
>>
>> Hello,
>>
>> I did yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt node
>> platform and it doesn't appear the updates worked.
>>
>>
>> [root@node6-g8-h4 ~]# yum update
>> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
>>   : package_upload, product-id, search-disabled-repos,
>> subscription-
>>   : manager
>> This system is not registered with an entitlement server. You can use
>> subscription-manager to register.
>> Loading mirror speeds from cached hostfile
>>  * ovirt-4.2-epel: linux.mirrors.es.net
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.3.1-1.el7 will be
>> updated
>> ---> Package ovirt-node-ng-image-update.noarch 0:4.2.4-1.el7 will be
>> obsoleting
>> ---> Package ovirt-node-ng-image-update-placeholder.noarch
>> 0:4.2.3.1-1.el7 will be obsoleted
>> --> Finished Dependency Resolution
>>
>> Dependencies Resolved
>>
>> ================================================================================
>>  Package                     Arch    Version      Repository  Size
>> ================================================================================
>> Installing:
>>  ovirt-node-ng-image-update  noarch  4.2.4-1.el7  ovirt-4.2   647 M
>>      replacing  ovirt-node-ng-image-update-placeholder.noarch 4.2.3.1-1.el7
>>
>> Transaction Summary
>> ================================================================================
>> Install  1 Package
>>
>> Total download size: 647 M
>> Is this ok [y/d/N]: y
>> Downloading packages:
>> warning: /var/cache/yum/x86_64/7/ovirt-4.2/packages/ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID fe590cb7: NOKEY
>> Public key for ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm is not
>> installed
>> ovirt-node-ng-image-update-4.2.4-1.el7.noarch.rpm | 647 MB  00:02:07
>> Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>> Importing GPG key 0xFE590CB7:
>>  Userid : "oVirt "
>>  Fingerprint: 31a5 d783 7fad 7cb2 86cd 3469 ab8c 4f9d fe59 0cb7
>>  Package: ovirt-release42-4.2.3.1-1.el7.noarch (installed)
>>  From   : /etc/pki/rpm-gpg/RPM-GPG-ovirt-4.2
>> Is this ok [y/N]: y
>> Running transaction check
>> Running transaction test
>> Transaction test succeeded
>> Running transaction
>>   Installing : ovirt-node-ng-image-u

Re: [ovirt-users] oVirt Node 4.1.8 -> 4.2 upgrade

2018-01-18 Thread Yuval Turgeman
Hi Luca,

We updated the FAQ [1] with a small script to help upgrade between major
releases.

[1] https://www.ovirt.org/node/faq/
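
Roughly, the script automates a flow like this one (illustrative only - the
release rpm URL assumes a 4.1-to-4.2 jump, and the FAQ is the authoritative
source):

# yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
# yum update ovirt-node-ng-image-update
# reboot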

On Mon, Jan 15, 2018 at 10:21 AM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

>
>
> On Thu, Jan 11, 2018 at 2:31 PM, Ryan Barry  wrote:
>
>> Note that, in the case of Node, the ovirt-release RPM should not be
>> installed. Mostly because it automatically enables a number of per-package
>> updates (such as vdsm) instead of installing a single image.
>>
>> Trimming the repo files so they look like what is shipped in Node
>> (IgnorePkgs and OnlyPkgs) will pull ovirt-node-ng-image-update.rpm,
>> which is the only package needing an update.
>>
>>
> Hi,
>
> I'm upgrading ovirt-node too. I found out that installing the ovirt-release
> file and running yum upgrade wants to upgrade everything. Instead, if I
> run
>
> yum upgrade ovirt-node-ng-image*
>
> I get the new image downloaded.
>
> So which is the right procedure to upgrade a node? Maybe providing a
> single-command alternative to yum (let's say ovirt-node-upgrade) for
> upgrading would be better and would avoid issues when doing such an operation.
>
> Luca
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
>
> "The Internet is the greatest library in the world.
> But the problem is that the books are all scattered on the floor"
> John Allen Paulos, Mathematician (1945-living)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt NGN image customization troubles

2018-01-12 Thread Yuval Turgeman
Recent versions of livemedia-creator use qemu directly instead of libvirt,
and I think I saw a problem there also, but didn't get to fix it just yet.
You can use virt-builder to install a centos vm, or use an el7-based mock
environment.  You can follow the jenkins job here [1]; basically what you need
to do is:

1. clone ovirt-node-ng (you already did)
2. clone jenkins
3. cd ovirt-node-ng
4. ../jenkins/mock_configs/mock_runner.sh --build-only --mock-confs-dir
../jenkins/mock_configs/ --shell 'el7.*x86_64'

This will create a chroot environment in the same way we do in our CI.  In
that chroot shell, do `cd ~` and then autogen, and make squashfs - the full
sequence is collected below.

[1]
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/403/consoleFull
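
Put together, the whole flow looks roughly like this (the jenkins repo URL is
an assumption - adjust to wherever you clone it from):

# git clone https://gerrit.ovirt.org/ovirt-node-ng
# git clone https://gerrit.ovirt.org/jenkins
# cd ovirt-node-ng
# ../jenkins/mock_configs/mock_runner.sh --build-only --mock-confs-dir ../jenkins/mock_configs/ --shell 'el7.*x86_64'
# ... and inside the chroot shell:
# cd ~ && ./autogen.sh && make squashfs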


On Thu, Jan 11, 2018 at 3:32 PM, Ryan Barry  wrote:

> I haven't tried to build on EL for a long time, but the easiest way to
> modify may simply be to unpack the squashfs, chroot inside of it, and
> repack it. Have you tried this?
>
> On Wed, Dec 27, 2017 at 11:33 AM, Giuseppe Ragusa <
> giuseppe.rag...@hotmail.com> wrote:
>
>> Hi all,
>>
>> I'm trying to modify the oVirt NGN image (to add RPMs, since imgbased
>> rpmpersistence currently seems to have a bug:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1528468 ) but I'm
>> unfortunately stuck at the very beginning: it seems that I'm unable to
>> recreate even the standard 4.1 squashfs image.
>>
>> I'm following the instructions at
>> https://gerrit.ovirt.org/gitweb?p=ovirt-node-ng.git;a=blob;f=README
>>
>> I'm working inside a CentOS7 fully-updated vm (hosted inside VMware, with
>> nested virtualization enabled).
>>
>> I'm trying to work on the 4.1 branch, so I issued a:
>>
>> ./autogen.sh --with-ovirt-release-rpm-url=http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>>
>> And after that I'm stuck in the "make squashfs" step: it never ends
>> (keeps printing dots forever with no errors/warnings in log messages nor
>> any apparent activity on the virtual disk image).
>>
>> Invoking it in debug mode and connecting to the VNC console shows the
>> detailed Plymouth startup listing stuck (latest messages displayed:
>> "Starting udev Wait for Complete Device Initialization..." and "Starting
>> Device-Mapper Multipath Device Controller...")
>>
>> I wonder if it's actually supposed to be run only from a recent Fedora
>> (the "dnf" reference seems a good indicator): if so, which version?
>>
>> I kindly ask for advice: has anyone succeeded in modifying/reproducing
>> NGN squash images recently? If so, how? :-)
>>
>> Many thanks in advance,
>>
>> Giuseppe
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
>
> RYAN BARRY
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHEV HYPERVISOR
>
> Red Hat NA 
>
> rba...@redhat.comM: +1-651-815-9306 IM: rbarry
> 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to simulate update of ng node from 4.1.7 to 4.1.8

2018-01-08 Thread Yuval Turgeman
Install using the iso as you would, and if you can, try to grab the image
update rpm for 4.1.8 [1], and just install it with `rpm -Uhv`.
If you'd rather use the iso, you will need to mount it, and install the
ovirt-node-ng squashfs in the same manner that the image-update rpm
installs it (rpm -qp --scripts path-to-image-update.rpm).

[1]
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/ovirt-node-ng-image-update-4.1.8-1.el7.centos.noarch.rpm
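
In practice that boils down to something like the following (a sketch - the rpm
is the one from [1], and the mount point is arbitrary):

# rpm -qp --scripts ovirt-node-ng-image-update-4.1.8-1.el7.centos.noarch.rpm   # inspect what the update does
# rpm -Uhv ovirt-node-ng-image-update-4.1.8-1.el7.centos.noarch.rpm
# ... or, with only the iso at hand:
# mount -o loop ovirt-node-ng-installer-ovirt-4.1-2017121114.iso /mnt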


On Sun, Jan 7, 2018 at 7:57 PM, Gianluca Cecchi 
wrote:

> Suppose I download both the isos:
>
> the fixed one for 4.1.7 here:
> http://resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-node-ng-installer-ovirt/4.1-2017110820/ovirt-node-ng-installer-ovirt-4.1-2017110820.iso
>
> and the 4.1.8 here:
> http://resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-node-ng-installer-ovirt/4.1-2017121114/ovirt-node-ng-installer-ovirt-4.1-2017121114.iso
>
> and I want to simulate an install of 4.1.7 from the iso and then an update of
> the ng node to 4.1.8 without internet access, but using in some way the
> 4.1.8 iso (or something derived from it) on my local lan - how can I do that?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.2 and host console

2017-12-27 Thread Yuval Turgeman
Hi

As you can see from the systemctl status, the difference is caused by
the preset rule for cockpit.socket, which is enabled on the node-ng based
system and disabled on your regular centos based system.
If you take a look at the cockpit postinstall script, you'd see something
like:

systemctl --no-reload preset cockpit.socket...

This sets the cockpit.socket according to its preset rule.
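
To inspect and re-apply the preset on a given host, something like this should
do (a sketch):

# systemctl is-enabled cockpit.socket
# grep -rs cockpit /usr/lib/systemd/system-preset/ /etc/systemd/system-preset/
# systemctl preset cockpit.socket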

Thanks,
Yuval.




On Tue, Dec 26, 2017 at 3:35 PM, Gianluca Cecchi 
wrote:

> On Sat, Dec 23, 2017 at 11:18 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Fri, Dec 22, 2017 at 10:22 PM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On 22 Dec 2017 10:12 PM, "Yaniv Kaul" wrote:
>>>
>>>
>>>
>>> On Dec 22, 2017 7:33 PM, "Gianluca Cecchi" 
>>> wrote:
>>>
>>> Hello, after upgrading the engine and then a plain CentOS 7.4 host from 4.1
>>> to 4.2, I see in the host section that if I select the host's row, right
>>> click and choose Host Console, it tries to go to the typical 9090 cockpit
>>> port of node-ng...
>>> Is this an error, or in 4.2 is access to the host console available for
>>> plain OS nodes too?
>>> In that case, is there any service I have to enable on the host?
>>> It seems my host is indeed not currently listening on port 9090.
>>>
>>>
>>> Cockpit + firewall settings need to be enabled to get to it.
>>>
>>>
>>> The cockpit service should be up and running after the upgrade. oVirt host
>>> deploy takes care of it. The firewall is configured by the engine unless you
>>> disabled firewall config in the host configuration dialog.
>>>
>>> Didi, can you help here? Gianluca, can you share host upgrade logs?
>>>
>>>
>>>
>> Hello,
>> this is a plain CentOS, not ovirt-node-ng one, that was upgraded from
>> 4.1.7 to 4.2
>> So the upgrade path has been to put the host into maintenance, yum
>> update, reboot.
>>
>> Indeed cockpit has been installed as part of the yum update part:
>>
>> Dec 22 10:40:07 Installed: cockpit-bridge-155-1.el7.centos.x86_64
>> Dec 22 10:40:07 Installed: cockpit-system-155-1.el7.centos.noarch
>> Dec 22 10:40:40 Installed: cockpit-networkmanager-155-1.el7.centos.noarch
>> Dec 22 10:40:42 Installed: cockpit-ws-155-1.el7.centos.x86_64
>> Dec 22 10:40:43 Installed: cockpit-155-1.el7.centos.x86_64
>> Dec 22 10:40:52 Installed: cockpit-storaged-155-1.el7.centos.noarch
>> Dec 22 10:40:57 Installed: cockpit-dashboard-155-1.el7.centos.x86_64
>> Dec 22 10:41:35 Installed: cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch
>>
>> I also see that there is a systemd cockpit.service unit that is
>> configured as static and requires a cockpit.socket unit, which in turn is
>> WantedBy sockets.target.
>>
>> But if I run
>>
>> [root@ovirt01 ~]# remotectl certificate
>> remotectl: No certificate found in dir: /etc/cockpit/ws-certs.d
>> [root@ovirt01 ~]#
>>
>> So it seems that the cockpit.service ExecStartPre has never been
>> run...
>> ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root
>> --group=cockpit-ws --selinux-type=etc_t
>>
>> [root@ovirt01 ~]# systemctl status cockpit.service
>> ● cockpit.service - Cockpit Web Service
>>Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static;
>> vendor preset: disabled)
>>Active: inactive (dead)
>>  Docs: man:cockpit-ws(8)
>> [root@ovirt01 ~]#
>>
>> Gianluca
>>
>>
>>
> In the end I compared my CentOS 7.4 oVirt 4.2 configuration with the
> configuration of a RHEV 4.1 RHV-H environment, and there I see this kind of
> config:
>
> [root@rhevora1 ~]# systemctl status cockpit.socket
> ● cockpit.socket - Cockpit Web Service Socket
>Loaded: loaded (/usr/lib/systemd/system/cockpit.socket; enabled;
> vendor preset: enabled)
>Active: active (listening) since Fri 2017-10-27 17:02:30 CEST; 1 months
> 29 days ago
>  Docs: man:cockpit-ws(8)
>Listen: [::]:9090 (Stream)
>
> Oct 27 17:02:30 rhevora1.mydomain systemd[1]: Listening on Cockpit Web
> Service Socket.
> Oct 27 17:02:30 rhevora1.mydomain systemd[1]: Starting Cockpit Web Service
> Socket.
> [root@rhevora1 ~]#
>
> So the basic difference is that on RHV-H the cockpit.socket
> systemd unit is enabled by default (cockpit-ws-148-1.el7.x86_64), while on
> this oVirt host on plain CentOS, just updated to 4.2, we have
>
> [root@ovirt01 etc]# systemctl status cockpit.socket
> ● cockpit.socket - Cockpit Web Service Socket
>Loaded: loaded (/usr/lib/systemd/system/cockpit.socket; disabled;
> vendor preset: disabled)
>Active: inactive (dead)
>  Docs: man:cockpit-ws(8)
>Listen: 0.0.0.0:9090 (Stream)
> [root@ovirt01 etc]#
>
> On oVirt 4.2 the version is currently cockpit-ws-155-1.el7.centos.x86_64
>
> What I've done is:
> systemctl enable cockpit.socket
> systemctl start cockpit.socket
>
> and now I have the ovirt01 host correctly listening on port 9090, and I'm
> able to connect via the web and enjoy...
>
> Let me know if this is some sort of regression of the cockpit-ws package or
> what...
>
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org

Re: [ovirt-users] Failed to open grubx64.efi

2017-12-12 Thread Yuval Turgeman
Glad to hear it, thanks for the update! :)

On Tue, Dec 12, 2017 at 1:20 PM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> On Thu, Nov 23, 2017 at 10:41 AM, Luca 'remix_tj' Lorenzetto
>  wrote:
> >
> > On Wed, Nov 22, 2017 at 11:05 AM, Yuval Turgeman 
> wrote:
> > [cut]
> > > You can access boot_params under /sys/kernel/boot_params/data, so if
> you
> > > want
> > > to understand what's going on with your machine, you can try to use
> strings
> > > to
> > > see if it's enabled while running anaconda - on my test machine it
> looks
> > > like this:
> > >
> > > [anaconda root@localhost ~]# strings /sys/kernel/boot_params/data
> > > EL64
> > > fHdrS
> > >
> >
> > I confirm, if I boot via CD I see that value. So now I'll try to do the
> > same via PXE and run a setup using UEFI.
> >
> > The only "issue" I've seen is that after the bootloader menu, the screen
> > stays on a blinking cursor for several seconds. It seems to hang, but
> > if you have enough patience to wait, the boot continues.
> >
> > I'll measure the timing and report back - is this an issue that other
> > people are experiencing too?
> >
>
>
> Just to close the discussion and update anyone who finds
> this thread while looking for info about UEFI, oVirt Node and PXE:
>
> I managed to complete a full PXE setup of several oVirt nodes with
> release 4.1.6 without issues. Nodes wait some time at a blank screen
> with a blinking cursor, but then the boot completes.
>
> Luca
>
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
>
> "The Internet is the greatest library in the world.
> But the problem is that the books are all scattered on the floor"
> John Allen Paulos, Mathematician (1945-living)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node ng upgrade failed

2017-12-01 Thread Yuval Turgeman
Great, thanks! We already have a patch in POST here:
https://gerrit.ovirt.org/#/c/84957/

Thanks,
Yuval

On Dec 1, 2017 15:04, "Kilian Ries"  wrote:

Bug is opened:


https://bugzilla.redhat.com/show_bug.cgi?id=1519784



Ok, i try to fix my host next week. Thanks for your help ;)
--
*From:* Yuval Turgeman
*Sent:* Thursday, 30 November 2017 09:22:39

*To:* Kilian Ries
*Cc:* users
*Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed

Looks like it, yes - we try to add setfiles_t to permissive, because we
assume selinux is on, and if it's disabled, semanage fails with the error
you mentioned.  Can you open a bug on this?

If you would like to fix the system, you will need to clean the unused LVs,
remove the relevant boot entries from grub (if they exist) and
/boot/ovirt-node-ng-4.1.7-0.20171108.0+1 (if it exists), then reinstall the
rpm.
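
A rough sketch of that cleanup (the LV names below are hypothetical - take the
real ones from lvs, and be careful with lvremove):

# lvs | grep 4.1.7                    # find the leftover layer LVs
# lvremove onn/ovirt-node-ng-4.1.7-0.20171108.0+1 onn/ovirt-node-ng-4.1.7-0.20171108.0
# rm -rf /boot/ovirt-node-ng-4.1.7-0.20171108.0+1
# ... drop any matching menuentry from the grub config, then:
# yum reinstall ovirt-node-ng-image-update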


On Thu, Nov 30, 2017 at 10:16 AM, Kilian Ries  wrote:

> Yes, selinux is disabled via /etc/selinux/config; Is that the problem? :/
> ------
> *From:* Yuval Turgeman
> *Sent:* Thursday, 30 November 2017 09:13:34
> *To:* Kilian Ries
> *Cc:* users
>
> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>
> Kilian, did you disable selinux by any chance ? (selinux=0 on boot) ?
>
> On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman  wrote:
>
>> Looks like selinux is broken on your machine for some reason, can you
>> share /etc/selinux ?
>>
>> Thanks,
>> Yuval.
>>
>> On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:
>>
>>> @Yuval Turgeman
>>>
>>>
>>> ###
>>>
>>>
>>> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>>>
>>> SELinux:  Could not downgrade policy file 
>>> /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux:  Could not open policy file <= 
>>> /etc/selinux/targeted/policy/policy.30:
>>>  No such file or directory
>>>
>>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> SELinux:  Could not downgrade policy file 
>>> /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux:  Could not open policy file <= 
>>> /etc/selinux/targeted/policy/policy.30:
>>>  No such file or directory
>>>
>>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> OSError: No such file or directory
>>>
>>>
>>> ###
>>>
>>>
>>> @Ryan Barry
>>>
>>>
>>> Manual yum upgrade finished without any error but imgbased.log still
>>> shows me the following:
>>>
>>>
>>> ###
>>>
>>>
>>> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as
>>> {'attach': True, 'size': '1G'}
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
>>> }
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
>>> True, 'stderr': }
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>>>
>>>   onn/tmp
>>>
>>>   onn/var_log
>>>
>>>   onn/var_log_audit
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', '/etc'],) {}
>>>
>>> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> '/etc'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG]

Re: [ovirt-users] ovirt-node-ng-update

2017-11-30 Thread Yuval Turgeman
2 more questions -

1.  Which ovirt repos are enabled on your node?
2.  Can you share the output from `rpm -qa | grep ovirt-node-ng`?
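
For example (run on the node - these two should answer both questions):

# yum repolist enabled | grep -i ovirt
# rpm -qa | grep ovirt-node-ng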

Thanks,
Yuval.

On Thu, Nov 30, 2017 at 11:02 AM, Nathanaël Blanchet 
wrote:

>
>
> On 30/11/2017 at 08:58, Yuval Turgeman wrote:
>
> Hi,
>
> Which version are you using?
>
> 4.1.7
>
>
> Thanks,
> Yuval.
>
> On Wed, Nov 29, 2017 at 4:17 PM, Nathanaël Blanchet 
> wrote:
>
>> Hi all,
>>
>> I didn't find any explicit howto about upgrading ovirt-node, but I may
>> be mistaken...
>>
>> However, here is what I see: after installing a fresh ovirt-node-ng
>> iso, the engine's upgrade check finds an available update,
>> "ovirt-node-ng-image-update".
>>
>> But the available update is the same as the current one.  If I choose to
>> install it, it succeeds, but after rebooting, ovirt-node-ng-image-update is
>> still not part of the installed rpms, so the engine tells me an update of
>> ovirt-node is still available.
>>
>> --
>> Nathanaël Blanchet
>>
>> Network Supervision
>> IT Infrastructure Department
>> 227 avenue Professeur-Jean-Louis-Viala
>> 34193 MONTPELLIER CEDEX 5
>> Tel. 33 (0)4 67 54 84 55
>> Fax  33 (0)4 67 54 84 14
>> blanc...@abes.fr
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> --
> Nathanaël Blanchet
>
> Network Supervision
> IT Infrastructure Department
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tel. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-30 Thread Yuval Turgeman
Looks like it, yes - we try to add setfiles_t to permissive, because we
assume selinux is on, and if it's disabled, semanage fails with the error
you mentioned.  Can you open a bug on this?

If you would like to fix the system, you will need to clean the unused LVs,
remove the relevant boot entries from grub (if they exist) and
/boot/ovirt-node-ng-4.1.7-0.20171108.0+1 (if it exists), then reinstall the
rpm.


On Thu, Nov 30, 2017 at 10:16 AM, Kilian Ries  wrote:

> Yes, selinux is disabled via /etc/selinux/config; Is that the problem? :/
> --
> *From:* Yuval Turgeman
> *Sent:* Thursday, 30 November 2017 09:13:34
> *To:* Kilian Ries
> *Cc:* users
>
> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>
> Kilian, did you disable selinux by any chance ? (selinux=0 on boot) ?
>
> On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman  wrote:
>
>> Looks like selinux is broken on your machine for some reason, can you
>> share /etc/selinux ?
>>
>> Thanks,
>> Yuval.
>>
>> On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:
>>
>>> @Yuval Turgeman
>>>
>>>
>>> ###
>>>
>>>
>>> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>>>
>>> SELinux:  Could not downgrade policy file 
>>> /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux:  Could not open policy file <= 
>>> /etc/selinux/targeted/policy/policy.30:
>>>  No such file or directory
>>>
>>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> SELinux:  Could not downgrade policy file 
>>> /etc/selinux/targeted/policy/policy.30,
>>> searching for an older version.
>>>
>>> SELinux:  Could not open policy file <= 
>>> /etc/selinux/targeted/policy/policy.30:
>>>  No such file or directory
>>>
>>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>>
>>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>>> (No such file or directory).
>>>
>>> OSError: No such file or directory
>>>
>>>
>>> ###
>>>
>>>
>>> @Ryan Barry
>>>
>>>
>>> Manual yum upgrade finished without any error but imgbased.log still
>>> shows me the following:
>>>
>>>
>>> ###
>>>
>>>
>>> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as
>>> {'attach': True, 'size': '1G'}
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
>>> }
>>>
>>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
>>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
>>> True, 'stderr': }
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>>>
>>>   onn/tmp
>>>
>>>   onn/var_log
>>>
>>>   onn/var_log_audit
>>>
>>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', '/etc'],) {}
>>>
>>> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> '/etc'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
>>> '-l', u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
>>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
>>> u'/tmp/mnt.tuHU8'],) {}
>>>
>>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
>>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}

Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-30 Thread Yuval Turgeman
Kilian, did you disable selinux by any chance ? (selinux=0 on boot) ?

On Thu, Nov 30, 2017 at 9:57 AM, Yuval Turgeman  wrote:

> Looks like selinux is broken on your machine for some reason, can you
> share /etc/selinux ?
>
> Thanks,
> Yuval.
>
> On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:
>
>> @Yuval Turgeman
>>
>>
>> ###
>>
>>
>> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>>
>> SELinux:  Could not downgrade policy file 
>> /etc/selinux/targeted/policy/policy.30,
>> searching for an older version.
>>
>> SELinux:  Could not open policy file <= 
>> /etc/selinux/targeted/policy/policy.30:
>>  No such file or directory
>>
>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>
>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>> (No such file or directory).
>>
>> SELinux:  Could not downgrade policy file 
>> /etc/selinux/targeted/policy/policy.30,
>> searching for an older version.
>>
>> SELinux:  Could not open policy file <= 
>> /etc/selinux/targeted/policy/policy.30:
>>  No such file or directory
>>
>> /sbin/load_policy:  Can't load policy:  No such file or directory
>>
>> libsemanage.semanage_reload_policy: load_policy returned error code 2.
>> (No such file or directory).
>>
>> OSError: No such file or directory
>>
>>
>> ###
>>
>>
>> @Ryan Barry
>>
>>
>> Manual yum upgrade finished without any error but imgbased.log still
>> shows me the following:
>>
>>
>> ###
>>
>>
>> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as {'attach':
>> True, 'size': '1G'}
>>
>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
>> }
>>
>> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
>> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
>> True, 'stderr': }
>>
>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>>
>>   onn/tmp
>>
>>   onn/var_log
>>
>>   onn/var_log_audit
>>
>> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', '/etc'],) {}
>>
>> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> '/etc'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/mnt.tuHU8'],) {}
>>
>> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
>> u'/tmp/mnt.tuHU8'],) {}
>>
>> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
>> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>>
>> 2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:
>>
>> 2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc
>>
>> Traceback (most recent call last):
>>
>>   File 
>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>> line 109, in on_new_layer
>>
>> check_nist_layout(imgbase, new_lv)
>>
>>   File 
>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/plugins/osupdater.py",
>> line 179, in check_nist_layout
>>
>> v.create(t, paths[t]["size"], paths[t]["attach"])
>>
>>   File 
>> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py",
>> line 48, in create
>>
>> "Path is already a volume: %s" % where
>>
>> AssertionError: Path is already a volume: /home
>>
>> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount',
>> '-l', u'/tmp/m

Re: [ovirt-users] ovirt-node-ng-update

2017-11-29 Thread Yuval Turgeman
Hi,

Which version are you using?

Thanks,
Yuval.

On Wed, Nov 29, 2017 at 4:17 PM, Nathanaël Blanchet 
wrote:

> Hi all,
>
> I didn't find any explicit howto about upgrading ovirt-node, but I may
> be mistaken...
>
> However, here is what I see: after installing a fresh ovirt-node-ng
> iso, the engine's upgrade check finds an available update,
> "ovirt-node-ng-image-update".
>
> But the available update is the same as the current one.  If I choose to
> install it, it succeeds, but after rebooting, ovirt-node-ng-image-update is
> still not part of the installed rpms, so the engine tells me an update of
> ovirt-node is still available.
>
> --
> Nathanaël Blanchet
>
> Network Supervision
> IT Infrastructure Department
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tel. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-29 Thread Yuval Turgeman
Looks like selinux is broken on your machine for some reason, can you share
/etc/selinux ?

Thanks,
Yuval.

On Tue, Nov 28, 2017 at 6:31 PM, Kilian Ries  wrote:

> @Yuval Turgeman
>
>
> ###
>
>
> [17:27:10][root@vm5:~]$semanage permissive -a setfiles_t
>
> SELinux:  Could not downgrade policy file 
> /etc/selinux/targeted/policy/policy.30,
> searching for an older version.
>
> SELinux:  Could not open policy file <= 
> /etc/selinux/targeted/policy/policy.30:
>  No such file or directory
>
> /sbin/load_policy:  Can't load policy:  No such file or directory
>
> libsemanage.semanage_reload_policy: load_policy returned error code 2.
> (No such file or directory).
>
> SELinux:  Could not downgrade policy file 
> /etc/selinux/targeted/policy/policy.30,
> searching for an older version.
>
> SELinux:  Could not open policy file <= 
> /etc/selinux/targeted/policy/policy.30:
>  No such file or directory
>
> /sbin/load_policy:  Can't load policy:  No such file or directory
>
> libsemanage.semanage_reload_policy: load_policy returned error code 2.
> (No such file or directory).
>
> OSError: No such file or directory
>
>
> ###
>
>
> @Ryan Barry
>
>
> Manual yum upgrade finished without any error but imgbased.log still shows
> me the following:
>
>
> ###
>
>
> 2017-11-28 17:25:28,372 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Creating /home as {'attach':
> True, 'size': '1G'}
>
> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling binary: (['vgs',
> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'stderr':
> }
>
> 2017-11-28 17:25:28,434 [DEBUG] (MainThread) Calling: (['vgs',
> '--noheadings', '@imgbased:volume', '-o', 'lv_full_name'],) {'close_fds':
> True, 'stderr': }
>
> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Returned: onn/home
>
>   onn/tmp
>
>   onn/var_log
>
>   onn/var_log_audit
>
> 2017-11-28 17:25:28,533 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', '/etc'],) {}
>
> 2017-11-28 17:25:28,534 [DEBUG] (MainThread) Calling: (['umount', '-l',
> '/etc'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:28,539 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.tuHU8'],) {}
>
> 2017-11-28 17:25:28,540 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.tuHU8'],) {}
>
> 2017-11-28 17:25:28,635 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.tuHU8'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:28,640 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:28,641 [ERROR] (MainThread) Failed to migrate etc
>
> Traceback (most recent call last):
>
>   File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 109, in on_new_layer
>
> check_nist_layout(imgbase, new_lv)
>
>   File "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 179, in check_nist_layout
>
> v.create(t, paths[t]["size"], paths[t]["attach"])
>
>   File 
> "/tmp/tmp.ipxGZrbQEi/usr/lib/python2.7/site-packages/imgbased/volume.py",
> line 48, in create
>
> "Path is already a volume: %s" % where
>
> AssertionError: Path is already a volume: /home
>
> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling binary: (['umount',
> '-l', u'/tmp/mnt.bEW2k'],) {}
>
> 2017-11-28 17:25:28,642 [DEBUG] (MainThread) Calling: (['umount', '-l',
> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Returned:
>
> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling binary: (['rmdir',
> u'/tmp/mnt.bEW2k'],) {}
>
> 2017-11-28 17:25:29,061 [DEBUG] (MainThread) Calling: (['rmdir',
> u'/tmp/mnt.bEW2k'],) {'close_fds': True, 'stderr': -2}
>
> 2017-11-28 17:25:29,067 [DEBUG] (MainThread) Returned:
>

Re: [ovirt-users] oVirt Node ng upgrade failed

2017-11-26 Thread Yuval Turgeman
Hi,

Can you try to run `semanage permissive -a setfiles_t` on your 4.1.1 and
share the output?

Thanks,
Yuval

On Fri, Nov 24, 2017 at 11:01 AM, Kilian Ries  wrote:

> This is the imgbased.log:
>
>
> https://www.dropbox.com/s/v9dmgz14cpzfcsn/imgbased.log.tar.gz?dl=0
>
> Ok, i'll try your steps and come back later ...
>
>
> --
> *From:* Ryan Barry
> *Sent:* Thursday, 23 November 2017 23:33:34
> *To:* Kilian Ries; Lev Veyde; users
> *Subject:* Re: [ovirt-users] oVirt Node ng upgrade failed
>
> Can you grab imgbased.log?
>
> To retry, "rpm -e ovirt-node-ng-image-update" and remove the new LVs. Running
> "yum install ovirt-node-ng-image-update" from the CLI instead of the engine,
> so we can get full logs, would be useful.
>
> On Thu, Nov 23, 2017 at 16:01 Lev Veyde  wrote:
>
>>
>> -- Forwarded message --
>> From: Kilian Ries 
>> Date: Thu, Nov 23, 2017 at 5:16 PM
>> Subject: [ovirt-users] oVirt Node ng upgrade failed
>> To: "Users@ovirt.org" 
>>
>>
>> Hi,
>>
>>
>> just tried to upgrade from
>>
>>
>> ovirt-node-ng-4.1.1.1-0.20170504.0+1
>>
>>
>> to
>>
>>
>> ovirt-node-ng-4.1.7-0.20171108.0+1
>>
>>
>> but it failed:
>>
>>
>> ###
>>
>>
>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Verify: 1/4: ovirt-node-ng-image-update.noarch
>> 0:4.1.7-1.el7.centos - u
>>
>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Verify: 2/4: 
>> ovirt-node-ng-image-update-placeholder.noarch
>> 0:4.1.1.1-1.el7.centos - od
>>
>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Verify: 3/4: ovirt-node-ng-image.noarch
>> 0:4.1.1.1-1.el7.centos - od
>>
>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Verify: 4/4: ovirt-node-ng-image-update.noarch
>> 0:4.1.1.1-1.el7.centos - ud
>>
>> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Transaction processed
>>
>> 2017-11-23 10:19:21 DEBUG otopi.context context._executeMethod:142 method
>> exception
>>
>> Traceback (most recent call last):
>>
>>   File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/context.py", line 132, in
>> _executeMethod
>>
>> method['method']()
>>
>>   File "/tmp/ovirt-3JI9q14aGS/otopi-plugins/otopi/packagers/yumpackager.py",
>> line 261, in _packages
>>
>> self._miniyum.processTransaction()
>>
>>   File "/tmp/ovirt-3JI9q14aGS/pythonlib/otopi/miniyum.py", line 1049, in
>> processTransaction
>>
>> _('One or more elements within Yum transaction failed')
>>
>> RuntimeError: One or more elements within Yum transaction failed
>>
>> 2017-11-23 10:19:21 ERROR otopi.context context._executeMethod:151 Failed
>> to execute stage 'Package installation': One or more elements within Yum
>> transaction failed
>>
>> 2017-11-23 10:19:21 DEBUG otopi.transaction transaction.abort:119
>> aborting 'Yum Transaction'
>>
>> 2017-11-23 10:19:21 INFO otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.info:80 Yum Performing yum transaction rollback
>>
>> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: 
>> centos-opstools-release/7/x86_64/filelists_db
>> (0%)
>>
>> 2017-11-23 10:19:21 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: 
>> centos-opstools-release/7/x86_64/filelists_db
>> 374 k(100%)
>>
>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: 
>> centos-opstools-release/7/x86_64/other_db
>> (0%)
>>
>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: 
>> centos-opstools-release/7/x86_64/other_db
>> 53 k(100%)
>>
>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db (0%)
>>
>> 2017-11-23 10:19:22 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 55 k(4%)
>>
>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 201 k(17%)
>>
>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 648 k(56%)
>>
>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 1.1 M(99%)
>>
>> 2017-11-23 10:19:23 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/filelists_db 1.1 M(100%)
>>
>> 2017-11-23 10:19:25 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db (0%)
>>
>> 2017-11-23 10:19:25 DEBUG otopi.plugins.otopi.packagers.yumpackager
>> yumpackager.verbose:76 Yum Downloading: ovirt-4.1/7/other_db 45 k(14

Re: [ovirt-users] Failed to open grubx64.efi

2017-11-22 Thread Yuval Turgeman
Hi,

I checked a little more: anaconda uses blivet to detect if the machine is
EFI for its boot partition requirements, and blivet checks if
/sys/firmware/efi exists [1].
The kernel registers /sys/firmware/efi only if EFI_BOOT is enabled [2], and
this is set during setup, when the kernel searches for an EFI loader
signature in boot_params [3] (the signatures are defined as "EL32" and
"EL64").

You can access boot_params under /sys/kernel/boot_params/data, so if you want
to understand what's going on with your machine, you can try to use strings to
see if it's enabled while running anaconda - on my test machine it looks
like this:


[anaconda root@localhost ~]# strings /sys/kernel/boot_params/data

EL64

fHdrS


Hope this helps :)
Yuval.

[1]
https://github.com/storaged-project/blivet/blob/44bd6738a49cd15398dd151cc2653f175efccf14/blivet/arch.py#L242
[2]
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/drivers/firmware/efi/efi.c?h=v3.10.108#n90
[3]
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/tree/arch/x86/kernel/setup.c#n930
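
In shell terms, the blivet check in [1] effectively boils down to this
one-liner:

# [ -d /sys/firmware/efi ] && echo UEFI || echo legacy-BIOS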


On Tue, Nov 21, 2017 at 8:57 PM, Yuval Turgeman  wrote:

> Boot partition reqs should be handled the same on both ovirt node and
> centos (or rhel), and you having to add this manually on rhel can give us a
> hint that this is a bug in anaconda (doesn't detect efi?).
>
> In other words, if you need to add this to rhel you'd need to add it to
> ovirt node, and autopart shouldn't scare you off - just follow the
> partitioning guidelines, add your changes and you are all set.  You can
> grab some ks examples here:
>
> git clone https://gerrit.ovirt.org/ovirt-node-ng
>
> On Nov 21, 2017 20:13, "Luca 'remix_tj' Lorenzetto" <
> lorenzetto.l...@gmail.com> wrote:
>
>> On Tue, Nov 21, 2017 at 4:54 PM, Yuval Turgeman 
>> wrote:
>> > Hi,
>> >
>> > I tried to recreate this without success, I'll try with different hw
>> > tomorrow.
>> > The thing is, autopart with thinp doesn't mean that everything is
>> lvm-thin -
>> > /boot should be a regular (primary) partition (for details you can check
>> > anaconda's ovirt install class)
>> > This could be a bug in anaconda or in the kickstart that deploys the
>> node
>> > (if not installing directly from the iso), can you install CentOS-7.4
>> with
>> > UEFI enabled on this machine ?  If you have some installation logs, that
>> > would help :)
>>
>> Hi,
>>
>> on the same hardware i can install with success rhel 7.4 using UEFI.
>> This required changing my default partitioning to another one containing
>> a /boot/efi partition, adding this entry:
>>
>> part /boot/efi --fstype=efi --size=200 --ondisk=sda
>>
>> I suppose that autopart doesn't create this partition.
>>
>> Luca
>>
>>
>> --
>> "It is absurd to employ men of excellent intelligence to perform
>> calculations that could be entrusted to anyone if machines were used"
>> Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
>>
>> "The Internet is the greatest library in the world.
>> But the problem is that the books are all scattered on the floor"
>> John Allen Paulos, Mathematician (1945-living)
>>
>> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
>> lorenzetto.l...@gmail.com>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to open grubx64.efi

2017-11-21 Thread Yuval Turgeman
Boot partition reqs should be handled the same on both ovirt node and
centos (or rhel), and you having to add this manually on rhel can give us a
hint that this is a bug in anaconda (doesn't detect efi?).

In other words, if you need to add this to rhel you'd need to add it to
ovirt node, and autopart shouldn't scare you off - just follow the
partitioning guidelines, add your changes and you are all set.  You can
grab some ks examples here:

git clone https://gerrit.ovirt.org/ovirt-node-ng
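
If you end up writing an explicit layout instead of autopart (the two can't be
mixed in one kickstart), a minimal EFI-friendly fragment could look like this
- untested and illustrative only; a real node layout needs the extra volumes
(var, var_log, ...) from the partitioning guidelines:

clearpart --all --initlabel
part /boot/efi --fstype=efi --size=200 --ondisk=sda
part /boot --fstype=ext4 --size=1024 --ondisk=sda
part pv.01 --size=1 --grow --ondisk=sda
volgroup onn pv.01
logvol none --vgname=onn --name=pool00 --thinpool --size=100000 --grow
logvol / --vgname=onn --name=root --thin --poolname=pool00 --fstype=ext4 --size=51200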

On Nov 21, 2017 20:13, "Luca 'remix_tj' Lorenzetto" <
lorenzetto.l...@gmail.com> wrote:

> On Tue, Nov 21, 2017 at 4:54 PM, Yuval Turgeman  wrote:
> > Hi,
> >
> > I tried to recreate this without success, I'll try with different hw
> > tomorrow.
> > The thing is, autopart with thinp doesn't mean that everything is
> lvm-thin -
> > /boot should be a regular (primary) partition (for details you can check
> > anaconda's ovirt install class)
> > This could be a bug in anaconda or in the kickstart that deploys the node
> > (if not installing directly from the iso), can you install CentOS-7.4
> with
> > UEFI enabled on this machine ?  If you have some installation logs, that
> > would help :)
>
> Hi,
>
> on the same hardware i can install with success rhel 7.4 using UEFI.
> This required changing my default partitioning to another one containing
> a /boot/efi partition, adding this entry:
>
> part /boot/efi --fstype=efi --size=200 --ondisk=sda
>
> I suppose that autopart doesn't create this partition.
>
> Luca
>
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
>
> "The Internet is the greatest library in the world.
> But the problem is that the books are all scattered on the floor"
> John Allen Paulos, Mathematician (1945-living)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Failed to open grubx64.efi

2017-11-21 Thread Yuval Turgeman
Hi,

I tried to recreate this without success, I'll try with different hw
tomorrow.
The thing is, autopart with thinp doesn't mean that everything is lvm-thin
- /boot should be a regular (primary) partition (for details you can check
anaconda's ovirt install class).
This could be a bug in anaconda or in the kickstart that deploys the node
(if not installing directly from the iso) - can you install CentOS-7.4 with
UEFI enabled on this machine?  If you have some installation logs, that
would help :)

Thanks,
Yuval.


On Thu, Nov 16, 2017 at 9:02 PM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> Hello Julio,
>
> On Thu, Nov 16, 2017 at 6:41 PM, Julio Cesar Bustamante
>  wrote:
> > Hi there,
> >
> > I have installed oVirt Host on an IBM HS22 blade, but I have this bug:
> >
> > Failed to open \efi\centos\grubx64.efi not found
> > Failed to load image \EFI\centos\grubx64.efi Not found
> >
>
> I had the same problem with a newer Lenovo Blade (x240 M5, IIRC).
> The problem is that ovirt-node-ng by default uses autopart with thin
> provisioning, and that partition scheme doesn't create a /boot/efi partition.
>
> Without that partition, grub-efi cannot be installed.
>
> I switched my blades back to legacy only and everything worked as expected.
>
> I'm planning to extend my tests with UEFI in the future, but at the moment
> I don't see the need to switch from legacy mode.
>
> Luca
>
>
>
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.6 on IBM x3650 M3

2017-10-31 Thread Yuval Turgeman
Hi,

We did have some problems in the past with EFI, but they should be fixed by
now.
Did you use the ISO for installation? What error are you seeing - which
file is missing there?

Thanks,
Yuval.

On Thu, Oct 26, 2017 at 11:03 PM, Jonathan Baecker 
wrote:

> Thank you, good to know that this works! I need to play a bit with it.
>
>
>
> On 26.10.2017 at 21:59, Eduardo Mayoral wrote:
>
>> Yes, I use power management with ipmilan, no issues.
>>
>> I do have license on the IMM for remote console, but that is not a
>> requirement, AFAIK.
>>
>> I remember I first tried to use a dedicated login on the IMM for oVirt
>> with just "Remote Server Power/Restart Access" and I could not get it to
>> work, so I just granted "Supervisor" to the dedicated login. Other than
>> that, no problem.
>>
>> Eduardo Mayoral Jimeno (emayo...@arsys.es)
>> Systems administrator. Platforms department. Arsys internet.
>> +34 941 620 145 ext. 5153
>>
>> On 26/10/17 21:47, Jonathan Baecker wrote:
>>
>>> Thank you for your comments! I have now also installed CentOS minimal;
>>> this works. I only thought that oVirt Node had some optimizations, but
>>> maybe not.
>>>
>>> @Eduardo Mayoral, can I ask whether you are able to use power
>>> management with these servers? As I understand it, they support
>>> ipmilan, but I don't know how...
>>>
>>> Regards
>>> Jonathan
>>>
>>> On 24.10.2017 at 23:41, Sean McMurray wrote:
>>>
 I have seen this problem before. For some reason, oVirt Node 4.1.x
 does not always install everything right for efi. In my limited
 experience, it fails to do it correctly 4 out of 5 times. The mystery
 to me is why it gets it right sometimes. I solve the problem by
 manually copying the missing file into my efi boot partition.


 On 10/24/2017 12:46 PM, Eduardo Mayoral wrote:

> 3 of my compute nodes are IBM x3650 M3 . I do not use oVirt Node but
> rather plain CentOS 7 for the compute nodes. I use 4.1.6 too.
>
> I remember I had a bad time trying to disable UEFI in the BIOS of
> those servers. In my opinion, the firmware in that model is riddled with
> problems. In the end, I installed with UEFI (you will need a
> /boot/efi partition).
>
> Once installed, I have not had any issues with them.
>
> Eduardo Mayoral Jimeno (emayo...@arsys.es)
> Systems administrator. Platforms department. Arsys internet.
> +34 941 620 145 ext. 5153
> On 24/10/17 09:57, Jon bae wrote:
>
>> Hello everybody,
>> I would like to install oVirt Node on an IBM machine, but after the
>> installation it cannot boot. I get the message:
>>
>> "/boot/efi/..." file not found
>>
>> I tried many different things, like turning off UEFI options in the
>> BIOS, etc., but with no effect.
>>
>> Now I have figured out that when I install full CentOS 7.3 from the
>> live DVD, it just boots normally.
>>
>> Is there any workaround to get this to work?
>>
>> Regards
>>
>> Jonathan
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node update question

2017-09-24 Thread Yuval Turgeman
Hi Matthias,

Basically, ovirt-node-ng is shipped as an image, and imgbased then syncs
some files to the new LV so the user gets a new working image without
breaking their configuration.  It could be that some files were synced
across incorrectly; can you please attach /tmp/imgbased.log?  Also, which
version are you using now?
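
(As a side note, the layer layout can be inspected directly on the node with
the imgbased CLI - a minimal sketch; exact output varies by version:)

  imgbase layout   # list bases and the layers on top of them
  imgbase w        # show which layer is currently booted
  nodectl info     # layer and bootloader details as seen by nodectl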

Thanks,
Yuval.


On Fri, Sep 22, 2017 at 4:33 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi Yuval,
>
> I updated my nodes from 4.1.3 to 4.1.6 today and noticed that the
>
> > /etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
> > /etc/yum.repos.d/ovirt-4.1-pre.repo
>
> files I moved away previously reappeared after rebooting, so I'm getting
> updates to 4.1.7-0.1.rc1.20170919143904.git0c14f08 proposed again.
> Obviously I haven't fully understood the "layer" concept of imgbased. The
> practical question for me is: how do I get _permanently_ rid of these files
> in /etc/yum.repos.d/?
>
> thanks
> matthias
>
> On 2017-08-31 at 16:24, Yuval Turgeman wrote:
>
>> Yes that would do it, thanks for the update :)
>>
>> On Thu, Aug 31, 2017 at 5:21 PM, Matthias Leopold <
>> matthias.leop...@meduniwien.ac.at> wrote:
>>
>> Hi,
>>
>> all of the nodes that already made updates in the past have
>>
>> /etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
>> /etc/yum.repos.d/ovirt-4.1-pre.repo
>>
>> i went through the logs in /var/log/ovirt-engine/host-deploy/ and my
>> own notes and discovered/remembered that this being presented with
>> RC versions started on 20170707 when i updated my nodes from 4.1.2
>> to 4.1.3-0.3.rc3.20170622082156.git47b4302 (!). probably there was a
>> short timespan when you erroneously published a RC version in the
>> wrong repo, my nodes "caught" it and dragged this along until today
>> when i finally cared ;-) I moved the
>> /etc/yum.repos.d/ovirt-4.1-pre*.repo files away and now everything
>> seems fine
>>
>> Regards
>> Matthias
>>
>> On 2017-08-31 at 15:25, Yuval Turgeman wrote:
>>
>> Hi,
>>
>> Don't quite understand how you got to that 4.1.6 rc, it's only
>> available in the pre release repo, can you paste the yum repos
>> that are enabled on your system ?
>>
>> Thanks,
>> Yuval.
>>
>> On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold
>> <matthias.leop...@meduniwien.ac.at> wrote:
>>
>>  Hi,
>>
>>  thanks a lot.
>>
>>  So i understand everything is fine with my nodes and i'll
>> wait until
>>  the update GUI shows the right version to update (4.1.5 at
>> the moment).
>>
>>  Regards
>>  Matthias
>>
>>
>>  On 2017-08-31 at 14:56, Yuval Turgeman wrote:
>>
>>  Hi,
>>
>>  oVirt node ng is shipped with a placeholder rpm
>> preinstalled.
>>  The image-update rpms obsolete the placeholder rpm, so
>> once a
>>  new image-update rpm is published, yum update will pull
>> those
>>  packages.  So you have 1 system that was a fresh
>> install and the
>>  others were upgrades.
>>  Next, the post install script for those image-update
>> rpms will
>>  install --justdb the image-update rpms to the new image
>> (so
>>  running yum update in the new image won't try to pull
>> again the
>>  same version).
>>
>>  Regarding the 4.1.6 it's very strange, we'll need to
>> check the
>>  repos to see why it was published.
>>
>>  As for nodectl, if there are no changes, it won't be
>> updated and
>>  you'll see an "old" version or a version that doesn't
>> seem to be
>>  matching the current image, but it is ok, we are
>> thinking of
>>  changing its name to make it less confusing.
>>
>>  Hope this

Re: [ovirt-users] failed upgrade oVirt node 4.1.3 -> 4.1.5

2017-09-03 Thread Yuval Turgeman
Hi,

Seems to be a bug that was resolved here https://gerrit.ovirt.org/c/80716/

Thanks,
Yuval.


On Fri, Sep 1, 2017 at 3:55 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> hi,
>
> I'm sorry to write to this list again, but I failed to upgrade a freshly
> installed oVirt Node from version 4.1.3 to 4.1.5. It seems to be a
> SELinux-related problem. I'm attaching imgbased.log + relevant lines from
> engine.log.
>
> Is the skipped version (4.1.4) the problem?
> Can I force an upgrade to version 4.1.4?
>
> thx
> matthias
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Yuval Turgeman
Yes that would do it, thanks for the update :)

On Thu, Aug 31, 2017 at 5:21 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> all of the nodes that already made updates in the past have
>
> /etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
> /etc/yum.repos.d/ovirt-4.1-pre.repo
>
> I went through the logs in /var/log/ovirt-engine/host-deploy/ and my own
> notes and discovered/remembered that being presented with RC versions
> started on 20170707, when I updated my nodes from 4.1.2 to
> 4.1.3-0.3.rc3.20170622082156.git47b4302 (!). Probably there was a short
> timespan when you erroneously published an RC version in the wrong repo; my
> nodes "caught" it and dragged it along until today, when I finally took
> care of it ;-) I moved the /etc/yum.repos.d/ovirt-4.1-pre*.repo files away
> and now everything seems fine.
>
> Regards
> Matthias
>
> On 2017-08-31 at 15:25, Yuval Turgeman wrote:
>
>> Hi,
>>
>> Don't quite understand how you got to that 4.1.6 rc, it's only available
>> in the pre release repo, can you paste the yum repos that are enabled on
>> your system ?
>>
>> Thanks,
>> Yuval.
>>
>> On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold <
>> matthias.leop...@meduniwien.ac.at> wrote:
>>
>> Hi,
>>
>> thanks a lot.
>>
>> So i understand everything is fine with my nodes and i'll wait until
>> the update GUI shows the right version to update (4.1.5 at the
>> moment).
>>
>> Regards
>> Matthias
>>
>>
>> On 2017-08-31 at 14:56, Yuval Turgeman wrote:
>>
>> Hi,
>>
>> oVirt node ng is shipped with a placeholder rpm preinstalled.
>> The image-update rpms obsolete the placeholder rpm, so once a
>> new image-update rpm is published, yum update will pull those
>> packages.  So you have 1 system that was a fresh install and the
>> others were upgrades.
>> Next, the post install script for those image-update rpms will
>> install --justdb the image-update rpms to the new image (so
>> running yum update in the new image won't try to pull again the
>> same version).
>>
>> Regarding the 4.1.6 it's very strange, we'll need to check the
>> repos to see why it was published.
>>
>> As for nodectl, if there are no changes, it won't be updated and
>> you'll see an "old" version or a version that doesn't seem to be
>> matching the current image, but it is ok, we are thinking of
>> changing its name to make it less confusing.
>>
>> Hope this helps,
>> Yuval.
>>
>>
>> On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold
>> <matthias.leop...@meduniwien.ac.at> wrote:
>>
>>  hi,
>>
>>  i still don't completely understand the oVirt Node update
>> process
>>  and the involved rpm packages.
>>
>>  We have 4 nodes, all running oVirt Node 4.1.3. Three of
>> them show as
>>  available updates
>> 'ovirt-node-ng-image-update-4.
>> 1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
>>  (i don't want run release candidates), one of them shows
>>  'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is
>> what i
>>  like). The node that doesn't want to upgrade to
>> '4.1.6-0.1.rc1'
>>  lacks the rpm package
>>  'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch',
>> only has
>> 'ovirt-node-ng-image-update-pl
>> aceholder-4.1.3-1.el7.centos.noarch'.
>>  Also the version of ovirt-node-ng-nodectl is
>>  '4.1.3-0.20170709.0.el7' instead of
>> '4.1.3-0.20170705.0.el7'. This
>>  node was the last one i installed and never made a version
>> update
>>  before.
>>
>>  I only began using oVirt starting with 4.1, but already
>> completed
>>  minor version upgrades of oVirt nodes. IIRC this 'mysterious'
>> 

Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Yuval Turgeman
Hi,

I don't quite understand how you got that 4.1.6 RC - it's only available in
the pre-release repo. Can you paste the yum repos that are enabled on your
system?

Thanks,
Yuval.

On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> thanks a lot.
>
> So i understand everything is fine with my nodes and i'll wait until the
> update GUI shows the right version to update (4.1.5 at the moment).
>
> Regards
> Matthias
>
>
> On 2017-08-31 at 14:56, Yuval Turgeman wrote:
>
>> Hi,
>>
>> oVirt node ng is shipped with a placeholder rpm preinstalled.
>> The image-update rpms obsolete the placeholder rpm, so once a new
>> image-update rpm is published, yum update will pull those packages.  So you
>> have 1 system that was a fresh install and the others were upgrades.
>> Next, the post install script for those image-update rpms will install
>> --justdb the image-update rpms to the new image (so running yum update in
>> the new image won't try to pull again the same version).
>>
>> Regarding the 4.1.6 it's very strange, we'll need to check the repos to
>> see why it was published.
>>
>> As for nodectl, if there are no changes, it won't be updated and you'll
>> see an "old" version or a version that doesn't seem to be matching the
>> current image, but it is ok, we are thinking of changing its name to make
>> it less confusing.
>>
>> Hope this helps,
>> Yuval.
>>
>>
>> On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold <
>> matthias.leop...@meduniwien.ac.at> wrote:
>>
>> hi,
>>
>> i still don't completely understand the oVirt Node update process
>> and the involved rpm packages.
>>
>> We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as
>> available updates
>> 'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.git
>> d646d2f.el7.centos'
>> (i don't want run release candidates), one of them shows
>> 'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
>> like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1'
>> lacks the rpm package
>> 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch', only has
>> 'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.
>> Also the version of ovirt-node-ng-nodectl is
>> '4.1.3-0.20170709.0.el7' instead of '4.1.3-0.20170705.0.el7'. This
>> node was the last one i installed and never made a version update
>> before.
>>
>> I only began using oVirt starting with 4.1, but already completed
>> minor version upgrades of oVirt nodes. IIRC this 'mysterious'
>> ovirt-node-ng-image-update package comes into place when updating a
>> node for the first time after initial installation. Usually i
>> wouldn't care about all of this, but now i have this RC update
>> situation that i don't want. How is this supposed to work? How can i
>> resolve it?
>>
>> thx
>> matthias
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
> --
> Matthias Leopold
> IT Systems & Communications
> Medizinische Universität Wien
> Spitalgasse 23 / BT 88 /Ebene 00
> A-1090 Wien
> Tel: +43 1 40160-21241
> Fax: +43 1 40160-921200
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Yuval Turgeman
Hi,

oVirt node ng is shipped with a placeholder rpm preinstalled.
The image-update rpms obsolete the placeholder rpm, so once a new
image-update rpm is published, yum update will pull those packages.  So you
have 1 system that was a fresh install and the others were upgrades.
Next, the post-install script for those image-update rpms will install
--justdb the image-update rpms to the new image (so running yum update in
the new image won't try to pull the same version again).

Regarding the 4.1.6, it's very strange; we'll need to check the repos to see
why it was published.

As for nodectl, if there are no changes, it won't be updated and you'll see
an "old" version or a version that doesn't seem to match the current
image, but that is OK; we are thinking of changing its name to make it less
confusing.
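
(To check which case a given node falls into, something along these lines
should do - exact versions will of course differ:)

  rpm -q ovirt-node-ng-image-update-placeholder   # present on a fresh install
  rpm -q ovirt-node-ng-image-update               # present once a node has been upgraded
  nodectl info                                    # shows the installed layers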

Hope this helps,
Yuval.


On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> Hi,
>
> I still don't completely understand the oVirt Node update process and the
> involved rpm packages.
>
> We have 4 nodes, all running oVirt Node 4.1.3. Three of them show
> 'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
> as an available update (I don't want to run release candidates); one of
> them shows 'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what I
> like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1' lacks the
> rpm package 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch' and only
> has 'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.
> Also, the version of ovirt-node-ng-nodectl is '4.1.3-0.20170709.0.el7'
> instead of '4.1.3-0.20170705.0.el7'. This node was the last one I installed
> and had never done a version update before.
>
> I only began using oVirt starting with 4.1, but have already completed
> minor version upgrades of oVirt nodes. IIRC this 'mysterious'
> ovirt-node-ng-image-update package comes into play when updating a node
> for the first time after the initial installation. Usually I wouldn't care
> about all of this, but now I have this RC update situation that I don't
> want. How is this supposed to work? How can I resolve it?
>
> thx
> matthias
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python errors with ovirt 4.1.4

2017-08-07 Thread Yuval Turgeman
Hi,

The problem should be solved here:

http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/

Thanks,
Yuval.


On Fri, Aug 4, 2017 at 12:37 PM, Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:

> Hello,
>
>I have 3 nodes and used the engine to update them to
>
>
> ovirt-node-ng-4.1.4-0.20170728.0
>
>
> but the engine still reported a new update which I tried but it failed.
>
>
> On the nodes yum check-update showed an update for
>
>
> ovirt-node-ng-nodectl.noarch  4.1.4-0.20170728.0.el7
>
>
> Installing this produces the same errors when logging into the node or
> running nodectl motd.
>
> nodectl check and info were fine, but the engine produced errors when
> checking for updates.
>
>
> I used yum history to roll back the ovirt-node-ng-nodectl.noarch.
>
>
> I now have no errors, but strangely the engine reports that 2 nodes have
> updates available but not the 3rd, which wasn't the one I did a nodectl
> update on.
>
>
> Regards,
>
>Paul S.
>
>
> --
> *From:* users-boun...@ovirt.org  on behalf of
> david caughey 
> *Sent:* 02 August 2017 10:48
> *To:* Users@ovirt.org
> *Subject:* [ovirt-users] Python errors with ovirt 4.1.4
>
> Hi Folks,
>
> I'm testing out the new version with the 4.1.4 ovirt iso and am getting
> errors directly after install:
>
> Last login: Wed Aug  2 10:17:56 2017
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in 
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 102,
> in motd
> machine_readable=True).output, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 51, in
> __init__
> self._update_info(status)
>   File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 78, in
> _update_info
> if "ok" not in status.lower():
> AttributeError: Status instance has no attribute 'lower'
> Admin Console: https://192.168.122.61:9090/
>
> The admin console seems to work fine.
>
> Are these issues serious or can they be ignored.
>
> BR/David
> To view the terms under which this email is distributed, please go to:-
> http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python errors with ovirt 4.1.4

2017-08-02 Thread Yuval Turgeman
Hi David,

It's a known bug in nodectl, it was addressed in [1].

If `imgbase check` is ok, your system should be fine.
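
(For reference, the check can be run directly on the node; a minimal sketch,
where a non-zero exit status or a WARN/FAILED line points at the failing
group:)

  imgbase check && echo "layers look healthy"
  nodectl check   # runs similar health checks, plus the vdsmd status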

Thanks,
Yuval.

[1] https://gerrit.ovirt.org/#/c/80037/

On Wed, Aug 2, 2017 at 12:48 PM, david caughey  wrote:

> Hi Folks,
>
> I'm testing out the new version with the 4.1.4 ovirt iso and am getting
> errors directly after install:
>
> Last login: Wed Aug  2 10:17:56 2017
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
> "__main__", fname, loader, pkg_name)
>   File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
> exec code in run_globals
>   File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42,
> in 
> CliApplication()
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200,
> in CliApplication
> return cmdmap.command(args)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118,
> in command
> return self.commands[command](**kwargs)
>   File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 102,
> in motd
> machine_readable=True).output, self.machine).write()
>   File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 51, in
> __init__
> self._update_info(status)
>   File "/usr/lib/python2.7/site-packages/nodectl/status.py", line 78, in
> _update_info
> if "ok" not in status.lower():
> AttributeError: Status instance has no attribute 'lower'
> Admin Console: https://192.168.122.61:9090/
>
> The admin console seems to work fine.
>
> Are these issues serious or can they be ignored.
>
> BR/David
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Regarding Ovirt Node ISO

2017-07-28 Thread Yuval Turgeman
Hi,

Yes, you can find a manifest-rpm file in the exported artifacts for the
ovirt-node-ng jobs.
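
(For example - the manifest file name here is an assumption, so check the
job's artifact listing - fetching it would look roughly like:)

  curl -O http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.1_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/ovirt-node-ng-image.manifest-rpm
  less ovirt-node-ng-image.manifest-rpm   # plain list of the rpms baked into the image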

Thanks,
Yuval.

On Jul 28, 2017 8:41 AM, "TranceWorldLogic ." 
wrote:

> Hi,
>
> I want to know the package list in the oVirt Node ISO.
> Do we have some documentation, or some automatic output of a Jenkins job?
>
> Please let me know.
>
> Thanks,
> ~Rohit
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.2 offering RC update?

2017-07-11 Thread Yuval Turgeman
Hi,

The GA iso is 4.1-2017070915

Thanks,
Yuval.


On Tue, Jul 11, 2017 at 1:13 AM, Vinícius Ferrão  wrote:

> May I ask another question?
>
> I’ve noted two new ISO images of the 4.1.3 Node version over here:
> http://resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-
> node-ng-installer-ovirt/
>
> Which one is the stable version:
> 4.1-2017070915
> 4.1-2017070913
>
> Thanks,
> V.
>
> On 9 Jul 2017, at 13:26, Lev Veyde  wrote:
>
> Hi Vinicius,
>
> It's actually due to my mistake, and as a result the package got tagged with
> the RC version instead of the GA.
> The package itself was based on the 4.1.3 code, though.
> I rebuilt it and published a fixed package, so the issue should be
> resolved now.
>
> Thanks in advance,
>
> On Sat, Jul 8, 2017 at 5:12 AM, Vinícius Ferrão  wrote:
>
>> Hello,
>>
>> I’ve noted a strange thing on oVirt. On the Hosted Engine an update was
>> offered and I was a bit confused, since I’m running the latest oVirt Node
>> release.
>>
>> To check if 4.1.3 was already released I issued a “yum update” on the
>> command line, and to my surprise an RC release was offered. This does not
>> seem to be right:
>>
>> 
>> ==
>>  PackageArch   Version
>> Repository Size
>> 
>> ==
>> Installing:
>>  ovirt-node-ng-image-update noarch 
>> 4.1.3-0.3.rc3.20170622082156.git47b4302.el7.centos
>>ovirt-4.1 544 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch
>> 4.1.2-1.el7.centos
>> Updating:
>>  ovirt-engine-appliance noarch 4.1-20170622.1.el7.centos
>> ovirt-4.1 967 M
>>
>> Transaction Summary
>> 
>> ==
>> Install  1 Package
>> Upgrade  1 Package
>>
>> Total download size: 1.5 G
>> Is this ok [y/d/N]: N
>>
>> Is this normal behavior? This isn’t really good, since it can lead to
>> stable-to-unstable moves in production. If this is normal, how can we
>> avoid it?
>>
>> Thanks,
>> V.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] node ng upgrade failed

2017-07-10 Thread Yuval Turgeman
Hi,

Can you please attach /tmp/imgbased.log?

Thanks,
Yuval.


On Mon, Jul 10, 2017 at 11:27 AM, Grundmann, Christian <
christian.grundm...@fabasoft.com> wrote:

> Hi,
>
> I tried to update to node ng 4.1.3 (from 4.1.1) which failed
>
>
>
> Jul 10 10:10:49 imgbased: 2017-07-10 10:10:49,986 [INFO] Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.
> 20170709.0.el7.squashfs.img'
>
> Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] Starting base
> creation
>
> Jul 10 10:10:50 imgbased: 2017-07-10 10:10:50,816 [INFO] New base will be:
> ovirt-node-ng-4.1.3-0.20170709.0
>
> Jul 10 10:10:51 imgbased: 2017-07-10 10:10:51,539 [INFO] New LV is: <LV 'onn_cs-kvm-001/ovirt-node-ng-4.1.3-0.20170709.0' />
>
> Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,070 [INFO] Creating new
> filesystem on base
>
> Jul 10 10:10:53 imgbased: 2017-07-10 10:10:53,412 [INFO] Writing tree to
> base
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new
> layer after 
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,344 [INFO] Adding a new
> layer after 
>
> Jul 10 10:12:04 imgbased: 2017-07-10 10:12:04,345 [INFO] New layer will
> be: 
>
> Jul 10 10:12:52 imgbased: 2017-07-10 10:12:52,714 [ERROR] Failed to
> migrate etc#012Traceback (most recent call last):#012  File
> "/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/
> imgbased/plugins/osupdater.py", line 119, in on_new_layer#012
> check_nist_layout(imgbase, new_lv)#012  File "/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/plugins/osupdater.py", line 173, in
> check_nist_layout#012v.create(t, paths[t]["size"],
> paths[t]["attach"])#012  File "/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/volume.py", line 48, in create#012
> "Path is already a volume: %s" % where#012AssertionError: Path is already a
> volume: /var/log
>
> Jul 10 10:12:53 python: detected unhandled Python exception in
> '/tmp/tmp.EMsKrrbmZs/usr/lib/python2.7/site-packages/imgbased/__main__.py'
>
> Jul 10 10:12:53 abrt-server: Executable '/tmp/tmp.EMsKrrbmZs/usr/lib/
> python2.7/site-packages/imgbased/__main__.py' doesn't belong to any
> package and ProcessUnpackaged is set to 'no'
>
> Jul 10 10:15:10 imgbased: 2017-07-10 10:15:10,079 [INFO] Extracting image
> '/usr/share/ovirt-node-ng/image//ovirt-node-ng-4.1.3-0.
> 20170622.0.el7.squashfs.img'
>
> Jul 10 10:15:11 imgbased: 2017-07-10 10:15:11,226 [INFO] Starting base
> creation
>
> Jul 10 10:15:11 imgbased: 2017-07-10 10:15:11,226 [INFO] New base will be:
> ovirt-node-ng-4.1.3-0.20170622.0
>
> Jul 10 10:15:11 python: detected unhandled Python exception in
> '/tmp/tmp.pqf2qhifaY/usr/lib/python2.7/site-packages/imgbased/__main__.py'
>
> Jul 10 10:15:12 abrt-server: Executable '/tmp/tmp.pqf2qhifaY/usr/lib/
> python2.7/site-packages/imgbased/__main__.py' doesn't belong to any
> package and ProcessUnpackaged is set to 'no'
>
>
>
>
>
> How can I fix it?
>
>
>
> Thx Christian
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-ng upgrade

2017-07-08 Thread Yuval Turgeman
Hi,

Node-ng is tested and shipped as a complete operating system image, so
enabling packages other than the ones listed in the repos is not a good
idea and will probably break your system.
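
(As an illustration of staying inside the shipped set, an update on a node
would be limited to something like the following - the 'ovirt-4.1' repo id
is the one that appears in the yum output later in this thread:)

  yum --disablerepo='*' --enablerepo='ovirt-4.1' update ovirt-node-ng-image-update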

Thanks,
Yuval.

On Fri, Jul 7, 2017 at 1:13 PM, Nathanaël Blanchet  wrote:

> Hello,
>
> I'm used to installing vdsm on regular CentOS to provision my hosts.
>
> I recently installed ovirt-ng on new hosts, but the CentOS "base" and
> "updates" repos are disabled by default. So I updated the hosts by
> enabling these repos to update CentOS.
>
> Several questions:
>
>- Is this a good practice, or should I wait for a new ovirt-ng image to
>update the whole system with
>
> # ovirt-node-upgrade --iso=/path/to/ovirt-node-image.iso --reboot=1
>
>- Now that oVirt 4.1.3 is out, I have to uncomment
>"includepkgs=ovirt-node-ng-image-update ovirt-node-ng-image
>ovirt-engine-appliance" in the ovirt repo to get the latest vdsm, but the
>dependencies are broken:
>[root@ulysses yum.repos.d]# yum update --enablerepo=base
>--enablerepo=updates -y
>...
>--> Dependency resolution finished
>Error: Package: ovirt-hosted-engine-setup-2.1.3.3-1.el7.centos.noarch
>(ovirt-4.1)
> Requires: rubygem-fluent-plugin-viaq_data_model
>
> Is this a good alternative way to do it, or may I run into issues if I do
> such a thing?
>
> --
> Nathanaël Blanchet
>
> Network supervision
> IT Infrastructure Division
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tel. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14  blanc...@abes.fr
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine local disk estimate for OVA file

2017-06-25 Thread Yuval Turgeman
On Sun, Jun 25, 2017 at 1:31 PM, Yaniv Kaul  wrote:

>
>
> On Sun, Jun 25, 2017 at 12:38 PM, Ben De Luca  wrote:
>
>> Hi,
>>  I am in the middle of a disaster recovery situation, trying to
>> install oVirt 4.1 after a failure of some of our NFS systems. I was
>> redeploying 4.0, but there is a bug with the image uploader that means I
>> can't upload images. Our actual virtual machine hosts have very small HDs,
>> and the current 4.1 release installer thinks that the extracted OVA is
>> 50GiB; I have exactly 50GB free! yay. So I did manage to hack the
>> installer to ignore my local disk space, as the OVA is really only a few
>> gigs in size.
>>
>>  But it's been pretty painful. Any chance of someone fixing the
>> estimate there? I have read through the code; there was an attempt to find
>> out the real size, but they gave up and just guessed.
>>
>>
> We changed it to 50G in [1].
> Did we over-estimate?
>


Could very well be, but this patch was posted in order to align the
upstream disk size.  Adding Fabian, perhaps he remembers the initial
thoughts.

Thanks,
Yuval.



>
> Y.
>
> [1] https://gerrit.ovirt.org/#/c/72962/
>
>
>> -bd
>> *tired sysadmin*
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Additional user in ovirt-node-ng 4.x

2017-06-21 Thread Yuval Turgeman
Hi,

You can add users as you would on a regular centos host, and they should be
preserved after upgrades AFAIK.
You'd probably get some "nodectl" warnings (nodectl must run as root) when
you log in, but that shouldn't be a problem.
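
(A minimal sketch - the username and the four commands are placeholders for
whatever uCMDB actually needs:)

  useradd ucmdb
  passwd ucmdb

  # /etc/sudoers.d/ucmdb - validate with 'visudo -cf /etc/sudoers.d/ucmdb':
  ucmdb ALL=(root) NOPASSWD: /usr/bin/lsblk, /usr/sbin/dmidecode, \
      /usr/sbin/lvs, /usr/sbin/fdisk -l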

Thanks,
Yuval.

On Tue, Jun 20, 2017 at 7:04 PM, Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:

> Hello,
>
> my colleague is asking if it is possible to define a certain user on our
> hosts.
> I know that the rule is to avoid installations and changes to hosts
> running ovirt-node, but in this case the user creation should have a
> very minimal impact.
> He's running HP uCMDB, which makes an ssh connection to the host to
> grab some system information to populate its DB.
>
> Is it possible to create a new user? Is this user preserved across
> upgrades? Is it also possible to allow this new user to run 4 (exactly 4)
> commands through sudo?
>
> Luca
>
> --
> "E' assurdo impiegare gli uomini di intelligenza eccellente per fare
> calcoli che potrebbero essere affidati a chiunque se si usassero delle
> macchine"
> Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
>
> "Internet è la più grande biblioteca del mondo.
> Ma il problema è che i libri sono tutti sparsi sul pavimento"
> John Allen Paulos, Matematico (1945-vivente)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.l...@gmail.com>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] unsuccessful hosted engine install

2017-06-06 Thread Yuval Turgeman
Hi Brendan,

Can you please send the output of systemctl status vdsmd and journalctl -u
vdsmd.service?

Thanks,


On Wed, Jun 7, 2017 at 9:32 AM, Sandro Bonazzola 
wrote:

>
>
> On Tue, Jun 6, 2017 at 2:56 PM, Brendan Hartzell  wrote:
>
>> Upon login to the server, to watch terminal output, I noticed that the
>> node status is degraded.
>>
>> [root@node-1 ~]# nodectl check
>> Status: WARN
>> Bootloader ... OK
>>  Layer boot entries ... OK
>>  Valid boot entries ... OK
>> Mount points ... OK
>>  Separate /var ... OK
>>  Discard is used ... OK
>> Basic storage ... OK
>>  Initialized VG ... OK
>>  Initialized Thin Pool ... OK
>>  Initialized LVs ... OK
>> Thin storage ... OK
>>  Checking available space in thinpool ... OK
>>  Checking thinpool auto-extend ... OK
>> vdsmd ... BAD
>>
>
> Yuval, can you help here?
>
>
>
>>
>>
>> Pressing forward with the retry using the web-UI.
>>
>> After resetting my iSCSI storage (on the storage server side), Install
>> started.
>>
>> Status in the web-UI:
>> Creating Storage Domain
>> Creating Storage Pool
>> Connecting Storage Pool
>> Verifying sanlock lockspace initialization
>> Creating Image for 'hosted-engine.lockspace' ...
>> Image for 'hosted-engine.lockspace' created successfully
>> Creating Image for 'hosted-engine.metadata' ...
>> Image for 'hosted-engine.metadata' created successfully
>> Creating VM Image
>> Extracting disk image from OVF archive (could take a few minutes
>> depending on archive size)
>> Validating pre-allocated volume size
>>
>> Output from the terminal:
>> [45863.076979] watchdog watchdog0: watchdog did not stop!
>>
>> System restarted.
>>
>> Attaching ovirt-hosted-engine-setup log.
>>
>> I'm running an SOS report, but it's too big for the users list.  I can
>> email it directly to you upon request.
>>
>> On Tue, Jun 6, 2017 at 12:12 AM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jun 6, 2017 at 2:10 AM, Brendan Hartzell 
>>> wrote:
>>>
 As requested,

>>>
>>> It seams fine, there are no pending locks now.
>>> Could you please retry?
>>>
>>>

 The output of ovirt-hosted-engine-cleanup

 [root@node-1 ~]# ovirt-hosted-engine-cleanup
 This will de-configure the host to run ovirt-hosted-engine-setup from
 scratch.
 Caution, this operation should be used with care.

 Are you sure you want to proceed? [y/n]
 y
  -=== Destroy hosted-engine VM ===-
 You must run deploy first
  -=== Stop HA services ===-
  -=== Shutdown sanlock ===-
 shutdown force 1 wait 0
 shutdown done 0
  -=== Disconnecting the hosted-engine storage domain ===-
 You must run deploy first
  -=== De-configure VDSM networks ===-
  -=== Stop other services ===-
  -=== De-configure external daemons ===-
  -=== Removing configuration files ===-
 ? /etc/init/libvirtd.conf already missing
 - removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
 ? /etc/ovirt-hosted-engine/answers.conf already missing
 ? /etc/ovirt-hosted-engine/hosted-engine.conf already missing
 - removing /etc/vdsm/vdsm.conf
 - removing /etc/pki/vdsm/certs/cacert.pem
 - removing /etc/pki/vdsm/certs/vdsmcert.pem
 - removing /etc/pki/vdsm/keys/vdsmkey.pem
 - removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
 - removing /etc/pki/vdsm/libvirt-spice/ca-key.pem
 - removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
 - removing /etc/pki/vdsm/libvirt-spice/server-key.pem
 ? /etc/pki/CA/cacert.pem already missing
 ? /etc/pki/libvirt/*.pem already missing
 ? /etc/pki/libvirt/private/*.pem already missing
 ? /etc/pki/ovirt-vmconsole/*.pem already missing
 - removing /var/cache/libvirt/qemu
 ? /var/run/ovirt-hosted-engine-ha/* already missing
 [root@node-1 ~]#

 Output of sanlock client status:
 [root@node-1 ~]# sanlock client status
 [root@node-1 ~]#

 Thank you for your help!

 On Mon, Jun 5, 2017 at 7:25 AM, Simone Tiraboschi 
 wrote:

>
>
> On Mon, Jun 5, 2017 at 3:57 PM, Brendan Hartzell 
> wrote:
>
>> After letting this sit for a few days, does anyone have any ideas as
>> to how to deal with my situation?  Would anyone like me to send the SOS
>> report directly to them?  It's a 9MB file.
>>
>> If nothing comes up, I'm going to try and sift through the SOS report
>> tonight, but I won't know what I'm trying to find.
>>
>> Thank you for any and all help.
>>
>> On Thu, Jun 1, 2017 at 1:15 AM, Sandro Bonazzola > > wrote:
>>
>>>
>>>
>>> On Thu, Jun 1, 2017 at 6:36 AM, Brendan Hartzell 
>>> wrote:
>>>
 Ran the 4 commands listed above, no errors on the screen.

 Started the hosted-engine standard setup from the web-UI.

 Using iSCSI for the storage.

 Using mostly default options, I got these errors in the web-UI.

  Error creating Volume Group: Failed to init

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-09 Thread Yuval Turgeman
nosuid,nodev,noexec,
> relatime,cpuacct,cpu)
>
> cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,
> relatime,perf_event)
>
> cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,
> relatime,hugetlb)
>
> cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,
> relatime,freezer)
>
> cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup
> (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
>
> cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,
> relatime,memory)
>
> cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,
> relatime,cpuset)
>
> cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,
> relatime,blkio)
>
> cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,
> relatime,devices)
>
> cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,
> relatime,pids)
>
> configfs on /sys/kernel/config type configfs (rw,relatime)
>
> /dev/mapper/onn_labvmhostt05-ovirt--node--ng--4.1.1.1--0.20170406.0+1 on
> / type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=
> 512,swidth=512,noquota)
>
> rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>
> selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
>
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs
> (rw,relatime,fd=29,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
>
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
>
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
>
> mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
>
> nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
>
> /dev/mapper/361866da06a447c001fb358304710f8ea1 on /boot type ext4
> (rw,relatime,seclabel,data=ordered)
>
> /dev/mapper/onn_labvmhostt05-var on /var type xfs
> (rw,relatime,seclabel,attr2,discard,inode64,logbsize=256k,
> sunit=512,swidth=512,noquota)
>
> tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,
> seclabel,size=52808308k,mode=700)
>
> /dev/mapper/onn_labvmhostt05-ovirt--node--ng--4.1.1.1--0.20170406.0+1 on
> /tmp/tmp.O5CIMItAuZ type xfs (rw,relatime,seclabel,attr2,
> inode64,logbsize=256k,sunit=512,swidth=512,noquota)
>
>
>
> Should I just blow away the disks and install fresh?  Any last things to
> try?
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Tuesday, May 9, 2017 at 9:55 AM
> *To: *"Beckman, Daniel" 
> *Cc: *"sbona...@redhat.com" , Yedidyah Bar David <
> d...@redhat.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> My pleasure ! :)
>
>
>
> The line should be as follows:
>
>
>
> grubby --copy-default --add-kernel /boot/ovirt-node-ng-4.1.1.1-0.
> 20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64 --initrd
> /boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img
> --args rhgb crashkernel=auto root=/dev/onn_labvmhostt05/
> ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/
> ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap
> quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1 --title
> ovirt-node-ng-4.1.1.1-0.20170406.0 --bad-image-okay
>
>
>
>
>
>
>
> On Tue, May 9, 2017 at 5:49 PM, Beckman, Daniel <
> daniel.beck...@ingramcontent.com> wrote:
>
> Hi Yuval,
>
>
>
> Thanks for your patience. ☺
>
>
>
> I tried that – completely removing /boot/ovirt-node-ng-4.1.1.1* and
> performing the same previous steps. Before doing this I cleared out the
> imgbased.log file so it has only the latest entries.
>
>
>
> I’m assuming this is the command you referenced:
>
> [DEBUG] Calling binary: (['grubby', '--copy-default', '--add-kernel',
> '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64',
> '--initrd', 
> '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img',
> '--args', 'rhgb crashkernel=auto root=/dev/onn_labvmhostt05/
> ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/
> ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap
> quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1', '--title',
> 'ovirt-node-ng-4.1.1.1-0.20170406.0', '--bad-image-okay'],) {}
>
>
>
> I could use some help in getting the correct syntax. I’ve attached the
> latest imgbased.log file.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Tuesday, Ma

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-09 Thread Yuval Turgeman
My pleasure ! :)

The line should be as follows:

grubby --copy-default --add-kernel \
/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64 \
--initrd \
/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img \
--args "rhgb crashkernel=auto \
root=/dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 \
rd.lvm.lv=onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 \
rd.lvm.lv=onn_labvmhostt05/swap quiet \
img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1" --title \
ovirt-node-ng-4.1.1.1-0.20170406.0 --bad-image-okay



On Tue, May 9, 2017 at 5:49 PM, Beckman, Daniel <
daniel.beck...@ingramcontent.com> wrote:

> Hi Yuval,
>
>
>
> Thanks for your patience. ☺
>
>
>
> I tried that – completely removing /boot/ovirt-node-ng-4.1.1.1* and
> performing the same previous steps. Before doing this I cleared out the
> imgbased.log file so it has only the latest entries.
>
>
>
> I’m assuming this is the command you referenced:
>
> [DEBUG] Calling binary: (['grubby', '--copy-default', '--add-kernel',
> '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/vmlinuz-3.10.0-514.10.2.el7.x86_64',
> '--initrd', 
> '/boot/ovirt-node-ng-4.1.1.1-0.20170406.0+1/initramfs-3.10.0-514.10.2.el7.x86_64.img',
> '--args', 'rhgb crashkernel=auto root=/dev/onn_labvmhostt05/
> ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/
> ovirt-node-ng-4.1.1.1-0.20170406.0+1 rd.lvm.lv=onn_labvmhostt05/swap
> quiet img.bootid=ovirt-node-ng-4.1.1.1-0.20170406.0+1', '--title',
> 'ovirt-node-ng-4.1.1.1-0.20170406.0', '--bad-image-okay'],) {}
>
>
>
> I could use some help in getting the correct syntax. I’ve attached the
> latest imgbased.log file.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Tuesday, May 9, 2017 at 3:43 AM
>
> *To: *"Beckman, Daniel" 
> *Cc: *"sbona...@redhat.com" , Yedidyah Bar David <
> d...@redhat.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Hi, it seems like some stuff was left on /boot from previous attempts,
> making the boot setup stage fail, which means that the node is actually
> installed on the onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 LV
> but the kernel wasn't installed, making it impossible to boot to that LV.
>
> The way I see it, you could try to clean up the
> /boot/ovirt-node-ng-4.1.1.1* files and retry everything just like you did
> (umount, lvremove, reinstall rpms, etc), but the thing is that in one of
> your runs, there's a 'grubby' line that failed and stderr is not shown in
> the log.  Try to follow the steps above and retry, and if grubby fails
> again (you can see it in the last few lines of the imgbased.log), you could
> try to manually run that grubby line from the log and send its output and
> imgbased.log so we could continue from there.
>
>
>
> Thanks,
>
> Yuval.
>
>
>
>
>
> On Mon, May 8, 2017 at 11:14 PM, Beckman, Daniel <
> daniel.beck...@ingramcontent.com> wrote:
>
> Hello,
>
>
>
> I was originally on 4.0.3 (from the ISO). The two 4.1.1 layers were not
> mounted; I went ahead and used lvremove to remove them. I removed all three
> packages,  cleared out /etc/yum.repos.d, re-added ovirt-release41 from the
> URL, and then re-installed ovirt-node-ng-image-update, which installed
> ovirt-node-ng-image as a dependency. The install did not report any errors.
> It put the 4.1.1 layers back in. I’ve uploaded the latest
> /tmp/imgbased.log.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Friday, May 5, 2017 at 12:32 PM
>
>
> *To: *"Beckman, Daniel" 
> *Cc: *"sbona...@redhat.com" , Yedidyah Bar David <
> d...@redhat.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Were you on 4.0.3 or 4.0.6 ?  Anyway, try to umount and lvremove the two
> 4.1.1 layers, then redo the steps from the last email.  If it doesnt work
> please resend /tmp/imgbased.log
>
>
>
> Thanks,
>
> Yuval
>
>
>
> On May 5, 2017 6:17 PM, "Beckman, Daniel"  com> wrote:
>
> Here is output of ‘lvs –a’:
>
>
>
>   LV   VG   Attr   LSize
> Pool   Origin Data%  Meta%  Move Log Cpy%Sync
> Convert
>
>   [lvol0_pmspare]  onn_labvmhostt05 ewi---  88.00m
>
>
>
>   ovirt-node-ng-4.

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-09 Thread Yuval Turgeman
Hi, it seems like some stuff was left on /boot from previous attempts,
making the boot setup stage fail, which means that the node is actually
installed on the onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 LV
but the kernel wasn't installed, making it impossible to boot to that LV.
The way I see it, you could try to clean up the
/boot/ovirt-node-ng-4.1.1.1* files and retry everything just like you did
(umount, lvremove, reinstall rpms, etc), but the thing is that in one of
your runs, there's a 'grubby' line that failed and stderr is not shown in
the log.  Try to follow the steps above and retry, and if grubby fails
again (you can see it in the last few lines of the imgbased.log), you could
try to manually run that grubby line from the log and send its output and
imgbased.log so we could continue from there.
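
(Roughly, the retry would look like the following - the VG/LV names are
taken from your earlier 'lvs -a' output, so double-check them before
removing anything:)

  umount /dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 2>/dev/null
  lvremove onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1
  lvremove onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0
  rm -rf /boot/ovirt-node-ng-4.1.1.1*
  yum reinstall ovirt-node-ng-image-update ovirt-node-ng-image
  tail -f /tmp/imgbased.log   # watch for the grubby call at the end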

Thanks,
Yuval.


On Mon, May 8, 2017 at 11:14 PM, Beckman, Daniel <
daniel.beck...@ingramcontent.com> wrote:

> Hello,
>
>
>
> I was originally on 4.0.3 (from the ISO). The two 4.1.1 layers were not
> mounted; I went ahead and used lvremove to remove them. I removed all three
> packages,  cleared out /etc/yum.repos.d, re-added ovirt-release41 from the
> URL, and then re-installed ovirt-node-ng-image-update, which installed
> ovirt-node-ng-image as a dependency. The install did not report any errors.
> It put the 4.1.1 layers back in. I’ve uploaded the latest
> /tmp/imgbased.log.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Friday, May 5, 2017 at 12:32 PM
>
> *To: *"Beckman, Daniel" 
> *Cc: *"sbona...@redhat.com" , Yedidyah Bar David <
> d...@redhat.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Were you on 4.0.3 or 4.0.6 ?  Anyway, try to umount and lvremove the two
> 4.1.1 layers, then redo the steps from the last email.  If it doesnt work
> please resend /tmp/imgbased.log
>
>
>
> Thanks,
>
> Yuval
>
>
>
> On May 5, 2017 6:17 PM, "Beckman, Daniel"  com> wrote:
>
> Here is output of ‘lvs –a’:
>
>
>
>   LV   VG   Attr   LSize
> Pool   Origin Data%  Meta%  Move Log Cpy%Sync
> Convert
>
>   [lvol0_pmspare]  onn_labvmhostt05 ewi---  88.00m
>
>
>
>   ovirt-node-ng-4.0.3-0.20160830.0 onn_labvmhostt05 Vwi---tz-k
> 335.92g pool00 root
>
>
>   ovirt-node-ng-4.0.3-0.20160830.0+1   onn_labvmhostt05 Vwi-aotz--
> 335.92g pool00 ovirt-node-ng-4.0.3-0.20160830.0   1.26
>
>
>   ovirt-node-ng-4.1.1.1-0.20170406.0   onn_labvmhostt05 Vri---tz-k
> 335.92g pool00
>
>
>   ovirt-node-ng-4.1.1.1-0.20170406.0+1 onn_labvmhostt05 Vwi---tz--
> 335.92g pool00 ovirt-node-ng-4.1.1.1-0.20170406.0
>
>
>   pool00   onn_labvmhostt05 twi-aotz--
> 350.96g   2.53   0.17
>
>
>   [pool00_tdata]   onn_labvmhostt05 Twi-ao
> 350.96g
>
>
>   [pool00_tmeta]   onn_labvmhostt05 ewi-ao   1.00g
>
>
>
>   root     onn_labvmhostt05 Vwi---tz--
> 335.92g pool00
>
>
>   swap onn_labvmhostt05 -wi-ao   4.00g
>
>
>
>   var  onn_labvmhostt05 Vwi-aotz--  15.00g
> pool008.47
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Thursday, May 4, 2017 at 4:18 PM
> *To: *"Beckman, Daniel" 
> *Cc: *"sbona...@redhat.com" , Yedidyah Bar David <
> d...@redhat.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> what does `lvs -a` show ?
>
>
>
> On May 4, 2017 21:50, "Beckman, Daniel" 
> wrote:
>
> Hi Yuval,
>
>
>
> All three of those packages (ovirt-node-ng-image-update,
> ovirt-node-ng-image, ovirt-release41) were already installed. So I ran a
> ‘yum remove’ on all of them, removed everything from /etc/yum.repos.d,
> installed the release RPM, then installed the other two packages. Here’s
> the installation:
>
>
>
> 
> 
> 
> ==
>
> Package
> Arch  Version
> RepositorySize
>
> 

Re: [ovirt-users] New oVirt Node install on oVirt Cluster 4.0.5 - How can I install oVirt Node with same 4.0.5 version ???

2017-05-05 Thread Yuval Turgeman
I take it updating everything to 4.0.6 is not an option?

Thanks,
Yuval

On May 5, 2017 6:16 PM, "Rogério Ceni Coelho" 
wrote:

> I can think of two ways. Please let me know if either has a chance of
> going OK.
>
> First, download ovirt-node-ng-image-update and ovirt-node-ng-image from
> the 4.0.5 version and run yum localinstall.
>
> Second, create an rpm list from another 4.0.5 node, diff it against my
> 4.0.3 node, and use the diff to download packages from the 4.0.5 version
> and run yum localinstall.
>
> My concern with this is that I cannot upgrade oVirt Engine every time I
> need to install a new oVirt Node.
>
> Thanks.
>
> Em sex, 5 de mai de 2017 às 11:26, Rogério Ceni Coelho <
> rogeriocenicoe...@gmail.com> escreveu:
>
>> Hi oVirt Troopers,
>>
>> I have two segregated oVirt clusters running 4.0.5 (DEV and PROD
>> environments).
>>
>> Now I need to install a new oVirt Node server (Dell PowerEdge M620), but
>> I see that a 4.0.5 ISO does not exist at http://resources.ovirt.org/
>> pub/ovirt-4.0/iso/ovirt-node-ng-installer/ , only 4.0.3 and 4.0.6.
>>
>> How can I install this new server and get to the same version as all 20
>> others?
>>
>> Is there a way to install 4.0.3 and update to 4.0.5 only?
>>
>> Thanks in advance.
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-05 Thread Yuval Turgeman
Were you on 4.0.3 or 4.0.6?  Anyway, try to umount and lvremove the two
4.1.1 layers, then redo the steps from the last email.  If it doesn't work,
please resend /tmp/imgbased.log.

Thanks,
Yuval

On May 5, 2017 6:17 PM, "Beckman, Daniel" 
wrote:

> Here is output of ‘lvs –a’:
>
>
>
>   LV   VG   Attr   LSize
> Pool   Origin Data%  Meta%  Move Log Cpy%Sync
> Convert
>
>   [lvol0_pmspare]  onn_labvmhostt05 ewi---  88.00m
>
>
>
>   ovirt-node-ng-4.0.3-0.20160830.0 onn_labvmhostt05 Vwi---tz-k
> 335.92g pool00 root
>
>
>   ovirt-node-ng-4.0.3-0.20160830.0+1   onn_labvmhostt05 Vwi-aotz--
> 335.92g pool00 ovirt-node-ng-4.0.3-0.20160830.0   1.26
>
>
>   ovirt-node-ng-4.1.1.1-0.20170406.0   onn_labvmhostt05 Vri---tz-k
> 335.92g pool00
>
>
>   ovirt-node-ng-4.1.1.1-0.20170406.0+1 onn_labvmhostt05 Vwi---tz--
> 335.92g pool00 ovirt-node-ng-4.1.1.1-0.20170406.0
>
>
>   pool00   onn_labvmhostt05 twi-aotz--
> 350.96g   2.53   0.17
>
>
>   [pool00_tdata]   onn_labvmhostt05 Twi-ao
> 350.96g
>
>
>   [pool00_tmeta]   onn_labvmhostt05 ewi-ao   1.00g
>
>
>
>   root onn_labvmhostt05 Vwi---tz--
> 335.92g pool00
>
>
>   swap onn_labvmhostt05 -wi-ao   4.00g
>
>
>
>   var      onn_labvmhostt05 Vwi-aotz--  15.00g
> pool008.47
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Thursday, May 4, 2017 at 4:18 PM
> *To: *"Beckman, Daniel" 
> *Cc: *"sbona...@redhat.com" , Yedidyah Bar David <
> d...@redhat.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> what does `lvs -a` show ?
>
>
>
> On May 4, 2017 21:50, "Beckman, Daniel" 
> wrote:
>
> Hi Yuval,
>
>
>
> All three of those packages (ovirt-node-ng-image-update,
> ovirt-node-ng-image, ovirt-release41) were already installed. So I ran a
> ‘yum remove’ on all of them, removed everything from /etc/yum.repos.d,
> installed the release RPM, then installed the other two packages. Here’s
> the installation:
>
>
>
> 
> 
> 
> ==
>
> Package
> Arch  Version
> RepositorySize
>
> 
> 
> 
> ==
>
> Installing:
>
> ovirt-node-ng-image-update
> noarch4.1.1.1-1.el7.centos
> ovirt-4.13.8 k
>
> Installing for dependencies:
>
> ovirt-node-ng-image
> noarch4.1.1.1-1.el7.centos
>ovirt-4.1526 M
>
>
>
> Transaction Summary
>
> 
> 
> 
> ==
>
> Install  1 Package (+1 Dependent package)
>
>
>
> Total download size: 526 M
>
> Installed size: 526 M
>
> Is this ok [y/d/N]: y
>
> Downloading packages:
>
> (1/2): ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch.rpm
>   
>|
> 3.8 kB  00:00:00
>
> (2/2): ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch.rpm
>
> | 526 MB  00:01:55
>
> 
> 
> 
> --
>
> Total
>
>   4.6 MB/s | 526 MB
> 00:01:55
>
> Running transaction check
>
> Running transaction test
>
> Transaction test succeeded
>
> Running transaction
>
>   Installing : ovirt-node-ng-image-4.1.1.1-1.
> el7.centos.noarch
>
> 1/2
>
>   Installing : ovirt-node-ng-image-update-4.1.1.1-1.el7.ce

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-04 Thread Yuval Turgeman
What does `lvs -a` show?

On May 4, 2017 21:50, "Beckman, Daniel" 
wrote:

> Hi Yuval,
>
>
>
> All three of those packages (ovirt-node-ng-image-update,
> ovirt-node-ng-image, ovirt-release41) were already installed. So I ran a
> ‘yum remove’ on all of them, removed everything from /etc/yum.repos.d,
> installed the release RPM, then installed the other two packages. Here’s
> the installation:
>
>
>
> ================================================================================
>  Package                     Arch    Version               Repository    Size
> ================================================================================
> Installing:
>  ovirt-node-ng-image-update  noarch  4.1.1.1-1.el7.centos  ovirt-4.1    3.8 k
> Installing for dependencies:
>  ovirt-node-ng-image         noarch  4.1.1.1-1.el7.centos  ovirt-4.1    526 M
>
> Transaction Summary
> ================================================================================
> Install  1 Package (+1 Dependent package)
>
> Total download size: 526 M
> Installed size: 526 M
> Is this ok [y/d/N]: y
>
> Downloading packages:
> (1/2): ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch.rpm | 3.8 kB  00:00:00
> (2/2): ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch.rpm        | 526 MB  00:01:55
> --------------------------------------------------------------------------------
> Total                                                   4.6 MB/s | 526 MB  00:01:55
>
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
>   Installing : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch          1/2
>   Installing : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch   2/2
>
> mount: special device /dev/onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 does not exist
>
> rm: cannot remove ‘/tmp/tmp.uEAD6kCtlR/usr/share/imgbased/*image-update*.rpm’: No such file or directory
>
> umount: /tmp/tmp.uEAD6kCtlR: not mounted
>
>   Verifying  : ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch   1/2
>   Verifying  : ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch          2/2
>
>
>
> Installed:
>
>   ovirt-node-ng-image-update.noarch 0:4.1.1.1-1.el7.centos
>
>
>
>
>
> Dependency Installed:
>
>   ovirt-node-ng-image.noarch 0:4.1.1.1-1.el7.centos
>
>
>
>
>
> Complete!
>
>
>
> Also, note output of ‘nodectl check’:
>
> [root@labvmhostt05 yum.repos.d]# nodectl check
>
> Status: FAILED
>
> Bootloader ... FAILED - It looks like there are no valid bootloader
> entries. Please ensure this is fixed before rebooting.
>
>   Layer boot entries ... FAILED - No bootloader entries which point to
> imgbased layers
>
>   Valid boot entries ... FAILED - No valid boot entries for imgbased
> layers or non-imgbased layers
>
> Mount points ... OK
>
>   Separate /var ... OK
>
>   Discard is used ... OK
>
> Basic storage ... OK
>
>   Initialized VG ... OK
>
>   Initialized Thin Pool ... OK
>
>   Initialized LVs ... OK
>
> Thin storage ... OK
>
>   Checking available space in thinpool ... OK
>
>   Checking thinpool auto-extend ... OK
>
> vdsmd ... OK
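The three bootloader failures above concern grub entries rather than LVM;
on el7 they can be inspected with stock tooling. The expectation in the
comment is an assumption based on how imgbased names its layers:

    # the entries listed should reference ovirt-node-ng-* layer LVs
    grubby --info=ALL | grep -E '^(title|kernel)'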
>
>
>
> I’ll attach the /tmp/imgbased.log file.
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Wednesday, May 3, 2017 at 1:23 PM
> *To: *"Beckman, Daniel" 
> *Cc: *"users@ovirt.org" , Yedidyah Bar David <
> d...@redhat.com>, "sbona...@redhat.com" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
>

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-03 Thread Yuval Turgeman
Hi, you can try the following:

1.  Make sure you have a /etc/iscsi/initiatorname.iscsi file.  If you
don't, create an empty one (to avoid a migration bug)
2.  Install the ovirt-release41 rpm (
http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm)
3.  yum update ovirt-node-ng-image-update
4.  Make sure only 2 rpms are about to be installed
(ovirt-node-ng-image-update and ovirt-node-ng-image) ~530M

Save /tmp/imgbased.log in case something fails so we could take a look :)
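As a concrete sketch, the steps above translate to a shell session like the
following (the file check mirrors step 1; the comment on the last command
restates step 4; nothing here goes beyond the steps as written):

    # 1. avoid the migration bug if the initiator file is missing
    [ -f /etc/iscsi/initiatorname.iscsi ] || touch /etc/iscsi/initiatorname.iscsi
    # 2. install the 4.1 release rpm
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
    # 3. pull the image update -- only ovirt-node-ng-image-update and
    #    ovirt-node-ng-image (~530M total) should appear in the transaction
    yum update ovirt-node-ng-image-update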

Thanks,
Yuval




On Wed, May 3, 2017 at 6:23 PM, Beckman, Daniel <
daniel.beck...@ingramcontent.com> wrote:

> I don’t recall doing anything special with the repositories, apart from
> (recently) adding the oVirt 4.1 repository. These hosts were originally
> deployed by downloading the oVirt Node 4.0.3 ISO image.
>
>
>
> How should the repos be set up? Is there an RPM I can download that will
> install the appropriate repos?
>
>
>
> Thanks,
>
> Daniel
>
>
>
> *From: *Yuval Turgeman 
> *Date: *Tuesday, May 2, 2017 at 2:56 PM
> *To: *"Beckman, Daniel" 
> *Cc: *"users@ovirt.org" , Yedidyah Bar David <
> d...@redhat.com>, "sbona...@redhat.com" 
> *Subject: *Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update
> oVirt Node (4.x) Hosts?
>
>
>
> Looks like your repos are not set up correctly.  oVirt Node is an image
> and the update is a complete image as well, containing a set of packages
> that were tested and known to work well together.  This means that when
> you yum update your system, a single "ovirt-node-ng-image-update" rpm
> should be installed instead of a list of packages like you mentioned.
> That's probably what messed things up.  How did you configure your repos?
>
>
>
> On May 1, 2017 6:15 PM, "Beckman, Daniel"  wrote:
>
> Hello,
>
> I’ve attached the log file for one of the hosts from
> /var/log/ovirt-engine/host-deploy.
>
> As to the manual update: yes, I ran it after removing the ovirt40 repo and
> adding ovirt41 repo. Here are the packages it updated:
> Updated     cockpit-ovirt-dashboard-0.10.6-1.4.2.el7.centos.noarch
> Update      0.10.7-0.0.16.el7.centos.noarch                       @ovirt-4.1
> Dep-Install collectd-5.7.0-2.el7.x86_64                           @centos-opstools-release
> Dep-Install collectd-disk-5.7.0-2.el7.x86_64                      @centos-opstools-release
> Dep-Install collectd-netlink-5.7.0-2.el7.x86_64                   @centos-opstools-release
> Dep-Install collectd-virt-5.7.0-2.el7.x86_64                      @centos-opstools-release
> Dep-Install collectd-write_http-5.7.0-2.el7.x86_64                @centos-opstools-release
> Dep-Install fluentd-0.12.26-2.el7.noarch                          @centos-opstools-release
> Dep-Install gdeploy-2.0.1-13.noarch                               @rnachimu-gdeploy
> Updated     glusterfs-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-api-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-cli-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-client-xlators-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-fuse-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-geo-replication-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-libs-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-rdma-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-server-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     imgbased-0.8.11-0.201612061451git1b9e081.el7.centos.noarch
> Update      0.9.23-1.el7.centos.noarch                            @ovirt-4.1
> Updated     ioprocess-0.16.1-1.el7.x86_64                         @ovirt-centos-ovirt41
> Update      0.17.0-1.201611101241.gitb7e353c.el7.centos.x86_64    @ovirt-4.1
> Dep-Install

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-05-02 Thread Yuval Turgeman
Looks like your repos are not set up correctly.  oVirt Node is an image and
the update is a complete image as well, containing a set of packages that
were tested and known to work well together.  This means that when you yum
update your system, a single "ovirt-node-ng-image-update" rpm should be
installed instead of a list of packages like you mentioned.  That's probably
what messed things up.  How did you configure your repos?
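A quick way to sanity-check this on a host, assuming only the oVirt 4.1
repos should be enabled (the commands are stock yum; the expected output is
an inference from the explanation above, not a guaranteed result):

    yum clean all
    yum repolist enabled                    # should list the ovirt-4.1 repos
    yum check-update | grep ovirt-node-ng   # expect only ovirt-node-ng-image-update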

On May 1, 2017 6:15 PM, "Beckman, Daniel" 
wrote:

> Hello,
>
> I’ve attached the log file for one of the hosts from
> /var/log/ovirt-engine/host-deploy.
>
> As to the manual update: yes, I ran it after removing the ovirt40 repo and
> adding ovirt41 repo. Here are the packages it updated:
> Updated     cockpit-ovirt-dashboard-0.10.6-1.4.2.el7.centos.noarch
> Update      0.10.7-0.0.16.el7.centos.noarch                       @ovirt-4.1
> Dep-Install collectd-5.7.0-2.el7.x86_64                           @centos-opstools-release
> Dep-Install collectd-disk-5.7.0-2.el7.x86_64                      @centos-opstools-release
> Dep-Install collectd-netlink-5.7.0-2.el7.x86_64                   @centos-opstools-release
> Dep-Install collectd-virt-5.7.0-2.el7.x86_64                      @centos-opstools-release
> Dep-Install collectd-write_http-5.7.0-2.el7.x86_64                @centos-opstools-release
> Dep-Install fluentd-0.12.26-2.el7.noarch                          @centos-opstools-release
> Dep-Install gdeploy-2.0.1-13.noarch                               @rnachimu-gdeploy
> Updated     glusterfs-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-api-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-cli-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-client-xlators-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-fuse-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-geo-replication-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-libs-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-rdma-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     glusterfs-server-3.7.20-1.el7.x86_64
> Update      3.8.11-1.el7.x86_64                                   @ovirt-4.1-centos-gluster38
> Updated     imgbased-0.8.11-0.201612061451git1b9e081.el7.centos.noarch
> Update      0.9.23-1.el7.centos.noarch                            @ovirt-4.1
> Updated     ioprocess-0.16.1-1.el7.x86_64                         @ovirt-centos-ovirt41
> Update      0.17.0-1.201611101241.gitb7e353c.el7.centos.x86_64    @ovirt-4.1
> Dep-Install libtomcrypt-1.17-23.el7.x86_64                        @ovirt-4.1-epel
> Dep-Install libtommath-0.42.0-4.el7.x86_64                        @ovirt-4.1-epel
> Dep-Install libtool-ltdl-2.4.2-22.el7_3.x86_64                    @updates
> Updated     mom-0.5.8-1.el7.centos.noarch                         @ovirt-4.1
> Update      0.5.9-1.el7.centos.noarch                             @ovirt-4.1
> Dep-Install net-snmp-1:5.7.2-24.el7_3.2.x86_64                    @updates
> Dep-Install net-snmp-agent-libs-1:5.7.2-24.el7_3.2.x86_64         @updates
> Updated     openvswitch-2.5.0-2.el7.x86_64
> Update      2.7.0-1.el7.centos.x86_64                             @ovirt-4.1
> Updated     otopi-1.5.2-1.el7.centos.noarch
> Update      1.6.1-1.el7.centos.noarch                             @ovirt-4.1
> Updated     ovirt-host-deploy-1.5.3-1.el7.centos.noarch
> Update      1.6.3-1.el7.centos.noarch                             @ovirt-4.1
> Updated     ovirt-hosted-engine-ha-2.0.6-1.el7.centos.noarch
> Update      2.1.0.5-1.el7.centos.noarch                           @ovirt-4.1
> Updated     ovirt-hosted-engine-setup-2.0.4.1-1.el7.centos.noarch
> Update      2.1.0.5-1.el7.centos.noarch                           @ovirt-4.1
> Updated     ovirt-imageio-common-0.4.0-1.el7.noarch
> Update      1.0.0-1.el7.noarch                                    @ovirt-centos-ovirt41
> Updated     ovirt-imageio-daemon-0.4.0-1.el7.noarch
> Update      1.0.0-1.el7.noarch                                    @ovirt-centos-ovirt41
> Updated     ovirt-node-ng-nodec

Re: [ovirt-users] Upgrade 4.0.6 to 4.1.1 -- How to Update oVirt Node (4.x) Hosts?

2017-04-30 Thread Yuval Turgeman
Looks like something went wrong during the update process, can you please
attach /tmp/imgbased.log ?
Regarding nodectl, I'm not sure, adding Ryan.  Basically, it's a Python 2.7
module; I'm not sure why it's running under Python 3.  How are you trying to
run this?

On Sun, Apr 30, 2017 at 8:44 AM, Yedidyah Bar David  wrote:

> On Thu, Apr 27, 2017 at 6:48 PM, Beckman, Daniel
>  wrote:
> > Didi,
> >
> > Thanks for the tip on the utilities – I’ll add that for future upgrades.
> Since you pointed that out,  I’m reminded that in a previous upgrade
> (following one of the developer’s suggestions) I had added this:
> > /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf
> > So I guess that’s why my https certificate was preserved.
>
> Good.
>
> >
> > As to the documentation, I did submit a pull request (#923) and
> ‘JohnMarksRH’ added that along with some additional edits. I’ll move any
> continuing discussion on that to another thread. And yes, the RHV
> documentation is excellent and I’ve often turned to it. It’s too bad some
> of the effort ends up being duplicated. Anyway….
> >
> > Here’s what I did with one of the oVirt nodes:
> > yum -y remove ovirt-release40
> > yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
> > cd /etc/yum.repos.d
> > # ls
> > CentOS-Base.repo       CentOS-fasttrack.repo         CentOS-Sources.repo       cockpit-preview-epel-7.repo
> > CentOS-CR.repo         CentOS-fasttrack.repo.rpmnew  CentOS-Vault.repo         ovirt-4.0-dependencies.repo
> > CentOS-Debuginfo.repo  CentOS-Media.repo             CentOS-Vault.repo.rpmnew  ovirt-4.0.repo
> > rm -f ovirt-4.0*
> >
> > After doing that, when I check again in the admin GUI for an upgrade, it
> shows one available (4.1.1.1). From the GUI I tell it to upgrade, and it
> runs along with no errors, seems to finish, and then reboots the host.
> >
> > When the host comes back up, it’s still running 4.0.6. When I check
> again for an available upgrade, it doesn’t see it available. I’m attaching
> the installation log that is referenced in Events in the GUI.
> >
> > If I go straight into the node and run ‘yum update’ and reboot, then it
> gets the latest 4.1.x image and the engine detects it as such.
>
> You mean you do that after the above (removing 4.0 repos, adding 4.1)?
>
> What packages did it update?
>
> Please check also time-wise nearby log files for this host in
> /var/log/ovirt-engine/host-deploy and share them.
> 'ovirt-host-mgmt*' is the result of checking for updates from the admin
> web ui.
>
> > But of course that’s not the ideal method. I used the manual method for
> the remaining hosts.
> >
> > I don’t know if this is related, but since the upgrade I’ve also noticed
> > an unfamiliar error when I log in directly to the engine host. (It’s a
> > standalone CentOS 7 VM running on a separate KVM host.) Here it is:
> >
> > nodectl must be run as root!
> > nodectl must be run as root!
> > This comes up when *any* user logs into the box. When I switch to root I
> > get this:
> > /bin/python3: Error while finding spec for 'nodectl.__main__' (<class 'ImportError'>: No module named 'nodectl')
> > /bin/python3: Error while finding spec for 'nodectl.__main__' (<class 'ImportError'>: No module named 'nodectl')
> > So it looks like it’s been invoked from here:
> > ls -llh /etc/profile.d/nodectl*
> > -rwxr-xr-x. 1 root root 13 Apr  6 06:46 /etc/profile.d/nodectl-motd.sh
> > -rwxr-xr-x. 1 root root 24 Apr  6 06:46 /etc/profile.d/nodectl-run-banner.sh
> > According to ‘yum whatprovides’ this appears to have been installed by
> > package “ovirt-node-ng-nodectl-4.1.0-0.20170406.0.el7.noarch”.
> >
> > Anyone else getting this? I can try fixing the python error by adding
> > the module, but I thought I’d report this first. Any suggestions as to
> > next steps?
>
> Adding Yuval for the node-specific issues.
>
> Best,
>
> >
> > Thanks
> > Daniel
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On 4/25/17, 2:01 AM, "Yedidyah Bar David"  wrote:
> >
> > On Tue, Apr 25, 2017 at 1:19 AM, Beckman, Daniel
> >  wrote:
> > > So I successfully upgraded my engine from 4.0.6 to 4.1.1 with no major
> > > issues.
> > >
> > >
> > >
> > > A nice thing I noticed was that my custom CA certificate for https on
> > > the admin and user portals wasn’t clobbered by setup.
> > >
> > >
> > >
> > > I did have to restore my custom settings for ISO uploader, log
> > > collector, and websocket proxy:
> > >
> > > cp /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf.<latest_timestamp> \
> > >    /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
> > >
> > > cp /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf.<latest_timestamp> \
> > >    /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
> > >
> > > cp /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf.<latest_timestamp> \
> > >    /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf
> >
> > The utilities 
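On the stray nodectl messages discussed above: since the engine VM is a
plain CentOS 7 host rather than an oVirt Node, one plausible cleanup -- an
assumption, not a verified procedure, so confirm nothing else depends on
the package first -- would be:

    rpm -qf /etc/profile.d/nodectl-motd.sh   # confirm the owning package
    yum remove ovirt-node-ng-nodectl         # only sensible on non-node hosts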

Re: [ovirt-users] Upgrade hypervisor to 4.1.1.1

2017-04-10 Thread Yuval Turgeman
restorecon on virtlogd.conf would be enough
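Spelled out, with the path taken from Misak's message below (restarting the
service afterwards is an assumption -- virtlogd may need a restart before
VM starts succeed again):

    restorecon -v /etc/libvirt/virtlogd.conf
    systemctl restart virtlogd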

On Apr 10, 2017 2:51 PM, "Misak Khachatryan"  wrote:

> Is it a node setup? Today I tried to upgrade my one-node cluster; after
> that, VMs failed to start, and it turned out that SELinux was preventing
> virtlogd from starting.
>
> ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
> semodule -i my-virtlogd.pp
> /sbin/restorecon -v /etc/libvirt/virtlogd.conf
>
> fixed things for me, YMMV.
>
>
> Best regards,
> Misak Khachatryan
>
> On Mon, Apr 10, 2017 at 2:03 PM, Sandro Bonazzola 
> wrote:
>
>> Can you please provide a full sos report from that host?
>>
>> On Sun, Apr 9, 2017 at 8:38 PM, Sandro Bonazzola 
>> wrote:
>>
>>> Adding node team.
>>>
>>> On 09/Apr/2017 15:43, "eric stam"  wrote:
>>>
>>> Yesterday I executed an upgrade on my hypervisor to version 4.1.1.1
>>> After the upgrade, it is impossible to start a virtual machine on it.
>>> The messages I found: Failed to connect socket to
>>> '/var/run/libvirt/virtlogd-sock': Connection refused
>>>
>>> [root@vm-1 log]# hosted-engine --vm-status | grep -i engine
>>>
>>> Engine status  : {"reason": "bad vm status",
>>> "health": "bad", "vm": "down", "detail": "down"}
>>>
>>> state=EngineUnexpectedlyDown
>>>
>>> The Red Hat version: CentOS Linux release 7.3.1611 (Core)
>>>
>>> Is this a known problem?
>>>
>>> Regards, Eric
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error on Node upgrade 2

2017-03-20 Thread Yuval Turgeman
Hi Fernando,

There was a problem with the version you're using, the details can be found
here [1].

Please try to use a newer release of node-ng-4.1 for the upgrade.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1427468


On Wed, Mar 15, 2017 at 8:50 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Well, this failed badly.
> On another try it did complete the upgrade to the newer version it had not
> been able to install before, but upon reboot several services failed to
> start and the host didn't come up. I then rebooted into the previous
> version.
>
> A complete re-install seems to be the only option.
> The upgrade process seems to need refining.
>
> Taking the opportunity: what is the procedure to remove from the Engine a
> host that will never come back up? The remove option seems to be grayed
> out when the server is offline or unreachable.
>
> Fernando
>
> On 15/03/2017 10:06, FERNANDO FREDIANI wrote:
>
> Find it attached Yuval.
>
> Fernando
>
>
> On 14/03/2017 18:21, Yuval Turgeman wrote:
>
> Adding Ryan, Fernando, can you please attach /tmp/imgbased.log ?
>
> On Mar 13, 2017 2:43 PM, "FERNANDO FREDIANI" 
> wrote:
>
>> Hi Yedidyah.
>> Running oVirt-Node *4.1.0-0.20170201.0+1* on the nodes and tried a
>> normal upgrade. It detected it has to upgrade to
>> *4.1.1-0.3.rc1.20170303133657.git20d3594.el7.centos* from the
>> ovirt-4.1-pre repository.
>>
>> The upgrade finished with the following problems:
>>
>> ...
>> Running transaction
>>   Updating   : ovirt-node-ng-image-4.1.1-0.3.rc1.20170303133657.git20d3594.el7.centos.noarch          1/4
>>   Updating   : ovirt-node-ng-image-update-4.1.1-0.3.rc1.20170303133657.git20d3594.el7.centos.noarch   2/4
>> mount: special device /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1 does not exist
>> cp: target ‘/tmp/tmp.N6JgSdFcu6/usr/share/imgbased/’ is not a directory
>> rm: cannot remove ‘/tmp/tmp.N6JgSdFcu6/usr/share/imgbased/*image-update*.rpm’: No such file or directory
>> umount: /tmp/tmp.N6JgSdFcu6: not mounted
>> ...
>>
>> So it seems it was not able to do it correctly.
>> This is the second time this has happened, and I had to manually remove
>> the packages containing the 4.1.1 version.
>>
>> Tried also:
>>
>> # lvdisplay | grep ovirt-node-ng-4.1.1-0.20170303.0+1
>>   LV Path    /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1
>>   LV Name    ovirt-node-ng-4.1.1-0.20170303.0+1
>> # fdisk -l /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1
>> fdisk: cannot open /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1: No
>> such file or directory
>> # vgchange -ay
>> # fdisk -l /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1 (then worked)
>> But then upon reboot it came back to 4.1.0 as if 4.1.1 never existed
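That `vgchange -ay` makes the device appear while the host still boots
4.1.0 suggests the layer was created without matching boot entries. A
hedged way to inspect this on a node, assuming the imgbased/nodectl
tooling is intact:

    imgbase layout   # lists base images and their +1 layers
    nodectl info     # shows the layers and boot entries the node knows about

If the 4.1.1 layer is missing from either, /tmp/imgbased.log is the place
to look, as requested in this thread.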
>>
>> Fernando
>>
>> On 12/03/2017 03:30, Yedidyah Bar David wrote:
>>
>> On Fri, Mar 10, 2017 at 2:37 PM, FERNANDO 
>> FREDIANI  wrote:
>>
>> I am not sure if another email I sent went through, but has anyone had
>> problems when upgrading a running oVirt-node-ng from 4.1.0 to 4.1.1?
>>
>> What kind of problems?
>>
>>
>> Is the only solution a complete reinstall of the node ?
>>
>> No, this should work.
>>
>> Best,
>>
>>
>> Thanks
>>
>> Fernando
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Error on Node upgrade 2

2017-03-14 Thread Yuval Turgeman
Adding Ryan, Fernando, can you please attach /tmp/imgbased.log ?

On Mar 13, 2017 2:43 PM, "FERNANDO FREDIANI" 
wrote:

> Hi Yedidyah.
> Running oVirt-Node *4.1.0--0.20170201.0+1* on the nodes and tried a
> normal upgrade. It detected it has to upgrade to
> *4.1.1-0.3.rc1.20170303133657.git20d3594.el7.centos* from the
> ovirt-4.1-pre repository.
>
> The upgrade finished with the following problems:
>
> ...
> Running transaction
>   Updating   : ovirt-node-ng-image-4.1.1-0.3.rc1.20170303133657.git20d3594.el7.centos.noarch          1/4
>   Updating   : ovirt-node-ng-image-update-4.1.1-0.3.rc1.20170303133657.git20d3594.el7.centos.noarch   2/4
> mount: special device /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1
> does not exist
> cp: target ‘/tmp/tmp.N6JgSdFcu6/usr/share/imgbased/’ is not a directory
> rm: cannot remove ‘/tmp/tmp.N6JgSdFcu6/usr/share/imgbased/*image-update*.rpm’: No such file or directory
> umount: /tmp/tmp.N6JgSdFcu6: not mounted
> ...
>
> So it seems it was not able to do it correctly.
> This is the second time this has happened, and I had to manually remove
> the packages containing the 4.1.1 version.
>
> Tried also:
>
> # lvdisplay | grep ovirt-node-ng-4.1.1-0.20170303.0+1
>   LV Path    /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1
>   LV Name    ovirt-node-ng-4.1.1-0.20170303.0+1
> # fdisk -l /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1
> fdisk: cannot open /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1: No
> such file or directory
> # vgchange -ay
> # fdisk -l /dev/onn_kvm01/ovirt-node-ng-4.1.1-0.20170303.0+1 (then worked)
> But then upon reboot it came back to 4.1.0 as if 4.1.1 never existed
>
> Fernando
>
> On 12/03/2017 03:30, Yedidyah Bar David wrote:
>
> On Fri, Mar 10, 2017 at 2:37 PM, FERNANDO FREDIANI 
>  wrote:
>
> I am not sure if another email I sent went through, but has anyone had
> problems when upgrading a running oVirt-node-ng from 4.1.0 to 4.1.1?
>
>
> What kind of problems?
>
>
>
> Is the only solution a complete reinstall of the node ?
>
>
> No, this should work.
>
> Best,
>
>
>
> Thanks
>
> Fernando
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrading oVirt-Node-NG from 4.0.3 to 4.0.6

2017-02-09 Thread Yuval Turgeman
Hi, so 4.0.6 was downloaded but it is not upgrading the node?
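A hedged sketch of how such a node would normally pick up a newer image,
assuming the 4.0 release rpm/repos are already in place (the command is the
standard image-update path described elsewhere in this thread archive; the
nodectl check afterwards is an assumption):

    yum update ovirt-node-ng-image-update
    # after the subsequent reboot:
    nodectl info   # should show the 4.0.6 layer as current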

On Fri, Feb 3, 2017 at 11:58 PM, Thomas Kendall  wrote:

> We recently migrated from 3.6 to 4.0, but I'm a little confused about how
> to keep the nodes up to date. I see the auto-updates come through for my
> 4.0.3 nodes, but they don't seem to upgrade them to the newer 4.0.x
> releases.
>
> Is there a way to do this upgrade?  I have two nodes that were installed
> with 4.0.3, and I would like to bring them up to the same version as
> everything else.
>
> For reference, the 4.0.3 nodes were built off the 4.0-2016083011 iso, and
> the 4.0.6 nodes were built off the 4.0-2017011712 iso.
>
> Thanks,
> Thomas
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users