Hello all,
I have a few nodes with local storage, and more than once I've considered
exporting that storage via NFS as a way to migrate onto shared storage.
I have thought of this post on the ovirt-users list many times:
https://lists.ovirt.org/pipermail/users/2017-December/085521.html
Is this process still the recommended approach?
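As a rough sketch of what that export would involve - assuming a hypothetical local image directory at /data/images, since oVirt expects NFS storage to be owned by vdsm:kvm (36:36):

# /etc/exports - hypothetical entry exporting the local image directory
# so it can later be attached as an NFS storage domain
/data/images  *(rw,sync,no_subtree_check,anonuid=36,anongid=36)

# Ownership oVirt expects on the exported tree:
chown -R 36:36 /data/images
# Publish the export:
exportfs -ra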
mount point: [SWAP]
devices: /dev/vda2
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/vda2$|", "r|.*|" ]
This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.
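That output looks like vdsm-tool's LVM filter helper; a minimal sketch of how it is typically driven (the -y flag to apply non-interactively is per the oVirt docs - verify on your version):

# Print the recommended filter without changing anything:
vdsm-tool config-lvm-filter
# Apply the recommendation:
vdsm-tool config-lvm-filter -y
# Or add the line by hand to the devices { } section of /etc/lvm/lvm.conf:
#   filter = [ "a|^/dev/vda2$|", "r|.*|" ]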
ter" which I suspect is related.
Thanks
Matt
Hello,
I'm running oVirt with several data centers, some with NFS storage and
some with local storage.
I had problems in the past with a large pool and local storage. The
problem was that nodectl showed the pool as too full (I think >80%), but
it was only the VM images that made the pool "full".
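A quick way to see what is actually consuming the pool - a sketch assuming the default oVirt Node VG name onn, adjust for your host:

# data_percent on the pool row is the figure nodectl compares
# against its fullness threshold:
lvs -a -o lv_name,pool_lv,lv_size,data_percent,metadata_percent onn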
nes) that we can look at it?
Thanks,
Yuval.
On Tue, Aug 21, 2018 at 10:11 PM, Matt Simonsen <m...@khoza.com> wrote:
I ran this on a host that has the same exact failing upgrade. It
returned with no output.
I'm expecting if I manually remove the /boot ker
exact error in
the log.
Thanks,
Yuval.
On Tue, Aug 21, 2018 at 12:04 AM, Matt Simonsen <m...@khoza.com> wrote:
Hello,
I replied to a different email in this thread, noting that I may have a
workaround for this issue.
I did run this on a server that h
at 2:26 AM, Matt Simonsen <m...@khoza.com> wrote:
ed this.
Thanks,
Matt
PS - I will also be in #ovirt for about 3 hours if anyone would like to
work through this with me there.
On 08/20/2018 08:19 AM, Yuval Turgeman wrote:
Hi Matt,
Can you attach the output of the following command?
# semanage permissive -a setfiles_t
Thanks,
Yuval.
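For anyone following along: the command above puts the setfiles_t SELinux domain into permissive mode. The companion policycoreutils commands, shown here only as a reference, are:

# Confirm the domain is now listed as permissive:
semanage permissive -l
# Remove the exemption again once the upgrade issue is resolved:
semanage permissive -d setfiles_t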
On Fri, Aug 17, 2018, Matt Simonsen wrote:
I am having trouble upgrading to 4.2.4 or 4.2.5, based on errors that I
am hoping the attached log file will clarify for somebody familiar with
the upgrade process.
Thanks,
Matt
On Thu, Aug 16, 2018, 4:49 PM Matt Simonsen <m...@khoza.com> wrote:
Hello all,
I've emailed about similar trouble with an oVirt Node upgrade using the
ISO install. I've attached the /tmp/imgbased.log file in hopes it will
help give a clue as to the cause of the trouble.
Since these use NFS storage I can rebuild, but would like to know,
ideally, what caused the upgrade to fail.
g from the thinpool. If you do decide to run this, please send
the output of lvs again.
Also, are you on #ovirt?
Thanks,
Yuval.
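For context, the cleanup being discussed is removing a stale node layer from the thin pool. A sketch, assuming the default onn VG name and using a layer name from the listing later in this thread - double-check names before deleting anything:

# Inspect the layers sitting in the thin pool first:
lvs -o lv_name,pool_lv,lv_size,data_percent onn
# Remove one stale layer (example name only):
lvremove onn/ovirt-node-ng-4.2.3-0.20180524.0+1
# Re-check pool usage afterwards:
lvs -o lv_name,data_percent onn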
On Tue, Jul 3, 2018 at 9:00 PM, Matt Simonsen <m...@khoza.com> wrote:
Thank you again for the assistance with this issue.
Below is the requested output.
show imgbased-pool (defined at
/etc/lvm/profile/imgbased-pool.profile)
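For reference, that profile typically carries the thin pool autoextend settings; the exact values below are an assumption based on imgbased defaults, so check the file on the host:

# /etc/lvm/profile/imgbased-pool.profile (assumed typical contents)
activation {
    thin_pool_autoextend_threshold=80
    thin_pool_autoextend_percent=20
}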
On Tue, Jul 3, 2018 at 8:55 AM, Yedidyah Bar David <d...@redhat.com> wrote:
On Mon, Jul 2, 2018 at 7:54 PM, Matt Simonsen <m...@khoza.com> wrote:
>
> This error adds some cl
On 07/02/2018 12:55 PM, Yuval Turgeman wrote:
Are you mounted with discard? Perhaps fstrim?
I believe that I have all the default options, and I have one extra
partition for images.
#
# /etc/fstab
# Created by anaconda on Sat Oct 31 18:04:29 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
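The two approaches Yuval is asking about look roughly like this; the device and mount point below are hypothetical placeholders:

# Option 1: continuous discard via a mount option in /etc/fstab:
/dev/mapper/onn-images  /var/images  ext4  defaults,discard  0 2

# Option 2: periodic trim instead of the mount option:
fstrim -v /var/images
# or simply enable the stock timer:
systemctl enable --now fstrim.timer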
2018-07-02 12:21:10,069 [DEBUG] (MainThread) Calling binary: (['umount',
'-l', u'/tmp/mnt.ZYOjC'],) {}
Thanks
Matt
On 07/02/2018 10:55 AM, Yuval Turgeman wrote:
Not in front of my laptop, so it's a little hard to read, but does it
say 8G free on the VG?
4.2.4-0.20180626.0:
ovirt-node-ng-4.2.4-0.20180626.0+1
ovirt-node-ng-4.2.3.1-0.20180530.0:
ovirt-node-ng-4.2.3.1-0.20180530.0+1
ovirt-node-ng-4.2.3-0.20180524.0:
ovirt-node-ng-4.2.3-0.20180524.0+1
ovirt-node-ng-4.2.1.1-0.20180223.0:
ovirt-node-ng-4.2
Hello,
I ran yum updates on 2 of my oVirt 4.2.3 nodes running the prebuilt Node
platform, and it doesn't appear the updates worked.
[root@node6-g8-h4 ~]# yum update
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
: package_upload, product-id, search-disabled-repos
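On the Node image platform a few standard first checks after a failed update - nodectl ships with oVirt Node, and the image update itself arrives as a single package rather than many RPMs:

# Verify layer and bootloader health:
nodectl check
# Show installed and current layers:
nodectl info
# Pull the image update package:
yum update ovirt-node-ng-image-update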
Hello,
We have a development box with local storage, running oVirt Node 4.1.
It appears that using the admin interface on port 9090 I can resize a
live partition to a smaller size.
Our storage is a separate LVM partition, ext4 formatted.
My question is, both theoretically and practically, if a
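One caution worth adding here: ext4 cannot be shrunk while mounted, so a UI offering a live shrink should be treated carefully. A sketch of the offline procedure, with hypothetical volume names:

# Shrinking ext4 requires the filesystem to be unmounted:
umount /var/images
e2fsck -f /dev/onn/images
# --resizefs shrinks the filesystem and the LV together:
lvreduce --resizefs -L 200G /dev/onn/images
mount /var/images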
Doh! Problem solved. Well at least I found it on my own...
The date on the server was wrong, and certs were silently failing.
Matt
On 03/08/2018 04:16 PM, Matt Simonsen wrote:
I installed based on an older Node Next DVD (4.1.7) that has worked in
the past, but it doesn't appear to be working when I add it to a cluster.
The installer says it cannot queue package iproute.
Is there a repo down or that has changed? Thanks for any suggestions.
It appears yum is also broken.
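A couple of quick first checks for a suspected repo problem - generic yum diagnostics, nothing oVirt-specific:

# Drop cached metadata and see whether the repos answer at all:
yum clean all
yum repolist -v
# A wrong system clock can make TLS certificate checks fail silently:
date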
Hello all,
This may not be oVirt specific (but it may be) so thank you in advance
for any assistance.
I have a system running oVirt Node Next 4.1.9 that was installed
to /dev/sda.
I had a separate RAID volume, /dev/sdb, that should not have been used,
but now that the operating system
On 01/30/2018 03:43 PM, Christopher Cox wrote:
So, you're saying you export to an Export Domain (NFS), detach it, then
rsync that somewhere else (a different NFS system), attach it as an
Export (import) Domain in a different datacenter, and import? Sounds
like it should work to me.
Hello all,
We have several oVirt data centers, mostly using oVirt 4.1.9 and
NFS-backed storage.
I'm planning to move what will eventually be an exported VM from one
physical location to another.
Is there any reason it would be problematic to export the image and then
use rsync to
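The transfer being asked about would look roughly like this. The paths and remote host are hypothetical; the flags keep sparse files sparse and preserve the numeric vdsm:kvm (36:36) ownership oVirt expects:

# Copy the export domain tree to the remote NFS server:
rsync -avS --numeric-ids /exports/export-domain/ \
    remote-host:/exports/export-domain/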
On 12/15/2017 03:06 AM, Simone Tiraboschi wrote:
On Fri, Dec 15, 2017 at 4:45 AM, Donny Davis <do...@fortnebula.com> wrote:
Have you gotten an image update yet?
On Thu, Dec 14, 2017 at 8:08 PM, Matt Simonsen <m...@khoza.com> wrote:
Hello all,
I read at https://www.ovirt.org/develop/projects/node/troubleshooting/
that "Changes made from the command line are done at your own risk.
Making changes has the potential to leave your system in an unusable
state." It seems clear that RPMs should not be installed.
Is this accurate?
Hello all,
I read that with Gluster hyper-convergence the engine must reside on
the same LAN as the nodes. I guess this makes sense by definition -
i.e., using Gluster storage and replicating Gluster bricks across the
web sounds awful.
This got me wondering about best practices for the e