On Mon, Oct 12, 2020 at 9:33 PM Amit Bawer wrote:
>
>
> On Mon, Oct 12, 2020 at 9:12 PM Lee Hanel wrote:
>
>> my /etc/exports looks like:
>>
>> (rw,async,no_wdelay,crossmnt,insecure,no_root_squash,insecure_locks,sec=sys,anonuid=1025,anongid=100)
>>
> The a
to have them for the nfs shares for ovirt?
I'd suggest specifying the exports for oVirt on their own lines:
/export/path1 *(rw,sync,no_root_squash)
/export/path2 *(rw,sync,no_root_squash)
...
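A fuller sketch of such an exports file; the paths and the extra options here are illustrative, not taken from the original message, and on a real server the target would be /etc/exports followed by exportfs:

```shell
# Append hypothetical export lines, one per oVirt storage domain, to a
# staging copy of the exports file (on a real NFS server this would be
# /etc/exports, applied afterwards with `exportfs -ra`).
cat >> ./exports.example <<'EOF'
/export/path1 *(rw,sync,no_root_squash,no_subtree_check)
/export/path2 *(rw,sync,no_root_squash,no_subtree_check)
EOF
cat ./exports.example
```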
> also to note, as the vdsm user I can create files/directories on the share.
>
>
> On Mon, Oct 12
On Mon, Oct 12, 2020 at 7:47 PM wrote:
> ok, I think that the selinux context might be wrong? but I saw nothing
> in the audit logs about it.
>
> drwxr-xr-x. 1 vdsm kvm system_u:object_r:nfs_t:s0 40 Oct 8 17:19 /data
>
> I don't see in the ovirt docs what the selinux context needs to be. Is
>
On Mon, Oct 12, 2020 at 6:03 PM wrote:
> Greetings,
>
> I'm trying to upgrade from 4.3 to 4.4. When trying to mount the original
> nfs items, I'm getting the following error:
>
> vdsm.storage.exception.StorageServerAccessPermissionError: Permission
> settings on the specified path do not allow a
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi
wrote:
> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer wrote:
>
>>
>>
>> Since there wasn't a filter set on the node, the 4.4.2 update added the
>> default filter for the root-lv pv
>> if there was some filter
On Sun, Oct 4, 2020 at 2:07 AM Gianluca Cecchi
wrote:
> On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer wrote:
>
>>
>>
>> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer wrote:
>>
>>>
>>>
>>> For the gluster bricks being filtered out in 4.4.2, thi
On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer wrote:
>
>
> On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi
> wrote:
>
>> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer wrote:
>>
>>> From the info it seems that startup panics because gluster bricks cannot
>>>
On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi
wrote:
> On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer wrote:
>
>> From the info it seems that startup panics because gluster bricks cannot
>> be mounted.
>>
>>
> Yes, it is so
> This is a testbed NUC I use for test
From the info it seems that startup panics because gluster bricks cannot be
mounted.
The filter that you do have in the 4.4.2 screenshot should correspond to
your root pv,
you can confirm that by doing (replace the pv-uuid with the one from your
filter):
#udevadm info
/dev/disk/by-id/lvm-pv-uui
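The truncated command above can be sketched as follows, assuming the 4.4 filter format where the pv shows up as an lvm-pv-uuid device link; PV_UUID is a placeholder for the uuid that appears in your own filter line:

```shell
# Hypothetical pv-uuid -- replace with the uuid from your lvm filter line.
# The by-id link encodes it as lvm-pv-uuid-<UUID>.
PV_UUID="AbCdEf-1234"
DEV="/dev/disk/by-id/lvm-pv-uuid-${PV_UUID}"
echo "$DEV"
# On the host itself, confirm the device behind the filter is the root pv:
#   udevadm info "$DEV"
```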
On Thu, Oct 1, 2020 at 4:12 PM Mike Lindsay wrote:
> Hey Folks,
>
> I've got a bit of a strange one here. I downloaded and installed
> ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
> laptop and to get it to install I needed to add acpi=off to the kernel
> boot param to get
On Tue, Sep 29, 2020 at 11:02 AM Gianluca Cecchi
wrote:
> Hello,
> having to monitor and check live storage migrations, it seems to me that
> scrolling through events I can see information about the target storage
> domain name, but nothing about the source. Is it correct?
>
> Can it be added? Wh
On Mon, Aug 3, 2020 at 4:06 PM Arden Shackelford
wrote:
> Hello!
>
> Been looking to get setup with oVirt for a few weeks and had a chance the
> past week or so to attempt getting it all setup. Ended up doing a bit of
> troubleshooting but finally got to the point where the Cockpit setup
> prompt
You may also refer to
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/technical_reference/chap-virtual_machine_snapshots
On Sun, Aug 2, 2020 at 10:35 AM Strahil Nikolov via Users
wrote:
> It's quite simple.
>
> For example , you got vm1 with 10GB OS disk that is f
On Tue, Jul 28, 2020 at 8:56 AM Nick Kas via Users wrote:
> Hello everyone,
> I set up ovirt 4.4.1 on CentOS 8.2 as an experiment, and I am trying to get
> an iSCSI domain working but have issues. The little experimental cluster
> has 3 hosts. There is an ovirtmgmt network on the default vlan, and tw
You can look into the ovirt vdsm pytest readme [1]; it is slightly outdated, as
invocation for the 4.4 branch now uses "tox -e storage" (with no "-pyXX"
suffix, since all tests there are python3-compatible).
All up-to-date options are listed in tox.ini [2]. If you need to set up your
vdsm dev/test env you can f
On Sun, May 31, 2020 at 8:09 AM Strahil Nikolov via Users
wrote:
> And what about https://bugzilla.redhat.com/show_bug.cgi?id=1787906
> Do we have any validation of the checksum via the python script ?
>
No integrated checksum.
>
>
> Best Regards,
> Strahil Nikolov
>
> На 31 май 2020 г. 0:18:4
On Saturday, May 30, 2020, matteo fedeli wrote:
> Hi! I've installed CentOS 8 and the ovirt package following these steps:
>
> systemctl enable --now cockpit.socket
> yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
> yum module -y enable javapackages-tools
> yum module -y enable
Maybe someone from the virt team could comment on this.
On Wed, May 27, 2020 at 2:06 PM Gianluca Cecchi
wrote:
> On Wed, May 27, 2020 at 11:26 AM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Wed, May 27, 2020 at 10:21 AM Amit Bawer wrote:
>>
>>
From your vdsm log, it seems to be an occurrence of the issue discussed at
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JE76KK2WFDCDJL3DF52OGOLGXWHWPEDA/
2020-05-26 19:51:53,837-0400 ERROR (vm/90f09a7c) [virt.vm]
(vmId='90f09a7c-3af1-45a9-a210-78a9b0cd4c3d') The vm start process failed
On Fri, May 15, 2020 at 12:19 AM Vinícius Ferrão via Users
wrote:
> Hello,
>
> I would like to know if this is a bug or not, if yes I will submit to Red
> Hat.
>
Fixed in vdsm-4.30.46
>
> I’m trying to add a ppc64le (POWER9) machine to the hosts pool, but
> there’s missing dependencies on VDSM:
On Thu, Mar 19, 2020 at 10:16 AM Gianluca Cecchi
wrote:
> Hello,
> I created some hosts at time of 4.3.3 or similar and connecting to iSCSI I
> set this in multipath.conf to specify customization
>
> "
> # VDSM REVISION 1.5
> # VDSM PRIVATE
>
> # This file is managed by vdsm.
> ...
> "
>
> Then I
Maybe you meant to use a formatted string:
my_var3 = "prefix_%s" % suffix
Or
my_var3 = "prefix_{}".format(suffix)
where suffix has some runtime string result.
On Thursday, February 27, 2020, Gianluca Cecchi
wrote:
> Hello,
> suppose in a .py using sdk I have
>
> my_var1 = 'prefix'
>
> then
On Mon, Feb 10, 2020 at 4:13 PM Jorick Astrego wrote:
>
> On 2/10/20 1:27 PM, Jorick Astrego wrote:
>
> Hmm, I didn't notice that.
>
> I did a check on the NFS server and I found the
> "1ed0a635-67ee-4255-aad9-b70822350706" in the exportdom path
> (/data/exportdom).
>
> This was an old NFS export
On Mon, Feb 10, 2020 at 2:27 PM Jorick Astrego wrote:
>
> On 2/10/20 11:09 AM, Amit Bawer wrote:
>
> compared it with host having nfs domain working
> this
>
> On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego
> wrote:
>
>>
>> On 2/9/20 10:27 AM, Amit Bawer
compared it with host having nfs domain working
this
On Mon, Feb 10, 2020 at 11:11 AM Jorick Astrego wrote:
>
> On 2/9/20 10:27 AM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego wrote:
>
>> Hi,
>>
>> Something weird is goi
On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego wrote:
> Hi,
>
> Something weird is going on with our ovirt node 4.3.8 install mounting a
> nfs share.
>
> We have a NFS domain for a couple of backup disks and we have a couple of
> 4.2 nodes connected to it.
>
> Now I'm adding a fresh cluster of 4.
I doubt you can use 4.3.8 nodes with a 4.2 cluster without upgrading it
first. But maybe members of this list could say differently.
On Friday, February 7, 2020, Jorick Astrego wrote:
>
> On 2/6/20 6:22 PM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 2:54 PM Jor
On Thu, Feb 6, 2020 at 2:54 PM Jorick Astrego wrote:
>
> On 2/6/20 1:44 PM, Amit Bawer wrote:
>
>
>
> On Thu, Feb 6, 2020 at 1:07 PM Jorick Astrego wrote:
>
>> Here you go, this is from the activation I just did a couple of minutes
>> ago.
>>
> I was h
On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego wrote:
> Hi,
>
> Something weird is going on with our ovirt node 4.3.8 install mounting a
> nfs share.
>
> We have a NFS domain for a couple of backup disks and we have a couple of
> 4.2 nodes connected to it.
>
> Now I'm adding a fresh cluster of 4.
On Sat, Feb 1, 2020 at 6:39 PM wrote:
> Ok, I will try to set 777 permission on the NFS storage. But why does this
> issue start after updating from 4.30.32-1 to 4.30.33-1? Without any other
> changes.
>
The differing commit for 4.30.33 over 4.30.32 is the transition into block
size probing done by iop
Maybe information on this message thread could help:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N7M57G7HC46NBQTX6T3KSVHEYV3IDDIP/
On Saturday, February 1, 2020, Steve Watkins wrote:
> Since I managed to crash my last attempt at installing by uploading an
> ISO, I wound up ju
On Saturday, February 1, 2020, wrote:
> Hi! I'm trying to upgrade my hosts and have a problem with it. After
> upgrading one host I see that it is NonOperational. All was fine with
> vdsm-4.30.24-1.el7, but after upgrading to the new version
> vdsm-4.30.40-1.el7.x86_64 and some others I have errors.
Maybe this could help:
https://lists.ovirt.org/pipermail/users/2017-August/083692.html
On Tuesday, January 28, 2020, wrote:
> Hi All,
>
> A question regarding memory management with ovirt. I know memory can
> be complicated hence I'm asking the experts. :)
>
> Two examples of where it looks - to
On Tue, Jan 28, 2020 at 6:08 AM Vinícius Ferrão
wrote:
> Hello,
>
> I’m with an issue on one of my oVirt installs and I wasn’t able to solve
> it. When trying to put a node in maintenance it complains about image
> transfers:
>
Which ovirt-engine version is being used? Was an upgrade involved?
Hi
You can refer to the 4.3 upgrade guide:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/upgrade_guide/index
On Thursday, December 19, 2019, David David wrote:
> hi all
>
> how do I upgrade from 4.2.5 to 4.3.7?
> what steps are necessary, are the same as when upgradin
In a check we did here, we got similar warnings when using the "Ignore OVF
update failure" option, but the SD was set inactive at the end of the process.
What is the SD status in your case after this attempt?
On Wed, Dec 4, 2019 at 4:49 PM Albl, Oliver
wrote:
> Yes.
>
> Am 04.12.2019 um 15:47 schrieb Amit B
't updated on those OVF stores (Data Center Production, Storage Domain
> HOST_LUN_219).
>
Have you selected the checkbox for "Ignore OVF update failure" before
putting into maintenance?
>
> All the best,
>
> Oliver
>
>
>
> *Von:* Amit Bawer
> *Gesen
Hi Oliver,
For deactivating the unresponsive storage domains, you can use the Compute
-> Data Centers -> Maintenance option with "Ignore OVF update failure"
checked.
This will force deactivation of the SD.
Will provide further details about the issue in the ticket.
On Tue, Dec 3, 2019 at 12:02
On Tuesday, December 3, 2019, Amit Bawer wrote:
>
>
> On Tuesday, December 3, 2019, Ivan Apolonio wrote:
>
>> Hello Amit. Thanks for your reply.
>>
>> This is the content of /etc/sudoers.d/50_vdsm file (it's the default
>> generated by ovirt
On Tuesday, December 3, 2019, Ivan Apolonio wrote:
> Hello Amit. Thanks for your reply.
>
> This is the content of /etc/sudoers.d/50_vdsm file (it's the default
> generated by ovirt install):
>
> Cmnd_Alias VDSM_LIFECYCLE = \
> /usr/sbin/dmidecode -s system-uuid
> Cmnd_Alias VDSM_STORAGE = \
>
On Mon, Dec 2, 2019 at 9:38 PM Amit Bawer wrote:
>
>
> On Sun, Nov 17, 2019 at 9:19 AM Ivan de Gusmão Apolonio <
> i...@apolonio.com.br> wrote:
>
>> I'm having trouble creating a storage ISO Domain and attaching it to a
>> Datacenter. It just gives me this erro
On Sun, Nov 17, 2019 at 9:19 AM Ivan de Gusmão Apolonio <
i...@apolonio.com.br> wrote:
> I'm having trouble creating a storage ISO Domain and attaching it to a
> Datacenter. It just gives me this error message:
>
> Error while executing action Attach Storage Domain: Could not obtain lock
>
> Also the
On Mon, Dec 2, 2019 at 7:21 PM Robert Webb wrote:
> Thanks for the response.
>
> As it turned out, the issue was on the export where I had to remove
> subtree_check and added no_root_squash and it started working.
>
> Being new to the setup, I am still trying to work through some config
> issues
On Mon, Dec 2, 2019 at 4:36 PM Luca 'remix_tj' Lorenzetto <
lorenzetto.l...@gmail.com> wrote:
> On Mon, Dec 2, 2019 at 3:19 PM wrote:
> >
> > Hi Luca,
> >
> > thanks for your reply. Following this procedure, which are the steps where
> I must reinstall instead of upgrading the hypervisors?
> >
>
> You'
On Sun, Dec 1, 2019 at 5:22 PM wrote:
> I have a clean install with openmediavault as backend NFS and cannot get
> it to work. Keep getting permission errors even though I created a vdsm
> user and kvm group; and they are the owners of the directory on OMV with
> full permissions.
>
> The directo
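The permission complaints described above usually come down to ownership on the export directory. A sketch of the usual fix, assuming the standard oVirt ids (vdsm is uid 36 and kvm is gid 36 on the hosts, so the export must be owned by those numeric ids even if the NFS server's local user/group names differ); EXPORT_DIR is a hypothetical path:

```shell
# Create the export directory and give it the numeric ownership oVirt's
# vdsm expects (uid 36 / gid 36), with directory mode 0755.
EXPORT_DIR="${EXPORT_DIR:-/export/ovirt}"
mkdir -p "$EXPORT_DIR"
chown 36:36 "$EXPORT_DIR"
chmod 0755 "$EXPORT_DIR"
stat -c '%u:%g %a' "$EXPORT_DIR"
```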
omains
> Any other ideas are welcome
> Regards
>
> On Sat., November 30, 2019, 3:19 p.m., Amit Bawer
> wrote:
>
>> Are you able to extend the disks to 1GB+ size ?
>>
>>- Go to “Virtual Machines” tab and select virtual machine
>>- Go to “Di
Are you able to extend the disks to 1GB+ size?
- Go to “Virtual Machines” tab and select virtual machine
- Go to “Disks” sub tab and select disk
- Click on “Edit”, pay attention that if disk is locked or VM has other
status than “UP”, “PAUSED”, “DOWN” or “SUSPENDED”, editing is not al
firewalld is running?
systemctl disable --now firewalld
On Monday, November 25, 2019, Rob wrote:
> It can’t be DNS, since the engine runs on a separate network anyway ie
> front end, so why can’t it reach the Volume I wonder
>
>
> On 25 Nov 2019, at 12:55, Gobinda Das wrote:
>
> There could be
it's plausible that
systemctl restart supervdsm
is required as well
On Mon, Nov 25, 2019 at 12:17 PM Parth Dhanjal wrote:
> As such there are no errors in the vdsm log.
> Maybe you can try these steps again
>
> *ovirt-hosted-engine-cleanup*
> *vdsm-tool configure --force*
> *systemctl restart
On Sat, Nov 23, 2019 at 2:49 PM wrote:
> Anyone have some insight? Is there an official support channel or dev team
> I can engage to help look at this (even paid support)? Does RHEV Support
> take ad-hoc cases on oVirt setups?
>
If you have a RH subscription you may access the official KB, and
mnt_options of hosted engine
and answer here [1]
[1] https://lists.ovirt.org/pipermail/users/2018-January/086265.html
>
>
> Kindly awaiting your reply.
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
>
>
>
>
>
>
>
> 81920 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
According to the resolution of [1], it's a multipathd/udev configuration issue.
Could be worth tracking this issue.
[1] https://tracker.ceph.com/issues/12763
On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski
wrote:
> On ovirt 4.3.5 we are seeing various problems related to the rbd device
> staying mappe
rahil
> > Date: Tuesday, 24 September 2019 at 19:10
> > To: "Vrgotic, Marko" , Amit > .com>
> > Cc: users
> > Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> > Storage
> >
> > Why don't you try with 4096 ?
> >
On Tue, Sep 24, 2019 at 5:21 PM wrote:
>
> I'm trying to clone the snapshot into a new vm. The tool I am using is
> oVirtBackup from GitHub (wefixit-AT). The link is here:
> https://github.com/wefixit-AT/oVirtBackup
>
> this code snippet throws an error.
>
> if not config.get_dry_run():
Have you reproduced the performance issue when checking directly against the
shared storage mount, outside the VMs?
On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko
wrote:
> Dear oVirt,
>
>
>
> I have executed some tests regarding IO disk speed on the VMs, running on
> shared storage and local stora
+Sahina Bose
Thanks for your clarification, Julian. Creating a new Gluster brick from
oVirt's REST API is not currently supported, only from the UI.
On Mon, Sep 23, 2019 at 6:25 PM Julian Schill wrote:
> > This is documented in section 6.92.2. of [1]
> >
> > [1]
> >
> https://access.redhat.com/documen
This is documented in section 6.92.2 of [1]
[1]
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/rest_api_guide/services#services-gluster_bricks-methods-add
On Sun, Sep 22, 2019 at 11:59 PM Julian Schill wrote:
> Under "Compute -> Hosts -> Host Name -> Storage Devi