Hi
I was trying to add a new NFS share path on the oVirt node, but unfortunately
it doesn't work for me. I try to mount the NFS via the oVirt manager, but it
fails with the following error message:
Error while executing action Add Storage Connection: Permission settings on the
specified path do not allow
Hi List,
I can't get oVirt 4.4.8.5-1.el8 (running on oVirt Node hosts) to connect
to an NFS share on a Synology NAS.
I gave up trying to get the hosted engine deployed and put that on an
iSCSI volume instead...
The directory being exported from the NAS is owned by vdsm / kvm, with 36:36
perms
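For reference, the engine's "Permission settings on the specified path do not
allow access" error almost always means the export is not owned by uid 36 /
gid 36 (vdsm:kvm). A minimal check, as a sketch only (run it against the
export root on the NFS server, or against the mounted path on a host; the
fix-up path in the comment is an example):

```shell
# Check that a directory has the ownership oVirt's vdsm expects:
# uid 36 / gid 36, i.e. vdsm:kvm.
check_ovirt_owner() {
    owner=$(stat -c '%u:%g' "$1") || return 2
    echo "$1 is owned by $owner"
    [ "$owner" = "36:36" ]   # non-zero exit means the engine will refuse it
}

# Typical fix on the NFS server (as root); the path is an example:
#   chown 36:36 /volume1/ovirt-data && chmod 0755 /volume1/ovirt-data
```

Numeric ids are deliberate: the NAS does not need local vdsm/kvm users, only
matching uid/gid numbers on the exported directory.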
I have an HCI cluster running on Gluster storage. I exposed an NFS share into
oVirt as a storage domain so that I could clone all of my VMs (I'm preparing to
move physically to a new datacenter). I got 3-4 VMs cloned perfectly fine
yesterday. But then this evening, I tried to clone a big VM,
Hi All,
I've got an error when I try to import a storage domain from a Synology
NAS that used to be a 4.3.10 oVirt installation, but no matter what I try
I can't get it to import. I keep getting a permission denied error, while
with the same settings it worked on the previous oVirt version.
This
Hello
Should the documentation point out that the /exports/data mount point is hard-coded
in ovirt-hosted-engine-setup-ansible-create_storage_domain.yaml?
I think some data centers will want to use a different mount path.
Or did I miss a prompt or entry in the Deployment screen for
Hi,
I asked this question in another thread, but it seems to have been lost in
the noise, so I'm reposting with a more descriptive subject.
I'm trying to start a Windows VM and use the virtio-win VFD floppy to get
the drivers but the VM startup fails due to a permissions issue detailed
below. The
Hi Ovirt Mailing-list
Hope you are well. We had an outage across several oVirt clusters. It looks
like we had the same ISO NFS domain shared to all of them, and many of the
VMs had a CD attached. The NFS server went down for an hour, and all
hell broke loose when it did. Some of
I have a clean install with OpenMediaVault as the NFS backend and cannot get it to
work. I keep getting permission errors even though I created a vdsm user and kvm
group, and they are the owners of the directory on OMV with full permissions.
The directory gets created on the NFS side for the host,
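One hedged workaround for appliances (OMV, Synology) that squash client uids to
a local account even when a vdsm/kvm pair exists: force everything to map to
36:36 on the server side. The path below is an example; the options are
standard Linux knfsd exports options.

```
# /etc/exports -- example entry for an oVirt data domain.
# all_squash + anonuid/anongid maps every client access to 36:36 (vdsm:kvm),
# sidestepping uid mismatches between the hosts and the NAS.
/export/ovirt-data  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
```

After editing, `exportfs -ra` re-exports without restarting the NFS server.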
Hello Everyone,
I've been struggling for a while to find a proper solution for
full backups of a production environment.
In the past we used Proxmox, and the scheduled incremental NFS-based
full VM backups are something we really miss.
As far as I know, at this point
I'm trying to mount an NFS share.
If I manually mount it from SSH, I can access it without issues.
However, when I do it from the web config, it keeps failing:
Not sure how to solve that.
Thanks
Simon
2019-09-09 09:08:47,601-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
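When a manual mount from the shell works but the engine's mount fails, the
difference is usually the mount options and the user vdsm mounts as. A sketch
worth trying (server name and export path are examples; the option set mirrors
vdsm's usual NFS defaults, which is an assumption, not a guarantee for every
version):

```
# mkdir -p /mnt/nfstest
# mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 nas01:/export/data /mnt/nfstest
# sudo -u vdsm touch /mnt/nfstest/probe && echo writable
```

If the touch fails, the problem is ownership or uid squashing on the server
rather than anything oVirt-specific.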
(sorry for double post earlier)
On 25/11/2018 18:19, Nir Soffer wrote: >
> Hello,
>
> Our cluster use NFS storage (on each node as data, and on a NAS for
> export)
>
>
> Looks like you invented your own hyperconverged solution.
Well, when we look at the network specs for Gluster
Hello,
Our cluster uses NFS storage (on each node as data, and on a NAS for export).
We currently have a huge problem on our cluster: exporting a VM as OVA
takes lots of time.
I checked on the NFS storage, and it looks like there are files that might
be OVAs in the images directory, and OVF files in
Hello all,
I've been looking around, but I've not found anything definitive on whether
oVirt can do NFS multipathing, and if so, how.
Does anyone have any good how tos or configuration guides?
Thanks,
Thomas
I currently have an NFS server with decent speeds. I get about 200 MB/s
write and 400+ MB/s read.
My single-node oVirt host has a mirrored SSD store for my Windows 10 VM,
and I have about 3-5 VMs running on the NFS data store.
However, VMs on the NFS datastore are SLOW. They can write to their
Hello,
I am trying to mount an NFS data share from an EMC VNXe3200 export. Unfortunately,
the NFS share cannot be mounted; there is a permission error.
(Indeed, after the same issue on another NFS ISO repo, I changed the exported
directory user/group to 36:36, and the oVirt NFS mount worked fine.)
So, I think
Hi,
I noticed that when mounting NFS domains in oVirt 4.1 using
auto-negotiate, it settles on v4.1.
Is this due to lack of live storage migration as referenced in this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1464787
Are there other known issues with 4.2 support?
Thanks,
Alan
Have to admit that I didn't play with the hosted engine thing, but maybe
you can find the answers in the documentation:
- https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/
- https://www.ovirt.org/documentation/how-to/hosted-engine/
-
Thanks, I totally missed that :-/
And this will also work for the hosted engine's dedicated domain, putting the
storage domain the virtual machine depends on into maintenance?
On 15-Mar-2018 10:38:48 +0100, eshen...@redhat.com wrote:
You can edit the storage domain setting after the
You can edit the storage domain settings after the storage domain
is deactivated (put into maintenance mode).
On Thu, Mar 15, 2018 at 11:12 AM, wrote:
> In fact I don't really know how to change storage domain settings (like
> NFS version or export path, ...), if it is
In fact I don't really know how to change storage domain settings (like NFS
version or export path, ...), if it is possible at all.
I thought they could be disabled after stopping all related VMs, and maybe
the settings panel would then unlock?
But this should be impossible with the hosted engine
I am not sure what you mean.
Can you please explain what the difference is between a "VMs domain"
and the "hosted storage domain", as you see it?
Thanks,
On Thu, Mar 15, 2018 at 10:45 AM, wrote:
> Thanks for your answer.
>
> And to use V4.1 instead of V3 on a domain, do
Thanks for your answer.
And to use V4.1 instead of V3 on a domain, do I just have to disconnect it
and change its settings? It seems easy to do with VM domains, but how do I
do it with the hosted storage domain? Regards
On 14-Mar-2018 11:54:52 +0100, eshen...@redhat.com wrote:
Hi,
Thanks for your answer.
And in order to use V4.1 instead of V3 on a domain, do I just have to
disconnect it and change its settings? It seems doable with VM domains,
but how do I do it with the hosted storage domain? I haven't found a
command-line way to do this yet.
Regards
Le
Hi,
NFS 4.1 has been supported and working since version 3.6 (according to this
bug fix [1]):
[1] Support NFS v4.1 connections -
https://bugzilla.redhat.com/show_bug.cgi?id=1283964
Hi,
Is NFS 4.1 supported and working flawlessly in oVirt? I would like to
give it a try (for performance with parallel transfers), but as it requires
changes to my network design if I want to add new links, I want to be sure it
is worth the effort. Is there an easy way to "migrate" an NFS3
Hello there,
At first I thought I had a performance problem with virtio-scsi on
Windows, but after thorough experimentation, I finally found that my
performance problem was related to the way I share my storage using NFS.
Using the settings suggested on the oVirt website for the /etc/exports
On Wed, Apr 5, 2017 at 7:01 PM, Bill James wrote:
> once it is detached it no longer shows up in any list other than the
> dialog for "attach data domain".
> How do I right click on it or select it to "remove" it?
>
It will show up if you select "System" on the left tree view
once it is detached it no longer shows up in any list other than the
dialog for "attach data domain".
How do I right click on it or select it to "remove" it?
I did find that if I select Data Center -> Default on the left, then the
storage tab, for a domain in maintenance a right click will let me
On Wed, Apr 5, 2017 at 2:27 AM, Bill James wrote:
> ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
> vdsm-4.19.4-1.el7.centos.x86_64
>
>
> I'm trying to add a NFS storage domain but the gui keeps telling me the
> export path is illegal.
> Even though I can mount the volume
ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64
I'm trying to add an NFS storage domain, but the GUI keeps telling me the
export path is illegal,
even though I can mount the volume fine at the command line.
I have not found any log that contains the error, so I
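The "export path is illegal" complaint is front-end validation rather than a
mount failure: the New Domain dialog wants strictly `server:/absolute/path`
with no spaces or stray characters. A rough sketch of that shape (an
approximation for illustration, not the engine's exact rule):

```shell
# Approximate the "server:/path" form oVirt's New Domain dialog accepts.
# This is a sketch of the rule, not the engine's actual validation code.
valid_export_path() {
    case "$1" in
        *' '*)  return 1 ;;   # spaces are rejected
        *:/*)   return 0 ;;   # host:/absolute/path
        *)      return 1 ;;   # anything else (e.g. host:relative) fails
    esac
}

valid_export_path "nas01:/ovirt-store/nfs" && echo accepted
valid_export_path "nas01:ovirt-store"      || echo rejected
```

If a path that mounts fine at the CLI is rejected, check for a trailing slash
or a missing leading slash after the colon before digging into logs.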
On Mar 23, 2017, at 10:51 AM, Yedidyah Bar David wrote:
> On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
> wrote:
>> Hi, All. Are there any recommendations or best practices WRT whether or not
>> to host an NFS ISO domain from the hosted-engine VM
On Thu, Mar 23, 2017 at 4:12 PM, Devin A. Bougie
wrote:
> Hi, All. Are there any recommendations or best practices WRT whether or not
> to host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine
> Appliance)? We have a hosted-engine 4.1.1 cluster
Hi, All. Are there any recommendations or best practices WRT whether or not to
host an NFS ISO domain from the hosted-engine VM (running the oVirt Engine
Appliance)? We have a hosted-engine 4.1.1 cluster up and running, and now just
have to decide where to serve the NFS ISO domain from.
Many
Thanks! I think it's now a question for NetApp, as to when they'll make 4.2
available. I've tried to manually mount v4.2 on the host,
but unfortunately:
# mount -o vers=4.2 10.1.1.111:/test /tmp/123
mount.nfs: Protocol not supported
so, my NetApp is vers=4.1 max (
--
Friday, February 3, 2017,
On Fri, Feb 3, 2017 at 2:54 PM, Yaniv Kaul wrote:
>
>
> On Feb 3, 2017 1:50 PM, "Nir Soffer" wrote:
>
> On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov wrote:
>>
>>
>> Hm... maybe I need to set any options, is there any way to force ovirt to
On Feb 3, 2017 1:50 PM, "Nir Soffer" wrote:
On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov wrote:
>
>
> Hm... maybe I need to set any options, is there any way to force ovirt to
mount with this extension, or version 4.2
> there is only 4.1 selection in "New
On Fri, Feb 3, 2017 at 2:29 PM, Sergey Kulikov wrote:
>
>
> Hm... maybe I need to set any options, is there any way to force ovirt to
> mount with this extension, or version 4.2
> there is only 4.1 selection in "New Domain" menu.
> Current mount options:
> type nfs4
>
Unfortunately I can't browse this bug:
"You are not authorized to access bug #1079385."
Can you email me details on this bug?
I think that's the reason I can't find this fix for RHEL/CentOS on Google.
--
Friday, February 3, 2017, 14:45:43:
> On Thu, Feb 2, 2017 at 11:45 PM, Sergey
Hm... maybe I need to set some options. Is there any way to force oVirt to
mount with this extension, or version 4.2?
There is only a 4.1 selection in the "New Domain" menu.
Current mount options:
type nfs4
On Thu, Feb 2, 2017 at 11:45 PM, Sergey Kulikov wrote:
>
> I've upgraded to 4.1 release, it have great feature "Pass discards", that
> now can be used without vdsm hooks,
> After upgrade I've tested it with NFS 4.1 storage, exported from netapp,
> but unfortunately found out, that
I've upgraded to the 4.1 release; it has a great feature, "Pass discards",
that can now be used without vdsm hooks.
After the upgrade I tested it with NFS 4.1 storage exported from a NetApp,
but unfortunately found that
it's not working. After some investigation, I found that NFS
Just to close off this issue:
I found the problem.
My /etc/exports file had a space in it where it shouldn't have (between
* and the options).
This seemed to be acceptable with older versions of CentOS 7.2, but on my
newer host with the latest CentOS 7.2 (3.10.0-327.13.1.el7.x86_64) it only
mounted read-only.
On Wed, 2016-04-13 at 15:52 -0700, Bill James wrote:
> [vdsm@ovirt4 test /]$ touch /rhev/data-center/mnt/ovirt3-
> ks.test.j2noc.com:_ovirt-store_nfs/test
> touch: cannot touch ‘/rhev/data-center/mnt/ovirt3-
> ks.test.j2noc.com:_ovirt-store_nfs/test’: Read-only file system
>
> Hmm, read-only.
On Wed, 2016-04-13 at 15:09 -0700, Bill James wrote:
> I have a cluster working fine with 2 nodes.
> I'm trying to add a third and it is complaining:
>
>
> StorageServerAccessPermissionError: Permission settings on the
> specified
> path do not allow access to the storage. Verify permission
On Tue, Mar 22, 2016 at 6:12 PM, Bill James wrote:
> I thought NFS was pretty standard in use on ovirt systems.
I guess it's very common on hosts as a client, not sure about server.
I guess most setups use separate machines to host VMs and for storage.
> Why does it take
> a
I thought NFS was pretty standard in use on oVirt systems. Why does it
take a custom setup to enable NFS in the firewall rules?
https://bugzilla.redhat.com/show_bug.cgi?id=513
added:
engine-config --set IPTablesConfigSiteCustom="-A INPUT -p tcp -m
multiport --dports 2049 -j ACCEPT"
Thanks.
On Fri, Mar 18, 2016 at 2:03 AM, Bill James wrote:
> How do I make it that when ever I add or reinstall a hardware node that
> oVirt creates a rule for NFS, port 2049?
Search for 'IPTablesConfigSiteCustom'.
Best,
>
> I have to either add it manually after ovirt removes it,
How do I make it so that whenever I add or reinstall a hardware node,
oVirt creates a rule for NFS, port 2049?
I have to either add it manually after oVirt removes it, or just tell
oVirt not to touch the firewall rules.
Our ISO domain is not hosted by the ovirt-engine, FYI.
Hi,
if your NFS issue appeared after a 7.2 upgrade, you may be facing this naughty
bug (which disappears after a reboot):
https://bugzilla.redhat.com/show_bug.cgi?id=1214496.
If you can't reboot because of production, then do this:
# systemctl restart rpcbind.socket
# systemctl restart rpc-statd
That
Please provide more information in the body of the mail, and attach engine
and vdsm log showing the timeframe of the error.
On Tue, Jan 26, 2016 at 7:29 AM, Anzar Sainudeen
wrote:
> Dear Team,
>
>
>
> We installed the new version of ovirt 3.6 on CentOS 7 using IBM servers.
On Tue, Jan 12, 2016 at 10:45 PM, Markus Stockhausen <
stockhau...@collogia.de> wrote:
> >> From: Yaniv Kaul [yk...@redhat.com]
> >> Sent: Tuesday, 12 January 2016 13:15
> >> To: Markus Stockhausen
> >> Cc: users@ovirt.org; Mike Hildebrandt
>
On Tue, Jan 12, 2016 at 9:32 AM, Markus Stockhausen
wrote:
> Hi there,
>
> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
> We ran a LSM that failed during the cleanup operation. To be precise
> when the process deleted an image on the source NFS
> On Jan 12, 2016, at 8:32 AM, Markus Stockhausen
> wrote:
>
> Hi there,
>
> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
> We ran a LSM that failed during the cleanup operation. To be precise
> when the process deleted an image on the source
> On Jan 12, 2016, at 9:11 AM, Markus Stockhausen <stockhau...@collogia.de>
> wrote:
>
>> From: Vinzenz Feenstra [vfeen...@redhat.com]
>> Sent: Tuesday, 12 January 2016 09:00
>> To: Markus Stockhausen
>> Cc: users@ovirt.org; Mike Hildebrandt
>&
> From: Vinzenz Feenstra [vfeen...@redhat.com]
> Sent: Tuesday, 12 January 2016 09:00
> To: Markus Stockhausen
> Cc: users@ovirt.org; Mike Hildebrandt
> Subject: Re: [ovirt-users] NFS IO timeout configuration
> > Hi there,
> >
> > we got a nasty situa
On Tue, Jan 12, 2016 at 9:32 AM, Markus Stockhausen wrote:
> Hi there,
>
> we got a nasty situation yesterday in our OVirt 3.5.6 environment.
> We ran a LSM that failed during the cleanup operation. To be precise
> when the process deleted an image on the source NFS
>> From: Yaniv Kaul [yk...@redhat.com]
>> Sent: Tuesday, 12 January 2016 13:15
>> To: Markus Stockhausen
>> Cc: users@ovirt.org; Mike Hildebrandt
>> Subject: Re: [ovirt-users] NFS IO timeout configuration
>>
>> On Tue, Jan 12, 2016 at 9:32 AM,
Hi there,
we hit a nasty situation yesterday in our oVirt 3.5.6 environment.
We ran an LSM (live storage migration) that failed during the cleanup
operation, to be precise when the process deleted an image on the source
NFS storage.
Engine log gives:
2016-01-11 20:49:45,120 INFO
Hi,
I just ran "yum install ovirt-engine" and "engine-setup" and set up
"/var/lib/exports/iso", which is the default, as a shared NFS domain. And I
think that's why oVirt sets this up with the default of <1 GB of space. So I
need an explanation of why this is so, and if it is, I would next time
answer with
Hello,
I am new to oVirt and I installed the oVirt engine on CentOS 7. After I
logged in to the administrator web GUI I see the default NFS domain with
less than 1 GB of space, while I have more than 200 GB on disk. The problem
is that I can't change the value, and I found no information in the manuals
either.
On Thu, Dec 3, 2015 at 12:02 PM, wrote:
> Hello,
> i am new with ovirt and i installed ovirt engine on centos7. After i
> logged in the administrator web gui i see the default nfs domain with less
> than 1gb space, while i have more than 200gb on disk. The problem is,
On Thu, Dec 3, 2015 at 1:40 PM, wrote:
> Hi,
>
> i ust run "yum install ovirt-engine" and "engine-setup" and setup
> "/var/lib/exports/iso", which is default, as shared NFS Domain. And i think
> thats why ovirt setups this with default <1GB of space. So i need an
>
Hi,
Yes, it's an ISO domain, and yes, I can only create a data domain. But the
question is: why is it limited to 1 GB? And why can't I change it? What
are the reasons for that?
Thanks
On 2015-12-03 14:25, Simone Tiraboschi wrote:
On Thu, Dec 3, 2015 at 1:40 PM, wrote:
On 3-12-2015 14:41, kont...@taste-of-it.de wrote:
> Hi,
>
> yes its an iso domain and yes i can only create a data domain. but the
> question is why it is limited to 1gb? and why i cant changed it? what
> are the reasons for that?
Just guessing about your env, but what does df -h /var or df -h
On Thu, Dec 3, 2015 at 4:14 PM, Joop wrote:
> On 3-12-2015 14:41, kont...@taste-of-it.de wrote:
> > Hi,
> >
> > yes its an iso domain and yes i can only create a data domain. but the
> > question is why it is limited to 1gb? and why i cant changed it? what
> > are the reasons
On 03/12/2015 5:41 AM, kont...@taste-of-it.de wrote:
yes its an iso domain and yes i can only create a data domain. but the
question is why it is limited to 1gb?
When you create the ISO domain during engine-setup, I believe the ISO
domain is created on the Engine host itself.
Can you see
Hi there,
we noticed that a newly created NFS data domain is mounted with the
UDP protocol. Does anyone know if that is the desired behaviour
of current oVirt versions?
ovirtnode# mount -a
...
10.10.30.254:/var/nas4/OVirtIB on
/rhev/data-center/mnt/10.10.30.254:_var_nas4_OVirtIB
type nfs
Roberto roberto.nu...@comifar.it
To: users@ovirt.org
Sent: Tuesday, May 12, 2015 5:17:39 PM
Subject: [ovirt-users] NFS export domain still remain in Preparing for
maintenance
Hi all
We are using oVirt engine 3.5.1-0.0 on Centos 6.6
We have two DC. One with hosts using vdsm
Hi all
We are using oVirt engine 3.5.1-0.0 on CentOS 6.6.
We have two DCs: one with hosts using vdsm-4.16.10-8.gitc937927.el7.x86_64,
the other vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6.
There is no hosted engine; it runs on a dedicated VM, outside oVirt.
Behavior: when we try to put the NFS export
Hi,
The root cause of my problem was a stale NFS mount on my
hosts. They still had an NFS mount active.
Killing the ioprocess processes that were keeping the mount busy and
then unmounting the NFS mounts allowed me to activate the ISO and export
domains again.
Regards,
Rik
We use sanlock for locking.
Can you attach the log files showing the error?
- Original Message -
From: shimano shim...@go2.pl
To: users@ovirt.org
Sent: Friday, April 10, 2015 2:56:58 PM
Subject: [ovirt-users] NFS Storage: domain locked by NFS-provider?
Hi all,
I tried to configure
Sent: Friday, April 10, 2015 2:56:58 PM
Subject: [ovirt-users] NFS Storage: domain locked by NFS-provider?
Hi all,
I tried to configure an NFS storage domain with load balancing by configuring
different IPs on hosts for the same NFS hostname and running a few NFS
servers on those machines (NFS exports from MooseFS
- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Donny Davis do...@cloudspin.me, users@ovirt.org
Sent: Wednesday, December 17, 2014 9:27:56 PM
Subject: Re: [ovirt-users] NFS can not be mounted after the installation of
ovirt-hosted-engine
No, it was not related
- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: Simone Tiraboschi stira...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, December 17, 2014 7:18:26 PM
Subject: RE: [ovirt-users] NFS can not be mounted after the installation of
ovirt-hosted-engine
Thanks
Hi
I walked through the installation of ovirt-hosted-engine as described at
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
and I hit a problem in the "Configure storage" step.
On my oVirt host I am using NFSv3 for the test. I created two export points,
and just after that I
- Original Message -
From: Cong Yue cong_...@alliedtelesis.com
To: users@ovirt.org
Sent: Wednesday, December 17, 2014 6:33:48 PM
Subject: [ovirt-users] NFS can not be mounted after the installation of
ovirt-hosted-engine
Hi
I walked through the installation
- Original Message -
From: Simone Tiraboschi stira...@redhat.com
To: Cong Yue cong_...@alliedtelesis.com
Cc: users@ovirt.org
Sent: Wednesday, December 17, 2014 6:43:34 PM
Subject: Re: [ovirt-users] NFS can not be mounted after the installation
of ovirt-hosted-engine
Sent: Wednesday, December 17, 2014 9:48 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NFS can not be mounted after the installation of
ovirt-hosted-engine
- Original Message -
From: Simone Tiraboschi stira...@redhat.com
To: Cong Yue cong_...@alliedtelesis.com
Cc: users
/iptables and
restarted iptables service.
Thanks,
Cong
-Original Message-
From: Donny Davis [mailto:do...@cloudspin.me]
Sent: Wednesday, December 17, 2014 11:24 AM
To: Yue, Cong; users@ovirt.org
Subject: RE: [ovirt-users] NFS can not be mounted after the installation of
ovirt-hosted-engine
So
I am trying to install the second host to test HA for the hypervisor. I am
using external storage and assume that it provides HA.
I configured the first node with that shared storage as nfs2-3:/engine, and
now everything works well except for the browser-embedded console. :)
But when I did
For this, I found the reason: my external storage is of type nfs4, but I
had tried with nfs3.
Thanks,
Cong
From: Yue, Cong
Sent: Wednesday, December 17, 2014 3:56 PM
To: 'users@ovirt.org'
Subject: RE: nfs shared storage can not be mounted in second host during
hosted-engine --deploy
This is
Dear all,
We recently added 2 hypervisors to the domain on oVirt, but for some reason
they can't connect to the NFS share.
When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]# mount
-vvv -t nfs -o vers=3,tcp progress:/media/NfsProgress /rhev/data-center/mnt/
On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
Dear all,
We recently added 2 hypervisors to the domain on ovirt, but for some
reason they can't connect to the nfs share:
When I manually try to mount the nfs-share ([root@ovirthyp01dev ~]#
mount -vvv -t nfs -o vers=3,tcp
Already installed... :-) and the service nfs and rpcbind are running
2014-12-16 9:07 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:
On Tue, 2014-12-16 at 09:00 +0100, Koen Vanoppen wrote:
Dear all,
We recently added 2 hypervisors to the domain on ovirt, but for some
reason they can't
On Tue, 2014-12-16 at 09:30 +0100, Koen Vanoppen wrote:
Already installed... :-) and the service nfs and rpcbind are running
Just checking:) Tried restarting the server? SELinux? Firewall?
/K
2014-12-16 9:07 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:
On Tue, 2014-12-16 at
Check, check, check :-)
Still nothing...
2014-12-16 9:36 GMT+01:00 Karli Sjöberg karli.sjob...@slu.se:
On Tue, 2014-12-16 at 09:30 +0100, Koen Vanoppen wrote:
Already installed... :-) and the service nfs and rpcbind are running
Just checking:) Tried restarting the server? SELinux?
- Original Message -
From: Koen Vanoppen vanoppen.k...@gmail.com
To: users@ovirt.org
Sent: Tuesday, December 16, 2014 10:00:32 AM
Subject: [ovirt-users] NFS
Dear all,
We recently added 2 hypervisors to the domain on ovirt, but for some reason
they can't connect to the nfs share
95% of the time this is a firewall issue.
As a test, I'd disable your firewall completely and see if that
rectifies it. If so, you can work on proper firewall rules to allow
oVirt to work.
-Bob
On 12/16/2014 03:30 AM, Koen Vanoppen wrote:
Already installed... :-) and the service nfs and
--
Message: 1
Date: Tue, 16 Dec 2014 07:45:57 -0500 (EST)
From: Nir Soffer nsof...@redhat.com
To: Koen Vanoppen vanoppen.k...@gmail.com
Cc: users@ovirt.org
Subject: Re: [ovirt-users] NFS
Hi,
I'm using an NFS storage domain. Is it possible to back up the domain
manually, or is there an API in oVirt which supports this?
For a manual backup, is it possible to use tools like rsync, or will the
data not be consistent?
Any ideas for possible solutions, as we don't have dedicated
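A plain file-level copy of the domain's directory tree is only consistent
when nothing is writing to it, i.e. the domain is in maintenance or its VMs
are down. A sketch with example paths (the mount path follows vdsm's usual
/rhev/data-center/mnt layout):

```
# rsync -aH --delete /rhev/data-center/mnt/nas01:_export_data/ /backup/ovirt-data/
```

An rsync of a live domain can capture each disk image at a different point
in time, so the copies may not be mutually consistent even if each file
transfers cleanly.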
On 31/10/14 10:13, Mohyedeen Nazzal wrote:
I'm using NFS storage domain, is it possible to backup the domain manually?
or is there an api in ovirt which support this?
For manual backup, is possible to use tools like rsync ? or the data will
not be consistent ?
Any idea of possible
Thanks Sven,
Detaching the domain is not an option; sorry if I did not make it clear, I
meant backing up the storage domain while VMs are running.
Also, regarding the Live Snapshot option: the live snapshot simply creates a
second hard disk for the VM and starts to write to it. It does not do any
backup,
On 31/10/14 10:39, Mohyedeen Nazzal wrote:
Also regarding the Live Snapshot option, The live snapshot simply create a
second hardisk for the VM and start to write on it.. it does not do any
backup just a restore point.
Yes, exactly, so you can safely back up the first disk.
You have a
On 11/05/2014 22:19, Itamar Heim wrote:
On 04/14/2014 02:44 AM, Nauman Abbas wrote:
Hello all
Need some quick help on this. I restarted one of my oVirt nodes and it
doesn't seem to go up after that. I have done some troubleshooting and
found out that the nfs-server service is not going
On 04/14/2014 02:44 AM, Nauman Abbas wrote:
Hello all
Need some quick help on this. I restarted one of my oVirt nodes and it
doesn't seem to go up after that. I have done some troubleshooting and
found out that the nfs-server service is not going up. Here's the error.
[root@ovirtnode2 /]#
Hello all
Need some quick help on this. I restarted one of my oVirt nodes and it
doesn't seem to come up after that. I have done some troubleshooting and
found that the nfs-server service is not starting. Here's the error.
[root@ovirtnode2 /]# systemctl start nfs-server.service
Job for