Hello,
I seem to remember that in RHEV 3.0, when you configured an IPA domain,
its admin was automatically configured as an admin for RHEV itself.
Is that true, and if so, does it remain true for oVirt?
I configured IPA as shipped on CentOS 6.3+updates
ipa-server-2.2.0-17.el6_3.1.x86_64
I successfully
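A minimal sketch from memory of oVirt 3.x (the tool name and flags are assumed, not confirmed by this thread; check `engine-manage-domains --help` on your engine host): to my knowledge the IPA admin is not picked up automatically, and only the user you name with `-user` when attaching the domain is granted login/admin rights on the engine.

```shell
# Hypothetical command line, printed rather than executed so it can be
# reviewed first; domain name is a placeholder, not from the original mail.
cmd='engine-manage-domains -action=add -domain=ipa.example.com -provider=IPA -user=admin -interactive'
echo "$cmd"   # run this on the engine host, then restart ovirt-engine
```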
To test further I loaded up two more identical servers with EL 6.3 and the
same package versions originally indicated. The difference here is that I
did not turn these into oVirt nodes, e.g. by installing VDSM.
- All configurations were left at defaults on both servers
- iptables and selinux disabled
Both nodes are identical and can fully communicate with each other.
Since normal (non-p2p) live migration works, both hosts can reach each
other via the connection URI.
Perhaps I am missing something here?
- DHC
Users mailing list
Users@ovirt.org
Hi
It looks like this bug report :
https://bugzilla.redhat.com/show_bug.cgi?id=903716
Kevin
Hi
My environment is Fedora 18 / oVirt 3.2 and VDSM vdsm-4.10.3-6.fc18.x86_64.
I use storage on Fibre Channel. When creating a VM, the disk is created by
oVirt.
When starting the VM, vdsm does not have permission to access the disk.
In fact the device for the disk has root:disk as its owner/group
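Ownership of the LV device nodes is normally handled by udev rules that vdsm manages itself, so this should not need manual fixing; purely as an illustration of what such a rule looks like (VG and LV names below are placeholders, not taken from this report), a rule granting vdsm:kvm access could be:

```
# /etc/udev/rules.d/99-ovirt-disk-perms.rules  (illustrative only)
# DM_VG_NAME / DM_LV_NAME are exported by the standard device-mapper udev rules.
SUBSYSTEM=="block", ENV{DM_VG_NAME}=="<storage-domain-vg>", ENV{DM_LV_NAME}=="<image-lv>", OWNER="vdsm", GROUP="kvm", MODE="0660"
```

If a manual rule like this is needed at all, that itself suggests the vdsm-generated rules are not being applied, which is worth reporting.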
On 1-2-2013 11:07, Kanagaraj wrote:
Hi Joop,
Looks like the problem is because of the glusterfs version you are
using. vdsm could not parse the output from gluster.
Can you update the glusterfs to
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and
check it out?
How??
I
Hello,
Could you make some pastebins with the contents of the files
/etc/sysconfig/network-scripts/ifcfg* ?
Also, virsh -r net-list and the log generated on the process
of losing the connection when creating a guest.
Best,
Toni
- Original Message -
> From: "Juan Jose"
> To: "Moti Asaya
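A small sketch for gathering what Toni asked for in one paste (the ifcfg path is taken from his mail; the `virsh` call is guarded so the script still runs on hosts without libvirt installed):

```shell
# Collect ifcfg files and the read-only libvirt network list into one report.
report=""
for f in /etc/sysconfig/network-scripts/ifcfg-*; do
  [ -e "$f" ] || continue                  # skip when the glob matched nothing
  report="$report== $f ==
$(cat "$f")
"
done
report="$report== virsh -r net-list ==
$(virsh -r net-list 2>/dev/null || echo 'virsh not available')"
printf '%s\n' "$report"                    # paste this output to a pastebin
```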
Hi Joop,
Looks like the problem is because of the glusterfs version you are
using. vdsm could not parse the output from gluster.
Can you update the glusterfs to
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and
check it out?
Thanks,
Kanagaraj
On 02/01/2013 03:23 PM, Jo
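The failure mode described above is vdsm being unable to parse the output of an older gluster CLI, and the fix is upgrading glusterfs. A minimal sketch of checking whether the installed version is new enough (the installed version is hardcoded here as an example; on a real host take it from `gluster --version | head -1`):

```shell
# ver_ge A B: true when A >= B under version ordering (GNU sort -V).
ver_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

installed="3.3.1"   # example value, not read from a live system
required="3.4.0"
if ver_ge "$installed" "$required"; then
  echo "glusterfs $installed is new enough"
else
  echo "glusterfs $installed is older than $required: upgrade"
fi
# prints: glusterfs 3.3.1 is older than 3.4.0: upgrade
```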
Hello Moti and Dafna,
The host has connectivity with the engine until I try to install a VM;
in the middle of the process the host loses connectivity. I can see that it
is a connection problem. How can I check whether the host address is the same
as the one I used to add it to my data center? I made an I
On 01/31/2013 07:07 PM, Dead Horse wrote:
> Here is the content excerpt from libvirtd.log for the command: virsh #
> migrate --p2p sl63 qemu+ssh://192.168.1.2/system
>
Thanks for testing this. However, this is another problem. When
migrating p2p, the destination URI must be reachable from the source host.
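Before retrying, it is worth confirming from the source host that the host named in the migration URI really is reachable. A minimal sketch (URI format assumed from the `virsh migrate` command quoted above) that extracts the host part so it can be checked with `ping` or an ssh-port probe:

```shell
# uri_host URI: print the host component of a libvirt connection URI.
uri_host() {
  local h="${1#*://}"       # drop the scheme, e.g. qemu+ssh://
  h="${h%%/*}"              # drop the path, e.g. /system
  printf '%s\n' "${h##*@}"  # drop any user@ prefix
}

uri_host "qemu+ssh://192.168.1.2/system"   # prints 192.168.1.2
```

Run on the source host, the printed address is the one that must answer, e.g. `nc -z <host> 22` for a qemu+ssh URI.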