On 04/14/2016 11:03 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 10:38 PM, Simone Tiraboschi
> wrote:
>> On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck
>> wrote:
>>> On 14.04.16 18:46, Simone Tiraboschi wrote:
On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck
wrote:
>>>
On Thu, Apr 14, 2016 at 7:35 PM, Nir Soffer wrote:
> On Wed, Apr 13, 2016 at 4:34 PM, Luiz Claudio Prazeres Goncalves
> wrote:
> > Nir, here is the problem:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1298693
> >
> > When you do a hosted-engine --deploy and pick "glusterfs", you don't have
>
On Thu, Apr 14, 2016 at 7:07 PM, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> wrote:
> Sandro, any word here? Btw, I'm not talking about hyperconvergence in this
> case, but about 3 external gluster nodes using replica 3
>
> Regards
> Luiz
>
On Wed, Apr 13, 2016 at 10:34 AM, Luiz Claudio Prazer
Hi all,
I've successfully set up the serial console feature for all my VMs.
But the only way I found to make it work is to give each user the
UserVmManager role, even though they already have the SuperUser role at
the datacenter level. I know there is an open bug for this.
A second bug is that adding
On Thu, Apr 14, 2016 at 10:38 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck
> wrote:
>> On 14.04.16 18:46, Simone Tiraboschi wrote:
>>> On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck
>>> wrote:
On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
> On
On Thu, Apr 14, 2016 at 6:53 PM, Richard Neuboeck wrote:
> On 14.04.16 18:46, Simone Tiraboschi wrote:
>> On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck
>> wrote:
>>> On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
wrote:
> On
I have a question regarding migration of VMs. I hope someone can tell me
whether my migration idea can work or if it is not possible.
I want to migrate about 100-200 VMs from one oVirt deployment to a new
oVirt deployment. Some of the VMs are over 3TB in size. Exporting and
importing these
I've been experimenting with SR-IOV. I have a network with two vNIC profiles,
one for passthrough and one for virtio. Per this video:
https://www.youtube.com/watch?v=A-MROZ8D06Y
I think I should be able to do "mixed mode" using SR-IOV and virtio on the same
physical NIC. It does work, initially.
All,
I am trying to add a physical drive to oVirt. For instance, my physical
machine has 3 extra hard drives attached and I want to add them directly instead
of through Gluster or iSCSI. Is this possible?
I tried the POSIX compliant FS option by creating an XFS filesystem on
/dev/sde1 and tried to add that as a
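For reference, a minimal sketch of what I tried so far; only /dev/sde1 is
from my setup, the mount sanity check and the dialog values are my
assumptions of what the POSIX domain expects:
$ mkfs.xfs /dev/sde1
$ mount -t xfs /dev/sde1 /mnt && umount /mnt   # sanity check the filesystem
(then in webadmin: New Domain -> Storage Type "POSIX compliant FS",
Path /dev/sde1, VFS Type xfs)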
On Wed, Apr 13, 2016 at 4:34 PM, Luiz Claudio Prazeres Goncalves
wrote:
> Nir, here is the problem:
> https://bugzilla.redhat.com/show_bug.cgi?id=1298693
>
> When you do a hosted-engine --deploy and pick "glusterfs", you don't have a
> way to define the mount options; therefore, the use of the
> "b
Sandro, any word here? Btw, I'm not talking about hyperconvergence in this
case, but about 3 external gluster nodes using replica 3
Regards
Luiz
On Wed, Apr 13, 2016 at 10:34 AM, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> wrote:
> Nir, here is the problem:
> https://bugzilla.redhat.com/sho
On Thu, Apr 14, 2016 at 4:04 PM, Richard Neuboeck wrote:
> On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
>> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
>> wrote:
>>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck
wrote:
>>>
I'm trying to attach a SAN domain to a data center.
It's an old domain that is being re-attached. The import works fine,
but the attach fails, through either the UI or a REST API call.
The log of the API calls is:
> POST /api/datacenters/92fbe5d6-2920-401d-b69b-ad4568e4f407/storagedomains
> HTT
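For completeness, the attach call I'm sending is roughly the following
sketch (the credentials and the storage domain UUID are placeholders; only
the datacenter UUID is from my log above):
$ curl -k -u admin@internal:PASSWORD -X POST \
  -H "Content-Type: application/xml" -H "Accept: application/xml" \
  -d '<storage_domain id="SD-UUID"/>' \
  https://engine.example.com/api/datacenters/92fbe5d6-2920-401d-b69b-ad4568e4f407/storagedomains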
Hi Nick,
I had this problem myself a while ago, and it turned out the issue was
DNS related (one of the hosts couldn't do a DNS lookup on the name
registered to the other host, so it failed with a strange error). The
best way to diagnose a migration failure is probably with the
/var/log/vdsm
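A quick, hedged way to verify the lookups from each host (the hostnames
below are placeholders for your two hosts):
$ getent hosts host1.example.com
$ getent hosts host2.example.com
Both names should resolve from both hosts before you attempt a migration.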
On Thu, Apr 14, 2016 at 2:14 PM, Nick Vercampt
wrote:
> Dear Sirs
>
> I'm writing to ask a question about the live migration on my oVirt setup.
>
> I'm currently running oVirt 3.6 on a virtual test environment with 1
> default cluster (2 hosts, CentOS 7) and 1 Gluster enabled cluster (with 2
> vi
On 04/14/2016 02:14 PM, Simone Tiraboschi wrote:
> On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
> wrote:
>> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck
>>> wrote:
The answers file shows the setup time of both machines.
>>
Dear Sirs
I'm writing to ask a question about the live migration on my oVirt setup.
I'm currently running oVirt 3.6 on a virtual test environment with 1 default
cluster (2 hosts, CentOS 7) and 1 Gluster-enabled cluster (with 2 virtual
storage nodes, also CentOS 7).
My datacenter has a shared data
On Thu, Apr 14, 2016 at 12:51 PM, Richard Neuboeck
wrote:
> On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
>> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck
>> wrote:
>>> The answers file shows the setup time of both machines.
>>>
>>> On both machines hosted-engine.conf got rotated right be
Hi,
I've managed to get it to work.
What I did was first run "engine-manage-domains delete" to remove
the domain and then add it again using the new aaa extension tool
"ovirt-engine-extension-aaa-ldap-setup". It's not a good idea to mix
these two methods, I guess.
Restart the engine after each chang
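Roughly, the sequence I used (the domain name is a placeholder, and the
exact engine-manage-domains flags may differ by version, so treat this as a
sketch):
$ engine-manage-domains delete --domain=example.com
$ ovirt-engine-extension-aaa-ldap-setup   # interactive; walks through the LDAP questions
$ systemctl restart ovirt-engine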
On 04/13/2016 10:00 AM, Simone Tiraboschi wrote:
> On Wed, Apr 13, 2016 at 9:38 AM, Richard Neuboeck
> wrote:
>> The answers file shows the setup time of both machines.
>>
>> On both machines hosted-engine.conf got rotated right before I wrote
>> this mail. Is it possible that I managed to interr
Hi,
I have one method - maybe not so nice, but it works:
Create a new VM on NFS storage with a disk of the type and size of the
disk you want to import.
In the GUI, find the ID of the newly created disk. Copy your image to the
storage and replace the image in the correct directory based on that ID.
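A hedged sketch of that copy step, assuming the NFS domain is mounted on a
host; every UUID and path below is a placeholder:
$ cd /rhev/data-center/mnt/nfs.example.com:_export_data/SD_UUID/images/DISK_ID
$ qemu-img convert -O raw /tmp/image-to-import.qcow2 IMAGE_UUID   # overwrite the placeholder volume
Pick -O raw or -O qcow2 to match the format you chose when creating the disk.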
The issue is most probably that your user doesn't have permissions to
log in/see VMs in oVirt.
Just log in to webadmin as admin@internal and assign the user 'aaa' some
permissions.
Here [1] is an example of how to work with virtual machine permissions.
[1]
https://access.redhat.com/documentation/en-US/Red_Ha
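If you'd rather script it than use webadmin, a hedged sketch using the REST
API (the engine URL, password, and all three UUIDs are placeholders):
$ curl -k -u admin@internal:PASSWORD -X POST \
  -H "Content-Type: application/xml" \
  -d '<permission><role id="ROLE-UUID"/><user id="USER-UUID"/></permission>' \
  https://engine.example.com/ovirt-engine/api/vms/VM-UUID/permissions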
On Thu, Apr 14, 2016 at 1:23 PM, wrote:
> Hi Nir,
>
> On 2016-04-14 11:02, Nir Soffer wrote:
>>
>> On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland
>> wrote:
>>>
>>> Nir,
>>> See attached the repoplot output.
>>
>>
>> So we have about one concurrent lvm command without any disk operations,
>> a
Hi Nir,
On 2016-04-14 11:02, Nir Soffer wrote:
On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland
wrote:
Nir,
See attached the repoplot output.
So we have about one concurrent lvm command without any disk
operations, and
everything seems snappy.
Nicolás, maybe this storage or the host is o
On Thu, Apr 14, 2016 at 11:27 AM, Gianluca Cecchi
wrote:
> On Thu, Apr 14, 2016 at 8:15 AM, Yedidyah Bar David wrote:
>>
>> On Thu, Apr 14, 2016 at 5:18 AM, Michael Hall wrote:
>>
>>
>> 3. NFS
>> loop-back mounting nfs is considered risky, due to potential locking
>> issues. Therefore, if you wa
On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland wrote:
> Nir,
> See attached the repoplot output.
So we have about one concurrent lvm command without any disk operations, and
everything seems snappy.
Nicolás, maybe this storage or the host is overloaded by the VMs? Are your VMs
doing a lot of I/O?
Hello,
I was wondering whether there is any way to import a disk image
(qcow/qcow2) into an oVirt storage domain. I've tried v2v, but it won't
work because the image customization parts of it won't deal with Ubuntu,
and I tried import-to-ovirt.pl, but the disks it creates seem to be
broken in som
Hi,
I'm using curl; I followed the steps in [1] and double-checked the
permissions.
I've tested API access vs. webadmin access (see below).
$ curl -v --negotiate -X GET -H "Accept: application/xml" -k
https://server8.funfurt.de/ovirt-engine/webadmin/?locale=de_DE
# Result: HTTP 401
$ kinit
$ curl
Nir,
See attached the repoplot output.
On Thu, Apr 14, 2016 at 12:18 PM, Nir Soffer wrote:
> On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland
> wrote:
> > From the log, we can see that the lvextend command took 18 sec, which is
> > quite long.
>
> Fred, can you run repoplot on this log file? it w
On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland wrote:
> From the log, we can see that the lvextend command took 18 sec, which is
> quite long.
Fred, can you run repoplot on this log file? It may explain why this lvm
call took 18 seconds.
Nir
>
> 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG:
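(For anyone following along: repoplot isn't packaged, so this is my
assumption of how it's run, using the script from the vdsm source tree:
$ git clone https://gerrit.ovirt.org/vdsm
$ vdsm/contrib/repoplot /path/to/vdsm.log
My understanding is that it parses the vdsm log and plots storage command
timings over time.)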
From the log, we can see that the lvextend command took 18 sec, which is
quite long.
60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13
10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices
{ preferred_n
> On 14 Apr 2016, at 09:57, nico...@devels.es wrote:
>
> Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
> attaching the VDSM log of the SPM node at the time of the pause. I couldn't
> find anything that would clearly identify the problem, but maybe you'll be
> able to.
I
Thank you very much! It's working :)
On Wed, Apr 13, 2016 at 9:05 PM, Nathanaël Blanchet
wrote:
> No need to convert anything, a2ef36fa-ecfa-4138-8f19-2f7609276d4b is
> already the raw file you need. You can rsync it and rename it to myvm.img.
>
>
> On 13/04/2016 17:28, Budur Nagaraju wrote:
>
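(For the archives, a hedged sketch of that rsync; the storage mount path,
domain UUID, and disk directory are placeholders, only the volume UUID comes
from the thread:
$ rsync -avP /rhev/data-center/mnt/STORAGE/SD_UUID/images/DISK_ID/a2ef36fa-ecfa-4138-8f19-2f7609276d4b ./myvm.img
rsync copies the raw volume file, and the destination name renames it to
myvm.img in one step.)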
On Thu, Apr 14, 2016 at 8:15 AM, Yedidyah Bar David wrote:
> On Thu, Apr 14, 2016 at 5:18 AM, Michael Hall wrote:
>
>
> 3. NFS
> loop-back mounting nfs is considered risky, due to potential locking
> issues. Therefore, if you want to use NFS, you are better off doing
> something like this:
>
>
H
Hi,
> But the project doesn't look ready to go and I can't find a download.
I think that is one of the unfortunate effects of how the website was
converted. Check the "At a glance" section; it says the status is
Released. It has been released since oVirt 3.3, with significant
improvements in 3.4
Ok, that makes sense, thanks for the insight, Alex and Fred. I'm
attaching the VDSM log of the SPM node at the time of the pause. I
couldn't find anything that would clearly identify the problem, but
maybe you'll be able to.
Thanks.
Regards.
On 2016-04-13 13:09, Fred Rolland wrote:
H