The oVirt Project is pleased to announce the availability of oVirt 4.0.5,
as of November 15th, 2016.
This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
*
Hi,
The reason I use this term is that there really are two engines, or two different
setups/environments.
I'm not talking about moving a VM from one virtual DC to another virtual DC
within the same engine.
Best,
--
Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc
UNIVERSITÉ DU
Export domains exist to move VMs between oVirt environments. As
already mentioned, you attach the export domain to one data center,
export the VM, put the domain into maintenance and detach it, attach it
to the other data center, and import the VM (then maintenance/detach again, etc.).
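For scripting, the export step can also be triggered through the engine's REST API. A rough sketch with curl follows; the engine URL, credentials, VM_ID, and the 'export1' domain name are all placeholders, and this assumes an export domain is already attached to the VM's data center:

```shell
# Hypothetical sketch: ask the engine to export a (stopped) VM to the
# export domain attached to its data center. URL, credentials, VM_ID
# and the domain name are placeholders.
curl -k -u 'admin@internal:secret' \
  -X POST \
  -H 'Content-Type: application/xml' \
  -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/export'
```

The maintenance/detach/attach steps are performed on the storage domain itself, from the webadmin UI or the same API.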
The language "export domains 'between' engines" is a
You can mount the snapshot to another VM and copy the image.
We will be adding an image download option in oVirt 4.1.
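At the image level, the copy step can be sketched with qemu-img. The paths below are hypothetical; on oVirt storage the volumes should only be read from a helper VM, or while the chain is not being written to:

```shell
# Illustrative only: inspect a qcow2 volume and flatten its backing
# chain into a standalone backup image. Paths are hypothetical.
qemu-img info /path/to/snapshot-volume.qcow2
qemu-img convert -O qcow2 /path/to/snapshot-volume.qcow2 /backup/vm-backup.qcow2
```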
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109
Tel : +972 (9) 7692306
8272306
Email:
On Tue, Nov 15, 2016 at 5:05 PM, Simone Tiraboschi
wrote:
>
>
> On Tue, Nov 15, 2016 at 5:00 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Tue, Nov 15, 2016 at 4:39 PM, Simone Tiraboschi
>> wrote:
>>
>>>
Boot time was at
I cannot seem to find any information on how to change the notification
settings, in particular the e-mail addresses to which notifications are sent
and the From address used for them. I have tried adding a file on
the engine itself in /etc/ovirt-engine/notifier/notifier.conf.d/ and
setting
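For reference, an override file in that directory would look something like the following; the key names follow the defaults shipped with the ovirt-engine-notifier service, and the values here are placeholders:

```
# /etc/ovirt-engine/notifier/notifier.conf.d/99-custom.conf (example name)
MAIL_SERVER=smtp.example.com
MAIL_PORT=25
MAIL_FROM=ovirt-engine@example.com
```

As far as I know, the recipient addresses are not set in this file but per user, under the event notification settings in the webadmin; the ovirt-engine-notifier service needs a restart after changes.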
Missing information on:
- What OS is running on the guest?
- Is the QXL driver installed on the guest?
- Is spice-vdagent installed and running on the guest?
Thanks
On Tue, Nov 15, 2016 at 4:43 PM, Yaniv Dary wrote:
> You can mount the snapshot to another VM and copy the image.
> We will be adding a image download option in oVirt 4.1.
Probably, please open a bug.
On Fri, Nov 11, 2016 at 12:09 PM, knarra wrote:
In general, ever since 3.5 you don't need an export domain to do this.
It is advised to use the import storage domain feature to do the same thing:
you create a new storage domain, move the VM to it, then detach it and attach
it to the new engine.
This should be much faster than the export domain.
Yaniv
Dear All,
I have a running VM that has a snapshot. It was taken while the machine was
stopped, so this snapshot is a consistent backup of the whole VM.
Is it possible to extract the snapshot from the running VM's image? I have tried
qemu-img, but I didn't find a similar function.
I need the
Yes, this is possible, since the import storage domain feature has existed since oVirt 3.5.
Doing a test first is always advised, though.
On Tue, Nov 15, 2016 at 4:39 PM, Simone Tiraboschi
wrote:
>
>>
>> Boot time was at 14:32 and from the image it seems that by 14:39 everything
>> should already have been ok... but perhaps some further settling is needed
>> after the first "Storage Pool Manager runs on ." message?
>> When I
On Tue, Nov 15, 2016 at 5:00 PM, Gianluca Cecchi
wrote:
> On Tue, Nov 15, 2016 at 4:39 PM, Simone Tiraboschi
> wrote:
>
>>
>>>
>>> Boot time was at 14:32 and from image it seems that on 14:39 all should
>>> have been already ok... but perhaps
Dear users,
We have an oVirt cluster with three nodes and centralized storage (NFS based).
The cluster is based on CentOS 6.
We bought some new servers for more resources, and we want to upgrade to 4.0.
Is it possible to do a brand new install and import all of the VMs from storage?
The
On Tue, Nov 15, 2016 at 3:04 PM, Gianluca Cecchi
wrote:
>
>
> On Tue, Nov 15, 2016 at 2:51 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Single host with self hosted engine.
>> All seems ok after rebooting host and exiting local maintenance.
>> But
On Mon, 14 Nov 2016 19:42:46 + Daniel wrote:
BD> An “export domain” is made for just this purpose. Create an NFS (version 3)
share and make it accessible to the hypervisors for each engine. (It should be
a dedicated NFS share, not used for anything else.) As I recall it should be
owned by
On Tue, Nov 15, 2016 at 2:46 PM, Gianluca Cecchi
wrote:
>
>
>
>
>> In case let me know how can I open a documentation bug for this if you
>>> agree.
>>>
>>
>> You are more than welcome.
>> Thanks!
>>
>>
>
> Against what?
>
>
Created this one
On Tue, Nov 15, 2016 at 5:16 PM, Gianluca Cecchi
wrote:
> On Tue, Nov 15, 2016 at 5:05 PM, Simone Tiraboschi
> wrote:
>
>>
>>
>> On Tue, Nov 15, 2016 at 5:00 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Tue, Nov 15, 2016 at
Hi Gianluca,
yes, you are right. Now the second and third hosts can be added directly
from the UI. Before adding the second and third host, please make sure that
the following steps are done for a hyperconverged setup.
1) On the hosted engine VM run the command 'engine-config -s
Hi,
I was installing the latest upstream master and I am hitting the issue
below. Can someone please let me know if this is a bug? If yes, is this
going to be fixed in the next nightly?
[WARNING] OVF does not contain a valid image description, using default.
[ INFO ] Detecting host
Sorry I'm quite new to this
Thank you Yaniv,
I'm running CentOS 7.2, with kernel 3.10.0-327.28.3, on both my guest VM and
the oVirt host.
Both the QXL driver and spice-vdagent are installed on my guest VM:
xorg-x11-drv-qxl-0.1.1-18.el7
spice-vdagent-0.14.0-10.el7
I'm running oVirt
That's correct: GlusterFS manages its own quotas with xattrs. If you create
GlusterFS over XFS and set quotas on both, then both quotas will be
active simultaneously. Hence it is advisable to use only the gluster
quota commands to manage quotas.
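Concretely, the gluster-side quota management mentioned above uses commands along these lines; the volume name 'myvol' and the path '/data' are just examples:

```shell
# Example gluster quota commands; 'myvol' and '/data' are placeholders.
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /data 10GB
gluster volume quota myvol list
```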
Regards,
Sanoj
On Sun, Nov 13, 2016 at 10:09
On Wed, Nov 16, 2016 at 7:59 AM, knarra wrote:
> Hi Gianluca,
>
> yes, you are right. Now second and third host can be directly added
> from UI. Before adding the second and third host please make sure that the
> following steps are done for hyperconverged setup.
>
> 1) On
hi
I apologize if I missed it when reading the release (repo) notes.
What are users supposed to do with the EPEL repo?
I'm asking because I hit this:
--> Package python-perf.x86_64 0:4.8.7-1.el7.elrepo will be an update
--> Finished Dependency Resolution
Error: Package: nfs-ganesha-gluster-2.3.0-1.el7.x86_64
Strange, the import failed twice, and it succeeded when I tried a third
time. I'll report back when I encounter this problem again. Thanks.
Best regards,
Martijn.
Op 15-11-2016 om 08:33 schreef Elad Ben Aharon:
> Can you please attach engine.log?
> Thanks
>
> On Mon, Nov 14, 2016 at 6:28 PM,
On Tue, Nov 15, 2016 at 1:16 PM, Gianluca Cecchi
wrote:
> 2) Command to upgrade the engine
>> In oVirt page we have
>> yum update "ovirt-engine-setup*"
>>
>> In RHEV we have the command without the star
>> yum update ovirt-engine-setup
>>
>> If I remember correctly the
Hello,
I'm testing a hyperconverged setup with Gluster, with oVirt 4.0.5, three
hosts, and a self hosted engine.
I'm at the point where the first host is ok and the engine is up, and I have to
deploy the second and third hosts.
In the past the command to give on them was
root@host2 # hosted-engine --deploy
and at
Hello,
I'm just noticing changes in the web page for the release notes of oVirt 4.0.5:
http://www.ovirt.org/release/4.0.5/
There are now several links to the corresponding official RHEV documents.
I think this is good, especially if they remain not too far from each other.
I just noticed 2 things:
1) the
oVirt 4.0.5 is now available; see Sandro's e-mail from today.
Thanks,
On Mon, Nov 14, 2016 at 5:25 PM, Derek Atkins wrote:
> Michael,
>
> Thanks. I'll wait for 4.0.5 to be released.
>
> Is there a published
> release schedule anywhere? Google only brought me to the
>
> 2) Command to upgrade the engine
> In oVirt page we have
> yum update "ovirt-engine-setup*"
>
> In RHEV we have the command without the star
> yum update ovirt-engine-setup
>
> If I remember correctly the star was important in the past
> Any comments on this difference? Documentation error
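The quotes matter because, without them, the shell itself may expand the glob against files in the current directory before yum ever sees the pattern. A quick demonstration (the directory and file names are just for illustration):

```shell
# Demonstrate shell glob expansion vs. a quoted pattern.
mkdir -p /tmp/glob-demo
cd /tmp/glob-demo
touch ovirt-engine-setup-base.txt

echo ovirt-engine-setup*     # expanded by the shell: ovirt-engine-setup-base.txt
echo "ovirt-engine-setup*"   # literal pattern passed through: ovirt-engine-setup*
```

With the quoted form, yum receives the wildcard and matches it against package names itself, pulling in all the ovirt-engine-setup subpackages; unquoted, the result depends on what happens to be in the current directory.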
On Tue, Nov 15, 2016 at 1:26 PM, lejeczek wrote:
> hi
>
> I apologize if I missed it reading release(repo) note.
> What are users supposed to do with EPEL repo?
> I'm asking because I hit this:
>
> --> Package python-perf.x86_64 0:4.8.7-1.el7.elrepo will be an update
> -->
On Tue, Nov 15, 2016 at 2:19 PM, Simone Tiraboschi
wrote:
>
>
> On Tue, Nov 15, 2016 at 1:16 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> 2) Command to upgrade the engine
>>> In oVirt page we have
>>> yum update "ovirt-engine-setup*"
>>>
>>> In RHEV we have
[+soumya]
On 11/15/2016 06:51 PM, Simone Tiraboschi wrote:
On Tue, Nov 15, 2016 at 1:26 PM, lejeczek wrote:
hi
I apologize if I missed it reading release(repo) note.
What are users supposed to do with EPEL repo?
I'm asking
Single host with self hosted engine.
All seems ok after rebooting host and exiting local maintenance.
But when I try to start a VM I get
VM racclient1 is down with error. Exit message: Unable to get volume size
for domain 556abaa8-0fcc-4042-963b-f27db5e03837 volume
On Tue, Nov 15, 2016 at 2:51 PM, Gianluca Cecchi
wrote:
> Single host with self hosted engine.
> All seems ok after rebooting host and exiting local maintenance.
> But when I try to start a VM I get
>
> VM racclient1 is down with error. Exit message: Unable to get