Hello all,
I'm looking for a high-level, approved workflow to completely remove a
datacenter and all its objects.
The datacenter consists of 1 cluster, 1 host, 1 local storage domain, and no VMs.
Hosted engine is on a different data center.
Everything is nicely working.
The administration
Hello,
I didn't find any way to easily list all my VMs with the Ansible
modules...
I tried the ovirt4.py script, which is able to list all the facts,
including the VM list, when their number is small in a test datacenter,
but in a production datacenter I get an issue:
File "./ovirt4.py",
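For what it's worth, when only the VM names are needed, the dynamic inventory script can be bypassed and the Python SDK queried directly. A minimal sketch, assuming the ovirtsdk4 package and a reachable engine (the URL and credentials in the comment are placeholders):

```python
def list_vm_names(connection):
    """Return the names of all VMs known to the engine, via the
    /vms collection of the REST API."""
    vms_service = connection.system_service().vms_service()
    return [vm.name for vm in vms_service.list()]

# Typical usage (requires a reachable engine; values are placeholders):
#
#   import ovirtsdk4 as sdk
#   connection = sdk.Connection(
#       url='https://engine.example.com/ovirt-engine/api',
#       username='admin@internal',
#       password='secret',
#       ca_file='ca.pem',
#   )
#   try:
#       print(list_vm_names(connection))
#   finally:
#       connection.close()
```

Fetching only names this way avoids gathering the whole fact tree, which is what gets slow on a large production datacenter.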
On 5/9/2017 6:06 AM, Gary Lloyd wrote:
Hi
I was just wondering if anyone is running Ovirt using a shared SAS array
with the ability to live migrate between hosts ?
If so has anyone been able to get hosted engine working with it ?
Yes, it works fine. Just claim it's FC storage and use a
What do you guys think about having a PDF/HTML in the main upper canvas
"Guide" button instead of pointing to http://www.ovirt.org/documentation/?
That way everyone would have the correct information for the correct version.
regards,
2017-05-09 14:04 GMT-03:00 Jeff Burns :
>
>
Never mind me, it was a permissions problem; sorry if you clicked here :(
On Tue, May 9, 2017 at 4:08 PM, Erick Vogeler
wrote:
> Pic of the error
> https://i.imgur.com/rhp3thT.png
>
> Engine Log:
>
> https://0bin.net/paste/Pz6QV5hPaGmksgnA#azl9UXo2M+ilLLMg31Wh+
>
Pic of the error
https://i.imgur.com/rhp3thT.png
Engine Log:
https://0bin.net/paste/Pz6QV5hPaGmksgnA#azl9UXo2M+ilLLMg31Wh+IymHHSWNsLlA1R5DZa1WUG
Vdsm log
https://0bin.net/paste/1kLWT5btLQVa9el1#aQu+lviLYw-RZVxVJD8dYMI1juALurJI3vjw1ZWNpfF
Thank you appreciate it
Seriously, what's wrong with you? I've been reading your comments for some
time, and the only things I see are whining, unproductive complaining and
disrespectful comments.
Please, stop it. There are tons of ways to say something, and the way you
use is insulting to the dozens of people developing
We have three gluster shares (_data, _engine, _export) created by a
brick located on three of our VM hosts. See output from "gluster volume
info" below:
Volume Name: data
Type: Replicate
Volume ID: c07fdf43-b838-4e4b-bb26-61dbf406cb57
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
On Tue, May 9, 2017 at 5:59 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:
> The documentation is always a good laugh at ovirt. Look for RHEL instead.
>
After you finish laughing, you could improve it.
I have no doubt that you have both the skills and the experience, and
I'm whining because ovirt is a wonderful product and the people behind it are
nice, but oh boy, what about the execution!
And no, I have done much more than empty complaining: I have opened my share
of bug reports, written a blog entry about using ovirt+kerberos+SSO, written a
full-fledged CLI, and am trying
Hello, my name is Vincent.
I want to know the location of the disks in ovirt 3.5.
When cloning a VM, the disk is cloned with the same capacity, but after
I moved the disk to an NFS drive it tells me that it only weighs 4K.
Regards
Vincent Romero
https://www.dropbox.com/s/0vw8pvm99dpnq0a/ovirtlogs.tar.gz?dl=0
I opened a bug as https://bugzilla.redhat.com/show_bug.cgi?id=1448399
> On 4 May 2017 at 17:10, Fabrice Bacchella
> wrote:
>
> I'm playing with the python sdk and getting :
>
> [2017-05-04
Hi, it seems like some stuff was left on /boot from previous attempts,
making the boot setup stage fail, which means that the node is actually
installed on the onn_labvmhostt05/ovirt-node-ng-4.1.1.1-0.20170406.0+1 LV
but the kernel wasn't installed, making it impossible to boot to that LV.
The way
here are some pics:
https://i.imgur.com/hfJM6PG.png
machines show as down although they are up
https://i.imgur.com/NkHisyr.png
Engine Log:
https://drive.google.com/file/d/0B4wHJ6nwLi9BcE5INXRJSVpPSHM/view?usp=sharing
VDSM Log:
On Tue, May 9, 2017 at 10:14 AM, Erick Vogeler wrote:
> here are some pics:
> https://i.imgur.com/hfJM6PG.png
>
> machines show as down although they are up
> https://i.imgur.com/NkHisyr.png
>
> Engine Log:
>
Restarting the engine solved the issue.
It is a Local DC. The IP of the host was changed, and though it is
configured as FQDN in Ovirt, things got weird...
On Tue, May 9, 2017 at 10:29 AM, Yedidyah Bar David wrote:
> On Tue, May 9, 2017 at 10:14 AM, Erick Vogeler
On Tue, May 9, 2017 at 11:08 AM, Fred Rolland wrote:
> Restarting the engine solved the issue.
> It is a Local DC. The IP of the host was changed, and though it is
> configured as FQDN in Ovirt, things got weird...
Also Erick replied in private, saying:
> Yes yes
>
> Problem
You can find the disk ID in the "Disks" tab of the webadmin portal, then go to
the NFS directory you are using and look for that ID; that is the very disk
you are looking for.
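To illustrate that lookup concretely, here is a sketch against a mocked-up storage-domain layout (the mount point, domain UUID, and disk ID below are all placeholders; on a real setup you would run the final find against your NFS storage domain's mount point):

```shell
# Mock the usual <mount>/<storage-domain-uuid>/images/<disk-id>/ layout
# so the lookup can be demonstrated without a live NFS mount.
MNT=$(mktemp -d)
SD_UUID=11111111-2222-3333-4444-555555555555      # placeholder domain UUID
DISK_ID=b2c01234-aaaa-bbbb-cccc-0123456789ab      # value from the "Disks" tab
mkdir -p "$MNT/$SD_UUID/images/$DISK_ID"
touch "$MNT/$SD_UUID/images/$DISK_ID/$DISK_ID"    # the disk image itself

# The actual lookup: find the image directory named after the disk ID.
find "$MNT" -type d -name "$DISK_ID"
```

The image files inside that directory are what the engine shows in the "Disks" tab; thin-provisioned images on NFS are sparse, which is why a freshly cloned disk can appear to "weigh" only a few KB with ls or du.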
2017-05-10 5:30 GMT+08:00 Vincent Romero :
> Hello my name is Vincent
>
> ¿I want to know the location of
On Mon, May 8, 2017 at 9:00 AM, knarra wrote:
> On 05/07/2017 04:48 PM, Mike DePaulo wrote:
>>
>> Hi. I am trying to follow this guide. Is it possible to use part of my
>> OS disk /dev/sda for the bricks?
>>
>>
On 09/05/2017 at 12:29, santosh bhabal wrote:
Hello Experts,
I am new to the Ovirt community.
Apologies if this question has been asked earlier.
I just wanted to know: does Ovirt support Citrix XenServer or not?
Definitely not; ovirt is only a KVM-based hypervisor manager, even
if it is
Opened: https://bugzilla.redhat.com/show_bug.cgi?id=1449181
2017-05-09 13:37 GMT+08:00 Yedidyah Bar David :
> On Mon, May 8, 2017 at 8:04 PM, plysan wrote:
> > Solved by the additional configuration in
> >
Hi
I was just wondering if anyone is running Ovirt using a shared SAS array
with the ability to live migrate between hosts ?
If so has anyone been able to get hosted engine working with it ?
Thanks
*Gary Lloyd*
I.T. Systems:Keele University
Hello Experts,
I am new to the Ovirt community.
Apologies if this question has been asked earlier.
I just wanted to know: does Ovirt support Citrix XenServer or not?
Regards,
Santosh.
___
Users mailing list
Users@ovirt.org
Hi there,
we have a big problem with our ovirt 4.1.1 environment.
After an FC storage failure and an automatic reboot of the host with the
hosted engine on it, we can't get the engine running again.
The problem seems to be an invalid lockspace. sanlock.log shows:
2017-05-09 12:07:22+0200 35 [4991]:
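For an invalid hosted-engine lockspace, the recovery path reported on this list is roughly the following; this is a sketch assuming the standard ovirt-hosted-engine-setup tooling (the --reinitialize-lockspace option exists from 3.6 onward), so verify against the docs for your version before running it:

```shell
# Put the cluster in global maintenance so the HA agents stop trying
# to restart the engine VM while the lockspace is being repaired.
hosted-engine --set-maintenance --mode=global

# Reinitialize the sanlock lockspace on the hosted-engine storage domain.
hosted-engine --reinitialize-lockspace

# Leave maintenance and let the HA agents start the engine VM again,
# then watch the state converge.
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status
```

If sanlock still refuses the lockspace afterwards, the sanlock.log excerpt plus agent.log/broker.log from /var/log/ovirt-hosted-engine-ha/ are what the developers will ask for.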
> On 9 May 2017 at 12:06, Gary Lloyd wrote:
>
> Hi
>
> I was just wondering if anyone is running Ovirt using a shared SAS array with
> the ability to live migrate between hosts ?
Yes, I have; everything runs fine.
> If so has anyone been able to get hosted engine
Thanks for the update.
On Tue, May 9, 2017 at 4:11 PM, Nathanaël Blanchet wrote:
>
>
> On 09/05/2017 at 12:29, santosh bhabal wrote:
>
> Hello Experts,
>
> I am new to Ovirt community.
> Apologies if this question has been asked earlier.
> I just wanted to know that does Ovirt
On Fri, May 5, 2017 at 4:47 PM, Juan Hernández wrote:
>
> Yes, capablanca is too old.
>
> The instructions that you mention should still work, but remember to
> make a backup before doing that in your production environment.
>
> Anyhow, I'd suggest that you install a fresh
Hi,
Thanks for the reply. I have tried to gather logs from the hosts here on
Google Drive: https://drive.google.com/open?id=0B7R4U330JfWpbkNhb2pxZWhmUUk
On Sun, Apr 30, 2017 at 10:50 AM, Fred Rolland wrote:
> Hi,
>
> Can you provide the vdsm and engine logs ?
>
> Thanks,
> Fred
>
> On
On Tue, May 9, 2017 at 6:52 PM, Nathanaël Blanchet wrote:
> Hello,
>
> I didn't find any way to easily list all my VMs with the Ansible
> modules...
> I tried the ovirt4.py script, which is able to list all the facts, including
> the VM list, when their number is small in a test
Hello ovirt users,
First of all, thanks for your work. I've been using the software for a few
months and the experience has been great.
I'm having a hard time trying to set the group on a glusterfs volume
PLAY [master]
**
TASK
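If the goal of that play is the oVirt-recommended volume settings, note that glusterfs ships a predefined "virt" option group that applies them in one step; a sketch, assuming a volume named data (adjust the name — the group definitions live under /var/lib/glusterd/groups/ on the gluster servers):

```shell
# Apply the "virt" option group (shard, eager-lock, remote-dio, etc.)
# to the volume, then inspect what was actually set.
gluster volume set data group virt
gluster volume info data
```

From Ansible this is just a command/shell task (or the gluster_volume module's options), run once against any one of the servers rather than on every brick host.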
On Tue, May 9, 2017 at 2:45 PM, plysan wrote:
> Opened: https://bugzilla.redhat.com/show_bug.cgi?id=1449181
Thanks!
>
> 2017-05-09 13:37 GMT+08:00 Yedidyah Bar David :
>>
>> On Mon, May 8, 2017 at 8:04 PM, plysan wrote:
>> > Solved by the
The documentation is always a good laugh at ovirt. Look for RHEL instead.
> On 9 May 2017 at 16:13, Juan Pablo wrote:
>
> Team, is it just me or are the documentation pages not being updated? Many
> are outdated... how can we collaborate?
>
> whats up with