Hi,
sorry for the delay; I guess I'll plan to upgrade to 3.4 soon.
Eli, Artyom, Omer - big thanks for your valuable help; it was important
to me to understand what went wrong in that incident.
Yuriy Demchenko
On 05/19/2014 06:26 PM, Artyom Lukianov wrote:
Bug already fixed in 3.3
On 20/05/2014 20:43, Bob Doolittle wrote:
On 05/20/2014 10:41 AM, Sandro Bonazzola wrote:
On 20/05/2014 16:36, Bob Doolittle wrote:
On 05/20/2014 10:23 AM, Sandro Bonazzola wrote:
On 20/05/2014 16:06, Bob Doolittle wrote:
On 05/20/2014 09:42 AM, Sandro Bonazzola wrote:
On May 20, 2014, at 02:05 , Jeff Clay jeffc...@gmail.com wrote:
When selecting to shut down VMs from the admin portal, it often doesn't work,
although sometimes it does. These machines are all stateless and in the same
pool, yet sometimes they will shut down from the portal, most of the time
I'd like to add that these rules are for NFSv3,
asking Bob if he is maybe using NFSv4?
On 21.05.2014 08:43, Sandro Bonazzola wrote:
I'm not saying the NFS issue is irrelevant :-)
I'm saying that if you're adding an NFS service on the node running the
hosted engine, you'll need to configure iptables.
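For reference, a minimal sketch of what NFSv3 typically needs opened in
iptables (the mountd port is an assumption: NFSv3 assigns its helper ports
dynamically unless you pin them, e.g. MOUNTD_PORT=892 in /etc/sysconfig/nfs):

iptables -A INPUT -p tcp --dport 111 -j ACCEPT   # rpcbind/portmapper
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT  # nfsd
iptables -A INPUT -p tcp --dport 892 -j ACCEPT   # mountd, if pinned to 892
iptables -A INPUT -p udp --dport 892 -j ACCEPT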
Hi,
We're going to start composing oVirt 3.4.2 RC on *2014-05-27 08:00 UTC* from
3.4 branches.
The bug tracker [1] shows no blocking bugs for the release.
There are still 71 bugs [2] targeted to 3.4.2.
Excluding node and documentation bugs, we still have 39 bugs [3] targeted to
3.4.2.
Jpackage.org is down again; there isn't even an authoritative DNS server
configured anymore.
Does anyone know what's going on there? It seems to have dropped off the
planet, and this messes up the dependency repo big time.
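For what it's worth, the DNS breakage is easy to confirm from any machine
with bind-utils installed (a hypothetical check, not from the original post):

dig NS jpackage.org +short   # prints nothing if no authoritative NS answers
dig A www.jpackage.org       # expect SERVFAIL while the zone is broken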
Kind regards,
Jorick Astrego
Netbulae BV
On Mon, 2014-05-19 at 10:31 +0200,
Hi,
and thanks for the list of open bugs.
Here are my proposed blockers (I won't add
them directly without dev agreement).
The list is not ordered in any way:
Cache records in memory
https://bugzilla.redhat.com/show_bug.cgi?id=870330
Status: Post
ovirt-log-collector: conflicts with file
Hi,
We released oVirt 3.5.0 Alpha on *2014-05-20* and we're now preparing for
the feature freeze scheduled for 2014-05-30.
We're going to compose a second Alpha on Friday *2014-05-30 08:00 UTC*.
Maintainers:
- Please be sure that the master snapshot allows creating VMs before *2014-05-29
15:00 UTC*
On 21/05/2014 11:08, Sven Kieske wrote:
Hi,
and thanks for the list of open bugs.
Here are my proposed blockers (I won't add
them directly without dev agreement).
The list is not ordered in any way:
Cache records in memory
https://bugzilla.redhat.com/show_bug.cgi?id=870330
Hi,
I don't know the exact resolution for this, but I'll add some people
who managed to make it work, following this tutorial:
http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start33
See this thread on the users ML:
http://lists.ovirt.org/pipermail/users/2013-December/018341.html
HTH
On 05/21/2014 02:04 PM, Gabi C wrote:
Hello!
I have an oVirt setup, 3.4.1, up to date, with gluster package
3.5.0-3.fc19 on all 3 nodes. The GlusterFS setup is replicated across 3
bricks. On 2 nodes, 'gluster peer status' shows 2 peers connected with
their UUIDs. On the third node, 'gluster peer status'
Hi guys,
Just a little more info on the problem. I've upgraded another oVirt
system before from Dreyou and it worked perfectly; however, on this
particular system we had to restore from backups (DB, PKI and
/etc/ovirt-engine) as the physical machine that was hosting the
engine died, so perhaps this
On the affected node:
gluster peer status
Number of Peers: 3
Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)
Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)
Hostname:
On Wed, May 21, 2014 at 11:33:13AM +0200, Sandro Bonazzola wrote:
On 21/05/2014 11:08, Sven Kieske wrote:
Hi,
and thanks for the list of open bugs.
Here are my proposed blockers (I won't add
them directly without dev agreement).
The list is not ordered in any way:
- Original Message -
From: Ted Miller tmil...@hcjb.org
To: users users@ovirt.org
Sent: Tuesday, May 20, 2014 11:31:42 PM
Subject: [ovirt-users] sanlock + gluster recovery -- RFE
As you are aware, there is an ongoing split-brain problem with running
sanlock on replicated gluster
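A hedged sketch of how such a split brain is usually spotted, and re-healed
once resolved (the volume name is a placeholder):

gluster volume heal VOLNAME info split-brain   # list files the replicas disagree on
gluster volume heal VOLNAME                    # trigger a self-heal afterwards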
On 05/21/2014 03:09 AM, Sven Kieske wrote:
I'd like to add that these rules are for NFSv3,
asking Bob if he is maybe using NFSv4?
At the moment I don't need either one. I need to solve my major issues
first. Then, when things are working, I'll worry about setting up NFS to
export new domains.
...or should I:
- stop volumes
- remove the brick belonging to the affected node
- remove the affected node/peer
- add the node and brick, then start the volumes?
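Roughly, I assume that would be (volume, node and brick path are
placeholders; shrinking a replica set needs the new replica count on
remove-brick):

gluster volume stop VOLNAME
gluster volume remove-brick VOLNAME replica 2 NODE:/path/to/brick force
gluster peer detach NODE
gluster peer probe NODE
gluster volume add-brick VOLNAME replica 3 NODE:/path/to/brick
gluster volume start VOLNAME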
On Wed, May 21, 2014 at 1:13 PM, Gabi C gab...@gmail.com wrote:
On the affected node:
gluster peer status
Number of Peers: 3
On 05/21/2014 02:49 PM, Bob Doolittle wrote:
On 05/21/2014 03:09 AM, Sven Kieske wrote:
I'd like to add that these rules are for NFSv3,
asking Bob if he is maybe using NFSv4?
At the moment I don't need either one. I need to solve my major issues
first. Then, when things are working, I'll worry
What are the steps that led to this situation?
Did you re-install one of the nodes after forming the cluster, or reboot,
which could have changed the IP?
On 05/21/2014 03:43 PM, Gabi C wrote:
On the affected node:
gluster peer status
Number of Peers: 3
Hostname: 10.125.1.194
Hello!
I haven't changed the IP, nor reinstalled nodes. All nodes are updated via
yum. All I can think of is that after having some issue with gluster, from
the WebGUI I deleted the VM, deactivated and detached the storage domains
(I have 2), then, *manually*, from one of the nodes, removed the bricks, then detached
OK.
I am not sure deleting the file or re-running peer probe would be the right
way to go.
Gluster-users can help you here.
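For orientation only (paths from stock glusterd; handle with care and let
gluster-users confirm before touching anything):

cat /var/lib/glusterd/glusterd.info   # this node's own UUID
ls /var/lib/glusterd/peers/           # one state file per known peer UUID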
On 05/21/2014 07:08 PM, Gabi C wrote:
Hello!
I haven't changed the IP, nor reinstalled nodes. All nodes are updated
via yum. All I can think of is that after having some
On 05/21/2014 09:24 AM, Jiri Moskovcak wrote:
On 05/21/2014 02:49 PM, Bob Doolittle wrote:
On 05/21/2014 03:09 AM, Sven Kieske wrote:
I'd like to add that these rules are for NFSv3,
asking Bob if he is maybe using NFSv4?
At the moment I don't need either one. I need to solve my major
Minutes: http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.html
Minutes (text): http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.txt
Log:
http://ovirt.org/meetings/ovirt/2014/ovirt.2014-05-21-14.02.log.html
#ovirt: oVirt Weekly
Hi,
- Original Message -
From: Ted Miller tmiller at hcjb.org
To: users users at ovirt.org
Sent: Tuesday, May 20, 2014 11:31:42 PM
Subject: [ovirt-users] sanlock + gluster recovery -- RFE
As you are aware, there is an ongoing split-brain problem with running
sanlock on
Hi,
I was just wondering if iSCSI multipathing is supported yet on Direct LUN.
I have deployed 3.4.0.1 but I can only see the option for iSCSI
multipathing on storage domains.
We would be glad if it could be, as it would save us having to inject new
code into our vdsm nodes with each new version.
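In the meantime, a hedged way to see whether a host already has multiple
paths to a LUN, using the standard device-mapper-multipath tooling (this is
host-side only, not the oVirt feature I'm asking about):

multipath -ll   # shows each LUN's WWID and its active/enabled paths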
- Original Message -
From: Giuseppe Ragusa giuseppe.rag...@hotmail.com
To: fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, May 21, 2014 5:15:30 PM
Subject: sanlock + gluster recovery -- RFE
Hi,
- Original Message -
From: Ted Miller tmiller at hcjb.org
To:
Hello,
I used to run oVirt 3.2.2 installed from the dreyou repo and it worked
like a charm until now. I succeeded in upgrading to the 3.3.5 official
repository, but I didn't pay attention to the host vdsm upgrade and I
installed 4.14. So the 3.3.5 web engine complained that it wasn't the
On 5/21/2014 11:15 AM, Giuseppe Ragusa wrote:
Hi,
- Original Message -
From: Ted Miller tmiller at hcjb.org
To: users users at ovirt.org
Sent: Tuesday, May 20, 2014 11:31:42 PM
Subject: [ovirt-users] sanlock + gluster recovery -- RFE
As you are aware, there is an ongoing
Regards,
My name is Carlos and I'm an IT admin. I recently noticed that my oVirt Web
Admin portal shows 0% memory usage for all my virtual machines, a situation
that does not seem right to me.
I have tried to search for information about why this may be happening,
but I have not found
On Wed, May 21, 2014 at 4:17 PM, Carlos Castillo
carlos.casti...@globalr.net wrote:
Regards,
My name is Carlos and I'm an IT admin. I recently noticed that my oVirt
Web Admin portal shows 0% memory usage for all my virtual machines, a
situation that does not seem right to me.
I have tried
On 21 May 2014 22:18, Carlos Castillo carlos.casti...@globalr.net wrote:
Regards,
My name is Carlos and I'm an IT admin. I recently noticed that my oVirt Web
Admin portal shows 0% memory usage for all my virtual machines, a situation
that does not seem right to me.
I have tried to
I just did a rolling upgrade of my gluster storage cluster to the latest
3.5 bits. This all seems to have gone smoothly and all the volumes are
online. All volumes are replicated 1x2.
The oVirt console now insists that two of my volumes, including the
vm-store volume with my VMs happily
Thank you both for the info; tomorrow I will be doing tests on my
computers.
Regards
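For reference, the portal's memory statistics come from the guest agent
inside each VM, so a likely first check is (Fedora/EL guest package and
service names; an assumption, not quoted from the replies):

yum install ovirt-guest-agent
service ovirt-guest-agent start
chkconfig ovirt-guest-agent on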
2014-05-21 16:02 GMT-04:30 Karli Sjöberg karli.sjob...@slu.se:
On 21 May 2014 22:18, Carlos Castillo carlos.casti...@globalr.net wrote:
Regards,
My name is Carlos and I'm an IT admin. I recently
Hello.
There are a few bugs that are related to live snapshots/storage migrations not
working.
https://bugzilla.redhat.com/show_bug.cgi?id=1009100 is one of them and is
targeted for 3.5.0.
According to the bug there is some engine work still required.
I understand that with EL6 based hosts live
engine.log and vdsm.log?
This can mostly happen due to the following reasons:
- gluster volume status vm-store is not consistently returning the
right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
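A hedged way to compare the two views (standard engine log location; the
volume name comes from the thread):

gluster volume status vm-store                                 # what gluster reports
grep -i vm-store /var/log/ovirt-engine/engine.log | tail -n 50 # what the engine saw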
On 05/22/2014 02:24 AM, Alastair
Hi guys, sorry to repost, but I'm getting a bit desperate. Is anyone able to
assist?
Thanks.
Regards.
Neil Wilson
On 21 May 2014 12:06 PM, Neil nwilson...@gmail.com wrote:
Hi guys,
Just a little more info on the problem. I've upgraded another oVirt
system before from Dreyou and it worked
Hi list,
I just tried this again with a preallocated disk, otherwise the exact same
procedure as described in my original post. No problems at all so far.
With regards,
--
Morten A. Middelthon
Email: mor...@flipp.net
Phone: +47 907 83 708