On 04/02/2016 22:35, Colin Coe wrote:
Is the oVirt agent up to date?
yum -y upgrade
... [blah blah blah]
... reboot
and then :
# cat /etc/centos-release
CentOS Linux release 7.2.1511 (Core)
# rpm -qa|grep -i agent
ovirt-guest-agent-common-1.0.11-1.el7.noarch
qemu-guest-agent-2.3.0-4.el7.x8
Hi Colin,
What you are trying to do is a live merge, i.e. deleting a snapshot of a
running VM.
This feature might not be supported on some of your hypervisors because of
the vdsm or OS version.
Can you please share the OS version and vdsm version of the hypervisor that is
hosting the running VM which you cann
After changing the owner of engine to “engine”, I was able to upgrade normally,
so that looks like it was my problem.
Thanks for the pointers!
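For anyone else hitting this, a minimal sketch of the ownership fix, assuming the default 'engine' database and role names created by engine-setup (verify your actual names in /etc/ovirt-engine/engine.conf.d/ first; run as root on the engine host):

```shell
# Sketch only: change the owner of the engine database to the 'engine' role.
# Database and role names are the engine-setup defaults; adjust if yours differ.
su - postgres -c "psql --command=\"ALTER DATABASE engine OWNER TO engine;\""

# Verify the new owner afterwards:
su - postgres -c "psql -l"
```

ALTER DATABASE ... OWNER TO is plain PostgreSQL; it changes only the database owner, not ownership of objects inside it, which is why the GRANT workaround discussed elsewhere in this thread can also be needed.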
> On Feb 4, 2016, at 3:40 PM, Darrell Budic wrote:
>
> I suspect that’s my problem, my database isn’t owned by engine:
>
> engine=# \l
>
It's not impossible that this is a bug. I did a lot of work on those
columns in 3.6. The vm status column was heavily reworked.
But, I can't look into it until at least Monday :)
Let me know if anyone finds a pattern!
Greg
On Thu, Feb 4, 2016 at 4:35 PM, Colin Coe wrote:
> Is the oVirt agent
I've created a GSS case for this: 01578873. Attached to the case is a
rhevm-log-collector for the affected prod RHEV and a rhevm-log-collector
for the dev RHEV which is not affected.
Thanks
On Fri, Feb 5, 2016 at 7:26 AM, Colin Coe wrote:
> I've just checked our dev and test RHEV instances (bo
Hi and thanks for the response
Is there any way that this could be done without putting the entire
datacenter (and I assume shutting down all the VMs) into maintenance mode?
I was thinking along the lines of:
- Put all hosts except for the SPM into maintenance mode
- Determine which LUNs matc
I've just checked our dev and test RHEV instances (both also on v3.5.7),
and found that the overnight snapshot script is working correctly and is
deleting old snapshots. A quick check of other VMs shows their snapshots
do not have the delete button greyed out.
I'm thinking there's something screwy
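For reference, the usual shape of such a nightly cleanup is: list each VM's snapshots, keep the ones newer than a cutoff, and delete the rest. The date arithmetic can be sketched in shell (the 7-day retention below is an assumption, not necessarily what the script above uses):

```shell
# Sketch only: compute an epoch cutoff for a hypothetical 7-day retention policy.
# Snapshots with a creation time older than $cutoff would be deletion candidates.
RETENTION_DAYS=7
now=$(date +%s)
cutoff=$(( now - RETENTION_DAYS * 86400 ))

# A snapshot taken 10 days ago is older than the cutoff, so it qualifies:
snap_time=$(( now - 10 * 86400 ))
if [ "$snap_time" -lt "$cutoff" ]; then
    echo "candidate for deletion"
fi
```

The actual snapshot listing and deletion would go through the oVirt SDK or API, as the original poster's script does.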
Hi and thanks for the response
1) RHEV v3.5.7
2) The VM was powered on and running fine when the snapshot was taken (this
is done through cron)
3) The snapshot was created via a Python oVirt SDK script
4) The status is showing "OK" for both the VM's disks and the snapshot's
disks.
As a test, I too
In case anyone else runs into this, one of our admins had changed the cluster
policy to optimize for speed rather than utilization. Reconfiguring this
option resolved the issue and the migrations are no longer overloading a single
host.
-Patrick
From: users-boun...@ovirt.org on beh
I suspect that’s my problem, my database isn’t owned by engine:
engine=# \l
List of databases
   Name   | Owner | Encoding | Collation | Ctype | Access privileges
----------+-------+----------+-----------+-------+-------------------
Is the oVirt agent up to date?
---
Sent from my Nexus 5
On Feb 5, 2016 5:33 AM, "Charles Kozler" wrote:
> My VMs are all Linux
>
> On Thu, Feb 4, 2016 at 4:32 PM, Colin Coe wrote:
>
>> I run RHEV not oVirt so I don't see this but I suspect it's a feature
>> request that I put in to be notifie
My VMs are all Linux
On Thu, Feb 4, 2016 at 4:32 PM, Colin Coe wrote:
> I run RHEV not oVirt so I don't see this but I suspect it's a feature
> request that I put in to be notified when a Windows VM is running an old
> version of the RHEV tools/agent.
>
> CC
>
> ---
>
> Sent from my Nexus 5
> O
I run RHEV not oVirt so I don't see this but I suspect it's a feature
request that I put in to be notified when a Windows VM is running an old
version of the RHEV tools/agent.
CC
---
Sent from my Nexus 5
On Feb 5, 2016 04:30, "Nicolas Ecarnot" wrote:
> On 04/02/2016 18:09, Charles Kozler wrote:
I have only one switch, so both interfaces are connected to the same switch.
The configuration on the switch is correct. I opened a ticket with the switch
tech support and the configuration was validated.
This configuration worked without problems 24/7 for one year! All the problems
started after a ker
On Thu, Feb 04, 2016 at 06:26:14PM +0100, Stefano Danzi wrote:
>
>
> On 04/02/2016 16:55, Dan Kenigsberg wrote:
> >On Wed, Jan 06, 2016 at 08:45:16AM +0200, Dan Kenigsberg wrote:
> >>On Mon, Jan 04, 2016 at 01:54:37PM +0200, Dan Kenigsberg wrote:
> >>>On Mon, Jan 04, 2016 at 12:31:38PM +0100
Is there an easy / intuitive way to find out the underlying image associated
with a VM? For instance, looking at a storage domain from the server, it is
not easy to figure out which VM it actually belongs to.
[root@snode01 images]$ find -type f | grep -iv meta | grep -iv lease | xargs du -sch
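One low-tech way to map an image directory back to a disk is the volume's .meta file sitting next to each volume, which stores key=value metadata including a description. A sketch (the sample metadata below is fabricated; real files live under the storage domain's images/<image-group-id>/ directories):

```shell
# Sketch: pull the DESCRIPTION field out of a volume .meta file.
# This sample file is made up for illustration; real .meta files sit
# beside each volume inside the image-group directory.
cat > /tmp/sample.meta <<'EOF'
DOMAIN=11111111-2222-3333-4444-555555555555
VOLTYPE=LEAF
DESCRIPTION=webserver01_Disk1
EOF

grep '^DESCRIPTION=' /tmp/sample.meta | cut -d= -f2-
# prints: webserver01_Disk1
```

The description usually carries the disk alias set in the engine, which is typically enough to identify the owning VM; the disk ID (the directory name) can also be matched against the Disks tab in the web UI.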
In case anyone else out there is reading this looking for the answer: after
rooting around on IRC, I've found out that the feature I described is
available in the upcoming 3.6 release.
On Wed, Feb 3, 2016 at 3:37 PM, Tim Bielawa wrote:
> I've been able to run the simple commands to enumerate quotas i
On 04/02/2016 18:09, Charles Kozler wrote:
Matt -
Same issue here!
+1
--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
I'm running oVirt 3.6.2 on CentOS 7.2, all up to date, with a hosted
engine. The storage for the engine is on a dedicated iSCSI LUN. When I
created the LUN, I made it 40G so I'd have a little more disk space for
the engine (logs, ISOs, etc.), but then forgot to make the VM image
larger than the d
On 04/02/2016 16:55, Dan Kenigsberg wrote:
On Wed, Jan 06, 2016 at 08:45:16AM +0200, Dan Kenigsberg wrote:
On Mon, Jan 04, 2016 at 01:54:37PM +0200, Dan Kenigsberg wrote:
On Mon, Jan 04, 2016 at 12:31:38PM +0100, Stefano Danzi wrote:
I did some tests:
kernel-3.10.0-327.3.1.el7.x86_64 -
Matt -
Same issue here!
On Thu, Feb 4, 2016 at 12:08 PM, Matthew Trent <
matthew.tr...@lewiscountywa.gov> wrote:
> When I upgraded to 3.6.1 (I think), I had exclamation points on several
> VMs, and hovering over (or looking at the bottom of the VM's General
> tab) gave a message about time zone
When I upgraded to 3.6.1 (I think), I had exclamation points on several VMs,
and hovering over them (or looking at the bottom of the VM's General tab) gave
a message about a time zone mismatch. After 3.6.2, the message about the time
zone mismatch is gone, but the exclamation points remain.
--
Matthew
- Original Message -
> From: "Marcelo Leandro"
> To: "Martin Perina"
> Cc: "Eli Mesika" , "Darrell Budic"
> , "users"
> Sent: Thursday, February 4, 2016 5:55:09 PM
> Subject: Re: [ovirt-users] Problem update ovirt 3.5.6.2-1.el7 to 6.2.6-1.el7
>
> This worked for me, thanks.
> command
This worked for me, thanks.
command:
su - postgres -c "psql --command=\"GRANT ALL ON DATABASE @ENGINE_DB_DATABASE@ TO @ENGINE_DB_USER@;\""
output:
GRANT
after it:
command:
LC_ALL="C" PGPASSWORD="@ENGINE_DB_PASSWORD@" psql -w
--pset=tuples_only=on --host="@ENGINE_DB_HOST@"
--port="@ENGINE_DB_PORT@
Is there any way to migrate VMs more evenly across the cluster when a host is
being placed into maintenance? Currently it attempts to auto-migrate all the
VMs to another single host and then balance out. When the destination host is
more than 50% memory utilized, this has caused over-subscription
Sure would be a nice feature, though! It would simplify things for those of us
who build out of re-purposed Windows servers (still a lot of life left in them
for Linux applications!) and end up with a mix of CPUs. For most of my VMs I
don't need the latest and greatest CPU features, but being ab
- Original Message -
> From: "Martin Perina"
> To: "Marcelo Leandro"
> Cc: "Darrell Budic" , "Eli Mesika"
> , "users"
> Sent: Thursday, February 4, 2016 6:12:34 PM
> Subject: Re: [ovirt-users] Problem update ovirt 3.5.6.2-1.el7 to 6.2.6-1.el7
>
> Hi,
>
> so it seems, that for some s
- Original Message -
> From: "Eli Mesika"
> To: "Martin Perina"
> Cc: "Marcelo Leandro" , "Darrell Budic"
> , "users"
> Sent: Thursday, February 4, 2016 5:17:24 PM
> Subject: Re: [ovirt-users] Problem update ovirt 3.5.6.2-1.el7 to 6.2.6-1.el7
>
>
>
> - Original Message -
>
Hi,
so it seems that, for some strange reason, user 'engine' cannot create a
schema in the 'engine' database although it should be the owner of this db.
I double-checked that on all our testing databases this works fine, and
also if you created the engine db according to the doc (either automatically
by engine-se
On Wed, Jan 06, 2016 at 08:45:16AM +0200, Dan Kenigsberg wrote:
> On Mon, Jan 04, 2016 at 01:54:37PM +0200, Dan Kenigsberg wrote:
> > On Mon, Jan 04, 2016 at 12:31:38PM +0100, Stefano Danzi wrote:
> > > I did some tests:
> > >
> > > kernel-3.10.0-327.3.1.el7.x86_64 -> bond mode 4 doesn't work (if
You can't see my mouse (because scrot removes it when you take a picture),
but it is hovering over the ! and it says up (almost like it thinks I'm over
the green arrow, but I'm not): http://i.imgur.com/5u2Yvay.png
To that end, I cannot see what the issue is.
On Thu, Feb 4, 2016 at 10:43 AM, Joe DiTommas
I set up a new oVirt 3.6.2 cluster on CentOS 7.2 (everything up to date
as of yesterday). I created a basic CentOS 7.2 VM with my local
customizations, created a template from it, and then created a VM from
that template.
That new VM has an exclamation mark next to it in the web GUI (between
the
I have this too. Thank you, I was going to email about this as well
http://i.imgur.com/cZ6P5dp.png
On Thu, Feb 4, 2016 at 10:38 AM, Chris Adams wrote:
> I set up a new oVirt 3.6.2 cluster on CentOS 7.2 (everything up to date
> as of yesterday). I created a basic CentOS 7.2 VM with my local
> cu
Eldad is working on making it work with engine 3.6. He should be able to
give you the information you need.
On Wed, Feb 3, 2016 at 12:33 PM, wrote:
> Anything?
>
> On 2016-02-02 10:18, Nicolás wrote:
>>
>> Hi,
>>
>> I'm trying to set up VDSM-Fake
>> (git://gerrit.ovirt.org/ovirt-vdsmfake.git)
Hi Colin,
Can you share more info?
1) What version of oVirt do you have?
2) What was the VM state when you tried to remove the snapshot?
3) How was the snapshot created in the first place (was the VM state UP or
DOWN)?
4) What is the status of the snapshot's disks (click on the snapshot tab and on
the rig
Hi Simone,
On Thu, Feb 4, 2016 at 09:26, Simone Tiraboschi wrote:
> On Thu, Feb 4, 2016 at 9:11 AM, Paul Groeneweg | Pazion
> wrote:
>
>> What can I expect? Will the web interface of 3.5 hosts remain broken?
>>
>
> Just to clarify,
> Paul, feel free to correct me, the engine is already at 3.6, the h
Hi,
oVirt allows such operations using this feature [1].
Basically, while the storage domain is deactivated (maintenance status),
you'll have to replicate all the data to the fourth LUN (from the storage
server side), then replace the storage connections of this storage domain via
the RHEVM REST API (expl
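As a rough sketch of the connection swap step, updating an existing storage connection via the REST API might look like the following. Every value here (engine host, credentials, connection ID, portal address, target IQN) is a placeholder, and the exact payload depends on your storage type and version; see the feature page referenced above for the authoritative procedure.

```shell
# Sketch only: point an existing iSCSI storage connection at the new LUN's portal.
# All values below are hypothetical placeholders.
curl -k -u admin@internal:password \
     -X PUT \
     -H "Content-Type: application/xml" \
     -d '<storage_connection>
           <address>new-san.example.com</address>
           <target>iqn.2016-02.com.example:newlun</target>
         </storage_connection>' \
     https://rhevm.example.com/api/storageconnections/00000000-0000-0000-0000-000000000000
```

The connection ID can be discovered beforehand with a GET on /api/storageconnections while the domain is in maintenance.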
Hi,
I just verified that upgrades on both CentOS 6.7 and CentOS 7.2 work fine,
so there's something bad with psql on your machines :-(
Could you please execute the following steps and send me the result?
1. Please take a look at your engine db configuration in
/etc/ovirt-engine/engine.conf.d/10-setup
On Thu, Feb 4, 2016 at 9:11 AM, Paul Groeneweg | Pazion
wrote:
> What can I expect? Will the web interface of 3.5 hosts remain broken?
>
Just to clarify,
Paul, feel free to correct me, the engine is already at 3.6, the hosts are
still at 3.5 so the cluster compatibility level is still at 3.5 and so n
What can I expect? Will the web interface of 3.5 hosts remain broken?
Or will there be an update which fixes this (with autoimport?)?
Would you strongly advise upgrading the hosts to RHEL/CentOS 7?
On Wed, Feb 3, 2016 at 23:39, Michal Skrivanek wrote:
>
>
> On 03 Feb 2016, at 12:02, Paul Groeneweg | P