3.6. Any idea?
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
the ixgbe driver,
>> where there is a driver issue causing the network adapter to reset
>> continuously when there is network traffic.
>>
>>
>> Regards,
>> Siggi
>>
>> On Thu, March 17, 2016 12:52, Nir Soffer wrote:
>>
te+0x48/0xc0
> [1123306.014565] [] cpuidle_idle_call+0xc5/0x200
> [1123306.014569] [] arch_cpu_idle+0xe/0x30
> [1123306.014574] [] cpu_startup_entry+0xf5/0x290
> [1123306.014580] [] start_secondary+0x1ba/0x230
> [1123306.014582] ---[ end trace 4d5a1bc838e1fcc0 ]---
>
> If so,
Hi all,
I can't seem to find the oVirt 3.6 Node ISO. Is there a specific reason for
this? I have the issue with 3.5 (ovirt-node-iso-3.5-0.201502231653.el7.iso)
that I can't do a live merge on machines installed with oVirt Node.
My apologies, totally overlooked that!
On Thu, Mar 3, 2016 at 12:35 PM, Alexandr Krivulya <shur...@shurik.kiev.ua>
wrote:
> Hi, Windows Server 2008 R2 Standard supports up to 4 sockets.
>
> On 03.03.16 13:30, Johan Kooijman wrote:
>
> Hi all,
>
> I created a VM on our
Hi all,
I created a VM on our oVirt 3.5 cluster having 8 CPUs and 16 GB of RAM (
http://imgur.com/aaXfbfq). The guest, however, only sees 4 CPUs:
http://imgur.com/dEYVe0S.
Any clues what may have caused this?
Ack, thx.
On Tue, Feb 16, 2016 at 11:46 AM, Yedidyah Bar David <d...@redhat.com>
wrote:
> On Tue, Feb 16, 2016 at 11:58 AM, Johan Kooijman <m...@johankooijman.com>
> wrote:
> > Didi,
> >
> > Ok, the piece of information on engine still supported on 3.6 was
>
com> wrote:
> On Tue, Feb 16, 2016 at 9:30 AM, Johan Kooijman <m...@johankooijman.com>
> wrote:
> > Yes. I pasted the information on AIO, that was wrong on my end. I have an
> > engine running on dedicated hardware and about 20 nodes in this cluster.
> I
> > d
would be for engine.
On Mon, Feb 15, 2016 at 9:21 PM, Alexander Wels <aw...@redhat.com> wrote:
> On Monday, February 15, 2016 08:21:40 PM Johan Kooijman wrote:
> > Hi Alexander,
> >
> > Thanks for the input! My 3.5 is running on C6 however:
> >
> > Upgrad
15, 2016 02:40:47 PM Johan Kooijman wrote:
> > Hi,
> >
> > Can anybody recommend me best practice upgrade path for an upgrade from
> > oVirt 3.5 on C6 to 3.6 on C7.2?
>
> The answer sort of depends on what you want. Do you want no downtime on
> your
> VMs or
Hi,
Can anybody recommend me best practice upgrade path for an upgrade from
oVirt 3.5 on C6 to 3.6 on C7.2?
the same
experience so far? I'm using Intel X540-AT2 10 Gbit cards in all my nodes,
set up with LACP bonding.
.x86_64
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
Any clues?
storage domain? Is that even possible
when VMs have disks associated with them that live on this storage domain?
On Wed, Nov 25, 2015 at 8:54 AM, Liron Aravot <lara...@redhat.com> wrote:
>
>
> - Original Message -
> > From: "Johan Kooijman" <m...@johankooijman.com
n.
On Mon, Nov 23, 2015 at 12:46 PM, Johan Kooijman <m...@johankooijman.com>
wrote:
> Ok. Any way to do it without? Because with snapshot deletion I end up with
> the same issue - I can't remove images from my storage.
>
> On Mon, Nov 23, 2015 at 12:18 PM, Liron Aravo
One weird thing though: when I try to remove the VM itself, it won't let me
uncheck the "Remove disks" checkbox.
On Sun, Nov 22, 2015 at 9:00 PM, Nir Soffer <nsof...@redhat.com> wrote:
> On Sun, Nov 22, 2015 at 6:14 PM, Johan Kooijman <m...@johankooijman.com>
> wrote
Hi all,
I'm using an NFS storage domain, backed by a ZFS cluster. I need to deploy
a new storage domain, what would the recommended record size be in this?
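The replies aren't preserved here, but as a hedged sketch of how such a dataset might be set up: a common rule of thumb is to align the ZFS recordsize with the dominant I/O size of the VM images rather than keeping the 128K default. The pool name, dataset name, and the 64K value below are illustrative assumptions, not a recommendation from this thread.

```shell
# Illustrative only: "tank/ovirt-sd" and 64K are assumptions.
zfs create -o recordsize=64K tank/ovirt-sd
zfs get recordsize tank/ovirt-sd   # confirm the property took effect
# Note: recordsize only applies to files written after it is set, so
# set it before populating the storage domain.
```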
Ok. Any way to do it without? Because with snapshot deletion I end up with
the same issue - I can't remove images from my storage.
On Mon, Nov 23, 2015 at 12:18 PM, Liron Aravot <lara...@redhat.com> wrote:
>
>
> - Original Message -
> > From: "Johan Kooijma
22, 2015 at 6:14 PM, Johan Kooijman <m...@johankooijman.com>
> wrote:
> > Hi all,
> >
> > I have about 100 old VMs in my cluster. They're powered down, ready for
> > deletion. What I want to do is delete the VMs including disks without
> > actually deleti
(which
then should not hold any data, as far as oVirt is concerned).
Reason for this: it's a ZFS pool with dedup enabled; deleting the images
one by one will kill the array with 100% iowait for some time.
Ah, more interesting: the disk lives half on storage domain #1, half on
storage domain #2. I don't really need these disks, but I can't do anything
with them at the moment.
What to do?
On Thu, Nov 5, 2015 at 4:41 PM, Johan Kooijman <m...@johankooijman.com>
wrote:
> Hi all,
>
&g
Hi all,
I was moving a disk from one storage domain to the other when the engine
was restarted. The VM the disk belongs to is fine, but the disk stays in a
locked state.
How can I resolve this?
://own.cloud.nl/index.php/s/Y49KTsv2vkt8L4a
Any clue?
Ok, thanks. Turned out that restarting vdsm did the trick.
On Fri, Oct 9, 2015 at 1:23 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
> On 9 Oct 2015, at 09:48, Johan Kooijman wrote:
>
> > Hi all,
> >
> > Since a couple of days I have one node wit
That works, thx.
On Thu, Oct 1, 2015 at 8:10 AM, Dan Kenigsberg <dan...@redhat.com> wrote:
> On Wed, Sep 30, 2015 at 10:57:55AM +0200, Johan Kooijman wrote:
> > Hi all,
> >
> > I'm adding my first CentOS 7 host to my cluster today, but running into
> an
>
It's a Zetavault server, ZFS on Linux.
On Thu, Jul 23, 2015 at 11:00 PM, Karli Sjöberg karli.sjob...@slu.se
wrote:
Forgot reply all...
On 23 Jul 2015 10:13 PM, Johan Kooijman m...@johankooijman.com wrote:
iotop on the NFS servers tells me it's NFS that's taking the load :)
What server
My bad - should've mentioned we're running on NFS, iotop doesn't show that.
On Thu, Jul 23, 2015 at 9:14 PM, Chris Adams c...@cmadams.net wrote:
Once upon a time, Johan Kooijman m...@johankooijman.com said:
We're having some storage issues at the moment, some piece of our ovirt
setup
iotop on the NFS servers tells me it's NFS that's taking the load :)
On Thu, Jul 23, 2015 at 10:11 PM, Karli Sjöberg karli.sjob...@slu.se
wrote:
On 23 Jul 2015 9:53 PM, Johan Kooijman m...@johankooijman.com wrote:
My bad - should've mentioned we're running on NFS, iotop doesn't show
Hi all,
What would be the best upgrade path for upgrading a 3.5.2 cluster from C6
to C7?
Am I right in understanding that a cluster can have mixed hosts, but once a
VM is on a C7 host, it cannot be migrated back to C6?
, latest stable versions of all packages. Am
I missing something?
-setup after yum update ovirt-engine-setup?
Found the bug I'm hitting:
https://bugzilla.redhat.com/show_bug.cgi?id=1145636
Workaround is to put the host in maintenance mode and uncheck the JSON
checkbox under advanced settings for the node.
On Wed, Jan 7, 2015 at 12:26 PM, Johan Kooijman m...@johankooijman.com
wrote:
All,
I
All,
I was able to reproduce this every time I did an install of ovirt 3.5 on
CentOS 6. The issue does not occur when I move to the snapshot version of
3.5
On Tue, Jan 6, 2015 at 2:20 PM, Johan Kooijman m...@johankooijman.com
wrote:
Hi all,
Been playing with an ovirt test setup for the last
trying to migrate a VM, the node throws this error:
https://plakbord.cloud.nl/p/4Syi9A7tEd8L3A2pQg6boVB6
Any clue on what's happening?
,nfsvers=3,addr=10.0.24.30)
Am I missing something here?
Jan 2 21:24 __DIRECT_IO_TEST__
It just fails with AcquireHostIdFailure.
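The resolution isn't in this dump, but a frequently cited cause of AcquireHostIdFailure on NFS domains is sanlock being unable to write the domain's ids file, usually an ownership or root-squash problem on the export. A hedged checklist; the export path below is an assumption:

```shell
# oVirt expects the domain directory to be owned by vdsm:kvm
# (uid 36, gid 36) and writable by them.
chown -R 36:36 /export/ovirt-sd
chmod 0755 /export/ovirt-sd
# On the NFS server, export without root squashing, e.g. in /etc/exports:
#   /export/ovirt-sd  *(rw,sync,no_root_squash,anonuid=36,anongid=36)
# Then check /var/log/sanlock.log on the host for the underlying error.
```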
On Fri, Jan 2, 2015 at 8:49 PM, Johan Kooijman m...@johankooijman.com
wrote:
Forgot to mention: engine nodes are all CentOS 6.6
On Fri, Jan 2, 2015 at 8:44 PM, Johan Kooijman m...@johankooijman.com
wrote:
Hi,
I have
, but I'd rather not
move away from ovirt because there's no support.
if
the local disk is actually an RBD block device.
Hope somebody can help me.
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
T +31(0) 6 43 44 45 27
F +31(0) 162 82 00 01
E m...@johankooijman.com
Hey all,
Am I missing something or is it not possible to snapshot a virtual disk
onto a different storage domain?
/24/2014 11:48 AM, Nir Soffer wrote:
- Original Message -
From: Johan Kooijman m...@johankooijman.com
To: Nir Soffer nsof...@redhat.com
Cc: users users@ovirt.org
Sent: Monday, February 24, 2014 2:45:59 AM
Subject: Re: [Users] Nodes lose storage at random
Interestingly enough - same
00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@hv3 ~]# rpm -qa | grep vdsm
vdsm-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch
Thanks for looking into it. I've been running the oVirt ISO until now,
will switch to stock C6.5 to see if it makes a difference.
On Sat, Feb 22, 2014 at 8:57 PM, Nir Soffer nsof...@redhat.com wrote:
- Original Message -
From: Johan Kooijman m...@johankooijman.com
To: Nir Soffer
Been reinstalling to stock CentOS 6.5 last night, all successful. Until
roughly midnight GMT, 2 out of 4 hosts were showing the same errors.
Any more suggestions?
On Sat, Feb 22, 2014 at 8:57 PM, Nir Soffer nsof...@redhat.com wrote:
- Original Message -
From: Johan Kooijman m
is 10)
Thanks!
--
*From: *Johan Kooijman m...@johankooijman.com
*To: *Meital Bourvine mbour...@redhat.com
*Cc: *users users@ovirt.org
*Sent: *Tuesday, February 18, 2014 2:55:11 PM
*Subject: *Re: [Users] Nodes lose storage at random
To follow up
Nir,
Messages: https://t-x.dignus.nl/messages.txt
Sanlock: https://t-x.dignus.nl/sanlock.log.txt
Any input is more than welcome!
On Wed, Feb 19, 2014 at 10:38 AM, Nir Soffer nsof...@redhat.com wrote:
- Original Message -
From: Johan Kooijman m...@johankooijman.com
To: users users
--
*From: *Johan Kooijman m...@johankooijman.com
*To: *users users@ovirt.org
*Sent: *Tuesday, February 18, 2014 1:32:56 PM
*Subject: *[Users] Nodes lose storage at random
Hi All,
We're seeing some weird issues in our ovirt setup. We have 4 nodes
connected and an NFS (v3) filestore (FreeBSD
/show_bug.cgi?id=853011
--
*From: *Johan Kooijman m...@johankooijman.com
*To: *users users@ovirt.org
*Sent: *Tuesday, February 18, 2014 1:32:56 PM
*Subject: *[Users] Nodes lose storage at random
Hi All,
We're seeing some weird issues in our ovirt setup. We have 4
not defined
there, the default is 10)
Thanks!
--
*From: *Johan Kooijman m...@johankooijman.com
*To: *Meital Bourvine mbour...@redhat.com
*Cc: *users users@ovirt.org
*Sent: *Tuesday, February 18, 2014 2:55:11 PM
*Subject: *Re: [Users] Nodes lose storage at random
-d7f780772b04:Data in
problem. vds: hv5
On Tue, Feb 18, 2014 at 3:04 PM, Johan Kooijman m...@johankooijman.com wrote:
Ok, will do. process_pool_max_slots_per_domain is not defined; the node is
using the default values.
On Tue, Feb 18, 2014 at 2:56 PM, Meital Bourvine mbour...@redhat.com wrote:
Hi Johan
This can be resolved by executing:
[root@hv1 ~]# setsebool -P virt_use_sanlock=on
[root@hv1 ~]# setsebool -P virt_use_nfs=on
But it's not persistent after a reboot. How can I make that a persistent
setting?
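On a stock CentOS/RHEL host, `setsebool -P` writes the boolean into the policy store and does survive reboots, so the behaviour described points at an oVirt Node image, where /etc is rebuilt at boot. A hedged workaround sketch; the `persist` command is oVirt Node tooling and details vary by node version:

```shell
# Plain CentOS/RHEL: -P alone persists the booleans across reboots.
setsebool -P virt_use_sanlock=1 virt_use_nfs=1
# oVirt Node (stateless /etc): re-apply at boot and persist the file
# with the node's "persist" tool (assumption; adjust to your image).
echo 'setsebool virt_use_sanlock=1 virt_use_nfs=1' >> /etc/rc.local
persist /etc/rc.local
```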