On Sep 10, 2015, at 20:35, Michael Kleinpaste wrote:
> Hi everybody.
>
> So I ran into that high mem usage thing. The problem I have with patching is
> that this is a live system so I can't do it mid day. Can anybody tell me if
> it is possible to just restart the vdsm service or does the host have to be
> in "maintenance mode" before restarting it?
Thanks all. Restarting it fixed the issue temporarily by freeing up memory,
but the leak continued. Updating the vdsm package fixed it for good.
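To see whether the leak comes back after a restart, the resident memory of the vdsm process can be sampled over time. A minimal sketch, assuming the process is named `vdsm` (as on stock oVirt hosts) and that a procps-style `ps` is available:

```shell
# Report the total resident set size (RSS, in KiB) of all vdsm processes.
# The process name "vdsm" is an assumption; adjust to match your host.
vdsm_rss_kib() {
  # ps -o rss= prints RSS in KiB, one line per matching process; sum them.
  # If no process matches, awk sees no input and prints 0.
  ps -C vdsm -o rss= | awk '{sum += $1} END {print sum + 0}'
}

vdsm_rss_kib
```

Logging this from cron every few minutes gives a simple growth curve; a steadily climbing number after the vdsm update would suggest the leak is not actually fixed.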
On Tue, Sep 15, 2015 at 12:57 AM Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
> On Sep 10, 2015, at 20:35, Michael Kleinpaste wrote:
Hello Michael,
I ran into the issue myself and can confirm that restarting vdsm with NFS
mounts mitigates the issue. I even had a cron job for that.
On 11.09.2015 04:30, Darrell Budic wrote:
> If you’re using nfs mounts (even if they are gluster based), it’s safe to
> restart vdsmd, you’ll see it change status in ovirt, but your VMs will
> continue running.
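The cron job mentioned above could look something like the following. The schedule, the `/etc/cron.d` path, and the use of `systemctl` (EL7-era hosts) are assumptions, not something stated in the thread:

```shell
# /etc/cron.d/restart-vdsmd -- hypothetical weekly restart to reclaim leaked memory.
# Only sensible on hosts using NFS mounts, where a vdsmd restart is reported
# safe (VMs keep running; the host briefly changes status in ovirt).
# m  h  dom mon dow  user  command
  0  4  *   *   0    root  /usr/bin/systemctl restart vdsmd
```

A one-off restart from the shell works the same way; the cron entry just automates it until the underlying package is updated.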
Hi everybody.
So I ran into that high mem usage thing. The problem I have with patching
is that this is a live system so I can't do it mid day. Can anybody tell
me if it is possible to just restart the vdsm service or does the host have
to be in "maintenance mode" before restarting it?
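For the restart itself, here is a hedged sketch of a helper that restarts a service and confirms it came back, without touching maintenance mode. The service name `vdsmd` and the presence of `systemctl` are assumptions based on EL7-era oVirt hosts:

```shell
# Restart a systemd service and verify it is active again.
# On an oVirt host you would run:  restart_and_check vdsmd
restart_and_check() {
  svc="$1"
  systemctl restart "$svc" || return 1   # fail fast if the restart fails
  systemctl is-active --quiet "$svc"     # non-zero if it did not come back
}
```

Per the thread, doing this on an NFS-backed host leaves the VMs running; the host only flaps status in ovirt for a moment.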
If you’re using nfs mounts (even if they are gluster based), it’s safe to
restart vdsmd, you’ll see it change status in ovirt, but your VMs will continue
running. If you’re mounting gluster based storage as glusterfs shares directly
(not over nfs), there’s another issue that will cause all your
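Whether the restart is safe therefore depends on how the storage is mounted. A small sketch of checking that before touching vdsmd; the filesystem-type names are the ones Linux reports for these mounts, and the "safe" verdict reflects this thread's advice, not a guarantee:

```shell
# Classify a mount's filesystem type: per this thread, restarting vdsmd is
# safe over NFS mounts (even gluster-backed ones exported via NFS), while
# direct glusterfs/fuse mounts hit a separate issue.
classify_fstype() {
  case "$1" in
    nfs|nfs4)       echo "safe: vdsmd restart leaves VMs running" ;;
    fuse.glusterfs) echo "caution: direct glusterfs mount, see thread" ;;
    *)              echo "unknown: check before restarting" ;;
  esac
}

# On a live host you could feed it the actual storage mounts, e.g.
# (the /rhev/data-center mount root is an assumption about stock vdsm):
#   findmnt -rn -o FSTYPE,TARGET | while read fstype target; do
#     echo "$target: $(classify_fstype "$fstype")"
#   done
```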