Hey all

Since I upgraded my CT from Debian Jessie to Debian Stretch (9.3), I cannot online-migrate it anymore. I figured out that the issue only appears while the CT is running mariadb-server (10.1). When I suspend the CT while mariadb is running, I get errors when I try to restore.
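For reference, the failing sequence reduces to vzctl's checkpoint commands; a sketch, with a hypothetical CTID 101 and an arbitrary dump path:

$ vzctl chkpnt 101 --dumpfile /tmp/Dump.101
$ vzctl restore 101 --dumpfile /tmp/Dump.101

The chkpnt step completes; the errors show up on restore. Stopping mariadb inside the CT before checkpointing makes both steps succeed.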
Dear All,

We're considering creating regular snapshots of our containers. The setup would include deleting the oldest snapshots. While playing around with creating and deleting snapshots, I noticed that the root.hdd file keeps growing. It seems there is no point at all in deleting old snapshots.
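For reference, the cycle I was testing looks roughly like this (CTID 101 is hypothetical; the UUID is whatever snapshot-list prints):

$ vzctl snapshot 101 --name nightly
$ vzctl snapshot-list 101
$ vzctl snapshot-delete 101 --id <uuid>
$ ls -lh /vz/private/101/root.hdd/root.hdd

Even after the delete, root.hdd stays at its grown size. If I understand ploop correctly, reclaiming the space would take a separate 'vzctl compact 101' pass, but I haven't measured how much that actually recovers.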
Hi all

I'm doing some testing with Debian Jessie (8.0) containers. Debian Jessie comes with systemd as its default init system. I wonder what the correct way is to tell vzctl when a container is done starting up. What was a simple line in /etc/inittab under sysv-init seems more complex with systemd.
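The closest thing I could come up with is a oneshot unit that touches the file vzctl waits for; this is only a sketch, assuming vzctl still watches for /.vzfifo the way it does with the inittab line:

# /etc/systemd/system/vzfifo.service (inside the CT)
[Unit]
Description=Tell vzctl --wait that boot has finished
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/bin/touch /.vzfifo

[Install]
WantedBy=multi-user.target

$ systemctl enable vzfifo.service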
Hi all

We're running a bunch of containers on hostnodes that still have plenty of free memory. We also monitor the user beancounters on the hostnodes, and it appears that the barrier for dcachesize is exceeded by many containers, which causes a warning in our monitoring system.
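If the barrier is simply too low for these workloads, raising it per CT would be something like this (the numbers are made up; the format is barrier:limit, in bytes):

$ vzctl set 101 --dcachesize 268435456:294967296 --save

Whether to raise the barrier or just teach the monitoring to ignore dcachesize is of course debatable.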
Hi

We need jumbo frames for containers since they need access to an internal network which has jumbo frames enabled and is used solely for NFS traffic. I figured it is not possible to have jumbo frames enabled inside a CT: even if the CT internally configures its interface to MTU 9000, the host-side veth device stays at the default MTU of 1500.
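For completeness, the knobs that seem to be involved, assuming CTID 101 (vethCTID.0 is vzctl's default host-side name; vzbr0 is an example bridge):

$ ip link set dev veth101.0 mtu 9000
$ ip link set dev vzbr0 mtu 9000
$ vzctl exec 101 ip link set dev eth0 mtu 9000

As far as I understand, a bridge is capped at the smallest MTU of its ports, so every attached interface has to be raised, not just the CT's.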
On Tue, 2014-10-28 at 11:14 -0700, Kir Kolyshkin wrote:
On 10/28/2014 09:55 AM, Roman Haefeli wrote:
Hi all
I tried to increase the ploop filesystem size of a container and it failed:
$ vzctl set db2new --diskspace 90G --save
Error in get_balloon (balloon.c:111): Can't ioctl mount point
On Tue, 2014-10-28 at 22:11 +0400, Pavel Odintsov wrote:
Hello!
Please try this:
touch /vz/root/991/.balloon-c3a5ae3d-ce7f-43c4-a1ea-c61e2b4504e8
And retry the resize.
I did:
$ touch /iscsi/root/991/.balloon-c3a5ae3d-ce7f-43c4-a1ea-c61e2b4504e8
$ vzctl set 991 --diskspace 90G --save
On Fri, 2014-10-24 at 15:46 -0700, Kir Kolyshkin wrote:
> Sorry for hijacking the thread,
It's good that you ask this. But back to the original problem: can you tell why vzctl snapshot-mount (or ploop snapshot-mount) is/was not working for you? Ideally, please provide a detailed scenario.
[...] anywhere and everything should work fine. But I suspect there are alignment issues that are not handled in my tool.
On Fri, 2014-10-24 at 16:35 +0400, Pavel Odintsov wrote:
Hello!
Could you send complete ploop_userspace output and dmesg output to
gist.github.com?
https://gist.github.com/anonymous/343ca13508366d7c5b6a
Roman
Hi Scott

Without having fully read your second mail, I think you want something like this:
* You have a CT with Zimbra 8.0.6 installed and you want to upgrade to 8.0.8.
* You create a snapshot like this:
$ vzctl snapshot CTID --name pre-upgrade-808
* You perform the upgrade and, if it goes havoc, you switch back to the snapshot.
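* Rolling back then looks like this (the UUID comes from snapshot-list):

$ vzctl snapshot-list CTID
$ vzctl snapshot-switch CTID --id <uuid>

* And once the upgrade turns out fine, you drop the snapshot instead:

$ vzctl snapshot-delete CTID --id <uuid>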
On Thu, 2014-10-16 at 16:38 -0600, Scott Dowdle wrote:
Greetings,
> I think I'm having some brainfart moment on how snapshots really
> function.
Ok, I played with it... and I *THINK* I understand it now. Your first snapshot is like a base...
Hey Scott

Sorry, I accidentally used the address of my son as sender, but it was still me writing the mail.

On Sat, 2014-10-18 at 13:53 -0600, Scott Dowdle wrote:
Martin,
I'm not quite following all of your statements. Specifically, I don't see why you think
Hi Pavel

On Mon, 2014-09-15 at 14:49 +0400, Pavel Odintsov wrote:
> I found the bug! Thanks to Maxim Patlasov for helping with the ploop v1 BAT format.
> Please check the version from git; it supports ploop v1 and v2 correctly :)
I confirm it is working for both ploop layouts. Thanks a lot for fixing it.
On Fri, 2014-09-12 at 10:56 +0200, Roman Haefeli wrote:
Hi Pavel
I might have some more information on the issue. It seems that only
'old' ploop images cannot be mounted by ploop_userspace. I actually
don't quite know the ploop version I used for creating the 'old' ploop
images, but I know
Hi

This might be not a bug but a problem of having different CPU models on the two host nodes. You cannot always live-migrate between different CPU models, as not all have the same capabilities. It's still possible to offline-migrate, because when a CT starts on a node with lesser capabilities, it only ever sees that node's capabilities from the start.
[...] RO root.hdd images with my tool: https://github.com/FastVPSEestiOu/ploop_userspace but it's not stable yet. You can try it and provide feedback.
Some more info:
It works on our test cluster where we have
2.6.32-openvz-042stab093.4-amd64 installed. The report from below is
from a host node running 2.6.32-042stab081.3-amd64.
Is ploop_userspace dependent on kernel version?
Roman
On Thu, 2014-08-28 at 15:59 +0200, Roman Haefeli wrote:
Hi all

At the university where I work, we plan to switch all containers from simfs to ploop images in the long run. Despite the many advantages of using ploop, there is one major drawback that keeps us from switching production now: we can't mount ploop images from read-only snapshots.
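Concretely, what we'd want is for the mount path to work against a snapshot (sketched with a placeholder UUID):

$ vzctl snapshot CTID --skip-suspend
$ vzctl snapshot-mount CTID --id <uuid> --target /mnt/backup

so that a backup job can read a frozen state while the CT keeps running.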
When using VETH with automatic MAC address assignment, a hard-coded prefix 00:18:51 is used. Is it possible to change that prefix through some configuration file? The problem could be mitigated by using a high enough prefix (e.g. fe:18:51).
Roman
Hey

On Thu, 2014-07-17 at 20:25 +0400, CoolCold wrote:
> I'm a bit confused with your questions, why standard packages
The download page lists, for AMD64 (x86_64, EM64T):
File: linux-image-2.6.32-openvz-042stab092.2-amd64_1_amd64.deb
Date: 2014-07-09 17:20:17
Size: 45 Mb
Hey again

May I ask again where I can get the sources of the OpenVZ kernel _DEB_ archive (I don't mean the kernel sources themselves)?
Thanks,
Roman
On Fri, 2014-07-11 at 09:44 +0200, Roman Haefeli wrote:
On Fri, 2014-07-11 at 11:01 +0400, Pavel Odintsov wrote:
You could extract the patch
Hi all

I'd like to test patches created by the OpenVZ devs and would like to be able to compile my own OpenVZ kernels for our Debian hostnodes. Where can I find the source packages to build OpenVZ kernels as .deb packages? Unlike other OpenVZ Debian packages like 'vzctl' or 'ploop', the kernel [...]
[...] of a release, but from the bugzilla bugtracker. I really need the sources of the deb package and not of the rpm, because I want to build the kernel the exact same way as the one I'm using now.
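My current working assumption is that one has to rebuild from the rpm sources by hand; a sketch, not a vetted procedure (file names vary per release):

$ rpm2cpio vzkernel-2.6.32-042stab085.20.src.rpm | cpio -idmv
$ tar xf linux-*.tar.bz2 && cd linux-*
$ # apply the patches and copy in the config shipped in the src.rpm
$ make oldconfig && make deb-pkg

But that is exactly not "the same way as the one I'm using now", hence the question.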
Roman
Hi all

We're still using OpenVZ kernels from the Proxmox project (which are, from what I know, based on the rpm packages provided by the OpenVZ team) in our production environment while testing the kernels provided by download.openvz.org. I found a small difference in bridging behavior, which
On Thu, 2014-03-27 at 21:02 +0100, Ola Lundqvist wrote:
> Hi Kir
> This can easily be solved by adding a 1: prefix to the version
> number. This is the usual Debian practice in these kinds of situations.
I always thought bumping the epoch was kind of a last-resort thing for situations where
[...] removes the dump file.
Roman
[...] be alternatives for version numbering (I've tested them with dpkg --compare-versions):
* As a complete upstream version: 42.85.20
* Combining old and new schema: 042+stab085.20
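A quick check of the combined schema against the current one; my reading of the Debian rules is that letters sort before non-letters, so 'stab...' < '+stab...':

$ dpkg --compare-versions 042stab085.20 lt 042+stab085.20 && echo "new schema sorts higher"

If that holds, upgrades from the current naming keep working without touching the epoch.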
On 26/03/14 at 17:07, Roman Haefeli wrote:
Hi all

I happened to be able to crash one hostnode of our testing cluster when restoring a CT.

Hostnodes:
* 3 hostnodes running Debian 7 amd64 with OpenVZ kernel
* Kernel: 042stab085.20
* VE_ROOT / VE_PRIVATE is on an NFS mount shared by the nodes

Test-CT:
* Debian 7 from a self-made template
Hi all, Ola
I followed the recent discussion about OpenVZ kernel package management
for Debian. While I don't really have a qualified opinion on the subject
matter (personally, I slightly tend towards a new package for each
release), let me mention problems with the current situation:
* 'uname
On Mon, 2014-01-13 at 11:03 -0800, Kir Kolyshkin wrote:
On 01/13/2014 10:08 AM, Kir Kolyshkin wrote:
On 01/13/2014 01:20 AM, Roman Haefeli wrote:
When you mentioned the scripts in /etc/init/, I found that all our flawlessly running Debian 6 CTs don't have this folder at all. I removed
Hi

When I start Debian 7 containers with the '--wait' option, it does start the CT, but the vzctl command never returns. The same works fine with Debian 6 containers. I figured that 'vzctl start CTID --wait' usually adds a line to the CT's /etc/inittab:

vz:2345:once:touch /.vzfifo
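Two things worth checking when --wait hangs (CTID 101 is hypothetical): whether the line actually got added, and whether the file ever shows up:

$ vzctl exec 101 grep vzfifo /etc/inittab
$ vzctl exec 101 ls -l /.vzfifo

If init never re-reads inittab, 'vzctl exec 101 telinit q' might also be telling.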
Hi all

We are running several CTs on a cluster of a few OpenVZ host nodes. The nodes share an NFS export where the ploop images of the CTs are located. I noticed that 'vzctl start CTID' will happily start a ploop-based CT that is already running on a different node.
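Until there is real locking, the best guard I can think of is asking every node before starting (hostnames are ours; 101 stands in for the CTID):

$ for n in hn1 hn2 hn3; do ssh $n vzctl status 101; done

and only starting if no node reports 'running'. Racy, obviously, but better than mounting the same ploop image twice.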
Hi all
Is it possible to add a second ploop device to a CT? If so, how?
I figured out how to manually create a ploop image and mount it into the
running CT. However, I didn't know how to automate that on CT start.
Any hints are welcome.
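The direction I was experimenting with is a per-CT mount hook; a sketch with made-up paths, assuming vzctl still runs /etc/vz/conf/CTID.mount when mounting the CT:

#!/bin/sh
# /etc/vz/conf/101.mount -- runs when CT 101 is mounted
. /etc/vz/vz.conf
. "$VE_CONFFILE"
# the second image was created once, beforehand, with something like:
#   mkdir -p /vz/extra/101 && ploop init -s 20g /vz/extra/101/data.hdd
ploop mount -m "$VE_ROOT/data" /vz/extra/101/DiskDescriptor.xml

plus a matching 101.umount that calls 'ploop umount /vz/extra/101/DiskDescriptor.xml'.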
Roman
Hi all
One of our CTs cannot be online migrated because it seems to produce
corrupt dump files. Checkpointing finishes without errors, but restoring
fails. The CT is running Debian 6.0.7 like all our other CTs. I haven't
figured out yet why only this CT is causing trouble.
The more severe
On Mon, 2013-04-15 at 11:14 +0200, Roman Haefeli wrote:
On one of our host nodes this problem even triggered a kernel panic and
thus killed all CTs running on that host. The problem could have been
mitigated if 'vzctl start' did not try to read corrupt dump files. Or,
if it detects a dump
Hi all

I'm experiencing sporadic errors when starting CTs that have their FS on a ploop device. The thing is, I can't reproduce the error when starting the CTs manually; I only get them when the cluster management (pacemaker) starts them. The setup is a 3-node (hostnodes) cluster with pacemaker
On Thu, 2013-02-14 at 17:58 +0100, Benjamin Henrion wrote:
> I spent a day debugging why, when I restarted a VZ with veth, the
> HN suddenly was not pingable anymore.
> Then I found this article:
> http://notes.asd.me.uk/2011/01/31/openvz-bridge-vanishing-network-fix/
> I put
Hi!

When adding VETH interfaces to containers with the following command:

$ vzctl set CTID --netif_add eth0,,,,vzbr0 --save

an automatically generated MAC address of the format 00:18:51:XX:XX:XX is configured for the container. This can cause problems with bridges, since a bridge seems to adopt the lowest MAC address of all its ports as its own.
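A workaround that sidesteps the generated prefix entirely would be passing an explicit MAC in --netif_add; as far as I can tell from the man page, the field order is ifname[,mac,host_ifname,host_mac,bridge], so something like:

$ vzctl set CTID --netif_add eth0,FE:18:51:00:00:01,,,vzbr0 --save

with a locally administered address chosen high enough that the bridge never adopts it as its own.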
Hi all

Only recently I discovered that online migration seems to work for us now. CTs on NFS, or NFS mounted inside a CT, are a non-issue now. We are running all our CTs on an NFS filesystem shared between hostnodes. While checkpointing and restoring works flawlessly with that setup, I noticed that
On Wed, 2012-10-10 at 19:16 +0200, Corin Langosch wrote:
On 10.10.2012 at 18:25 +0200, Roman Haefeli reduz...@gmail.com wrote:
We're having issues with processes in a container being killed by the OOM killer, although the hostnode does not have even half of its memory used. How can that be?
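The first thing to check is which beancounter is actually failing; the failcnt column (the last one) shows it, e.g. for a hypothetical CTID 101:

$ vzctl exec 101 cat /proc/user_beancounters

Any resource with a non-zero failcnt is the trigger; the CT gets OOM-killed when its own limits (on vSwap kernels typically physpages/swappages) are hit, regardless of how much free memory the node has.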
Hi all

We're running a Corosync/Pacemaker cluster with OpenVZ containers as resources, running on an NFS export mounted by all nodes. From what I can tell, there is no way to configure the path for the quota files, so we left them at their default location in /var/lib/vzquota.
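If per-CT disk quota isn't actually needed (ploop images enforce their size by themselves), my understanding is it can be switched off, which sidesteps the question of where the quota files live:

# in /etc/vz/conf/CTID.conf, or globally in /etc/vz/vz.conf
DISK_QUOTA=no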
On Fri, 2012-06-29 at 22:57 +0800, Tommy Tang wrote:
> Does anyone know the problem? Is it fixable, or is something wrong with my
> configuration?
> BTW: is it possible to put the private area of a CT on NFS?
We once had a similar problem. However, our setup is a bit different: containers running in a
On Wed, 2012-05-09 at 12:54 +0200, Aleksandar Ivanisevic wrote:
> Roman Haefeli reduz...@gmail.com
> writes:
[...]
We have an HA cluster with corosync/pacemaker running, which manages the CTs running on an NFS export shared across all nodes. As the HA layer makes sure
Thanks for all the responses!
On Thu, 2012-05-03 at 09:14 +0400, Kir Kolyshkin wrote:
On 05/02/2012 09:39 PM, Timh B wrote:
This was linked earlier this week;
https://github.com/CoolCold/tools/blob/master/openvz/kernel/create-ovz-kernel-for-debian.sh
Might be useful for you if you wish
Hi all

We're running OpenVZ on Debian Squeeze with the kernel shipped by Debian. Several sources recommend using the RHEL 6 stable kernel. Is it recommended to use it on Debian stable as well? If so, how should it be installed? The wiki seems to have links to rpm files only. The reason I ask is that
Hi

Our institution forces us Linux admins to get rid of all our physical hardware machines and only wants to maintain VMware virtual machines. Since we built our whole Linux infrastructure on OpenVZ and optimized the maintenance of our infrastructure with a lot of OpenVZ-specific scripts,
Hi all

We would like to build a MySQL cluster with two MySQL servers in a master-master configuration. We're trying to use mysql-mmm [1] for a fail-over setup. Both MySQL servers are running in their own container on two different host nodes. Now, what mmm does is manage a set of IP addresses,