Re: [Users] Centos/RHEL 7

2014-07-08 Thread Benjamin Henrion
On Tue, Jul 8, 2014 at 12:08 AM, jjs - mainphrame j...@mainphrame.com wrote:
 I'm downloading CentOS 7 now. Naturally I'm eager to run OpenVZ. Is there
 any sort of road map or schedule for the availability of a RHEL 7-based OVZ
 kernel?

I guess many people are still waiting for an answer from the OpenVZ
devs regarding support for a more recent kernel than 2.6.32, in this
case 3.10 if I am not mistaken.

-- 
Benjamin Henrion bhenrion at ffii.org
FFII Brussels - +32-484-566109 - +32-2-4148403
In July 2005, after several failed attempts to legalise software
patents in Europe, the patent establishment changed its strategy.
Instead of explicitly seeking to sanction the patentability of
software, they are now seeking to create a central European patent
court, which would establish and enforce patentability rules in their
favor, without any possibility of correction by competing courts or
democratically elected legislators.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] CentOS 7 OS Template now in contrib

2014-07-08 Thread Benjamin Henrion
On Mon, Jul 7, 2014 at 10:22 PM, Scott Dowdle dow...@montanalinux.org wrote:
 Greetings,

 CentOS sent out an announcement about the release of CentOS 7:
 http://lists.centos.org/pipermail/centos-announce/2014-July/020393.html

 I built a regular and a minimal OS Template and have uploaded them to contrib.
 Inside /root are the scripts to create the OS Template from scratch. They
 assume they are being run on a CentOS 7 host/container that has the CentOS
 repos configured correctly.

I think at some point Openvz.org should provide trusted builds like
Docker is doing.

At least then we would get good docs on how those images are built.

--
Benjamin Henrion bhenrion at ffii.org
FFII Brussels - +32-484-566109 - +32-2-4148403



[Users] strange reboot when starting a VE on a kernel 2.6.32-042stab084.26

2014-07-08 Thread Aleksandar Ivanisevic

Hi,

I have a host node that only works with kernels up to and including
2.6.32-042stab084.26. With any kernel higher than that, the host reboots
during vzctl start, right after the "The container has been mounted"
message. There is no oops, no logs (netconsole is configured and running),
no crash dump, nothing, just a reboot.

To make things more interesting, if I boot the debug kernel, everything
works fine as far as I can tell.

Any idea how I can debug this further?
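A rough sketch of next debugging steps for a reboot that leaves no logs; the CTID, sizes, and paths below are illustrative assumptions, not taken from this setup:

```shell
# Reserve memory for a crash kernel so kdump can capture a vmcore on panic;
# on an el6 host this means adding e.g. crashkernel=128M to the kernel
# command line in /boot/grub/grub.conf and enabling the kdump service.
chkconfig kdump on

# Keep a panic from silently rebooting the box before anything is flushed.
sysctl -w kernel.panic=0          # stay halted instead of auto-rebooting
sysctl -w kernel.panic_on_oops=1  # turn an oops into a capturable panic

# Reproduce with verbose vzctl output to narrow down the failing step.
vzctl --verbose start 101
```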

regards,



[Users] flashcache

2014-07-08 Thread Aleksandar Ivanisevic

Hi,

is anyone using flashcache with OpenVZ? If so, which version, and with
which kernel? Versions lower than 3 do not compile against the latest
el6 kernel, and version 3.11 and the latest git oops in
flashcache_md_write_kickoff with a null pointer.

I see provisions to detect the OVZ kernel source in the flashcache
makefile, so someone must be compiling and using it.

Any other SSD caching software that works with openvz?
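For reference, building flashcache against an OpenVZ kernel usually follows this pattern; the kernel version, header path, and repository URL below are assumptions, and this sketch is not a fix for the oops described above:

```shell
# Install the OpenVZ kernel headers and the toolchain.
yum install -y vzkernel-devel gcc make git

git clone https://github.com/facebook/flashcache.git
cd flashcache

# Point the flashcache makefile at the OpenVZ kernel build tree instead
# of the running (non-OpenVZ) kernel's headers.
make KERNEL_TREE=/usr/src/kernels/2.6.32-042stab092.1
make install KERNEL_TREE=/usr/src/kernels/2.6.32-042stab092.1
```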

regards,



Re: [Users] strange reboot when starting a VE on a kernel 2.6.32-042stab084.26

2014-07-08 Thread Aleksandar Ivanisevic
Aleksandar Ivanisevic
aleksan...@ivanisevic.de writes:

[...]

 To make things more interesting, if I boot the debug kernel, everything
 works fine as far as I can tell.

 Any idea how do I debug this further?

Eh, it looks like a kernel update kicked in while I was testing: I was
loading the debug kernel 2.6.32-042stab092.1 while testing against the
regular 090.5 kernel.

So it seems that this issue has been fixed in 092.1.

-- 
You are arrogant, overbearing, and working off your own frustration. -- Ivan
Tišljar, hr.comp.os.linux



Re: [Users] flashcache

2014-07-08 Thread Pavel Odintsov
Hi all!

I think it's really not a good idea, because a technology like SSD
caching should be tested _thoroughly_ before production use. You could
try it with simfs, but beware of ploop: it is really not a standard
ext4, and its custom caches show unexpected behaviour in some cases.

On Tue, Jul 8, 2014 at 1:59 PM, Aleksandar Ivanisevic
aleksan...@ivanisevic.de wrote:

 Hi,

 is anyone using flashcache vith openvz? If so, which version and with
 which kernel? Versions lower than 3 do not compile against the latest
 el6 kernel and version 3.11 and the latest git oopses in
 flashcache_md_write_kickoff with a null pointer.

 I see provisions to detect ovz kernel source in flashcache makefile, so
 someone must be compiling and using it.

 Any other SSD caching software that works with openvz?

 regards,




-- 
Sincerely yours, Pavel Odintsov


Re: [Users] strange reboot when starting a VE on a kernel 2.6.32-042stab084.26

2014-07-08 Thread Vasily Averin
On 07/08/2014 02:02 PM, Aleksandar Ivanisevic wrote:
 Aleksandar Ivanisevic
 aleksan...@ivanisevic.de writes:
 
 [...]
 
 To make things more interesting, if I boot the debug kernel, everything
 works fine as far as I can tell.

 Any idea how do I debug this further?
 
 Eh, it looks like a kernel update kicked in while I was testing, so I
 was loading a debug kernel 2.6.32-042stab092.1, while testing with
 regular 090.5 kernel.
 
 So, it seems that this issue has been fixed in 092.1.

According to the description, it looks similar to OpenVZ bug 2983 (with
duplicates 2973, 3004 and 3010):
https://bugzilla.openvz.org/show_bug.cgi?id=2983

Fixed in 091.4 and in the last released kernel, 092.1.





Re: [Users] flashcache

2014-07-08 Thread Aleksandar Ivanisevic

I am actually planning on using it only on test systems where I have
commodity SATA disks that are getting a bit overwhelmed. I hope to get
better value from a SATA+SSD combination than I would from SAS disks
with the appropriate controllers and fancy RAID levels, which cost at
least 3 times more.

Anyway, it looks like that bug also got fixed in 092.1; at least it
doesn't oops immediately any more.

Pavel Odintsov pavel.odint...@gmail.com
writes:

 Hi all!

 I thought it's really not good idea because technology like ssd
 caching should be tested _thoroughly_ before production use. But you
 could try it with  simfs but beware of ploop because it's really not
 an standard ext4 with custom caches and unexpected behaviour in some
 cases.

 On Tue, Jul 8, 2014 at 1:59 PM, Aleksandar Ivanisevic
 aleksan...@ivanisevic.de wrote:

 Hi,

 is anyone using flashcache vith openvz? If so, which version and with
 which kernel? Versions lower than 3 do not compile against the latest
 el6 kernel and version 3.11 and the latest git oopses in
 flashcache_md_write_kickoff with a null pointer.

 I see provisions to detect ovz kernel source in flashcache makefile, so
 someone must be compiling and using it.

 Any other SSD caching software that works with openvz?

 regards,



Re: [Users] Manage KVM/Qemu and OpenVZ the same way

2014-07-08 Thread Bosson VZ
Hello,

I know about the recent change in the default UB limit settings in vzctl. I
haven't implemented the same logic in bossonvz yet, because I have had some
problems with the mentioned vzctl change on some of my testing containers in
the past. I will return to the problem and fix it; it should be very simple.

About ploop: we don't use ploop on our clusters because we have shared
storage built on top of drbd. All container data is replicated to all cluster
nodes, so during migration we don't have to wait for a sync (be it ploop copy
or scp). How long would a container take to migrate using ploop copy if its
partition were, let's say, 500 GiB?

We also try to keep the number of storage layers as small as possible to get
maximum throughput. OCFS, clustered LVM and drbd suit us quite well. You can
extend or shrink LVs online, and extend filesystems online (with ext4 at
least).

You can transfer an LV to a remote node online using pvmove and iSCSI. We
use this regularly without any trouble.
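A minimal sketch of the two LVM operations described above; the VG, LV, and device names are made-up placeholders:

```shell
# Grow a container's logical volume and its ext4 filesystem while mounted;
# ext4 supports online grow (shrinking still requires an unmount).
lvextend -L +50G /dev/vzvg/ct101
resize2fs /dev/vzvg/ct101

# Move a volume group's extents off one physical volume onto another
# (e.g. an iSCSI-attached disk) while the LVs stay in use.
pvmove /dev/sdb1 /dev/sdc1
```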

Online snapshot merging is not possible with LVMs, that is true.

I don't know if I like the fact that you had to patch ext4 to get the extra
features of ploop. I wouldn't say that trading stability for features is the
best idea. We've had problems in the past with bugs in the modified vzkernel
ext4 leading to broken production file systems.

-- 
David Fabian
Cluster Design, s.r.o.

On Thu, 3 July 2014 11:13:19, Kir Kolyshkin wrote:
 On 07/03/2014 05:19 AM, Bosson VZ wrote:

  Hi,

  the driver supports vSwap AFAIK. You can set RAM limits (soft, hard)
  and a SWAP hard limit (the soft limit is always 0). Could you explain
  to me what you mean by vRaw?

  I understand that vzctl provides a lot of knobs to play with, but since
  libvirt is supposed to be a common management interface, I had to play
  by the libvirt rules. I extended the libvirt XML syntax only as much as
  was needed to support the features that we use at the company. I also
  didn't want to break too many things, so I stuck with the semantics
  that were already in libvirt for LXC.

  UBC settings are currently reduced to managing only RAM and SWAP. All
  other UBC limits are set as unrestricted.
 You'd better also set some secondary UBCs (if those are not set
 explicitly), same
 as vzctl does and as described at
 http://openvz.org/VSwap#Implicit_UBC_parameters
 
 Those are lockedpages, oomguarpages, vmguarpages, and optionally
 privvmpages.
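A sketch of what setting those implicit UBCs explicitly might look like with vzctl; the CTID and values are made-up examples, not the formulas from the wiki page:

```shell
# Primary vSwap parameters.
vzctl set 101 --ram 2G --swap 1G --save

# Secondary/implicit UBCs, set explicitly instead of relying on defaults.
vzctl set 101 --lockedpages 1G --oomguarpages 2G --save
vzctl set 101 --vmguarpages 3G --privvmpages unlimited --save
```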
 
  Technically, it is not hard to extend the driver to support all the
  UBC limits, but I don't want to diverge from the upstream libvirt
  domain file syntax too much.

  Ploop is not supported, as we don't use it in the company. Most of the
  features that ploop provides can be easily achieved with LVM/drbd.

 I tend to disagree here.

 First, ploop works on top of an ext4 filesystem, so it's way more
 flexible. You just can't move an LVM partition to a different server
 (in the ploop case it's just scp).

 There is no online snapshot merging in LVM as far as I know.

 Finally, I can't see how DRBD can be a replacement for ploop copy.

 This is just off the top of my head; maybe there's something else as
 well (say, I'm not sure how good LVM online grow/shrink is).

  However, unlike the UBC limits, there is already a syntax for
  mounting a file as a file system in libvirt, so implementing ploop
  might be possible.

  David Fabian
  Cluster Design, s.r.o.
  On Wed, 2 July 2014 17:20:19, Pavel Odintsov wrote:

   Hello!

   Very nice! But what about vRaw/vSwap/non-standard UBC management and
   ploop support? I have used libvirt for KVM for a few years, but it's
   really ugly for OpenVZ in the upstream repository from RH.

   On Wed, Jul 2, 2014 at 4:26 PM, Bosson VZ bosso...@bosson.eu wrote:

    Hello,

    for everyone who would like to manage their Qemu/KVM and OpenVZ
    virtuals in the same fashion, I am presenting a new libvirt driver,
    bossonvz, which will allow you to manage OpenVZ containers with
    libvirt. To name a couple of features:

    - complete control over the container
    - live migration via libvirtd
    - remote VNC console
    - fs mounts management

    Just check this web page out to find out more.

    http://bossonvz.bosson.eu/

    The driver is provided as a separate patch to libvirt and as RPM
    packages for CentOS/SL 6.5.

    --
    David Fabian
    Cluster Design, s.r.o.

Re: [Users] CentOS 7 OS Template now in contrib

2014-07-08 Thread Scott Dowdle
Greetings,

- Original Message Benjamin Henrion bhenrion at ffii.org -
 I think at some point Openvz.org should provide trusted builds like
 docker is doing.

Ok, I'll bite.  What is a Docker Trusted Build?  Whatever those are, I'm sure 
the OpenVZ official OS Templates are the equivalent.

In several of the OS Templates I contribute (Fedora 20, CentOS 6 and 7, SL 6
[7 ASAP], Oracle EL 6 [7 ASAP]), the build scripts are included within the OS
Template (/root/create-*.sh), so users can build their own from scratch if
desired.

 At least we could get good docs on how those images are built.

There are fairly good docs sprinkled throughout the wiki but it varies from 
distro to distro.

I'd guess that the vast majority of OS Templates come from the various chroot
build environment programs that many distros have now. Provide those programs
with a list of packages and they download them from the distro's official
repositories, extract them into an install root directory and, when done,
make some minor changes for containerization (fixing up /etc/fstab,
eliminating unneeded gettys, etc.). It probably works best when you are
building distro X from within distro X.
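As a rough illustration of that chroot-style build for an RPM distro; the paths, package set, and output name are invented for the example and are not the actual contrib scripts:

```shell
INSTALLROOT=/vz/tmp/centos7-root
mkdir -p "$INSTALLROOT"

# Pull a minimal package set straight from the distro's repositories.
yum --installroot="$INSTALLROOT" --releasever=7 -y groupinstall core

# Minor changes for containerization: empty fstab, no gettys.
> "$INSTALLROOT/etc/fstab"
rm -f "$INSTALLROOT"/etc/systemd/system/getty.target.wants/getty@tty*.service

# Pack the install root up as an OS template tarball.
tar czf centos-7-x86_64-minimal.tar.gz -C "$INSTALLROOT" .
```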

For the contributed OS Templates, there is supposed to be a corresponding
forum post with build details, but very few people seem to follow that,
myself included. I need to get better at that.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [Users] flashcache

2014-07-08 Thread Pavel Odintsov
I know of a few incidents of ___FULL___ data loss from flashcache users.
Beware of it in production.

If you want speed, you can try ZFS with an L2ARC/zvol cache, since it is a
native solution.
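For the curious, the native ZFS route looks roughly like this; the pool and device names are invented:

```shell
# Add an SSD as an L2ARC read cache to an existing pool.
zpool add tank cache /dev/disk/by-id/ata-EXAMPLE-SSD

# Or carve out a zvol that some other caching layer could sit on top of.
zfs create -V 32G tank/cachevol
```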

On Tue, Jul 8, 2014 at 8:05 PM, Nick Knutov m...@knutov.com wrote:
 We are using the latest flashcache 2.* with 2.6.32-042stab083.2 in
 production, and have been for a long time. We are planning to migrate to
 3.0 with the latest 090.5, but have not tried it yet.


 08.07.2014 15:59, Aleksandar Ivanisevic writes:

 Hi,

 is anyone using flashcache vith openvz? If so, which version and with
 which kernel? Versions lower than 3 do not compile against the latest
 el6 kernel and version 3.11 and the latest git oopses in
 flashcache_md_write_kickoff with a null pointer.

 I see provisions to detect ovz kernel source in flashcache makefile, so
 someone must be compiling and using it.

 Any other SSD caching software that works with openvz?


 --
 Best Regards,
 Nick Knutov
 http://knutov.com
 ICQ: 272873706
 Voice: +7-904-84-23-130



-- 
Sincerely yours, Pavel Odintsov



Re: [Users] flashcache

2014-07-08 Thread Nick Knutov
We are using this only for caching reads (writethrough mode), not writes.
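For context, a writethrough flashcache device is created roughly like this; the device names are placeholders:

```shell
# "thru" mode: reads are cached on the SSD, writes go straight to disk,
# so the SSD never holds the only copy of dirty data.
flashcache_create -p thru cachedev /dev/ssd1 /dev/sdb1

# The combined device then appears as /dev/mapper/cachedev and is
# mounted in place of the raw disk.
```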


(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.

Anyway, we are migrating completely to SSD. It's just cheaper.


08.07.2014 22:23, Pavel Odintsov writes:
 I knew about few incidents with ___FULL___ data loss from customers of
 flashcache. Beware of it in production.
 
 If you want speed you can try ZFS with l2arc/zvol cache because it's
 native solution.

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


Re: [Users] flashcache

2014-07-08 Thread Scott Dowdle
Greetings,

- Original Message -
 (offtopic) We can not use ZFS. Unfortunately, NAS with something like
 Nexenta is to expensive for us.

From what I've gathered from a few presentations, ZFS on Linux
(http://zfsonlinux.org/) is as stable as, and more performant than, ZFS on
the OpenSolaris forks... so you can build your own if you can spare the
people to learn the best practices.

I don't have a use for ZFS myself so I'm not really advocating it.

TYL,
-- 
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]


Re: [Users] flashcache

2014-07-08 Thread Nick Knutov
I know about this project, but what about the stability and compatibility of
ZFS on Linux with the OpenVZ kernel? Has anyone ever tested it?

Also, with ext4 I can always boot into rescue mode at any [of our]
datacenters and, for example, move data to another/new server. I have no
idea how to get at ZFS data with the usual rescue tools if something goes
wrong with the hardware or with a recently installed kernel.

On the other hand, we have been using flashcache in production for about two
years, with zero problems during all this time. It is not as fast as bcache
(which is not compatible with OpenVZ, I think), but it solves the problem
well.


08.07.2014 23:52, Scott Dowdle writes:
 Greetings,
 
 - Original Message -
 (offtopic) We can not use ZFS. Unfortunately, NAS with something like
 Nexenta is to expensive for us.
 
 From what I've gathered from a few presentations, ZFS on Linux 
 (http://zfsonlinux.org/) is as stable but more performant than it is on the 
 OpenSolaris forks... so you can build your own if you can spare the people to 
 learn the best practices.
 
 I don't have a use for ZFS myself so I'm not really advocating it.
 
 TYL,
 

-- 
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


[Users] RHEL5 kernel end of life

2014-07-08 Thread Kir Kolyshkin
This is to announce that the RHEL5-based OpenVZ kernel branch will reach
End Of Life in October 2014, and will no longer be supported thereafter.

There is no guarantee of any RHEL5 kernel updates after the given date,
so we urge everyone to migrate their systems to RHEL6-based kernels.

If you experience difficulties and need help migrating your systems,
see http://openvz.org/Support for available support options.
The best one is official support from Parallels:
http://www.parallels.com/support/virtualization-suite/openvz/

Regards,
  OpenVZ team.


Re: [Users] flashcache

2014-07-08 Thread Pavel Odintsov
Hello!

Yep, a read cache is a nice and safe solution, but not a write cache :)

No, we do not use ZFS in production yet. We have done only very specific
tests, like this one: https://github.com/zfsonlinux/zfs/issues/2458 But
you could do some performance tests and share :)

On Wed, Jul 9, 2014 at 12:55 AM, Nick Knutov m...@knutov.com wrote:
 I read
 http://www.stableit.ru/2014/07/using-zfs-with-openvz-openvzfs.html . Do
 you use it in production? Can you share speed tests or some other
 experience with zfs and openvz?


 08.07.2014 22:23, Pavel Odintsov writes:
 I knew about few incidents with ___FULL___ data loss from customers of
 flashcache. Beware of it in production.

 If you want speed you can try ZFS with l2arc/zvol cache because it's
 native solution.


 --
 Best Regards,
 Nick Knutov
 http://knutov.com
 ICQ: 272873706
 Voice: +7-904-84-23-130



-- 
Sincerely yours, Pavel Odintsov
