Bumping up - anyone still on flashcache & OpenVZ kernels? I tried to compile flashcache DKMS (3.1.3) against 2.6.32-042stab112.15 and I'm getting errors:
DKMS make.log for flashcache-1.0-227-gc0eeb3d1e539 for kernel 2.6.32-042stab112.15-el6-openvz (x86_64)
Fri Nov 13 13:56:24 MSK 2015
make[1]: Entering directory '/var/lib/dkms/flashcache/1.0-227-gc0eeb3d1e539/build'
grep: /etc/redhat-release: No such file or directory
make -C /lib/modules/2.6.32-042stab112.15-el6-openvz/build M=/var/lib/dkms/flashcache/1.0-227-gc0eeb3d1e539/build modules V=0
make[2]: Entering directory '/usr/src/linux-headers-2.6.32-042stab112.15-el6-openvz'
grep: /etc/redhat-release: No such file or directory
  CC [M]  /var/lib/dkms/flashcache/1.0-227-gc0eeb3d1e539/build/flashcache_conf.o
In file included from /usr/src/linux-headers-2.6.32-042stab112.15-el6-openvz/arch/x86/include/asm/timex.h:5:0,
                 from include/linux/timex.h:171,
                 from include/linux/jiffies.h:8,
                 from include/linux/ktime.h:25,
                 from include/linux/timer.h:5,
                 from include/linux/workqueue.h:8,
                 from include/linux/mmzone.h:19,
                 from include/linux/gfp.h:4,
                 from include/linux/kmod.h:22,
                 from include/linux/module.h:13,
                 from /var/lib/dkms/flashcache/1.0-227-gc0eeb3d1e539/build/flashcache_conf.c:26:
/usr/src/linux-headers-2.6.32-042stab112.15-el6-openvz/arch/x86/include/asm/tsc.h: In function ‘vget_cycles’:
/usr/src/linux-headers-2.6.32-042stab112.15-el6-openvz/arch/x86/include/asm/tsc.h:45:2: error: implicit declaration of function ‘__native_read_tsc’ [-Werror=implicit-function-declaration]
  return (cycles_t)__native_read_tsc();
  ^
In file included from include/linux/sched.h:72:0,
                 from include/linux/kmod.h:28,
                 from include/linux/module.h:13,
                 from /var/lib/dkms/flashcache/1.0-227-gc0eeb3d1e539/build/flashcache_conf.c:26:
include/linux/signal.h: In function ‘sigaddset’:
include/linux/signal.h:41:6: error: ‘_NSIG_WORDS’ undeclared (first use in this function)
  if (_NSIG_WORDS == 1)
...

On Fri, Jul 11, 2014 at 4:34 AM, Nick Knutov <m...@knutov.com> wrote:
> I think you are speaking here about different cases.
>
> One is making an HA backup node.
> When we are backing up a full node to another node (1:1), zfs
> send/receive is much better (and the goal is to save the data, not the
> running processes). Without zfs, ploop snapshotting plus vzmigrate is
> good enough (over SSD), while rsync with ext4 (simfs inside the CT) is
> a real pain.
>
> The other case is migrating a large number of CTs across a large number
> of nodes for resource-usage balancing [with zero downtime]. There is no
> alternative to vzmigrate here, although zfs send/receive with a
> per-container ZVOL can speed this up [if it is important to transfer
> between nodes faster with less network usage].
>
> On 10.07.2014 15:35, Pavel Odintsov wrote:
>>> Why? ZFS send/receive is able to do a bit-by-bit identical copy of
>>> the FS; I thought the point of migration is that the CT doesn't
>>> notice any change, so I don't see why the inode numbers should
>>> change.
>> Do you have a really working zero-downtime vzmigrate on ZFS?
>
> --
> Best Regards,
> Nick Knutov
> http://knutov.com
> ICQ: 272873706
> Voice: +7-904-84-23-130
> _______________________________________________
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users

--
Best regards,
[COOLCOLD-RIPN]
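One detail that stands out in the log above is "grep: /etc/redhat-release: No such file or directory": the flashcache build probes that file to detect RHEL-derived kernels, and the 042stab OpenVZ kernels are RHEL6-based, so on a Debian/Ubuntu build host the detection silently fails and RHEL-specific code paths may never be enabled (which may or may not be related to the `__native_read_tsc`/`_NSIG_WORDS` errors). The sketch below only illustrates that style of detection; the stub contents and the extraction pattern are my assumptions, not flashcache's actual Makefile code, and creating a real /etc/redhat-release stub is an untested workaround.

```shell
#!/bin/sh
# Side-effect-free sketch of RHEL-release detection of the kind the
# flashcache build appears to do. A temp file stands in for
# /etc/redhat-release; the stub text is a hypothetical example.
REL=$(mktemp)
echo 'CentOS release 6.7 (Final)' > "$REL"

# Pull out the major release number, as a detection script might:
MAJOR=$(sed -n 's/.*release \([0-9]*\)\..*/\1/p' "$REL")
echo "detected RHEL major release: $MAJOR"

rm -f "$REL"
```

If the detection is indeed the problem, echoing a matching CentOS 6 line into /etc/redhat-release on the build host (as root) would be the equivalent real-world experiment.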
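Nick's first case (1:1 full-node backup via zfs send/receive) can be sketched as a dry run. Everything here is hypothetical: the pool, dataset, and host names are made up, and the script only echoes the commands it would run instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of per-container zfs send/receive backup, as discussed
# in the quoted thread. POOL, DATASET, and DEST are hypothetical names;
# 'echo' keeps the sketch side-effect free.
POOL=tank
DATASET="$POOL/ct101"     # hypothetical per-container dataset (or ZVOL)
DEST=backupnode           # hypothetical receiving host
SNAP=daily-$(date +%Y%m%d)

# Full initial copy:
echo "zfs snapshot ${DATASET}@${SNAP}"
echo "zfs send ${DATASET}@${SNAP} | ssh ${DEST} zfs receive -F ${DATASET}"

# Later runs send only the delta since a previous snapshot, which is
# where the bandwidth savings over rsync come from:
PREV=daily-20140710       # hypothetical earlier snapshot
echo "zfs send -i @${PREV} ${DATASET}@${SNAP} | ssh ${DEST} zfs receive ${DATASET}"
```

An incremental `zfs send -i` transfers only the blocks changed since the base snapshot, which is why this approach beats rsync over simfs/ext4 for whole-container backup, while vzmigrate remains the tool for live, zero-downtime moves.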