Let me bump this thread again - has anyone tried dm-cache on OpenVZ/CentOS 6
kernels? Looks like some support is included:
root@mu2:~# fgrep CONFIG_DM_CACHE /boot/config-2.6.32-042stab112.15-el6-openvz
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_MQ=m
CONFIG_DM_CACHE_CLEANER=m
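In case someone wants to experiment with it before proper userspace support is sorted out, a minimal dm-cache setup via raw dmsetup would look roughly like this (all device names and the block size are made-up examples; write-through mode with the default/mq policy):

modprobe dm_cache
modprobe dm_cache_mq
dmsetup targets | grep cache    # the "cache" target should now be listed
# table: 0 <origin size> cache <metadata dev> <cache dev> <origin dev> <block size> <#features> <features> <policy> <#policy args>
dmsetup create cdev --table "0 $(blockdev --getsz /dev/sdb1) cache /dev/sdc1 /dev/sdc2 /dev/sdb1 512 1 writethrough default 0"
# /dev/sdc1 = small SSD metadata partition, /dev/sdc2 = SSD cache partition,
# /dev/sdb1 = slow origin disk; 512 sectors = 256 KiB cache block size.
# The cached device then appears as /dev/mapper/cdev.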
From userspace utils view, I'm
Virtuozzo Storage, PM
Odin
-Original Message-
From: users-boun...@openvz.org [mailto:users-boun...@openvz.org] On Behalf Of
Corrado Fiore
Sent: Monday, November 16, 2015 1:22 PM
To: OpenVZ users <users@openvz.org>
Subject: Re: [Users] flashcache
Hi Nick,
could you elaborat
I've heard this from a large VPS hosting provider.
Anyway, even our internal projects require more than 100 MB/s at peak and
more than 100 GB of storage (while only 100 GB are free). So local SSDs
are cheaper for us than a 10G network plus the commercial version of pstorage.
16.11.2015 15:22, Corrado Fiore wrote:
Hi guys,
I'm not sure about flashcache 3.x (whether anybody used it and whether it was ever
compilable against OpenVZ kernels),
but for flashcache 2.x I know for sure that it compiled fine several months ago =>
if something is broken now, it is most probably some simple issue.
So I suggest
1) file
Hi Nick,
could you elaborate more on the second point? As far as I understand, pstorage
is in fact targeted at clusters with hundreds of containers, so I am a bit
curious where you got that information.
If there's anyone on the list who has used pstorage in clusters > 7 -
Unfortunately, pstorage has two major disadvantages:
1) it's not free
2) it's not usable for more than 1-4 CTs over a 1 gigabit network in real-world
cases (as far as I know)
14.11.2015 16:12, Corrado Fiore wrote:
You might want to use Odin Cloud Storage (pstorage) instead, as it goes beyond
SSD
Hi,
even if FlashCache compiled correctly, I would suggest not using it, as the
performance will most likely be sub-optimal (at least in my experience).
You might want to use Odin Cloud Storage (pstorage) instead, as it goes beyond
SSD acceleration, i.e. it is distributed and it offers
Bumping up - anyone still on flashcache & openvz kernels? Tried to
compile flashcache 3.1.3 dkms against 2.6.32-042stab112.15, getting
errors:
DKMS make.log for flashcache-1.0-227-gc0eeb3d1e539 for kernel
2.6.32-042stab112.15-el6-openvz (x86_64)
Fri Nov 13 13:56:24 MSK 2015
make[1]: Entering
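(For anyone trying to reproduce the failure: the usual dkms sequence for an out-of-tree module like this is roughly the following - the module name and version have to match what is in the source tree's dkms.conf; 3.1.3 here is only an illustration:)

cp -a flashcache-3.1.3 /usr/src/            # source tree containing dkms.conf
dkms add     -m flashcache -v 3.1.3
dkms build   -m flashcache -v 3.1.3 -k 2.6.32-042stab112.15-el6-openvz
dkms install -m flashcache -v 3.1.3 -k 2.6.32-042stab112.15-el6-openvz
# the full build log ends up in /var/lib/dkms/flashcache/3.1.3/build/make.log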
No. Even 2.x flashcache cannot be compiled against recent openvz
rhel6 kernels.
13.11.2015 15:57, CoolCold wrote:
Bumping up - anyone still on flashcache & openvz kernels? Tried to
compile flashcache 3.1.3 dkms against 2.6.32-042stab112.15, getting
errors:
--
Best Regards,
Nick
On 07/09/2014 06:58 PM, Kir Kolyshkin wrote:
On 07/08/2014 11:54 PM, Pavel Snajdr wrote:
On 07/08/2014 07:52 PM, Scott Dowdle wrote:
Greetings,
- Original Message -
(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.
From what I've
Pavel Odintsov pavel.odint...@gmail.com
writes:
Hello!
Yep, a read cache is a nice and safe solution, but not a write cache :)
No, we do not use ZFS in production yet. We have done only very specific
tests like this: https://github.com/zfsonlinux/zfs/issues/2458 But you
can do some performance tests
Not true, IO limits are working as they should (if we're talking vzctl
set --iolimit/--iopslimit). I've kicked the ZoL guys around to add IO
accounting support, so it is there.
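(For reference, those per-container limits are set with something like the following; the CT ID and the values are only examples:)

vzctl set 101 --iolimit 10M --iopslimit 1000 --save
vzctl set 101 --iolimit 0 --iopslimit 0 --save    # 0 removes the limits again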
Can you share the tests with us? For standard folders like simfs these
limits work badly in a big number of cases
How? ZFS
On 07/10/2014 11:35 AM, Pavel Odintsov wrote:
Not true, IO limits are working as they should (if we're talking vzctl
set --iolimit/--iopslimit). I've kicked the ZoL guys around to add IO
accounting support, so it is there.
Can you share the tests with us? For standard folders like simfs these
Thank you for your answers! It's really useful information.
On Thu, Jul 10, 2014 at 2:08 PM, Pavel Snajdr li...@snajpa.net wrote:
On 07/10/2014 11:35 AM, Pavel Odintsov wrote:
Not true, IO limits are working as they should (if we're talking vzctl
set --iolimit/--iopslimit). I've kicked the ZoL
Could you share your patches to vzmigrate and vzctl?
On Thu, Jul 10, 2014 at 2:25 PM, Pavel Odintsov
pavel.odint...@gmail.com wrote:
Thank you for your answers! It's really useful information.
On Thu, Jul 10, 2014 at 2:08 PM, Pavel Snajdr li...@snajpa.net wrote:
On 07/10/2014 11:35 AM, Pavel
On 07/10/2014 12:32 PM, Pavel Odintsov wrote:
Could you share your patches to vzmigrate and vzctl?
We don't have any; where vzctl/vzmigrate didn't satisfy our needs, we
went around these utilities and let vpsAdmin on the hwnode
manage things.
You can take a look here:
On 07/10/2014 12:50 PM, Pavel Snajdr wrote:
On 07/10/2014 12:32 PM, Pavel Odintsov wrote:
Could you share your patches to vzmigrate and vzctl?
We don't have any; where vzctl/vzmigrate didn't satisfy our needs, we
went around these utilities and let vpsAdmin on the hwnode
manage
There are two important points here:
1) As Pavel wrote, IO can't be separated easily with one fs (for now, but I
think this can change with cgroups in the future)
2) per-user quota inside a CT is supported for ext4 only at the moment
These two points may or may not be important for you.
In our real life we never had
I think you are speaking here about different cases.
One is making an HA backup node. When we are backing up a full node to
another node (1:1), zfs send/receive is much better (and the goal is to
save the data, not the running processes). Without zfs, ploop snapshotting and
vzmigrate are good enough (over
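A minimal sketch of that zfs send/receive approach (the pool/dataset names and the target host are made up):

zfs snapshot -r tank/private@nightly
zfs send -R tank/private@nightly | ssh backup-node zfs receive -F backup/private
# later runs can send only the delta:
# zfs send -R -i @previous tank/private@nightly | ssh backup-node zfs receive -F backup/private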
On 07/08/2014 07:52 PM, Scott Dowdle wrote:
Greetings,
- Original Message -
(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.
From what I've gathered from a few presentations, ZFS on Linux
(http://zfsonlinux.org/) is as stable
On 07/08/2014 11:54 PM, Pavel Snajdr wrote:
On 07/08/2014 07:52 PM, Scott Dowdle wrote:
Greetings,
- Original Message -
(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.
From what I've gathered from a few presentations, ZFS on
Hi all!
I thought it's really not a good idea, because a technology like SSD
caching should be tested _thoroughly_ before production use. But you
could try it with simfs; beware of ploop, because it's really not a
standard ext4 - it has custom caches and unexpected behaviour in some
cases.
On Tue,
I am actually planning on using it only on test systems where I have
commodity SATA disks that are getting a bit overwhelmed. I hope to get
better value from a SATA+SSD combination than I would with SAS disks and
the appropriate controllers and fancy RAID levels that cost at least 3
times more.
I know about a few incidents of ___FULL___ data loss from users of
flashcache. Beware of it in production.
If you want speed, you can try ZFS with an l2arc/zvol cache, because it's
a native solution.
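(For what it's worth, adding an L2ARC read cache to an existing pool is a one-liner; the pool and device names are just examples:)

zpool add tank cache /dev/sdd
zpool iostat -v tank    # the SSD now shows up under the "cache" section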
On Tue, Jul 8, 2014 at 8:05 PM, Nick Knutov m...@knutov.com wrote:
We are using the latest flashcache
We are using it only for caching reads (mode thru), not writes.
(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.
Anyway, we are migrating completely to SSD. It's just cheaper.
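(For context, a write-through ("thru") flashcache device like the one mentioned above is created roughly as follows; device names and the mount point are only examples:)

flashcache_create -p thru cachedev /dev/sdc1 /dev/sdb1   # SSD partition first, then the backing disk
mount /dev/mapper/cachedev /vz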
08.07.2014 22:23, Pavel Odintsov wrote:
I know about a few
Greetings,
- Original Message -
(offtopic) We cannot use ZFS. Unfortunately, a NAS with something like
Nexenta is too expensive for us.
From what I've gathered from a few presentations, ZFS on Linux
(http://zfsonlinux.org/) is as stable as, but more performant than, it is on the
OpenSolaris
I know about this project, but what about the stability/compatibility of ZFS on
Linux with the OpenVZ kernel? Has anyone ever tested it?
Also, with ext4 I can always boot to rescue mode at any [of our] datacenters
and, for example, move the data to another/new server. I have no idea how to
get at the ZFS data if something
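(On the rescue-mode concern: from any live/rescue system that has the ZoL modules installed, the pool can normally be brought back with something like this; the pool name is an example:)

zpool import            # scan attached disks for importable pools
zpool import -f tank    # force-import if the pool was not exported cleanly
zfs mount -a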
Hello!
Yep, a read cache is a nice and safe solution, but not a write cache :)
No, we do not use ZFS in production yet. We have done only very specific
tests like this: https://github.com/zfsonlinux/zfs/issues/2458 But you
can do some performance tests and share :)
On Wed, Jul 9, 2014 at 12:55 AM, Nick