Hello,
online resizing is limited by the capabilities of ext4. Since ext4
allows only online growing, we have to do any shrinking offline. On the other
hand, our customers almost always request adding more space, which is doable
online with a single command. Another problem with
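A minimal sketch of the grow-online / shrink-offline asymmetry described above, assuming a hypothetical container 101 whose filesystem sits on an LVM volume /dev/vg0/ct101 (device name, CT ID, and sizes are illustrative):

```shell
# Growing is possible online: extend the device, then resize2fs
# while the filesystem is still mounted (ext4 supports online grow).
lvextend -L +10G /dev/vg0/ct101
resize2fs /dev/vg0/ct101

# Shrinking must be done offline: ext4 cannot shrink while mounted.
vzctl stop 101
umount /dev/vg0/ct101
e2fsck -f /dev/vg0/ct101
resize2fs /dev/vg0/ct101 20G
lvreduce -L 20G /dev/vg0/ct101
mount /dev/vg0/ct101 /vz/private/101
vzctl start 101
```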
Hi,
On 08/04/2015 12:51 AM, Bosson VZ wrote:
Hello,
yes, we have 200+ containers all running on top of simfs in clusters. Most of
the limitations of simfs mentioned above are not a problem to us as we use
clustered LVMs on top of DRBD storage. Every container has its own file-system
sitting on
Hello,
yes, we have 200+ containers all running on top of simfs in clusters. Most of
the limitations of simfs mentioned above are not a problem to us as we use
clustered LVMs on top of DRBD storage. Every container has its own file-system
sitting on an LVM block device. This layout is
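The one-filesystem-per-container layout described here can be sketched roughly as follows (volume group, size, and CT ID are assumptions, not taken from the message):

```shell
# One logical volume per container, formatted ext4 and mounted at
# the container's private area, then used with the simfs layout.
lvcreate -L 20G -n ct101 vg0
mkfs.ext4 /dev/vg0/ct101
mkdir -p /vz/private/101
mount /dev/vg0/ct101 /vz/private/101
```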
Hello Sergey,
I've still got four HWNs with SimFS containers (approx 550 CTs). What
will you need them for?
*John Edel*
Owner, General Manager
*Jetfire Networks, L.L.C.*
6950 S.W. Hampton St.
Suite #103
Tigard, Oregon 97223
971-285-4567 tel:971-285-4567
j...@jetfirenetworks.com
On 23.07.2015 3:45, Scott Dowdle wrote:
vzctl has a compact option that will basically take the free space
and give it back to the host. I've used compact a few times but I
don't use it regularly... so I'm not sure how efficient it is nor
how good it is at reclaiming 100% of the unused
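For reference, a hedged example of the compact operation mentioned above (the CT ID is illustrative):

```shell
# Return unused blocks inside the ploop image back to the host.
# The container can stay running while this is done.
vzctl compact 101
```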
Yes. There are a number of cases where using ploop is a bad idea. A large part
of them has already been mentioned on this list.
For example, I can use ploop only for small CTs (less than 5-10 GB), since
ploop disadvantages like image size (even when compacted) are not critical in
that case.
21.07.2015 12:21, Sergey
Hi,
I'm using one LVM logical volumes per container with simfs on one setup. Is it
possible to do something like that with ploop, too?
Regards
Volker
On 21.07.2015 at 23:11, Kir Kolyshkin k...@openvz.org wrote:
On 07/21/2015 08:51 AM, Michael Stauber wrote:
Hi Scott,
Ummm,
Den 2015-07-22 15:23, Michael Stauber skrev:
Hi Johan,
I guess containers could be converted, but can the bind mounts be done
without simfs in a simple way as well?
Regardless if you use simfs or ploop, a VPS or a real machine: For
things like that I find fuse-sshfs quite useful. With that
Greetings,
- Original Message -
and also ext4 over ploop over ext4 wasting disk space as overhead.
That is the case for all disk-file-as-disk-image containers and not unique to
ploop. You said if you can't use OpenVZ and ZFS together (in the future maybe)
then you'd switch to KVM...
On 22.07.2015 5:56, Scott Dowdle wrote:
I've read the recipes.
Some say you have to dedicate 1GB of RAM for every TB of storage.
Dedicating 1GB of RAM for every TB of storage
is needed only if deduplication is turned on in ZFS.
But enabling deduplication is not recommended - it uses
a lot of memory
and if ZFS means it can't be used... that is just another reason (for me)
not to use ZFS.
It can be used with ZFS if both hosts have ZFS and the containers are in zvols, imho.
But yes - it requires a vzmigrate version with support for copying zvol snapshots to
another host.
2015-07-22 11:03 GMT+03:00 Sergey
On 22/07/15 00:11, users-boun...@openvz.org on behalf of Kir Kolyshkin
users-boun...@openvz.org on behalf of k...@openvz.org wrote:
Other "why not simfs" considerations are listed at
http://openvz.org/Ploop/Why#Before_ploop
That's good, but there is an issue with using ploops. If you want to
We are using simfs because ploop does not support 16+ TB partitions. Please
do not drop simfs support.
2015-07-22 8:47 GMT+03:00 Kir Kolyshkin k...@openvz.org:
On 07/21/2015 07:56 PM, Scott Dowdle wrote:
Greetings,
- Original Message -
ZFS is really The Last Word in File Systems,
Hi Johan,
I guess containers could be converted, but can the bind mounts be done
without simfs in a simple way as well?
Regardless if you use simfs or ploop, a VPS or a real machine: For
things like that I find fuse-sshfs quite useful. With that you can mount
a remote directory via SSH to a
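A minimal fuse-sshfs example along the lines described above (host, user, and paths are assumptions):

```shell
# Mount a remote directory over SSH; works the same inside a
# ploop or simfs container, a VPS, or a real machine.
mkdir -p /mnt/remote
sshfs user@remote.example.com:/srv/data /mnt/remote

# Unmount when done.
fusermount -u /mnt/remote
```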
On 07/22/2015 01:21 AM, Pavel Gashev wrote:
On 22/07/15 00:11, users-boun...@openvz.org on behalf of Kir Kolyshkin
users-boun...@openvz.org on behalf of k...@openvz.org wrote:
Other "why not simfs" considerations are listed at
http://openvz.org/Ploop/Why#Before_ploop
That's good, but there is
Den 2015-07-21 16:48, Scott Dowdle skrev:
Greetings,
- Original Message -
If our users have to choose between no more inode issues or having
direct access to all VPS files and folders from the HN, then ploop
will probably always get the short end of the stick.
Ummm, you can still
On 07/22/2015 11:31 AM, Gena Makhomed wrote:
my point is there will always be bugs... but to point at a bug report
and give up saying that it isn't stable because of bug report x... or
that some people have had panics at some point in history... well,
that isn't very reflective of the overall
On 07/22/2015 11:31 AM, Gena Makhomed wrote:
Regarding OpenVZ checkpoint / restore and live migration... it has
worked well for me since it was originally released in 2007 (or was
it 2008?). While I've had a few kernel panics in the almost 10 years
I've been using OpenVZ (starting with the
On 22.07.2015 21:58, Scott Dowdle wrote:
ext4 over ploop over ext4 wasting disk space as overhead.
That is the case for all disk-file-as-disk-image containers and not
unique to ploop. You said if you can't use OpenVZ and ZFS together
(in the future maybe) then you'd switch to KVM... at
Greetings,
- Original Message -
In the case of ploop, free space inside the ploop image can't be used any more,
and from the hardware node's point of view this disk space is wasted,
even when it is unused and marked as free inside the ploop image.
That is not entirely true. While I haven't really
Hi Scott,
Ummm, you can still access the files inside of ploop-based container
when it isn't running... simply by mounting it. Is there an issue
with that?
Granted: It's probably more of a psychological or philosophical issue
than a technical one. Filesystem on a filesystem. That adds a
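Accessing the files of a stopped ploop container, as described above, can look like this (the CT ID is illustrative):

```shell
# Mount the ploop image of a stopped container; its filesystem
# then appears under the container's root path on the host.
vzctl mount 101
ls /vz/root/101/etc

# Unmount when finished.
vzctl umount 101
```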
Greetings,
- Original Message -
If our users have to choose between no more inode issues or having
direct access to all VPS files and folders from the HN, then ploop
will probably always get the short end of the stick.
Ummm, you can still access the files inside of ploop-based
On 07/21/2015 07:56 PM, Scott Dowdle wrote:
Greetings,
- Original Message -
ZFS is really The Last Word in File Systems,
and now you can just use it for free,
without reinventing the wheel.
OpenVZ + ZFS or Virtuozzo + ZFS == atom bomb,
killer feature with horrible devastation power.
Greetings,
- Original Message -
But maybe someone has some talking points that would help me to
win some hearts and minds there?
The biggest feature of ploop is snapshots. Of course snapshots are also great
for backups too. Oh and those performance gains are good too. Another thing
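For illustration, a hedged sketch of the snapshot workflow mentioned here (CT ID and snapshot name are assumptions):

```shell
# Take a snapshot of a running ploop container; it can serve as a
# consistent point-in-time source for backups.
vzctl snapshot 101 --name before-upgrade

# List snapshots and later delete one by its UUID.
vzctl snapshot-list 101
vzctl snapshot-delete 101 --id <uuid>
```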
On 22.07.2015 3:17, Kir Kolyshkin wrote:
simfs is needed for using OpenVZ with ZFS
Other "why not simfs" considerations are listed at
http://openvz.org/Ploop/Why#Before_ploop
there are three levels:
1. before ploop: simfs over ext4
2. with ploop: ext4 over ploop over ext4
3. after ploop: simfs
On 07/21/2015 08:51 AM, Michael Stauber wrote:
Hi Scott,
Ummm, you can still access the files inside of ploop-based container
when it isn't running... simply by mounting it. Is there an issue
with that?
Granted: It's probably more of a psychological or philosophical issue
than a technical
Greetings,
- Original Message -
ZFS is really The Last Word in File Systems,
and now you can just use it for free,
without reinventing the wheel.
OpenVZ + ZFS or Virtuozzo + ZFS == atom bomb,
killer feature with horrible devastation power.
Or - you are just forcing users to migrate
Greetings,
- Original Message -
Maybe it's worth the thought to provide vzmigrate with an extension that
allows migrating from a simfs source VPS to a ploop target VPS?
I can do that on foot (if need be) or hack some scripts together that
do the transition with or without a
On 22.07.2015 0:11, Kir Kolyshkin wrote:
The biggest problem with simfs appears to be security. We have recently
found a few bugs (not in simfs per se, but in the kernel in general,
i.e. these
are not our bugs for the most part) that can be exploited to escape
simfs and let a container access
Hi Scott,
vzctl has a convert option.
Oh? Nice. That must have slipped right by me. Thanks for pointing that out.
It seems like your argument is basically that it was already
there and people don't have to learn anything new to use it.
While that is true, it doesn't sound too compelling
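The convert option mentioned above, as a hedged sketch (the CT ID is illustrative; the container must be stopped first):

```shell
# Convert a stopped simfs container to the ploop layout in place.
vzctl stop 101
vzctl convert 101 --layout ploop
vzctl start 101
```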
Hello,
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
Sergey B.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
All Proxmox VE users (as we do not have ploop support).
On 21.07.2015 10:21, Sergey Bronnikov wrote:
Hello,
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
Solar with his rhel5-based Openwall distribution
I'm using mixed simfs/ploop.
Regards,
Volker
On 21.07.2015 at 09:21, Sergey Bronnikov serg...@openvz.org wrote:
Hello,
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
Sergey B.
Here!
$ hostcmd vz 'vzlist -Holayout | fgrep simfs' | wc -l
911
On 21.07.2015 09:21, Sergey Bronnikov wrote:
Hello,
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
Sergey B.
Hi,
We have 150 containers running on simfs.
What do you need ?
Regards,
On 21 Jul 2015 at 09:26, Sergey Bronnikov serg...@openvz.org wrote:
Hello,
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
Sergey B.
Hi Sergey,
we want to find people who still use simfs for OpenVZ containers.
Do we have such users?
All Aventurin{e} users, as we'll only add ploop sometime later this
year. Even then, simfs will remain the standard fs for new VPSs.
If our users have to choose between no more inode issues or