On 25.07.2015 4:46, Kir Kolyshkin wrote:
This tool is to be used on the ext4 filesystem inside ploop. As a result, the data will
be less sparse, and there will be more empty blocks for ploop to discard.
I encourage you to experiment with e4defrag2 and post your results here.
Usage is something like this:
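(A rough sketch of that usage, assuming e4defrag2 takes the same arguments as the stock
e4defrag from e2fsprogs and that the container root is mounted at the standard
/vz/root/$CTID path - the real options may differ:)

  # check how fragmented the filesystem inside the container is
  e4defrag2 -c /vz/root/$CTID

  # defragment it, then let ploop reclaim the freed blocks
  e4defrag2 /vz/root/$CTID
  vzctl compact $CTID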
It is not normal deduplication with FS tools.
And we were talking about deleting data and the problem TRIM has with it.
If we never remove anything at all - it is not bad.
But in real life we create it with vzctl create, not with ln.
In real life we write and DELETE data.
And we believe the numbers that we see
On 23.07.2015 5:44, Kir Kolyshkin wrote:
My experience with ploop:
DISKSPACE limited to 256 GiB, real data used inside the container
was around 40-50% of the 256 GiB limit, but the ploop image is a lot bigger:
it uses nearly 256 GiB of space on the hardware node. Overhead ~ 50-60%
I found a workaround for this: run
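(To put those numbers together: with, say, ~110 GiB - about 43% of the 256 GiB limit -
actually in use, but the image occupying close to 256 GiB on the node, roughly 146 GiB,
or about 57% of the image, is overhead, which matches the 50-60% figure above.)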
On 07/24/2015 05:41 AM, Gena Makhomed wrote:
To anyone reading this, there are a few things here worth noting.
a. Such overhead is caused by three things:
1. creating then removing data (vzctl compact takes care of that)
2. filesystem fragmentation (we have some experimental patches to ext4
On 25.07.2015 1:06, Kir Kolyshkin wrote:
I think it is not a good idea to run ploop compaction more frequently
than once per day, at night - so we need to take into account
not the minimal value of the overhead but the maximal one, after 24 hours
of the container working in normal mode - when planning disk space
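(A minimal sketch of such a nightly schedule, assuming a root cron job and the stock
vzlist/vzctl tools; the 03:00 start time is only an example:)

  # /etc/cron.d/ploop-compact - compact every running container once a night
  0 3 * * * root for ct in $(vzlist -H -o ctid); do vzctl compact $ct; done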
On 07/24/2015 05:57 PM, Gena Makhomed wrote:
On 25.07.2015 1:06, Kir Kolyshkin wrote: what am I doing wrong, and
how can I decrease the ploop overhead here?
Most probably it's because of filesystem fragmentation (my item #2
above).
We are currently working on that. For example, see this report:
Completely agree with Gena Makhomed on the points he raised about ploop.
If you run a HN on an SSD (256 GB, for example), ploop is not good to use at all:
too much space overhead.
It would be nice to have better free space management in ploop somehow.
Also, about the OpenVZ restore option:
Here is a real example
And many were added to bugzilla. And many have already been fixed by you and the other guys
from the OpenVZ team.
But the whole picture, unfortunately, has not changed fundamentally yet.
Some people are still afraid to use it.
PS And suspending a container has failed without iptables-save since 2007 )
On 07/23/2015 06:22 AM, Сергей Мамонов wrote:
And many were added to bugzilla. And many have already been fixed by you and the other
guys from the OpenVZ team.
But the whole picture, unfortunately, has not changed fundamentally
yet. Some people are still afraid to use it.
PS And suspending a container has failed without
26G used inside the CT and a 43G root.hdd after vzctl compact - is that the expected
behavior or not?
Could you report it?
About the big load on the disks - I do not see what we can do about it; it looks
like expected behavior for compact, unfortunately.
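(To illustrate the comparison being made, a sketch assuming the default /vz layout and a
placeholder CTID of 101:)

  # space actually used by the filesystem inside the container
  vzctl exec 101 df -h /

  # size of the ploop image on the hardware node
  du -sh /vz/private/101/root.hdd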
Why are you ignoring my arguments about ext4? It's a filesystem from
1. creating then removing data (vzctl compact takes care of that)
So, #1 is solved
Only partially, in fact.
1. Compact eats a lot of resources, because of the heavy use of the disk.
2. You need to compact your ploop very, very regularly.
On our nodes, when we run compact every day, with 3-5T /vz/
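(Not a fix, but one way to soften that disk load is to run compact at idle I/O priority -
a sketch assuming util-linux ionice and a placeholder CTID of 101:)

  # let the compact pass yield the disk to container workloads
  ionice -c 3 vzctl compact 101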
On 07/22/2015 11:59 PM, Сергей Мамонов wrote:
1. creating then removing data (vzctl compact takes care of that)
So, #1 is solved
Only partially, in fact.
1. Compact eats a lot of resources, because of the heavy use of the disk.
2. You need to compact your ploop very, very regularly.
On our nodes, when
On 22.07.2015 8:39, Kir Kolyshkin wrote:
1) currently even suspend/resume does not work reliably:
https://bugzilla.openvz.org/show_bug.cgi?id=2470
- I can't suspend and resume containers without bugs,
and as a result I also can't use it for live migration.
Valid point, we need to figure it out.
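(For reference, the operation in question is roughly this sequence; CTID 101 is a placeholder,
and on older vzctl the equivalent commands are chkpnt/restore:)

  vzctl suspend 101   # checkpoint the container to a dump file
  vzctl resume 101    # restore it from that dump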
Greetings,
- Original Message -
Compare two situations:
1) Live migration is not used at all
2) Live migration is used and containers are migrated between HNs
In which situation is the possibility of getting a kernel panic higher?
If you say the possibilities are equal, this means
that OpenVZ live
FWIW, I regularly did live migrations of e.g. smtp servers, mailguard nodes
etc for years while working at Toyota, and never saw any kernel panic, and
never heard of anyone else having an issue. I'm not saying it's impossible,
but it seems you must have some sort of strange corner case there.
On 22.07.2015 21:02, Scott Dowdle wrote:
Compare two situations:
1) Live migration is not used at all
2) Live migration is used and containers are migrated between HNs
In which situation is the possibility of getting a kernel panic higher?
If you say the possibilities are equal, this means
that OpenVZ live
On 07/22/2015 01:08 PM, Gena Makhomed wrote:
My experience with ploop:
DISKSPACE limited to 256 GiB, real data used inside the container
was around 40-50% of the 256 GiB limit, but the ploop image is a lot bigger:
it uses nearly 256 GiB of space on the hardware node. Overhead ~ 50-60%
I found a workaround for
On 07/22/2015 10:08 AM, Gena Makhomed wrote:
On 22.07.2015 8:39, Kir Kolyshkin wrote:
1) currently even suspend/resume does not work reliably:
https://bugzilla.openvz.org/show_bug.cgi?id=2470
- I can't suspend and resume containers without bugs,
and as a result I also can't use it for live