Hi,
here is the patch
Thanks in advance
Bye,
David Arendt
> Hi,
> On Wed, 7 Apr 2010 12:39:39 +0200, ad...@prnet.org wrote:
>> Hi,
>>
>> At the moment of submitting my cleaner patches, I didn't know if you
>> would
>> use the new behavior as default behavior or the old one therefore in
>> documenta
Hi,
On Wed, 7 Apr 2010 12:39:39 +0200, ad...@prnet.org wrote:
> Hi,
>
> At the moment of submitting my cleaner patches, I didn't know if you would
> use the new behavior as default behavior or the old one therefore in
> documentation I had described that min_clean_segments = 0 would mean
> normal
Hi,
At the moment of submitting my cleaner patches, I didn't know whether you
would use the new behavior or the old one as the default; therefore, in the
documentation I had described that min_clean_segments = 0 would mean
normal cleaner behavior. However, as the new behavior is now used as the
default behav
Hi,
On Tue, 06 Apr 2010 19:41:33 +0200, David Arendt wrote:
> Hi,
>
> here the updated patch
>
> Thanks in advance
> Bye,
> David Arendt
Quick work ;)
Looks fine to me.
I'll apply it after some tests.
Thanks.
Ryusuke Konishi
> On 04/06/10 18:06, Ryusuke Konishi wrote:
> > Hi,
> > On Mon, 0
Hi,
On Mon, 5 Apr 2010 14:35:51 +0200, Jan de Kruyf wrote:
> Hallo,
> May I break into this discussion with something perhaps partially
> connected to the thread:
> I had my usual disk full problem again on a half full partition after
> a power crash, that so far has not been explained yet.
>
> D
Hi,
here is the updated patch
Thanks in advance
Bye,
David Arendt
On 04/06/10 18:06, Ryusuke Konishi wrote:
> Hi,
> On Mon, 05 Apr 2010 15:35:34 +0200, David Arendt wrote:
>
>> Hi,
>>
>> here is the patch
>>
>> Thanks in advance
>> Bye,
>> David Arendt
>>
> Thanks.
>
> Looks functionally f
On Mon, 05 Apr 2010 20:34:50 +0900 (JST), Ryusuke Konishi wrote:
> > If you decide for the second set of nsegments_per_clean and
> > cleaning_interval_parameters, please tell me if I should implement it or
> > if you will implement it, not that we are working on the same
> > functionality at the
Hi,
On Mon, 05 Apr 2010 15:35:34 +0200, David Arendt wrote:
> Hi,
>
> here is the patch
>
> Thanks in advance
> Bye,
> David Arendt
Thanks.
Looks functionally fine. Just a matter of coding style:
The following part of the cleaner loop looks bloated. I think it's a good
opportunity to divide it f
Hi,
here is the patch
Thanks in advance
Bye,
David Arendt
On 04/05/10 13:34, Ryusuke Konishi wrote:
> Hi,
> On Mon, 05 Apr 2010 09:50:11 +0200, David Arendt wrote:
>
>> Hi,
>>
>> Actually I run with min_clean_segments at 250 and found that to be a
>> good value. However for example for a 2 g
Hallo,
May I break into this discussion with something perhaps partially
connected to the thread:
I had my usual disk-full problem again on a half-full partition after
a power crash, which has so far not been explained.
Did any of you pick up anything in the cleanerd code that might point
somew
Hi,
On Mon, 05 Apr 2010 09:50:11 +0200, David Arendt wrote:
> Hi,
>
> Actually I run with min_clean_segments at 250 and found that to be a
> good value. However for example for a 2 gbyte usb key, this value would
> not work at all, therefore I find it a good idea to set the default at
> 10% as it
Hi,
Actually I run with min_clean_segments at 250 and have found that to be a
good value. However, for a 2 GB USB key, for example, this value would
not work at all; therefore I find it a good idea to set the default at
10%, since it would be more general for any device size, as lots of people
simply try th
Hi!
On Mon, 29 Mar 2010 16:39:02 +0900 (JST), Ryusuke Konishi wrote:
> On Mon, 29 Mar 2010 06:35:27 +0200, David Arendt wrote:
> > Hi,
> >
> > here the changes
> >
> > Thanks in advance,
> > David Arendt
>
> Looks fine to me. Will apply later.
>
> Thanks for your quick work.
>
> Ryusuke Kon
On Mon, 29 Mar 2010 06:35:27 +0200, David Arendt wrote:
> Hi,
>
> here the changes
>
> Thanks in advance,
> David Arendt
Looks fine to me. Will apply later.
Thanks for your quick work.
Ryusuke Konishi
> On 03/29/10 05:59, Ryusuke Konishi wrote:
> > Hi,
> > On Sun, 28 Mar 2010 23:52:52 +0200
Hi,
here are the changes
Thanks in advance,
David Arendt
On 03/29/10 05:59, Ryusuke Konishi wrote:
> Hi,
> On Sun, 28 Mar 2010 23:52:52 +0200, David Arendt wrote:
>
>> Hi,
>>
>> thanks for applying the patches. I did all my tests on 2 gbyte loop
>> devices and now that it is officially in git, I
Hi,
On Sun, 28 Mar 2010 23:52:52 +0200, David Arendt wrote:
> Hi,
>
> thanks for applying the patches. I did all my tests on 2 gbyte loop
> devices and now that it is officially in git, I deployed it to some
> production systems with big disks. Here I have noticed, that I have
> completely forgot
Hi,
thanks for applying the patches. I did all my tests on 2 GB loop
devices, and now that it is officially in git, I have deployed it to some
production systems with big disks. Here I noticed that I had
completely forgotten the reserved segments. Technically this is not a
problem, but I think
Hi,
On Sun, 28 Mar 2010 14:17:00 +0200, David Arendt wrote:
> Hi,
>
> here the nogc patch
>
> As changelog description for this one, we could put:
>
> add mount option to disable garbage collection
>
> Thanks in advance
> Bye,
> David Arendt
Hmm, the patch looks perfect.
Will queue both in t
Hi,
here is the nogc patch
As changelog description for this one, we could put:
add mount option to disable garbage collection
Thanks in advance
Bye,
David Arendt
On 03/28/10 03:55, Ryusuke Konishi wrote:
> Hi,
>
> On Sat, 27 Mar 2010 21:00:52 +0100, David Arendt wrote:
>
>> Hi,
>>
>> here th
Hi,
On Sat, 27 Mar 2010 21:00:52 +0100, David Arendt wrote:
> Hi,
>
> here the revised version of the patch
>
> As changelog description we could put:
>
> add options for cleaning based on number of free segments
Thanks.
Ok, it looks fine to me.
> In order to pass different config files to
Hi,
here is the revised version of the patch
As changelog description we could put:
add options for cleaning based on number of free segments
In order to pass different config files to the cleaner without adding
more mount options, another solution might be adding a mount option
nocleanerd to disable
Hi,
On 03/27/10 18:48, Ryusuke Konishi wrote:
> Hi David,
> On Wed, 24 Mar 2010 06:35:00 +0100, David Arendt wrote:
>
>> Hi,
>>
>> just for completeness, here is a re-post of the complete patch using
>> cleanerd->c_running instead of local variable "sleeping".
>>
>> Bye,
>> David Arendt
>>
Hi David,
On Wed, 24 Mar 2010 06:35:00 +0100, David Arendt wrote:
> Hi,
>
> just for completeness, here is a re-post of the complete patch using
> cleanerd->c_running instead of local variable "sleeping".
>
> Bye,
> David Arendt
Sorry for my late response.
I'm planning to apply your patch.
The
Hi,
just for completeness, here is a re-post of the complete patch using
cleanerd->c_running instead of local variable "sleeping".
Bye,
David Arendt
On 03/17/10 19:11, Ryusuke Konishi wrote:
> Hi,
> On Mon, 15 Mar 2010 22:24:28 +0100, David Arendt wrote:
>
>> Hi,
>>
>> Well I didn't know that
Hi,
Well, I think these parameters never need to be changed in an emergency.
However, I can imagine that in some situations it would be useful to
have different parameters for different drives when file systems are
mounted from /etc/fstab.
There was a proposal from another person to modify the configu
Hi,
I have changed cleanerd.c to use cleanerd->c_running instead of sleeping.
Here is the changed function for review. I will post a new complete patch
after receiving your comments.
static int nilfs_cleanerd_clean_loop(struct nilfs_cleanerd *cleanerd)
{
	struct nilfs_sustat sustat;
	__u64 pr
On Tue, 16 Mar 2010 12:17:53 +0100, ad...@prnet.org wrote:
> Hi,
>
> if it is ok for you, I will create a second patch to add the following
> mount options: minfree, maxfree (or do you prefer other names ?). So
> different values can be specified for different mount points.
>
> What do you think
Hi,
On Mon, 15 Mar 2010 22:24:28 +0100, David Arendt wrote:
> Hi,
>
> Well I didn't know that a few days can pass as fast :-)
>
> I have attached the patch to this mail.
>
> Until now the patch has only been shortly tested on a loop device, so it
> might contain bugs and destroy your data.
Than
Hi,
On Mon, 15 Mar 2010 18:09:38 +0100, David Arendt wrote:
> Hi,
>
> In fact by segment check interval I mean the time to sleep before
> checking again for free space. I used 3600 in the example as this
> would be suitable for my workload, but 60 might be a safer default value.
>
> Specifying
Hi,
if it is OK for you, I will create a second patch to add the following
mount options: minfree and maxfree (or do you prefer other names?), so
that different values can be specified for different mount points.
What do you think?
Thanks,
Arendt David
> Hi,
> On Mon, 15 Mar 2010 00:03:45 +0100, Davi
Hi,
Well, I didn't know that a few days could pass so fast :-)
I have attached the patch to this mail.
Until now the patch has only been shortly tested on a loop device, so it
might contain bugs and destroy your data.
If you decide to apply it, please change the default values to the ones
you find
Hi,
In fact, by segment check interval I mean the time to sleep before
checking again for free space. I used 3600 in the example as this
would be suitable for my workload, but 60 might be a safer default value.
Specifying a percentage would also be an idea. I thought about segments
as nsegments
Hi,
On Mon, 15 Mar 2010 00:03:45 +0100, David Arendt wrote:
> Hi,
>
> I am posting this again to the correct mailing list as I cc'ed it to the
> old inactive one.
>
> Maybe I am understanding something wrong, but if I would use the count
> of reclaimed segments, how could I determine if one clean
Hi,
I am posting this again to the correct mailing list, as I had cc'ed it to
the old, inactive one.
Maybe I am misunderstanding something, but if I used the count
of reclaimed segments, how could I determine whether one cleaning pass has
finished, as I don't know in advance how many segments could b
ment
allocator, and rotate target segments, skipping free or mostly
in-use ones. In that case, nilfs_cleanerd_select_segments() should be
modified to select segments in the order of segment number.
Cheers,
Ryusuke Konishi
> Bye,
> David Arendt
>
> -original message-
> Subj
pass based on minimum free space
From: Ryusuke Konishi
Date: 14/03/2010 12:59
Hi,
On Sun, 14 Mar 2010 09:47:55 +0100, David Arendt wrote:
> Hi,
>
> In order to avoid working both at the same thing, do you think about
> implementing this yourself in near future or do you prefe
Hi,
On Sun, 14 Mar 2010 09:47:55 +0100, David Arendt wrote:
> Hi,
>
> In order to avoid working both at the same thing, do you think about
> implementing this yourself in near future or do you prefer that I try
> implementing it myself and send you a patch ?
I'd appreciate it if you try it yourse
Hi,
In order to avoid us both working on the same thing, are you thinking
about implementing this yourself in the near future, or do you prefer that
I try implementing it myself and send you a patch?
In case you prefer that I try implementing it, here is a quick description
in natural language of how I
Hi,
On Sat, 13 Mar 2010 21:49:43 +0100, David Arendt wrote:
> Hi,
>
> In order to reduce cleaner I/O, I am thinking it could be useful to
> implement a parameter where you can specify the minimum free space. If
> this parameter is set, instead of normal cleaning operation, the cleaner
> would wait
Hi,
In order to reduce cleaner I/O, I am thinking it could be useful to
implement a parameter where you can specify the minimum free space. If
this parameter is set, instead of the normal cleaning operation, the cleaner
would wait until there is less than the minimum free space available and
then run one