ack for the little
improvement.
- Akira
On 2015/02/21 1:17, Joe Thornber wrote:
> On Sat, Feb 21, 2015 at 01:06:08AM +0900, Akira Hayakawa wrote:
>> The size is configurable but typically 512KB (that's the default).
>>
>> Referring to the bio payload sounds really dangerous, but it may be possible
>> in some tricky way.
"move"
the ownership?
- Akira
On 2015/02/21 0:50, Joe Thornber wrote:
> On Sat, Feb 21, 2015 at 12:25:53AM +0900, Akira Hayakawa wrote:
>> Yes.
> How big are your log chunks? Presumably they're relatively small (eg,
> 256k). In which case you can optimise for the common case where you
> have enough bios to hand.
Jan 2015 16:09:52 -0800
Greg KH gre...@linuxfoundation.org wrote:
> On Thu, Jan 01, 2015 at 05:44:39PM +0900, Akira Hayakawa wrote:
> > This patch adds dm-writeboost to the staging tree.
> >
> > dm-writeboost is a log-structured SSD-caching driver.
> > It caches data in a log-structured way on the cache device
Signed-off-by: Akira Hayakawa ruby.w...@gmail.com
---
include/linux/rbtree.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index 57e75ae..fb31765 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -51,7 +51,7 @@ struct rb_root
didn't exhibit before, but it's truly a bug.
- Fully revised the README.
Now that we have read-caching support, the old README was completely obsolete.
- Update TODO.
Implementing read-caching is done.
- Bump up the copyright year to 2015.
- Fix up comments.
Signed-off-by: Akira Hayakawa ruby.w...@gmail.com
such as petit freeze.
- Akira
On 12/14/14 11:46 AM, Jianjian Huo wrote:
> On Sat, Dec 13, 2014 at 6:07 AM, Akira Hayakawa ruby.w...@gmail.com wrote:
>> Hi,
>>
>> Jianjian, you really get a point at the fundamental design.
>>
>>> If I understand it correctly, the whole idea indeed is very simple:
>>> the consumer/provider and circular buffer model. Use the SSD as a circular
It's like this:
before: bio -> ~map:bio->bio
after: bio -> ~should_split:bio->bool -> ~map:bio->bio
- Akira
On 12/13/14 12:09 AM, Akira Hayakawa wrote:
>> However, after looking at the current code, and using it, I think it's
>> a long, long way from being ready for production. As we've already
>> discussed
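For concreteness, the proposed pipeline could look like the following C sketch (illustrative only, not actual dm code; all names here are hypothetical):

#include <linux/bio.h>

/* Hypothetical hook table mirroring the signatures above. */
struct target_ops {
	bool (*should_split)(struct bio *bio);	/* ~should_split: bio -> bool */
	struct bio *(*map)(struct bio *bio);	/* ~map: bio -> bio */
};

static struct bio *core_map(struct target_ops *ops, struct bio *bio)
{
	/* The core performs the split; the target only decides. */
	if (ops->should_split && ops->should_split(bio))
		bio = split_at_boundary(bio);	/* hypothetical helper */
	return ops->map(bio);
}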
first searches for the
oldest log as the starting point. These are 4KB metadata reads, but they still
take some time.
The other 2 sec is thought to be spent by this.)
- Akira
On 12/13/14 11:07 PM, Akira Hayakawa wrote:
> Hi,
>
> Jianjian, you really get a point at the fundamental design.
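For illustration, the init-time scan amounts to something like this sketch (the names are illustrative, not the driver's actual code):

/* Read each segment's 4KB header and take the smallest log id as the
 * replay starting point; id 0 marks a segment never written. */
u64 oldest_id = (u64)-1;
u32 oldest_idx = 0, i;

for (i = 0; i < nr_segments; i++) {
	struct segment_header_device hdr;

	read_segment_header(&hdr, cache, i);	/* one 4KB metadata read */
	if (hdr.id && hdr.id < oldest_id) {
		oldest_id = hdr.id;
		oldest_idx = i;
	}
}
/* replay logs in id order starting from oldest_idx */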
use garbage
> collection to clean old invalid logs; this will avoid the double
> garbage collection effect other caching modules have, where essentially
> both the caching module and the internal SSD perform garbage collection
> twice.
>
> And one question: how long will the data log replay take during init,
> if the SSD is almost full of dirty data logs?
> Jianjian
on for dm.
If you read the code further, you will find how simple the mechanism is.
Not to mention how simple the code itself is.
- Akira
On 12/12/14 11:24 PM, Joe Thornber wrote:
> On Fri, Dec 12, 2014 at 09:42:15AM +0900, Akira Hayakawa wrote:
>> The SSD-caching should be log-structured.
> No argument there, and this is why I've supported you with
> dm-writeboost over the last
On 12/12/14 6:12 PM, Bart Van Assche wrote:
> This is the first time I see someone claiming that reducing the request size
> improves performance. I don't know any SSD model for which splitting requests
> improves performance.
Writeboost batches a number of writes into a single log (512KB in size).
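As a userspace model of that batching (a sketch under assumptions: the 512KB log size is from the mail above, everything else is illustrative), small writes are coalesced in memory and reach the SSD as one large sequential write:

#include <string.h>

#define SEG_SIZE (512 * 1024)	/* one log, per the default above */

struct segment {
	char buf[SEG_SIZE];
	size_t filled;
};

/* Append one small write; returns 1 when the log is full and should be
 * submitted to the SSD as a single large sequential write, -1 if the
 * caller must flush the segment first. */
static int log_append(struct segment *seg, const void *data, size_t len)
{
	if (seg->filled + len > SEG_SIZE)
		return -1;
	memcpy(seg->buf + seg->filled, data, len);
	seg->filled += len;
	return seg->filled == SEG_SIZE;
}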
On Wed, Dec 10 2014 at 6:42am -0500,
> Akira Hayakawa ruby.w...@gmail.com wrote:
>
>> This patch adds dm-writeboost to the staging tree.
>>
>> dm-writeboost is a log-structured SSD-caching driver.
>> It caches data in a log-structured way on the cache device
>> so that the performance is maximized.
>> The merit of putting
On 12/10/14 10:42 PM, Joe Thornber wrote:
> On Wed, Dec 10, 2014 at 10:31:31PM +0900, Akira Hayakawa wrote:
>> Joe,
>>
>>> So you copy the bio payload to a different block of ram and then
>>> complete the bio? Or does the rambuf refer to the bio payload
>>> directly?
>> Good question.
>> The answer is: copy.
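In 3.x-era kernel style, that copy-then-complete pattern looks roughly like the sketch below (illustrative, not the driver's actual code; it assumes the rambuf has room for the whole payload):

/* Copy a write bio's payload into the segment RAM buffer, then complete
 * the bio early; the data lives in the rambuf until the log is written. */
static void copy_bio_payload(void *rambuf, struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;
	void *dst = rambuf;

	bio_for_each_segment(bv, bio, iter) {
		void *src = kmap_atomic(bv.bv_page);
		memcpy(dst, src + bv.bv_offset, bv.bv_len);
		kunmap_atomic(src);
		dst += bv.bv_len;
	}
	bio_endio(bio, 0);	/* ack the write before it reaches the SSD */
}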
ce is one of my concerns.
I guess it can be over 1GB/sec.
- Akira
On 12/10/14 10:13 PM, Joe Thornber wrote:
> On Wed, Dec 10, 2014 at 09:59:12PM +0900, Akira Hayakawa wrote:
>> Joe,
>>
>> I appreciate your continuous work.
>>
>> Is that read or write?
>> The difference between Type 0 and 1 should only show up in the write path.
>> So is it a write test
tore
and the newest SSD (how would it be if it were a PCI-e SSD?)...
- Akira
On 12/10/14 9:33 PM, Joe Thornber wrote:
> On Wed, Dec 10, 2014 at 08:00:13PM +0900, Akira Hayakawa wrote:
>> Hi, Joe
>>
>> Thanks for the continuous evaluation.
>
> Some more details:
>
> dd, with block size 512b
> raw spindle : 143
Hi, Joe
Thanks for the continuous evaluation.
I think it's too soon to conclude that splitting is the cause.
In general, the time scales of memory operations and disk operations are very
different.
So it's not likely that bio splitting, a memory operation, is the cause. Again,
in general.
But yes, I will add
read caching support (could the lack of read
> caching be contributing to why the git_extract test is so poor?)
This will be my first work in staging.
- Akira
On 12/10/14 12:48 AM, Mike Snitzer wrote:
> On Tue, Dec 09 2014 at 10:12am -0500,
> Joe Thornber thorn...@redhat.com wrote:
>> On Mon, Dec 08, 2014 at 06:04:41AM +0900, Akira Hayakawa wrote:
>>> Mike and Alasdair,
>>> I need your ack
the "real" part of
> the kernel someday.
OK.
Mike and Alasdair,
I need your ack.
- Akira
On 12/8/14 5:08 AM, Greg KH wrote:
> On Sun, Dec 07, 2014 at 09:35:26PM +0900, Akira Hayakawa wrote:
>> --- /dev/null
>> +++ b/drivers/staging/writeboost/TODO
>> @@ -0,0 +1,47 @@
>> +TODO:
>> +
>> +- Expose bugs and fix
users
and polish the code.
Signed-off-by: Akira Hayakawa ruby.w...@gmail.com
---
 MAINTAINERS                        | 6 +
 drivers/staging/Kconfig            | 2 +
 drivers/staging/Makefile           | 1 +
 drivers/staging/writeboost/Kconfig
device-mapper-test-suite, and
the design of the dmsetup commands is similar to dm-cache (which was your order,
too).
I will continue to push my tests to device-mapper-test-suite so that
other guys can test my driver easily.
- Akira
On 11/27/14 12:28 AM, Mike Snitzer wrote:
> On Wed, Nov 26 2014 at 10:02am -0500,
> Akira Hayakawa ruby.w...@gmail.com
Hi,
I am wondering what the next step is for dm-writeboost, my log-structured
SSD-caching driver.
I want to discuss this.
I will start by introducing my activity on dm-writeboost.
It was more than a year ago that I proposed dm-writeboost for staging.
Mike Snitzer, a maintainer of
Hi Guys,
This progress report includes very important benchmarking results, which show:
i) Writes will always improve - it boosts writes (396% in the best case) even
with a really small cache (say, 64MB) because of the sophisticated writeback
optimization.
ii) Reads won't be bad - it doesn't so
Hi DM Guys,
I will share the latest progress report about Writeboost.
1. What we are working on now
Kernel code
---
First of all, Writeboost is now merged into the
thin-dev branch in Joe's tree.
URL: https://github.com/jthornber/linux-2.6
Testing
---
Testing for Writeboost is merged
Hi, Matias,
Sorry for jumping in. I am interested in this new feature, too.
> > Does it even make sense to expose the underlying devices as block
> > devices? It surely would help to send this together with a driver
> > that you plan to use it on top of.
>
> Agree, an underlying driver is missing.
Hi, DM Guys
Let me share a new progress report.
I am sorry I have been off for weeks.
Writeboost is getting better, I believe.
There has been much progress; please git pull
https://github.com/akiradeveloper/dm-writeboost.git
1. Removing version switch macros
Previously the code included 10 or more version
Dave,
# -EIO returned corrupts XFS
I turned on
lockdep, frame pointers, and xfs debug,
and also changed to 3.12.0-rc5 from rc1.
What's changed is that
the problem we discussed in previous mails
*never* reproduces.
However, if I turn off only lockdep,
it hangs up by setting blockup to 1 and then to 0
Hi, all
I am trying to run writeboost on a tiny ARM machine, a Raspberry Pi.
writeboost compiles without any error or warning, but
it doesn't even start ctr.
What I do before running `dmsetup create` is
# modprobe dm-mod
# insmod dm-writeboost.ko
After these two commands, lsmod looks insane.
I
Dave,
> XFS shuts down because you've returned EIO to a log IO. That's a
> fatal error. If you do the same to an ext4 journal write, it will do
> the equivalent of shut down (e.g. complain and turn read-only).
You mean a block device should not return -EIO anyway if
it doesn't want XFS to suddenly shut
Dave,
> Akira, can you please post the entire set of messages you are
> getting when XFS is showing problems? That way I can try to confirm
> whether it's a regression in XFS or something else.
Environment:
- The kernel version is 3.12-rc1
- The debuggee is a KVM virtual machine equipped with 8 vcpus.
Mikulas,
> I/Os shouldn't be returned with -ENOMEM. If they are, you can treat it as
> a hard error.
It seems that blkdev_issue_discard returns -ENOMEM
when bio_alloc fails, for example.
Waiting for a second until we can alloc the memory is my idea
for handling a returned -ENOMEM.
> Blocking I/O
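A sketch of that retry idea (an illustrative wrapper around the 3.x-era blkdev_issue_discard signature; the one-second wait is from the mail above):

/* Treat -ENOMEM from blkdev_issue_discard() as transient and retry
 * after a short sleep instead of failing the operation hard. */
static int issue_discard_retry(struct block_device *bdev, sector_t sector,
			       sector_t nr_sects)
{
	int r;

	while ((r = blkdev_issue_discard(bdev, sector, nr_sects,
					 GFP_NOIO, 0)) == -ENOMEM)
		msleep(1000);	/* wait a second; the allocation may succeed */
	return r;
}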
Hi, DM Guys
I suppose I have finished the tasks needed to
address the points Mikulas raised.
So, let me update the progress report.
The code is now updated on my Github repo.
Check out the develop branch to get
the latest source code.
Compilation Status
--
First, compilation status.
Mikulas,
> Next, you need to design some locking - which variables are protected by
> which locks. If you use shared variables without locks, you need to use
> memory barriers (it is harder to design code using memory barriers than
> locks).
First I will explain the locking and the shared
Mikulas,
> Waking up every 100ms in flush_proc is not good because it wastes CPU time
> and energy if the driver is idle.
Yes, 100ms is too short. I will change it to 1 sec then.
We can wait for 1 sec at termination.
> The problem is that if you fill up the whole cache device in less time
> than 1
Mike,
I am happy to see that
guys from the filesystem to the block subsystem
have been discussing how to handle barriers in each layer
almost independently.
>> Merging the barriers and replacing them with a single FLUSH
>> by accepting a lot of writes
>> is the reason for deferring barriers in writeboost.
Mikulas,
Let me ask you about this comment
on choosing the best API.
For the rest, I will reply later.
> BTW. You should use wait_event_interruptible_lock_irq instead of
> wait_event_interruptible, and wait_event_interruptible_lock_irq_timeout
> instead of wait_event_interruptible_timeout. The
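For context, the _lock_irq variants take the spinlock as an extra argument and drop it while sleeping; a minimal usage sketch (the struct fields here are illustrative):

/* Wait until a spinlock-protected condition holds. The macro releases
 * cache->lock while sleeping and re-acquires it before rechecking, which
 * closes the race the plain wait_event_interruptible leaves open. */
spin_lock_irq(&cache->lock);
r = wait_event_interruptible_lock_irq(cache->wait_queue,
				      cache->pending_flushes == 0,
				      cache->lock);
spin_unlock_irq(&cache->lock);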
Dave,
> i.e. there's no point justifying a behaviour with "we could do this
> in future so lets ignore the impact on current users"...
Sure, I am happy if we find a solution that
is good for both of us, or for filesystem and block in other words.
> e.g. what happens if a user has a mixed workload - one
Christoph,
> You can detect O_DIRECT writes by second-guessing a special combination
> of REQ_ flags only used there, as cfq tries to treat it specially:
>
> #define WRITE_SYNC (WRITE | REQ_SYNC | REQ_NOIDLE)
> #define WRITE_ODIRECT (WRITE | REQ_SYNC)
>
> the lack of REQ_NOIDLE
Mikulas,
Thank you for your review.
I will reply to the polling issue first.
For the rest, I will reply later.
> Polling for state
> -
>
> Some of the kernel threads that you spawn poll for data at one-second
> intervals - see migrate_proc, modulator_proc, recorder_proc, sync_proc.
Mikulas,
> The change seems ok. Please also move this piece of code in flush_proc
> out of the spinlock:
> if (kthread_should_stop())
> 	return 0;
>
> It caused the workqueue warning I reported before, and it still causes
> warnings with kthreads:
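The requested shape of the loop, as a sketch (illustrative structure and names, not the driver's actual code):

/* The stop check lives outside the spinlock, so the thread can exit
 * without returning while the lock is held. */
static int flush_proc(void *data)
{
	struct wb_cache *cache = data;

	while (true) {
		if (kthread_should_stop())	/* checked lock-free */
			return 0;

		spin_lock_irq(&cache->flush_queue_lock);
		/* ... dequeue one flush job ... */
		spin_unlock_irq(&cache->flush_queue_lock);

		/* ... write the log to the SSD outside the lock ... */
	}
}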
Dave,
> That's where arbitrary delays in the storage stack below XFS cause
> problems - if the first FUA log write is delayed, the next log
> buffer will get filled, issued and delayed, and when we run out of
> log buffers (there are 8 maximum) the entire log subsystem will
> stall, stopping
Mikulas,
> nvidia binary driver, but it may happen in other parts of the kernel too.
> The fact that it works in your setup doesn't mean that it is correct.
You are right. I am convinced.
As far as I have looked around the kernel code,
it seems that kthreads are chosen when one needs looping in
be working.
Is it documented that
a looping job should not be put into
any type of workqueue?
You are only mentioning that
putting a looping work item into system_wq
is the wrong way, since the
nvidia driver flushes the workqueue.
Akira
On 10/4/13 10:38 PM, Mikulas Patocka wrote:
> On Fri, 4 Oct 2013, Akira Hayakawa wrote:
>> Hi, Mikulas,
>>
>> I am sorry to say that
>> I don't
Hi, all
Let me introduce my future plan
of applying persistent memory to dm-writeboost.
dm-writeboost can potentially
gain many benefits from persistent memory.
(1) Problem
The basic mechanism of dm-writeboost is:
(i) first, store the write data to a RAM buffer
whose size is 1MB at maximum
Hi, Mikulas,
I am sorry to say that
I don't have such machines to reproduce the problem.
But I agree that I am dealing with the workqueue subsystem
in a little bit weird way.
I should clean it up.
For example,
the free_cache() routine below is
a destructor of the cache metadata,
including all
Hi, Mikulas,
Thank you for reporting.
I am really happy to see this report.
First, I will respond to the performance problem.
I will make time later to investigate the rest and answer.
Some deadlock issues are difficult to solve in a short time.
> I tested dm-writeboost with disk as backing device
Mike suggested.
Maybe superblock_recorder should be in the -metadata.c file,
but I chose to put it in this file for unity.
Thanks,
Akira
followed by the current .h files.
-- dm-writeboost-daemon.h --
/*
 * Copyright (C) 2012-2013 Akira Hayakawa ruby.w...@gmail.com
 *
 * This file is released under the GPL
the key name in the status is
> meaningless.
I understand.
I forgot the possibility of adding another daemon that is tunable.
However, I don't see a reason not to
add a "read-miss" key to the #read-miss in the dm-cache status, for example.
Only tunable parameters are in K
design rule?
Akira
On 9/26/13 2:37 AM, Mike Snitzer wrote:
> On Tue, Sep 24 2013 at 8:20am -0400,
> Akira Hayakawa ruby.w...@gmail.com wrote:
>> Hi, Mike
>>
>> I am now working on the redesign and implementation
>> of dm-writeboost.
>> This is a progress report.
>> Please run
>> git clone https://github.com
Hi, Mike
I am now working on the redesign and implementation
of dm-writeboost.
This is a progress report.
Please run
git clone https://github.com/akiradeveloper/dm-writeboost.git
to see the full set of the code.
* 1. Current Status
writeboost in the new design passed my test.
Documentation is
dm-cache will be
> beneficial.
It sounds really good to me.
Huge benefit.
Akira
On 9/18/13 5:59 AM, Mike Snitzer wrote:
> On Tue, Sep 17 2013 at 8:43am -0400,
> Akira Hayakawa ruby.w...@gmail.com wrote:
>> Hi, Mike
>>
>> There are two designs in my mind
>> regarding formatting the cache.
Hi, Mike
There are two designs in my mind
regarding formatting the cache.
You said
> administer the writeboost devices. There is no need for this. Just
> have a normal DM target whose .ctr takes care of validation and
> determines whether a device needs formatting, etc.
which makes me wonder
Mike,
First, thank you for your comments.
I was looking forward to them.
I suppose you are sensing some "smell" in my design.
You are worried that dm-writeboost will not only confuse users
but also fall into the worst situation of giving up backward compatibility
after merging into the tree.