Hi Martin,

On Wednesday, 8 February 2012, Martin wrote:
> My understanding is that for x86 architecture systems, btrfs only
> allows a sector size of 4kB for a HDD/SSD. That is fine for the
> present HDDs assuming the partitions are aligned to a 4kB boundary for
> that device.
> 
> However for SSDs...
> 
> I'm using for example a 60GByte SSD that has:
> 
>     8kB page size;
>     16kB logical to physical mapping chunk size;
>     2MB erase block size;
>     64MB cache.
> 
> And the sector size reported to Linux 3.0 is the default 512 bytes!
> 
> 
> My first thought is to try formatting with a sector size of 16kB to
> align with the SSD logical mapping chunk size. This is to avoid SSD
> write amplification. Also, the data transfer performance for that
> device is near maximum for writes with a blocksize of 16kB and above.
> Yet, btrfs supports a 4kByte page/sector size only at present...
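
(For reference, the knob for that experiment would be the sector size 
option of mkfs.btrfs, something like "mkfs.btrfs -s 16384 /dev/sdX" if I 
remember the option correctly, with /dev/sdX standing in for your device. 
But as you say, the kernel currently only handles a sector size equal to 
the 4 kB page size, so that stays theoretical for now.)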

The thing is, as far as I know the better SSDs, and even the dumber ones, 
have quite a bit of intelligence in their firmware. And at least for me 
it's not clear what the firmware of my Intel SSD 320 does on its own and 
whether any of my optimization attempts even matter.

So I am not sure whether reasoning about a single write operation of, say, 
4 KB or 2 KB in isolation even makes sense. I bet that often several 
processes write data at once, so there is a larger amount of data to write 
in one go.

What is not clear to me is whether the SSD will combine several write 
requests into a single mapping chunk or erase block, or place them into 
the already erased space of an erase block. I would bet that at least the 
better SSDs do this. So even when, from the OS point of view, in a 
simplistic example, one 1 MB write goes to LBA 40000 and another 1 MB 
write goes to LBA 80000, the SSD might still use just a single erase block 
and place the writes next to each other. As far as I understand, SSDs do 
copy on write (COW) to spread writes evenly across erase blocks. As I 
further understand, from a seek time point of view the exact location of a 
write request does not matter at all. So to me it looks perfectly sane for 
an SSD firmware to combine writes as it sees fit. And SSDs that carry 
capacitors, like the Intel SSD mentioned above, may even cache writes for 
a while to wait for further requests.
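
To make that hand-waving a bit more concrete, here is a toy sketch in 
Python of a log-structured mapping layer that packs the two 1 MB writes 
from the example above into the same erase block. Everything in it (names, 
structure) is made up purely for illustration; real firmware with garbage 
collection, wear levelling and caching is far more complex:

SECTOR = 512                    # what the drive reports to Linux
ERASE_BLOCK = 2 * 1024 * 1024   # 2 MB erase block, as on the SSD above

class ToyFTL:
    def __init__(self):
        self.l2p = {}           # logical sector -> (erase block #, offset)
        self.current_block = 0  # erase block currently being filled
        self.fill = 0           # bytes already used in that block

    def write(self, lba, length):
        """Append 'length' bytes for logical address 'lba' to the log."""
        written = 0
        while written < length:
            if self.fill == ERASE_BLOCK:    # block full: open a fresh one
                self.current_block += 1
                self.fill = 0
            chunk = min(length - written, ERASE_BLOCK - self.fill)
            # Remember where each logical sector ended up physically.
            for off in range(0, chunk, SECTOR):
                sector = lba + (written + off) // SECTOR
                self.l2p[sector] = (self.current_block, self.fill + off)
            self.fill += chunk
            written += chunk

ftl = ToyFTL()
ftl.write(40000, 1024 * 1024)   # 1 MB at LBA 40000
ftl.write(80000, 1024 * 1024)   # 1 MB at LBA 80000

# Both writes land in erase block 0, back to back:
print(ftl.l2p[40000])           # (0, 0)
print(ftl.l2p[80000])           # (0, 1048576)

The point is only that the logical-to-physical map hides placement 
completely, so the LBAs the OS chooses say nothing about where the data 
physically ends up.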

The Wikipedia article on write amplification gives me a glimpse of the 
complexity involved [1]. Yes, I set the stripe width on my Ext4 filesystem 
as well, but frankly I am not even sure whether this has any positive 
effect, except maybe sparing the SSD controller firmware some reshuffling 
work.
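
Just as a naive back-of-the-envelope illustration of why alignment is 
discussed at all (ignoring all the caching and combining speculated about 
above): write amplification is the ratio of data the flash actually has to 
write to the data the host asked to write. If changing a single 16 kB 
mapping chunk forced the controller to rewrite the surrounding 2 MB erase 
block, that would be 2048 kB / 16 kB = 128x in the worst case. Whether a 
real controller ever behaves that badly, I have no idea.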

So from my current point of view, most of what you wrote IMHO matters more 
for really dumb flash: the kind of device that, as I understood it, some 
kernel developers would really like to see, so that most of the logic 
could be put into the kernel and be easily modifiable. JBOF, just a bunch 
of flash cells with an interface to access them directly. But for now, 
AFAIK, most consumer grade SSDs just provide a SATA interface and hide the 
internals. So an optimization for one kind or one brand of SSD may not be 
suitable for another.

There are PCI Express models, but these probably aren't dumb either. And 
then there is the idea of auto commit memory (ACM) by Fusion-IO, which 
just makes a part of the virtual address space persistent.

So it's a question of where to put the intelligence. For current SSDs it 
seems the intelligence really sits near the storage medium, and then IMHO 
it makes sense to even reduce the intelligence on the Linux side.

[1] http://en.wikipedia.org/wiki/Write_amplification

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7