Re: Recommended settings for SSD

2013-05-26 Thread Leonidas Spyropoulos
On Sat, May 25, 2013 at 11:33 PM, Harald Glatt m...@hachre.de wrote:
 Data that already exists will only be compressed on re-write. You can
 do it with btrfs fi defrag and a script that traverses the fs to call
 defrag on every file. Another good way is the find command that has
 been outlined here:

 https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#Defragmenting_a_directory_doesn.27t_work


I tried the 'find' command on my home partition and it worked without
errors, though I'm not sure it actually compressed anything (how can I
check?). I also tried it on the root partition, and every file that was
in use (obviously) couldn't be defragmented. I'm guessing I have to
mount the partition from a LiveCD, but since LiveCD kernels are usually
old (in btrfs terms), do you reckon there will be any problems?

Thanks
--
Caution: breathing may be hazardous to your health.

#include <stdio.h>
int main(){printf("%s","\x4c\x65\x6f\x6e\x69\x64\x61\x73");}
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Recommended settings for SSD

2013-05-26 Thread Harald Glatt
I don't know a better way to check than doing df -h before and
after... If you use space_cache, though, you have to mount with
clear_cache first to make sure the numbers are current each time
before looking at df.

Old kernel can be a problem. You can use the Arch CDs to do it, they
usually come with the newest kernels.
https://www.archlinux.org/download/

If you need to install anything, here's a quick guide to package management:

# Update Repos and Upgrade system:
pacman -Suy

# Install a specific package:
pacman -S packagename

# Search for a package
pacman -Ss search term


On Sun, May 26, 2013 at 12:00 PM, Leonidas Spyropoulos
artafi...@gmail.com wrote:
 On Sat, May 25, 2013 at 11:33 PM, Harald Glatt m...@hachre.de wrote:
 Data that already exists will only be compressed on re-write. You can
 do it with btrfs fi defrag and a script that traverses the fs to call
 defrag on every file. Another good way is the find command that has
 been outlined here:

 https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#Defragmenting_a_directory_doesn.27t_work


 I tried the 'find' command on my home partition and it worked without
 errors, though I'm not sure it actually compressed anything (how can I
 check?). I also tried it on the root partition, and every file that was
 in use (obviously) couldn't be defragmented. I'm guessing I have to
 mount the partition from a LiveCD, but since LiveCD kernels are usually
 old (in btrfs terms), do you reckon there will be any problems?

 Thanks
 --
 Caution: breathing may be hazardous to your health.

 #include <stdio.h>
 int main(){printf("%s","\x4c\x65\x6f\x6e\x69\x64\x61\x73");}


Re: Recommended settings for SSD

2013-05-26 Thread cwillu
On Sun, May 26, 2013 at 9:16 AM, Harald Glatt m...@hachre.de wrote:
 I don't know a better way to check than doing df -h before and
 after... If you use space_cache you have to clear_cache though to make
 the numbers be current for sure each time before looking at df.

Not sure what you're thinking of; space_cache is just a mount-time
optimization, storing and loading a memory structure to disk so that
it doesn't have to be regenerated.

As I understand it, if it's ever wrong, it's a serious bug.


Re: Recommended settings for SSD

2013-05-25 Thread Leonidas Spyropoulos
On Sat, May 25, 2013 at 4:58 AM, Duncan 1i5t5.dun...@cox.net wrote:

 Without going back to check the wiki, IIRC it was there that the /sys
 paths it checks for that detection are listed.  Those paths are then
 based on what the drive itself claims.  If it claims to be rotating
 storage...
I remember reading something like that myself; maybe my SSD (Crucial
S4) is old enough that it reports rotating storage, I don't know.

 It may also depend on the kernel version, etc, as I'm not sure when that
 auto-detection was added (tho for all I know it has been there awhile).

 I do know my new SSDs (Corsair Neutrons, 256GB) are detected here, and
 the ssd mount option is thus not needed.  However, I'm running current
 v3.10-rcX-git kernels, tho I'm a few days behind ATM as I'm still working
 on switching over to the SSDs and am having to do some reconfiguring
 to get there.

 Btrfs still being marked for testing only and under heavy development, if
 people aren't at least running current Linus stable or better and don't
 have a specific bug as a reason not to, they're actually behind and are
 likely missing potentially critical patches.  That means most people
 trying to run btrfs on stock distro kernels will be behind...

I agree; it could be related. My kernel is the stock 3.8.0-22-generic
(getting the sources now to build the latest).

 Meanwhile, what about the discard option?  As I'm still setting up on the
 SSDs as well as btrfs here, I haven't had a chance to decide whether I
 want that, or would rather setup fstrim as a cron job, or what.  But
 that's the other big question for SSD.

I decided not to add the discard option and instead run fstrim from a
daily cron script, as I think there's a performance hit with discard.
It mainly depends on your hardware, I think.
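For what it's worth, the daily script can be as small as this (a sketch only; the mountpoint list is an assumption for my setup, and fstrim needs root):

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/fstrim: trim each btrfs mountpoint once a
# day instead of mounting with -o discard; adjust the list to your fstab.
for mnt in / /home; do
    fstrim -v "$mnt" || true   # keep going if one mount can't be trimmed
done
```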

 Here, I'm actually partitioning for near 100% over-provisioning (120-ish
 GiB of partitions on the 238GiB/256GB drives), so I suspect actually
 running with discard as a mount option won't be such a big deal and will
 likely only cut write performance as I head toward stable-state, since
 the drive should have plenty of trimmed space to work with in any case
 due to the over-provisioning.  But I suspect it could be of benefit to
 those much closer to 0% over-provisioning than to my near 100%.

 --
 Duncan - List replies preferred.   No HTML msgs.
 Every nonfree program has a lord, a master --
 and if you use the program, he is your master.  Richard Stallman


Since I am going to build the kernel and it's a good test on the SSD
does anyone recommend some ways to speed up the build related to SSD?

Thanks

--
Caution: breathing may be hazardous to your health.

#include <stdio.h>
int main(){printf("%s","\x4c\x65\x6f\x6e\x69\x64\x61\x73");}


Re: Recommended settings for SSD

2013-05-25 Thread Russell Coker
On Sat, 25 May 2013, Leonidas Spyropoulos artafi...@gmail.com wrote:
 I decided not to add the discard option and run the daily script from
 cron (fstrim) as I think there's a performance hit with the discard.
 It mainly depends on your hardware I think.

I experienced a massive performance hit from discard and turned it off at the 
recommendation of members of this list.  Since then there has not been a 
reason for me to enable it again.

-- 
My Main Blog http://etbe.coker.com.au/
My Documents Blog http://doc.coker.com.au/


Re: Recommended settings for SSD

2013-05-25 Thread Martin Steigerwald
Am Samstag, 25. Mai 2013, 03:58:12 schrieb Duncan:
 Leonidas Spyropoulos posted on Fri, 24 May 2013 23:38:17 +0100 as
 
 excerpted:
  On 24 May 2013 21:07, cwillu cwi...@cwillu.com wrote:
  No need to specify ssd, it's automatically detected.
  
  I'm not so sure it was detected. When I manually set it I saw
  significant improvement.
 
 Without going back to check the wiki, IIRC it was there that the /sys
 paths it checks for that detection are listed.  Those paths are then
 based on what the drive itself claims.  If it claims to be rotating
 storage...

This is:

martin@merkaba:~ cat /sys/block/sda/queue/rotational 
0

 
 It may also depend on the kernel version, etc, as I'm not sure when that
 auto-detection was added (tho for all I know it has been there awhile).
 
 I do know my new SSDs (Corsair Neutrons, 256GB) are detected here, and
 the ssd mount option is thus not needed.  However, I'm running current
 v3.10-rcX-git kernels, tho I'm a few days behind ATM as I'm still working
 on switching over to the SSDs and am having to do some reconfiguring
 to get there.

And can be verified by:

martin@merkaba:~ grep ssd /proc/mounts
/dev/mapper/merkaba-debian / btrfs rw,noatime,compress=lzo,ssd,space_cache 0 0
/dev/mapper/merkaba-debian /mnt/debian-zeit btrfs 
rw,noatime,compress=lzo,ssd,space_cache 0 0
/dev/mapper/merkaba-home /home btrfs rw,noatime,compress=lzo,ssd,space_cache 0 
0
/dev/mapper/merkaba-home /mnt/home-zeit btrfs 
rw,noatime,compress=lzo,ssd,space_cache 0 0
martin@merkaba:~ grep ssd /etc/fstab
martin@merkaba:~#1

 Meanwhile, what about the discard option?  As I'm still setting up on the
 SSDs as well as btrfs here, I haven't had a chance to decide whether I
 want that, or would rather setup fstrim as a cron job, or what.  But
 that's the other big question for SSD.

I just use fstrim once in a while.

The Intel SSD 320 still claims it is new here:

merkaba:~ smartctl -a /dev/sda | grep -i wear
226 Workld_Media_Wear_Indic 0x0032   100   100   000    Old_age   Always   -   2203907
233 Media_Wearout_Indicator 0x0032   100   100   000    Old_age   Always   -   0

We had a discussion on debian-user-german, where one user has an Intel SSD
with the media wearout indicator down to 98, I think.

The SSD has been in use for about 2 years. I left about 25 GiB free of the
300 GB it has.

merkaba:~ smartctl -a /dev/sda | grep Host
225 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always   -   261260
227 Workld_Host_Reads_Perc  0x0032   100   100   000    Old_age   Always   -   49
241 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always   -   261260
242 Host_Reads_32MiB        0x0032   100   100   000    Old_age   Always   -   559520

So that's 261260 * 32 MiB = 8360320 MiB = 8164.375 GiB = about 8 TiB of
writes in total.
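The unit conversion can be double-checked with plain shell arithmetic:

```shell
# Convert the raw Host_Writes_32MiB counter (261260 units of 32 MiB each,
# as reported by smartctl above) into MiB and GiB.
writes=261260
mib=$(( writes * 32 ))               # 8360320 MiB
gib_x1000=$(( mib * 1000 / 1024 ))   # 8164375, i.e. 8164.375 GiB
echo "${mib} MiB = $(( gib_x1000 / 1000 )).$(( gib_x1000 % 1000 )) GiB"
```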

Intel claims a useful life of 5 years with 20 GB of host writes per day. For
2 years that's 365*20 = 7300 GB. So it seems that I am exceeding this a bit.

Strange, last time I looked it was way under the specified limit. KDE Nepomuk /
Akonadi stuff? The switch of /home to BTRFS? I don't know. What I do know is
that Akonadi / KDEPIM went wild once and wrote 450 GB in a row until I stopped
it manually.

Anyway, it seems this SSD is still good to go. The erase fail count has not
gotten higher:

merkaba:~ smartctl -a /dev/sda | grep Erase
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always   -   169

It went up from zero to 169 at some point but has stayed there since.

Also, according to some other Intel PDF, Intel recommends replacing the SSD
when the Media Wearout Indicator reaches 1. This SSD is far from that at the
moment. But I don't know how quickly that indicator can drop.

 Here, I'm actually partitioning for near 100% over-provisioning (120-ish
 GiB of partitions on the 238GiB/256GB drives), so I suspect actually
 running with discard as a mount option won't be such a big deal and will
 likely only cut write performance as I head toward stable-state, since
 the drive should have plenty of trimmed space to work with in any case
 due to the over-provisioning.  But I suspect it could be of benefit to
 those much closer to 0% over-provisioning than to my near 100%.

100% overprovisioning is a lot. There is a PDF from Intel where 20% was
beneficial and 40% even more so, but I think much more really isn't needed.
But in case you don't need the space for something else, hey, why not?

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: Recommended settings for SSD

2013-05-25 Thread Duncan
Martin Steigerwald posted on Sat, 25 May 2013 14:13:07 +0200 as excerpted:

 Am Samstag, 25. Mai 2013, 03:58:12 schrieb Duncan:
 Leonidas Spyropoulos posted on Fri, 24 May 2013 23:38:17 +0100 as
 
 excerpted:
  On 24 May 2013 21:07, cwillu cwi...@cwillu.com wrote:
  No need to specify ssd, it's automatically detected.
  
  I'm not so sure it was detected. When I manually set it I saw
  significant improvement.
 
 Without going back to check the wiki, IIRC it was there that the /sys
 paths it checks for that detection are listed.

 cat /sys/block/sda/queue/rotational
 0

Thanks.  That looks like what I read/checked, indeed.

 I do know my new SSDs (Corsair Neutrons, 256GB) are detected here, and
 the ssd mount option is thus not needed.  

 And can be verified by: grep ssd /proc/mounts

Yes, that's effectively what I did.

 Meanwhile, what about the discard option?  As I'm still setting up on
 the SSDs as well as btrfs here, I haven't had a chance to decide
 whether I want that, or would rather setup fstrim as a cron job, or
 what.  But that's the other big question for SSD.
 
 I just use fstrim once in a while.

=:^)

 We had a discussion on debian-user-german, where one user has an Intel
 SSD with media wear out indicator down to 98 I think.
 
 The SSD is in use for about 2 years. I left about 25 GiB free of the 300
 GB it has.

 Intel claims a useful life of 5 years with 20 GB of host writes per day.
 For 2 years thats 365*20 = 7300 GB. So it seems that I am exceeding this
 a bit.
 
 Strange, last time I looked it was way under the specified limit. KDE
 Nepomuk / Akonadi stuff? The switch of /home to BTRFS? I don't know. What
 I do know is that Akonadi / KDEPIM went wild once and wrote 450 GB in a
 row until I stopped it manually.

I run kde as my desktop and in fact am active on the kde lists as well, 
but to keep this from going /too/ far OT, let's just say that I run claws-
mail for mail now, tho I do my mailing lists via gmane using pan, so 
that's what should be in my headers.  Being on gentoo, I set
USE=-semantic-desktop and rid myself of that millstone around my neck 
around kde 4.7.  Ironic that given that the semantic-desktop millstone 
was a major kde4 feature bullet-point, it was only after I dropped it 
entirely -- at build time not just runtime -- that I FINALLY found the 
kde4 experience surpassed that of kde3, for me.

I'll leave it there, but it's safe to say there's no love here for akonadi 
and the rest of the semantic-desktop junk.

 Anyway, it seems this SSD is still good to go. Erase fail count has not
 gotten higher:
 
 merkaba:~ smartctl -a /dev/sda | grep Erase
 172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always   -   169
 
 It went up from zero to 169 at some time but stayed there since then.
 
 Also, according to some other Intel PDF, Intel recommends replacing the
 SSD when the Media Wearout Indicator reaches 1. This SSD is far from that
 at the moment. But I don't know how quickly that indicator can drop.

That's interesting to know.  I don't have a media wearout indicator 
listed in smart for my devices, but I do have program-fail and erase-fail 
counts, and there are still several attributes listed as unknown that 
will likely be filled in by smartctl updates over time.

 Here, I'm actually partitioning for near 100% over-provisioning
 (120-ish GiB of partitions on the 238GiB/256GB drives), so I suspect
 actually running with discard as a mount option won't be such a big
 deal and will likely only cut write performance as I head toward
 stable-state, since the drive should have plenty of trimmed space to
 work with in any case due to the over-provisioning.  But I suspect it
 could be of benefit to those much closer to 0% over-provisioning than
 to my near 100%.
 
 100% overprovisioning is a lot. There is a PDF from Intel where 20% was
 beneficial and 40% even more so, but I think much more really isn´t
 need. But in case you don´t need the space for something, hey, why not?

Yes.  I was actually planning on 128 GB SSDs or so, as I had counted only 
64 gig or so of partitions I really wanted on SSD and figured I'd grow 
them a bit, but the 128-ish gig units didn't seem to have a particularly 
good price point and 240-256 gig seemed to be the lower-end price-point 
knee, so that's what I bought.  That let me put a few more partitions on 
them -- everything but the media partition, basically.  I just couldn't 
see spending a bit under $1/gig for it, even if there were 500-ish gig 
units available for $400-ish.

And since I'm leaving the spinning rust in (on my main machine anyway, 
not the netbook when I get to it, but it's currently a 120-gig spinning 
rust drive anyway, so it'll end up similarly over-provisioned), the 
backups can go to that, meaning I don't need them on the SSDs either...

Which left me with near 100% over-provisioning!

But whatever.  I suppose I'll throw additional partitions on or expand 
what I have, over time.

-- 
Duncan - List replies 

Re: Recommended settings for SSD

2013-05-25 Thread Leonidas Spyropoulos
On Sat, May 25, 2013 at 1:13 PM, Martin Steigerwald mar...@lichtvoll.de wrote:
 Am Samstag, 25. Mai 2013, 03:58:12 schrieb Duncan:
 [...]
 And can be verified by:

 martin@merkaba:~ grep ssd /proc/mounts
 /dev/mapper/merkaba-debian / btrfs rw,noatime,compress=lzo,ssd,space_cache 0 0
 /dev/mapper/merkaba-debian /mnt/debian-zeit btrfs
 rw,noatime,compress=lzo,ssd,space_cache 0 0
 /dev/mapper/merkaba-home /home btrfs rw,noatime,compress=lzo,ssd,space_cache 0
 0
 /dev/mapper/merkaba-home /mnt/home-zeit btrfs
 rw,noatime,compress=lzo,ssd,space_cache 0 0
 martin@merkaba:~ grep ssd /etc/fstab
 martin@merkaba:~#1
 [...]

I see you are using compression. I don't have compression enabled at the
moment and I would like to use it. What will happen to the data that is
already on the partitions? Will it be compressed when I use it? Do I have
to re-write it? Would it be compressed by the btrfs defrag command?

Thanks for the information

--
Caution: breathing may be hazardous to your health.

#include <stdio.h>
int main(){printf("%s","\x4c\x65\x6f\x6e\x69\x64\x61\x73");}


Re: Recommended settings for SSD

2013-05-25 Thread Harald Glatt
Data that already exists will only be compressed on re-write. You can
do it with btrfs fi defrag and a script that traverses the fs to call
defrag on every file. Another good way is the find command that has
been outlined here:

https://btrfs.wiki.kernel.org/index.php/Problem_FAQ#Defragmenting_a_directory_doesn.27t_work
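A minimal dry-run sketch of that find-based approach (the -czlib flag and the dry-run wrapping are my additions; point "$target" at a real btrfs mountpoint and drop the 'echo' to actually rewrite, and thus compress, every regular file):

```shell
# Dry run against a scratch directory so nothing is actually rewritten;
# 'echo' just prints the command that would run for each regular file.
target=$(mktemp -d)
touch "$target/example.txt"
find "$target" -xdev -type f -exec echo btrfs filesystem defragment -czlib -- {} +
```

With 'echo' removed this is essentially the command from the wiki page above, plus a compression flag to force recompression during the rewrite.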

On Sun, May 26, 2013 at 12:29 AM, Leonidas Spyropoulos
artafi...@gmail.com wrote:
 On Sat, May 25, 2013 at 1:13 PM, Martin Steigerwald mar...@lichtvoll.de 
 wrote:
 Am Samstag, 25. Mai 2013, 03:58:12 schrieb Duncan:
 [...]
 And can be verified by:

 martin@merkaba:~ grep ssd /proc/mounts
 /dev/mapper/merkaba-debian / btrfs rw,noatime,compress=lzo,ssd,space_cache 0  0
 /dev/mapper/merkaba-debian /mnt/debian-zeit btrfs
 rw,noatime,compress=lzo,ssd,space_cache 0 0
 /dev/mapper/merkaba-home /home btrfs rw,noatime,compress=lzo,ssd,space_cache  0
 0
 /dev/mapper/merkaba-home /mnt/home-zeit btrfs
 rw,noatime,compress=lzo,ssd,space_cache 0 0
 martin@merkaba:~ grep ssd /etc/fstab
 martin@merkaba:~#1
 [...]

 I see you are using compression. I don't have compression at the
 moment and I would like to use it. What will happen to the data that
 are already on the partitions? Will it be compressed when I use them?
 Do I have to re-write them? Would it be compressed with btrfs defrag
 command?

 Thanks for the information

 --
 Caution: breathing may be hazardous to your health.

 #include <stdio.h>
 int main(){printf("%s","\x4c\x65\x6f\x6e\x69\x64\x61\x73");}


Re: Recommended settings for SSD

2013-05-25 Thread Martin Steigerwald
Am Samstag, 25. Mai 2013, 14:13:07 schrieb Martin Steigerwald:
 The SSD is in use for about 2 years. I left about 25 GiB free of the 300 GB
 it has.
 
 merkaba:~ smartctl -a /dev/sda | grep Host
 225 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always   -   261260
 227 Workld_Host_Reads_Perc  0x0032   100   100   000    Old_age   Always   -   49
 241 Host_Writes_32MiB       0x0032   100   100   000    Old_age   Always   -   261260
 242 Host_Reads_32MiB        0x0032   100   100   000    Old_age   Always   -   559520
 
 So that's 261260 * 32 MiB = 8360320 MiB = 8164.375 GiB = about 8 TiB of
 writes in total.

 Intel claims a useful life of 5 years with 20 GB of host writes per day.
 For 2 years that's 365*20 = 7300 GB. So it seems that I am exceeding this
 a bit.

 Strange, last time I looked it was way under the specified limit. KDE
 Nepomuk / Akonadi stuff? The switch of /home to BTRFS? I don't know. What
 I do know is that Akonadi / KDEPIM went wild once and wrote 450 GB in a
 row until I stopped it manually.

Well, all is well: I just calculated for one year, but the SSD is two years
old.

That makes 365*2*20 = 14600 GB. About 8 TiB is way below that :)
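A quick sanity check of the corrected figure (treating GB and GiB loosely, as the comparison above does):

```shell
# Two years at Intel's rated 20 GB of host writes per day, versus the
# roughly 8164 GiB actually written so far (from the earlier mail).
budget=$(( 365 * 2 * 20 ))   # 14600 GB
actual=8164
echo "budget: ${budget} GB, written: ~${actual} GiB"
```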

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: Recommended settings for SSD

2013-05-25 Thread Martin Steigerwald
Am Samstag, 25. Mai 2013, 23:29:41 schrieb Leonidas Spyropoulos:
 On Sat, May 25, 2013 at 1:13 PM, Martin Steigerwald mar...@lichtvoll.de 
 wrote:
  Am Samstag, 25. Mai 2013, 03:58:12 schrieb Duncan:
  [...]
  And can be verified by:
 
  martin@merkaba:~ grep ssd /proc/mounts
  /dev/mapper/merkaba-debian / btrfs rw,noatime,compress=lzo,ssd,space_cache 
  0 0
  /dev/mapper/merkaba-debian /mnt/debian-zeit btrfs
  rw,noatime,compress=lzo,ssd,space_cache 0 0
  /dev/mapper/merkaba-home /home btrfs 
  rw,noatime,compress=lzo,ssd,space_cache 0
  0
  /dev/mapper/merkaba-home /mnt/home-zeit btrfs
  rw,noatime,compress=lzo,ssd,space_cache 0 0
  martin@merkaba:~ grep ssd /etc/fstab
  martin@merkaba:~#1
  [...]
 
 I see you are using compression. I don't have compression at the
 moment and I would like to use it. What will happen to the data that
 are already on the partitions? Will it be compressed when I use them?
 Do I have to re-write them? Would it be compressed with btrfs defrag
 command?
 
 Thanks for the information

Only new or defragmented data, as Harald already explained.

Beware: I wouldn't use compression on SSDs that compress data themselves,
like any modern SandForce-based SSD, I bet.

The Intel SSD 320 in use here doesn't compress itself, it just encrypts.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


Re: Recommended settings for SSD

2013-05-24 Thread cwillu
 At the moment I am using:
 defaults,noatime,nodiratime,ssd,subvol=@home

No need to specify ssd, it's automatically detected.


Re: Recommended settings for SSD

2013-05-24 Thread Leonidas Spyropoulos
On 24 May 2013 21:07, cwillu cwi...@cwillu.com wrote:

 No need to specify ssd, it's automatically detected.
I'm not so sure it was detected. When I manually set it I saw a
significant improvement.


Re: Recommended settings for SSD

2013-05-24 Thread Duncan
Leonidas Spyropoulos posted on Fri, 24 May 2013 23:38:17 +0100 as
excerpted:

 On 24 May 2013 21:07, cwillu cwi...@cwillu.com wrote:

 No need to specify ssd, it's automatically detected.
 I'm not so sure it was detected. When I manually set it I saw a
 significant improvement.

Without going back to check the wiki, IIRC it was there that the /sys 
paths it checks for that detection are listed.  Those paths are then 
based on what the drive itself claims.  If it claims to be rotating 
storage...

It may also depend on the kernel version, etc, as I'm not sure when that 
auto-detection was added (tho for all I know it has been there awhile).

I do know my new SSDs (Corsair Neutrons, 256GB) are detected here, and 
the ssd mount option is thus not needed.  However, I'm running current 
v3.10-rcX-git kernels, tho I'm a few days behind ATM as I'm still working 
on switching over to the SSDs and am having to do some reconfiguring 
to get there.

Btrfs still being marked for testing only and under heavy development, if 
people aren't at least running current Linus stable or better and don't 
have a specific bug as a reason not to, they're actually behind and are 
likely missing potentially critical patches.  That means most people 
trying to run btrfs on stock distro kernels will be behind...


Meanwhile, what about the discard option?  As I'm still setting up on the 
SSDs as well as btrfs here, I haven't had a chance to decide whether I 
want that, or would rather setup fstrim as a cron job, or what.  But 
that's the other big question for SSD.

Here, I'm actually partitioning for near 100% over-provisioning (120-ish 
GiB of partitions on the 238GiB/256GB drives), so I suspect actually 
running with discard as a mount option won't be such a big deal and will 
likely only cut write performance as I head toward stable-state, since 
the drive should have plenty of trimmed space to work with in any case 
due to the over-provisioning.  But I suspect it could be of benefit to 
those much closer to 0% over-provisioning than to my near 100%.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman
