Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-16 Thread Frank Steinmetzger
On Tuesday, 16 February 2010, Alex Schuster wrote:

 No need for either, just look up the drive on Samsung's homepage [*]. It's
 512 bytes/sector, you should be fine.

Gee, thanks. Though that still leaves me baffled about my results, at least I
can now start looking for other reasons for them. :) Consider the thread
closed (again ;-)).

-- 
Gruß | Greetings | Qapla'
Beamy, Scot me up!




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-15 Thread Frank Steinmetzger

On Monday, 15 February 2010, Willie Wong wrote:
 On Mon, Feb 15, 2010 at 01:48:01AM +0100, Frank Steinmetzger wrote:
  Sorry if I reheat a topic that some already consider closed. I used the
  weekend to experiment on that stuff and need to report my results.
  Because they startle me a little.
  [...]
 Instead of guessing using this rather imprecise metric, why not just
 look up the serial number of your drive and see what the physical
 sector size is?

Well, at differences of 50%, precision is of no relevance anymore.
Also, I already did look it up, and it didn’t turn up any conclusive results -
just search hits from the fdisk output of people partitioning the drive.
So the only thing left I can think of is to call Samsung’s expensive hotline.
Hm... oh well, perhaps I could write an e-mail, because I’m too much of a
niggard and phonophobe to make a call. ^^

 If you don't want to open your box, you can usually 
 get the information from dmesg.

I put the drive in myself after I bought it at
http://www.alternate.de/html/product/Festplatten_2,5_Zoll_SATA/Samsung/HM500JI_500_GB/342736/?showTecData=true
But they don’t show much information either. :-/
I don’t suppose it’s written on the disk’s label? I don’t want to loosen those
screws too often, because the threads tend to wear out quickly.
-- 
Gruß | Greetings | Qapla'
I guess irony can be pretty ironic sometimes.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-15 Thread Alex Schuster
Frank Steinmetzger writes:

 On Monday, 15 February 2010, Willie Wong wrote:

  Instead of guessing using this rather imprecise metric, why not just
  look up the serial number of your drive and see what the physical
  sector size is?
 
 Well, at differences of 50%, precision is of no relevance anymore.
 Also, I already did look it up and it didn’t turn up any conclusive
 results. Just search hits from fdisk output of people who are
 partitioning the drive. So the only thing I can think of yet is to
 call Samsung’s expensive hotline. Hm... oh well, perhaps I could write
 an e-mail, because I’m too niggard and phonophobe to make a call. ^^

No need for either, just look up the drive on Samsung's homepage [*]. It's 
512 bytes/sector, you should be fine.

Wonko

[*] http://www.samsung.com/global/business/hdd/productmodel.do?group=72&type=94&subtype=99&model_cd=446



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-14 Thread Frank Steinmetzger
On Sunday, 7 February 2010, Mark Knecht wrote:

 Hi Willie,
 OK - it turns out that if I start fdisk using the -u option, it shows me
 sector numbers. Looking at the original partition, created using default
 values, its starting sector was 63 - probably about the worst value it
 could be. As a test I blew away that partition and created a new one
 starting at 64 instead, and the untar results are vastly improved - down
 to roughly 20 seconds from 8-10 minutes. That's roughly twice as fast as
 the old 120GB SATA2 drive I was using to test the system out while I
 debugged this issue.

Sorry if I'm reheating a topic that some already consider closed. I used the
weekend to experiment on this stuff and need to report my results, because
they startle me a little.

I first tried different start sectors around sector 63: 63, 64, 66, 68, etc.
They showed nearly the same speeds, so I almost concluded that my drive,
albeit new and of high capacity, is not affected by this yet.

But then I tested my main media partition, which starts in the middle of the
disk. I downloaded a portage snapshot and put it into a ramdisk, so that
reading it would not skew the measurements. I also copied a 1 GB file into
that ramdisk to test sequential writes.

As a start sector I chose 288816640, which is divisible by 64. The startling
result: this gave the lowest performance. If the partition started at one of
the sectors just after it, performance was always better. I repeated the test
several times to confirm it. How do you explain this? :-?

The following table shows the ‘real’ value from the output of the time 
command. SS means the aforementioned start sector with SS % 64 == 0.

action         SS (1st)   SS (2nd)   SS+2       SS+4       SS+6       SS+8
--------------+----------+----------+----------+----------+----------+---------
untar portage  3m12.517   2m55.916   1m46.663   1m35.341   1m47.829   1m43.677
rm portage     4m11.109   3m54.950   3m18.820   3m11.378   3m21.804   3m12.433
cp 1GB file    0m21.383   0m13.558   0m14.920   0m12.813   0m13.407   0m13.681
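
The measurements were of roughly this shape (a sketch only - the mount points
and file names here are placeholders, not the exact commands used):

mount -t tmpfs none /mnt/ram
cp portage-latest.tar.bz2 bigfile /mnt/ram/
cd /media/test
time tar xjf /mnt/ram/portage-latest.tar.bz2    # untar portage
time rm -rf portage                             # rm portage
time sh -c 'cp /mnt/ram/bigfile .; sync'        # cp 1GB file, flushed to disk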

-- 
Gruß | Greetings | Qapla'
How are things in the collective? - Perfect.
(Captain Jainway to the Borg queen)




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-14 Thread Willie Wong
On Mon, Feb 15, 2010 at 01:48:01AM +0100, Frank Steinmetzger wrote:
 Sorry if I reheat a topic that some already consider closed. I used the 
 weekend to experiment on that stuff and need to report my results. Because 
 they startle me a little.
 
 I first tried different start sectors around sector 63: 63, 64, 66, 68 etc. 
 They showed nearly the same results in speed. So I almost thought that my 
 drive, albeit being new and of high capacity, is not affected by this yet.
 
 But then I tested my main media partition, which starts in the middle of the 
 disk. I downloaded a portage snapshot and put it into a ramdisk, so reading 
 it would not manipulate measurements. I also copied a 1GB file into that 
 ramdisk to test consecutive writes.
 
 As a start sector I chose 288816640, which is divisible by 64. The startling 
 result: this gave the lowest performance. If the partition starts in one of 
 the sectors behind it, performance was always better. I repeated the test 
 several times to confirm it. How do you explain this? :-?
 
 The following table shows the ‘real’ value from the output of the time 
 command. SS means the aforementioned start sector with SS % 64 == 0.
 
 action         SS (1st)   SS (2nd)   SS+2       SS+4       SS+6       SS+8
 --------------+----------+----------+----------+----------+----------+---------
 untar portage  3m12.517   2m55.916   1m46.663   1m35.341   1m47.829   1m43.677
 rm portage     4m11.109   3m54.950   3m18.820   3m11.378   3m21.804   3m12.433
 cp 1GB file    0m21.383   0m13.558   0m14.920   0m12.813   0m13.407   0m13.681

Instead of guessing using this rather imprecise metric, why not just
look up the serial number of your drive and see what the physical
sector size is? If you don't want to open your box, you can usually
get the information from dmesg. 

Only caveat: don't trust the harddrive to report accurate geometry.
This whole issue is due to the harddrives lying about their physical
geometry to be compatible with older versions of Windows. So the
physical sector size listed in dmesg may not be the real one. Which is
why you are advised to look up the model number on the vendor's
website yourself to determine the physical sector size. 
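
For what it's worth, newer kernels also expose what the drive claims through
sysfs, and newer hdparm releases print it with -I when the drive reports it
at all - both subject to the very same caveat:

cat /sys/block/sda/queue/hw_sector_size
hdparm -I /dev/sda | grep -i 'sector size'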

W
-- 
Willie W. Wong ww...@math.princeton.edu
Data aequatione quotcunque fluentes quantitates involvente fluxiones invenire
 et vice versa   ~~~  I. Newton



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-14 Thread Mark Knecht
2010/2/14 Willie Wong ww...@math.princeton.edu:
 On Mon, Feb 15, 2010 at 01:48:01AM +0100, Frank Steinmetzger wrote:
SNIP

 action         SS (1st)   SS (2nd)   SS+2       SS+4       SS+6       SS+8
 --------------+----------+----------+----------+----------+----------+---------
 untar portage  3m12.517   2m55.916   1m46.663   1m35.341   1m47.829   1m43.677
 rm portage     4m11.109   3m54.950   3m18.820   3m11.378   3m21.804   3m12.433
 cp 1GB file    0m21.383   0m13.558   0m14.920   0m12.813   0m13.407   0m13.681





 Instead of guessing using this rather imprecise metric, why not just
 look up the serial number of your drive and see what the physical
 sector size is? If you don't want to open your box, you can usually
 get the information from dmesg.


hdparm with a capital i (-I) works very nicely:

gandalf ~ # hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
Model Number:   WDC WD10EARS-00Y5B1
Serial Number:  WD-WCAV55464493
Firmware Revision:  80.00A80
Transport:  Serial, SATA 1.0a, SATA II Extensions,
SATA Rev 2.5, SATA Rev 2.6
Standards:
Supported: 8 7 6 5
Likely used: 8
SNIP


 Only caveat: don't trust the harddrive to report accurate geometry.
 This whole issue is due to the harddrives lying about their physical
 geometry to be compatible with older versions of Windows. So the
 physical sector size listed in dmesg may not be the real one. Which is
 why you are advised to look up the model number on the vendor's
 website yourself to determine the physical sector size.

 W
 --
 Willie W. Wong                                     ww...@math.princeton.edu

Very true...

Since this thread started, and you helped (me at least!) understand what
I was dealing with, I got in contact with Mark Lord - the developer and
maintainer of the hdparm program. I was interested in seeing if we
could get hdparm to recognize this aspect of the drive. He was very
interested and asked me to send along additional info, which he then
analyzed. His conclusion: at least at this time, even drives that we
__know__ use 4K sectors do not implement the reporting mechanism for it
that the newer SATA specs support. With that he decided that even for
his own new 4K drives he can do nothing except either assume they are
4K and partition appropriately, or look up the specs specifically, as
you suggest.

Currently I'm partial to the idea that all my partition start
addresses will end in '000'. It's easy to remember, and at most that
wastes (I think) about 500K bytes between partitions, so it's not much
in terms of overall disk space - just a couple of megabytes on a drive
with 4 partitions.
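
For what it's worth, the arithmetic checks out: 1000 = 8 * 125, so any start
sector ending in '000' is divisible by 8 and lands on a 4K boundary
(8 * 512 bytes). The worst-case gap to the next such address is 999 sectors,
i.e. just under 500 KB per partition boundary.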

= Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-12 Thread Mick
On Tuesday 09 February 2010 16:31:15 Mark Knecht wrote:
 On Mon, Feb 8, 2010 at 4:37 PM, Mark Knecht markkne...@gmail.com wrote:
 SNIP
 
  There's a few small downsides I've run into with all of this so far:
 
  1) Since we don't use sector 63 it seems that fdisk will still tell
  you that you can use 63 until you use up all your primary partitions.
  It used to be easier to put additional partitions on when it gave you
  the next sector you could use after the one you just added.. Now I'm
  finding that I need to write things down and figure it out more
  carefully outside of fdisk.
 
 Replying mostly to myself, WRT the value 63 continuing to show up
 after making the first partition start at 64, in  my case since for
 desktop machines the first partition is general /boot, and as it's
 written and read so seldom, in the future when faced with this problem
 I will likely start /boot at 63 and just ensure that all the other
 partitions - /, /var, /home, etc., start on boundaries divisible by 8.
 
 It will make using fdisk slightly more pleasant.

I noticed while working on two new laptops with gparted that resizing Windows
7 and creating new partitions left small blank partitions (marked as hidden)
between the resized and/or new partitions. If I recall correctly these were
only a few KB each, so rather small. I am not sure why gparted created these -
could it be related to partitions being automatically aligned to the 4K sector
size that is discussed here?
-- 
Regards,
Mick




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-12 Thread Mark Knecht
On Fri, Feb 12, 2010 at 1:06 AM, Mick michaelkintz...@gmail.com wrote:
 On Tuesday 09 February 2010 16:31:15 Mark Knecht wrote:
 On Mon, Feb 8, 2010 at 4:37 PM, Mark Knecht markkne...@gmail.com wrote:
 SNIP

  There's a few small downsides I've run into with all of this so far:
 
  1) Since we don't use sector 63 it seems that fdisk will still tell
  you that you can use 63 until you use up all your primary partitions.
  It used to be easier to put additional partitions on when it gave you
  the next sector you could use after the one you just added.. Now I'm
  finding that I need to write things down and figure it out more
  carefully outside of fdisk.

 Replying mostly to myself, WRT the value 63 continuing to show up
 after making the first partition start at 64, in  my case since for
 desktop machines the first partition is general /boot, and as it's
 written and read so seldom, in the future when faced with this problem
 I will likely start /boot at 63 and just ensure that all the other
 partitions - /, /var, /home, etc., start on boundaries divisible by 8.

 It will make using fdisk slightly more pleasant.

 I noticed while working on two new laptops with gparted that resizing Windows
 7 and creating new partitions showed up small blank partitions (marked as
 hidden) in between the resized, and/or the new partitions.  If I recall
 correctly these were only a few KB each so rather small as such.  I am not
 sure why gparted created these - could it be related to the drive
 automatically aligning partitions to this 4K sector size that is discussed
 here?
 --
 Regards,
 Mick


http://lkml.indiana.edu/hypermail/linux/kernel/0902.3/01024.html

Cheers,
Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Iain Buchanan
On Wed, 2010-02-10 at 06:59 +, Neil Walker wrote:
 Iain Buchanan wrote:
  I'm starting to stray OT here, but I'm considering a second-hand Adaptec
  2420SA - this is real hardware raid right?

 
 It's a PCI-X card (not PCI-E). Are you sure that's right for your system?

yes, I have an old server tower with everything but the disks (or RAID
controller), so it needs PCI-X.

thanks,
-- 
Iain Buchanan iaindb at netspace dot net dot au

Three minutes' thought would suffice to find this out; but thought is
irksome and three minutes is a long time.
-- A. E. Housman




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Volker Armin Hemmann
On Wednesday 10 February 2010, Iain Buchanan wrote:
 On Wed, 2010-02-10 at 07:31 +0100, Volker Armin Hemmann wrote:
  On Wednesday 10 February 2010, Iain Buchanan wrote:
   so long as you didn't have any non-detectable disk errors before
   removing the disk, or any drive failure while one of the drives were
   removed.  And the deterioration in performance while each disk was
   removed in turn might take more time than its worth.  Of course RAID 1
   wouldn't suffer from this (with 2 disks)...
  
  Raid 6. Two disks can go down.
 
 not that I know enough about RAID to comment on this page, but you might
 find it interesting:
 http://www.baarf.com/
 specifically:
 http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

and that is very wrong:

 but if
the drive is going these will not last very long and will run out and SCSI
does NOT report correctable errors back to the OS!  Therefore you will not
know the drive is becoming unstable until it is too late and there are no
more replacement sectors and the drive begins to return garbage.  [Note
that the recently popular IDE/ATA drives do not (TMK) include bad sector
remapping in their hardware so garbage is returned that much sooner.]

So if the author is wrong on that, what about the rest of his text?

And why do you think Raid6 was created?

With Raid6 one disk can fail and another return garbage and it is still able 
to recover. 

Another reason to use raid6 is the error rate. One bit per 10^16 sounds good - 
until you are fiddling with terabyte disks.
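
To put a number on that: a 2 TB disk holds about 1.6 * 10^13 bits, so even at
one unrecoverable read error per 10^16 bits, a single end-to-end pass has
roughly a 1-in-600 chance of hitting one - and a degraded-array rebuild must
read every surviving disk end-to-end. Consumer drives are commonly specified
an order of magnitude or two worse (10^14 to 10^15).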


 Conclusion?  For safety and performance favor RAID10 first, RAID3 second,
 RAID4 third, and RAID5 last!

and that is just mega stupid. You can google. Or just go straight to 
wikipedia, if you don't know why.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Volker Armin Hemmann
On Wednesday 10 February 2010, Iain Buchanan wrote:
 On Wed, 2010-02-10 at 07:31 +0100, Volker Armin Hemmann wrote:
  On Wednesday 10 February 2010, Iain Buchanan wrote:
   so long as you didn't have any non-detectable disk errors before
   removing the disk, or any drive failure while one of the drives were
   removed.  And the deterioration in performance while each disk was
   removed in turn might take more time than its worth.  Of course RAID 1
   wouldn't suffer from this (with 2 disks)...
  
  Raid 6. Two disks can go down.
 
 not that I know enough about RAID to comment on this page, but you might
 find it interesting:
 http://www.baarf.com/
 specifically:
 http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

To give you an example of why raid 1 is not a good choice (and raid 10 too):

You have two disks configured as a mirror. They report different blocks.
Which one is the correct one?

And suddenly your system has to guess and you are very out of luck.

Another point: the author of that text stresses that RAID5 requires extra
writes. Newsflash: with Raid1 every single block has to be written twice. So
if additional writes count against Raid5, Raid1 is instantly disqualified.


You shouldn't listen to people with an agenda.

This is almost as bad as the site that claimed that SATA is much worse than 
PATA in every single aspect ...



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread J. Roeleveld
On Wednesday 10 February 2010 00:22:31 Iain Buchanan wrote:
 On Tue, 2010-02-09 at 08:47 +0100, J. Roeleveld wrote:
  I now only need to figure out the best way to configure LVM over this to
  get the best performance from it. Does anyone know of a decent way of
  figuring this out?
  I got 6 disks in Raid-5.
 
 why LVM?  Planning on changing partition size later?  LVM is good for
 (but not limited to) non-raid setups where you want one partition over a
 number of disks.
 
 If you have RAID 5 however, don't you just get one large disk out of it?
 In which case you could just create x partitions.  You can always use
 parted to resize / move them later.
 
 IMHO recovery from tiny boot disks is easier without LVM too.
 

I've been using LVM for quite a while now and prefer it over any existing
partitioning method - especially as this array is for file sharing, I prefer
to keep different shares on different partitions, and the size requirements
are not known at the beginning.

Also, the machine this is in uses Xen virtualisation to consolidate different
servers onto a single machine (to save power, and because most servers only
need a lot of resources occasionally), and I already have over 80 LVs just
for the virtual machines themselves (multiple each, as I don't like a single
large partition for any machine).

As for recovery, I always use sysrescuecd (http://www.sysresccd.org) and 
this has Raid and LVM support in it. (Same with the Gentoo-livecds)

--
Joost



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread J. Roeleveld
On Wednesday 10 February 2010 08:08:44 Alan McKinnon wrote:
 On Wednesday 10 February 2010 01:22:31 Iain Buchanan wrote:
  On Tue, 2010-02-09 at 08:47 +0100, J. Roeleveld wrote:
   I now only need to figure out the best way to configure LVM over this
   to get the best performance from it. Does anyone know of a decent way
   of figuring this out?
   I got 6 disks in Raid-5.
 
  why LVM?  Planning on changing partition size later?  LVM is good for
  (but not limited to) non-raid setups where you want one partition over a
  number of disks.
 
  If you have RAID 5 however, don't you just get one large disk out of it?
  In which case you could just create x partitions.  You can always use
  parted to resize / move them later.
 
  IMHO recovery from tiny boot disks is easier without LVM too.
 
 General observation (not saying that Iain is wrong):
 
 You use RAID to get redundancy, data integrity and performance.
 
 You use lvm to get flexibility, ease of maintenance and the ability to
  create volumes larger than any single disk or array. And do it at a
  reasonable price.
 
 These two things have nothing to do with each other and must be viewed as
 such. There are places where RAID and lvm seem to overlap, where one might
 think that a feature of one can be used to replace the other. But both
  really suck in these overlaps and are not very good at them.
 
 Bottom line: don't try and use RAID or LVM to do $STUFF outside their core
 functions. They each do one thing and do it well.
 

I completely agree with this.
RAID is for redundancy (lose a disk, and the system will keep running).
LVM is for flexibility (resizing/moving partitions using parted or similar
takes time, during which the whole system is unusable).

With LVM, I can resize a partition while it is actually in use (e.g. during
write activity).
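
For example (a sketch with placeholder volume names; ext3 can be grown online
with a recent kernel and resize2fs):

lvextend -L +10G /dev/vg0/share     # grow the LV by 10 GB
resize2fs /dev/vg0/share            # grow the mounted filesystem in place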




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Volker Armin Hemmann
On Wednesday 10 February 2010, J. Roeleveld wrote:

 As for recovery, I always use sysrescuecd (http://www.sysresccd.org) and
 this has Raid and LVM support in it. (Same with the Gentoo-livecds)

sysrescuecd failed me hard two nights ago. The 64-bit kernel panicked with
stack corruption, and the 32-bit kernel took an hour to unpack 300 kB from a
20 GB tar...

It was pathetic...



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread J. Roeleveld
On Wednesday 10 February 2010 02:28:59 Stroller wrote:
 On 9 Feb 2010, at 19:37, J. Roeleveld wrote:
  ...
  Don't get me started on those ;)
  The reason I use Linux Software Raid is because:
  1) I can't afford hardware raid adapters
  2) It's generally faster than hardware fakeraid
 
 I'd rather have slow hardware RAID than fast software RAID. I'm not
 being a snob, it just suits my purposes better.

I don't consider that comment snobbish, as I actually agree.
But as I am using 6 disks in the array, a hardware RAID card to handle that 
would have pushed me above budget.
It is planned for a future upgrade (along with additional disks), but that 
will have to wait till after another few expenses.

 If speed isn't an issue then secondhand prices of SATA RAID
 controllers (PCI & PCI-X form-factor) are starting to become really
 cheap. Obviously new cards are all PCI-e - industry has long moved to
 that, and enthusiasts are following.

My mainboard has PCI, PCI-X and PCI-e (1x and 16x), which connector-type would 
be best suited?
Also, I believe a PCI-e 8x card would work in a PCI-e 16x slot, but does this 
work with all mainboards/cards? Or are some more picky about this?
 
 I would be far less invested in hardware RAID if I could find regular
 SATA controllers which boasted hot-swap. I've read reports of people
 hot-swapping SATA drives just fine on their cheap controllers but
 last time I checked there were no manufacturers who supported this as
 a feature.

The mainboard I use (ASUS M3N-WS) has working hotswap support (yes, I tested
this) using hotswap drive bays.
Take a disk out and Linux actually sees it being removed before writing to it;
when I stick it back in, it gets a new device assigned.
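
For what it's worth, if a re-inserted disk ever fails to show up by itself, a
rescan can usually be forced through sysfs (the host number here is just an
example - pick the right one from /sys/class/scsi_host):

echo "- - -" > /sys/class/scsi_host/host0/scan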

On a different machine, where I tried it, the whole machine locked up when I 
removed the disk (And SATA is supposed to be hotswappable by design...)

--
Joost



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread J. Roeleveld
On Wednesday 10 February 2010 12:03:51 Volker Armin Hemmann wrote:
 On Wednesday 10 February 2010, J. Roeleveld wrote:
  As for recovery, I always use sysrescuecd (http://www.sysresccd.org)
  and this has Raid and LVM support in it. (Same with the Gentoo-livecds)
 
 sysrescuecd failed me hard two nights ago. 64bit kernel paniced with stack
 corruptions, 32bit kernel took an hour to unpack 300kb from a 20gb tar...
 
 it was pathetic...
 

Never had a problem with it myself, but I always test rescuediscs semi-
regularly on all my machines, just to be sure. :)

I'm also paranoid when it comes to backups of my private data.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Volker Armin Hemmann
On Wednesday 10 February 2010, J. Roeleveld wrote:
 On Wednesday 10 February 2010 12:03:51 Volker Armin Hemmann wrote:
  On Wednesday 10 February 2010, J. Roeleveld wrote:
   As for recovery, I always use sysrescuecd (http://www.sysresccd.org)
   and this has Raid and LVM support in it. (Same with the Gentoo-livecds)
  
  sysrescuecd failed me hard two nights ago. 64bit kernel paniced with
  stack corruptions, 32bit kernel took an hour to unpack 300kb from a 20gb
  tar...
  
  it was pathetic...
 
 Never had a problem with it myself, but I always test rescuediscs semi-
 regularly on all my machines, just to be sure. :)
 
 I'm also paranoid when it comes to backups of my private data.

Because of my backup harddisk (I first copy everything onto a separate disk,
then later the important stuff onto tapes), I was able to boot into a fairly
up-to-date system and untar from there. And suddenly I was hitting
100 MB/sec+ write speed...



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Stroller


On 10 Feb 2010, at 11:14, J. Roeleveld wrote:
 On Wednesday 10 February 2010 02:28:59 Stroller wrote:
  On 9 Feb 2010, at 19:37, J. Roeleveld wrote:
   ...
   Don't get me started on those ;)
   The reason I use Linux Software Raid is because:
   1) I can't afford hardware raid adapters
   2) It's generally faster than hardware fakeraid
 
  I'd rather have slow hardware RAID than fast software RAID. I'm not
  being a snob, it just suits my purposes better.
 
 I don't consider that comment snobbish, as I actually agree.
 But as I am using 6 disks in the array, a hardware RAID card to handle
 that would have pushed me above budget.

See, for example, eBay item 280459693053.

LSI is also a popular brand amongst Linux enthusiasts.

3ware have been taken over by LSI and their support has deteriorated over
the last few months, but 3ware cards come with a transferable 3-year
warranty, expiry date identifiable by serial number, and you will often
find eBay cards are still in warranty.

 It is planned for a future upgrade (along with additional disks), but
 that will have to wait till after another few expenses.
 
  If speed isn't an issue then secondhand prices of SATA RAID
  controllers (PCI & PCI-X form-factor) are starting to become really
  cheap. Obviously new cards are all PCI-e - industry has long moved to
  that, and enthusiasts are following.
 
 My mainboard has PCI, PCI-X and PCI-e (1x and 16x), which connector-type
 would be best suited?

PCI-e, PCI-X, PCI in that order, I *think*.

PCI-X is very good; IIRC it may be fractionally faster than PCI-e, but I
get the impression it's going out of fashion a bit on motherboards.

PCI-e is very fast and is the most readily usable on new & future
motherboards. It is what one would choose if buying new (I'm not sure if
PCI-X cards are still available), and so it is the most expensive on the
secondhand market.

Some 3ware PCI-X cards (eg the 9500S at least) are usable in regular PCI
slots, obviously at the expense of speed. Not sure about other brands.

Avoid 3ware 7000 & 8000 series cards - they are now ancient, although you
can pick them up for £10.

 Also, I believe a PCI-e 8x card would work in a PCI-e 16x slot, but does
 this work with all mainboards/cards? Or are some more picky about this?

No idea, sorry. I would have thought so, but I don't use PCI-e here yet.

  I would be far less invested in hardware RAID if I could find regular
  SATA controllers which boasted hot-swap. I've read reports of people
  hot-swapping SATA drives just fine on their cheap controllers but
  last time I checked there were no manufacturers who supported this as
  a feature.
 
 The mainboard I use (ASUS M3N-WS) has working hotswap support (yes, I
 tested this) using hotswap drive bays.
 Take a disk out and Linux actually sees it being removed before writing
 to it; when I stick it back in, it gets a new device assigned.

This is very interesting to know.

This would be very useful here, even if just for auxiliary use - swapping
in a drive from another machine just to clone it, backup or recover data,
for instance.

If I found an Atom-based board that did hotswap on its normal SATA ports
I would probably purchase one in a flash.

 On a different machine, where I tried it, the whole machine locked up
 when I removed the disk (And SATA is supposed to be hotswappable by
 design...)

This is what I would normally expect, at least from when I last checked a
year or two ago.

AIUI SATA by design *may* be hotswappable at the *option* of the
manufacturer.
(Please correct me if I am mistaken)

Stroller.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread J. Roeleveld
On Wednesday 10 February 2010 17:37:47 Stroller wrote:
 On 10 Feb 2010, at 11:14, J. Roeleveld wrote:
  On Wednesday 10 February 2010 02:28:59 Stroller wrote:
  On 9 Feb 2010, at 19:37, J. Roeleveld wrote:
  ...
  Don't get me started on those ;)
  The reason I use Linux Software Raid is because:
  1) I can't afford hardware raid adapters
  2) It's generally faster than hardware fakeraid
 
  I'd rather have slow hardware RAID than fast software RAID. I'm not
  being a snob, it just suits my purposes better.
 
  I don't consider that comment as snobbish as I actually agree.
  But as I am using 6 disks in the array, a hardware RAID card to
  handle that
  would have pushed me above budget.
 
 See, for example, eBay item 280459693053.
 
 LSI is also a popular brand amongst Linux enthusiasts.
 
 3ware have been taken over by LSI and their support has deteriorated
 over the last few months, but 3ware cards come with a transferable 3-year
 warranty, expiry date identifiable by serial number, and you will
 often find eBay cards are still in warranty.

Yes, except that I tend to avoid eBay as much as possible for reasons that 
don't belong on this list.

  My mainboard has PCI, PCI-X and PCI-e (1x and 16x), which connector-
  type would
  be best suited?
 
 PCI-e, PCI-X, PCI in that order, I *think*.
 
 PCI-X is very good, IIRC, it may be fractionally faster than PCI-e,
 but I get the impression it's going out of fashion a bit on
 motherboards.
 
 PCI-e is very fast and is the most readily usable on new & future
 motherboards. It is what one would choose if buying new (I'm not sure
 if PCI-X cards are still available), and so it is the most expensive
 on the secondhand market.

I know at least one shop in NL that sells them (They're also online)

 Some 3ware PCI-X cards (eg the 9500S at least) are usable in regular
 PCI slots, obviously at the expense of speed. Not sure about other
 brands.
 
 Avoid 3ware 7000 & 8000 series cards - they are now ancient, although
 you can pick them up for £10.
 
  Also, I believe a PCI-e 8x card would work in a PCI-e 16x slot, but
  does this
  work with all mainboards/cards? Or are some more picky about this?
 
 No idea, sorry. I would have thought so, but I don't use PCI-e here yet.

It's what all the buzz says, but I've yet to have that confirmed. My concern
comes mainly from the differing sizes of the slots and the cards.

  I would be far less invested in hardware RAID if I could find regular
  SATA controllers which boasted hot-swap. I've read reports of people
  hot-swapping SATA drives just fine on their cheap controllers but
  last time I checked there were no manufacturers who supported this as
  a feature.
 
  The mainboard I use (ASUS M3N-WS) has a working hotswap support
  (Yes, I tested
  this) using hotswap drive bays.
  Take a disk out, Linux actually sees it being removed prior to
  writing to it
  and when I stick it back in, it gets a new device assigned.
 
 This is very interesting to know.
 
 This would be very useful here, even if just for auxiliary use -
 swapping in a drive from another machine just to clone it, backup or
 recover data, for instance.

Yes, but just for cloning, wouldn't it be just as easy to power down the 
machine, plug in the drive and then power it back up?
Or even stick it on a quick-change USB-case? :)

 If I found an Atom-based board that did hotswap on its normal SATA
 ports I would probably purchase one in a flash.
 
  On a different machine, where I tried it, the whole machine locked
  up when I
  removed the disk (And SATA is supposed to be hotswappable by
  design...)
 
 This is what I would normally expect, at least from when I last
 checked a year or two ago.

I do have to say here that the mainboard for that machine is now easily 5 
years old, so I didn't actually expect it to work.

 AIUI SATA by design *may* be hotswappable at the *option* of the
 manufacturer.
 (Please correct me if I am mistaken)

I think it depends on whether the controller actually sends the correct
signals to the OS; I'm not sure if it was Linux or the hardware locking up.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-10 Thread Stroller


On 10 Feb 2010, at 17:26, J. Roeleveld wrote:
 ...
   The mainboard I use (ASUS M3N-WS) has working hotswap support (yes, I
   tested this) using hotswap drive bays.
   Take a disk out and Linux actually sees it being removed before
   writing to it; when I stick it back in, it gets a new device assigned.
 
  This is very interesting to know.
 
  This would be very useful here, even if just for auxiliary use -
  swapping in a drive from another machine just to clone it, backup or
  recover data, for instance.
 
 Yes, but just for cloning, wouldn't it be just as easy to power down the
 machine, plug in the drive and then power it back up?
 Or even stick it on a quick-change USB-case? :)

I'd really rather not power the machine down. Likely it's in the middle of
a 24-hour DVD rip, or something.

A quick-change USB-case (or similar) is the current method, but I have 4
spare hot-swap bays on the front of this box, so slapping the drive in one
of those reduces the clutter in the server cabinet. And that does have a
tendency to get VERY cluttered, so if I can reduce that it also reduces
the potential for human errors (pulling the wrong USB cable by mistake &c).

Stroller.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread J. Roeleveld
On Monday 08 February 2010 21:34:01 Paul Hartman wrote:
 On Mon, Feb 8, 2010 at 12:52 PM, Valmor de Almeida val.gen...@gmail.com 
wrote:
  Mark Knecht wrote:
  [snip]
 
 This has been helpful for me. I'm glad Valmor is getting better
  results also.
 
  [snip]
 
  These 4k-sector drives can be problematic when upgrading older
  computers. For instance, my laptop BIOS would not boot from the toshiba
  drive I mentioned earlier. However when used as an external usb drive, I
  could boot gentoo. Since I have been using this drive as backup storage
  I did not investigate the reason for the lower speed. I am happy to get
  a factor of 8 in speed up now after you did the research :)
 
  Thanks for your postings.
 
 Thanks for the info everyone, but do you understand the agony I am now
 suffering at the fact that all disk in my system (including all parts
 of my RAID5) are starting on sector 63 and I don't have sufficient
 free space (or free time) to repartition them? :) I am really curious
 if there are any gains to be made on my own system...
 
 Next time I partition I will definitely pay attention to this, and
 feel foolish that I didn't pay attention before. Thanks.
 

I have similar disks in my new system and was lucky that I was still in the 
testing phase and hadn't filled the disks yet.
After changing the partitions to start at sector 64, the creation of the 
RAID-5 set went from around 22 hours to 9 hours.

I also get a much higher throughput (in the range of at least 4 times faster), 
so I would recommend doing the change if you can.

I now only need to figure out the best way to configure LVM over this to get 
the best performance from it. Does anyone know of a decent way of figuring 
this out?
I got 6 disks in Raid-5.
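
One approach that should help (a sketch only, assuming a reasonably recent
LVM2 - the figure must match your actual chunk size): align the PV data area
to a full stripe, so logical extents don't straddle stripe boundaries. With 6
disks in Raid-5 and 64K chunks, a full stripe is 5 * 64K = 320K:

pvcreate --dataalignment 320k /dev/md0
vgcreate vg0 /dev/md0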

Thanks,

Joost Roeleveld



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Stroller


On 9 Feb 2010, at 00:27, Neil Bothwick wrote:
 On Mon, 8 Feb 2010 14:34:01 -0600, Paul Hartman wrote:
  Thanks for the info everyone, but do you understand the agony I am now
  suffering at the fact that all disks in my system (including all parts
  of my RAID5) are starting on sector 63 and I don't have sufficient
  free space (or free time) to repartition them? :)
 
 With the RAID, you could fail one disk, repartition, re-add it, rinse
 and repeat. But that doesn't take care of the time issue.

Aren't you thinking of LVM, or something?

Stroller.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Neil Bothwick
On Tue, 9 Feb 2010 12:46:40 +, Stroller wrote:

  With the RAID, you could fail one disk, repartition, re-add it, rinse
  and repeat. But that doesn't take care of the time issue.
 
 Aren't you thinking of LVM, or something?

No. The very nature of RAID is redundancy, so you could remove one disk
from the array to modify its setup then replace it.


-- 
Neil Bothwick

One World, One Web, One Program - Microsoft Promotional Ad
Ein Volk, Ein Reich, Ein Fuhrer - Adolf Hitler




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Volker Armin Hemmann
On Tuesday 09 February 2010, Stroller wrote:
 On 9 Feb 2010, at 00:27, Neil Bothwick wrote:
  On Mon, 8 Feb 2010 14:34:01 -0600, Paul Hartman wrote:
  Thanks for the info everyone, but do you understand the agony I am
  now
  suffering at the fact that all disk in my system (including all parts
  of my RAID5) are starting on sector 63 and I don't have sufficient
  free space (or free time) to repartition them? :)
  
  With the RAID, you could fail one disk, repartition, re-add it,
  rinse and
  repeat. But that doesn't take care of the time issue.
 
 Aren't you thinking of LVM, or something?
 
 Stroller.

no



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread J. Roeleveld
On Tuesday 09 February 2010 13:46:40 Stroller wrote:
 On 9 Feb 2010, at 00:27, Neil Bothwick wrote:
  On Mon, 8 Feb 2010 14:34:01 -0600, Paul Hartman wrote:
  Thanks for the info everyone, but do you understand the agony I am
  now
  suffering at the fact that all disk in my system (including all parts
  of my RAID5) are starting on sector 63 and I don't have sufficient
  free space (or free time) to repartition them? :)
 
  With the RAID, you could fail one disk, repartition, re-add it,
  rinse and
  repeat. But that doesn't take care of the time issue.
 
 Aren't you thinking of LVM, or something?
 
 Stroller.
 

Not sure where LVM would fit into this, as then you'd need to offload the data 
from that PV (Physical Volume) to a different PV first.

With Raid (NOT striping) you can remove one disk, leaving the Raid-array in a
reduced state. Then repartition the disk you removed and re-add it to the
array.
Wait for the rebuild to complete and do the same with the next disk in the
array.
Eg: (for a 3-disk raid5):
1) remove disk-1 from raid
2) repartition disk-1
3) add disk-1 as new disk to raid
4) wait for the synchronisation to finish
5) remove disk-2 from raid
6) repartition disk-2
7) add disk-2 as new disk to raid
8) wait for the synchronisation to finish
9) remove disk-3 from raid
10) repartition disk-3
11) add disk-3 as new disk to raid
12) wait for the synchronisation to finish

(These steps can easily be adapted for any size and type of raid, apart from
striping/raid-0; a command-level sketch follows.)
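
At the command level, one iteration of that loop would look roughly like this
(a sketch only - the device and array names are placeholders):

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
fdisk -u /dev/sda        # recreate sda1, starting at sector 64
mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat         # repeat until the resync has finished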

I do, however, see a potential problem: if you repartition starting from
sector 64 instead of sector 63, the partition is one sector (512 bytes)
smaller.
The Raid-array may not accept the re-partitioned disk back into the array
because it's no longer big enough for the array.

I had this issue with an older system once where I replaced a dead 80GB (Yes, 
I did say old :) ) with a new 80GB drive. This drive was actually a few KB 
smaller in size and the RAID would refuse to accept it.

--
Joost Roeleveld



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Stroller


On 9 Feb 2010, at 13:57, J. Roeleveld wrote:
 ...
 With Raid (NOT striping) you can remove one disk, leaving the Raid-array
 in a reduced state. Then repartition the disk you removed and re-add it
 to the array.

Exactly. Except the partitions extend, in the same positions, across all
the disks.

You cannot remove one disk from the array and repartition it, because the
partition is across the array, not the disk. The single disk, removed
from a RAID 5 (specified by Paul Hartman) array does not contain any
partitions, just one stripe of them.

I apologise if I'm misunderstanding something here, or if your RAID works
differently to mine.

Stroller.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread J. Roeleveld
On Tuesday 09 February 2010 16:11:14 Stroller wrote:
 On 9 Feb 2010, at 13:57, J. Roeleveld wrote:
  ...
  With Raid (NOT striping) you can remove one disk, leaving the Raid-
  array in a
  reduced state. Then repartition the disk you removed, repartition
  and then re-
  add the disk to the array.
 
 Exactly. Except the partitions extend, in the same positions, across
 all the disks.
 
 You cannot remove one disk from the array and repartition it, because
 the partition is across the array, not the disk. The single disk,
 removed from a RAID 5 (specified by Paul Hartman) array does not
 contain any partitions, just one stripe of them.
 
 I apologise if I'm misunderstanding something here, or if your RAID
 works differently to mine.
 
 Stroller.
 

Stroller, it is my understanding that you use hardware raid adapters?
If that is the case, then the mentioned method won't work for you and if your 
raid-adapters already align everything properly, then you shouldn't notice any 
problems with these drives.
It would, however, be interesting to know how hardware raid adapters handle 
these 4KB sector-sizes.

I believe Paul Hartman is, like me, using Linux Software raid (mdadm+kernel
drivers).

In that case, you can do either of the following:
Put the whole disk into the RAID, eg:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]
Or, you create 1 or more partitions on the disk and use these, eg:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]1

To have Linux raid autodetection work, as far as I know, the partitioning
method is required.
For that, I created a single full-disk partition on my drives:
--
# fdisk -l -u /dev/sda

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xda7d8d6d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               64  2930277167  1465138552   fd  Linux raid autodetect
--

After reading this, I redid the array with the partition starting at sector
64. Paul was unfortunate enough to have already filled his disks before this
thread appeared.

The downside is that you lose one sector, but the advantage is much improved
performance (or, more precisely, you don't incur the performance penalty of
misaligned partitions).

--
Joost Roeleveld



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Neil Bothwick
On Tue, 9 Feb 2010 15:11:14 +, Stroller wrote:

 You cannot remove one disk from the array and repartition it, because  
 the partition is across the array, not the disk. The single disk,  
 removed from a RAID 5 (specified by Paul Hartman) array does not  
 contain any partitions, just one stripe of them.

A 3 disk RAID 5 array can handle one disk failing. Although information
is striped across all three disks, any two are enough to retrieve it.

If this were not the case, it would be called AID 5.


-- 
Neil Bothwick

Always remember to pillage before you burn.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Mark Knecht
On Mon, Feb 8, 2010 at 4:37 PM, Mark Knecht markkne...@gmail.com wrote:
SNIP

 There are a few small downsides I've run into with all of this so far:

 1) Since we don't use sector 63, it seems that fdisk will still tell
 you that you can use 63 until you use up all your primary partitions.
 It used to be easier to put additional partitions on when it gave you
 the next sector you could use after the one you just added. Now I'm
 finding that I need to write things down and figure it out more
 carefully outside of fdisk.


Replying mostly to myself: WRT the value 63 continuing to show up
after making the first partition start at 64 - in my case, since for
desktop machines the first partition is generally /boot, and as it's
written and read so seldom, in the future when faced with this problem
I will likely start /boot at 63 and just ensure that all the other
partitions - /, /var, /home, etc. - start on boundaries divisible by 8.

It will make using fdisk slightly more pleasant.

- Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Frank Steinmetzger
On Tuesday, 9 February 2010, Frank Steinmetzger wrote:

  4) Everything I've done so far leave me with messages about partition
  1 not ending on a cylinder boundary. Googling on that one says don't
  worry about it. I don't know...

Well since only the start of a partition determines its alignment with 
hardware sectors, I think it's really not that important. Worst case: mkfs 
truncates the last few sectors to make it a multiple of its cluster size.

 Anyway, mine's like this, just to throw it into the pot to the others
 ( those # are added by me to show their respective use )

 eisen # fdisk -l -u /dev/sda

 Disk /dev/sda: 500.1 GB, 500107862016 bytes
 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Disk identifier: 0x80178017

     Device Boot      Start        End      Blocks  Id  System
 /dev/sda1   *           63   25157789   12578863+   7  HPFS/NTFS  # Windows
 /dev/sda2         25157790   88084394   31463302+   7  HPFS/NTFS  # Games
 /dev/sda3         88084395  127941659   19928632+  83  Linux      # /
 /dev/sda4        127941660  976768064  424413202+   5  Extended
 /dev/sda5        127941723  288816569   80437423+  83  Linux      # /home
 /dev/sda6        288816633  780341309  245762338+  83  Linux      # music
 /dev/sda7        813113973  976703804   81794916   83  Linux      # X-Plane
 /dev/sda8   *    976703868  976768064      32098+  83  Linux      # /boot
 /dev/sda9        780341373  813113909   16386268+   7  HPFS/NTFS  # Win7 test

I have started amending my partitioning scheme, starting at the rear. Since my
backup drive has exactly the same scheme, I'm working on that first and will
then restore my local drive from it, so that I spend as little time as
possible in a LiveCD environment.

I have reset sdb7 to use boundaries divisible by 64.
Old range: 813113973-976703804   begin%64 = 0.828   size%64 = 0.125
New range: 813113984-976703935   begin%64 = 0       size%64 = 0

And guess what - the speed of truecrypt at creating a new container doubled.
With the old scheme it started at 13.5 MB/s; now it started at 26-odd. I'm
blaming that cap on the USB connection to the drive, though it's gradually
creeping up: after 2/3 of the partition, it's at 27.7.

So sdb7 now ends at sector 976703935. Interestingly, I couldn't use the
immediately following sector for sdb8:
start for sdb8   response by fdisk
976703936sector already allocated
976703944Value out of range. First sector... (default 976703999):

The first sector fdisk offered me was exactly 64 sectors after the end sector
of sdb7 (976703999), which would leave a gap of those mysterious 62 “empty”
sectors in between (presumably fdisk still rounds logical partitions up to its
old track-style boundaries, plus the sector reserved for the extended boot
record that precedes each logical partition). So I used 976704000, which is
divisible by 64 again, though it's not that relevant for a partition of
31 MB. :D

As soon as truecrypt is finished, I'm going to solidify my findings by
performing this on another partition, and I'll also see what happens if I
start at a start sector of k*64+1. Just out of curiosity. :-)
-- 
Gruß | Greetings | Qapla'
Crayons can take you more places than starships. (Guinan)




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Stroller


On 9 Feb 2010, at 15:43, Neil Bothwick wrote:
 On Tue, 9 Feb 2010 15:11:14 +0000, Stroller wrote:
  You cannot remove one disk from the array and repartition it, because
  the partition is across the array, not the disk. The single disk,
  removed from a RAID 5 (specified by Paul Hartman) array does not
  contain any partitions, just one stripe of them.
 
 A 3 disk RAID 5 array can handle one disk failing. Although information
 is striped across all three disks, any two are enough to retrieve it.
 
 If this were not the case, it would be called AID 5.

Of course you can REMOVE this disk.

However, in hardware RAID you cannot do anything USEFUL to the single disk.

In hardware RAID it is the controller card which manages the arrays and
consolidates them for the o/s. You attach three drives to a hardware RAID
controller, set up a RAID5 array, and then the controller exports the array
to the operating system as a block device (e.g. /dev/sda). You then run
fdisk on this virtual disk and create the partitions. You cannot connect
just a partition to a hardware RAID controller.

Thus in hardware RAID there are no partitions on each single disk, only (as
I said before) stripes of the partitions. You cannot usefully repartition a
single hard-drive from a hardware RAID set - anything you do to that single
drive will be wiped out when you re-add it to the array and the current
state of the virtual disk is propagated onto it.

I hope this explanation makes sense.

I was not aware that Linux software RAID behaved differently. See Joost's
explanation of 9 February 2010 15:27:32 GMT. I asked if you were referring
to LVM because I set that up several years ago, and it also allows you to
add partitions as PVs. I can see how it would be useful to add just a
partition to a RAID array, and it's great that you can do this in software
RAID.

So this:

 On 9 Feb 2010, at 00:27, Neil Bothwick wrote:
  With the RAID, you could fail one disk, repartition, re-add it, rinse
  and repeat. But that doesn't take care of the time issue

only applies in the specific case that Paul Hartman is using Linux software
RAID, not to RAID in general.

Stroller.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Paul Hartman
On Mon, Feb 8, 2010 at 6:27 PM, Neil Bothwick n...@digimed.co.uk wrote:
 On Mon, 8 Feb 2010 14:34:01 -0600, Paul Hartman wrote:

 Thanks for the info everyone, but do you understand the agony I am now
 suffering at the fact that all disk in my system (including all parts
 of my RAID5) are starting on sector 63 and I don't have sufficient
 free space (or free time) to repartition them? :)

 With the RAID, you could fail one disk, repartition, re-add it, rinse and
 repeat. But that doesn't take care of the time issue.

I will admit that if a drive fails I will have to google for the
instructions to proceed from there. When I first set it up, I read the
info, but since I never had to use it I've completely forgotten the
specifics. And in hindsight I should have labeled the disks so I know
more easily which one failed (when one fails). Next time, I'll do it
right. :)

 I am really curious
 if there are any gains to be made on my own system...

 Me too, so post back after you've done it ;-)

I have dmcrypt on top of the (software) RAID5, so speed is not so
much of an issue in this case, but reducing physical wear & tear on
the disks would always be a good thing. Maybe someday, if I am brave, I
will try it... but probably not until I've made a full backup, just in
case.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Stroller


On 9 Feb 2010, at 15:27, J. Roeleveld wrote:
 On Tuesday 09 February 2010 16:11:14 Stroller wrote:
  On 9 Feb 2010, at 13:57, J. Roeleveld wrote:
   ...
   With Raid (NOT striping) you can remove one disk, leaving the
   Raid-array in a reduced state. Then repartition the disk you removed
   and re-add it to the array.
 
  Exactly. Except the partitions extend, in the same positions, across
  all the disks.
 
  You cannot remove one disk from the array and repartition it, because
  the partition is across the array, not the disk. The single disk,
  removed from a RAID 5 (specified by Paul Hartman) array does not
  contain any partitions, just one stripe of them.
 
  I apologise if I'm misunderstanding something here, or if your RAID
  works differently to mine.
 
 Stroller, it is my understanding that you use hardware raid adapters?

Yes.

 If that is the case, then the mentioned method won't work for you ...
 
 I believe Paul Hartman is, like me, using Linux Software raid
 (mdadm+kernel drivers).
 
 In that case, you can do either of the following:
 Put the whole disk into the RAID, eg:
 mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]
 Or, you create 1 or more partitions on the disk and use these, eg:
 mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[abcdef]1

Thank you for identifying the source of this misunderstanding.

 and if your raid-adapters already align everything properly, then you
 shouldn't notice any problems with these drives.
 It would, however, be interesting to know how hardware raid adapters
 handle these 4KB sector-sizes.

I think my adaptor at least, being older, may very well be prone to this
problem. I discussed this in my post of 8 February 2010 19:57:46 GMT -
certainly I have a RAID array aligned beginning at sector 63, and it is at
least a little slow. I will test just as soon as I can afford 3 x 1TB
drives.

I think the RAID adaptor would have to be quite clever to avoid this
problem. It may be a feature added in newer controllers, but that would be
a special attempt to compensate. I think in the general case the RAID
controller should just consolidate 3 x physical block devices (or more)
into 1 x virtual block device, and should not do anything more complicated
than that. I am sure that a misalignment will propagate downwards through
the levels of obfuscation.

IMO this is an fdisk bug. A feature should be added so that it tries to
align optimally in most circumstances. RAID controllers should not be
trying to do anything clever to accommodate potential misalignment unless
it is really cheap to do so.

Stroller.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Neil Walker
Hey guys,

There seems to be a lot of confusion over this RAID thing.

Hardware RAID does not use partitions. The entire drive is used (or,
actually, the amount defined in setting up the array) and all I/O is
handled by the BIOS on the RAID controller. The array appears as a
single drive to the OS and can then be partitioned and formatted like
any other drive.

Software RAID can be created within existing MSDOS-style partitions -
indeed must be if the array is to be bootable.

The OP seems to be doing the latter so the comments about removing a
drive and re-formatting are perfectly valid.

In order not to confuse the matter further, I deliberately left out the
pseudo-hardware controllers on many modern motherboards. ;)


Be lucky,

Neil
http://www.neiljw.com/







Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Mark Knecht
On Tue, Feb 9, 2010 at 9:09 AM, Frank Steinmetzger war...@gmx.de wrote:
SNIP
 So sdb7 now ends at sector 976703935. Interestingly, I couldn’t use the
 immediate next sector for sdb8:
 start for sdb8   response by fdisk
 976703936        sector already allocated
 976703944        Value out of range. First sector... (default 976703999):

 The first one fdisk offered me was exactly 64 sectors behind the end sector of
 sdb7 (976703999), which would leave a space of those mysterious 62 “empty”
 sectors in between. So I used 976704000, which is divisible by 64 again,
 though it’s not that relevant for a partition of 31 MB. :D
SNIP
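
The divisibility claims above are easy to check in the shell, for what
it's worth:

echo $(( 976704000 % 64 ))   # 0  - Frank's chosen start is 64-sector aligned
echo $(( 976703999 % 64 ))   # 63 - fdisk's default offer is not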

Again, this is probably unrelated to anything going on in this thread
but I started wondering this morning if maybe fdisk could take a step
forward with these newer disk technologies and build in some smarts
about where to put partition boundaries. I.e. - if I'm using a 4K
block size disk why not have fdisk do things better?

My first thought was to look at the man page for fdisk and see who the
author was. I did not find any email addresses. However I did find
some very interesting comments about partitioning disks in the bugs
section, quoted below.

I don't think I need what the 'bugs' author perceives as the
advantages of fdisk so I think I'll try to focus a bit more on cfdisk.
Interestingly cfdisk was the tool Willie pointed out when he kindly
took the time to educate me on what was going on physically.

- Mark

[QUOTE]

BUGS
   There  are several *fdisk programs around.  Each has its
problems and strengths.  Try
   them in the order cfdisk, fdisk, sfdisk.  (Indeed, cfdisk is a
beautiful program that
   has strict requirements on the partition tables it accepts, and
produces high quality
   partition tables. Use it if you can.  fdisk is a buggy program
that does fuzzy things
   -  usually  it happens to produce reasonable results. Its
single advantage is that it
   has some support for BSD disk labels and other non-DOS
partition tables.  Avoid it if
   you can.  sfdisk is for hackers only - the user interface is
terrible, but it is more
   correct than fdisk and more powerful than both fdisk and
cfdisk.  Moreover, it can be
   used noninteractively.)

[/QUOTE]



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Mark Knecht
On Tue, Feb 9, 2010 at 9:38 AM, Stroller strol...@stellar.eclipse.co.uk wrote:
SNIP
 IMO this is a fdisk bug. A feature should be added so that it tries to
 align optimally in most circumstances. RAID controllers should not be trying
 to do anything clever to accommodate potential misalignment unless it is
 really cheap to do so.

 Stroller.

We think alike. I personally wouldn't call it a bug because drives
with 4K physical sectors are very new, but adding a feature to align
things better is dead on the right thing to do. It's silly to expect
every Linux user installing binary distros to have to learn this stuff
to get good performance.

- Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread J. Roeleveld
On Tuesday 09 February 2010 19:25:00 Mark Knecht wrote:
 On Tue, Feb 9, 2010 at 9:38 AM, Stroller strol...@stellar.eclipse.co.uk
  wrote: SNIP
 
  IMO this is a fdisk bug. A feature should be added so that it tries to
  align optimally in most circumstances. RAID controllers should not be
  trying to do anything clever to accommodate potential misalignment unless
  it is really cheap to do so.
 
  Stroller.
 
 We think alike. I personally wouldn't call it a bug because drives
 with 4K physical sectors are very new, but adding a feature to align
 things better is dead on the right thing to do. It's silly to expect
 every Linux user installing binary distros to have to learn this stuff
 to get good performance.
 
 - Mark
 

I actually agree, although I think the 'best' solution (until someone comes 
up with an even better one, that is :) ) would be for the drive to actually be 
able to inform the OS (via S.M.A.R.T.?) that it has 4KB sectors.
If fdisk-type programs and RAID cards (OK, with new firmware) then used this 
to come to sensible settings, that would work.

If these RAID-cards then also pass on the correct settings for the raid-array 
for optimal performance (stripe-size = sector-size?) using the same method, 
then everyone would end up with better performance.

Now, if anyone has any idea on how to get this idea implemented by the 
hardware vendors, then I'm quite certain the different tools can be modified 
to take this information into account?

And Mark, it's not just people installing binary distros, I think it's 
generally people who don't fully understand the way harddrives work on a 
physical level. I consider myself lucky to have worked with older computers 
where this information was actually necessary to even get the BIOS to 
recognize the harddrive.

--
Joost



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread J. Roeleveld
On Tuesday 09 February 2010 19:03:39 Neil Walker wrote:
 Hey guys,
 
 There seems to be a lot of confusion over this RAID thing.
 
 Hardware RAID does not use partitions. The entire drive is used (or,
 actually, the amount defined in setting up the array) and all I/O is
 handled by the BIOS on the RAID controller. The array appears as a
 single drive to the OS and can then be partitioned and formatted like
 any other drive.
 
 Software RAID can be created within existing MSDOS-style partitions -
 indeed must be if the array is to be bootable.
 
 The OP seems to be doing the latter so the comments about removing a
 drive and re-formatting are perfectly valid.
 
 In order not to confuse the matter further, I deliberately left out the
 pseudo-hardware controllers on many modern motherboards. ;)

Don't get me started on those ;)
The reason I use Linux Software Raid is because:
1) I can't afford hardware raid adapters
2) It's generally faster than hardware fakeraid

--
Joost



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Neil Bothwick
On Tue, 9 Feb 2010 17:17:48 +, Stroller wrote:

 only applies in the specific case that Paul Hartman is using Linux  
 software RAID, not the general case of RAID.

That's true, although in the Linux world I expect that the number of
software RAID users far outnumbers the hardware RAID users. Unlike the
pseudo-RAID that Windows usually offers, Linux software RAID is proper
RAID with performance comparable to all but the most expensive hardware
setups.

With hardware RAID, removing and re-adding a disk wouldn't work for this,
just as it wouldn't for software RAID using whole disks. However, using
whole disks with RAID5 is unlikely unless you have another disk too,
otherwise you wouldn't be able to load the kernel.


-- 
Neil Bothwick

Top Oxymorons Number 16: Peace force


signature.asc
Description: PGP signature


Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Frank Steinmetzger
Am Dienstag, 9. Februar 2010 schrieb Frank Steinmetzger:

 I have reset sdb7 to use boundaries divisible by 64.
 Old range            begin%64  size%64   New range            begin%64  size%64
 813113973-976703804  0.8281    0.125     813113984-976703935  0         0

 And guess what - the speed of truecrypt at creating a new container
 doubled. With the old scheme, it started at 13.5 MB/s, now it started at
 26-odd. I’m blaming that cap on the USB connection to the drive, though
 it’s gradually getting more: after 2/3 of the partition, it’s at 27.7.

I fear I'll have to correct that a little. This 13.5 figure seems to be 
incorrect; in another try it was also shown at the beginning, but then 
quickly got up to 20. Also, a buddy just told me that this 4k stuff applies 
only to the most recent drives, no more than 5 months old or so.

When I use parted on the drives, it says (both the old external and my 2 
months old internal):
Sector size (logical/physical): 512B/512B
So no speedup for me then. :-/
-- 
Gruß | Greetings | Qapla'
Keyboard not connected, press F1 to continue.


signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread J. Roeleveld
On Tuesday 09 February 2010 22:13:39 Frank Steinmetzger wrote:
snipped
 When I use parted on the drives, it says (both the old external and my 2
 months old internal):
 Sector size (logical/physical): 512B/512B
 So no speedup for me then. :-/
 

That doesn't mean a thing, I'm afraid.
I have the 4KB drives (product-code and behaviour match) and parted also 
claims my drives have a 512B logical/physical sector size.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Mark Knecht
On Tue, Feb 9, 2010 at 1:13 PM, Frank Steinmetzger war...@gmx.de wrote:
 Am Dienstag, 9. Februar 2010 schrieb Frank Steinmetzger:

 I have reset sdb7 to use boundaries divisible by 64.
 Old range            begin%64  size%64   New range            begin%64  size%64
 813113973-976703804  0.8281    0.125     813113984-976703935  0         0

 And guess what - the speed of truecrypt at creating a new container
 doubled. With the old scheme, it started at 13.5 MB/s, now it started at
 26-odd. I’m blaming that cap on the USB connection to the drive, though
 it’s gradually getting more: after 2/3 of the partition, it’s at 27.7.

 I fear I'll have to correct that a little. This 13.5 figure seems to be
 incorrect, in another try it was also shown at the beginning, but then
 quickly got up to 20. Also, a buddy just told me that this 4k stuff applies
 only to most recent drives, as old as 5 months or so.

 When I use parted on the drives, it says (both the old external and my 2
 months old internal):
 Sector size (logical/physical): 512B/512B
 So no speedup for me then. :-/

Frank,
   As best I can tell so far none of the Linux tools will tell you
that the sectors are 4K. I had to go to the WD web site and find the
actual drive specs to discover that was true.

   As far as I know so far there isn't a big improvement to be had
when the sector size is 512B.

- Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Iain Buchanan
On Tue, 2010-02-09 at 08:47 +0100, J. Roeleveld wrote:

 I now only need to figure out the best way to configure LVM over this to get 
 the best performance from it. Does anyone know of a decent way of figuring 
 this out?
 I got 6 disks in Raid-5.

why LVM?  Planning on changing partition size later?  LVM is good for
(but not limited to) non-raid setups where you want one partition over a
number of disks.

If you have RAID 5 however, don't you just get one large disk out of it?
In which case you could just create x partitions.  You can always use
parted to resize / move them later.

IMHO recovery from tiny boot disks is easier without LVM too.

-- 
Iain Buchanan iaindb at netspace dot net dot au

Failure is not an option -- it comes bundled with Windows. 




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Iain Buchanan
On Tue, 2010-02-09 at 13:34 +, Neil Bothwick wrote:
 On Tue, 9 Feb 2010 12:46:40 +, Stroller wrote:
 
   With the RAID, you could fail one disk, repartition, re-add it,  
   rinse and
   repeat. But that doesn't take care of the time issue.  
  
  Aren't you thinking of LVM, or something?
 
 No. The very nature of RAID is redundancy, so you could remove one disk
 from the array to modify its setup then replace it.

so long as you didn't have any non-detectable disk errors before
removing the disk, or any drive failure while one of the drives was
removed.  And the deterioration in performance while each disk was
removed in turn might take more time than it's worth.  Of course RAID 1
wouldn't suffer from this (with 2 disks)...
-- 
Iain Buchanan iaindb at netspace dot net dot au

Keep on keepin' on.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Iain Buchanan
On Tue, 2010-02-09 at 20:37 +0100, J. Roeleveld wrote:

 Don't get me started on those ;)
 The reason I use Linux Software Raid is because:
 1) I can't afford hardware raid adapters
 2) It's generally faster than hardware fakeraid

I'm starting to stray OT here, but I'm considering a second-hand Adaptec
2420SA - this is real hardware raid right?

If I'm buying drives in the 1Tb size - does this 4k issue affect
hardware RAID and how do you get around it?  (Never set up a HW RAID
card before)

thanks,
-- 
Iain Buchanan iaindb at netspace dot net dot au

You know you're using the computer too much when:
you count from zero all the time.
-- Stormy Eyes




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Peter Humphrey
On Tuesday 09 February 2010 18:03:39 Neil Walker wrote:

 Be lucky,
 
 Neil

How would I go about doing that?

-- 
Rgds
Peter.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Iain Buchanan
On Tue, 2010-02-09 at 14:54 -0800, Mark Knecht wrote:
 On Tue, Feb 9, 2010 at 1:13 PM, Frank Steinmetzger war...@gmx.de wrote:


  When I use parted on the drives, it says (both the old external and my 2
  months old internal):
  Sector size (logical/physical): 512B/512B
  So no speedup for me then. :-/

so does mine :)

 Frank,
As best I can tell so far none of the Linux tools will tell you
 that the sectors are 4K. I had to go to the WD web site and find the
 actual drive specs to discover that was true.

however if you use dmesg:
$ dmesg | grep ata
ata1: SATA max UDMA/133 irq_stat 0x00400040, connection status changed
irq 17
ata2: DUMMY
ata3: SATA max UDMA/133 abar m2...@0xf6ffb800 port 0xf6ffba00 irq 17
ioatdma: Intel(R) QuickData Technology Driver 4.00
ata3: SATA link down (SStatus 0 SControl 300)
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata1.00: ATA-7: ST9160823ASG, 3.ADD, max UDMA/133
ata1.00: 312581808 sectors, multi 8: LBA48 NCQ (depth 31/32)
...

you can look up your drive model number (in my case ST9160823ASG) and
find out the details.  (That's a Seagate Momentus 160Gb with actual 512
byte sectors).

saves having to open up your laptop / pc if you didn't order the drive
separately or you've forgotten.
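
If smartmontools happens to be installed, smartctl will print the model
number too, without grepping dmesg - a sketch, assuming the drive sits
at /dev/sda:

$ sudo smartctl -i /dev/sda   # the 'Device Model:' line is what you look up
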
-- 
Iain Buchanan iaindb at netspace dot net dot au

polygon:
Dead parrot.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Stroller


On 9 Feb 2010, at 23:52, Iain Buchanan wrote:

...
I'm starting to stray OT here, but I'm considering a second-hand  
Adaptec

2420SA - this is real hardware raid right?


Looks like it. Looks pretty nice, too.

The affordable PCI / PCI-X 3wares don't do RAID6 - you have to go PCIe  
for that, I think - and that snapshot backup feature looks cute.



If I'm buying drives in the 1Tb size - does this 4k issue affect
hardware RAID and how do you get around it?  (Never set up a HW RAID
card before)


Posted elsewhere - I think it'll be just the same.

Stroller.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Mark Knecht
On Tue, Feb 9, 2010 at 4:31 PM, Iain Buchanan iai...@netspace.net.au wrote:
 On Tue, 2010-02-09 at 14:54 -0800, Mark Knecht wrote:
 On Tue, Feb 9, 2010 at 1:13 PM, Frank Steinmetzger war...@gmx.de wrote:


  When I use parted on the drives, it says (both the old external and my 2
  months old internal):
  Sector size (logical/physical): 512B/512B
  So no speedup for me then. :-/

 so does mine :)

 Frank,
    As best I can tell so far none of the Linux tools will tell you
 that the sectors are 4K. I had to go to the WD web site and find the
 actual drive specs to discover that was true.

 however if you use dmesg:
 $ dmesg | grep ata
 ata1: SATA max UDMA/133 irq_stat 0x00400040, connection status changed
 irq 17
 ata2: DUMMY
 ata3: SATA max UDMA/133 abar m2...@0xf6ffb800 port 0xf6ffba00 irq 17
 ioatdma: Intel(R) QuickData Technology Driver 4.00
 ata3: SATA link down (SStatus 0 SControl 300)
 ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
 ata1.00: ATA-7: ST9160823ASG, 3.ADD, max UDMA/133
 ata1.00: 312581808 sectors, multi 8: LBA48 NCQ (depth 31/32)
 ...

 you can look up your drive model number (in my case ST9160823ASG) and
 find out the details.  (That's a Seagate Momentus 160Gb with actual 512
 byte sectors).

 saves having to open up your laptop / pc if you didn't order the drive
 separately or you've forgotten.
 --
 Iain Buchanan iaindb at netspace dot net dot au

 polygon:
        Dead parrot.




Consider as an alternative hdparm -I (dash capital eye). Note that this is
the 1TB drive and it still reports a 512B logical/physical sector size, so
I'd still have to go find out for sure, but there's lots of easily
readable info there to make it reasonably easy.

- Mark


gandalf ~ # hdparm -I /dev/sda

/dev/sda:

ATA device, with non-removable media
Model Number:   WDC WD10EARS-00Y5B1
Serial Number:  WD-WCAV55464493
Firmware Revision:  80.00A80
Transport:  Serial, SATA 1.0a, SATA II Extensions, SATA Rev
2.5, SATA Rev 2.6
Standards:
Supported: 8 7 6 5
Likely used: 8
Configuration:
        Logical         max     current
        cylinders       16383   16383
        heads           16      16
        sectors/track   63      63
        --
        CHS current addressable sectors:   16514064
        LBA    user addressable sectors:  268435455
        LBA48  user addressable sectors: 1953525168
        Logical/Physical Sector size:           512 bytes
        device size with M = 1024*1024:      953869 MBytes
        device size with M = 1000*1000:     1000204 MBytes (1000 GB)
cache/buffer size  = unknown
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Standard, with device specific minimum
R/W multiple sector transfer: Max = 16  Current = 16
Recommended acoustic management value: 128, current value: 128
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
 Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
Enabled Supported:
   *SMART feature set
Security Mode feature set
   *Power Management feature set
   *Write cache
   *Look-ahead
   *Host Protected Area feature set
   *WRITE_BUFFER command
   *READ_BUFFER command
   *NOP cmd
   *DOWNLOAD_MICROCODE
Power-Up In Standby feature set
   *SET_FEATURES required to spinup after power up
SET_MAX security extension
   *Automatic Acoustic Management feature set
   *48-bit Address feature set
   *Device Configuration Overlay feature set
   *Mandatory FLUSH_CACHE
   *FLUSH_CACHE_EXT
   *SMART error logging
   *SMART self-test
   *General Purpose Logging feature set
   *64-bit World wide name
   *{READ,WRITE}_DMA_EXT_GPL commands
   *Segmented DOWNLOAD_MICROCODE
   *Gen1 signaling speed (1.5Gb/s)
   *Gen2 signaling speed (3.0Gb/s)
   *Native Command Queueing (NCQ)
   *Host-initiated interface power management
   *Phy event counters
   *NCQ priority information
   *DMA Setup Auto-Activate optimization
   *Software settings preservation
   *SMART Command Transport (SCT) feature set
   *SCT Features Control (AC4)
   *SCT Data Tables (AC5)
unknown 206[12] (vendor specific)
unknown 206[13] (vendor specific)
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count

Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Stroller


On 9 Feb 2010, at 19:37, J. Roeleveld wrote:

...
Don't get me started on those ;)
The reason I use Linux Software Raid is because:
1) I can't afford hardware raid adapters
2) It's generally faster then hardware fakeraid


I'd rather have slow hardware RAID than fast software RAID. I'm not  
being a snob, it just suits my purposes better.


If speed isn't an issue then secondhand prices of SATA RAID  
controllers (PCI & PCI-X form-factor) are starting to become really  
cheap. Obviously new cards are all PCI-e - industry has long moved to  
that, and enthusiasts are following.


I would be far less invested in hardware RAID if I could find regular  
SATA controllers which boasted hot-swap. I've read reports of people  
hot-swapping SATA drives just fine on their cheap controllers but  
last time I checked there were no manufacturers who supported this as  
a feature.


Stroller. 



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Volker Armin Hemmann
On Mittwoch 10 Februar 2010, Iain Buchanan wrote:
 On Tue, 2010-02-09 at 13:34 +, Neil Bothwick wrote:
  On Tue, 9 Feb 2010 12:46:40 +, Stroller wrote:
With the RAID, you could fail one disk, repartition, re-add it,
rinse and
repeat. But that doesn't take care of the time issue.
   
   Aren't you thinking of LVM, or something?
  
  No. The very nature of RAID is redundancy, so you could remove one disk
  from the array to modify its setup then replace it.
 
 so long as you didn't have any non-detectable disk errors before
 removing the disk, or any drive failure while one of the drives was
 removed.  And the deterioration in performance while each disk was
 removed in turn might take more time than it's worth.  Of course RAID 1
 wouldn't suffer from this (with 2 disks)...

Raid 6. Two disks can go down.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Neil Walker
Peter Humphrey wrote:
 On Tuesday 09 February 2010 18:03:39 Neil Walker wrote:

   
 Be lucky,

 Neil
 

 How would I go about doing that?
   

Well, you need a rabbit's foot, a four leaf clover, a horseshoe
(remember to keep the open end uppermost), a black cat, 

;)

Be lucky,

Neil





Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Neil Walker
Iain Buchanan wrote:
 I'm starting to stray OT here, but I'm considering a second-hand Adaptec
 2420SA - this is real hardware raid right?
   

It's a PCI-X card (not PCI-E). Are you sure that's right for your system?

 If I'm buying drives in the 1Tb size - does this 4k issue affect
 hardware RAID and how do you get around it?  (Never set up a HW RAID
 card before)
   

You would need to check with  Adaptec. The latest BIOS is 2 years old so
it may not support the latest drives.


Be lucky,

Neil





Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Iain Buchanan
On Tue, 2010-02-09 at 17:27 -0800, Mark Knecht wrote:
 On Tue, Feb 9, 2010 at 4:31 PM, Iain Buchanan iai...@netspace.net.au wrote:
  On Tue, 2010-02-09 at 14:54 -0800, Mark Knecht wrote:

  Frank,
 As best I can tell so far none of the Linux tools will tell you
  that the sectors are 4K. I had to go to the WD web site and find the
  actual drive specs to discover that was true.
 
  however if you use dmesg:

 Consider as an alternative hdparm dash capital eye.

Not sure why you spelt it, but tee hach ae en kay ess!

I knew there was another way somewhere, but it didn't spring to mind
immediately.
-- 
Iain Buchanan iaindb at netspace dot net dot au

Actually, my goal is to have a sandwich named after me.




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Alan McKinnon
On Wednesday 10 February 2010 01:22:31 Iain Buchanan wrote:
 On Tue, 2010-02-09 at 08:47 +0100, J. Roeleveld wrote:
  I now only need to figure out the best way to configure LVM over this to
  get the best performance from it. Does anyone know of a decent way of
  figuring this out?
  I got 6 disks in Raid-5.
 
 why LVM?  Planning on changing partition size later?  LVM is good for
 (but not limited to) non-raid setups where you want one partition over a
 number of disks.
 
 If you have RAID 5 however, don't you just get one large disk out of it?
 In which case you could just create x partitions.  You can always use
 parted to resize / move them later.
 
 IMHO recovery from tiny boot disks is easier without LVM too.
 

General observation (not saying that Iain is wrong):

You use RAID to get redundancy, data integrity and performance.

You use lvm to get flexibility, ease of maintenance and the ability to create 
volumes larger than any single disk or array. And do it at a reasonable price.

These two things have nothing to do with each other and must be viewed as 
such. There are places where RAID and lvm seem to overlap, where one might 
think that a feature of one can be used to replace the other. But both really 
suck in these overlaps and are not very good at them.

Bottom line: don't try and use RAID or LVM to do $STUFF outside their core 
functions. They each do one thing and do it well.
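
For reference, the usual layering of LVM on top of an md array looks
something like this - a sketch only, with /dev/md0 and all names and
sizes illustrative:

pvcreate /dev/md0              # the whole array becomes one physical volume
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0   # carve out logical volumes instead of partitions
mke2fs -j /dev/vg0/data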


-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-09 Thread Iain Buchanan
On Wed, 2010-02-10 at 07:31 +0100, Volker Armin Hemmann wrote:
 On Mittwoch 10 Februar 2010, Iain Buchanan wrote:

  so long as you didn't have any non-detectable disk errors before
  removing the disk, or any drive failure while one of the drives was
  removed.  And the deterioration in performance while each disk was
  removed in turn might take more time than it's worth.  Of course RAID 1
  wouldn't suffer from this (with 2 disks)...
 
 Raid 6. Two disks can go down.
 

not that I know enough about RAID to comment on this page, but you might
find it interesting:
http://www.baarf.com/
specifically:
http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt

-- 
Iain Buchanan iaindb at netspace dot net dot au

The executioner is, I hear, very expert, and my neck is very slender.
-- Anne Boleyn




Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Mark Knecht
On Sun, Feb 7, 2010 at 6:08 PM, Willie Wong ww...@math.princeton.edu wrote:
 On Sun, Feb 07, 2010 at 01:42:18PM -0800, Mark Knecht wrote:
    OK - it turns out if I start fdisk using the -u option it shows me
 sector numbers. Looking at the original partition put on just using
 default values, the starting sector was 63 - probably about the
 worst value it could be. As a test I blew away that partition and
 created a new one starting at 64 instead and the untar results are
 vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
 roughly twice as fast as the old 120GB SATA2 drive I was using to test
 the system out while I debugged this issue.

 That's good to hear.

    I'm still a little fuzzy about what happens to the extra sectors at
 the end of a track. Are they used and I pay for a little bit of
 overhead reading data off of them or are they ignored and I lose
 capacity? I think it must be the former as my partition isn't all that
 much less than 1TB.

 As far as I know, you shouldn't worry about it. The
 head/track/cylinder addressing is a relic of an older day. Almost all
 modern drives should be accessed via LBA. If interested, take a look
 at the wikipedia entry on Cylinder-Head-Sector and Logical Block
 Addressing.

 Basically, you are not losing anything.

 Cheers,

 W
 --
 Willie W. Wong                                     ww...@math.princeton.edu
 Data aequatione quotcunque fluentes quantitae involvente fluxiones invenire
         et vice versa   ~~~  I. Newton



Hi,
   Yeah, a little more study and thinking confirms this. The sectors
are 4K. WD put them on there. The sectors are 4K.

   Just because there might be extra physical space at the end of a
track doesn't mean I can ever use it.

   The sectors are 4K and WD put them on there and they've taken ALL
that into account already. They are 4K physically with ECC but
accessible by CHS and by LBA in 512B chunks. The trick for speed at
the OS/driver level is to make sure we are always grabbing 4K logical
blocks from a single 4K physical sector off the drive. If we do it's
fast. If we don't and start asking for a 4K block that isn't in a
single 4K physical block then it becomes very slow as the drive
hardware/firmware/processor has to do multiple reads and piece it
together for us which is slow. (VERY slow...) By using partitions
mapped to sector number values divisible by 8 we do this. (8 * 512B =
4K)

   The extra space at the end of a track/cylinder is 'lost' but it was
lost before we bought the drive because the sectors are 4K so there is
nothing 'lost' by the choices we make in fdisk. I must remember to use
fdisk -u to see the sector numbers when making the partitions and
remember to do some test writes to the partition to ensure it's right
and the speed is good before doing any real work.
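
A sketch of both checks, assuming the disk is /dev/sdb and a portage
snapshot is at hand; the awk accounts for the optional boot-flag column:

fdisk -lu /dev/sdb | awk '/^\/dev/ { s = ($2 == "*") ? $3 : $2; print $1, (s % 8 ? "misaligned" : "aligned") }'
time tar xjf portage-latest.tar.bz2 -C /mnt/test   # small-file writes show the penalty most clearly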

   This has been helpful for me. I'm glad Valmor is getting better
results also.

   I wish I had checked the title before I sent the original email; it
was supposed to be

1-Terabyte drives - 4K sector sizes? - bad performance so far

Maybe sticking that here will help others when they Google for this later.

Cheers,
Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Valmor de Almeida
Mark Knecht wrote:
[snip]
 
This has been helpful for me. I'm glad Valmor is getting better
 results also.
[snip]

These 4k-sector drives can be problematic when upgrading older
computers. For instance, my laptop BIOS would not boot from the toshiba
drive I mentioned earlier. However when used as an external usb drive, I
could boot gentoo. Since I have been using this drive as backup storage
I did not investigate the reason for the lower speed. I am happy to get
a factor-of-8 speedup now after you did the research :)

Thanks for your postings.

--
Valmor






Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Stroller


On 8 Feb 2010, at 05:25, Valmor de Almeida wrote:


Mark Knecht wrote:
On Sun, Feb 7, 2010 at 11:39 AM, Willie Wong ww...@math.princeton.edu 
 wrote:

[snip]

  OK - it turns out if I start fdisk using the -u option it shows me
sector numbers. Looking at the original partition put on just using
default values, the starting sector was 63 - probably about the


I too was wondering why a Toshiba 1.8-inch HDD MK2431GAH (4kB-sector,
240 GB) I've recently obtained was slow:

- time tar xfj portage-latest.tar.bz2

real    16m5.500s
user    0m28.535s
sys     0m19.785s

Following your post I recreated a single partition (reiserfs 3.6)
starting at the 64th sector:

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xe7bf4b8e

  Device Boot  Start End  Blocks   Id  System
/dev/sdb1  64   468862127   234431032   83  Linux

and the time was improved

- time tar xfj portage-latest.tar.bz2

real    2m15.600s
user    0m28.156s
sys     0m18.933s


Thanks to both you  Mark for posting this information about these  
improved timings.


I have just checked, and I am getting 3.5 - 6 minutes (real) to untar  
portage. I had blamed performance of this array on the fact that the  
RAID controller is an older model PCI card I got cheap(ish) off eBay,  
but I see it is also aligned beginning at sector 63.


I'm not quite sure if this is cause of poor performance here, as the  
drives in this array are not quite as modern as yours - I'm guessing  
that at least a couple of the drives have been bought in the last 6  
months, but they are only 500GB drives. However I guess it would only  
require one drive in the array to have 4K sectors and it would cause  
this kind of slowdown. I will try checking their spec now.


This is the same server that caused me to post in relation to slow  
Samba transfers 3 weeks ago (How to determine if a NIC is playing  
gigabit?). I have still not yet tested thoroughly - there are always  
chores getting in the way! - but it seems like I was able to transfer  
the same files in about a third (or maybe even a quarter) the time at  
100mbit, between my laptop & desktop Macs.


I am not immediately able to alter the partition layout, as I have  
scads of data on this array. In order to test I think I will need to  
create a second array, aligned optimally, and copy the data across.


I had been recently thinking that 2TB drives are now 40% cheaper per  
gig than 500GB ones, so perhaps I will have to splash out on 3 of  
them. This seems rather a lot of money, but I could probably use the  
space. Hmmmn... actually 1TB are nearly as cheap per gig -  
considering the eBaying of my current drives, those would make a lot  
of sense.


Stroller.



$ time tar xfj portage-latest.tar.bz2

real    6m3.128s
user    0m37.810s
sys     0m39.614s
$ echo p  | sudo  fdisk -u /dev/sdb

The number of cylinders for this disk is set to 182360.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):
Disk /dev/sdb: 1500.0 GB, 1499968045056 bytes
255 heads, 63 sectors/track, 182360 cylinders, total 2929625088 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x27a827a7

   Device Boot  Start End  Blocks   Id  System
/dev/sdb1  63  2929613399  1464806668+  83  Linux

Command (m for help): Command (m for help): Command (m for help):
got EOF thrice - exiting..
$



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Paul Hartman
On Mon, Feb 8, 2010 at 12:52 PM, Valmor de Almeida val.gen...@gmail.com wrote:
 Mark Knecht wrote:
 [snip]

This has been helpful for me. I'm glad Valmor is getting better
 results also.
 [snip]

 These 4k-sector drives can be problematic when upgrading older
 computers. For instance, my laptop BIOS would not boot from the toshiba
 drive I mentioned earlier. However when used as an external usb drive, I
 could boot gentoo. Since I have been using this drive as backup storage
 I did not investigate the reason for the lower speed. I am happy to get
 a factor of 8 in speed up now after you did the research :)

 Thanks for your postings.

Thanks for the info everyone, but do you understand the agony I am now
suffering at the fact that all disks in my system (including all parts
of my RAID5) are starting on sector 63 and I don't have sufficient
free space (or free time) to repartition them? :) I am really curious
if there are any gains to be made on my own system...

Next time I partition I will definitely pay attention to this, and
feel foolish that I didn't pay attention before. Thanks.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Frank Steinmetzger
Am Sonntag, 7. Februar 2010 schrieb Mark Knecht:

 Hi Willie,
OK - it turns out if I start fdisk using the -u option it show me
 sector numbers. Looking at the original partition put on just using
 default values it had the starting sector was 63

Same here.

 - probably about the worst value it could be.

Hm what about those first 62 sectors?
I bought this 500GB drive for my laptop recently and did a fresh partitioning 
scheme on it, and then rsynced the filesystems of the old, smaller drive onto 
it. The first two partitions are ntfs, but I believe they also use cluster 
sizes of 4k by default. So technically I could repartition everything and 
then restore the contents from my backup drive.
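
The restore itself would be a one-liner - a sketch, assuming the backup
is mounted at /mnt/backup and the freshly partitioned target at /mnt/new:

rsync -aH /mnt/backup/ /mnt/new/   # -a keeps permissions and times, -H keeps hardlinks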

And indeed my system becomes very sluggish when I do some HDD shuffling. 

 As a test I blew away that partition and 
 created a new one starting at 64 instead and the untar results are
 vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
 roughly twice as fast as the old 120GB SATA2 drive I was using to test
 the system out while I debugged this issue.

Though the result justifies your decision, I would have thought one has to 
start at 65, unless the disk starts counting its sectors at 0.
-- 
Gruß | Greetings | Qapla'
Programmers don’t die, they GOSUB without RETURN.


signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Neil Bothwick
On Mon, 8 Feb 2010 14:34:01 -0600, Paul Hartman wrote:

 Thanks for the info everyone, but do you understand the agony I am now
 suffering at the fact that all disk in my system (including all parts
 of my RAID5) are starting on sector 63 and I don't have sufficient
 free space (or free time) to repartition them? :)

With the RAID, you could fail one disk, repartition, re-add it, rinse and
repeat. But that doesn't take care of the time issue.

 I am really curious
 if there are any gains to be made on my own system...

Me too, so post back after you've done it ;-)


-- 
Neil Bothwick

Barth's Distinction:
There are two types of people: those who divide people into two types, and
those who don't.


signature.asc
Description: PGP signature


Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Mark Knecht
On Mon, Feb 8, 2010 at 4:05 PM, Frank Steinmetzger war...@gmx.de wrote:
 Am Sonntag, 7. Februar 2010 schrieb Mark Knecht:

 Hi Willie,
     OK - it turns out if I start fdisk using the -u option it shows me
  sector numbers. Looking at the original partition put on just using
  default values, the starting sector was 63

 Same here.

 - probably about the worst value it could be.

 Hm what about those first 62 sectors?
 I bought this 500GB drive for my laptop recently and did a fresh partitioning
 scheme on it, and then rsynced the filesystems of the old, smaller drive onto
 it. The first two partitions are ntfs, but I believe they also use cluster
 sizes of 4k by default. So technically I could repartition everything and
 then restore the contents from my backup drive.

 And indeed my system becomes very sluggish when I do some HDD shuffling.

 As a test I blew away that partition and
 created a new one starting at 64 instead and the untar results are
 vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
 roughly twice as fast as the old 120GB SATA2 drive I was using to test
 the system out while I debugged this issue.

 Though the result justifies your decision, I would have thought one has to
 start at 65, unless the disk starts counting its sectors at 0.
 --
 Gruß | Greetings | Qapla'
 Programmers don’t die, they GOSUB without RETURN.


Good question. I don't know where it starts counting but 63 seems to
be the first one you can use on any blank drive I've looked at so far.

There's a few small downsides I've run into with all of this so far:

1) Since we don't use sector 63 it seems that fdisk will still tell
you that you can use 63 until you use up all your primary partitions.
It used to be easier to put additional partitions on when it gave you
the next sector you could use after the one you just added. Now I'm
finding that I need to write things down and figure it out more
carefully outside of fdisk.

2) When I do something like +60G fdisk chooses the final sector, but
it seems that it doesn't end 1 sector before something divisible by 8,
so again, once the new partition is in I need to do more calculations
to find where the next one will go. Probably better to decide what
you want for an end and make sure that the next sector is divisible by
8 (see the sketch below).

3) When I put in an extended partition I put the start of it at
something divisible by 8. When I went to add a logical partition
inside of that I found that there was some strange number of sectors
dedicated to the extended partition itself and I had to waste a few
more sectors getting the logical partitions divisible by 8.

4) Everything I've done so far leaves me with messages about partition
1 not ending on a cylinder boundary. Googling on that one says don't
worry about it. I don't know...

So, it works - the new partitions are fast but it's a bit of work
getting them in place.
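
At least the arithmetic from point 2 is easy to script - a sketch with
an illustrative end sector:

end=127941659                        # end sector of the partition just created
next=$(( ((end + 1 + 7) / 8) * 8 ))  # round up to the next multiple of 8
echo $next                           # 127941664 - a usable, aligned next start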

- Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Stroller


On 9 Feb 2010, at 00:05, Frank Steinmetzger wrote:

...

- probably about the worst value it could be.


Hm what about those first 62 sectors?


If I'm understanding correctly, then the drive will *always* have to  
start at the 63rd sector, then swing back round and start reading a  
1st sector, for every read larger than 1 byte.


This will result in a minimum of one extra rotation of the disk's  
platter for every read, and instead of reading larger data  
contiguously the effect will be like a *completely*, least-optimally  
fragmented filesystem.


I may be mistaken on this - if that's the case I would love to be  
corrected.


The results shown by Valmor & Mark are *two orders of magnitude  
faster* when the partitions are correctly aligned.


Stroller.





Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Willie Wong
On Tue, Feb 09, 2010 at 01:05:11AM +0100, Frank Steinmetzger wrote:
 Am Sonntag, 7. Februar 2010 schrieb Mark Knecht:
 
  Hi Willie,
 OK - it turns out if I start fdisk using the -u option it show me
  sector numbers. Looking at the original partition put on just using
  default values it had the starting sector was 63
 
 Same here.
 
  - probably about the worst value it could be.
 
 Hm what about those first 62 sectors?

It is possible you can use some of those; I never tried. That's a
negligible amount of space on modern harddrives anyway. And actually,
starting on sector number 63 means that you are skipping 63 sectors,
not 62, since LBA numbering starts with 0. 

Historically there is a reason for all drives coming with default
formatting with the first partition at sector 63. Sector 0 is the
MBR, which you shouldn't overwrite. MSDOS and all Windows up to XP
require the partitions to be aligned on track/cylinder boundaries. So
it is safest to just partition the drive, by default, such that the
first partition starts at LBA 63, or the 64th sector, or the first
sector of the second track. 

Actually, this is why Western Digital et al are releasing this flood
of 4K physical sector discs now. Windows XP has been EOLed and Vista
and up support partitioning not on a cylinder boundary. If Windows XP
still had support, this order of magnitude inefficiency wouldn't have
been overlooked by most consumers. 

 I bought this 500GB drive for my laptop recently and did a fresh partitioning 
 scheme on it, and then rsynced the filesystems of the old, smaller drive onto 
 it. The first two partitions are ntfs, but I believe they also use cluster 
 sizes of 4k by default. So technically I could repartition everything and 
 then restore the contents from my backup drive.

Are you sharing the harddrive with a Windows operating system?
Especially Windows XP? There are reports that Windows XP supports
partitioning not aligned to cylinder boundary. However, if you are
dual booting you will almost surely be fscked if you try that. I had
some fun earlier last year when I did everything else right but
couldn't figure out why my laptop tells me it cannot find the
operating system when I tried to dual boot. 

 Though the result justifies your decision, I would have though one has to 
 start at 65, unless the disk starts counting its sectors at 0.

I've always assumed by default that computer programmers start
counting at 0. Mathematicians, on the other hand, vary: analysts
start at 0 or minus infinity; number theorists at 1; algebraists at 1
for groups but 0 for rings; and logicians start counting at the empty
set. :)

Cheers, 

W
-- 
Willie W. Wong ww...@math.princeton.edu
Data aequatione quotcunque fluentes quantitae involvente fluxiones invenire 
 et vice versa   ~~~  I. Newton



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-08 Thread Frank Steinmetzger
Am Dienstag, 9. Februar 2010 schrieb Mark Knecht:

 4) Everything I've done so far leave me with messages about partition
 1 not ending on a cylinder boundary. Googling on that one says don't
 worry about it. I don't know...

Would that be when there's a + sign behind the end sector? I seem to 
remember that _my_ fdisk didn't show this warning, only parted did.

Anyway, mine's like this, just to throw it into the pot to the others
( those # are added by me to show their respective use )

eisen # fdisk -l -u /dev/sda

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x80178017

   Device Boot        Start         End      Blocks   Id  System
/dev/sda1   *            63    25157789    12578863+   7  HPFS/NTFS # Windows
/dev/sda2          25157790    88084394    31463302+   7  HPFS/NTFS # Win Games
/dev/sda3          88084395   127941659    19928632+  83  Linux # /
/dev/sda4         127941660   976768064   424413202+   5  Extended
/dev/sda5         127941723   288816569    80437423+  83  Linux # /home
/dev/sda6         288816633   780341309   245762338+  83  Linux # music
/dev/sda7         813113973   976703804    81794916   83  Linux # X-Plane =o)
/dev/sda8   *     976703868   976768064       32098+  83  Linux # /boot
/dev/sda9         780341373   813113909    16386268+   7  HPFS/NTFS # Win7 test

-- 
Gruß | Greetings | Qapla'
begin signature_virus
  Hi! I’m a signature virus.
  Please copy me to your signature to help me spread.
end


signature.asc
Description: This is a digitally signed message part.


[gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Mark Knecht
Hi,
   I got a WD 1T drive to use in a new machine for my dad. I didn't
pay a huge amount of attention to the technical details when I
purchased it other than it was SATA2, big, and the price was good.
Here's the NewEgg link:

http://www.newegg.com/Product/Product.aspx?Item=N82E16822136490

   I installed the drive, created some partitions and set off to put
ext3 on it using just mke2fs -j /dev/sda3. The partitions get written
and everything works, but when I started installing Gentoo on it I was
getting some HUGE delays at times, such as when unpacking
portage-latest.tar.bz2. Basically the tar step would be rolling along
and then the drive would literally appear to stop for 1 minute before
proceeding. No CPU usage, the machine is alive in other terminals, but
anything directed at the disk just seems dead. Sticking my ear on the
drive it doesn't sound like the drive is doing anything.

   I was trying to determine what to do - i.e. is this a bad drive, how
to return it, etc. - and started reading the reviews at NewEgg. One
guy using it with Linux had this to say:

QUOTE
4KB physical sectors: KNOW WHAT YOU'RE DOING!

Pros: Quiet, cool-running, big cache

Cons: The 4KB physical sectors are a problem waiting to happen. If you
misalign your partitions, disk performance can suffer. I ran
benchmarks in Linux using a number of filesystems, and I found that
with most filesystems, read performance and write performance with
large files didn't suffer with misaligned partitions, but writes of
many small files (unpacking a Linux kernel archive) could take several
times as long with misaligned partitions as with aligned partitions.
WD's advice about who needs to be concerned is overly simplistic,
IMHO, and it's flat-out wrong for Linux, although it's probably
accurate for 90% of buyers (those who run Windows or Mac OS and use
their standard partitioning tools). If you're not part of that 90%,
though, and if you don't fully understand this new technology and how
to handle it, buy a drive with conventional 512-byte sectors!
/QUOTE

   Now, I don't mind getting a bit dirty learning to use this
correctly but I'm wondering what that means in a practical sense.
Reading the mke2fs man page the word 'sector' doesn't come up. It's my
understanding the Linux 'blocks' are groups of sectors. True? If the
disk must use 4K sectors then what - the smallest block has to be 4K
and I'm using 1 sector per block? It seems that ext3 doesn't support
anything larger than 4K?

   As a test I blew away all the partitions and made one huge 1
terabyte partition using ext3. I then tried untarring the portage
snapshot and then deleting the directory where I put it a bunch of
times. I get very different times each time I do this. untarring
varies from 6 minutes 24 seconds to 10 minutes 25 seconds. Removing
the directory varies from 3 seconds to 1 minute 22 seconds.

   Every time there is an apparent delay I just see the hard drive
light turned on solid. That said as far as I know if I wait for things
to complete the data is there but I haven't tested it extensively.

   Is this a bad drive or am I somehow using it incorrectly?

Thanks,
Mark


gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
-C /mnt/TestMount/usr

real    6m24.736s
user    0m9.969s
sys     0m3.537s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real    0m3.229s
user    0m0.110s
sys     0m1.809s
gandalf TestMount # mkdir usr
gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
-C /mnt/TestMount/usr

real    7m50.193s
user    0m8.647s
sys     0m2.811s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real    0m3.234s
user    0m0.119s
sys     0m1.792s
gandalf TestMount # mkdir usr
gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
-C /mnt/TestMount/usr

real    10m25.926s
user    0m8.645s
sys     0m2.765s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real    1m22.330s
user    0m0.124s
sys     0m1.810s
gandalf TestMount # mkdir usr
gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
-C /mnt/TestMount/usr

real    8m12.307s
user    0m8.463s
sys     0m2.708s
gandalf TestMount # time rm -rf /mnt/TestMount/usr/

real    0m29.517s
user    0m0.114s
sys     0m1.810s
gandalf TestMount #




gandalf ~ # hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   11362 MB in  2.00 seconds = 5684.46 MB/sec
 Timing buffered disk reads:  314 MB in  3.00 seconds = 104.64 MB/sec
gandalf ~ #



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Alexander
On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:

Every time there is an apparent delay I just see the hard drive
 light turned on solid. That said as far as I know if I wait for things
 to complete the data is there but I haven't tested it extensively.
 
Is this a bad drive or am I somehow using it incorrectly?
 

Is there any related info in dmesg?



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Volker Armin Hemmann
On Sonntag 07 Februar 2010, Alexander wrote:
 On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:
 Every time there is an apparent delay I just see the hard drive
  
  light turned on solid. That said as far as I know if I wait for things
  to complete the data is there but I haven't tested it extensively.
  
 Is this a bad drive or am I somehow using it incorrectly?
 
 Is there any related info in dmesg?

or maybe there is too much cached and seeking is not the drive's strong point 
...



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Mark Knecht
On Sun, Feb 7, 2010 at 9:30 AM, Alexander b3n...@yandex.ru wrote:
 On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:

    Every time there is an apparent delay I just see the hard drive
 light turned on solid. That said as far as I know if I wait for things
 to complete the data is there but I haven't tested it extensively.

    Is this a bad drive or am I somehow using it incorrectly?


 Is there any related info in dmesg?



No, nothing in dmesg at all.

Here are two tests this morning. The first is to the 1T drive, the
second is to a 120GB drive I'm currently using as a system drive until
I work this out:

gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
-C /mnt/TestMount/usr

real    8m13.077s
user    0m8.184s
sys     0m2.561s
gandalf TestMount #


m...@gandalf ~ $ time tar xjf /mnt/TestMount/portage-latest.tar.bz2 -C
/home/mark/Test_usr/

real    0m39.213s
user    0m8.243s
sys     0m2.135s
m...@gandalf ~ $

8 minutes vs 39 seconds!

The amount of data written appears to be the same:

gandalf ~ # du -shc /mnt/TestMount/usr/
583M    /mnt/TestMount/usr/
583M    total
gandalf ~ #


m...@gandalf ~ $ du -shc /home/mark/Test_usr/
583M    /home/mark/Test_usr/
583M    total
m...@gandalf ~ $


I did some reading at the WD site and it seems this drive does use the
4K sector size. The way it's done is that the addressing on the cable
is still in 512-byte 'user sectors', but they are packed into 4K
physical sectors and internal hardware does the mapping.

I suspect the performance issue is figuring out how to get the file
system to keep things on 4K boundaries. I assume that's what the 4K
block size is for when building the file system but I need to go find
out more about that. I did not select it specifically. Maybe I need
to.

Thanks,
Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Volker Armin Hemmann
On Sonntag 07 Februar 2010, Mark Knecht wrote:
 On Sun, Feb 7, 2010 at 9:30 AM, Alexander b3n...@yandex.ru wrote:
  On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:
 Every time there is an apparent delay I just see the hard drive
  light turned on solid. That said as far as I know if I wait for things
  to complete the data is there but I haven't tested it extensively.
  
 Is this a bad drive or am I somehow using it incorrectly?
  
  Is there any related info in dmesg?
 
 No, nothing in dmesg at all.
 
 Here are two tests this morning. The first is to the 1T drive, the
 second is to a 120GB drive I'm currently using as a system drive until
 I work this out:
 
 gandalf TestMount # time tar xjf /mnt/TestMount/portage-latest.tar.bz2
 -C /mnt/TestMount/usr
 
 real  8m13.077s
 user  0m8.184s
 sys   0m2.561s
 gandalf TestMount #
 
 
 m...@gandalf ~ $ time tar xjf /mnt/TestMount/portage-latest.tar.bz2 -C
 /home/mark/Test_usr/
 
 real  0m39.213s
 user  0m8.243s
 sys   0m2.135s
 m...@gandalf ~ $
 
 8 minutes vs 39 seconds!
 
 The amount of data written appears to be the same:
 
 gandalf ~ # du -shc /mnt/TestMount/usr/
 583M  /mnt/TestMount/usr/
 583M  total
 gandalf ~ #
 
 
 m...@gandalf ~ $ du -shc /home/mark/Test_usr/
 583M  /home/mark/Test_usr/
 583M  total
 m...@gandalf ~ $
 
 
 I did some reading at the WD site and it seems this drive does use the
 4K sector size. The way it's done is the addressing on cable is still
 512 byte 'user sectors', but they are packed into 4K physical sectors
 and internal hardware does the mapping.
 
 I suspect the performance issue is figuring out how to get the file
 system to keep things on 4K boundaries. I assume that's what the 4K
 block size is for when building the file system but I need to go find
 out more about that. I did not select it specifically. Maybe I need
 to.
 
 Thanks,
 Mark

no. 4k block size is the default for linux filesystems. But you might have 
'misaligned' the partitions. There is a lot of text to read about 
'eraseblocks' on ssds and how important it is to align the partitions. You 
might want to read up on that to learn how to align partitions.



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Mark Knecht
On Sun, Feb 7, 2010 at 10:19 AM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:
 On Sonntag 07 Februar 2010, Alexander wrote:
 On Sunday 07 February 2010 19:27:46 Mark Knecht wrote:
     Every time there is an apparent delay I just see the hard drive
 
  light turned on solid. That said as far as I know if I wait for things
  to complete the data is there but I haven't tested it extensively.
 
     Is this a bad drive or am I somehow using it incorrectly?

 Is there any related info in dmesg?

 or maybe there is too much cached and seeking is not the drives strong point
 ...

It's an interesting question. There is new physical seeking technology
in this line of drives which is intended to reduce power and noise,
but it seems unlikely to me that WD would purposely make a drive that's
10-20x slower than previous generations. Could be though...

Are there any user space Linux tools that can test that?

The other thing I checked out was that when the block size is not
specified, it seems that mke2fs uses the default values from
/etc/mke2fs.conf, and my file says blocksize = 4096, so it would seem to
me that if all partitions use 4K blocks then at least the partitions
would be properly aligned.
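
To take the config-file default out of the equation entirely, the block
size can be forced and then verified - a sketch, reusing the partition
from the earlier test:

mke2fs -j -b 4096 /dev/sda3
tune2fs -l /dev/sda3 | grep 'Block size'   # should report 4096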

My question about that would be when I write a 1 byte file to this
drive do I use all 4K of the block it's written in? It's wasteful, but
faster, right? I want files to be block-aligned so that the drive
isn't doing lots of translation to get the right data. It seems that's
been the problem with these drives in the Windows world so WD had to
release updated software to get the Windows disk formatters to do
things right, or so I think.
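
The first question is easy to check directly -- a sketch that assumes GNU
coreutils stat and an ext3 filesystem with 4K blocks:

echo hi > tiny
stat -c '%s bytes of data, %b blocks of %B bytes allocated' tiny
# typically: 3 bytes of data, 8 blocks of 512 bytes allocated
# i.e. a 3-byte file still occupies one full 4K filesystem block

So yes: a tiny file wastes most of its block, the usual
space-for-speed trade.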

Thanks Volker.

Cheers,
Mark



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Willie Wong
On Sun, Feb 07, 2010 at 08:27:46AM -0800, Mark Knecht wrote:
 <QUOTE>
 4KB physical sectors: KNOW WHAT YOU'RE DOING!
 
 Pros: Quiet, cool-running, big cache
 
 Cons: The 4KB physical sectors are a problem waiting to happen. If you
 misalign your partitions, disk performance can suffer. I ran
 benchmarks in Linux using a number of filesystems, and I found that
 with most filesystems, read performance and write performance with
 large files didn't suffer with misaligned partitions, but writes of
 many small files (unpacking a Linux kernel archive) could take several
 times as long with misaligned partitions as with aligned partitions.
 WD's advice about who needs to be concerned is overly simplistic,
 IMHO, and it's flat-out wrong for Linux, although it's probably
 accurate for 90% of buyers (those who run Windows or Mac OS and use
 their standard partitioning tools). If you're not part of that 90%,
 though, and if you don't fully understand this new technology and how
 to handle it, buy a drive with conventional 512-byte sectors!
 </QUOTE>
 
Now, I don't mind getting a bit dirty learning to use this
 correctly but I'm wondering what that means in a practical sense.
 Reading the mke2fs man page the word 'sector' doesn't come up. It's my
 understanding the Linux 'blocks' are groups of sectors. True? If the
 disk must use 4K sectors then what - the smallest block has to be 4K
 and I'm using 1 sector per block? It seems that ext3 doesn't support
 anything larger than 4K?

The problem is not when you are making the filesystem with mke2fs, but
when you partition the disk using fdisk. I'm sure I am making some
small mistakes in the explanation below, but it goes something like
this:

a) The hard drive with 4K sectors allows the head to efficiently
read/write 4K-sized blocks at a time.
b) However, to be compatible in hardware, the hard drive allows
512B-sized blocks to be addressed. In reality, this means that you can
individually address the 8 512B-sized chunks of each 4K-sized block,
but each will count as a separate operation. To illustrate: say the
hardware has some sector X of size 4K. It has 8 addressable slots
inside X, call them X1 ... X8, each of size 512B. If your OS clusters read/writes on
the 512B level, it will send 8 commands to read the info in those 8
blocks separately. If your OS clusters in 4K, it will send one
command. So in the stupid analysis I give here, it will take 8 times
as long for the 512B addressing to read the same data, since it will
take 8 passes, and each time inefficiently reading only 1/8 of the
data required. Now in reality, drives are smarter than that: if all 8
of those are sent in sequence, sometimes the drives will cluster them
together in one read. 
c) A problem occurs, however, when your OS deals with 4K clusters but
when you make the partition, the partition is offset! Imagine the
physical read sectors of your disk looking like

<---X---><---Y---><---Z--->

but when you make your partitions, somehow you partitioned it

    <---A---><---B---><---C--->

This is possible because the drive allows addressing by 512B chunks.
So for some reason one of your partitions starts halfway inside a
physical sector. What is the problem with this? Now suppose your OS
sends data to be written to the <A> block. If it were completely
aligned, the drive would just move the head to the block and
overwrite it with this information. But since half of <A> is over
the <X> physical sector, and half over <Y>, what the disk now
needs to do is to

pass 1) read <X>
pass 2) modify the second half of <X> to match the first half of <A>
pass 3) write <X>
pass 4) read <Y>
pass 5) modify the first half of <Y> to match the second half of <A>
pass 6) write <Y>

Or what is known as a read-modify-write operation. Thus the disk
becomes a lot less efficient; the sketch just below shows one way to
see this on a live drive.
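
A rough, DESTRUCTIVE way to see the penalty -- a sketch only: it
overwrites whatever is on the target partitions, and it assumes a
disposable /dev/sdb with sdb1 starting at sector 63 and sdb2 starting
at sector 64:

# every 4K direct write into a partition starting at LBA 63 straddles
# two physical sectors, forcing a read-modify-write
dd if=/dev/zero of=/dev/sdb1 bs=4096 count=10000 oflag=direct
# the same writes into a partition starting at LBA 64 each map onto
# exactly one physical sector
dd if=/dev/zero of=/dev/sdb2 bs=4096 count=10000 oflag=direct

If misalignment is the culprit, the second dd should report markedly
higher throughput.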

--

Now, I don't know if this is the actual problem causing your
performance problems. But it may be. When you use fdisk, it
defaults to aligning the partition to cylinder boundaries, and uses the
default (from ancient times) value of 63 x (512B-sized) sectors per
track. Since 63 is not evenly divisible by 8, you see that quite
likely some of your partitions are not aligned to the physical sector
boundaries. 
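
To make the arithmetic concrete (the shell lines are purely
illustrative):

echo $(( 63 % 8 ))   # prints 7: the old default start sector lands in
                     # the middle of a physical 4K sector
echo $(( 64 % 8 ))   # prints 0: rounding up to a multiple of 8 lines
                     # every 4K block up with one physical sector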

If you use cfdisk, you can try to change the geometry with the command
'g'. Or you can use the command 'u' to change the units used in the
partitioning to either sectors or megabytes, and make sure your
partition boundaries are a multiple of 8 (sectors) in the former, or a
whole number of megabytes in the latter.
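
A minimal interactive session with plain fdisk instead -- a sketch; the
disk name /dev/sdb and the 64-sector start are assumptions, and the
disk's contents are lost:

fdisk -u /dev/sdb    # -u: work in sectors rather than cylinders
# at the prompts:
#   n        -> new partition
#   p, 1     -> primary, number 1
#   64       -> first sector: any multiple of 8 works; newer tools
#               default to 2048 (1 MiB) for the same reason
#   <Enter>  -> accept the default last sector
#   w        -> write the table and exit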

Again, take what I wrote with a grain of salt: this information came
from the research I did a little while back after reading the slashdot
article on this 4K switch. So being my own understanding, it may not
completely be correct. 

HTH, 

W
-- 
Willie W. Wong ww...@math.princeton.edu
Data aequatione quotcunque fluentes quantitae involvente fluxiones invenire 
 et vice versa   ~~~ I. Newton

Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Kyle Bader
 4KB physical sectors: KNOW WHAT YOU'RE DOING!

Good article by Theodore Ts'o, might be helpful:

http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/

-- 

Kyle



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Willie Wong
On Sun, Feb 07, 2010 at 01:42:18PM -0800, Mark Knecht wrote:
OK - it turns out if I start fdisk using the -u option it shows me
 sector numbers. Looking at the original partition, put on just using
 default values, the starting sector was 63 - probably about the
 worst value it could be. As a test I blew away that partition and
 created a new one starting at 64 instead and the untar results are
 vastly improved - down to roughly 20 seconds from 8-10 minutes. That's
 roughly twice as fast as the old 120GB SATA2 drive I was using to test
 the system out while I debugged this issue.

That's good to hear. 
 
I'm still a little fuzzy about what happens to the extra sectors at
 the end of a track. Are they used and I pay for a little bit of
 overhead reading data off of them or are they ignored and I lose
 capacity? I think it must be the former as my partition isn't all that
 much less than 1TB.

As far as I know, you shouldn't worry about it. The
head/track/cylinder addressing is a relic of an older day. Almost all
modern drives should be accessed via LBA. If interested, take a look
at the wikipedia entry on Cylinder-Head-Sector and Logical Block
Addressing. 
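
You can see the fiction directly -- a sketch; /dev/sda and an installed
hdparm are assumptions:

hdparm -g /dev/sda
# e.g.  geometry = 16383/255/63, sectors = 1953525168, start = 0
# the cylinder/head/sector triple is a capped legacy value; only the
# total sector count is real, and LBA addresses exactly those sectors.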

Basically, you are not losing anything. 

Cheers, 

W
-- 
Willie W. Wong ww...@math.princeton.edu
Data aequatione quotcunque fluentes quantitae involvente fluxiones invenire 
 et vice versa   ~~~  I. Newton



Re: [gentoo-user] 1-Terabyte drives - 4K sector sizes? - bar performance so far

2010-02-07 Thread Valmor de Almeida
Mark Knecht wrote:
 On Sun, Feb 7, 2010 at 11:39 AM, Willie Wong ww...@math.princeton.edu wrote:
[snip]
OK - it turns out if I start fdisk using the -u option it show me
 sector numbers. Looking at the original partition put on just using
 default values it had the starting sector was 63 - probably about the

I too was wondering why a Toshiba 1.8" HDD, an MK2431GAH (4kB-sector,
240 GB) I recently obtained, was slow:

- time tar xfj portage-latest.tar.bz2

real    16m5.500s
user    0m28.535s
sys     0m19.785s

Following your post I recreated a single partition (reiserfs 3.6)
starting at sector 64:

Disk /dev/sdb: 240.1 GB, 240057409536 bytes
255 heads, 63 sectors/track, 29185 cylinders, total 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xe7bf4b8e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              64   468862127   234431032   83  Linux

and the time was improved

- time tar xfj portage-latest.tar.bz2

real    2m15.600s
user    0m28.156s
sys     0m18.933s


--
Valmor